News from the company...

Major enhancements to AzureWatch Rules engine

March 11, 2014 11:47 by author Igor Papirov

A number of important changes are being introduced to AzureWatch this weekend (March 16th):

  • Ability to execute rules only after a sustained period of time
    • This feature allows for much better control of the scaling process.  For example, in certain situations it can be far more effective to scale when sustained load stays over a certain threshold than to try to predict what a moving average looks like.  It is important to know that if a rule is configured with a sustained time delay, it will only be executed after continuously evaluating to TRUE for the specified period of time; a hypothetical rule such as "CPU above 80%, sustained for 15 minutes" fires only once CPU has stayed above 80% for the full 15 minutes.
  • Ability to send ON and OFF alerts (a single ON email when alert evaluates to TRUE, and a single OFF email when it evaluates to FALSE)
    • This feature reduces spam when a certain rule's condition is continuously TRUE.  It also simplifies configuration since ON/OFF alerts no longer require throttling.  Unless modified, existing Alerts will work as they currently do.
  • Separation of Alerts from Management Actions (rules that notify will be separated from rules that execute scale actions, shutdowns, restarts, etc.)
    • This feature is relatively important as it may impact existing rule sets.  Going forward, rules that are Alerts will be evaluated separately from rules that are Management Actions.  When evaluating rules, all Alerts that qualify for execution will be evaluated and acted upon, not just the first one.  Management Actions will continue to be evaluated only until the first rule that qualifies for execution.  Users who currently rely on Alerts to control execution of their Management Actions will want to revisit their scaling configurations.  We expect the percentage of such users to be very small, if not zero.
    • After the upgrade, we plan to monitor AzureWatch's email queues and switch Alerts that generate excessive email to ON/OFF logic.  Impacted customers will be notified.

While we expect minimal impact during or after the upgrade, we want to be transparent with our users: this is probably the most significant change to the Rules engine since the inception of AzureWatch.  If you have any concerns, please contact the Paraleap support team.



Monitor Windows Azure Service Dashboard!

January 22, 2014 23:28 by author Igor Papirov

AzureWatch users can now receive notifications when changes are published to the Windows Azure Service Dashboard.  Upon logging into the AzureWatch portal, users can choose to subscribe to any of the Azure Service Dashboard feeds, as shown in the screenshot below.  This feature is available free of charge to active AzureWatch users.

 

AzureWatch Dashboard notifications



Introducing Automated Daily Performance Reports

July 18, 2012 09:04 by author Igor Papirov

We are excited to introduce a new AzureWatch feature: daily performance charts delivered via email to our users.  This feature has been requested rather frequently in recent months, and we are happy to oblige.  No action is required to enable it.  Every active account (whether trial or paid) will automatically begin receiving daily reports free of charge.

We know that this new capability will provide AzureWatch users with greater insight into the performance of their applications running on top of the Windows Azure platform.



Free Azure resource-monitoring utility: AzurePing

October 25, 2011 07:49 by author Igor Papirov

We are pleased to announce the release of AzurePing: a free Azure resource-monitoring utility.  AzurePing is a simple Windows Service that pings any number of Azure Storage resources, SQL (Azure) databases, and web URLs on a continuous basis.  Any errors are logged through the log4net framework via a variety of appenders, such as email, SQL, flat files, Trace, etc.  For those not familiar with log4net, it is a popular open-source logging framework that can route logging entries to a variety of extensible appenders.
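
For readers unfamiliar with log4net, here is a rough, illustrative sketch of what its configuration generally looks like (AzurePing ships with its own configuration, which may differ): a file appender that records errors to a local log.

  <log4net>
    <appender name="FileAppender" type="log4net.Appender.FileAppender">
      <file value="azureping.log" />
      <appendToFile value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="ERROR" />
      <appender-ref ref="FileAppender" />
    </root>
  </log4net>

Swapping the appender (for example, to log4net's SmtpAppender for email notifications) requires only a configuration change, which is what makes the appender model convenient for a monitoring tool.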

To find more information about AzurePing, visit our website at http://www.paraleap.com/azureping

And please help us spread the word about AzurePing!



Managing environments in a distributed, Azure, or other cloud-based Visual Studio solution

September 13, 2011 05:00 by author Igor Papirov

During the development, testing, and support stages of a project, there is usually a need to test or debug code against multiple environments: Local with and without Azure emulation, a Dev environment, a QA environment, a UAT environment, perhaps even a Prod environment, etc.  Configuration changes can get very complicated when servers, URLs, connection strings, etc. need to switch in unison as you move between environments.  In general, handling more than one environment from Visual Studio can be very laborious when the overall project structure is complex.  Throwing Windows Azure into the mix only adds to the debugging and deployment woes.

This article describes techniques that we use to manage configuration changes among various projects in a large distributed Azure-based solution called AzureWatch, and to keep settings in sync as various environments are targeted during development, testing, and debugging sessions.  AzureWatch is an auto-scaling and monitoring system for Windows Azure applications.  By its nature it utilizes multiple WCF services, a few Windows-based clients and services, and a number of web-based projects.  As readers can imagine, keeping all configuration settings in sync and "shifting" them together on demand without making a mistake can be challenging.

Visual Studio technologies utilized

  • Visual Studio Configurations (Configuration Manager)
  • The configSource attribute of .config sections
  • "Add as Link" file references
  • Pre- and Post-Build events
  • Config Transformations

Approach at a high level

For every debugging environment, create a Visual Studio Configuration.  Break out the sections of every .config file that vary by environment and include them back via configSource settings.  Include these sections as unique files in a separate Config project that has a folder for every Configuration.  Also copy the *.csdef and *.cscfg files from Azure-related projects into these folders.  In each folder, customize the broken-off partial configuration files (those included via configSource) and the Azure configuration files.  Create a Post- or Pre-Build event on the Config project to xcopy the files from the folder matching the current Configuration to the root folder of the Config project.  Include the files from the root folder of the Config project in other projects as needed via the "Add as Link" command.

Utilize Config Transformations to apply Production-specific (and, if needed, other environment-specific) configuration changes that deal with security, debugging, tracing, etc.

 

Now, let's look at these steps in detail:

 

Step 1

Create as many Visual Studio Configurations as necessary to support the number of different environments that developers must be able to debug against from Visual Studio.

 

Step 2

Create a separate, empty project called Config as part of the overall solution.  In this project, create one folder per matching Visual Studio Configuration created in Step 1.

 

Step 3

Break off connection strings, endpoint configuration, and other environment-specific sections from existing .config files into separate "include-only" config files.

Source example web.config:

  <connectionStrings configSource="config\connectionStrings.config"/>

Broken-off example connectionStrings.config:

  <connectionStrings>
    <add name="db_connection" connectionString="..." />
  </connectionStrings>

 

For web projects, you may want to make sure that the "bin" folder is part of the relative path: "bin\config\partialConfigFileName.config", since the linked partial config files are copied to the output directory at build time.

Step 4

Include the customized partial .config files and Azure-specific .cscfg files under every sub-folder of the Config project, as illustrated below.
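
As a purely illustrative sketch (folder and file names are examples, not prescriptive), a Config project supporting three environments might look like this:

  Config\
    Local\
      connectionStrings.config
      endpoints.config
      ServiceConfiguration.cscfg
    QA\
      connectionStrings.config
      endpoints.config
      ServiceConfiguration.cscfg
    Prod\
      connectionStrings.config
      endpoints.config
      ServiceConfiguration.cscfg

Each folder holds the same set of file names; only the values inside differ per environment.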


Step 5

Customize the Pre-Build event of the Config project to copy the config files for the current Configuration to its root folder:

xcopy /Y /R "$(ProjectDir)$(ConfigurationName)\*.config" "$(ProjectDir)"

xcopy /Y /R "$(ProjectDir)$(ConfigurationName)\*.cscfg" "$(ProjectDir)"
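
The approach above (and the revision note at the end of this article) also calls for keeping *.csdef files in sync.  Assuming the csdef files live in the same per-environment folders, the matching copy command would be:

xcopy /Y /R "$(ProjectDir)$(ConfigurationName)\*.csdef" "$(ProjectDir)"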

 

Step 6

Add-as-Link the partial .config files into the main projects from the ROOT of the Config project.  To keep things organized, feel free to place the linked partial .config files under a Config sub-folder.  Do the same with the Azure-specific .cscfg files.

Files added as links have a shortcut icon in Solution Explorer

Step 7

Manually add a project dependency on the Config project from the other projects in the solution that need the partial config files to function.  This will ensure that the Config project is built (and its Pre-Build copy step runs) before the other projects.  Project Dependencies can be found inside the Solution Properties window, under the "Project Dependencies" tab.

Do not forget to set "Copy to Output Directory" to "Copy if Newer" on the linked partial config files.

 

Step 8

You can still use Config Transformations to strip out debug information from the Release configuration that will ultimately be used for publishing.
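
For reference, a minimal transform of the kind Visual Studio generates for web.Release.config looks like this; it simply removes the debug attribute from <compilation> when the Release configuration is published:

  <?xml version="1.0"?>
  <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <system.web>
      <compilation xdt:Transform="RemoveAttributes(debug)" />
    </system.web>
  </configuration>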

 

Conclusion

Now, every time you debug and run, the Pre-Build event of the Config project will copy all the files from the folder for the chosen environment into its root folder.  All the other projects that link to this root folder for their partial .config files will automatically get the .configs switched to the proper environment in unison.

 

Revisions

This blog entry was revised on 06/10/2012 to include the extra XCOPY step that keeps the Azure cscfg/csdef files in sync with the Config project.

 



Data Storage in Azure: SQL Azure or Azure Table Services?

clock April 5, 2011 14:16 by author Igor Papirov

A question frequently asked during architectural stages of a new Azure-based project: should we choose SQL Azure or Azure Table Storage to store our data?

The answer is typically: YES.  These two storage technologies complement rather than compete with each other.  If you are looking to gain maximum scalability, performance, and flexibility while paying the least amount of money, the trick is to understand the strong points of each technology and utilize both effectively.

Azure Table Services (ATS) is a new storage technology from Microsoft, specifically designed to handle mega-scalability for applications and websites of Twitter/Facebook/Amazon capacity.  Storage space is super cheap with ATS.  To offset the low cost and mega-scalability, there are a few trade-offs to be aware of.  You pay not only for storing the data but also for accessing the data that lives in ATS.  There is also only limited support for transactions, which is key to understand and design around.  ATS is also a new paradigm for developers to grasp.  Lastly, ATS forces your compute nodes to become mini-relational servers: it simply does not do any of the relational processing we are all used to.  JOINs, GROUP BYs, and ORDER BYs all have to be designed around or performed manually.

I would say that ATS is best suited for large amounts of data that only rarely need to be accessed or massaged.

On the other hand, SQL Azure is a fast, lighter-weight, cloud-based relational store.  Your developers will know how to code against it right away, because (barring a few small gotchas) coding for it is simply like coding for any other SQL database.  On the negative side: SQL Azure is not meant to support huge applications like Facebook, eBay, Twitter, etc.  Azure stores your SQL Azure databases in a multi-tenant environment, alongside other databases on the same servers, and thus has to throttle access should your application become too hot and impact other databases on the same SQL Azure node.  SQL Azure is also somewhat pricey, at $10/gigabyte/month.  Having said that, there is no cost to access the data stored in SQL Azure, and there is plenty of CPU power dedicated to performing relational functionality on your data.

I would recommend SQL Azure for smaller amounts of frequently accessed data.

 

A few typical use-cases to illustrate the points:

Data in a banking system that stores customer account information alongside a large volume of financial transactions would be best divided across SQL Azure and ATS in the following way: customer account information in SQL Azure, financial transactions in ATS.
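
As a purely illustrative sketch of the ATS side of such a design (the key choices here are hypothetical, not prescriptive), the transaction table might be keyed so that each customer's transactions live in a single partition and sort newest-first:

  Table: FinancialTransactions
    PartitionKey = customer account id   (keeps one customer's transactions together)
    RowKey       = inverted timestamp    (newest transactions sort first)
    Properties   = amount, transaction type, description, ...

Queries scoped to a single PartitionKey are the cheap, fast case in ATS, which suits transaction history that is written constantly but read back only occasionally, per account.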

Content for a large blog site should probably live in SQL Azure until it reaches a certain age and can be archived to ATS.



Setup auto-scaling for your Windows Azure applications in under 10 minutes!

November 10, 2010 15:26 by author Igor Papirov

So you are deploying your application on the Windows Azure cloud platform.  One of the key features of a cloud platform like Azure is the ability to consume compute resources with a utility model: pay only for what is used, whether storage space, compute power, or the amount of data transferred.  Dynamic allocation of more space or bandwidth is built into Azure, and with respect to compute power, Windows Azure allows you to issue scale-up or scale-down commands relatively easily.  However, deciding when to do so can be a challenging task.  This blog entry describes AzureWatch, a service that can dynamically scale Windows Azure applications.

Part One - Introduction

At its core, AzureWatch aggregates and analyzes performance counters, queue lengths, and other metrics and matches that data against user-defined rules.  When a rule produces a "hit", a scaling action or a notification occurs.  You will need an account to install and use AzureWatch.  Follow this link to fill out a simple registration form.  After registration, a download link for the Windows-based configuration utility will be provided.

AzureWatch currently ships in two flavors: a desktop edition and a server-side edition.  Server-side AzureWatch will monitor and auto-scale your Azure applications from its cloud-based servers.  The desktop edition requires a special AzureWatch Monitoring agent to be installed on your premises; in this edition, the agent is responsible for gathering metrics and initiating scaling events.

 

Part Two - Start Control Panel

After installation is complete, start the AzureWatch ControlPanel and log in with your newly created account.  You will be presented with a wizard to enter your Azure connection information.

Your Subscription ID can be found on the Windows Azure developer portal.  If you do not already have an X.509 certificate, AzureWatch can create one for you.  Follow the hyperlink in the wizard for detailed instructions on creating certificates.  It is a good idea to visit the AzureWatch page to understand how your certificates and storage keys are kept secure.

After entering your Subscription ID and specifying a valid X.509 certificate, press "Connect to Azure".  You will be presented with a list of storage accounts.  The storage account used by your Diagnostics Monitor is required.

 

On the next wizard page you can validate default settings for such things as throttle times, notification email, etc.

After the connection wizard is completed, AzureWatch will figure out what services, deployments and roles are present.  For each role found, you will be offered a chance to create simple predefined rules.


The few sample rules offered are simple rules that rely upon basic metrics.  We will come back to these rules in a short while; for now, the wizards need to be completed.

 

Part Three - First time in Control Panel

After the wizards complete, you are presented with a dashboard screen.  It likely contains empty historical charts, since no data has been collected so far.  The Navigation Explorer on the left shows various parameters that can be customized, while the Instructions tab on the right shows context-sensitive instructions.


It is a good idea to visit the Rules section to see the rules that have been defined by the wizard.  Two sample rules should be present and can be edited by double-clicking on each.  The Rule Edit screen is simple yet powerful.  You can specify what formula needs to be evaluated, what happens when the evaluation returns TRUE, and what time of day evaluation should be restricted to.  To make formula entry easier, a list of already-defined aggregated metrics is provided.  Hovering over the formula box will display the allowed operands.  A sketch of what a rule might look like follows below.
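
As a purely hypothetical illustration (the metric name here stands in for a user-defined aggregated metric, not built-in syntax), a scale-up rule might be configured roughly as:

  Formula:          AvgCPU60Min > 75
  Action when TRUE: scale Role up by 1 instance
  Active time:      8:00am - 6:00pm

A matching scale-down rule (for example, AvgCPU60Min < 25) would typically accompany it, so that the instance count shrinks again when load subsides.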


One last place to visit before starting the monitoring process is the screen that contains safety limits for the number of instances that AzureWatch can scale up to or down to.  By clicking on the appropriate Role name in the Navigation Explorer, you will be presented with a chance to modify these boundaries.


This is it.  If you are ready, press the "Publish Changes" button.  Provided your AzureWatch Monitor service is running, it will pick up these configuration settings in the next iteration of its internal loop and instruct the Azure Diagnostics Manager to start capturing the metrics required for the formulas to work.  Windows Azure will need a few minutes to instruct your instances to start capturing those metrics, and then a few minutes more before the metrics are transferred to your storage.  Thus, give AzureWatch at least 5-10 minutes before expecting to see anything on the Dashboard screen.

 

Part Four - A few tips and tricks

Some things to keep in mind while using AzureWatch:

If you have just started using AzureWatch and have not accumulated enough metric data, evaluation of your rules may be suspect, as your aggregations will lack sufficient data.  It may be prudent to disable scaling inside the Rules at the beginning so that scaling actions do not trigger unexpectedly.

Metric transfer occurs only when the Monitor is running.  If you stop the Monitor service for an hour and then restart it, it does not "go back" and send the missing hour's worth of metrics to AzureWatch.

When deploying new packages into Azure, you must do it through the same storage account that is specified in your DiagnosticsConnectionString and in the AzureWatch System Settings.  Failure to do this will result in incomplete or incorrect metrics.

AzureWatch will always instruct your instances to capture metrics that are defined in the Raw Metrics screen.  You do not need to do anything special with existing or newly started instances.  It may be worthwhile, however, to visit the System Settings screen to further configure how metric enforcement and gathering works.

AzureWatch will send a notification when it scales your instances up or down.  In it, it will provide values for all the aggregated metrics it knows about to help you understand why the scaling event occurred.

Locally installed AzureWatch components automatically self-update whenever a new version is released.



Paraleap Technologies Joins Microsoft® BizSpark™ program!

October 15, 2010 03:35 by author Igor Papirov

Paraleap Technologies is proud to announce that it has become a partner in the Microsoft® BizSpark™ program, designed to accelerate the success of emerging startups by providing key resources such as software, support, and visibility.

“We are very excited to participate in the BizSpark program,” says Igor Papirov, Founder of Paraleap Technologies.  “Thanks to the program we now have better access to cutting-edge software and services and can concentrate on building cutting-edge products targeting the Microsoft Windows Azure platform.”


About Paraleap Technologies

Founded in 2010, Paraleap Technologies is an emerging Chicago-based startup focused on providing tools and services for cloud computing technologies.
AzureWatch is Paraleap's flagship product, designed to add dynamic scalability and monitoring to applications running on the Microsoft Windows Azure cloud platform.