Since we began deploying Sitefinity on Microsoft Azure this year, we’ve had a wonderful experience running the CMS in the cloud. However, we’ve found Sitefinity’s implementation to be a little lackluster when it comes to the qualities that make the Cloud the Cloud—flexibility and scaling. Along with a colleague of mine, Nicholas Balcolm, I have been toying around with Sitefinity and Azure for a couple of months now, and we’re both proud to say we’ve been able to put some of the auto-scaling shine back on the Sitefinity-Azure integration.
If you’ve ever deployed Sitefinity to a load-balanced environment like Azure Cloud Services, you know that the CMS has to be configured manually with the IP addresses of each server instance. This is so that the servers can ensure a consistent user experience by notifying each other when content has changed and their caches have become invalid.
Configuring these IP addresses manually works fine if you’re running a predetermined number of instances, but one of the biggest selling points of Azure Cloud Services is its ability to automatically scale, bringing up new instances to cope with heavy workloads and taking down old ones when the workload is light. Requiring manual configuration for scaling defeats this feature, taking away one of the best reasons to consider Azure when deploying Sitefinity.
We have come up with a solution that allows Sitefinity to automatically configure itself when Azure scales the cloud service. The RoleEnvironment object in Azure’s ServiceRuntime module can provide the list of IP addresses for our currently running server instances. Through the Sitefinity API, we can make changes to the LoadBalancing section of the SystemConfig. In light of this, all we have to do to keep Sitefinity configured correctly is run an update when the list of IP addresses changes.
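To give a feel for the approach, here is a rough sketch of that update step. It assumes the Azure ServiceRuntime and Sitefinity assemblies are referenced; the role name (`SitefinityWebRole`) is a placeholder, and the `LoadBalancingConfig`/`URLS` member names are our best recollection of the SystemConfig section and may differ between Sitefinity versions, so treat this as an outline rather than drop-in code:

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;
using Telerik.Sitefinity.Configuration;

public static class LoadBalancingUpdater
{
    public static void SyncInstanceUrls()
    {
        // Collect the endpoint addresses of every currently running
        // instance of the web role. "SitefinityWebRole" is a placeholder
        // for your actual role name.
        var addresses = RoleEnvironment.Roles["SitefinityWebRole"].Instances
            .SelectMany(instance => instance.InstanceEndpoints.Values)
            .Select(endpoint => "http://" + endpoint.IPEndpoint)
            .Distinct()
            .ToList();

        // Open the SystemConfig section through the Sitefinity API and
        // rewrite its load-balancing URL list to match the live instances.
        // Note: the exact shape of LoadBalancingConfig.URLS is an
        // assumption here; check it against your Sitefinity version.
        var configManager = ConfigManager.GetManager();
        var systemConfig = configManager.GetSection<SystemConfig>();

        systemConfig.LoadBalancingConfig.URLS.Clear();
        foreach (var address in addresses)
        {
            var element = new UrlElement(systemConfig.LoadBalancingConfig.URLS)
            {
                Url = address
            };
            systemConfig.LoadBalancingConfig.URLS.Add(element);
        }

        configManager.SaveSection(systemConfig);
    }
}
```

Because the update is idempotent—it simply makes the config mirror whatever `RoleEnvironment` reports—it is safe to run it as often as needed.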
The problem then is how to time the updates. One option we considered was to listen for Azure’s OnStart and OnStop events. Unfortunately, we discovered that OnStart is called before Sitefinity has initialized, so we wouldn’t have access to Sitefinity’s configs from there. Another possibility would have been to listen for the RoleEnvironment.Changing event, but the downside there is that our update would run on every server instance, when we only need it to run on one of them.
Ultimately, we decided to use conventional Sitefinity techniques to time the updates. We know that our server has started when Sitefinity’s “Bootstrapped” event fires, so we run the first update then. Since there isn’t an event that fires when a server goes down, we set up a scheduled task to run every five minutes and compare the list of running instances against the configs, removing any servers that are no longer running.
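In outline, the wiring looks something like the following. It assumes a helper (here called `SyncInstanceUrls()`) that performs the config update described above; the scheduled-task details are simplified assumptions, as the `ScheduledTask` base class and `SchedulingManager` API vary somewhat between Sitefinity versions:

```csharp
using System;
using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Scheduling;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Run the first update only after Sitefinity has fully initialized;
        // earlier hooks (such as Azure's OnStart) fire before the configs
        // are available.
        Bootstrapper.Bootstrapped += (s, args) =>
        {
            LoadBalancingUpdater.SyncInstanceUrls();
        };
    }
}

// Recurring task that re-syncs the URL list, which removes any instances
// Azure has already taken down. Sitefinity scheduled tasks run once, so
// the task re-registers itself for five minutes later on each run.
public class PruneStaleInstancesTask : ScheduledTask
{
    public override void ExecuteTask()
    {
        LoadBalancingUpdater.SyncInstanceUrls();

        var manager = SchedulingManager.GetManager();
        manager.AddTask(new PruneStaleInstancesTask
        {
            ExecuteTime = DateTime.UtcNow.AddMinutes(5)
        });
        manager.SaveChanges();
    }
}
```

Since only one instance needs to perform the sync, scheduling it through Sitefinity (rather than on every role via `RoleEnvironment.Changing`) keeps the work from being duplicated across the farm.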
With auto-scaling in place, you can save your company money by running only the instances you need at the times you need them. We have made our project available via GitHub—please contribute if you find any issues or have additional ideas.