Azure Planned Maintenance Experience

The Azure Team recently announced the availability of the new Planned Maintenance experience. The goal is to give customers the ability to initiate a proactive redeploy of their VMs ahead of planned maintenance events that require a reboot. Maintenance will be communicated via the Azure portal, the Azure Service Health dashboard, CLI and PowerShell. Subscription owners and admins will also continue to get maintenance notifications via e-mail.

Customers can now test the Planned Maintenance experience in the US West Central region. To do so, you need to use a special link for accessing the Azure Portal. In the overview screen for a VM which will be impacted by scheduled maintenance, you’ll see a screen similar to the one below.


Choosing to initiate maintenance now will redeploy the VM onto a node which has already been upgraded, allowing you to control when the VM is restarted. In addition, the VM blade in the preview portal will contain new columns which can be used to determine the maintenance status of the VM:


Maintenance windows that involve VM reboots follow the same update patterns Azure uses in any other situation. VMs that are part of Availability Sets or VM Scale Sets are spread across separate update domains, and Azure will never update hosts in more than one update domain at a time. Each Azure region also has a region pair (like US East and US West), and updates are never performed across both regions of a pair at the same time. For VMs which are not in an availability set, this new feature gives you the ability to control when the updates are done and schedule them within your own maintenance windows. It's also the way to guarantee an exact window during which maintenance is performed, even for VMs that are in an availability set. Lastly, if you need to coordinate the impact of a VM redeployment, or you're running applications that don't support high availability, the planned maintenance experience helps you deal with those situations as well.

Full control of this is also available via PowerShell, as you would expect. You can query the planned maintenance status of a VM using Get-AzureRmVM:

Get-AzureRmVM -ResourceGroupName "PrevMaintenance" -Name "mytestvm2" -Status

You can also perform (kick off) the maintenance using Restart-AzureRmVM with the -PerformMaintenance parameter:

Restart-AzureRmVM -PerformMaintenance -Name "mytestvm2" -ResourceGroupName "PrevMaintenance"

PowerShell also makes it easy to get the status of all of the VMs across a subscription. Here's a sample of how you would get a list of all VMs and their planned maintenance status in the current subscription:
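A minimal sketch of that subscription-wide query (this assumes the AzureRM module is installed and you're signed in; the MaintenanceRedeployStatus property is populated on the VM's instance view when maintenance is scheduled, and the exact property shape may vary slightly between module versions):

```powershell
# List every VM in the current subscription along with its planned
# maintenance status. The per-VM call with -Status returns the instance
# view, which carries MaintenanceRedeployStatus when maintenance applies.
Get-AzureRmVM | ForEach-Object {
    $status = Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName `
                            -Name $_.Name -Status
    [PSCustomObject]@{
        Name              = $_.Name
        ResourceGroup     = $_.ResourceGroupName
        # $null when no maintenance is currently scheduled for the VM
        MaintenanceStatus = $status.MaintenanceRedeployStatus
    }
} | Format-Table -AutoSize
```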


Some other useful resources:

Overview Video on Channel 9 (15min)
Linux Planned Maintenance Documentation
Windows Planned Maintenance Documentation
Monitoring Service Notifications using Service Health


Azure Application Gateway SSL Policies

A client recently ran Qualys SSL Server Test against their web applications published through the Azure Application Gateway. The test graded the SSL security on the site as a “B” mainly because the server supported weak Diffie-Hellman (DH) key exchange parameters.

Diffie-Hellman key exchange is a popular cryptographic algorithm that allows Internet protocols to agree on a shared key and negotiate a secure connection. SSL sites that support export cipher suites and don’t use 2048-bit or stronger Diffie-Hellman groups with “safe” primes are susceptible to attacks like LogJam. Luckily, a feature known as SSL Policy in the Azure Application Gateway allows you to reduce the potential for these types of attacks.

The SSL handling in Azure Application Gateway (used for things such as SSL offloading and centralized SSL handling) allows you to specify a central SSL policy suited to your organizational security requirements. The SSL policy controls the SSL protocol version as well as the cipher suites and the order in which ciphers are used during an SSL handshake. Application Gateway offers two mechanisms for controlling SSL policy: a predefined policy or a custom policy. Here's a link to the documentation for SSL policy with Azure Application Gateway. Setting the SSL policy for a new Application Gateway deployment, or changing it on an existing deployment, is easily done via the PowerShell cmdlets. Below is an example of how to do this with a few lines of PowerShell.

One “gotcha” is that the predefined SSL policy which disables the weaker cipher suites also sets a minimum TLS version of v1.2 and breaks most older browsers. If that’s not a concern, use the latest predefined SSL policy – otherwise you’ll have to use a custom policy and specify a lower minimum TLS version to support older IE browsers running on Windows 7, for example.

# Get the configuration of the existing AppGW
$appgw = Get-AzureRmApplicationGateway -Name $GWName -ResourceGroupName $GWResourceGroupName
# Set the SSL policy on the AppGW to the most recent predefined policy, which enforces a TLS v1.2 minimum.
# FYI: Because TLS v1.0 is not supported by this policy, it will break any browser earlier than IE 11!
Set-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $appgw -PolicyType Predefined -PolicyName "AppGwSslPolicy20170401S"
# Update the gateway with the validated SSL policy
Set-AzureRmApplicationGateway -ApplicationGateway $appgw
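If you do need to keep TLS v1.0 available for older clients, a custom policy is the alternative. Here's a sketch along those lines (the cipher suite list is illustrative only, not a recommendation; check the SSL policy documentation for the currently supported suite names):

```powershell
# Get the configuration of the existing AppGW
$appgw = Get-AzureRmApplicationGateway -Name $GWName -ResourceGroupName $GWResourceGroupName
# Custom policy: restrict to stronger cipher suites, but keep a TLS v1.0
# minimum so older IE browsers on Windows 7 can still connect.
Set-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $appgw `
    -PolicyType Custom `
    -MinProtocolVersion TLSv1_0 `
    -CipherSuite "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", `
                 "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", `
                 "TLS_RSA_WITH_AES_256_GCM_SHA384"
# Commit the change to the gateway
Set-AzureRmApplicationGateway -ApplicationGateway $appgw
```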

Azure AppGW and IP Address Usage

I had to do some digging recently to answer a question about the Azure Application Gateway and its consumption of IP addresses from the Application Gateway subnet's address space. It's not well documented, and without an understanding of it you might end up painting yourself into a corner when designing the layout of your subnets and address spaces in an Azure Virtual Network.

If you're not familiar with it, the Azure Application Gateway is a dedicated virtual appliance that provides HTTP/HTTPS (layer 7) load balancing and is delivered as a multi-instance, fully managed service. It also has web application firewall (WAF) capabilities and can be configured as an internet-facing gateway, an internal gateway, or a combination of both.


The first important point to understand is that the Azure Application Gateway must be deployed in its own subnet. The subnet created or used for an application gateway cannot contain any other types of resources (VMs, load balancers, etc.) but *can* contain other application gateways. If a subnet doesn't meet these criteria, it won't appear in the Azure portal as one of the possible subnets for selection in the deployment wizard. Because of this, one might be tempted to create the smallest address space possible for the subnet (a /29, which leaves only 3 usable IPs once Azure reserves 5 addresses from the range), especially if there's no plan to deploy application gateways with internal addresses. That would be a big mistake, and here's why: each instance of an application gateway consumes an address from the subnet's address pool.


When you create an application gateway, an endpoint (a public VIP or an internal ILB IP) is associated with it and used for ingress network traffic. This VIP or ILB IP is provided by Azure and works at the transport layer (TCP/UDP). The application gateway then routes the HTTP/HTTPS traffic, based on its configuration, to a backend pool made up of virtual machines, a cloud service, or internal or external IP addresses.


Regardless of whether an application gateway is deployed with a public IP, a private IP, or both, each instance of the gateway consumes an address from the subnet's address pool. If the gateway is also configured with a private ILB IP, it consumes one additional IP address from the subnet's pool for that ILB IP. The only IP that will show up in the subnet as consumed, though, is the ILB IP; you won't see any of the IPs in use by the gateway instances. The only way you'll know you've hit the limit is that your deployment (or instance-count increase) will fail with an error that there's not enough address space. This is definitely not ideal and hopefully will be improved at some point.

Some examples:

2 instance GW with a Public IP Only = 2 IPs (2 + 0 = 2)

2 instance GW with a Private IP Only = 3 IPs (2 + 1 = 3)

4 instance GW with a Public & Private IP = 5 IPs (4 + 1 = 5)
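The arithmetic above is simple enough to capture in a small helper when planning subnet sizes (a hypothetical function for illustration, not part of any Azure module):

```powershell
# Hypothetical helper: how many subnet IPs an Application Gateway consumes.
# Rule of thumb from above: one IP per instance, plus one if a private ILB IP is configured.
function Get-AppGwIpCount {
    param(
        [int]$InstanceCount,
        [bool]$HasPrivateIp
    )
    $ips = $InstanceCount
    if ($HasPrivateIp) { $ips += 1 }
    return $ips
}

Get-AppGwIpCount -InstanceCount 2 -HasPrivateIp $false   # 2
Get-AppGwIpCount -InstanceCount 2 -HasPrivateIp $true    # 3
Get-AppGwIpCount -InstanceCount 4 -HasPrivateIp $true    # 5
```

Remember to also account for the 5 addresses Azure reserves in every subnet when sizing the address range.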

So keep this in mind when designing the layout of your Azure VNET and its subnets and address spaces. Not having enough addresses in the subnet to expand the number of instances for your production Application Gateway can be a real pain and certainly will require downtime to resolve.