
1 Disabling ARR's Instance Affinity in Windows Azure Web Sites ... 3
2 How to change Subnet and Virtual Network for Azure Virtual Machines (ASM & ARM) ... 7
3 Configure a virtual network using a network configuration file ... 10
4 Designing a Scalable Partitioning Strategy for Azure Table Storage ... 11
5 Create a VNet with a Site-to-Site connection using the Azure classic portal ... 27
6 Managing access to resources with Azure Active Directory groups ... 33
7 Getting started with access management ... 36
8 MICROSOFT AZURE - VIRTUAL NETWORKS (VNETS) EXPLAINED ... 58
9 Auto scale application block ... 89
10 Get started with Azure Queue storage using .NET ... 92
11 Azure Exam Prep - Fault Domains and Update Domains ... 111
12 Azure Exam Prep - Virtual Networks ... 118
13 Warnings sent to customers when Azure is about to be updated ... 124
14 Azure Web Sites/Web Apps and SSL ... 128
15 Deploy your app to Azure App Service ... 136
16 Create a .NET WebJob in Azure App Service ... 146
17 Copy Blob ... 192
18 Load balancing for Azure infrastructures ... 207
19 How to monitor cloud services ... 212
20 Continuous delivery to Azure using Visual Studio Team Services ... 222
21 Configuring SSL for an Azure application ... 259
22 Using shared access signatures ... 267
23 Monitor a storage account ... 293
24 Active Directory Graph API (REST) ... 302
25 Notification Hubs monitoring and telemetry - programmatic access ... 304
26 Staging environments in Azure apps ... 311
27 How to auto scale a cloud service ... 329
28 https node.js documentation ... 337
29 Introducing Asynchronous Cross-Account Copy Blob ... 342
30 Connecting PowerShell to your Azure Subscription ... 349
31 Downloading Windows Azure Subscription Files ... 354
32 Switch Azure website slot ... 356
33 How to Use Azure Redis Cache ... 363
34 Monitor a storage account in the Azure Portal ... 384
35 Update Data Disk ... 393
36 Map a custom domain name to an Azure app ... 396
37 Secure your app's custom domain with HTTPS ... 406
38 What is an endpoint Access Control List (ACLs)? ... 433
39 Cost estimates ... 437
40 Adding Trace Statements to Application Code ... 439
41 Trace.TraceInformation Method ... 443
42 Trace.WriteIf Method ... 443
43 Windows Azure Remote Debugging ... 444
46 Migrate Azure Virtual Machines between Storage Accounts ... 449
47 Blob Service REST API ... 456
48 Creating A Custom Domain Name For Azure Web Sites ... 460
49 Create and upload a Windows Server VHD to Azure ... 462
50 Secure your app's custom domain with HTTPS ... 467
51 Add a Site-to-Site connection to a VNet with an existing VPN gateway connection ... 494
52 VM Sizes ... 503
53 Database tiers ... 504
54 How to configure and run startup tasks for a cloud service ... 506
55 Using Shared Access Signatures (SAS) ... 511
56 Container related ops ... 532
   Listing the blobs within a container ... 532
57 Emulator Express ... 533

1 Disabling ARR's Instance Affinity in Windows Azure Web Sites
Setting up multiple instances of a website in Windows Azure Web Sites is a terrific
way to scale out your website, and Azure makes great use of the Application
Request Routing IIS Extension to distribute your connecting users between your
active instances. ARR cleverly keeps track of connecting users by giving them a
special cookie (known as an affinity cookie), which allows it to know, on
subsequent requests, which server instance they were talking to. This way, we
can be sure that once a client establishes a session with a specific server instance,
it will keep talking to the same server as long as its session is active. This is of
particular importance for session-sensitive applications (a.k.a. stateful applications),
because the session-specific data won't move from one server to another on its
own. Applications can be designed to do this (typically by storing that data in some
kind of shared storage like SQL), but most are not, and so normally we want to keep
every user attached to their designated server. If the user moves to another server, a
new session is started, and whatever session data the application was using is gone
(for example, the content of a shopping cart). Here's a brief description of this
process:
1. The client connects to an Azure Web Sites website.
2. ARR runs on the front-end Azure server and receives the request.
3. ARR decides to which of the available instances the request should go.
4. ARR forwards the request to the selected server, and crafts an ARRAffinity cookie that it attaches to the response.
5. The response comes back to the client, holding the ARRAffinity cookie.
6. When the client receives the response, it stores the cookie for later use (browsers are designed to do this for cookies they receive from servers).
7. When the client submits a subsequent request, it includes the cookie in it.
8. When ARR receives the request, it sees the cookie and decodes it.
9. The decoded cookie holds the name of the instance that was used earlier, so ARR forwards the request to the same instance rather than choosing one from the pool.
10. The same thing (steps 7-9) repeats on every subsequent request for the same site, until the user closes the browser, at which point the cookie is cleared.
However, there are situations where keeping affinity is not desired. For example,
some users don't close their browser, and remain connected for extended periods of
time. When this happens, the affinity cookie remains in the browser, and this keeps
the user attached to their server for a period that could last hours, days or even more
(in theory, indefinitely!). Keeping your computer on and browser open is not
unusual, and many people (especially on their work-place computers) do it all the
time. In the real world, this leads to the distribution of users per instance falling out
of balance (that's a little like how the line behind some registers in the supermarket
can get hogged by a single customer, leading to others waiting in line more than
they normally should). Depending on your applications and what they do, you may
care more or less about users being tied to their servers. In case this is of little or
no importance and you'd rather disable this affinity and opt for better load
balancing, we have introduced the ability for you to control it. Because the affinity is
controlled by an affinity cookie, all you have to do to disable affinity is make sure
that Azure doesn't give the cookies out. If it doesn't, subsequent requests by the
user will be treated as new, and instead of trying to route them to "their" server,
ARR will use its normal load-balancing behavior to route the request to the best
server.

This is how the affinity cookie looks:

Disabling the affinity can be done in two ways:
1. In your application
2. In a site configuration
To control this behavior in an application, you need to write code to send out a
special HTTP header, which will tell the Application Request Router to remove the
affinity cookie. This header is Arr-Disable-Session-Affinity, and if you set it to
true, ARR will strip out the cookie. For example, you could add a line similar to this
to your application's code:
headers.Add("Arr-Disable-Session-Affinity", "True");
* This example is for C#, but this could just as easily be done in any other language
or platform. Setting this in the application's code would be suitable for situations
where you DO want affinity to be kept for the most part, and only reset on specific
application pages. If, however, you prefer to have it completely disabled, you could
have ARR remove the cookie always by having IIS itself inject that header directly.
This is done with a customHeaders configuration section in web.config, which you
upload to the root of the site.
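A minimal web.config along those lines, using the standard IIS customHeaders schema, would look like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Tells ARR not to issue the ARRAffinity cookie for any response from this site -->
        <add name="Arr-Disable-Session-Affinity" value="True" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>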

Keep in mind, though, that the configuration in web.config is sensitive, and a badly
formatted file can stop the site from working properly. If you haven't had a chance
to work with web.config files before, read this getting-started guide.
Troubleshooting
If you intend on implementing this, you might wonder how to confirm it's working
and troubleshoot it. The ARR Affinity cookie is normally included with the 1st
response from any Azure Web Sites web site, and subsequently included with any
request sent from the client and response received from the server. To see it in
action, you could use any of multiple HTTP troubleshooting and diagnostic tools.
Here is a list of some of the more popular options:
1. Fiddler
2. HTTPWatch
3. Network Monitor
4. Wireshark
5. Firebug
You can find info about several other tools here. The 1st one on the list, Fiddler, is
one of the most popular, because it can interact with any browser, and is available
for free. Once Fiddler is installed, it will record any URL you browse to, and you can
then click on the Inspector tab for either the request or response to see the details.
For example, below you can see the HTTP Headers tab, which shows the affinity
cookie sent by the server using the Set-Cookie header:

If you add the Arr-Disable-Session-Affinity header to disable the affinity cookie, ARR will
not set the cookie, but it will also remove the Arr-Disable-Session-Affinity header
itself, so if your process is working correctly, you will see neither. If you see both the
cookie AND the header, this means that something is wrong with the way you set
the header, possibly an error in the text of the header name or its value. If you see
the cookie and not the header, this probably means your changes to web.config are
invalid, or your header-injection code is not working, and you could try to confirm it
by adding another, unrelated header. Generally speaking, it's easier to set the
headers with web.config than with code, so in case of doubt, you should start by
simplifying it to reduce the surface area of your investigation. In closing, we should
mention that disabling the affinity is not something that should be taken lightly. For
static content, it would rarely be an issue, but if you're running applications, and
they are not designed for dealing with users jumping from one server to another, it
might not end well. For scenarios where the affinity has led to imbalance, this new
ability will come as great news.

2 How to change Subnet and Virtual Network for Azure Virtual Machines (ASM & ARM)
During the lifecycle of an Azure Virtual Machine (VM), you may encounter
situations where you need to change the subnet, or maybe the Virtual Network
(VNET), where your VM has been created. It is worth mentioning that in Azure
Resource Manager (ARM) it is mandatory to place every VM in a VNET; you cannot
avoid this. In legacy Azure Service Management (ASM) this requirement didn't exist.
This blog post focuses on how to change the subnet, using PowerShell, for an existing
VM in ARM and ASM, and clarifies what would happen in case of VNET movement.
Before effectively starting, it is important to understand that the networking model
in ARM is very different from the ASM model used in the earlier Azure days; the
picture below gives an immediate idea:

2.1 Change Subnet in ASM


Changing the subnet in ASM is pretty simple; there is a nice PowerShell cmdlet that
makes the magic:
Set-AzureSubnet
https://msdn.microsoft.com/en-us/library/mt589103.aspx
You can use this same cmdlet at VM creation time to configure the subnet, but you
can also use it to change the subnet for an existing VM. The PowerShell code is very simple:
Get-AzureVM -Name $MyVMname -ServiceName $MyCloudServiceName `
| Set-AzureSubnet -SubnetNames $NewTargetSubnet `
| Update-AzureVM
IMPORTANT: There is a strict requirement here: you can only change the subnet if the
new one is in the same VNET as the old one.

2.2 Change Subnet in ARM


In ARM, this cmdlet does not exist anymore, but changing the subnet is very simple
once you know where to operate the change. What I found particularly interesting is
that this change needs to happen on the NIC object, not at the VM level, but if you
look at the picture above, you will agree that this is expected. The first step in the
procedure is to obtain references to your VM, subnet, VNET and NIC objects, as
reported in the code snippet below:
# Obtain VM reference and check Availability Set
$VMName = "igor-vm1"
$VirtualMachine = Get-AzureRmVM -ResourceGroupName $RGname -Name $VMName

# VM created in an Availability Set
$VirtualMachine.AvailabilitySetReference

# Obtain NIC, subnet and VNET references
$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $RGname
$NIC.IpConfigurations[0].PrivateIpAddress # 10.1.1.4

$VNET = Get-AzureRmVirtualNetwork -Name $VNETname -ResourceGroupName $RGname
$Subnet2 = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $VNET -Name $Subnet2name
Now, the subnet setting you need to change resides in the IpConfigurations
property array; you can access it with index [0] and change the Subnet.Id
value directly to the ID of the new target subnet:

$NIC.IpConfigurations[0].Subnet.Id = $Subnet2.Id
Set-AzureRmNetworkInterface -NetworkInterface $NIC
Once you have done this operation, you need to commit the change with the Azure
ARM Network Provider using the cmdlet Set-AzureRmNetworkInterface. Please
note that the execution of this cmdlet will take about two minutes, at least in my
scenario with an A3 VM type. Why is it not an immediate operation? Because once you
commit the change, Azure will automatically restart the VM. You can execute this
procedure while the VM is running, but at a certain point you will have a service
downtime.
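Putting the pieces together, here is a consolidated sketch of the whole subnet change; the resource group, NIC, VNET and subnet names are placeholders that you would replace with your own values:

# Placeholder names - substitute your own resource group, NIC, VNET and subnet names
$RGname      = "MyResourceGroup"
$NICname     = "MyVmNic"
$VNETname    = "MyVNet"
$Subnet2name = "Subnet2"

# Get references to the NIC and to the target subnet
$NIC     = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $RGname
$VNET    = Get-AzureRmVirtualNetwork -Name $VNETname -ResourceGroupName $RGname
$Subnet2 = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $VNET -Name $Subnet2name

# Point the NIC's first IP configuration at the new subnet and commit the change
# (Azure restarts the VM automatically when the change is applied)
$NIC.IpConfigurations[0].Subnet.Id = $Subnet2.Id
Set-AzureRmNetworkInterface -NetworkInterface $NIC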

If you want to check that the change completed, you need to re-acquire the NIC object
reference and access the private IP address or subnet properties as in the example
below:
$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $RGname
$NIC.IpConfigurations[0].PrivateIpAddress # 10.1.1.4 -> 10.1.2.4
There is a last aspect I want to mention here: what happens if you have more than
one VM in an Availability Set and you want to move one or more VMs to a different
subnet? No problem; I tested this scenario, and you can have one (or more) VMs in
one subnet S1 and one (or more) VMs in a different subnet S2, provided that S1 and
S2 are in the same VNET.

2.3 Change Virtual Network in ASM and ARM


Unfortunately, today it is not possible/supported to directly change the VNET for an
existing VM. The only way to make this change is to export the VM definition, drop
the VM while preserving its disks, then re-create the VM with the same settings
(except for the new VNET assignment), attaching the previous disks. As proof, in ARM,
if you try to move a VM to a subnet in a different VNET, you will get an error like this:
Update-AzureRmVM : Subnet Subnet4 referenced by resource
/subscriptions/8e95e0bb-d7cc-4454-9443-75ca862d34c1/resourceGroups/igorrg7/providers/Microsoft.Network/networkInterfaces/nic4-igor-vm1/ipConfigurations/ipconfig1
is not in the same Virtual Network as the subnets of other VMs in the availability set.
StatusCode: 400
ReasonPhrase: Bad Request
OperationID :
At line:1 char:1
+ Update-AzureRmVM -VM $VirtualMachine -ResourceGroupName $rgname
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Update-AzureRmVM], ComputeCloudException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Compute.UpdateAzureVMCommand
Another very important property you cannot change once the VM is created is the
Availability Set, so you should think about it carefully. Finally, please note that
trying to move to a different VNET will give you exactly the same error as above
even if there is only a single VM in the Availability Set.
That's all I wanted to share with you on this topic; if you want, you can also follow
me on Twitter (@igorpag). Best regards.

3 Configure a virtual network using a network configuration file
You can configure a virtual network (VNet) by using the Azure Management portal,
or by using a network configuration file.
Creating and modifying a network configuration file
The easiest way to author a network configuration file is to export the network
settings from an existing virtual network configuration, then modify the file to
contain the settings that you want to configure for your virtual networks.
To edit the network configuration file, you can simply open the file, make the
appropriate changes, and then save the file. You can use any XML editor to make
changes to the network configuration file.
You should closely follow the guidance for network configuration file schema
settings.
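For orientation, the general shape of the file looks roughly like the minimal, hypothetical example below; the names and address ranges are placeholders, and the schema settings guidance remains the authority for the exact elements and attributes:

<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="MyVNet" Location="East US">
        <AddressSpace>
          <AddressPrefix>10.1.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="FrontEnd">
            <AddressPrefix>10.1.1.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>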
Azure considers a subnet that has something deployed to it as in use. When a
subnet is in use, it cannot be modified. Before modifying, move anything that you
have deployed to the subnet to a different subnet that isn't being modified. See
Move a VM or Role Instance to a Different Subnet.
Export and import virtual network settings using the Management Portal

You can import and export network configuration settings contained in your network
configuration file by using PowerShell or the Management Portal. The instructions
below will help you export and import using the Management Portal.
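If you prefer PowerShell over the portal, the classic (ASM) Azure module exposes an equivalent pair of cmdlets; the file path below is just a placeholder:

# Export the current virtual network configuration for the subscription to a local file
Get-AzureVNetConfig -ExportToFile "C:\temp\NetworkConfig.xml"

# Edit the file, then apply it back to the subscription
Set-AzureVNetConfig -ConfigurationPath "C:\temp\NetworkConfig.xml"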
To export your network settings
When you export, all of the settings for the virtual networks in your subscription will
be written to an .xml file.
1. Log into the Management Portal.
2. In the Management Portal, on the bottom of the networks page, click
Export.
3. On the Export network configuration window, verify that you have
selected the subscription for which you want to export your network settings.
Then, click the checkmark on the lower right.
4. When you are prompted, save the NetworkConfig.xml file to the location of
your choice.
To import your network settings
1. In the Management Portal, in the navigation pane on the bottom left, click
New.
2. Click Network Services -> Virtual Network -> Import Configuration.
3. On the Import the network configuration file page, browse to your
network configuration file, and then click the next arrow.
4. On the Building your network page, you'll see information on the screen
showing which sections of your network configuration will be changed or
created. If the changes look correct to you, click the checkmark to proceed to
update or create your virtual network.
4 Designing a Scalable Partitioning Strategy for Azure Table Storage

Azure provides cloud storage that is highly available and scalable. The underlying
storage system for Azure is provided through a set of services, including the Blob,
Table, Queue, and File services. The Azure Table service is designed for storing
structured data. The Azure Storage service supports an unlimited number of tables,
and each table can scale to massive levels, providing terabytes of physical storage.
To take best advantage of tables, you will need to partition your data optimally. This
article explores strategies that allow you to efficiently partition data for Azure Table
storage.

4.1 Table Entities


Table entities represent the units of data stored in a table and are similar to rows in
a typical relational database table. Each entity defines a collection of properties.
Each property is a key/value pair defined by its name, value, and the value's data
type. Entities must define the following three system properties as part of the
property collection:

PartitionKey: The PartitionKey property stores string values that identify the partition
that an entity belongs to. Partitions, as discussed later, are integral to the scalability
of the table. Entities with the same PartitionKey values are stored in the same partition.

RowKey: The RowKey property stores string values that uniquely identify entities
within each partition. The PartitionKey and the RowKey together form the primary key
for the entity.

Timestamp: The Timestamp property provides traceability for an entity. A timestamp
is a DateTime value that tells you the last time the entity was modified. A timestamp
is sometimes referred to as the entity's version. Modifications to timestamps are
ignored because the Table service maintains the value for this property during all
insert and update operations.
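As a concrete illustration, a registration entity for the footrace example used throughout this article could be declared roughly as follows. This is a sketch based on the Microsoft.WindowsAzure.StorageClient library that the retry samples later in this article use; the class name and the extra properties are hypothetical:

using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity type for the footrace registration example.
public class RegistrationEntity : TableServiceEntity
{
    public RegistrationEntity() { }  // parameterless constructor required for serialization

    public RegistrationEntity(string eventAndDistance, string registrantKey)
        : base(eventAndDistance, registrantKey)
    {
        // PartitionKey, e.g. "2011 New York City Marathon__Full"
        // RowKey,       e.g. "1234__John__M__55"
    }

    // Ordinary properties are stored as additional table columns.
    // Timestamp is inherited from TableServiceEntity and maintained by the Table service.
    public int BibNumber { get; set; }
    public int Age { get; set; }
    public string Gender { get; set; }
}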

4.2 Table Primary Key


The primary key for an Azure entity consists of the combined PartitionKey and
RowKey properties, forming a single clustered index within the table. The
PartitionKey and RowKey properties can store up to 1 KB of string values. Empty
strings are also permitted; however, null values are not. The clustered index is
sorted by the PartitionKey in ascending order and then by the RowKey, also in
ascending order. The sort order is observed in all query responses. Lexical
comparisons are used during the sorting operation; therefore, a string value of "111"
will appear before a string value of "2". In some cases, you may want the order to be
numeric. To sort in a numeric and ascending order, you will need to use fixed-length,
zero-padded strings. In the previous example, using "002" will allow it to appear
before "111".
4.3 Table Partitions
Partitions represent a collection of entities with the same PartitionKey values.
Partitions are always served from one partition server, and each partition server can
serve one or more partitions. A partition server has a rate limit on the number of
entities it can serve from one partition over time. Specifically, a partition has a
scalability target of 500 entities per second. This throughput may be higher during
minimal load on the storage node, but it will be throttled down when the node
becomes hot or very active. To better illustrate the concept of partitioning, the
following figure shows a table that contains a small subset of data for footrace
event registrations. It presents a conceptual view of partitioning where the
PartitionKey contains three different values, composed of the event's name and
distance. In this example, there are two partition servers. Server A contains
registrations for the half-marathon and 10 km distances, while Server B contains
only the full-marathon distances. The RowKey values are shown to provide context
but are not meaningful for this example.

A table with three partitions


4.3.1 Scalability
Because a partition is always served from a single partition server and each
partition server can serve one or more partitions, the efficiency of serving entities is
correlated with the health of the server. Servers that encounter high traffic for their
partitions may not be able to sustain a high throughput. For example, in the figure
above, if there are many requests for "2011 New York City Marathon__Half", server
A may become too hot. To increase the throughput of the server, the storage system
load-balances the partitions to other servers. The result is that the traffic is
distributed across many other servers. For optimal load balancing of traffic, you
should use more partitions, so that the Azure Table service can distribute the
partitions to more partition servers.
4.3.2 Entity Group Transactions
An entity group transaction is a set of storage operations that are implemented
atomically on entities with the same PartitionKey value. If any storage operation in
the entity group fails, then all the storage operations in it are rolled back. An entity
group transaction comprises no more than 100 storage operations and may be no
more than 4 MB in size. Entity group transactions provide Azure Table with a limited
form of the atomicity, consistency, isolation, durability (ACID) semantics provided
by relational databases. Entity group transactions improve throughput since they
reduce the number of individual storage operations that must be submitted to the
Azure Table service. They also provide an economic benefit, since an entity group
transaction is billed as a single storage operation regardless of how many storage
operations it contains. Since all the storage operations in an entity group
transaction affect entities with the same PartitionKey value, a need to use entity
group transactions can drive the selection of the PartitionKey value.
4.3.3 Range Partitions
If you are using unique PartitionKey values for your entities, then each entity will
belong in its own partition. If the unique values you are using are increasing or
decreasing in value, it is possible that Azure will create range partitions. Range
partitions group entities that have sequential unique PartitionKey values to improve
the performance of range queries. Without range partitions, a range query will need
to cross partition boundaries or server boundaries, which can decrease the query
performance. Consider an application that uses the following table with an
increasing sequence value for the PartitionKey.

PartitionKey values: "0001", "0002", "0003", "0004", "0005", "0006" (the corresponding RowKey values are omitted here)

Azure may group the first three entities into a range partition. If you apply a range
query to this table that uses the PartitionKey as the criteria and requests entities
from "0001" to "0003", the query may perform efficiently because the entities will be
served from a single partition server. There is no guarantee when and how a range
partition will be created.
The existence of range partitions for your table can affect the performance of your
insert operations if you are inserting entities with increasing, or decreasing,
PartitionKey values. Inserting entities with increasing PartitionKey values is called an
Append Only pattern, and inserting with decreasing values is called a Prepend Only
pattern. You should consider not using such patterns because the overall throughput
of your insert requests will be limited by a single partition server. This is because, if
range partitions exist, then the first and last (range) partitions will contain the least
and greatest PartitionKey values, respectively. Therefore, the insert of a new entity,
with a sequentially lower or higher PartitionKey value, will target one of the end
partitions. The following figure shows a possible set of range partitions based on the
previous example. If a set of "0007", "0008" and "0009" entities were inserted, they
would be assigned to the last (orange) partition.

Set of range partitions


It is important to note that there is no negative effect on performance if the insert
operations are using more scattered PartitionKey values.

4.4 Analyzing Data


Unlike tables in relational databases, which allow you to manage indexes, Azure
tables can have only one index, which is always comprised of the PartitionKey and
RowKey properties. You are not afforded the luxury of performance-tuning your table
by adding more indexes or altering an existing one after you have rolled it out.
Therefore, you must analyze the data as you design your table. The most important
aspects to consider for optimal scalability and query and insert efficiency are the
PartitionKey and RowKey values. This article places more stress on how to choose
the PartitionKey because it directly relates to how tables are partitioned.
4.4.1 Partition Sizing
Partition sizing refers to the number of entities a partition contains. As was
mentioned in the "Scalability" section, more partitions mean better load balancing.
The granularity of the PartitionKey value affects the size of the partitions. At the
coarsest level, if a single value is used as the PartitionKey, all the entities are in a
single, very large, partition. Alternatively, at the finest level of granularity, the
PartitionKey can contain unique values for each entity. The result is that there is a
partition for each entity. The following table shows the advantages and
disadvantages for the range of granularities.

PartitionKey granularity: Single value
Partition size: Small number of entities
Advantages: Batch transactions are possible with any entity. All entities are local and served from the same storage node.
Disadvantages: Scaling is limited.

PartitionKey granularity: Single value
Partition size: Large number of entities
Advantages: Entity group transactions may be possible with any entity. See http://msdn.microsoft.com/library/dd894038.aspx for more information on the limits of entity group transactions.
Disadvantages: Throughput is limited to the performance of a single server.

PartitionKey granularity: Multiple values
Partition size: There are multiple partitions. Partition sizes depend on entity distribution.
Advantages: Batch transactions are possible on some entities. Dynamic partitioning is possible. Single-request queries are possible (no continuation tokens). Load balancing across more partition servers is possible.
Disadvantages: A highly uneven distribution of entities across partitions may limit the performance of the larger and more active partitions.

PartitionKey granularity: Unique values
Partition size: There are many small partitions.
Advantages: The table is highly scalable. Range partitions may improve the performance of cross-partition range queries.
Disadvantages: Queries that involve ranges may require visits to more than one server. Batch transactions are not possible. Append-only or prepend-only patterns can affect insert throughput.
This table shows how scaling is affected by the PartitionKey values. It is a best
practice to favor smaller partitions because they offer better load balancing. Larger
partitions may be appropriate in some scenarios, and are not necessarily
disadvantageous. For example, if your application does not require scalability, a
single large partition may be appropriate.
4.4.2 Determining Queries
Queries retrieve data from tables. When you analyze the data for an Azure table, it
is important to consider which queries the application will use. If an application has
several queries, you may need to prioritize them, although your decisions might be
somewhat subjective. In many cases, dominant queries are discernable from the
other queries. In terms of performance, queries fall into different categories.
Because a table only has one index, query performance is usually related to the
PartitionKey and RowKey properties. The following table shows the different types of
queries and their performance ratings.

Point query: exact PartitionKey match, exact RowKey match. Performance: best.

Row range scan: exact PartitionKey match, partial RowKey match. Performance: better with smaller-sized partitions; bad with very large partitions.

Partition range scan: partial PartitionKey match, partial RowKey match. Performance: good when a small number of partition servers is touched; worse as more servers are touched.

Full table scan: partial or no PartitionKey match, partial or no RowKey match. Performance: worse with a subset of partitions being scanned; worst with all partitions being scanned.
4.4.2.1.1 Note
The table defines performance ratings relative to each other. The number and size
of the partitions may ultimately dictate how the query performs. For example, a
partition range scan for a table with many and large partitions may perform poorly
compared to a full table scan for a table with few and small partitions.
The query types listed in this table show a progression from the best types of
queries to use to the worst types, based on their performance ratings. Point queries
are the best types of queries to use because they fully use the table's clustered
index. The following point query uses the data from the footrace registration table:
http://<account>.table.core.windows.net/registrations(PartitionKey='2011 New York City Marathon__Full',RowKey='1234__John__M__55')
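Expressed with the TableServiceContext pattern used later in this article, the same point query might look like the sketch below; the connection, table name and RegistrationEntity type are assumptions carried over from the earlier entity sketch:

using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Placeholder connection; use your own storage account in practice.
var account = CloudStorageAccount.DevelopmentStorageAccount;
var context = account.CreateCloudTableClient().GetDataServiceContext();

// Point query: exact PartitionKey and RowKey, fully served by the clustered index.
var registration = (from r in context.CreateQuery<RegistrationEntity>("registrations")
                    where r.PartitionKey == "2011 New York City Marathon__Full"
                          && r.RowKey == "1234__John__M__55"
                    select r).FirstOrDefault();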
If the application uses multiple queries, not all of them can be point queries. In
terms of performance, range queries follow point queries. There are two types of
range queries: the row range scan and the partition range scan. The row range scan
specifies a single partition. Because the operation occurs on a single partition
server, row range scans are generally more efficient than partition range scans.
However, one key factor in the performance of row range scans is how selective a
query is. Query selectivity dictates how many rows must be iterated to find the
matching rows. More selective queries are more efficient during row range scans.
To assess the priorities of your queries, you need to consider the frequency and
response time requirements for each query. Queries that are frequently executed
may be prioritized higher. However, an important but rarely used query may have
low latency requirements that could rank it higher on the priority list.

4.5 Choosing the PartitionKey


The core of any table's design is based on its scalability, the queries used to access
it, and storage operation requirements. The PartitionKey values you choose will
dictate how a table will be partitioned and the type of queries that can be used.
Storage operations, in particular inserts, can also affect your choice of PartitionKey
values. The PartitionKey values can range from single values to unique values and
also can be composed from multiple values. Entity properties can be composed to
form the PartitionKey value. Additionally, the application can compute the value.
4.5.1 Considering Entity Group Transactions
Developers should first consider if the application will use entity group transactions
(batch updates). Entity group transactions require entities to have the same
PartitionKey value. Also, because batch updates are for an entire group, the choices
for PartitionKey values can be limited. For example, a banking application that
maintains cash transactions must insert cash transactions into the table atomically.
This is because cash transactions represent both the debit and the credit sides and
must net to zero. This requirement means that the account number cannot be used
as any part of the PartitionKey because each side of the transaction uses different
account numbers. Instead, a transaction ID may be a more natural choice.
4.5.2 Considering Partitions
Partition numbers and sizes affect the scalability of a table that is under load and
are controlled by how granular the PartitionKey values are. It can be challenging to
determine the PartitionKey based on the partition size, especially if the distribution
of values is hard to predict. A good rule of thumb is to use multiple, smaller
partitions. Many table partitions make it easier for the Azure Table service to
manage the storage nodes from which the partitions are served.
Choosing unique or finer values for the PartitionKey will result in many smaller
partitions. This is generally favorable because the system can load balance the
many partitions to distribute the load across many servers. However, you should
consider the effect of having many partitions on cross-partition range queries.
These types of queries must visit multiple partitions to satisfy the query. It is possible
that the partitions are distributed across many partition servers. If a query crosses a
server boundary, continuation tokens must be returned. Continuation tokens specify
the next PartitionKey or RowKey values that will retrieve the next set of data for the
query. In other words, continuation tokens represent at least one more request to
the service, which can degrade the overall performance of the query. Query
selectivity is another factor that can affect the performance of the query. Query
selectivity is a measure of how many rows must be iterated for each partition. The
more selective a query is, the more efficient it is at returning the desired rows. The
overall performance of range queries may depend on the number of partition
servers that must be touched or how selective the query is. You also should avoid
using the append-only or prepend-only patterns when inserting data into your table.
Using such patterns, despite creating many small partitions, can limit the
throughput of your insert operations. The append-only and prepend-only patterns are
discussed in the "Range Partitions" section.
4.5.3 Considering Queries
Knowing the queries that you will be using will allow you to determine which
properties are important to consider for the PartitionKey. The properties that are
used in the queries are candidates for the PartitionKey. The following table provides
a general guideline of how to determine the PartitionKey.

If the entity has one key property: use it as the PartitionKey.
If the entity has two key properties: use one as the PartitionKey and the other as the RowKey.
If the entity has more than two key properties: use a composite key of concatenated values.

If there is more than one equally dominant query, you can insert the information
multiple times with the different RowKey values that you need. The secondary (or
tertiary, etc.) rows will be managed by your application. This pattern will allow you to
satisfy the performance requirements of your queries. The following example uses
the data from the footrace registration example. It has two dominant queries:
Query by bib number
Query by age
To serve both dominant queries, insert two rows as an entity group transaction.
The following table shows the PartitionKey and RowKey properties for this
scenario. The RowKey values provide a prefix for the bib and age to enable the
application to distinguish between the two values.

Row 1: PartitionKey = "2011 New York City Marathon__Full", RowKey = "BIB:01234__John__M__55"
Row 2: PartitionKey = "2011 New York City Marathon__Full", RowKey = "AGE:055__1234__John__M"

In this example, an entity group transaction is possible because the PartitionKey
values are the same. The group transaction provides atomicity of the insert
operation. Although it is possible to use this pattern with different PartitionKey
values, it is recommended that you use the same values to gain this benefit.
Otherwise, you may have to write extra logic to ensure atomic transactions with
different PartitionKey values.
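A sketch of inserting the two rows as a single entity group transaction with the StorageClient library is shown below; the table name, connection and RegistrationEntity type are assumptions carried over from the earlier sketches:

using System.Data.Services.Client;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Placeholder connection; use your own storage account in practice.
var account = CloudStorageAccount.DevelopmentStorageAccount;
var context = account.CreateCloudTableClient().GetDataServiceContext();

// Two rows describing the same registration, one keyed by bib number and one by age.
var byBib = new RegistrationEntity("2011 New York City Marathon__Full", "BIB:01234__John__M__55");
var byAge = new RegistrationEntity("2011 New York City Marathon__Full", "AGE:055__1234__John__M");

context.AddObject("registrations", byBib);
context.AddObject("registrations", byAge);

// SaveChangesOptions.Batch submits both inserts as one entity group transaction,
// which is possible because the two entities share the same PartitionKey.
context.SaveChangesWithRetries(SaveChangesOptions.Batch);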

4.5.4 Considering Storage Operations


Azure tables can encounter load not just from queries, but also from storage operations
such as inserts, updates, and deletes. You need to consider what type of storage
operations you will be performing on the table and at what rate. If you are
performing these operations infrequently, then you may not need to worry about
them. However, for very frequent operations such as performing many inserts in a
short period, you will need to consider how those operations are served as a result
of the PartitionKey values that you choose. One important example is the append-only
or prepend-only patterns. These patterns were discussed in the previous section,
"Range Partitions." When the append-only or prepend-only pattern is used, it means that
you are using unique ascending or descending values for the PartitionKey on
subsequent insertions. If you combine this pattern with frequent insert operations,
then your table will not be able to service the insert operations with great scalability.
The scalability of your table is affected because Azure will not be able to load
balance the operation requests to other partition servers. Therefore, in this case,
you may want to consider using values that are random, such as GUID values. This
will allow your partition sizes to remain small while maintaining load balancing
during storage operations.
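For example, a random PartitionKey of the kind suggested above could be generated like this (a trivial sketch):

// Random, uniformly distributed PartitionKey values avoid the append-only pattern.
string partitionKey = Guid.NewGuid().ToString("N");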
4.6 Table Partition Stress Testing
When the PartitionKey value is complex or requires comparisons to other
PartitionKey mappings, you may need to test the table's performance. The test
should examine how well a partition performs under peak loads.
To perform a stress test:
1. Create a test table.
2. Load the test table with data so that it contains entities with the PartitionKey that you will be targeting.
3. Use the application to simulate peak load against the table, and target a single partition by using the PartitionKey from step 2. This step is different for every application, but the simulation should include all the necessary queries and storage operations. The application may need to be tweaked so that it targets a single partition.
4. Examine the throughput of the GET or PUT operations on the table.
To examine the throughput, compare the actual values to the specified limit of a
single partition on a single server. Partitions are limited to 500 entities per
second. If the throughput exceeds 500 entities per second for a partition, the
server may run too hot in a production setting. In this case, the PartitionKey
values may be too coarse, so that there are not enough partitions or the
partitions are too large. You may need to modify the PartitionKey value so that the
partitions can be distributed among more servers.

4.7 Load Balancing
Load balancing at the partition layer occurs when a partition gets too hot, which
means the partition, specifically the partition server, is operating beyond its target
scalability. For Azure storage, each partition has a scalability target of 500 entities
per second. Load balancing also occurs at the Distributed File System (DFS) layer.
The load balancing at the DFS layer deals with I/O load and is outside the
scope of this article. Load balancing at the partition layer does not occur immediately
after the scalability target is exceeded. Instead, the system waits a few minutes
before beginning the load balancing process. This ensures that a partition has truly
become hot. It is not necessary to prime partitions with generated load that triggers
load balancing, because the system will automatically perform the task. Even if a
table was primed with a certain load, the system will balance the partitions based on
the actual load, which can result in a very different distribution of the partitions.
Instead of priming partitions, you should consider writing code that handles the
Timeout and Server Busy errors. Such errors are returned when the system is
performing load balancing. By handling those errors using a retry strategy, your
application can better handle peak load. Retry strategies are discussed in more
detail in the following section. When load balancing occurs, the partition will be
offline for a few seconds. During the offline period, the system is reassigning the
partition to a different partition server. It is important to note that your data is not
stored by the partition servers. Instead, the partition servers serve entities from the
DFS layer. Because your data is not stored at the partition layer, moving partitions
to different servers is a fast process. This greatly limits the period of downtime, if
any, that your application may encounter.
4.8 Using a Retry Strategy
It is important for your application to handle storage operation failures to ensure
that you do not lose any data updates. Some failures do not require a retry strategy.
For example, updates that return a 401 Unauthorized error will not benefit from
retrying the operation, because it is unlikely that the application state will change
between retries in a way that resolves the 401 error. However, certain errors, such as
Server Busy or Timeout, are related to the load balancing features of Azure that
provide table scalability. When the storage nodes serving your entities become hot,
Azure will balance the load by moving partitions to other nodes. During this time,
the partition may be inaccessible, which results in the Server Busy or Timeout errors.
Eventually, the partition will be re-enabled and updates can resume. A retry strategy
is appropriate for busy and timeout errors. In most cases, you can exclude 400-level
errors and some 500-level errors from the retry logic, such as 501 Not Implemented
or 505 HTTP Version Not Supported, and implement a retry strategy for some 500-level
errors, such as Server Busy (503) and Timeout (504).
There are three common retry strategies that you can use for your application. The
following is a list of those retry strategies and their descriptions:

No Retry: no retry attempt is made.
Fixed Backoff: the operation is retried N times with a constant backoff value.
Exponential Backoff: the operation is retried N times with an exponentially increasing backoff value.

The No Retry strategy is a simple (and evasive) way to handle operation failures;
however, it is not very useful. Not imposing any retry attempts poses obvious
risks of data not being stored correctly after failed operations. A better strategy
is therefore the Fixed Backoff strategy, which retries operations with the same
backoff duration. However, this strategy is not optimized for handling highly
scalable tables, because if many threads or processes are waiting for the same
duration, collisions can occur. The recommended retry strategy is one that uses an
exponential backoff, where each retry attempt is longer than the last attempt. It is
similar to the collision avoidance (CA) algorithm used in computer networks, such as
Ethernet. The exponential backoff uses a random factor to provide an additional
variance to the resulting interval. The backoff value is then constrained to minimum
and maximum limits. The following formula can be used for calculating the next
backoff value using an exponential algorithm:

y = Rand(0.8z, 1.2z) * (2^x - 1)
y = Min(zmin + y, zmax)

Where:
z = default backoff in milliseconds
zmin = default minimum backoff in milliseconds
zmax = default maximum backoff in milliseconds
x = the number of retries
y = the backoff value in milliseconds

The 0.8 and 1.2 multipliers used in the Rand (random) function produce a
random variance of the default backoff within 20% of the original value. The
20% range is acceptable for most retry strategies and helps prevent further
collisions. The formula can be implemented using the following code:
int retries = 1;

// Initialize variables with default values
var defaultBackoff = TimeSpan.FromSeconds(30);
var backoffMin = TimeSpan.FromSeconds(3);
var backoffMax = TimeSpan.FromSeconds(90);

var random = new Random();

// Rand(0.8z, 1.2z)
double backoff = random.Next(
    (int)(0.8D * defaultBackoff.TotalMilliseconds),
    (int)(1.2D * defaultBackoff.TotalMilliseconds));

// Multiply by (2^x - 1)
backoff *= (Math.Pow(2, retries) - 1);

// Add the minimum backoff and cap the result at the maximum backoff
backoff = Math.Min(
    backoffMin.TotalMilliseconds + backoff,
    backoffMax.TotalMilliseconds);
4.8.1 Using the Storage Client Library
If you are developing your application using the Azure Managed Library, you can
leverage the included retry policies in the Storage Client Library. The retry
mechanism in the library also allows you to extend the functionality with your
custom retry policies. The RetryPolicies class in the
Microsoft.WindowsAzure.StorageClient namespace provides static methods
that return a RetryPolicy object. The RetryPolicy object is used in conjunction with
the SaveChangesWithRetries method in the TableServiceContext class. The
default policy that a TableServiceContext object uses is an instance of a
RetryExponential class constructed using the RetryPolicies.DefaultClientRetryCount
and RetryPolicies.DefaultClientBackoff values. The following code shows how to
construct a TableServiceContext class with a different RetryPolicy.

class MyTableServiceContext : TableServiceContext
{
    public MyTableServiceContext(string baseAddress, CloudStorageAccount account)
        : base(baseAddress, account)
    {
        int retryCount = 5;                     // Default is 3
        var backoff = TimeSpan.FromSeconds(45); // Default is 30 seconds

        RetryPolicy = RetryPolicies.RetryExponential(retryCount, backoff);
    }
    ...
}
4.9 Summary
Azure Table Storage allows applications to store a massive amount of data, because
it manages and reassigns partitions across many storage nodes. You can use data
partitioning to control the table's scalability. Plan ahead when you define a table's
schema to ensure efficient partitioning strategies. Specifically, analyze the
application's requirements, data, and queries before you select PartitionKey values.
Each partition may be reassigned to different storage nodes as the system responds
to traffic. Use a partition stress test to ensure that the table has the correct
PartitionKey values. This test will allow you to recognize when partitions are too hot
and to make the necessary partition adjustments. To ensure that your application
handles intermittent errors and that your data is persisted, a retry strategy with backoff
should be used. The default retry policy that the Azure Storage Client Library uses is
one with an exponential backoff that avoids collisions and maximizes the
throughput of your application.

5 Create a VNet with a Site-to-Site connection using the Azure classic portal
This article walks you through creating a virtual network and a site-to-site
VPN gateway connection to your on-premises network using the classic
deployment model and the classic portal. Site-to-Site connections can be
used for cross-premises and hybrid configurations.

5.1.1 Deployment models and methods for Site-to-Site connections
It's important to understand that Azure currently works with two deployment
models: Resource Manager and classic. Before you begin your configuration,
verify that you are using the instructions for the deployment model that you
want to work in. The two models are not completely compatible with each
other.
For example, if you are working with a virtual network that was created using
the classic deployment model and wanted to add a connection to the VNet,
you would use the deployment methods that correspond to the classic
deployment model, not Resource Manager. If you are working with a virtual
network that was created using the Resource Manager deployment model,

you would use the deployment methods that correspond with Resource
Manager, not classic.
For information about the deployment models, see Understanding Resource
Manager deployment and classic deployment.
The following table shows the currently available deployment models and
methods for Site-to-Site configurations. When an article with configuration
steps is available, we link directly to it from this table.

Resource Manager: Azure Portal - Article; Classic Portal - Not Supported; PowerShell - Article
Classic: Azure Portal - Supported**; Classic Portal - Article*; PowerShell - Article+

(*) denotes that the classic portal can only support creating one S2S VPN connection.
(**) denotes that an end-to-end scenario is not yet available for the Azure portal.
(+) denotes that this article is written for multi-site connections.
5.1.1.1 Additional configurations

If you want to connect VNets together, see Configure a VNet-to-VNet
connection for the classic deployment model. If you want to add a Site-to-Site
connection to a VNet that already has a connection, see Add a S2S
connection to a VNet with an existing VPN gateway connection.

5.2 Before you begin


Verify that you have the following items before beginning configuration.

A compatible VPN device and someone who is able to configure it. See About
VPN Devices. If you aren't familiar with configuring your VPN device, or are
unfamiliar with the IP address ranges located in your on-premises network
configuration, you need to coordinate with someone who can provide those
details for you.

An externally facing public IP address for your VPN device. This IP address
cannot be located behind a NAT.

An Azure subscription. If you don't already have an Azure subscription, you
can activate your MSDN subscriber benefits or sign up for a free account.

5.3 Create your virtual network


1. Log in to the Azure classic portal.
2. In the lower left corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Custom Create to begin the configuration wizard.
3. To create your VNet, enter your configuration settings on the following pages:

5.4 Virtual network details page


Enter the following information:

Name: Name your virtual network. For example, EastUSVNet. You'll use this
virtual network name when you deploy your VMs and PaaS instances, so you may
not want to make the name too complicated.

Location: The location is directly related to the physical location (region)
where you want your resources (VMs) to reside. For example, if you want the VMs
that you deploy to this virtual network to be physically located in East US, select
that location. You can't change the region associated with your virtual network
after you create it.

5.5 DNS servers and VPN connectivity page


Enter the following information, and then click the next arrow on the lower right.

DNS Servers: Enter the DNS server name and IP address, or select a
previously registered DNS server from the shortcut menu. This setting does not
create a DNS server. It allows you to specify the DNS servers that you want to use
for name resolution for this virtual network.

Configure Site-To-Site VPN: Select the checkbox for Configure a site-to-site VPN.

Local Network: A local network represents your physical on-premises
location. You can select a local network that you've previously created, or you can
create a new local network. However, if you select to use a local network that you
previously created, go to the Local Networks configuration page and verify that
the VPN Device IP address (the public-facing IPv4 address) for the VPN device is
accurate.

5.6 Site-to-site connectivity page


If you're creating a new local network, you'll see the Site-To-Site
Connectivity page. If you want to use a local network that you previously
created, this page will not appear in the wizard and you can move on to the
next section.
Enter the following information, and then click the next arrow.

Name: The name you want to call your local (on-premises) network site.

VPN Device IP Address: The public-facing IPv4 address of your on-premises
VPN device that you use to connect to Azure. The VPN device cannot be located
behind a NAT.

Address Space: Include the Starting IP and CIDR (Address Count). You specify
the address range(s) that you want to be sent through the virtual network
gateway to your local on-premises location. If a destination IP address falls within
the ranges that you specify here, it is routed through the virtual network gateway.

Add address space: If you have multiple address ranges that you want to
be sent through the virtual network gateway, specify each additional address
range. You can add or remove ranges later on the Local Network page.

5.7 Virtual network address spaces page


Specify the address range that you want to use for your virtual network. These are the dynamic IP addresses (DIPs) that will be assigned to the VMs and other role instances that you deploy to this virtual network.

It's especially important to select a range that does not overlap with any of the ranges that are used for your on-premises network. You need to coordinate with your network administrator. Your network administrator may need to carve out a range of IP addresses from your on-premises network address space for you to use for your virtual network. (A small overlap-check sketch appears at the end of this section.)

Enter the following information, and then click the checkmark on the lower right to configure your network.

Address Space: Include Starting IP and Address Count. Verify that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network.

Add subnet: Include Starting IP and Address Count. Additional subnets are not required, but you may want to create a separate subnet for VMs that will have static DIPs. Or you might want to have your VMs in a subnet that is separate from your other role instances.

Add gateway subnet: Click to add the gateway subnet. The gateway subnet is used only for the virtual network gateway and is required for this configuration.

Click the checkmark on the bottom of the page and your virtual network will begin to create. When it completes, you will see Created listed under Status on the Networks page in the Azure classic portal. After the VNet has been created, you can then configure your virtual network gateway.
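To make the overlap check concrete, here is a minimal C# sketch (the CidrCheck class and its helpers are purely illustrative and not part of any Azure SDK) that tests whether two IPv4 CIDR ranges overlap, so you can validate a proposed VNet address space against your on-premises ranges before you create the network.

using System;
using System.Net;

// Illustrative helper: check whether two IPv4 CIDR ranges overlap, e.g. an
// on-premises range and a proposed VNet address space.
static class CidrCheck
{
    static (uint start, uint end) Parse(string cidr)
    {
        var parts = cidr.Split('/');
        var b = IPAddress.Parse(parts[0]).GetAddressBytes();
        uint ip = ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | (uint)b[3];
        int prefix = int.Parse(parts[1]);
        uint mask = prefix == 0 ? 0u : uint.MaxValue << (32 - prefix);
        return (ip & mask, (ip & mask) | ~mask);
    }

    public static bool Overlaps(string a, string b)
    {
        var (aStart, aEnd) = Parse(a);
        var (bStart, bEnd) = Parse(b);
        return aStart <= bEnd && bStart <= aEnd;
    }

    static void Main()
    {
        // On-premises uses 10.1.0.0/16, so 10.2.0.0/16 is safe for the VNet,
        // while 10.1.128.0/17 would collide with the on-premises range.
        Console.WriteLine(Overlaps("10.1.0.0/16", "10.2.0.0/16"));   // False
        Console.WriteLine(Overlaps("10.1.0.0/16", "10.1.128.0/17")); // True
    }
}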
5.7.1.1.1 Important

When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more information about network security groups, see What is a network security group?

5.8 Configure your virtual network gateway


Configure the virtual network gateway to create a secure site-to-site
connection. See Configure a virtual network gateway in the Azure classic
portal.

6 Managing access to resources with Azure Active Directory groups

Azure Active Directory (Azure AD) is a comprehensive identity and access management solution that provides a robust set of capabilities to manage access to on-premises and cloud applications and resources, including Microsoft online services like Office 365 and a world of non-Microsoft SaaS applications. This article provides an overview, but if you want to start using Azure AD groups right now, follow the instructions in Managing security groups in Azure AD. If you want to see how you can use PowerShell to manage groups in Azure Active Directory, you can read more in Azure Active Directory preview cmdlets for group management.
6.1.1.1.1 Note

To use Azure Active Directory, you need an Azure account. If you don't have an account, you can sign up for a free Azure account.

Within Azure AD, one of the major features is the ability to manage access to resources. These resources can be part of the directory, as in the case of permissions to manage objects through roles in the directory, or resources that are external to the directory, such as SaaS applications, Azure services, SharePoint sites, or on-premises resources. There are four ways a user can be assigned access rights to a resource:
1. Direct assignment
Users can be assigned directly to a resource by the owner of that resource.

2. Group membership
A group can be assigned to a resource by the resource owner, thereby granting the members of that group access to the resource. Membership of the group can then be managed by the owner of the group. Effectively, the resource owner delegates the permission to assign users to their resource to the owner of the group.

3. Rule-based
The resource owner can use a rule to express which users should be assigned access to a resource. The outcome of the rule depends on the attributes used in that rule and their values for specific users, and by doing so, the resource owner effectively delegates the right to manage access to their resource to the authoritative source for the attributes that are used in the rule. The resource owner still manages the rule itself and determines which attributes and values provide access to their resource.

4. External authority
The access to a resource is derived from an external source; for example, a group that is synchronized from an authoritative source such as an on-premises directory or a SaaS app such as WorkDay. The resource owner assigns the group to provide access to the resource, and the external source manages the members of the group.

6.2 Watch a video that explains access management


You can watch a short video that explains more about this: Azure AD: Introduction to dynamic membership for groups

6.3 How does access management in Azure Active Directory work?


At the center of the Azure AD access management solution is the security group. Using a security group to manage access to resources is a well-known paradigm, which allows for a flexible and easily understood way to provide access to a resource for the intended group of users. The resource owner (or the administrator of the directory) can assign a group to provide a certain access right to the resources they own. The members of the group will be provided the access, and the resource owner can delegate the right to manage the members list of a group to someone else, such as a department manager or a helpdesk administrator.

The owner of a group can also make that group available for self-service requests. In doing so, an end user can search for and find the group and make a request to join, effectively seeking permission to access the resources that are managed through the group. The owner of the group can set up the group so that join requests are approved automatically or require approval by the owner of the group. When a user makes a request to join a group, the join request is forwarded to the owners of the group. If one of the owners approves the request, the requesting user is notified and joined to the group. If one of the owners denies the request, the requesting user is notified but not joined to the group.

7 Getting started with access management


Introduction
Microsoft Azure supports two types of queue mechanisms: Azure Queues and Service Bus queues.

Azure Queues, which are part of the Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.

Service Bus queues are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe, Web service remoting, and integration patterns. For more information about Service Bus queues, topics/subscriptions, and relays, see the overview of Service Bus messaging.

While both queuing technologies exist concurrently, Azure Queues were introduced first, as a dedicated queue storage mechanism built on top of the Azure storage services. Service Bus queues are built on top of the broader "brokered messaging" infrastructure designed to integrate applications or application components that may span multiple communication protocols, data contracts, trust domains, and/or network environments.

This article compares the two queue technologies offered by Azure by discussing the differences in the behavior and implementation of the features provided by each. The article also provides guidance for choosing which features might best suit your application development needs.
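Before digging into the comparison, a minimal C# sketch may help make the two programming models concrete. It sends one message to each kind of queue and assumes the classic WindowsAzure.Storage and WindowsAzure.ServiceBus client libraries discussed in this article; the "orders" queue name and the connection strings are placeholders, not values from this article.

using Microsoft.WindowsAzure.Storage;            // WindowsAzure.Storage package
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.ServiceBus.Messaging;            // WindowsAzure.ServiceBus package

class SendToBothQueues
{
    static void Main()
    {
        // Placeholder connection strings; substitute your own.
        string storageConn = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";
        string serviceBusConn = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";

        // Azure Queue storage: a simple REST-backed put.
        CloudQueue storageQueue = CloudStorageAccount.Parse(storageConn)
            .CreateCloudQueueClient()
            .GetQueueReference("orders");
        storageQueue.CreateIfNotExists();
        storageQueue.AddMessage(new CloudQueueMessage("order #42"));

        // Service Bus queue: a brokered-messaging send.
        QueueClient busQueue = QueueClient.CreateFromConnectionString(serviceBusConn, "orders");
        busQueue.Send(new BrokeredMessage("order #42"));
        busQueue.Close();
    }
}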
Technology selection considerations
Both Azure Queues and Service Bus queues are implementations of the message queuing service currently offered on Microsoft Azure. Each has a slightly different feature set, which means you can choose one or the other, or use both, depending on the needs of your particular solution or the business/technical problem you are solving.

When determining which queuing technology fits the purpose for a given solution, solution architects and developers should consider the recommendations below. For more details, see the next section.

As a solution architect/developer, you should consider using Azure Queues when:

Your application must store over 80 GB of messages in a queue, where the messages have a lifetime shorter than 7 days.
Your application wants to track progress for processing a message inside of the queue. This is useful if the worker processing a message crashes. A subsequent worker can then use that information to continue from where the prior worker left off.
You require server side logs of all of the transactions executed against your queues.
As a solution architect/developer, you should consider using Service Bus queues when:
Your solution must be able to receive messages without having to poll the queue.
With Service Bus, this can be achieved through the use of the long-polling receive
operation using the TCP-based protocols that Service Bus supports.
Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO)
ordered delivery.
You want a symmetric experience in Azure and on Windows Server (private cloud).
For more information, see Service Bus for Windows Server.
Your solution must be able to support automatic duplicate detection.
You want your application to process messages as parallel long-running streams
(messages are associated with a stream using the SessionId property on the
message). In this model, each node in the consuming application competes for
streams, as opposed to messages. When a stream is given to a consuming node,
the node can examine the state of the application stream state using transactions.
Your solution requires transactional behavior and atomicity when sending or
receiving multiple messages from a queue.
The time-to-live (TTL) characteristic of the application-specific workload can exceed
the 7-day period.
Your application handles messages that can exceed 64 KB but will not likely
approach the 256 KB limit.
You deal with a requirement to provide a role-based access model to the queues,
and different rights/permissions for senders and receivers.
Your queue size will not grow larger than 80 GB.
You want to use the AMQP 1.0 standards-based messaging protocol. For more
information about AMQP, see Service Bus AMQP Overview.
You can envision an eventual migration from queue-based point-to-point communication to a message exchange pattern that enables seamless integration of additional receivers (subscribers), each of which receives independent copies of either some or all messages sent to the queue. The latter refers to the publish/subscribe capability natively provided by Service Bus.
Your messaging solution must be able to support the "At-Most-Once" delivery
guarantee without the need for you to build the additional infrastructure
components.
You would like to be able to publish and consume batches of messages.
You require full integration with the Windows Communication Foundation (WCF)
communication stack in the .NET Framework.
Comparing Azure Queues and Service Bus queues
The tables in the following sections provide a logical grouping of queue features and let you compare, at a glance, the capabilities available in both Azure Queues and Service Bus queues.

Foundational capabilities
This section compares some of the fundamental queuing capabilities provided by Azure Queues and Service Bus queues.
Comparison Criteria | Azure Queues | Service Bus Queues
Ordering guarantee | No (for more information, see the first note in the Additional Information section) | Yes - First-In-First-Out (FIFO) (through the use of messaging sessions)
Delivery guarantee | At-Least-Once | At-Least-Once; At-Most-Once
Atomic operation support | No | Yes
Receive behavior | Non-blocking (completes immediately if no new message is found) | Blocking with/without timeout (offers long polling, or the "Comet technique"); Non-blocking (through the use of the .NET managed API only)
Push-style API | No | Yes (OnMessage and OnMessage sessions .NET API)
Receive mode | Peek & Lease | Peek & Lock; Receive & Delete
Exclusive access mode | Lease-based | Lock-based
Lease/Lock duration | 30 seconds (default), 7 days (maximum); you can renew or release a message lease using the UpdateMessage API | 60 seconds (default); you can renew a message lock using the RenewLock API
Lease/Lock precision | Message level (each message can have a different timeout value, which you can then update as needed while processing the message, by using the UpdateMessage API) | Queue level (each queue has a lock precision applied to all of its messages, but you can renew the lock using the RenewLock API)
Batched receive | Yes (explicitly specifying message count when retrieving messages, up to a maximum of 32 messages) | Yes (implicitly enabling a prefetch property, or explicitly through the use of transactions)
Batched send | No | Yes (through the use of transactions or client-side batching)
Additional information
Messages in Azure Queues are typically first-in-first-out, but sometimes they can be
out of order; for example, when a message's visibility timeout duration expires (for
example, as a result of a client application crashing during processing). When the
visibility timeout expires, the message becomes visible again on the queue for
another worker to dequeue it. At that point, the newly visible message might be
placed in the queue (to be dequeued again) after a message that was originally
enqueued after it.
If you are already using Azure Storage Blobs or Tables and you start using queues,
you are guaranteed 99.9% availability. If you use Blobs or Tables with Service Bus
queues, you will have lower availability.
The guaranteed FIFO pattern in Service Bus queues requires the use of messaging
sessions. In the event that the application crashes while processing a message
received in the Peek & Lock mode, the next time a queue receiver accepts a
messaging session, it will start with the failed message after its time-to-live (TTL)
period expires.
Azure Queues are designed to support standard queuing scenarios, such as
decoupling application components to increase scalability and tolerance for failures,
load leveling, and building process workflows.
Service Bus queues support the At-Least-Once delivery guarantee. In addition, the
At-Most-Once semantic can be supported by using session state to store the
application state and by using transactions to atomically receive messages and
update the session state.

Azure Queues provide a uniform and consistent programming model across queues,
tables, and BLOBs both for developers and for operations teams.
Service Bus queues provide support for local transactions in the context of a single
queue.
The Receive and Delete mode supported by Service Bus provides the ability to
reduce the messaging operation count (and associated cost) in exchange for
lowered delivery assurance.
Azure Queues provide leases with the ability to extend the leases for messages.
This allows the workers to maintain short leases on messages. Thus, if a worker
crashes, the message can be quickly processed again by another worker. In
addition, a worker can extend the lease on a message if it needs to process it longer
than the current lease time.
Azure Queues offer a visibility timeout that you can set upon the enqueueing or
dequeuing of a message. In addition, you can update a message with different lease
values at run-time, and update different values across messages in the same
queue. Service Bus lock timeouts are defined in the queue metadata; however, you
can renew the lock by calling the RenewLock method.
The maximum timeout for a blocking receive operation in Service Bus queues is 24
days. However, REST-based timeouts have a maximum value of 55 seconds.
Client-side batching provided by Service Bus enables a queue client to batch
multiple messages into a single send operation. Batching is only available for
asynchronous send operations.
Features such as the 200 TB ceiling of Azure Queues (more when you virtualize
accounts) and unlimited queues make it an ideal platform for SaaS providers.
Azure Queues provide a flexible and performant delegated access control
mechanism.
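As a concrete illustration of the push-style OnMessage receive listed in the table above, the following minimal C# sketch registers a message pump on a Service Bus queue; it assumes the WindowsAzure.ServiceBus client library, and the connection string and queue name are placeholders.

using System;
using Microsoft.ServiceBus.Messaging;   // WindowsAzure.ServiceBus package

class PushStyleReceive
{
    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString(
            "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...", "orders");

        var options = new OnMessageOptions
        {
            AutoComplete = false,       // complete or abandon explicitly below
            MaxConcurrentCalls = 4
        };

        // The message pump long-polls the queue and invokes the callback as
        // messages arrive, so the application itself never has to poll.
        client.OnMessage(message =>
        {
            try
            {
                Console.WriteLine("Received: {0}", message.GetBody<string>());
                message.Complete();     // removes the message from the queue
            }
            catch
            {
                message.Abandon();      // releases the lock so another receiver can retry
            }
        }, options);

        Console.WriteLine("Listening. Press [Enter] to exit.");
        Console.ReadLine();
        client.Close();
    }
}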
Advanced capabilities
This section compares advanced capabilities provided by Azure Queues and Service Bus queues.
Comparison Criteria | Azure Queues | Service Bus Queues
Scheduled delivery | Yes | Yes
Automatic dead lettering | No | Yes
Increasing queue time-to-live value | Yes (via in-place update of visibility timeout) | Yes (provided via a dedicated API function)
Poison message support | Yes | Yes
In-place update | Yes | Yes
Server-side transaction log | Yes | No
Storage metrics | Yes - Minute Metrics provides real-time metrics for availability, TPS, API call counts, error counts, and more, all in real time (aggregated per minute and reported within a few minutes from what just happened in production). For more information, see About Storage Analytics Metrics. | Yes (bulk queries by calling GetQueues)
State management | No | Yes (Microsoft.ServiceBus.Messaging.EntityStatus.Active, Microsoft.ServiceBus.Messaging.EntityStatus.Disabled, Microsoft.ServiceBus.Messaging.EntityStatus.SendDisabled, Microsoft.ServiceBus.Messaging.EntityStatus.ReceiveDisabled)
Message auto-forwarding | No | Yes
Purge queue function | Yes | No
Message groups | No | Yes (through the use of messaging sessions)
Application state per message group | No | Yes
Duplicate detection | No | Yes (configurable on the sender side)
WCF integration | No | Yes (offers out-of-the-box WCF bindings)
WF integration | Custom (requires building a custom WF activity) | Native (offers out-of-the-box WF activities)
Browsing message groups | No | Yes
Fetching message sessions by ID | No | Yes
Additional information
Both queuing technologies enable a message to be scheduled for delivery at a later
time.
Queue auto-forwarding enables thousands of queues to auto-forward their
messages to a single queue, from which the receiving application consumes the
message. You can use this mechanism to achieve security, control flow, and isolate
storage between each message publisher.
Azure Queues provide support for updating message content. You can use this
functionality for persisting state information and incremental progress updates into
the message so that it can be processed from the last known checkpoint, instead of
starting from scratch. With Service Bus queues, you can enable the same scenario
through the use of message sessions. Sessions enable you to save and retrieve the
application processing state (by using SetState and GetState).
Dead lettering, which is only supported by Service Bus queues, can be useful for
isolating messages that cannot be processed successfully by the receiving
application or when messages cannot reach their destination due to an expired
time-to-live (TTL) property. The TTL value specifies how long a message remains in
the queue. With Service Bus, the message will be moved to a special queue called
$DeadLetterQueue when the TTL period expires.
To find "poison" messages in Azure Queues, when dequeuing a message the application examines the DequeueCount property of the message. If DequeueCount is above a given threshold, the application moves the message to an application-defined "dead letter" queue (a short sketch appears at the end of this Additional information section).
Azure Queues enable you to obtain a detailed log of all of the transactions executed
against the queue, as well as aggregated metrics. Both of these options are useful
for debugging and understanding how your application uses Azure Queues. They are
also useful for performance-tuning your application and reducing the costs of using
queues.
The concept of "message sessions" supported by Service Bus enables messages
that belong to a certain logical group to be associated with a given receiver, which
in turn creates a session-like affinity between messages and their respective
receivers. You can enable this advanced functionality in Service Bus by setting the
SessionID property on a message. Receivers can then listen on a specific session ID
and receive messages that share the specified session identifier.

The duplication detection functionality supported by Service Bus queues automatically removes duplicate messages sent to a queue or topic, based on the value of the MessageId property.
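The following minimal C# sketch illustrates the poison-message pattern for Azure Queues described above, using the DequeueCount property. It assumes the WindowsAzure.Storage client library; the threshold, the queue names, and the ProcessOrder helper are illustrative only.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class PoisonMessageHandling
{
    const int MaxDequeueCount = 5;   // illustrative threshold

    static void Main()
    {
        var client = CloudStorageAccount
            .Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...")
            .CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("orders");
        CloudQueue poisonQueue = client.GetQueueReference("orders-poison");
        poisonQueue.CreateIfNotExists();

        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));
        if (msg == null) return;

        if (msg.DequeueCount > MaxDequeueCount)
        {
            // The message has repeatedly failed processing: move it to an
            // application-defined "dead letter" queue and delete the original.
            poisonQueue.AddMessage(new CloudQueueMessage(msg.AsString));
            queue.DeleteMessage(msg);
            return;
        }

        ProcessOrder(msg.AsString);      // hypothetical business logic
        queue.DeleteMessage(msg);
    }

    static void ProcessOrder(string body) { /* ... */ }
}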
Capacity and quotas
This section compares Azure Queues and Service Bus queues from the perspective of capacity and quotas that may apply.
Comparison Criteria | Azure Queues | Service Bus Queues
Maximum queue size | 200 TB (limited to a single storage account capacity) | 1 GB to 80 GB (defined upon creation of a queue and enabling partitioning - see the Additional Information section)
Maximum message size | 64 KB (48 KB when using Base64 encoding); Azure supports large messages by combining queues and blobs, at which point you can enqueue up to 200 GB for a single item | 256 KB or 1 MB (including both header and body; maximum header size: 64 KB). Depends on the service tier.
Maximum message TTL | 7 days | TimeSpan.Max
Maximum number of queues | Unlimited | 10,000 (per service namespace, can be increased)
Maximum number of concurrent clients | Unlimited | Unlimited (the 100 concurrent connection limit only applies to TCP protocol-based communication)
Additional information
Service Bus enforces queue size limits. The maximum queue size is specified upon
creation of the queue and can have a value between 1 and 80 GB. If the queue size
value set on creation of the queue is reached, additional incoming messages will be
rejected and an exception will be received by the calling code. For more information
about quotas in Service Bus, see Service Bus Quotas.
You can create Service Bus queues in 1, 2, 3, 4, or 5 GB sizes (the default is 1 GB).
With partitioning enabled (which is the default), Service Bus creates 16 partitions for
each GB you specify. As such, if you create a queue that is 5 GB in size, with 16
partitions the maximum queue size becomes (5 * 16) = 80 GB. You can see the
maximum size of your partitioned queue or topic by looking at its entry on the Azure
portal.
With Azure Queues, if the content of the message is not XML-safe, then it must be
Base64 encoded. If you Base64-encode the message, the user payload can be up to
48 KB, instead of 64 KB.
With Service Bus queues, each message stored in a queue is comprised of two
parts: a header and a body. The total size of the message cannot exceed the
maximum message size supported by the service tier.
When clients communicate with Service Bus queues over the TCP protocol, the
maximum number of concurrent connections to a single Service Bus queue is
limited to 100. This number is shared between senders and receivers. If this quota is
reached, subsequent requests for additional connections will be rejected and an
exception will be received by the calling code. This limit is not imposed on clients
connecting to the queues using REST-based API.
If you require more than 10,000 queues in a single Service Bus namespace, you can
contact the Azure support team and request an increase. To scale beyond 10,000
queues with Service Bus, you can also create additional namespaces using the
Azure portal.
Management and operations
This section compares the management features provided by Azure Queues and Service Bus queues.

Comparison Criteria | Azure Queues | Service Bus Queues
Management protocol | REST over HTTP/HTTPS | REST over HTTPS
Runtime protocol | REST over HTTP/HTTPS | REST over HTTPS; AMQP 1.0 Standard (TCP with TLS)
.NET Managed API | Yes (.NET managed Storage Client API) | Yes (.NET managed brokered messaging API)
Native C++ | Yes | No
Java API | Yes | Yes
PHP API | Yes | Yes
Node.js API | Yes | Yes
Arbitrary metadata support | Yes | No
Queue naming rules | Up to 63 characters long (letters in a queue name must be lowercase) | Up to 260 characters long (queue paths and names are case-insensitive)
Get queue length function | Yes (approximate value if messages expire beyond the TTL without being deleted) | Yes (exact, point-in-time value)
Peek function | Yes | Yes
Additional information

Azure Queues provide support for arbitrary attributes that can be applied to the
queue description, in the form of name/value pairs.
Both queue technologies offer the ability to peek a message without having to lock
it, which can be useful when implementing a queue explorer/browser tool.
The Service Bus .NET brokered messaging APIs leverage full-duplex TCP connections
for improved performance when compared to REST over HTTP, and they support the
AMQP 1.0 standard protocol.
Names of Azure queues can be 3-63 characters long and can contain lowercase letters, numbers, and hyphens. For more information, see Naming Queues and Metadata.
Service Bus queue names can be up to 260 characters long and have less restrictive naming rules. Service Bus queue names can contain letters, numbers, periods, hyphens, and underscores.
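To make the get-queue-length and peek rows above concrete, here is a hedged C# sketch that reads the approximate count and peeks a message with the storage client, then reads the exact count from a Service Bus queue description. It assumes the WindowsAzure.Storage and WindowsAzure.ServiceBus client libraries; the connection strings and the "orders" queue are placeholders.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.ServiceBus;              // NamespaceManager
using Microsoft.ServiceBus.Messaging;

class QueueLengthAndPeek
{
    static void Main()
    {
        // Azure Queue storage: approximate count plus a non-destructive peek.
        CloudQueue storageQueue = CloudStorageAccount
            .Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...")
            .CreateCloudQueueClient()
            .GetQueueReference("orders");
        storageQueue.FetchAttributes();
        Console.WriteLine("~{0} messages", storageQueue.ApproximateMessageCount);
        CloudQueueMessage peeked = storageQueue.PeekMessage();   // does not hide or lock the message
        if (peeked != null) Console.WriteLine("Next: {0}", peeked.AsString);

        // Service Bus: exact, point-in-time count from the queue description,
        // plus a peek that likewise does not lock the message.
        string busConn = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
        QueueDescription description = NamespaceManager
            .CreateFromConnectionString(busConn)
            .GetQueue("orders");
        Console.WriteLine("{0} messages", description.MessageCount);

        QueueClient busQueue = QueueClient.CreateFromConnectionString(busConn, "orders");
        BrokeredMessage next = busQueue.Peek();
        if (next != null) Console.WriteLine("Next: {0}", next.MessageId);
        busQueue.Close();
    }
}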
Authentication and authorization
This section discusses the authentication and authorization features supported by Azure Queues and Service Bus queues.

Comparison Criteria | Azure Queues | Service Bus Queues
Authentication | Symmetric key | Symmetric key
Security model | Delegated access via SAS tokens | SAS
Identity provider federation | No | Yes
Additional information
Every request to either of the queuing technologies must be authenticated. Public
queues with anonymous access are not supported. Using SAS, you can address this
scenario by publishing a write-only SAS, read-only SAS, or even a full-access SAS.
The authentication scheme provided by Azure Queues involves the use of a
symmetric key, which is a hash-based Message Authentication Code (HMAC),
computed with the SHA-256 algorithm and encoded as a Base64 string. For more
information about the respective protocol, see Authentication for the Azure Storage
Services. Service Bus queues support a similar model using symmetric keys. For
more information, see Shared Access Signature Authentication with Service Bus.
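To illustrate the delegated-access model described above, the following C# sketch generates a write-only, time-limited SAS for an Azure queue and then uses it from a sender that holds no account key. It assumes the WindowsAzure.Storage client library; the queue name and the one-hour expiry are illustrative.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSasExample
{
    static void Main()
    {
        CloudQueue queue = CloudStorageAccount
            .Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...")
            .CreateCloudQueueClient()
            .GetQueueReference("orders");
        queue.CreateIfNotExists();

        // Write-only policy: the holder may add messages for one hour,
        // but cannot read, update, or delete them.
        var policy = new SharedAccessQueuePolicy
        {
            Permissions = SharedAccessQueuePermissions.Add,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        };
        string sasToken = queue.GetSharedAccessSignature(policy);

        // A sender that only knows the queue URI and the SAS token.
        var senderQueue = new CloudQueue(queue.Uri, new StorageCredentials(sasToken));
        senderQueue.AddMessage(new CloudQueueMessage("order #42"));
    }
}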
Cost
This section compares Azure Queues and Service Bus queues from a cost perspective.
Comparison Criteria | Azure Queues | Service Bus Queues
Queue transaction cost | $0.0036 (per 100,000 transactions) | Basic tier: $0.05 (per million operations)
Billable operations | All | Send/Receive only (no charge for other operations)
Idle transactions | Billable (querying an empty queue is counted as a billable transaction) | Billable (a receive against an empty queue is considered a billable message)
Storage cost | $0.07 (per GB/month) | $0.00
Outbound data transfer costs | $0.12 - $0.19 (depending on geography) | $0.12 - $0.19 (depending on geography)
Additional information
Data transfers are charged based on the total amount of data leaving the Azure
datacenters via the internet in a given billing period.
Data transfers between Azure services located within the same region are not
subject to charge.
As of this writing, inbound data transfers are not subject to charge.
Given the support for long polling, using Service Bus queues can be cost effective in
situations where low-latency delivery is required.
Note
All costs are subject to change. This table reflects current pricing and does not include any promotional offers that may currently be available. For up-to-date information about Azure pricing, see the Azure pricing page. For more information about Service Bus pricing, see Service Bus pricing.
Conclusion
By gaining a deeper understanding of the two technologies, you will be able to make a more informed decision on which queue technology to use, and when. The decision on when to use Azure Queues or Service Bus queues clearly depends on a number of factors. These factors may depend heavily on the individual needs of your application and its architecture. If your application already uses the core capabilities of Microsoft Azure, you may prefer to choose Azure Queues, especially if you require basic communication and messaging between services or need queues that can be larger than 80 GB in size.

Because Service Bus queues provide a number of advanced features, such as sessions, transactions, duplicate detection, automatic dead-lettering, and durable publish/subscribe capabilities, they may be a preferred choice if you are building a hybrid application or if your application otherwise requires these features.

7.1.1 Windows Azure Service Bus Publish/Subscribe Example


Published April 12, 2009, in Cloud Computing, Windows Azure
Tags: .NET, Azure, Cloud Computing, ESB, netEventRelayBinding, Publish/Subscribe, Service Bus, STS, WCF

Within the Azure Platform, there is a set of services named .NET Services. This set of services was originally known as BizTalk.NET, and it includes the Workflow Services, the Access Control Services, and the one we will talk about, the Service Bus.

The Service Bus implements the familiar Enterprise Service Bus Pattern. In a nutshell, the service
bus allows for service location unawareness between the service and its consumer, along with a set of
other, rather important, capabilities. The Service Bus allows you to build composite applications
based on services that you really do not need to know where they are. They could be in servers inside
your company, or on a server on the other side of the world; the location is irrelevant. There are,
nevertheless, important things you need to know about the service you are calling, namely, security.
The Access Control Service integrates seamlessly with the Service Bus to provide authentication and
authorization. The Access Control Service will be addressed in some other entry, for now we are
concentrating on the Service Bus.
The following diagrams depict different scenarios where it makes sense to use the Service Bus.

Depending on the Service Bus location, it can take a slightly different designation. If the Service Bus is installed and working on-premises, it is commonly known as an ESB (Enterprise Service Bus); if it is in the cloud, it takes the designation ISB (Internet Service Bus). It is still not clear what Microsoft's intentions are regarding an on-premises offering of the Azure Platform. The following diagram shows another possible scenario for using the Service Bus.

As I mentioned before, there are several other benefits associated with the use of the Service Bus that
can be leveraged by the configuration shown in this diagram. For instance, the Service Bus also
provides protocol mediation allowing use of non-standard bindings inside the enterprise (e.g.,
NetTcpBinding), and more standard protocols once a request is forwarded to the cloud (e.g.,
BasicHttpBinding).
Going back to our example, we are going to setup the publisher/subscriber scenario depicted in the
following diagram.

Let's start by building the service. To do so, follow these steps:


1) Sign in to the Azure Services Platform Portal at http://portal.ex.azure.microsoft.com/
2) Create a solution in the Azure Services Platform Portal. This solution will create an account issued by the Access Control Service (accesscontrol.windows.net). The Access Control Service creates this account for convenience only, and this is going to be deprecated. The Access Control Service is basically an STS (Security Token Service); there is no intention from Microsoft to build yet another Identity Management System, although it integrates with Identity Management Systems such as Windows CardSpace, Windows Live Id, Active Directory Federation Services, etc.
3) Create a console application named ESBServiceConsole
4) Add a reference to the System.ServiceModel assembly
5) Add a reference to the Microsoft.ServiceBus assembly. You can find this assembly in the folder C:\Program Files\Microsoft .NET Services SDK (March 2009 CTP)\Assemblies\Microsoft.ServiceBus.dll. By the way, I am using the March 2009 CTP in this example; you can find it at http://www.microsoft.com/downloads/details.aspx?FamilyID=b44c10e8-425c-417f-af10-3d2839a5a362&displaylang=en
6) Add the following interface to the program.cs file
6) Add the following interface to the program.cs file

[ServiceContract(Name = "IEchoContract", Namespace = "http://azure.samples/")]
public interface IEchoContract
{
    [OperationContract(IsOneWay = true)]
    void Echo(string text);
}

7) Add the following class to the program.cs file

[ServiceBehavior(Name = "EchoService", Namespace = "http://azure.samples/")]
class EchoService : IEchoContract
{
    public void Echo(string text)
    {
        Console.WriteLine("Echoing: {0}", text);
    }
}

8) Add the following code to the main function

// since we are using a netEventRelayBinding based endpoint we can set
// the connectivity protocol, in this case we are setting it to Http
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

// read the solution credentials to connect to the Service Bus. this type
// of credentials is going to be deprecated, they just exist for
// convenience; in a real scenario one should use CardSpace, Certificates,
// Live Services Id, etc.
Console.Write("Your Solution Name: ");
string solutionName = Console.ReadLine();
Console.Write("Your Solution Password: ");
string solutionPassword = Console.ReadLine();

// create the endpoint address in the solution's namespace
Uri address = ServiceBusEnvironment.CreateServiceUri("sb", solutionName, "EchoService");

// create the credentials object for the endpoint
TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
    new TransportClientEndpointBehavior();
userNamePasswordServiceBusCredential.CredentialType =
    TransportClientCredentialType.UserNamePassword;
userNamePasswordServiceBusCredential.Credentials.UserName.UserName = solutionName;
userNamePasswordServiceBusCredential.Credentials.UserName.Password = solutionPassword;

// create the service host reading the configuration
ServiceHost host = new ServiceHost(typeof(EchoService), address);

// add the Service Bus credentials to all endpoints specified in configuration
foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
{
    endpoint.Behaviors.Add(userNamePasswordServiceBusCredential);
}

// open the service
host.Open();

Console.WriteLine("Service address: " + address);
Console.WriteLine("Press [Enter] to exit");
Console.ReadLine();

// close the service
host.Close();

Notice that I chose the Http protocol as the connectivity mode on the service side. On the client side, I will specify the Tcp protocol. This is to show that protocol mediation can be accomplished with the use of the Service Bus.
9) Add an app.config file to the project
10) Add the following configuration to the app.config file

<system.serviceModel>
  <services>
    <service name="ESBServiceConsole.EchoService">
      <endpoint contract="ESBServiceConsole.IEchoContract"
                binding="netEventRelayBinding" />
    </service>
  </services>
</system.serviceModel>

11) Compile and run the service. Enter the solution credentials, and you should get the following:

Now let's build a client application.


1) Add a console project named ESBClientConsole to the solution.
2) Add a reference to the System.ServiceModel assembly.
3) Add a reference to the Microsoft.ServiceBus assembly.
4) Add the following interface to the program.cs file

[ServiceContract(Name = "IEchoContract", Namespace = "http://azure.samples/")]
public interface IEchoContract
{
    [OperationContract(IsOneWay = true)]
    void Echo(string text);
}

public interface IEchoChannel : IEchoContract, IClientChannel { }

5) Add the following code to the main function

// since we are using a netEventRelayBinding based endpoint we can set
// the connectivity protocol, in this case we are setting it to Tcp
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Tcp;

// read the solution credentials to connect to the Service Bus. this type
// of credentials is going to be deprecated, they just exist for
// convenience; in a real scenario one should use CardSpace, Certificates,
// Live Services Id, etc.
Console.Write("Your Solution Name: ");
string solutionName = Console.ReadLine();
Console.Write("Your Solution Password: ");
string solutionPassword = Console.ReadLine();

// create the service URI based on the solution name
Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", solutionName, "EchoService");

// create the credentials object for the endpoint
TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
    new TransportClientEndpointBehavior();
userNamePasswordServiceBusCredential.CredentialType =
    TransportClientCredentialType.UserNamePassword;
userNamePasswordServiceBusCredential.Credentials.UserName.UserName = solutionName;
userNamePasswordServiceBusCredential.Credentials.UserName.Password = solutionPassword;

// create the channel factory loading the configuration
ChannelFactory<IEchoChannel> channelFactory =
    new ChannelFactory<IEchoChannel>("RelayEndpoint", new EndpointAddress(serviceUri));

// apply the Service Bus credentials
channelFactory.Endpoint.Behaviors.Add(userNamePasswordServiceBusCredential);

// create and open the client channel
IEchoChannel channel = channelFactory.CreateChannel();
channel.Open();

Console.WriteLine("Enter text to echo (or [Enter] to exit):");
string input = Console.ReadLine();
while (input != String.Empty)
{
    try
    {
        channel.Echo(input);
        Console.WriteLine("Done!");
    }
    catch (Exception e)
    {
        Console.WriteLine("Error: " + e.Message);
    }
    input = Console.ReadLine();
}

channel.Close();
channelFactory.Close();

6) Add an app.config file to the project


7) Add the following configuration to the app.config file

<system.serviceModel>
  <client>
    <endpoint name="RelayEndpoint"
              contract="ESBClientConsole.IEchoContract"
              binding="netEventRelayBinding" />
  </client>
</system.serviceModel>

8) Compile the client, run three instances of the service, and enter the credentials. Then run the client and type some text; the result should be as follows.

There you have it, a publish/subscribe example using the Service Bus.

8 MICROSOFT AZURE - VIRTUAL NETWORKS (VNETS) EXPLAINED

Written on 08 July 2016. Posted in Cloud

8.1 What is a Virtual Network?


A Virtual Network, also known as a VNet, is an isolated network within the Microsoft Azure cloud.
VNets are analogous to an AWS VPC (Virtual Private Cloud), providing a range of networking features such as the ability to customize DHCP blocks, DNS, routing, inter-VM connectivity, access control, and Virtual Private Networks (VPN).

8.2 Tips
Always Deploy VNet First - You should always build your VNet before you deploy your VM instance. If you do not, Azure will create a default VNet, which may contain an address range that overlaps with your on-premises network.
Moving VMs between VNets - A VM can be moved from one subnet to another within a VNet. However, to move a VM from one VNet to another VNet, you must delete the VM and recreate it using the previous VHD.

8.3 Deployment Models


Azure provides 2 deployment models: Classic (ASM) and ARM.
ARM is the new deployment model that all resources within the Azure cloud are being transitioned to. The key point to note is that the configuration, management, and functionality of resources (compute, network, and storage), along with the portals, differ between the two.
The key differences are outlined below.

ASM (Azure Service Management) | ARM (Azure Resource Manager)
Also known as Classic | Also known as IaaS v2
Old deployment model | New deployment model
REST based API | REST based API
Uses XML data serialization | Uses JSON data serialization
Does NOT support Resource Groups | Supports Resource Groups

What are resource groups?

A Resource Group is a logical container for a collection of resources, on which monitoring, provisioning, and billing can be performed.

8.4 Components
Below are the key components within Microsoft Azure Virtual Networks.

8.4.1 Subnets
A subnet is a range of IP addresses in the VNet. You can divide a VNet into multiple subnets for organization and security. Additionally, you can apply VNet routing tables and Network Security Groups (NSG) to a subnet [2].

8.4.2 IP Addresses
There are 2 types of IP addresses that can be assigned to an Azure resource: Public or Private.

Public

Used for internet/Azure public facing communication.

A dynamic IP is assigned to the VM by default. When the VM is started/stopped, the IP is released/renewed.

A static IP can be assigned to a VM, which is only released when the VM is deleted.

Private

Used for connectivity within a VNet, and also when using a VPN gateway or ExpressRoute.

A dynamic IP, by default, is allocated to the VM from the resource's subnet via DHCP. When the VM is started/stopped, the IP may be released/renewed based on the DHCP lease.

A static IP can be assigned to the VM.

8.4.3 Network Security Groups


Network Security Groups (NSGs) allow you to permit or deny traffic (via a rule base) to either a subnet or a network interface. By default, the inbound and outbound rules include an implied deny all. Finally, NSGs are stateful, meaning that once a connection is permitted in one direction, return traffic for that established connection is automatically allowed.

8.4.4 Loadbalancing
Azure provides three different load balancing solutions:

Azure Traffic Manager - Similar to Route 53 within AWS, DNS is used to direct traffic to the necessary destination. There are 3 destination selection methods: failover, performance, or round robin.

Azure Load Balancer - Performs L4 load balancing within a Virtual Network. Currently only supports round robin distribution.

Azure Application Gateway - Performs L7 load balancing. Supports HTTP request-based load balancing, SSL termination, and cookie-based persistence.

8.4.5 Routing Tables


Although Azure automates the provisioning of system routes, there may be a
scenario where you need to further dictate how your traffic is routed.
Routes are applied on traffic leaving a subnet, and traffic can be sent to a next
hop of either a virtual network, virtual network gateway or virtual machine.

8.5 VPN
There will be times when you will need to encrypt your data when sending it
over the internet. Or there may be times where you need to send traffic
between 2 VNets. This is where Azure VPNs come into play.

8.5.1 Gateway Types


There are 2 types of gateways, VPN and Express Route. Further details are shown below.

VPN - Traffic is encrypted between the endpoints. There are 3 modes:

Site-to-Site - Traffic is secured using IPsec/IKE between 2 VPN gateways, for example between Azure and your on-premises firewall.

Point-to-Site - Via a VPN client, a user connects to Azure, and traffic is encrypted using TLS.

VNet-to-VNet - Traffic is secured between 2 Virtual Networks using IPsec/IKE.

Express Route - This provides a dedicated peered connection into Azure. This is covered in more detail within the Express Route section.

The amount of traffic and/or tunnels that your gateway can support is controlled via a set of 3 SKUs; these SKUs are updated via PowerShell cmdlets.

8.5.2 VPN Types


There are 2 types of VPN - policy based and route based - each coming with its own caveats.

Policy Based - Traffic is encrypted/decrypted based upon a policy. This policy contains the IPs and subnets of the endpoints on each side. Policy based VPNs are named 'static routing gateways' within the Classic (ASM) deployment model. However, there are some key caveats:

Supports IKEv1 only.

Only 1 Site-to-Site VPN can be configured. Multi-point is not supported.

Point-to-Site is not supported.

VNet-to-VNet VPN is not supported.

Route Based - Traffic is routed via a tunnel interface. This interface then encrypts or decrypts the packets that pass through the tunnel. Route based VPNs are named 'dynamic routing gateways' in the Classic deployment model. Additionally, they only support IKEv2.

8.6 Express Route


Microsoft Azure ExpressRoute lets you extend your on-premises networks
into the Microsoft cloud over a dedicated private connection facilitated by a
connectivity provider[3]. This offers greater bandwidth, improved reliability, and
also greater security due to bypassing the internet.

8.6.1 Connectivity Options


Express Route offers 2 connectivity options:

Exchange Provider
You connect into Azure via a point-to-point connection, through an Exchange Provider who has a direct connection into Azure. With this option you have full control over routing; however, it is not ideal for multi-point WANs, due to the point-to-point connection requirement.

Network Service Provider
With this option you connect your MPLS WAN into Azure via your Network Service Provider.
Routing is typically managed by your Network Service Provider. However, this option does provide multi-point connectivity (i.e. each branch/site) into Azure, unlike the point-to-point connectivity that goes via an Exchange Provider.

Differences between connection types [4]

8.7 Enable diagnostics in cloud service projects before deploying them


In Visual Studio, you can choose to collect diagnostics data for roles that run
in Azure, when you run the service in the emulator before deploying it. All
changes to diagnostics settings in Visual Studio are saved in the
diagnostics.wadcfgx configuration file. These configuration settings specify
the storage account where diagnostics data is saved when you deploy your
cloud service.
8.7.1 To enable diagnostics in Visual Studio before deployment
1. On the shortcut menu for the role that interests you, choose Properties, and then choose the Configuration tab in the role's Properties window.

2. In the Diagnostics section, make sure that the Enable Diagnostics check box is selected.

3. Choose the ellipsis (...) button to specify the storage account where you want the diagnostics data to be stored. The storage account you choose will be the location where diagnostics data is stored.

4. In the Create Storage Connection String dialog box, specify whether you want to connect using the Azure Storage Emulator, an Azure subscription, or manually entered credentials.

If you choose the Microsoft Azure Storage Emulator option, the connection string is set to UseDevelopmentStorage=true.

If you choose the Your subscription option, you can choose the Azure subscription you want to use and the account name. You can choose the Manage Accounts button to manage your Azure subscriptions.

If you choose the Manually entered credentials option, you're prompted to enter the name and key of the Azure account you want to use.

5. Choose the Configure button to view the Diagnostics configuration dialog box. Each tab (except for General and Log Directories) represents a diagnostic data source that you can collect. The default tab, General, offers you the following diagnostics data collection options: Errors only, All information, and Custom plan. The default option, Errors only, takes the least amount of storage because it doesn't transfer warnings or tracing messages. The All information option transfers the most information and is, therefore, the most expensive option in terms of storage.

6. For this example, select the Custom plan option so you can customize the data collected.

7. The Disk Quota in MB box specifies how much space you want to allocate in your storage account for diagnostics data. You can change the default value if you want.

8. On each tab of diagnostics data you want to collect, select its Enable Transfer of check box. For example, if you want to collect application logs, select the Enable transfer of Application Logs check box on the Application Logs tab. Also, specify any other information required by each diagnostics data type. See the section Configure diagnostics data sources later in this topic for configuration information on each tab.

9. After you've enabled collection of all the diagnostics data you want, choose the OK button.

10. Run your Azure cloud service project in Visual Studio as usual. As you use your application, the log information that you enabled is saved to the Azure storage account you specified.

8.8 Enable diagnostics in Azure virtual machines


In Visual Studio, you can choose to collect diagnostics data for Azure virtual
machines.+
8.8.1 To enable diagnostics in Azure virtual machines
1. In Server Explorer, choose the Azure node and then connect to your Azure subscription, if you're not already connected.

2. Expand the Virtual Machines node. You can create a new virtual machine, or select one that's already there.

3. On the shortcut menu for the virtual machine that interests you, choose Configure. This shows the virtual machine configuration dialog box.

4. If it's not already installed, add the Microsoft Monitoring Agent Diagnostics extension. This extension lets you gather diagnostics data for the Azure virtual machine. In the Installed Extensions list, choose the Select an available extension drop-down menu and then choose Microsoft Monitoring Agent Diagnostics.

8.8.1.1.1 Note

Other diagnostics extensions are available for your virtual machines. For more information, see Azure VM Extensions and Features.

5. Choose the Add button to add the extension and view its Diagnostics configuration dialog box.

6. Choose the Configure button to specify a storage account and then choose the OK button.
Each tab (except for General and Log Directories) represents a diagnostic data source that you can collect.
The default tab, General, offers you the following diagnostics data collection options: Errors only, All information, and Custom plan. The default option, Errors only, takes the least amount of storage because it doesn't transfer warnings or tracing messages. The All information option transfers the most information and is, therefore, the most expensive option in terms of storage.

7. For this example, select the Custom plan option so you can customize the data collected.

8. The Disk Quota in MB box specifies how much space you want to allocate in your storage account for diagnostics data. You can change the default value if you want.

9. On each tab of diagnostics data you want to collect, select its Enable Transfer of check box.
For example, if you want to collect application logs, select the Enable transfer of Application Logs check box on the Application Logs tab. Also, specify any other information required by each diagnostics data type. See the section Configure diagnostics data sources later in this topic for configuration information on each tab.

10. After you've enabled collection of all the diagnostics data you want, choose the OK button.

11. Save the updated project.
You'll see a message in the Microsoft Azure Activity Log window that the virtual machine has been updated.

8.9 Configure diagnostics data sources


After you enable diagnostics data collection, you can choose exactly what data sources you want to collect and what information is collected. The following is a list of tabs in the Diagnostics configuration dialog box and what each configuration option means.
8.9.1 Application logs
Application logs contain diagnostics information produced by a web application. If you want to capture application logs, select the Enable transfer of Application Logs check box. You can increase or decrease the number of minutes when the application logs are transferred to your storage account by changing the Transfer Period (min) value. You can also change the amount of information captured in the log by setting the Log level value. For example, you can choose Verbose to get more information or choose Critical to capture only critical errors. If you have a specific diagnostics provider that emits application logs, you can capture them by adding the provider's GUID to the Provider GUID box.

See Enable diagnostics logging for web apps in Azure App Service for more information about application logs.
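For reference, application logs of this kind typically come from System.Diagnostics.Trace calls in your role code, as in the following minimal sketch; the OrderProcessor class is illustrative, and it assumes the standard Azure diagnostics trace listener is configured for the role.

using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        // With diagnostics enabled, Trace output from role code is collected
        // as application logs and transferred on the schedule set above.
        Trace.TraceInformation("Processing order {0}", orderId);

        try
        {
            // ... business logic ...
        }
        catch (System.Exception ex)
        {
            // Logged at Error level, so it is captured even when the
            // "Errors only" collection plan is selected.
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }
    }
}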
8.9.2 Windows event logs
If you want to capture Windows event logs, select the Enable transfer of Windows Event Logs check box. You can increase or decrease the number of minutes when the event logs are transferred to your storage account by changing the Transfer Period (min) value. Select the check boxes for the types of events that you want to track.

If you're using Azure SDK 2.6 or later and want to specify a custom data source, enter it in the text box and then choose the Add button next to it. The data source is added to the diagnostics.wadcfgx file.

If you're using Azure SDK 2.5 and want to specify a custom data source, you can add it to the WindowsEventLog section of the diagnostics.wadcfgx file, such as in the following example.
<WindowsEventLog scheduledTransferPeriod="PT1M">
  <DataSource name="Application!*" />
  <DataSource name="CustomDataSource!*" />
</WindowsEventLog>

8.9.3 Performance counters


Performance counter information can help you locate system bottlenecks and fine-tune system and application performance. See Create and Use Performance Counters in an Azure Application for more information. If you want to capture performance counters, select the Enable transfer of Performance Counters check box. You can increase or decrease the number of minutes when the performance counters are transferred to your storage account by changing the Transfer Period (min) value. Select the check boxes for the performance counters that you want to track.

To track a performance counter that isn't listed, enter it by using the suggested syntax and then choose the Add button. The operating system on the virtual machine determines which performance counters you can track. For more information about syntax, see Specifying a Counter Path.
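As a small illustration of counter-path syntax, the following C# sketch samples \Processor(_Total)\% Processor Time locally with the standard System.Diagnostics.PerformanceCounter class; the probe program itself is illustrative and separate from the Azure diagnostics transfer described above.

using System;
using System.Diagnostics;
using System.Threading;

class CounterProbe
{
    static void Main()
    {
        // Same counter that the configuration dialog can transfer for you,
        // expressed as category, counter, and instance.
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();                 // the first sample always returns 0
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        }
    }
}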
8.9.4 Infrastructure logs
If you want to capture infrastructure logs, which contain information about the Azure diagnostic infrastructure, the RemoteAccess module, and the RemoteForwarder module, select the Enable transfer of Infrastructure Logs check box. You can increase or decrease the number of minutes when the logs are transferred to your storage account by changing the Transfer Period (min) value.

See Collect Logging Data by Using Azure Diagnostics for more information.
8.9.5 Log directories
If you want to capture log directories, which contain data collected from log directories for Internet Information Services (IIS) requests, failed requests, or folders that you choose, select the Enable transfer of Log Directories check box. You can increase or decrease the number of minutes when the logs are transferred to your storage account by changing the Transfer Period (min) value.

You can select the boxes of the logs you want to collect, such as IIS Logs and Failed Request Logs. Default storage container names are provided, but you can change the names if you want.

Also, you can capture logs from any folder. Just specify the path in the Log from Absolute Directory section and then choose the Add Directory button. The logs will be captured to the specified containers.
8.9.6 ETW logs
If you use Event Tracing for Windows (ETW) and want to capture ETW logs,
select the Enable transfer of ETW Logs check box. You can increase or
decrease the number of minutes when the logs are transferred to your
storage account by changing the Transfer Period (min) value.+

The events are captured from event sources and event manifests that you
specify. To specify an event source, enter a name in the Event Sources
section and then choose the Add Event Source button. Similarly, you can
specify an event manifest in the Event Manifests section and then choose
the Add Event Manifest button.+

The ETW framework is supported in ASP.NET through classes in the System.Diagnostics namespace (https://msdn.microsoft.com/library/system.diagnostics(v=vs.110)). The Microsoft.WindowsAzure.Diagnostics namespace, which inherits from and extends the standard System.Diagnostics classes, enables the use of System.Diagnostics as a logging framework in the Azure environment. For more information, see Take Control of Logging and Tracing in Microsoft Azure and Enabling Diagnostics in Azure Cloud Services and Virtual Machines.
8.9.7 Crash dumps
If you want to capture information about when a role instance crashes, select
the Enable transfer of Crash Dumps check box. (Because ASP.NET
handles most exceptions, this is generally useful only for worker roles.) You
can increase or decrease the percentage of storage space devoted to the
crash dumps by changing the Directory Quota (%) value. You can change
the storage container where the crash dumps are stored, and you can select
whether you want to capture a Full or Mini dump.
The processes currently being tracked are listed. Select the check boxes for
the processes that you want to capture. To add another process to the list,
enter the process name and then choose the Add Process button.
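In diagnostics.wadcfgx this corresponds to a CrashDumps element. The dump type, container name, and process name below are examples only; substitute the processes you actually want to track:

<CrashDumps containerName="wad-crash-dumps" dumpType="Mini">
  <CrashDumpConfiguration processName="WaWorkerHost.exe" />
</CrashDumps>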

See Take Control of Logging and Tracing in Microsoft Azure, Microsoft Azure Diagnostics Part 4: Custom Logging Components, and Azure Diagnostics 1.3 Changes for more information.

8.10 View the diagnostics data


After you've collected the diagnostics data for a cloud service or a virtual machine, you can view it.

8.10.1 To view cloud service diagnostics data

1. Deploy your cloud service as usual and then run it.

2. You can view the diagnostics data in either a report that Visual Studio generates or in tables in your storage account. To view the data in a report, open Cloud Explorer or Server Explorer, open the shortcut menu of the node for the role that interests you, and then choose View Diagnostic Data.

A report that shows the available data appears. If the most recent data doesn't appear, you might have to wait for the transfer period to elapse. Choose the Refresh link to immediately update the data, or choose an interval in the Auto-Refresh dropdown list box to have the data updated automatically. To export the error data, choose the Export to CSV button to create a comma-separated value file you can open in a spreadsheet.

In Cloud Explorer or Server Explorer, open the storage account that's associated with the deployment.
3. Open the diagnostics tables in the table viewer, and then review the data that you collected. For IIS logs and custom logs, you can open a blob container. By reviewing the following table, you can find the table or blob container that contains the data that interests you. In addition to the data for that log file, the table entries contain EventTickCount, DeploymentId, Role, and RoleInstance to help you identify what virtual machine and role generated the data and when.

| Diagnostic data | Description | Location |
| --- | --- | --- |
| Application Logs | Logs that your code generates by calling methods of the System.Diagnostics.Trace class. | WADLogsTable |
| Event Logs | This data is from the Windows event logs on the virtual machines. Windows stores information in these logs, but applications and services also use them to report errors or log information. | WADWindowsEventLogsTable |
| Performance Counters | You can collect data on any performance counter that's available on the virtual machine. The operating system provides performance counters, which include many statistics such as memory usage and processor time. | WADPerformanceCountersTable |
| Infrastructure Logs | These logs are generated from the diagnostics infrastructure itself. | WADDiagnosticInfrastructureLogsTable |
| IIS Logs | These logs record web requests. If your cloud service gets a significant amount of traffic, these logs can be quite lengthy, so you should collect and store this data only when you need it. | You can find failed-request logs in the blob container under wad-iis-failedreqlogs, under a path for that deployment, role, and instance. You can find complete logs under wad-iis-logfiles. Entries for each file are made in the WADDirectories table. |
| Crash dumps | This information provides binary images of your cloud service's process (typically a worker role). | wad-crash-dumps blob container |
| Custom log files | Logs of data that you predefined. | You can specify in code the location of custom log files in your storage account. For example, you can specify a custom blob container. |

4. If data of any type is truncated, you can try increasing the buffer for that data type or shortening the interval between transfers of data from the virtual machine to your storage account.

5. (Optional) Purge data from the storage account occasionally to reduce overall storage costs.

6. When you do a full deployment, the diagnostics.cscfg file (.wadcfgx for Azure SDK 2.5) is updated in Azure, and your cloud service picks up any changes to your diagnostics configuration. If you instead update an existing deployment, the .cscfg file isn't updated in Azure. You can still change diagnostics settings, though, by following the steps in the next section. For more information about performing a full deployment and updating an existing deployment, see Publish Azure Application Wizard.

8.10.2 To view virtual machine diagnostics data

1. On the shortcut menu for the virtual machine, choose View Diagnostics Data. This opens the Diagnostics summary window.

If the most recent data doesn't appear, you might have to wait for the transfer period to elapse. Choose the Refresh link to immediately update the data, or choose an interval in the Auto-Refresh dropdown list box to have the data updated automatically. To export the error data, choose the Export to CSV button to create a comma-separated value file you can open in a spreadsheet.

8.11 Configure cloud service diagnostics after deployment


If you're investigating a problem with a cloud service that is already running, you might want to collect data that you didn't specify before you originally deployed the role. In this case, you can start to collect that data by using the settings in Server Explorer. You can configure diagnostics for either a single instance or all the instances in a role, depending on whether you open the Diagnostics Configuration dialog box from the shortcut menu for the instance or the role. If you configure the role node, any changes apply to all instances. If you configure the instance node, any changes apply to that instance only.
8.11.1 To configure diagnostics for a running cloud service

1. In Server Explorer, expand the Cloud Services node, and then expand nodes to locate the role or instance (or both) that you want to investigate.

2. On the shortcut menu for an instance node or a role node, choose Update Diagnostics Settings, and then choose the diagnostic settings that you want to collect.

For information about the configuration settings, see Configure diagnostics data sources in this topic. For information about how to view the diagnostics data, see View the diagnostics data in this topic.

If you change data collection in Server Explorer, these changes remain in effect until you fully redeploy your cloud service. If you use the default publish settings, the changes are not overwritten, since the default publish setting is to update the existing deployment rather than do a full redeployment. To make sure the settings clear at deployment time, go to the Advanced Settings tab in the Publish wizard and clear the Deployment update checkbox. When you redeploy with that checkbox cleared, the settings revert to those in the .wadcfgx (or .wadcfg) file as set through the Properties editor for the role. If you update your deployment, Azure keeps the old settings.

8.12 Troubleshoot Azure cloud service issues


If you experience problems with your cloud service projects, such as a role
that gets stuck in a "busy" status, repeatedly recycles, or throws an internal
server error, there are tools and techniques you can use to diagnose and fix
these problems. For specific examples of common problems and solutions, as
well as an overview of the concepts and tools used to diagnose and fix such
errors, see Azure PaaS Compute Diagnostics Data.

8.13 Q & A
What is the buffer size, and how large should it be?
On each virtual machine instance, quotas limit how much diagnostic data can be stored on the local file system. In addition, you specify a buffer size for each type of diagnostic data that's available. This buffer size acts like an individual quota for that type of data. By checking the bottom of the dialog box, you can determine the overall quota and the amount of memory that remains. If you specify larger buffers or more types of data, you'll approach the overall quota. You can change the overall quota by modifying the diagnostics.wadcfg/.wadcfgx configuration file. The diagnostics data is stored on the same file system as your application's data, so if your application uses a lot of disk space, you shouldn't increase the overall diagnostics quota.
What is the transfer period, and how long should it be?
The transfer period is the amount of time that elapses between data captures. After each transfer period, data is moved from the local file system on a virtual machine to tables in your storage account. If the amount of data that's collected exceeds the quota before the end of a transfer period, older data is discarded. You might want to decrease the transfer period if you're losing data because your data exceeds the buffer size or the overall quota.
What time zone are the time stamps in?
The time stamps are in the local time zone of the data center that hosts your cloud service. The following three timestamp columns in the log tables are used:

PreciseTimeStamp is the ETW timestamp of the event; that is, the time the event is logged from the client.

TIMESTAMP is PreciseTimeStamp rounded down to the upload frequency boundary. So, if your upload frequency is 5 minutes and the event time is 00:17:12, TIMESTAMP will be 00:15:00.

Timestamp is the timestamp at which the entity was created in the Azure table.

How do I manage costs when collecting diagnostic information?
The default settings (Log level set to Error and Transfer period set to 1 minute) are designed to minimize cost. Your compute costs will increase if you collect more diagnostic data or decrease the transfer period. Don't collect more data than you need, and don't forget to disable data collection when you no longer need it. You can always enable it again, even at runtime, as shown in the previous section.
How do I collect failed-request logs from IIS?
By default, IIS doesn't collect failed-request logs. You can configure IIS to collect them by editing the web.config file for your web role, as sketched below.
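As a hedged illustration, a web.config fragment (inside the system.webServer section) that enables failed-request tracing for 4xx and 5xx responses might look like the following; the status-code range and trace areas are examples, and the IIS Failed Request Tracing feature must be available on the role:

<tracing>
  <traceFailedRequests>
    <add path="*">
      <traceAreas>
        <add provider="WWW Server"
             areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module"
             verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="400-599" />
    </add>
  </traceFailedRequests>
</tracing>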
I'm not getting trace information from RoleEntryPoint methods like OnStart. What's wrong?
The methods of RoleEntryPoint are called in the context of WAIISHost.exe, not IIS. Therefore, the configuration information in web.config that normally enables tracing doesn't apply. To resolve this issue, add a .config file to your web role project, and name the file to match the output assembly that contains the RoleEntryPoint code. In the default web role project, the name of the .config file would be WAIISHost.exe.config. Then add the following lines to this file:
<system.diagnostics>
  <trace>
    <listeners>
      <!-- Depending on your SDK version, the type attribute may also need the assembly's version and public key token. -->
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

Now, in the Properties window, set the Copy to Output Directory property to Copy always.

9 Auto scale application block


This topic describes how to host the Autoscaling Application Block in a Microsoft Azure worker role.
This is the most common deployment scenario for the block.
The Autoscaling Application Block uses rules to determine which scaling operations it should
perform on your Azure application and when. You must have a running Autoscaler instance that can
perform the scaling operations. The following code sample shows how you can start and stop an
Autoscaler instance when a worker role starts and stops.
You may decide to include this logic in an existing worker role that also performs other tasks, or
create a worker role that just performs the autoscaling activities.
Note:
The worker role that performs the autoscaling activities can be in the same or a
different hosted service from the application to which you are adding autoscaling
behavior.
C#
public class WorkerRole : RoleEntryPoint
{
    private Autoscaler autoscaler;

    ...

    public override bool OnStart()
    {
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;

        CloudStorageAccount.SetConfigurationSettingPublisher(
            (configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

        DiagnosticMonitorConfiguration dmc =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
        dmc.Logs.BufferQuotaInMB = 4;
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

        autoscaler =
            EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        autoscaler.Start();

        return base.OnStart();
    }

    public override void OnStop()
    {
        autoscaler.Stop();
    }
}

Note:
If you decide to host the block in the same worker role as your application, you
should get the Autoscaler instance and call the Start method in the Run method
of the WorkerRole class instead of in the OnStart method.

To understand and troubleshoot the block's behavior, you must use the log messages that the block
writes. To ensure that the block can write log messages, you must configure logging for the worker
role. By default, the block uses the logging infrastructure from the System.Diagnostics namespace.
The block can also use the Enterprise Library Logging Application Block or a custom logger.
Note:
When you call the Start method of the Autoscaler class, the block attempts to
read and parse the rules in your rules store. If any error occurs during the reading
and validation of the rules, the block will log the exception with a "Rules store
exception" message and continue. You should correct the error condition identified
in the log message and save a new version of the rules to your rules store. The
block will automatically attempt to load your new set of rules.
By default, the block checks for changes in the rules store every 30 seconds. To
change this setting, see the topic "Entering Configuration Information."

For more information about how to configure the System.Diagnostics namespace logger or the
Enterprise Library Logging Application Block logger, see the topic "Autoscaling Application Block
Logging."
For more information about how to select the logging infrastructure that the Autoscaling Application
Block should use, see the topic "Entering Configuration Information."
When the block communicates with the target application, it uses a service certificate to secure the
Azure Service Management API calls that it makes. The administrator must upload the appropriate
service certificate to Azure. For more information, see the topic "Deploying the Autoscaling
Application Block."

9.1 Usage Notes


Here is some additional information:

For more details of the integration of Enterprise Library and Unity, see
"Creating and Referencing Enterprise Library Objects."

If you have multiple instances of your worker role, then the Autoscaler class
can use a lease on an Azure blob to ensure that only a single instance of the
Autoscaler can execute the autoscaling rules at any one time. See the topic
"Entering Configuration Information" for more details.

Note:
The default setting is that the lease is not enabled. If you are planning to run
multiple instances of the worker role that hosts the Autoscaling Application Block,
you must enable the lease.

The block uses the FromConfigurationSetting method in the Azure Storage API to read connection strings from the .cscfg file. Therefore, you must call the SetConfigurationSettingPublisher method, as shown in the sample code.

It is important to call the Stop method in the Autoscaler class when the
worker stops. This ensures that the block releases its lease on the blob before
the role instance stops.

The block uses information collected by Azure diagnostics to evaluate some reactive rules.

10 Get started with Azure Queue storage using .NET

10.1.1.1.1 Tip

Try the Microsoft Azure Storage Explorer


Microsoft Azure Storage Explorer is a free, standalone app from Microsoft
that enables you to work visually with Azure Storage data on Windows, OS X,
and Linux.

10.2 Overview
Azure Queue storage provides cloud messaging between application
components. In designing applications for scale, application components are
often decoupled, so that they can scale independently. Queue storage
delivers asynchronous messaging for communication between application
components, whether they are running in the cloud, on the desktop, on an
on-premises server, or on a mobile device. Queue storage also supports
managing asynchronous tasks and building process workflows.

10.2.1 About this tutorial
This tutorial shows how to write .NET code for some common scenarios using Azure Queue storage. Scenarios covered include creating and deleting queues and adding, reading, and deleting queue messages.
Estimated time to complete: 45 minutes
Prerequisites:

Microsoft Visual Studio

Azure Storage Client Library for .NET

Azure Configuration Manager for .NET

An Azure storage account

10.2.1.1.1 Note

We recommend that you use the latest version of the Azure Storage Client Library for .NET to complete this tutorial. The latest version of the library is 7.x, available for download on NuGet. The source for the client library is available on GitHub.
If you are using the storage emulator, note that version 7.x of the client library requires at least version 4.3 of the storage emulator.

10.3 What is Queue Storage?


Azure Queue storage is a service for storing large numbers of messages that
can be accessed from anywhere in the world via authenticated calls using
HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a
queue can contain millions of messages, up to the total capacity limit of a
storage account.
Common uses of Queue storage include:

Creating a backlog of work to process asynchronously

Passing messages from an Azure web role to an Azure worker role

10.4 Queue Service Concepts


The Queue service contains the following components:

URL format: Queues are addressable using the following URL format:
http:// <storage account> .queue.core.windows.net/ <queue>
The following URL addresses a queue in the diagram:

http://myaccount.queue.core.windows.net/images-to-download

Storage Account: All access to Azure Storage is done through a storage


account. See Azure Storage Scalability and Performance Targets for details
about storage account capacity.

Queue: A queue contains a set of messages. All messages must be in a


queue. Note that the queue name must be all lowercase. For information on
naming queues, see Naming Queues and Metadata.

Message: A message, in any format, of up to 64 KB. The maximum time that


a message can remain in the queue is 7 days.

10.5 Create an Azure storage account


The easiest way to create your first Azure storage account is by using the Azure Portal. To learn more, see Create a storage account.
You can also create an Azure storage account by using Azure PowerShell, Azure CLI, or the Storage Resource Provider Client Library for .NET.
If you prefer not to create a storage account at this time, you can also use the Azure storage emulator to run and test your code in a local environment. For more information, see Use the Azure Storage Emulator for Development and Testing.

10.6 Set up your development environment


Next, set up your development environment in Visual Studio so that you are ready to try the code examples provided in this guide.
10.6.1 Create a Windows console application project
In Visual Studio, create a new Windows console application project.
All of the code examples in this tutorial can be added to the Main() method in program.cs in your console application.
Note that you can use the Azure Storage Client Library from any type of .NET application, including an Azure cloud service, an Azure web app, a desktop application, or a mobile application. In this guide, we use a console application for simplicity.

10.6.2 Use NuGet to install the required packages
There are two packages that you'll need to install to your project to complete this tutorial:

Microsoft Azure Storage Client Library for .NET: This package provides programmatic access to data resources in your storage account.

Microsoft Azure Configuration Manager library for .NET: This package provides a class for parsing a connection string from a configuration file, regardless of where your application is running.

You can use NuGet to obtain both packages. Follow these steps:
1. Right-click your project in Solution Explorer and choose Manage NuGet Packages.

2. Search online for "WindowsAzure.Storage" and click Install to install the Storage Client Library and its dependencies.

3. Search online for "ConfigurationManager" and click Install to install the Azure Configuration Manager.
10.6.2.1.1 Note

The Storage Client Library package is also included in the Azure SDK for .NET. However, we recommend that you also install the Storage Client Library from NuGet to ensure that you always have the latest version of the client library.
The ODataLib dependencies in the Storage Client Library for .NET are resolved through the ODataLib (version 5.0.2 and greater) packages available through NuGet, and not through WCF Data Services. The ODataLib libraries can be downloaded directly or referenced by your code project through NuGet. The specific ODataLib packages used by the Storage Client Library are OData, Edm, and Spatial. While these libraries are used by the Azure Table storage classes, they are required dependencies for programming with the Storage Client Library.
10.6.3 Determine your target environment
You have two environment options for running the examples in this guide:

You can run your code against an Azure Storage account in the cloud.

You can run your code against the Azure storage emulator. The storage emulator is a local environment that emulates an Azure Storage account in the cloud. The emulator is a free option for testing and debugging your code while your application is under development. The emulator uses a well-known account and key. For more details, see Use the Azure Storage Emulator for Development and Testing.

If you are targeting a storage account in the cloud, copy the primary access key for your storage account from the Azure Portal. For more information, see View and copy storage access keys.
10.6.3.1.1 Note

You can target the storage emulator to avoid incurring any costs associated with Azure Storage. However, if you do choose to target an Azure storage account in the cloud, costs for performing this tutorial will be negligible.
10.6.4 Configure your storage connection string
The Azure Storage Client Library for .NET supports using a storage connection string to configure endpoints and credentials for accessing storage services. The best way to maintain your storage connection string is in a configuration file.
For more information about connection strings, see Configure a Connection String to Azure Storage.

10.6.4.1.1 Note

Your storage account key is similar to the root password for your storage account. Always be careful to protect your storage account key. Avoid distributing it to other users, hard-coding it, or saving it in a plain-text file that is accessible to others. Regenerate your key using the Azure Portal if you believe it may have been compromised.
To configure your connection string, open the app.config file from Solution Explorer in Visual Studio. Add the contents of the <appSettings> element shown below. Replace account-name with the name of your storage account, and account-key with your account access key:
xml
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
</startup>
<appSettings>
<add key="StorageConnectionString"
value="DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key" />
</appSettings>
</configuration>

For example, your configuration setting will be similar to:
xml

<add key="StorageConnectionString"
value="DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=nYV0gln6fT7mvY+rxu2iWAEyzPKITGkhM88J8HUoyofvK7C6fHcZc2kRZp6cKgYRUM74lHI84L50Iau1+9hPjB==" />

To target the storage emulator, you can use a shortcut that maps to the well-known account name and key. In that case, your connection string setting will be:
xml
<add key="StorageConnectionString" value="UseDevelopmentStorage=true;" />

10.6.5 Add namespace declarations
Add the following using statements to the top of the program.cs file:
C#
using Microsoft.Azure; // Namespace for CloudConfigurationManager
using Microsoft.WindowsAzure.Storage; // Namespace for CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Queue; // Namespace for Queue storage types

10.6.6 Parse the connection string
The Microsoft Azure Configuration Manager Library for .NET provides a class for parsing a connection string from a configuration file. The CloudConfigurationManager class parses configuration settings regardless of whether the client application is running on the desktop, on a mobile device, in an Azure virtual machine, or in an Azure cloud service.
To reference the CloudConfigurationManager package, add the following using directive:
C#
using Microsoft.Azure; // Namespace for CloudConfigurationManager

Here's an example that shows how to retrieve a connection string from a configuration file:
C#
// Parse the connection string and return a reference to the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

Using the Azure Configuration Manager is optional. You can also use an API like the .NET Framework's ConfigurationManager class, as in the sketch below.
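For example, here is a minimal alternative that reads the same app setting with System.Configuration (this assumes your project references System.Configuration and uses the StorageConnectionString setting shown earlier):
C#
// Requires a project reference to System.Configuration.
using System.Configuration;
using Microsoft.WindowsAzure.Storage;

// Read the connection string from app.config and parse it into a storage account reference.
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);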
10.6.7 Create the Queue service client
The CloudQueueClient class enables you to retrieve queues stored in Queue storage. Here's one way to create the service client:
C#
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

Now you are ready to write code that reads data from and writes data to
Queue storage.

10.7 Create a queue


This example shows how to create a queue if it does not already exist:
C#

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();

10.8 Insert a message into a queue


To insert a message into an existing queue, first create a new CloudQueueMessage. Next, call the AddMessage method. A CloudQueueMessage can be created from either a string (in UTF-8 format) or a byte array. Here is code which creates a queue (if it doesn't exist) and inserts the message 'Hello, World':
C#
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.

CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist.


queue.CreateIfNotExists();

// Create a message and add it to the queue.


CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);

10.9 Peek at the next message


You can peek at the message in the front of a queue without removing it from the queue by calling the PeekMessage method.
C#
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Peek at the next message


CloudQueueMessage peekedMessage = queue.PeekMessage();

// Display message.
Console.WriteLine(peekedMessage.AsString);

10.10 Change the contents of a queued message

You can change the contents of a message in-place in the queue. If the message represents a work task, you could use this feature to update the status of the work task. The following code updates the queue message with new contents, and sets the visibility timeout to extend another 60 seconds. This saves the state of work associated with the message, and gives the client another minute to continue working on the message. You could use this technique to track multi-step workflows on queue messages, without having to start over from the beginning if a processing step fails due to hardware or software failure. Typically, you would keep a retry count as well, and if the message is retried more than n times, you would delete it. This protects against a message that triggers an application error each time it is processed.
C#
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.

CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Get the message from the queue and update the message contents.
CloudQueueMessage message = queue.GetMessage();
message.SetMessageContent("Updated contents.");
queue.UpdateMessage(message,
TimeSpan.FromSeconds(60.0), // Make it invisible for another 60 seconds.
MessageUpdateFields.Content | MessageUpdateFields.Visibility);

10.11 De-queue the next message

Your code de-queues a message from a queue in two steps. When you call GetMessage, you get the next message in a queue. A message returned from GetMessage becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the message from the queue, you must also call DeleteMessage. This two-step process of removing a message assures that if your code fails to process a message due to hardware or software failure, another instance of your code can get the same message and try again. Your code calls DeleteMessage right after the message has been processed.
C#
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Get the next message


CloudQueueMessage retrievedMessage = queue.GetMessage();

//Process the message in less than 30 seconds, and then delete the message
queue.DeleteMessage(retrievedMessage);

10.12 Use the Async-Await pattern with common Queue storage APIs

This example shows how to use the Async-Await pattern with common Queue storage APIs. The sample calls the asynchronous version of each of the given methods, as indicated by the Async suffix of each method. When an async method is used, the async-await pattern suspends local execution until the call completes. This behavior allows the current thread to do other work, which helps avoid performance bottlenecks and improves the overall responsiveness of your application. For more details on using the Async-Await pattern in .NET, see Async and Await (C# and Visual Basic).
C#
// Create the queue if it doesn't already exist
if(await queue.CreateIfNotExistsAsync())
{

Console.WriteLine("Queue '{0}' Created", queue.Name);


}
else
{
Console.WriteLine("Queue '{0}' Exists", queue.Name);
}

// Create a message to put in the queue


CloudQueueMessage cloudQueueMessage = new CloudQueueMessage("My message");

// Async enqueue the message


await queue.AddMessageAsync(cloudQueueMessage);
Console.WriteLine("Message added");

// Async dequeue the message


CloudQueueMessage retrievedMessage = await queue.GetMessageAsync();
Console.WriteLine("Retrieved message with content '{0}'", retrievedMessage.AsString);

// Async delete the message


await queue.DeleteMessageAsync(retrievedMessage);
Console.WriteLine("Deleted message");

10.13 Leverage additional options for de-queuing messages

There are two ways you can customize message retrieval from a queue. First, you can get a batch of messages (up to 32). Second, you can set a longer or shorter invisibility timeout, allowing your code more or less time to fully process each message. The following code example uses the GetMessages method to get 20 messages in one call. Then it processes each message using a foreach loop. It also sets the invisibility timeout to five minutes for each message. Note that the 5 minutes starts for all messages at the same time, so after 5 minutes have passed since the call to GetMessages, any messages which have not been deleted will become visible again.
C#
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

foreach (CloudQueueMessage message in queue.GetMessages(20,


TimeSpan.FromMinutes(5)))
{
// Process all messages in less than 5 minutes, deleting each message after processing.
queue.DeleteMessage(message);
}

10.14 Get the queue length

You can get an estimate of the number of messages in a queue. The FetchAttributes method asks the Queue service to retrieve the queue attributes, including the message count. The ApproximateMessageCount property returns the last value retrieved by the FetchAttributes method, without calling the Queue service.
C#
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Fetch the queue attributes.


queue.FetchAttributes();

// Retrieve the cached approximate message count.


int? cachedMessageCount = queue.ApproximateMessageCount;

// Display number of messages.


Console.WriteLine("Number of messages in queue: " + cachedMessageCount);

10.15 Delete a queue

To delete a queue and all the messages contained in it, call the Delete method on the queue object.
C#
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Delete the queue.


queue.Delete();

11 Azure Exam Prep: Fault Domains and Update Domains

11.1 First: Azure VMs

11.1.1 Fault Domains

When you put VMs into an availability set, Azure guarantees to spread them across Fault Domains and Update Domains. A Fault Domain (FD) is essentially a rack of servers. It consumes subsystems like network, power, cooling etc. So 2 VMs in the same availability set means Azure will provision them into 2 different racks so that if, say, the network or the power failed, only one rack would be affected.
I discovered there are always only 2 fault domains: FD0 and FD1. It makes it seem like your VMs only get spread across 2 racks, but that's not the case. They can be spread across more racks if you've got lots of VMs. But as far as your availability set is concerned, FD0 and FD1 are a way of saying "this bit of infrastructure (FD0) is different to this bit (FD1)". As you boot VMs into an availability set, they get allocated like this: FD0, FD1, FD0, FD1, FD0, FD1 and so on. The pattern never changes. You've probably seen this diagram hundreds of times:

Figure 1: Fault Domains and Availability Sets


You can see IIS1 and 2 are the web front end. They're both in different fault domains. If something happens to the power going to rack 1, IIS1 will fail and so will SQL1, but the other 2 servers will continue to operate.
Now, if you add more servers to each availability set, this is what happens:

Figure 2: FD0 and FD1 are populated.


Azure continues to distribute them across fault domains. Looking at a list of the 4 IIS VMs would give a table like this:

| VM | Fault Domain |
| --- | --- |
| IIS1 | FD0 |
| IIS2 | FD1 |
| IIS3 | FD0 |
| IIS4 | FD1 |

They are allocated to FDs in the order in which they boot. So if I'd booted these systems in reverse order then they'd all be in different FDs.

11.1.2 Update Domains

Sometimes you need to update your app, or Microsoft needs to update the host on which your VM(s) are running. Note that with IaaS VMs, Microsoft does not automatically update your VMs. You have complete control (and responsibility) over that. But say a serious security vulnerability is identified and a patch created. It's in Microsoft's interest to get that applied to the host underneath your VM as soon as possible. So how is that done without taking your service offline? Update Domains. It's similar to the FD method, only this time, instead of an accidental failure, there is a purposeful move to take down one (or more) of your servers. So to make sure your service doesn't go offline because of an update, it will walk through your update domains one after the other. Whereas FDs are assigned in the pattern 0, 1, 0, 1, 0, 1, 0, 1, UDs are assigned 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4.
Both FDs and UDs are assigned in the order that Azure discovers them as they are provisioned. So if you provision machines in the order Srv0, Srv1, Srv2, Srv3, Srv4, Srv5, Srv6, Srv7, Srv8, Srv9, Srv10, Srv11, you'll end up with a table that looks like this:
| VM | Fault Domain | Update Domain |
| --- | --- | --- |
| Srv0 | FD0 | UD0 |
| Srv1 | FD1 | UD1 |
| Srv2 | FD0 | UD2 |
| Srv3 | FD1 | UD3 |
| Srv4 | FD0 | UD4 |
| Srv5 | FD1 | UD0 |
| Srv6 | FD0 | UD1 |
| Srv7 | FD1 | UD2 |
| Srv8 | FD0 | UD3 |
| Srv9 | FD1 | UD4 |
| Srv10 | FD0 | UD0 |
| Srv11 | FD1 | UD1 |

You can see that UDs loop around a count of 5 (0, 1, 2, 3, 4).
You can see that in the following screen shot of a collection of 9 VMs in a single
availability set.

Figure 3: Fault and Update Domains in a Cloud Service comprised of Azure VMs
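If it helps to reason about the allocation pattern, here is a small illustrative C# fragment (not an Azure API; it simply reproduces the 2-FD / 5-UD boot-order pattern described above):

// Illustrative only: reproduce the FD/UD allocation order for 12 VMs booted in sequence.
for (int i = 0; i < 12; i++)
{
    int faultDomain = i % 2;   // FDs alternate: 0, 1, 0, 1, ...
    int updateDomain = i % 5;  // UDs cycle through five values: 0, 1, 2, 3, 4, 0, ...
    Console.WriteLine("Srv{0}: FD{1}, UD{2}", i, faultDomain, updateDomain);
}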

11.2 Second: Azure Cloud Services

With Azure VMs, FDs and UDs are assigned to the VMs in an availability set in the order in which they are provisioned. With Cloud Services it's almost the same, but roles are used instead of availability sets. For example, you might have a web role with 8 instances. The role would be assigned FDs and UDs as the instances are provisioned and discovered by Azure. The order of the instance numbers is not necessarily the order in which they are successfully provisioned. It's just a fact of life that some machines that start the provisioning process slow down in the middle, and machines that started later catch up and overtake them.
Nominally, the assignment follows the same pattern:

| Instance Number | Fault Domain | Update Domain |
| --- | --- | --- |
| WebRole_IN0 | FD0 | UD0 |
| WebRole_IN1 | FD1 | UD1 |
| WebRole_IN2 | FD0 | UD2 |
| WebRole_IN3 | FD1 | UD3 |
| WebRole_IN4 | FD0 | UD4 |
| WebRole_IN5 | FD1 | UD0 |
| WebRole_IN6 | FD0 | UD1 |
| WebRole_IN7 | FD1 | UD2 |

You can also see the same pattern in this shot of a Cloud Service:

but notice how, by the time we get to page three (there are 100 servers in this cloud service), it starts to break down:

That's because UDs and FDs are assigned in the order that instances are provisioned. Some of them provision more quickly than others and that causes the pattern to break down. But there are still the correct number of FDs and UDs.
In Cloud Services, you can also set the number of update domains in the service model's .csdef file. By default it's set to 5, but you can increase that to a maximum of 20.
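If you want to change it, the setting is the upgradeDomainCount attribute on the ServiceDefinition element of the .csdef file. A minimal sketch (the service name is a placeholder):

<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
                   upgradeDomainCount="20">
  <!-- Web and worker role definitions go here. -->
</ServiceDefinition>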

11.2.1 Practice Questions

Q: If you add a new VM to an availability set, how many extra fault domains and update domains will you get if there are already 4 instances in the availability set?
A:

| VMs | UD | FD |
| --- | --- | --- |
| Srv0 | 1 UD (UD0) | 1 FD (FD0) |
| Srv1 | 2 UDs (UD0 & UD1) | 2 FDs (FD0 & FD1) |
| Srv2 | 3 UDs | 2 FDs |
| Srv3 | 4 UDs | 2 FDs |
| Add new VM = Srv4 | You get one extra UD (UD4) | No extra FDs |

Q: In a Cloud Service that has a Web Role with 12 instances, how many FDs and UDs will you get by default?
A: 2 FDs (FD0 and FD1). 5 UDs (UD0, UD1, UD2, UD3, UD4).

Q: You have set the maximum number of UDs in the .csdef for a Cloud Service to 20. You use Azure Virtual Machines to provision 18 VMs. How many Update Domains will you have as a result?
A: This is a bit of a trick question: the .csdef is only used for Cloud Services, not for VMs. So regardless of what you set, or even how you try to do it, Azure VM UDs come in groups of 5. With 18 VMs, that means you'll have 5 UDs, UD0 to UD4:
| VM | Update Domain |
| --- | --- |
| VM0 | UD0 |
| VM1 | UD1 |
| VM2 | UD2 |
| VM3 | UD3 |
| VM4 | UD4 |
| VM5 | UD0 |
| VM6 | UD1 |
| VM7 | UD2 |
| VM8 | UD3 |
| VM9 | UD4 |
| VM10 | UD0 |
| VM11 | UD1 |
| VM12 | UD2 |
| VM13 | UD3 |
| VM14 | UD4 |
| VM15 | UD0 |
| VM16 | UD1 |
| VM17 | UD2 |

Q: If you have 13 VMs in an availability set, how many VMs will be in UD0?
A: Use the table above. You can see UD0 lines up with VM0, VM5 and VM10. So there
will be 3 VMs in UD0.

12 Azure Exam Prep: Virtual Networks

If you see any of the marketecture material on the Internet, you'd rapidly come to the conclusion that VNets are a way to securely bind your on-premises corporate network to the network you have in Azure on which all your servers and resources are sat. The idea behind this is that the stuff you have in Azure becomes a bit like a branch office attached to your private network. The connection medium between the 2 is the Internet and the traffic is carried over SSL, which makes it very difficult for an eavesdropper to listen in on your communications. But did you know you can use VNets to also connect, let's say, some servers in the Azure Dublin data centre to some servers in the Azure Singapore data centre? You can also use it to connect, say, an Azure Website running WordPress to an Azure VM running MySQL: it's a way of bridging the gap between PaaS and IaaS services in Azure. It also gives you a good degree of control over the way the internal network works. This might be important if you intend to deploy core infrastructure such as Active Directory, where the sites and services architecture assumes you have control over such things as sites and subnets.
There are 2 things to get your head around: address spaces and subnets. An address space is the contiguous universe of IP addresses between 2 limits: the lower address and the upper address. The subnet is an IP subnet in the traditional sense of the word, but in this case it lives within an address space.

12.1

Address Spaces

Address spaces and subnets are usually declared in CIDR notation: a number like 10.0.0.0/8. That's a 32-bit base IP address plus a mask. This example means mask off the top 8 bits, and the range of addresses that are left in the last 24 bits is the address space.

Figure 1: Address Space map for 10.0.0.0/8


The above example shows an address space from 10.0.0.0 (the lower address) to
10.255.255.255 (the upper address). Another example might be 10.0.0.0/16 as shown in
the following figure.

Figure 2: Address Space map for 10.0.0.0/16


That would mask off the top 16 bits and leave the final 16 bits for an address space
range of 10.0.0.0 to 10.0.255.255.
Whatever address space you define, you are creating a space with a collection of
addresses (from the lowest to the highest), hence the term address space. Let's imagine you had a VNet with a couple of IaaS VMs connected to it, like the figure below:

Figure 3: VMs connected to a VNet


The text in red represents the address space. Note it's the same for each VM. The text in black represents the individual address within that space. These 2 machines can communicate with each other on the VNet. Their addresses are within the permitted range for this address space of 10.0.0.0 to 10.0.255.255. But imagine if the IP address of the machine on the right had somehow got set to 10.1.20.31. It would be unable to communicate because it's outside the range. Azure's infrastructure would specifically prevent communication to or from that machine. It would become an island on its own.

12.2

Subnets

A subnet must exist within an address space. That means the range of addresses in a
subnet must fit inside the address space. So hopefully you can see how subnetting an
address space gives you a way to logically divide up the address space.
Look back up at figure 1. The lowest-order 24 bits represent all the possible addresses
you could have in the address space. By further dividing those 24 bits, you could
segment the network. The higher order bits would determine which subnet you are in,
and the lower order bits would define which host on that particular network you are on.

Figure 4: Subnet within an address space.


Figure 4 shows how the remaining 24 bits have been subnetted. The following table
gives a few examples of how to break these ideas down.
The table below assumes an address space of 10.0.0.0/8

| IP Address | Subnet | Host (within subnet) |
| --- | --- | --- |
| 10.0.0.20 | 0 | 0.20 |
| 10.1.0.20 | 1 | 0.20 |
| 10.50.2.20 | 50 | 2.20 |
| 10.200.0.200 | 200 | 0.200 |
| 10.250.255.147 | 250 | 255.147 |
| 11.0.0.10 | invalid (it's outside the address space) | invalid (it's outside the address space) |
| 8.0.0.12 | invalid (it's outside the address space) | invalid (it's outside the address space) |

This is all very easy because I've divided the address space and subnet boundaries up on very neat byte boundaries, so the numbers fit exactly in the dotted address notation. But you can also create address spaces and subnets that don't divide on these boundaries. You could, for example, have a subnet defined by the top, say, 13 bits of a 32-bit address. The reason it makes things more difficult is because part of the subnet identifier would come from the second number and part of it would come from the third number of an IP address. And there's nothing to be done other than visualise the address space in your mind, or draw it out.

Figure 5: Non byte-boundary subnet.

This would give a CIDR address for the address space as 10.0.0.0/8 and a subnet address of 10.0.0.0/13. If you count in from the most-significant bit, you can see the subnet ends on the 13th bit. It certainly makes for complexity when trying to do calculations in your head in an exam! I'm hoping you can see that as you make the number of bits you use for the subnet bigger, you take them away from the maximum number of hosts you can have within a subnet. So these things are a compromise against each other. If you have no need (and foresee no need way into the future) to be really tight up against the boundaries with subnets and address spaces, and you're not at or near the limit, make life easy for yourself: set address spaces and subnets up on byte boundaries. I reckon in the exam, if they want you to calculate something like this, they'll give you a question with everything lining up on byte boundaries. After all, it's not a TCP/IP exam.
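If you'd rather check this sort of arithmetic than do it in your head, here is a short illustrative C# program (not part of any Azure SDK) that computes the first and last address of a CIDR prefix, using the 10.0.0.0/13 subnet discussed above as the example:

using System;
using System.Net;

class CidrDemo
{
    static void Main()
    {
        // Example: the 10.0.0.0/13 subnet discussed above.
        PrintRange("10.0.0.0", 13);
    }

    static void PrintRange(string baseAddress, int prefixLength)
    {
        // Convert the dotted address to a 32-bit unsigned integer.
        byte[] bytes = IPAddress.Parse(baseAddress).GetAddressBytes();
        uint ip = ((uint)bytes[0] << 24) | ((uint)bytes[1] << 16) | ((uint)bytes[2] << 8) | bytes[3];

        // Build the mask from the prefix length, then derive the lowest and highest addresses.
        uint mask = prefixLength == 0 ? 0u : uint.MaxValue << (32 - prefixLength);
        uint lower = ip & mask;
        uint upper = lower | ~mask;

        Console.WriteLine("{0}/{1}: {2} - {3}", baseAddress, prefixLength, ToDotted(lower), ToDotted(upper));
    }

    static string ToDotted(uint value)
    {
        return string.Format("{0}.{1}.{2}.{3}",
            (value >> 24) & 0xFF, (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF);
    }
}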

12.3 What's in a Virtual Network?

The security in Azure works in such a way that IP addresses that are not known to the infrastructure (in other words, it didn't issue them in the first place) are not routed anywhere. The result is that if you want things to talk to each other, you've got to let Azure's infrastructure give you a dynamically assigned IP address. You might be getting all uppity at this point, thinking you want to deploy something like an AD Domain Controller and best practice says to avoid DHCP. Well, with VNets you can relax. You will be issued an address over the DHCP protocol, but the lease duration is set to either 168 years or until you delete the VNet, whichever comes first. I'm pretty sure you'll delete the VNet first and, in fact, I haven't thought about what will happen in 168 years. It'll be somebody else's problem by then.
This means whenever you shut a machine down, when it reboots, it'll get the same IP address back again. So now you can relax about your AD Domain Controller deployments in Azure. The fact that every time you boot the machine it gets the same address makes it just as good as a statically assigned IP address.
Each VNet reserves a few addresses. You might have noticed that whenever you fire up the first machine in a VNet it gets a .4 address. That's because .0 is reserved for broadcast requests while .1 is reserved for the default gateway (router). .2 and .3 are reserved for a special sort of gateway that you might later configure to communicate with your on-premises network. Note that .255 is also a broadcast address.
In the following figure, I configured a VNet thus:

Address space (PlankyNet): 10.0.0.0/8
Subnet (SubNet-1): 10.0.0.0/16

Figure 6: ipconfig output for a VNet.

You can see this is the first host in the subnet because it is assigned a .4 address. You can see it's a 16-bit subnet, because the subnet mask (255.255.0.0) masks off the top 16 bits. You can also see the default gateway (router) is at 10.0.0.1, as predicted.
It doesn't matter how large or small the subnet is, the range of available addresses is automatically reduced by 5 (because of the 5 reserved addresses mentioned above). So, for example, a 16-bit subnet (which gives a total of 65536 host addresses) is reduced to 65531. An 8-bit subnet has a maximum of 251 addresses (256 - 5). You can see this illustrated in the following figure.

Figure 7: Subnet range displayed in the Azure portal.

In Figure 7 you can see the 16-bit subnet mask results in 65531 available addresses, giving a usable address range of 10.0.0.4 to 10.0.255.254.
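For reference, a classic network configuration file describing a VNet like the PlankyNet example above might look roughly like this; the site name, location and exact schema details are illustrative, so export your own configuration to see the precise form:

<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="PlankyNet" Location="North Europe">
        <AddressSpace>
          <AddressPrefix>10.0.0.0/8</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="SubNet-1">
            <AddressPrefix>10.0.0.0/16</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>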

12.4 Practice Questions

Q: How many hosts can occupy a single subnet in the following VNet?
Address space: 10.0.0.0/16
Subnet: 10.0.0.0/24
A: The /24 means the first 3 octets fill in from the most significant bit and mask off the first 3 bytes. That leaves one byte (or 8 bits) of host space, or 256 hosts. But 5 addresses have to be subtracted, which leaves 251 hosts.

Q: How many hosts can occupy a single subnet in the following VNet?
Address space: 10.0.0.0/8
Subnet: 10.0.0.0/24
A: This is just a question to test that you understand the difference between the address space and the subnet. The subnet definition in this question is the same as in the first question, so the number of hosts doesn't change: 251.

Q: You have the following VNet:
Address space: 10.0.0.0/8
Subnet: 10.0.0.0/16

You connect to a VM on this network over RDP and assign it the fixed IP address 10.0.0.100. Will you be able to connect to a VM at IP address 10.0.0.101?
A: No: the Azure VNet infrastructure prevents traffic to/from hosts it didn't assign an IP address to.

I hope this will help you if you're taking the Azure Infrastructure Exam (533).

13 Warnings sent to customers when Azure is about to be updated

I sometimes get customers asking me about the warnings they'll get when updates are rolled out across Azure. Well, at the bottom of this post is an example email sent to me. Notice the emphasis placed on:

Putting multiple VMs into availability sets

Creating multiple instances of each role in Cloud Services

I can't remember exactly, but Igal Figlin from Microsoft did some background research into this and found that 40% (it might be higher, I can't remember exactly; you can watch the video here) of deployments are not in availability sets. Have a read of the email below and you'll start to realise how much risk you are exposing yourself to if you don't use multiple VMs in availability sets.
When you put VMs into availability sets they are also distributed across up to 5 update domains. When Microsoft updates Azure, they'll walk from one update domain to the next. You can see what they are saying in this email: they'll leave 30 minutes between updating each update domain. Let's say you have 2 machines in an availability set. They'll be spread across 2 fault domains and 2 update domains. That means if an infrastructure fault occurs (like, say, power or a network segment), only one of your VMs will be affected. It also means if Microsoft has to do an update, it will take one of your machines out of the configuration at a time.
If you want to be super-cautious, you could protect against the scenario that while Microsoft is walking the update domains in your availability set, you also get an infrastructure failure that could take out a further machine. The table below shows how.
| | Update Domain 0 | Update Domain 1 | Update Domain 2 |
| --- | --- | --- | --- |
| Fault Domain 0 | Instance 0 | | Instance 2 |
| Fault Domain 1 | | Instance 1 | |

Imagine the update process had done the update on the instance in Update Domain 0, it had then walked on to Update Domain 1 and was in the middle of updating that instance. Instance 1 is now offline. At the same time a power failure occurs to the rack on Fault Domain 0. That would cause Instance 0 and Instance 2 to also be taken offline. You'd now have an availability set with no running machines. You can counter this by adding a VM to the availability set. Because there can only ever be one Update Domain in an availability set undergoing an update, you are protected. Let's say you are in the middle of updating one of the services yourself. Your update will be stalled, the Microsoft update will complete, and then your update will continue. In other words, updates are applied to an update domain synchronously. And if you are in the middle of updating one Update Domain, Microsoft won't start simultaneously updating a different Update Domain. So the following table will remove all risk from simultaneous Update Domain and Fault Domain operations.

| | Update Domain 0 | Update Domain 1 | Update Domain 2 | Update Domain 3 |
| --- | --- | --- | --- | --- |
| Fault Domain 0 | Instance 0 | | Instance 2 | |
| Fault Domain 1 | | Instance 1 | | Instance 3 |

The failure of any Fault Domain will take out 2 instances and a simultaneous update can
take out only one Update Domain. This means a maximum of 3 instances can be offline
because of simultaneous Update Domain/Fault Domain operations. That would leave you
with one running instance.
You'd have to be very unlucky to have an infrastructure failure occur while an update is going on. The availability SLA takes the above scenarios into consideration: you only have to have 2 instances in your availability sets to enjoy the uptime guarantee. If you are unlucky enough to suffer a double problem and the availability drops below the guarantee, then Microsoft compensates you.
I made a post about Update Domains and Fault Domains a couple of weeks ago. Interesting stuff if you're going to take the Azure Infrastructure exam.

Anyway, here's the email:


- cut here

Upcoming maintenance will affect deployments of Azure Virtual Machines in availability sets and Cloud Services.

As part of our ongoing commitment to performance, reliability, and security, we sometimes perform maintenance operations in our Azure regions and datacenters. We want to notify you of upcoming maintenance operations that will impact Virtual Machines in an availability set and Cloud Services.
Note: Currently, we're only able to provide 2 days advance notice for updates that impact Virtual Machines in availability sets and Cloud Services. We're working to provide more advance notice in the future.
The following are the planned start times for infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) maintenance operations, provided in both Coordinated Universal Time (UTC) and United States Pacific Daylight Time (PDT). Impacted deployments are listed at the bottom of this email.
Region              PDT                               UTC
North Central US    08:00 Monday, June 1, 2015        15:00 Monday, June 1, 2015
North Europe        08:00 Tuesday, June 2, 2015       15:00 Tuesday, June 2, 2015
East US             08:00 Wednesday, June 3, 2015     15:00 Wednesday, June 3, 2015

Microsoft Azure Virtual Machines (IaaS)


Maintenance operations are split between virtual machines (VMs) that
are and are not in an availability set. This maintenance will impact VMs
in an availability set. VM deployments referenced below will reboot
during this maintenance operation, but temporary storage disk contents
will be retained. We expect the update to finish within 48 hours of
the start time.
Note: If you have a single VM in an availability set, it will still be
impacted by this maintenance operation. In addition, all VMs in the
same availability set are not taken down at the same time; these VMs
are spread across five update domains. Only VMs in the same update
domain for the availability set may be rebooted at the same time, and
there will be at least a 30-minute interval between processing each
update domain. VMs that are in different availability sets may be taken
down at the same time. For more information, please visit the
availability sets documentation webpage.
If you're not already, we recommend using availability sets in your
architecture to ensure higher availability of your service. You can read
our multiple instances service level agreement (SLA) commitment for
Virtual Machines.
To learn more about our planned maintenance, please visit the Planned
maintenance for Azure virtual machines documentation webpage. If you
have questions, please visit the Azure Virtual Machines forums.
To ensure higher availability, the maintenance is scheduled in region
pairs. To help determine whether the reboot you observed on your VM is
due to a planned maintenance event, please visit the Viewing VM
Reboot Logs blog post.
Microsoft Azure Cloud Services (PaaS)
All Cloud Services running web and/or worker roles referenced below
will experience downtime during this maintenance. Cloud Services with
two or more role instances in different upgrade domains will have
external connectivity at least 99.95 percent of the time. Please note
that the SLA guaranteeing service availability only applies to services
that are deployed with more than one instance per role. Azure updates
one upgrade domain at a time. For more information about distribution
of roles across upgrade domains and the update process, please visit
the Update an Azure Service webpage. If you have questions, please
visit the Azure Cloud Services forums.
Please note that email addresses provided for any of the following
account roles also received this communication: account and service
administrators, and co-administrators.
Thank you,
Your Azure Team

14 Azure Web Sites/Web Apps and SSL


I just had a rash of questions about SSL and Azure Web Sites/Web Apps (just for clarity,
Azure Web Sites was renamed to Azure Web Apps a few weeks ago). I've been looking
for an article or post to send people to and am quite amazed at the lack of info. Yes,
there are plenty of posts and articles, but you have to read quite a few of them in the
right order if you want to fill in the blanks. I aim to address all the questions you might
ask in this post.

First, I'd recommend you read my post on how SSL actually works. You might then
find you don't know enough about the relationships between keys, certificates and
signatures, so I'd recommend you watch my crypto primer video.
There is a video of the whole article here

Watch the video full-size on Channel 9.

14.1 SSL is already enabled on your Azure WebSite/WebApp

Yes, that's right. Let's imagine you go to the portal and create an entirely naked website
in the Free tier called plankytronixx.azurewebsites.net; it will already be enabled for SSL.
You can just type https://plankytronixx.azurewebsites.net into the browser address bar
and it will work just fine.

I created plankytronixx.azurewebsites.net 90 seconds ago, as shown above in the figure.


I've done absolutely nothing to the site since I created it. When I connect to it over SSL
it works! And this is a Free WebSite/WebApp.

You can see the certificate that is being used to protect it. More interesting is the cert
path up to the Microsoft Internal Corporate Root.

Of course the disadvantage of this SSL implementation is that you have to use the
.azurewebsites.net address. Even if you map to a new custom domain, like say
plankytronixx.co.uk, the .azurewebsites.net configuration is still there, it still exists, and
you can still connect to it. But users might get suspicious if you ask them for a password
over SSL and they now appear to be on an entirely different site. Most of them won't
know how Azure works, so their suspicions would be reasonable.
You can actually see in the screenshot above that a wildcard cert exists for every single
Azure Web Site/Web App in the world: *.azurewebsites.net. Plus this certificate is
included in the price (and that means the Free tier as well!). So free SSL is available,
but, but, but…

14.2 Custom Domain Names

This is the bit where it gets a bit more complicated, and this is the bit everybody talks
about: when you want to SSL-enable a site to give a URL like
https://plankytronixx.co.uk. First things first: you can't get a custom domain name in
the Free tier. So you can now start to see where the confusion comes in? People say SSL
is not available for Free sites because of that. But as you've just seen, you do get SSL,
only it comes with a collection of limitations. You have to move up to Basic or Standard
to get SSL on custom domain names.
To set up a custom domain name you have to set up a mapping between the IP address
of the web server and the DNS name. You do this at a Domain Registrar. I use Go Daddy.
The record you add is called an A (address) record. So plankytronixx.co.uk might get
mapped to, say, 104.45.81.79. It's dead easy to configure this at a domain registrar: you
just update the zone file. They'll give you some kind of tool to do it, usually web-based.
This is what it looks like on Go Daddy.

Getting Azure to recognise it involves a hurdle to jump. If you just update the A record
and nothing else, Azure will give you this error:

It's because Azure wants to be assured that you own this domain. It wants you to
prove that you own it. It does this using the "azure websites verify" process, more
normally known as awverify.

14.2.1.1 Anatomy of an HTTP GET request

Let's have a quick review of what happens when you type http://plankytronixx.co.uk into
a web browser.
1. The browser does a DNS query against the name plankytronixx.co.uk.
2. The DNS server returns the IP address; let's say it's 104.45.81.79.
3. The web browser formats an HTTP GET and sends it to IP address 104.45.81.79.
4. In the header of the GET request is the host name plankytronixx.co.uk.
5. Azure sits at IP address 104.45.81.79; it inspects the host name in the header
and looks to see if it has a site that matches the name.
6. If it does, it returns the page.
7. If it doesn't, you get the error in the screenshot above.
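
To make steps 3 to 5 concrete, here is a minimal C# sketch (not part of the original
article) that opens a raw TCP connection to the example IP address and sends a GET whose
Host header names the site. The IP address and host name are simply the examples used
above; substitute your own values.

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class HostHeaderDemo
{
    static void Main()
    {
        // 104.45.81.79 is the example address from this article; in reality you would
        // use whatever your DNS A record points at.
        using (var tcp = new TcpClient("104.45.81.79", 80))
        using (var stream = tcp.GetStream())
        {
            // Steps 3 and 4: the GET is sent to the IP address, and the Host header
            // carries the site name.
            var request = "GET / HTTP/1.1\r\n" +
                          "Host: plankytronixx.co.uk\r\n" +
                          "Connection: close\r\n\r\n";
            var bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            // Steps 5 to 7: Azure inspects that Host header and either returns the
            // matching site's page or the error page shown above.
            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}
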
When you are setting this up for the first time, Azure gives you a piece of DNS
information which you add to your DNS registrar's zone file for your domain
(plankytronixx.co.uk in this case). When you set up the Azure end of the configuration,
Azure sends a query to your domain registrar and expects to see the DNS information it
gave you. By this mechanism you are proving that you have control of the records in the
domain you are trying to configure: that you own the domain.

If this works, any requests received in steps 5 and 6 above return the correct page. If
you can't prove you own the domain, Azure won't configure the domain for you and it
returns the blue-page 404 you can see in the screenshot above.

14.2.2 Proving Domain Ownership

To prove you own the domain, Azure gives you some instructions on the Domains
configuration page

I'll go through it assuming the custom domain name I want is plankytronixx.co.uk and
the default Azure Website/WebApps name is plankytronixx.azurewebsites.net.
1. Add a CNAME record called awverify and point it to
awverify.plankytronixx.azurewebsites.net. I'm showing this in the Go Daddy
screen below:

2. You might have to wait a few minutes (or even a few hours in some cases) for the
DNS records to propagate (don't forget to save the changes).
3. If you get it right, when you type your domain name into the Azure configuration
page you'll get a little green tick in the text box.
4. If something has gone wrong (or the domain update hasn't yet propagated), you
get a little red exclamation mark.
5. You can now save the configuration. You'll see both the custom domain name and
the raw Azure Websites/WebApps name in the portal screen.
Essentially what Azure did during this process was send a DNS query for
awverify.plankytronixx.co.uk, and it expected to see back exactly what it told you to
configure in the first place: awverify.plankytronixx.azurewebsites.net. If it gets that
back, it reasons that you must have the power to make that record change at the DNS
registrar and you must therefore own that domain name.

14.3 Custom Domain SSL

Now you have a custom domain name, you can set up SSL for it (in this case,
plankytronixx.co.uk). But there are 2 types of SSL certificate: traditional certificates,
known as IP-based SSL certs, and SNI (Server Name Indication) certs.

Let's go back to 1994, when SSL was first introduced by Netscape. There weren't really
all that many web servers running. And the assumption made at the time was that each
web site ran on a single server with a single IP address. But these days, IP addresses are
very scarce resources. It's not unusual to have many thousands, or even tens of
thousands, of web sites all running at the same IP address. One of the things that helped
this was the introduction of the host name in the HTTP header, as mentioned in Anatomy
of an HTTP GET request above.
You can still use IP-based certificates for SSL with Azure. But there's a 1:1 mapping
between the certificate and the site it's attached to. For an IP-based certificate, it will
run on exactly one IP address. You'd think it must be possible to use the host-header
trick I mentioned above. But the trouble is that the SSL session is set up before the
HTTP request is executed. It's therefore not possible for the server to see which site to
route the traffic to. When there's only one site to send the traffic to, there's no decision
to make. But it means the site needs to have its own dedicated public IP address. The
disadvantage is that it costs more as a result.
SNI certificates were introduced to get round the problem of the 1:1 mapping between
sites and IP addresses. When an SNI certificate is used, the browser sends the host
name as part of the SSL setup. But you've probably already spotted the problem? Older
browsers that don't support SNI certificates won't work. So you have to decide between
high coverage and higher costs (IP-based certs) or lower coverage and lower costs
(SNI-based certs). Your ball…
The certificate needs to be in a format that can transport not only the public key, but
also the private key. This usually means there is some form of protection on the file that
contains the certificate, like a password. But there's another problem. Normally, the
server on which the certificate sits will generate both the public and the private key. You
will then send the public key, plus some information about your site (its DNS name for
example) and yourself as the site admin. All this gets wrapped up into a Certificate
Signing Request (CSR). Azure Websites don't give you access to the underlying server
(in the way that, say, Cloud Services do). You keep the private key, well, private (you
don't even reveal the private key to the certificate provider). You add the private key
into the certificate file. The file has to be in .pfx format.
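
As a quick illustration of why the .pfx format matters, the hedged C# sketch below loads
a .pfx file (the file name and password are placeholders, not values from this article)
and confirms that, unlike a plain certificate file, it carries the private key as well as the
public certificate.

using System;
using System.Security.Cryptography.X509Certificates;

class PfxDemo
{
    static void Main()
    {
        // "mycert.pfx" and its password are placeholders for the file you export from
        // IIS; the password protects the private key bundled inside the file.
        var cert = new X509Certificate2("mycert.pfx", "pfx-password");

        Console.WriteLine("Subject:          " + cert.Subject);
        Console.WriteLine("Issuer:           " + cert.Issuer);
        Console.WriteLine("Expires:          " + cert.NotAfter);
        Console.WriteLine("Has private key?  " + cert.HasPrivateKey); // true for a well-formed .pfx
    }
}
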
Probably the easiest way to do this is to fire up IIS Manager on a machine and create a
Certificate Request on it. Open the root, then on the main pane double-click Server
Certificates. You'll see an option in the panel on the right hand side to Create
Certificate Request. Go through the wizard and save the text file in a convenient
location. This process generates a public and a private key. The public key is in the CSR
text file. The private key is kept on the machine and is, well, private.

Now you need to present this Certificate Request to a Cert Issuer such as VeriSign,
InstantSSL, Thawte and so on. Or your own CA. Quite a few of the providers will give
you a 30-day or 90-day trial certificate. Typically, if you apply for a certificate for, say,
plankytronixx.co.uk, you'll need an email address at plankytronixx.co.uk. Each issuer
has its own process: some ask for lots of information, others ask for a little. What you
will end up with is a signed certificate. They will email the response to you.
Remember that when you created the request, the private key was kept private? Well,
it's still hanging around. When you complete the certificate request in IIS, it will marry
up with the file the CA sent you in email. You go back and click Complete Certificate
Request.

You'll be asked for the file the CA sent you. Put all the details in and click OK.

You now get the option to export the certificate, and that's a good thing, because
you can export the certificate in .pfx format, which is exactly what Azure
WebSites/WebApps requires.
The easiest way to do this is to find your certificate in the Server Certificates section
of IIS. Identify your certificate, right-click it and select Export. You'll end up with a .pfx
file.
You go to the Azure Portal and click upload a certificate on the Configure page (as
long as you are in the Basic or higher tier).

That's not the end of the process. You now have a certificate in Azure; to get your
Azure Website/WebApp to use it you need to configure the SSL Bindings section of
the page. You'll specify which DNS name you want to protect with SSL, you'll select the
certificate you just uploaded and, depending on the type of cert you bought, you'll
select either SNI or IP-based certificate.

Once you save those settings you're done. Try it out by connecting a browser to your
site. If you used an EV certificate, the address bar turns green. A padlock icon appears
in the address bar of most browsers. If you click it in IE you can actually view the
certificate plus its certification path all the way up to the root authority.

15 Deploy your app to Azure App Service


This article helps you determine the best option to deploy the files for your
web app, mobile app backend, or API app to Azure App Service, and then
guides you to appropriate resources with instructions specific to your
preferred option.+

15.1 Azure App Service deployment overview


Azure App Service maintains the application framework for you (ASP.NET,
PHP, Node.js, etc.). Some frameworks are enabled by default while others,
like Java and Python, may need a simple checkmark configuration to enable
them. In addition, you can customize your application framework, such as the
PHP version or the bitness of your runtime. For more information, see
Configure your app in Azure App Service.
Since you don't have to worry about the web server or application
framework, deploying your app to App Service is a matter of deploying your
code, binaries, content files, and their respective directory structure, to the

/site/wwwroot directory in Azure (or the /site/wwwroot/App_Data/Jobs/


directory for WebJobs). App Service supports the following deployment
options: +
FTP or FTPS: Use your favorite FTP or FTPS enabled tool to move your files to

Azure, from FileZilla to full-featured IDEs like NetBeans. This is strictly a file
upload process. No additional services are provided by App Service, such as
version control, file structure management, etc.

Kudu (Git/Mercurial or OneDrive/Dropbox): Use the deployment engine in

App Service. Push your code to Kudu directly from any repository. Kudu also
provides added services whenever code is pushed to it, including version
control, package restore, MSBuild, and web hooks for continuous deployment
and other automation tasks. The Kudu deployment engine supports 3 different
types of deployment sources:
o

Content sync from OneDrive and Dropbox

Repository-based continuous deployment with auto-sync from GitHub,


Bitbucket, and Visual Studio Team Services

Repository-based deployment with manual sync from local Git


Web Deploy: Deploy code to App Service directly from your favorite Microsoft

tools such as Visual Studio using the same tooling that automates deployment to
IIS servers. This tool supports diff-only deployment, database creation, transforms
of connection strings, etc. Web Deploy differs from Kudu in that application
binaries are built before they are deployed to Azure. Similar to FTP, no additional
services are provided by App Service.

Popular web development tools support one or more of these deployment


processes. While the tool you choose determines the deployment processes
you can leverage, the actual DevOps functionality at your disposal depends
on the combination of the deployment process and the specific tools you

choose. For example, if you perform Web Deploy from Visual Studio with
Azure SDK, even though you don't get automation from Kudu, you do get
package restore and MSBuild automation in Visual Studio. +
15.1.1.1.1 Note

These deployment processes don't actually provision the Azure resources


that your app may need. However, most of the linked how-to articles show
you how to provision the app AND deploy your code to it end-to-end. You can
also find additional options for provisioning Azure resources in the Automate
deployment by using command-line tools section.+

15.2 Deploy via FTP by copying files to Azure manually


If you are used to manually copying your web content to a web server, you
can use an FTP utility to copy files, such as Windows Explorer or FileZilla.+
The pros of copying files manually are:+

Familiarity and minimal complexity for FTP tooling.

Knowing exactly where your files are going.

Added security with FTPS.

The cons of copying files manually are:+

Having to know how to deploy files to the correct directories in App Service.

No version control for rollback when failures occur.

No built-in deployment history for troubleshooting deployment issues.

Potentially long deployment times because many FTP tools don't provide diff-only copying and simply copy all the files.
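
If you do go the manual route, the copy itself can be scripted. Below is a minimal C#
sketch (an assumption-laden illustration, not an official tool): the FTP host name, user
name and password are placeholders that you would replace with the values from your
app's publish profile, and EnableSsl gives you the FTPS protection mentioned above.

using System;
using System.IO;
using System.Net;

class FtpDeployDemo
{
    static void Main()
    {
        // Placeholder endpoint and credentials; copy the real ones from the publish
        // profile you download from the portal.
        var target = "ftp://waws-prod-xx-000.ftp.azurewebsites.windows.net/site/wwwroot/index.html";

        var request = (FtpWebRequest)WebRequest.Create(target);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.EnableSsl = true;                                   // use FTPS
        request.Credentials = new NetworkCredential(@"mysite\$mysite", "deployment-password");

        // Stream the local file straight into the request body.
        using (var file = File.OpenRead("index.html"))
        using (var ftpStream = request.GetRequestStream())
        {
            file.CopyTo(ftpStream);
        }

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Upload status: " + response.StatusDescription);
        }
    }
}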

15.3 Deploy by syncing with a cloud folder


A good alternative to copying files manually is syncing files and folders to
App Service from a cloud storage service like OneDrive and Dropbox. Syncing
with a cloud folder utilizes the Kudu process for deployment (see Overview of
deployment processes).+
The pros of syncing with a cloud folder are:+

Simplicity of deployment. Services like OneDrive and Dropbox provide


desktop sync clients, so your local working directory is also your deployment
directory.

One-click deployment.

All functionality in the Kudu deployment engine is available (e.g. package


restore, automation).

The cons of syncing with a cloud folder are:+

No version control for rollback when failures occur.

No automated deployment, manual sync is required.

15.3.1
How to deploy by syncing with a cloud folder
In the Azure Portal, you can designate a folder for content sync in your
OneDrive or Dropbox cloud storage, work with your app code and content in
that folder, and sync to App Service with the click of a button.+

Sync content from a cloud folder to Azure App Service.

15.4 Deploy continuously from a cloud-based source control service


If your development team uses a cloud-based source code management
(SCM) service like Visual Studio Team Services, GitHub, or BitBucket, you can

configure App Service to integrate with your repository and deploy


continuously. +
Pros of deploying from a cloud-based source control service are:+

Version control to enable rollback.

Ability to configure continuous deployment for Git (and Mercurial where


applicable) repositories.

Branch-specific deployment, can deploy different branches to different slots.

All functionality in the Kudu deployment engine is available (e.g. deployment


versioning, rollback, package restore, automation).

Con of deploying from a cloud-based source control service is:+

Some knowledge of the respective SCM service required.

15.4.1 How to deploy continuously from a cloud-based source control service
In the Azure Portal, you can configure continuous deployment from GitHub,
Bitbucket, and Visual Studio Team Services.+

Continuous Deployment to Azure App Service.

15.5 Deploy from local Git


If your development team uses an on-premises local source code
management (SCM) service based on Git, you can configure this as a
deployment source to App Service. +
Pros of deploying from local Git are:+

Version control to enable rollback.

Branch-specific deployment, can deploy different branches to different slots.

All functionality in the Kudu deployment engine is available (e.g. deployment


versioning, rollback, package restore, automation).

Con of deploying from local Git is:+

Some knowledge of the respective SCM system required.

No turn-key solutions for continuous deployment.

15.5.1
How to deploy from local Git
In the Azure Portal, you can configure local Git deployment.+

Local Git Deployment to Azure App Service.

Publishing to Web Apps from any git/hg repo.

15.6 Deploy using an IDE


If you are already using Visual Studio with an Azure SDK, or other IDE suites
like Xcode, Eclipse, and IntelliJ IDEA, you can deploy to Azure directly from
within your IDE. This option is ideal for an individual developer.+
Visual Studio supports all three deployment processes (FTP, Git, and Web
Deploy), depending on your preference, while other IDEs can deploy to App
Service if they have FTP or Git integration (see Overview of deployment
processes).+
The pros of deploying using an IDE are:+

Potentially minimize the tooling for your end-to-end application life-cycle.


Develop, debug, track, and deploy your app to Azure all without moving outside
of your IDE.

The cons of deploying using an IDE are:+

Added complexity in tooling.

Still requires a source control system for a team project.

Additional pros of deploying using Visual Studio with Azure SDK are:+

Azure SDK makes Azure resources first-class citizens in Visual Studio. Create,
delete, edit, start, and stop apps, query the backend SQL database, live-debug
the Azure app, and much more.

Live editing of code files on Azure.

Live debugging of apps on Azure.

Integrated Azure explorer.

Diff-only deployment.

15.6.1 How to deploy from Visual Studio directly

Get started with Azure and ASP.NET. How to create and deploy a simple
ASP.NET MVC web project by using Visual Studio and Web Deploy.

How to Deploy Azure WebJobs using Visual Studio. How to configure Console
Application projects so that they deploy as WebJobs.

Deploy a Secure ASP.NET MVC 5 app with Membership, OAuth, and SQL
Database to Web Apps. How to create and deploy an ASP.NET MVC web project
with a SQL database, by using Visual Studio, Web Deploy, and Entity Framework
Code First Migrations.

ASP.NET Web Deployment using Visual Studio. A 12-part tutorial series that
covers a more complete range of deployment tasks than the others in this list.
Some Azure deployment features have been added since the tutorial was written,
but notes added later explain what's missing.

Deploying an ASP.NET Website to Azure in Visual Studio 2012 from a Git


Repository directly. Explains how to deploy an ASP.NET web project in Visual

Studio, using the Git plug-in to commit the code to Git and connecting Azure to
the Git repository. Starting in Visual Studio 2013, Git support is built-in and
doesn't require installation of a plug-in.

15.6.2 How to deploy using the Azure Toolkits for Eclipse and IntelliJ IDEA
Microsoft makes it possible to deploy Web Apps to Azure directly from Eclipse
and IntelliJ via the Azure Toolkit for Eclipse and Azure Toolkit for IntelliJ. The
following tutorials illustrate the steps that are involved in deploying a simple
"Hello World" Web App to Azure using either IDE:

Create a Hello World Web App for Azure in Eclipse. This tutorial shows you
how to use the Azure Toolkit for Eclipse to create and deploy a Hello World Web
App for Azure.

Create a Hello World Web App for Azure in IntelliJ. This tutorial shows you how
to use the Azure Toolkit for IntelliJ to create and deploy a Hello World Web App for
Azure.

15.7 Automate deployment by using command-line tools

Automate deployment with MSBuild

Copy files with FTP tools and scripts

Automate deployment with Windows PowerShell

Automate deployment with .NET management API

Deploy from Azure Command-Line Interface (Azure CLI)

Deploy from Web Deploy command line

Using FTP Batch Scripts.

Another deployment option is to use a cloud-based service such as Octopus


Deploy. For more information, see Deploy ASP.NET applications to Azure Web
Sites.+
15.7.1
Automate deployment with MSBuild
If you use the Visual Studio IDE for development, you can use MSBuild to
automate anything you can do in your IDE. You can configure MSBuild to use
either Web Deploy or FTP/FTPS to copy files. Web Deploy can also automate
many other deployment-related tasks, such as deploying databases.+
For more information about command-line deployment using MSBuild, see
the following resources:+

ASP.NET Web Deployment using Visual Studio: Command Line Deployment.


Tenth in a series of tutorials about deployment to Azure using Visual Studio.
Shows how to use the command line to deploy after setting up publish profiles in
Visual Studio.

Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build.
Hard-copy book that includes chapters on how to use MSBuild for deployment.

15.7.2
Automate deployment with Windows PowerShell
You can perform MSBuild or FTP deployment functions from Windows
PowerShell. If you do that, you can also use a collection of Windows
PowerShell cmdlets that make the Azure REST management API easy to call.
+
For more information, see the following resources:+

Deploy a web app linked to a GitHub repository

Provision a web app with a SQL Database

Provision and deploy microservices predictably in Azure

Building Real-World Cloud Apps with Azure - Automate Everything. E-book


chapter that explains how the sample application shown in the e-book uses
Windows PowerShell scripts to create an Azure test environment and deploy to it.
See the Resources section for links to additional Azure PowerShell documentation.

Using Windows PowerShell Scripts to Publish to Dev and Test Environments.


How to use Windows PowerShell deployment scripts that Visual Studio generates.

15.7.3
Automate deployment with .NET management API
You can write C# code to perform MSBuild or FTP functions for deployment. If
you do that, you can access the Azure management REST API to perform site
management functions.+
For more information, see the following resource:+

Automating everything with the Azure Management Libraries and .NET.


Introduction to the .NET management API and links to more documentation.

15.7.4
Deploy from Azure Command-Line Interface (Azure CLI)
You can use the command line in Windows, Mac or Linux machines to deploy
by using FTP. If you do that, you can also access the Azure REST
management API using the Azure CLI.+
For more information, see the following resource:+

Azure Command line tools. Portal page in Azure.com for command line tool
information.

15.7.5
Deploy from Web Deploy command line
Web Deploy is Microsoft software for deployment to IIS that not only provides
intelligent file sync features but also can perform or coordinate many other
deployment-related tasks that can't be automated when you use FTP. For

example, Web Deploy can deploy a new database or database updates along
with your web app. Web Deploy can also minimize the time required to
update an existing site since it can intelligently copy only changed files.
Microsoft Visual Studio and Team Foundation Server have support for Web
Deploy built-in, but you can also use Web Deploy directly from the command
line to automate deployment. Web Deploy commands are very powerful but
the learning curve can be steep.+
For more information, see the following resource:+

Simple Web Apps: Deployment. Blog by David Ebbo about a tool he wrote to
make it easier to use Web Deploy.

Web Deployment Tool. Official documentation on the Microsoft TechNet site.


Dated but still a good place to start.

Using Web Deploy. Official documentation on the Microsoft IIS.NET site. Also
dated but a good place to start.

ASP.NET Web Deployment using Visual Studio: Command Line Deployment.


MSBuild is the build engine used by Visual Studio, and it can also be used from
the command line to deploy web applications to Web Apps. This tutorial is part of
a series that is mainly about Visual Studio deployment.

15.8 Next Steps


In some scenarios you might want to be able to easily switch back and forth
between a staging and a production version of your app. For more
information, see Staged Deployment on Web Apps.+
Having a backup and restore plan in place is an important part of any
deployment workflow. For information about the App Service backup and
restore feature, see Web Apps Backups. +

For information about how to use Azure's Role-Based Access Control to


manage access to App Service deployment, see RBAC and Web App
Publishing.

16 Create a .NET WebJob in Azure App Service

This tutorial shows how to write code for a simple multi-tier ASP.NET MVC 5
application that uses the WebJobs SDK.+
The purpose of the WebJobs SDK is to simplify the code you write for common tasks
that a WebJob can perform, such as image processing, queue processing, RSS
aggregation, file maintenance, and sending emails. The WebJobs SDK has built-in
features for working with Azure Storage and Service Bus, for scheduling tasks and
handling errors, and for many other common scenarios. In addition, it's designed to
be extensible, and there's an open source repository for extensions.+
The sample application is an advertising bulletin board. Users can upload images for
ads, and a backend process converts the images to thumbnails. The ad list page
shows the thumbnails, and the ad details page shows the full size image. Here's a
screenshot:+

This sample application works with Azure queues and Azure blobs. The tutorial
shows how to deploy the application to Azure App Service and Azure SQL Database.
Prerequisites
The tutorial assumes that you know how to work with ASP.NET MVC 5 projects in
Visual Studio.+
The tutorial was written for Visual Studio 2013. If you don't have Visual Studio
already, it will be installed for you automatically when you install the Azure SDK
for .NET.+
The tutorial can be used with Visual Studio 2015, but before you run the application
locally you have to change the Data Source part of the SQL Server LocalDB

connection string in the Web.config and App.config files from Data


Source=(localdb)\v11.0 to Data Source=(LocalDb)\MSSQLLocalDB.+
Note
You need an Azure account to complete this tutorial:+
You can open an Azure account for free: You get credits you can use to try out paid
Azure services, and even after they're used up you can keep the account and use
free Azure services, such as Websites. Your credit card will never be charged, unless
you explicitly change your settings and ask to be charged.
You can activate MSDN subscriber benefits: Your MSDN subscription gives you
credits every month that you can use for paid Azure services.
If you want to get started with Azure App Service before signing up for an Azure
account, go to Try App Service, where you can immediately create a short-lived
starter web app in App Service. No credit cards required; no commitments.+
What you'll learn
The tutorial shows how to do the following tasks:+
Enable your machine for Azure development by installing the Azure SDK.
Create a Console Application project that automatically deploys as an Azure WebJob
when you deploy the associated web project.
Test a WebJobs SDK backend locally on the development computer.
Publish an application with a WebJobs backend to a web app in App Service.
Upload files and store them in the Azure Blob service.
Use the Azure WebJobs SDK to work with Azure Storage queues and blobs.
Application architecture
The sample application uses the queue-centric work pattern to off-load the CPU-intensive work of creating thumbnails to a backend process.
The app stores ads in a SQL database, using Entity Framework Code First to create
the tables and access the data. For each ad, the database stores two URLs: one for
the full-size image and one for the thumbnail.+

When a user uploads an image, the web app stores the image in an Azure blob, and
it stores the ad information in the database with a URL that points to the blob. At
the same time, it writes a message to an Azure queue. In a backend process
running as an Azure WebJob, the WebJobs SDK polls the queue for new messages.
When a new message appears, the WebJob creates a thumbnail for that image and
updates the thumbnail URL database field for that ad. Here's a diagram that shows
how the parts of the application interact:+

Set up the development environment
To start, set up your development environment by installing the Azure SDK for Visual
Studio 2015 or the Azure SDK for Visual Studio 2013.+
If you don't have Visual Studio installed, use the link for Visual Studio 2015, and
Visual Studio will be installed along with the SDK.+
Note
Depending on how many of the SDK dependencies you already have on your
machine, installing the SDK could take a long time, from several minutes to a half
hour or more.+
The tutorial instructions apply to Azure SDK for .NET 2.7.1 or later.+
Create an Azure Storage account
An Azure storage account provides resources for storing queue and blob data in the
cloud. It's also used by the WebJobs SDK to store logging data for the dashboard.+
In a real-world application, you typically create separate accounts for application
data versus logging data, and separate accounts for test data versus production
data. For this tutorial you'll use just one account.+
Open the Server Explorer window in Visual Studio.

Right-click the Azure node, and then click Connect to Microsoft Azure.

Sign in using your Azure credentials.


Right-click Storage under the Azure node, and then click Create Storage Account.

In the Create Storage Account dialog, enter a name for the storage account.
The name must be unique (no other Azure storage account can have the
same name). If the name you enter is already in use you'll get a chance to change
it.
The URL to access your storage account will be {name}.core.windows.net.
Set the Region or Affinity Group drop-down list to the region closest to you.
This setting specifies which Azure datacenter will host your storage account. For this
tutorial, your choice won't make a noticeable difference. However, for a production
web app, you want your web server and your storage account to be in the same
region to minimize latency and data egress charges. The web app (which you'll

create later) datacenter should be as close as possible to the browsers accessing


the web app in order to minimize latency.
Set the Replication drop-down list to Locally redundant.
When geo-replication is enabled for a storage account, the stored content is
replicated to a secondary datacenter to enable failover to that location in case of a
major disaster in the primary location. Geo-replication can incur additional costs. For
test and development accounts, you generally don't want to pay for geo-replication.
For more information, see Create, manage, or delete a storage account.
Click Create.

Download the application
Download and unzip the completed solution.
Start Visual Studio.
From the File menu choose Open > Project/Solution, navigate to where you
downloaded the solution, and then open the solution file.
Press CTRL+SHIFT+B to build the solution.
By default, Visual Studio automatically restores the NuGet package content, which
was not included in the .zip file. If the packages don't restore, install them manually
by going to the Manage NuGet Packages for Solution dialog and clicking the Restore
button at the top right.

In Solution Explorer, make sure that ContosoAdsWeb is selected as the startup


project.
Configure the application to use your storage account
Open the application Web.config file in the ContosoAdsWeb project.
The file contains a SQL connection string and an Azure storage connection string for
working with blobs and queues.
The SQL connection string points to a SQL Server Express LocalDB database.
The storage connection string is an example that has placeholders for the storage
account name and access key. You'll replace this with a connection string that has
the name and key of your storage account.
<connectionStrings>
  <add name="ContosoAdsContext" connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />
  <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]" />
</connectionStrings>
The storage connection string is named AzureWebJobsStorage because that's the
name the WebJobs SDK uses by default. The same name is used here so you have to
set only one connection string value in the Azure environment.
In Server Explorer, right-click your storage account under the Storage node, and
then click Properties.

In the Properties window, click Storage Account Keys, and then click the ellipsis.

Copy the Connection String.

Replace the storage connection string in the Web.config file with the connection
string you just copied. Make sure you select everything inside the quotation marks
but not including the quotation marks before pasting.
Open the App.config file in the ContosoAdsWebJob project.
This file has two storage connection strings, one for application data and one for
logging. You can use separate storage accounts for application data and logging,
and you can use multiple storage accounts for data. For this tutorial you'll use a
single storage account. The connection strings have placeholders for the storage
account keys.
<configuration>
  <connectionStrings>
    <add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]" />
    <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]" />
    <add name="ContosoAdsContext" connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;" />
  </connectionStrings>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
</configuration>

By default, the WebJobs SDK looks for connection strings named
AzureWebJobsStorage and AzureWebJobsDashboard. As an alternative, you can
store the connection string however you want and pass it in explicitly to the JobHost
object.
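
If you prefer that alternative, a minimal sketch looks like the following. This is not the
tutorial's Program.cs; it assumes WebJobs SDK 1.x and reuses the same placeholder
account name and key shown above.

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // Pass the connection strings to the JobHost explicitly instead of relying on
        // the AzureWebJobsStorage/AzureWebJobsDashboard names.
        var config = new JobHostConfiguration
        {
            StorageConnectionString =
                "DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]",
            DashboardConnectionString =
                "DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"
        };

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
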
Replace both storage connection strings with the connection string you copied
earlier.
Save your changes.
Run the application locally
To start the web frontend of the application, press CTRL+F5.
The default browser opens to the home page. (The web project runs because you've
made it the startup project.)

To start the WebJob backend of the application, right-click the ContosoAdsWebJob


project in Solution Explorer, and then click Debug > Start new instance.
A console application window opens and displays logging messages indicating the
WebJobs SDK JobHost object has started to run.

In your browser, click Create an Ad.


Enter some test data and select an image to upload, and then click Create.

The app goes to the Index page, but it doesn't show a thumbnail for the new ad
because that processing hasn't happened yet.
Meanwhile, after a short wait a logging message in the console application window
shows that a queue message was received and has been processed.

After you see the logging messages in the console application window, refresh the
Index page to see the thumbnail.

Click Details for your ad to see the full-size image.

You've been running the application on your local computer, and it's using a SQL
Server database located on your computer, but it's working with queues and blobs
in the cloud. In the following section you'll run the application in the cloud, using a
cloud database as well as cloud blobs and queues. +

Run the application in the cloud


You'll do the following steps to run the application in the cloud:+
Deploy to Web Apps. Visual Studio automatically creates a new web app in App
Service and a SQL Database instance.
Configure the web app to use your Azure SQL database and storage account.
After you've created some ads while running in the cloud, you'll view the WebJobs
SDK dashboard to see the rich monitoring features it has to offer.+
Deploy to Web Apps
Close the browser and the console application window.
In Solution Explorer, right-click the ContosoAdsWeb project, and then click Publish.
In the Profile step of the Publish Web wizard, click Microsoft Azure web apps.

Sign in to Azure if you aren't still signed in.


Click New.
The dialog box may look slightly different depending on which version of the Azure
SDK for .NET you have installed.

In the Create web app on Microsoft Azure dialog box, enter a unique name in the
Web app name box.
The complete URL will consist of what you enter here plus .azurewebsites.net (as
shown next to the Web app name text box). For example, if the web app name is
ContosoAds, the URL will be ContosoAds.azurewebsites.net.
In the App Service plan drop-down list choose Create new App Service plan. Enter a
name for the App Service plan, such as ContosoAdsPlan.
In the Resource group drop-down list choose Create new resource group.
Enter a name for the resource group, such as ContosoAdsGroup.
In the Region drop-down list, choose the same region you chose for your storage
account.
This setting specifies which Azure datacenter your web app will run in. Keeping the
web app and storage account in the same datacenter minimizes latency and data
egress charges.
In the Database server drop-down list choose Create new server.
Enter a name for the database server, such as contosoadsserver + a number or
your name to make the server name unique.
The server name must be unique. It can contain lower-case letters, numeric digits,
and hyphens. It cannot contain a trailing hyphen.
Alternatively, if your subscription already has a server, you can select that server
from the drop-down list.
Enter an administrator Database username and Database password.

If you selected New SQL Database server you aren't entering an existing name and
password here, you're entering a new name and password that you're defining now
to use later when you access the database. If you selected a server that you created
previously, you'll be prompted for the password to the administrative user account
you already created.
Click Create.

Visual Studio creates the solution, the web project, the web app in Azure, and the
Azure SQL Database instance.
In the Connection step of the Publish Web wizard, click Next.

In the Settings step, clear the Use this connection string at runtime check box, and
then click Next.

You don't need to use the publish dialog to set the SQL connection string because
you'll set that value in the Azure environment later.
You can ignore the warnings on this page.
Normally the storage account you use when running in Azure would be different
from the one you use when running locally, but for this tutorial you're using the
same one in both environments. So the AzureWebJobsStorage connection string
does not need to be transformed. Even if you did want to use a different storage
account in the cloud, you wouldn't need to transform the connection string because

the app uses an Azure environment setting when it runs in Azure. You'll see this
later in the tutorial.
For this tutorial you aren't going to be making changes to the data model used for
the ContosoAdsContext database, so there is no need to use Entity Framework Code
First Migrations for deployment. Code First automatically creates a new database
the first time the app tries to access SQL data.
For this tutorial, the default values of the options under File Publish Options are fine.
In the Preview step, click Start Preview.

You can ignore the warning about no databases being published. Entity Framework
Code First creates the database; it doesn't need to be published.
The preview window shows that binaries and configuration files from the WebJob
project will be copied to the app_data\jobs\continuous folder of the web app.

Click Publish.
Visual Studio deploys the application and opens the home page URL in the browser.

You won't be able to use the web app until you set connection strings in the Azure
environment in the next section. You'll see either an error page or the home page
depending on web app and database creation options you chose earlier.
Configure the web app to use your Azure SQL database and storage account.
It's a security best practice to avoid putting sensitive information such as
connection strings in files that are stored in source code repositories. Azure provides
a way to do that: you can set connection string and other setting values in the
Azure environment, and ASP.NET configuration APIs automatically pick up these
values when the app runs in Azure. You can set these values in Azure by using
Server Explorer, the Azure Portal, Windows PowerShell, or the cross-platform
command-line interface. For more information, see How Application Strings and
Connection Strings Work.+
In this section you use Server Explorer to set connection string values in Azure.+
In Server Explorer, right-click your web app under Azure > App Service > {your
resource group}, and then click View Settings.
The Azure Web App window opens on the Configuration tab.
Change the name of the DefaultConnection connection string to
ContosoAdsContext.
Azure automatically created this connection string when you created the web app
with an associated database, so it already has the right connection string value.
You're changing just the name to what your code is looking for.
Add two new connection strings, named AzureWebJobsStorage and
AzureWebJobsDashboard. Set type to Custom, and set the connection string value to
the same value that you used earlier for the Web.config and App.config files. (Make
sure you include the entire connection string, not just the access key, and don't
include the quotation marks.)
These connection strings are used by the WebJobs SDK, one for application data and
one for logging. As you saw earlier, the one for application data is also used by the
web front end code.
Click Save.

In Server Explorer, right-click the web app, and then click Stop.
After the web app stops, right-click the web app again, and then click Start.
The WebJob automatically starts when you publish, but it stops when you make a
configuration change. To restart it you can either restart the web app or restart the
WebJob in the Azure Portal. It's generally recommended to restart the web app after
a configuration change.
Refresh the browser window that has the web app URL in its address bar.

The home page appears.


Create an ad, as you did when you ran the application locally.
The Index page shows without a thumbnail at first.
Refresh the page after a few seconds, and the thumbnail appears.
If the thumbnail doesn't appear, you may have to wait a minute or so for the
WebJob to restart. If after a while you still don't see the thumbnail when you
refresh the page, the WebJob may not have started automatically. In that case, go to
the WebJobs tab in the classic portal page for your web app, and then click Start.
View the WebJobs SDK dashboard
In the classic portal, select your web app.
Click the WebJobs tab.
Click the URL in the Logs column for your WebJob.

A new browser tab opens to the WebJobs SDK dashboard. The dashboard shows that
the WebJob is running and shows a list of functions in your code that the WebJobs
SDK triggered.
Click one of the functions to see details about its execution.

The Replay Function button on this page causes the WebJobs SDK framework to call
the function again, and it gives you a chance to change the data passed to the
function first.

Note
When you're finished testing, delete the web app and the SQL Database instance.
The web app is free, but the SQL Database instance and storage account accrue
charges (minimal due to small size). Also, if you leave the web app running, anyone
who finds your URL can create and view ads. In the classic portal, go to the
Dashboard tab for your web app, and then click the Delete button at the bottom of
the page. You can then select a check box to delete the SQL Database instance at
the same time. If you just want to temporarily prevent others from accessing the
web app, click Stop instead. In that case, charges will continue to accrue for the SQL
Database and Storage account. You can follow a similar procedure to delete the SQL
database and storage account when you no longer need them.+
Create the application from scratch
In this section you'll do the following tasks:+
Create a Visual Studio solution with a web project.
Add a class library project for the data access layer that is shared between front end
and backend.
Add a Console Application project for the backend, with WebJobs deployment
enabled.
Add NuGet packages.
Set project references.
Copy application code and configuration files from the downloaded application that
you worked with in the previous section of the tutorial.
Review the parts of the code that work with Azure blobs and queues and the
WebJobs SDK.
Create a Visual Studio solution with a web project and class library project
In Visual Studio, choose New > Project from the File menu.
In the New Project dialog, choose Visual C# > Web > ASP.NET Web Application.
Name the project ContosoAdsWeb, name the solution ContosoAdsWebJobsSDK
(change the solution name if you're putting it in the same folder as the downloaded
solution), and then click OK.

In the New ASP.NET Project dialog, choose the MVC template, and clear the Host in
the cloud check box under Microsoft Azure.
Selecting Host in the cloud enables Visual Studio to automatically create a new
Azure web app and SQL Database. Since you already created these earlier, you
don't need to do so now while creating the project. If you want to create a new one,
select the check box. You can then configure the new web app and SQL database
the same way you did earlier when you deployed the application.
Click Change Authentication.

In the Change Authentication dialog, choose No Authentication, and then click OK.

In the New ASP.NET Project dialog, click OK.


Visual Studio creates the solution and the web project.
In Solution Explorer, right-click the solution (not the project), and choose Add > New
Project.
In the Add New Project dialog, choose Visual C# > Windows Desktop > Class Library
template.
Name the project ContosoAdsCommon, and then click OK.
This project will contain the Entity Framework context and the data model which
both the front end and back end will use. As an alternative you could define the EF-related classes in the web project and reference that project from the WebJob
project. But then your WebJob project would have a reference to web assemblies
which it doesn't need.
Add a Console Application project that has WebJobs deployment enabled
Right-click the web project (not the solution or the class library project), and then
click Add > New Azure WebJob Project.

In the Add Azure WebJob dialog, enter ContosoAdsWebJob as both the Project name
and the WebJob name. Leave WebJob run mode set to Run Continuously.
Click OK.
Visual Studio creates a Console application that is configured to deploy as a WebJob
whenever you deploy the web project. To do that, it performed the following tasks
after creating the project:
Added a webjob-publish-settings.json file in the WebJob project Properties folder.
Added a webjobs-list.json file in the web project Properties folder.
Installed the Microsoft.Web.WebJobs.Publish NuGet package in the WebJob project.
For more information about these changes, see How to deploy WebJobs by using
Visual Studio.

Add NuGet packages
The new-project template for a WebJob project automatically installs the WebJobs
SDK NuGet package Microsoft.Azure.WebJobs and its dependencies.+
One of the WebJobs SDK dependencies that is installed automatically in the WebJob
project is the Azure Storage Client Library (SCL). However, you need to add it to the
web project to work with blobs and queues.+
Open the Manage NuGet Packages dialog for the solution.
In the left pane, select Installed packages.
Find the Azure Storage package, and then click Manage.
In the Select Projects box, select the ContosoAdsWeb check box, and then click OK.
All three projects use the Entity Framework to work with data in SQL Database.
In the left pane, select Online.
Find the EntityFramework NuGet package, and install it in all three projects.
Set project references
Both web and WebJob projects work with the SQL database, so both need a
reference to the ContosoAdsCommon project.+
In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project.
(Right-click the ContosoAdsWeb project, and then click Add > Reference. In the
Reference Manager dialog box, select Solution > Projects > ContosoAdsCommon,
and then click OK.)
In the ContosoAdsWebJob project, set a reference to the ContosoAdsCommon project.
The WebJob project needs references for working with images and for accessing
connection strings.
In the ContosoAdsWebJob project, set a reference to System.Drawing and
System.Configuration.
Add code and configuration files
This tutorial does not show how to create MVC controllers and views using
scaffolding, how to write Entity Framework code that works with SQL Server
databases, or the basics of asynchronous programming in ASP.NET 4.5. So all that
remains to do is copy code and configuration files from the downloaded solution into
the new solution. After you do that, the following sections show and explain key
parts of the code.+

To add files to a project or a folder, right-click the project or folder and click Add >
Existing Item. Select the files you want and click Add. If asked whether you want to
replace existing files, click Yes.+
In the ContosoAdsCommon project, delete the Class1.cs file and add in its place the
following files from the downloaded project.
Ad.cs
ContosoAdsContext.cs
BlobInformation.cs
In the ContosoAdsWeb project, add the following files from the downloaded project.
Web.config
Global.asax.cs
In the Controllers folder: AdController.cs
In the Views\Shared folder: _Layout.cshtml file
In the Views\Home folder: Index.cshtml
In the Views\Ad folder (create the folder first): five .cshtml files
In the ContosoAdsWebJob project, add the following files from the downloaded
project.
App.config (change the file type filter to All Files)
Program.cs
Functions.cs
You can now build, run, and deploy the application as instructed earlier in the
tutorial. Before you do that, however, stop the WebJob that is still running in the first
web app you deployed to. Otherwise that WebJob will process queue messages
created locally or by the app running in a new web app, since all are using the same
storage account.+
Review the application code
The following sections explain the code related to working with the WebJobs SDK
and Azure Storage blobs and queues.+
Note
For the code specific to the WebJobs SDK, go to the Program.cs and Functions.cs
sections.+
ContosoAdsCommon - Ad.cs

The Ad.cs file defines an enum for ad categories and a POCO entity class for ad
information.+
public enum Category
{
    Cars,
    [Display(Name="Real Estate")]
    RealEstate,
    [Display(Name = "Free Stuff")]
    FreeStuff
}

public class Ad
{
    public int AdId { get; set; }

    [StringLength(100)]
    public string Title { get; set; }

    public int Price { get; set; }

    [StringLength(1000)]
    [DataType(DataType.MultilineText)]
    public string Description { get; set; }

    [StringLength(1000)]
    [DisplayName("Full-size Image")]
    public string ImageURL { get; set; }

    [StringLength(1000)]
    [DisplayName("Thumbnail")]
    public string ThumbnailURL { get; set; }

    [DataType(DataType.Date)]
    [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
    public DateTime PostedDate { get; set; }

    public Category? Category { get; set; }

    [StringLength(12)]
    public string Phone { get; set; }
}
ContosoAdsCommon - ContosoAdsContext.cs
The ContosoAdsContext class specifies that the Ad class is used in a DbSet
collection, which Entity Framework stores in a SQL database.+
Copy

public class ContosoAdsContext : DbContext


{
public ContosoAdsContext() : base("name=ContosoAdsContext")
{
}
public ContosoAdsContext(string connString)
: base(connString)
{
}
public System.Data.Entity.DbSet<Ad> Ads { get; set; }
}
The class has two constructors. The first is used by the web project, and specifies
the name of a connection string that is stored in the Web.config file or the Azure

runtime environment. The second constructor enables you to pass in the actual
connection string. That is needed by the WebJob project since it doesn't have a
Web.config file. You saw earlier where this connection string was stored, and you'll
see later how the code retrieves the connection string when it instantiates the
DbContext class.+
ContosoAdsCommon - BlobInformation.cs
The BlobInformation class is used to store information about an image blob in a
queue message.+
Copy

public class BlobInformation


{
public Uri BlobUri { get; set; }

public string BlobName


{
get
{
return BlobUri.Segments[BlobUri.Segments.Length - 1];
}
}
public string BlobNameWithoutExtension
{
get
{
return Path.GetFileNameWithoutExtension(BlobName);
}
}
public int AdId { get; set; }
}
ContosoAdsWeb - Global.asax.cs

Code that is called from the Application_Start method creates an images blob
container and an images queue if they don't already exist. This ensures that
whenever you start using a new storage account, the required blob container and
queue are created automatically.+
The code gets access to the storage account by using the storage connection string
from the Web.config file or Azure runtime environment.+
Copy

var storageAccount = CloudStorageAccount.Parse


(ConfigurationManager.ConnectionStrings["AzureWebJobsStorage"].ToString());
Then it gets a reference to the images blob container, creates the container if it
doesn't already exist, and sets access permissions on the new container. By default
new containers allow only clients with storage account credentials to access blobs.
The web app needs the blobs to be public so that it can display images using URLs
that point to the image blobs.+
Copy

var blobClient = storageAccount.CreateCloudBlobClient();


var imagesBlobContainer = blobClient.GetContainerReference("images");
if (imagesBlobContainer.CreateIfNotExists())
{
imagesBlobContainer.SetPermissions(
new BlobContainerPermissions
{
PublicAccess = BlobContainerPublicAccessType.Blob
});
}
Similar code gets a reference to the thumbnailrequest queue and creates a new
queue. In this case no permissions change is needed.+
Copy

CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();


var imagesQueue = queueClient.GetQueueReference("thumbnailrequest");

imagesQueue.CreateIfNotExists();
ContosoAdsWeb - _Layout.cshtml
The _Layout.cshtml file sets the app name in the header and footer, and creates an
"Ads" menu entry.+
ContosoAdsWeb - Views\Home\Index.cshtml
The Views\Home\Index.cshtml file displays category links on the home page. The
links pass the integer value of the Category enum in a querystring variable to the
Ads Index page.+
Copy

<li>@Html.ActionLink("Cars", "Index", "Ad", new { category =


(int)Category.Cars }, null)</li>
<li>@Html.ActionLink("Real estate", "Index", "Ad", new { category =
(int)Category.RealEstate }, null)</li>
<li>@Html.ActionLink("Free stuff", "Index", "Ad", new { category =
(int)Category.FreeStuff }, null)</li>
<li>@Html.ActionLink("All", "Index", "Ad", null, null)</li>
ContosoAdsWeb - AdController.cs
In the AdController.cs file the constructor calls the InitializeStorage method to create
Azure Storage Client Library objects that provide an API for working with blobs and
queues.+
Then the code gets a reference to the images blob container as you saw earlier in
Global.asax.cs. While doing that, it sets a default retry policy appropriate for a web
app. The default exponential backoff retry policy could hang the web app for longer
than a minute on repeated retries for a transient fault. The retry policy specified
here waits 3 seconds after each try for up to 3 tries.+
Copy

var blobClient = storageAccount.CreateCloudBlobClient();


blobClient.DefaultRequestOptions.RetryPolicy = new
LinearRetry(TimeSpan.FromSeconds(3), 3);
imagesBlobContainer = blobClient.GetContainerReference("images");
Similar code gets a reference to the thumbnail request queue.+
Copy

CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();


queueClient.DefaultRequestOptions.RetryPolicy = new
LinearRetry(TimeSpan.FromSeconds(3), 3);
thumbnailRequestQueue = queueClient.GetQueueReference("thumbnailrequest");
Most of the controller code is typical for working with an Entity Framework data
model using a DbContext class. An exception is the HttpPost Create method, which
uploads a file and saves it in blob storage. The model binder provides an
HttpPostedFileBase object to the method.+
Copy

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
[Bind(Include = "Title,Price,Description,Category,Phone")] Ad ad,
HttpPostedFileBase imageFile)
If the user selected a file to upload, the code uploads the file, saves it in a blob, and
updates the Ad database record with a URL that points to the blob.+
Copy

if (imageFile != null && imageFile.ContentLength != 0)


{
blob = await UploadAndSaveBlobAsync(imageFile);
ad.ImageURL = blob.Uri.ToString();
}
The code that does the upload is in the UploadAndSaveBlobAsync method. It
creates a GUID name for the blob, uploads and saves the file, and returns a
reference to the saved blob.+
Copy

private async Task<CloudBlockBlob>


UploadAndSaveBlobAsync(HttpPostedFileBase imageFile)
{

string blobName = Guid.NewGuid().ToString() +


Path.GetExtension(imageFile.FileName);
CloudBlockBlob imageBlob =
imagesBlobContainer.GetBlockBlobReference(blobName);
using (var fileStream = imageFile.InputStream)
{
await imageBlob.UploadFromStreamAsync(fileStream);
}
return imageBlob;
}
After the HttpPost Create method uploads a blob and updates the database, it
creates a queue message to inform the back-end process that an image is ready for
conversion to a thumbnail.+
Copy

BlobInformation blobInfo = new BlobInformation() { AdId = ad.AdId, BlobUri =


new Uri(ad.ImageURL) };
var queueMessage = new
CloudQueueMessage(JsonConvert.SerializeObject(blobInfo));
await thumbnailRequestQueue.AddMessageAsync(queueMessage);
The code for the HttpPost Edit method is similar except that if the user selects a
new image file any blobs that already exist for this ad must be deleted.+
Copy

if (imageFile != null && imageFile.ContentLength != 0)


{
await DeleteAdBlobsAsync(ad);
imageBlob = await UploadAndSaveBlobAsync(imageFile);
ad.ImageURL = imageBlob.Uri.ToString();
}
Here is the code that deletes blobs when you delete an ad:+
Copy

private async Task DeleteAdBlobsAsync(Ad ad)


{
if (!string.IsNullOrWhiteSpace(ad.ImageURL))
{
Uri blobUri = new Uri(ad.ImageURL);
await DeleteAdBlobAsync(blobUri);
}
if (!string.IsNullOrWhiteSpace(ad.ThumbnailURL))
{
Uri blobUri = new Uri(ad.ThumbnailURL);
await DeleteAdBlobAsync(blobUri);
}
}
private static async Task DeleteAdBlobAsync(Uri blobUri)
{
string blobName = blobUri.Segments[blobUri.Segments.Length - 1];
CloudBlockBlob blobToDelete =
imagesBlobContainer.GetBlockBlobReference(blobName);
await blobToDelete.DeleteAsync();
}
ContosoAdsWeb - Views\Ad\Index.cshtml and Details.cshtml
The Index.cshtml file displays thumbnails with the other ad data:+
Copy

<img src="@Html.Raw(item.ThumbnailURL)" />


The Details.cshtml file displays the full-size image:+
Copy

<img src="@Html.Raw(Model.ImageURL)" />

ContosoAdsWeb - Views\Ad\Create.cshtml and Edit.cshtml


The Create.cshtml and Edit.cshtml files specify form encoding that enables the
controller to get the HttpPostedFileBase object.+
Copy

@using (Html.BeginForm("Create", "Ad", FormMethod.Post, new { enctype =


"multipart/form-data" }))
An <input> element tells the browser to provide a file selection dialog.+
Copy

<input type="file" name="imageFile" accept="image/*" class="form-control


fileupload" />
ContosoAdsWebJob - Program.cs
When the WebJob starts, the Main method calls the WebJobs SDK
JobHost.RunAndBlock method to begin execution of triggered functions on the
current thread.+
Copy

static void Main(string[] args)


{
JobHost host = new JobHost();
host.RunAndBlock();
}
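If you don't want to rely on the AzureWebJobsStorage and AzureWebJobsDashboard
connection strings in App.config, the WebJobs SDK also lets you supply them in code
through a JobHostConfiguration object. The following is a minimal sketch, not part of
the downloaded sample; the connection string values are placeholders.
Copy

static void Main(string[] args)
{
    // Supply connection strings in code instead of reading them from App.config.
    // Both values below are placeholders.
    var config = new JobHostConfiguration
    {
        StorageConnectionString = "<storage connection string>",
        DashboardConnectionString = "<dashboard connection string>"
    };
    var host = new JobHost(config);
    host.RunAndBlock();
}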
ContosoAdsWebJob - Functions.cs - GenerateThumbnail method
The WebJobs SDK calls this method when a queue message is received. The method
creates a thumbnail and puts the thumbnail URL in the database.+
Copy

public static void GenerateThumbnail(


[QueueTrigger("thumbnailrequest")] BlobInformation blobInfo,
[Blob("images/{BlobName}", FileAccess.Read)] Stream input,

[Blob("images/{BlobNameWithoutExtension}_thumbnail.jpg")] CloudBlockBlob
outputBlob)
{
using (Stream output = outputBlob.OpenWrite())
{
ConvertImageToThumbnailJPG(input, output);
outputBlob.Properties.ContentType = "image/jpeg";
}

// Entity Framework context class is not thread-safe, so it must


// be instantiated and disposed within the function.
using (ContosoAdsContext db = new ContosoAdsContext())
{
var id = blobInfo.AdId;
Ad ad = db.Ads.Find(id);
if (ad == null)
{
throw new Exception(String.Format("AdId {0} not found, can't create
thumbnail", id.ToString()));
}
ad.ThumbnailURL = outputBlob.Uri.ToString();
db.SaveChanges();
}
}
The QueueTrigger attribute directs the WebJobs SDK to call this method when a new
message is received on the thumbnailrequest queue.
Copy

[QueueTrigger("thumbnailrequest")] BlobInformation blobInfo,


The BlobInformation object in the queue message is automatically deserialized into
the blobInfo parameter. When the method completes, the queue message is

deleted. If the method fails before completing, the queue message is not deleted;
after a 10-minute lease expires, the message is released to be picked up again and
processed. This sequence won't be repeated indefinitely if a message always causes
an exception. After 5 unsuccessful attempts to process a message, the message is
moved to a queue named {queuename}-poison. The maximum number of attempts
is configurable.
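For example, the retry limit can be lowered through the JobHost configuration, and you
can add a function that handles messages that land in the poison queue. The following
is a hedged sketch assuming WebJobs SDK 1.x conventions; it is not part of the
downloaded sample.
Copy

// In Program.cs: lower the number of dequeue attempts before a message is
// moved to the poison queue (the default is 5).
var config = new JobHostConfiguration();
config.Queues.MaxDequeueCount = 3;
var host = new JobHost(config);
host.RunAndBlock();

// In Functions.cs: log messages that were moved to thumbnailrequest-poison.
public static void LogPoisonMessage(
    [QueueTrigger("thumbnailrequest-poison")] string message, TextWriter logger)
{
    logger.WriteLine("Failed to create thumbnail for: " + message);
}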
The two Blob attributes provide objects that are bound to blobs: one to the existing
image blob and one to a new thumbnail blob that the method creates.
Copy

[Blob("images/{BlobName}", FileAccess.Read)] Stream input,


[Blob("images/{BlobNameWithoutExtension}_thumbnail.jpg")] CloudBlockBlob
outputBlob)
Blob names come from properties of the BlobInformation object received in the
queue message (BlobName and BlobNameWithoutExtension). To get the full
functionality of the Storage Client Library you can use the CloudBlockBlob class to
work with blobs. If you want to reuse code that was written to work with Stream
objects, you can use the Stream class.
For more information about how to write functions that use WebJobs SDK attributes,
see the following resources:+
How to use Azure queue storage with the WebJobs SDK
How to use Azure blob storage with the WebJobs SDK
How to use Azure table storage with the WebJobs SDK
How to use Azure Service Bus with the WebJobs SDK
Note
If your web app runs on multiple VMs, multiple WebJobs will be running
simultaneously, and in some scenarios this can result in the same data getting
processed multiple times. This is not a problem if you use the built-in queue, blob,
and Service Bus triggers. The SDK ensures that your functions will be processed
only once for each message or blob.
For information about how to implement graceful shutdown, see Graceful Shutdown.
The code in the ConvertImageToThumbnailJPG method (not shown) uses classes in
the System.Drawing namespace for simplicity. However, the classes in this
namespace were designed for use with Windows Forms; they are not supported for
use in a Windows service or an ASP.NET service. For more information about image processing
options, see Dynamic Image Generation and Deep Inside Image Resizing.
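For reference, here is a rough sketch of what a System.Drawing-based implementation
might look like. The actual ConvertImageToThumbnailJPG method in the downloaded
sample may differ, and the fixed thumbnail width is an assumption.
Copy

// Hypothetical sketch only; see the downloaded sample for the real implementation.
private static void ConvertImageToThumbnailJPG(Stream input, Stream output)
{
    const int thumbnailWidth = 80;  // assumed width, not taken from the sample
    using (var originalImage = System.Drawing.Image.FromStream(input))
    {
        int thumbnailHeight =
            originalImage.Height * thumbnailWidth / originalImage.Width;
        using (var thumbnail =
            new System.Drawing.Bitmap(thumbnailWidth, thumbnailHeight))
        using (var graphics = System.Drawing.Graphics.FromImage(thumbnail))
        {
            // Scale the original image down to the thumbnail size and save as JPEG.
            graphics.DrawImage(originalImage, 0, 0, thumbnailWidth, thumbnailHeight);
            thumbnail.Save(output, System.Drawing.Imaging.ImageFormat.Jpeg);
        }
    }
}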
Next steps
In this tutorial you've seen a simple multi-tier application that uses the WebJobs SDK
for backend processing. This section offers some suggestions for learning more
about ASP.NET multi-tier applications and WebJobs.+
Missing features
The application has been kept simple for a getting-started tutorial. In a real-world
application you would implement dependency injection and the repository and unit
of work patterns, use an interface for logging, use EF Code First Migrations to
manage data model changes, and use EF Connection Resiliency to manage
transient network errors.+
Scaling WebJobs
WebJobs run in the context of a web app and are not scalable separately. For
example, if you have one Standard web app instance, you have only one instance of
your background process running, and it is using some of the server resources (CPU,
memory, etc.) that otherwise would be available to serve web content.+
If traffic varies by time of day or day of week, and if the backend processing you
need to do can wait, you could schedule your WebJobs to run at low-traffic times. If
the load is still too high for that solution, you can run the backend as a WebJob in a
separate web app dedicated for that purpose. You can then scale your backend web
app independently from your frontend web app.+
For more information, see Scaling WebJobs.+
Avoiding web app timeout shut-downs
To make sure your WebJobs are always running, and running on all instances of your
web app, you have to enable the AlwaysOn feature.+
Using the WebJobs SDK outside of WebJobs
A program that uses the WebJobs SDK doesn't have to run in Azure in a WebJob. It
can run locally, and it can also run in other environments such as a Cloud Service
worker role or a Windows service. However, you can only access the WebJobs SDK
dashboard through an Azure web app. To use the dashboard you have to connect
the web app to the storage account you're using by setting the
AzureWebJobsDashboard connection string on the Configure tab of the classic
portal. Then you can get to the Dashboard by using the following URL:+
https://{webappname}.scm.azurewebsites.net/azurejobs/#/functions+

For more information, see Getting a dashboard for local development with the
WebJobs SDK, but note that it shows an old connection string name.

17 Copy Blob
The Copy Blob operation copies a blob to a destination within the storage
account. In version 2012-02-12 and later, the source for a Copy Blob
operation can be a committed blob in any Azure storage account. +
Beginning with version 2015-02-21, the source for a Copy Blob operation can
be an Azure file in any Azure storage account. +
17.1.1.1.1 Note
Only storage accounts created on or after June 7th, 2012 allow the Copy Blob
operation to copy from another storage account.

17.2 Request
The Copy Blob request may be constructed as follows. HTTPS is
recommended. Replace myaccount with the name of your storage account,
mycontainer with the name of your container, and myblob with the name of
your destination blob. +
Beginning with version 2013-08-15, you may specify a shared access
signature for the destination blob if it is in the same account as the source
blob. Beginning with version 2015-04-05, you may also specify a shared
access signature for the destination blob if it is in a different storage account.

Method: PUT
Request URI: https://myaccount.blob.core.windows.net/mycontainer/myblob
HTTP Version: HTTP/1.1

17.2.1 Emulated Storage Service URI


When making a request against the emulated storage service, specify the
emulator hostname and Blob service port as 127.0.0.1:10000, followed by
the emulated storage account name: +

Method: PUT
Request URI: http://127.0.0.1:10000/devstoreaccount1/mycontainer/myblob
HTTP Version: HTTP/1.1

For more information, see Using the Azure Storage Emulator for
Development and Testing. +
17.2.2 URI Parameters
The following additional parameters may be specified on the request URI. +

Parameter: Description

timeout: Optional. The timeout parameter is expressed in seconds. For more information, see Setting Timeouts for Blob Service Operations.

17.2.3 Request Headers
The following table describes required and optional request headers. +

Request Header: Description

Authorization: Required. Specifies the authentication scheme, account name, and signature. For more information, see Authentication for the Azure Storage Services.

Date or x-ms-date: Required. Specifies the Coordinated Universal Time (UTC) for the request. For more information, see Authentication for the Azure Storage Services.

x-ms-version: Required for all authenticated requests. For more information, see Versioning for the Azure Storage Services.

x-ms-meta-name:value: Optional. Specifies a user-defined name-value pair associated with the blob. If no name-value pairs are specified, the operation will copy the metadata from the source blob or file to the destination blob. If one or more name-value pairs are specified, the destination blob is created with the specified metadata, and metadata is not copied from the source blob or file. Note that beginning with version 2009-09-19, metadata names must adhere to the naming rules for C# identifiers. See Naming and Referencing Containers, Blobs, and Metadata for more information.

x-ms-source-if-modified-since: Optional. A DateTime value. Specify this conditional header to copy the blob only if the source blob has been modified since the specified date/time. If the source blob has not been modified, the Blob service returns status code 412 (Precondition Failed). This header cannot be specified if the source is an Azure File.

x-ms-source-if-unmodified-since: Optional. A DateTime value. Specify this conditional header to copy the blob only if the source blob has not been modified since the specified date/time. If the source blob has been modified, the Blob service returns status code 412 (Precondition Failed). This header cannot be specified if the source is an Azure File.

x-ms-source-if-match: Optional. An ETag value. Specify this conditional header to copy the source blob only if its ETag matches the value specified. If the ETag values do not match, the Blob service returns status code 412 (Precondition Failed). This header cannot be specified if the source is an Azure File.

x-ms-source-if-none-match: Optional. An ETag value. Specify this conditional header to copy the blob only if its ETag does not match the value specified. If the values are identical, the Blob service returns status code 412 (Precondition Failed). This header cannot be specified if the source is an Azure File.

If-Modified-Since: Optional. A DateTime value. Specify this conditional header to copy the blob only if the destination blob has been modified since the specified date/time. If the destination blob has not been modified, the Blob service returns status code 412 (Precondition Failed).

If-Unmodified-Since: Optional. A DateTime value. Specify this conditional header to copy the blob only if the destination blob has not been modified since the specified date/time. If the destination blob has been modified, the Blob service returns status code 412 (Precondition Failed).

If-Match: Optional. An ETag value. Specify an ETag value for this conditional header to copy the blob only if the specified ETag value matches the ETag value for an existing destination blob. If the ETag for the destination blob does not match the ETag specified for If-Match, the Blob service returns status code 412 (Precondition Failed).

If-None-Match: Optional. An ETag value, or the wildcard character (*). Specify an ETag value for this conditional header to copy the blob only if the specified ETag value does not match the ETag value for the destination blob. Specify the wildcard character (*) to perform the operation only if the destination blob does not exist. If the specified condition isn't met, the Blob service returns status code 412 (Precondition Failed).

x-ms-copy-source:name: Required. Specifies the name of the source blob or file. Beginning with version 2012-02-12, this value may be a URL of up to 2 KB in length that specifies a blob. The value should be URL-encoded as it would appear in a request URI. A source blob in the same storage account can be authenticated via Shared Key. However, if the source is a blob in another account, the source blob must either be public or must be authenticated via a shared access signature. If the source blob is public, no authentication is required to perform the copy operation. Beginning with version 2015-02-21, the source object may be a file in the Azure File service. If the source object is a file that is to be copied to a blob, then the source file must be authenticated using a shared access signature, whether it resides in the same account or in a different account. Only storage accounts created on or after June 7th, 2012 allow the Copy Blob operation to copy from another storage account.
Here are some examples of source object URLs:
- https://myaccount.blob.core.windows.net/mycontainer/myblob
- https://myaccount.blob.core.windows.net/mycontainer/myblob?snapshot=<DateTime>
When the source object is a file in the Azure File service, the source URL uses the following format; note that the URL must include a valid SAS token for the file:
- https://myaccount.file.core.windows.net/myshare/mydirectorypath/myfile?sastoken
In versions before 2012-02-12, blobs can only be copied within the same account, and a source name can use these formats:
- Blob in named container: /accountName/containerName/blobName
- Snapshot in named container: /accountName/containerName/blobName?snapshot=<DateTime>
- Blob in root container: /accountName/blobName
- Snapshot in root container: /accountName/blobName?snapshot=<DateTime>

x-ms-lease-id:<ID>: Required if the destination blob has an active lease. The lease ID specified for this header must match the lease ID of the destination blob. If the request does not include the lease ID or it is not valid, the operation fails with status code 412 (Precondition Failed). If this header is specified and the destination blob does not currently have an active lease, the operation will also fail with status code 412 (Precondition Failed). In version 2012-02-12 and newer, this value must specify an active, infinite lease for a leased blob. A finite-duration lease ID fails with 412 (Precondition Failed).

x-ms-source-lease-id:<ID>: Optional, versions before 2012-02-12 (unsupported in 2012-02-12 and newer). Specify this header to perform the Copy Blob operation only if the lease ID given matches the active lease ID of the source blob. If this header is specified and the source blob does not currently have an active lease, the operation will also fail with status code 412 (Precondition Failed).

x-ms-client-request-id: Optional. Provides a client-generated, opaque value with a 1 KB character limit that is recorded in the analytics logs when storage analytics logging is enabled. Using this header is highly recommended for correlating client-side activities with requests received by the server. For more information, see About Storage Analytics Logging and Azure Logging: Using Logs to Track Storage Requests.

17.2.4 Request Body
None.

17.3 Response
The response includes an HTTP status code and a set of response headers. +
17.3.1 Status Code
In version 2012-02-12 and newer, a successful operation returns status code
202 (Accepted). +
In versions before 2012-02-12, a successful operation returns status code
201 (Created). +
For information about status codes, see Status and Error Codes. +
17.3.2 Response Headers
The response for this operation includes the following headers. The response
may also include additional standard HTTP headers. All standard headers
conform to the HTTP/1.1 protocol specification. +

Response header: Description

ETag: In version 2012-02-12 and newer, if the copy is complete, contains the ETag of the destination blob. If the copy isn't complete, contains the ETag of the empty blob created at the start of the copy. In versions before 2012-02-12, returns the ETag for the destination blob. In version 2011-08-18 and newer, the ETag value will be in quotes.

Last-Modified: Returns the date/time that the copy operation to the destination blob completed.

x-ms-request-id: This header uniquely identifies the request that was made and can be used for troubleshooting the request. For more information, see Troubleshooting API Operations.

x-ms-version: Indicates the version of the Blob service used to execute the request. This header is returned for requests made against version 2009-09-19 and later.

Date: A UTC date/time value generated by the service that indicates the time at which the response was initiated.

x-ms-copy-id: <id>: Version 2012-02-12 and newer. String identifier for this copy operation. Use with Get Blob or Get Blob Properties to check the status of this copy operation, or pass to Abort Copy Blob to abort a pending copy.

x-ms-copy-status: <success | pending>: Version 2012-02-12 and newer. State of the copy operation, with these values:
- success: the copy completed successfully.
- pending: the copy is in progress.

17.4 Response Body
None.

17.5 Sample Response


The following is a sample response for a request to copy a blob: +
Copy

Response Status:
HTTP/1.1 202 Accepted

Response Headers:
Last-Modified: <date>
ETag: "0x8CEB669D794AFE2"
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: cc6b209a-b593-4be1-a38a-dde7c106f402
x-ms-version: 2015-02-21
x-ms-copy-id: 1f812371-a41d-49e6-b123-f4b542e851c5
x-ms-copy-status: pending
Date: <date>

17.6 Authorization
This operation can be called by the account owner. For requests made
against version 2013-08-15 and later, a shared access signature that has
permission to write to the destination blob or its container is supported for
copy operations within the same account. Note that the shared access
signature specified on the request applies only to the destination blob. +

Access to the source blob or file is authorized separately, as described in the


details for the request header x-ms-copy-source. +
The following table describes how the destination and source objects for a
Copy Blob operation may be authenticated. +

Destination blob: Shared Key/Shared Key Lite: Yes. Shared access signature: Yes. Public object not requiring authentication: No.

Source blob in same account: Shared Key/Shared Key Lite: Yes. Shared access signature: Yes. Public object not requiring authentication: Yes.

Source blob in another account: Shared Key/Shared Key Lite: No. Shared access signature: Yes. Public object not requiring authentication: Yes.

Source file in the same account or another account: Shared Key/Shared Key Lite: No. Shared access signature: Yes. Public object not requiring authentication: N/A.

17.7 Remarks
In version 2012-02-12 and newer, the Copy Blob operation can complete
asynchronously. This operation returns a copy ID you can use to check or
abort the copy operation. The Blob service copies blobs on a best-effort
basis. +
The source blob for a copy operation may be a block blob, an append blob,
a page blob, or a snapshot. If the destination blob already exists, it must be

of the same blob type as the source blob. Any existing destination blob will
be overwritten. The destination blob cannot be modified while a copy
operation is in progress. +
In version 2015-02-21 and newer, the source for the copy operation may also
be a file in the Azure File service. If the source is a file, the destination must
be a block blob. +
Multiple pending Copy Blob operations within an account might be processed
sequentially. A destination blob can only have one outstanding copy blob
operation. In other words, a blob cannot be the destination for multiple
pending Copy Blob operations. An attempt to Copy Blob to a destination blob
that already has a copy pending fails with status code 409 (Conflict). +
Only storage accounts created on or after June 7th, 2012 allow the Copy Blob
operation to copy from another storage account. An attempt to copy from
another storage account to an account created before June 7th, 2012 fails
with status code 400 (Bad Request). +
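As an illustration only (not part of this reference), the following sketch uses the
.NET Storage Client Library to start a cross-account copy. The account names,
container and blob names, and SAS token are placeholders, and the call is assumed
to run inside an async method.
Copy

// Destination blob in this storage account (placeholder connection string).
CloudStorageAccount destAccount =
    CloudStorageAccount.Parse("<destination storage connection string>");
CloudBlockBlob destBlob = destAccount.CreateCloudBlobClient()
    .GetContainerReference("mycontainer")
    .GetBlockBlobReference("myblob");

// Source blob in another account, authorized with a read SAS token (placeholder).
Uri sourceUri = new Uri(
    "https://sourceaccount.blob.core.windows.net/sourcecontainer/sourceblob?<sas-token>");

// Starts the copy and returns the copy ID used to track or abort the operation.
string copyId = await destBlob.StartCopyAsync(sourceUri);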
The Copy Blob operation always copies the entire source blob or file; copying
a range of bytes or set of blocks is not supported. +
A Copy Blob operation can take any of the following forms: +

You can copy a source blob to a destination blob with a different name.
The destination blob can be an existing blob of the same blob type (block,
append, or page), or can be a new blob created by the copy operation.

You can copy a source blob to a destination blob with the same name,
effectively replacing the destination blob. Such a copy operation removes any
uncommitted blocks and overwrites the blob's metadata.

You can copy a source file in the Azure File service to a destination blob.
The destination blob can be an existing block blob, or can be a new block blob
created by the copy operation. Copying from files to page blobs or append
blobs is not supported.

You can copy a snapshot over its base blob. By promoting a snapshot to
the position of the base blob, you can restore an earlier version of a blob.

You can copy a snapshot to a destination blob with a different name. The
resulting destination blob is a writeable blob and not a snapshot.
When copying from a page blob, the Blob service creates a destination page
blob of the source blob's length, initially containing all zeroes. Then the source
page ranges are enumerated, and non-empty ranges are copied.
For a block blob or an append blob, the Blob service creates a committed blob
of zero length before returning from this operation.
When copying from a block blob, all committed blocks and their block IDs are
copied. Uncommitted blocks are not copied. At the end of the copy operation,
the destination blob will have the same committed block count as the source.
When copying from an append blob, all committed blocks are copied. At the
end of the copy operation, the destination blob will have the same committed
block count as the source.
For all blob types, you can call Get Blob or Get Blob Properties on the
destination blob to check the status of the copy operation. The final blob will
be committed when the copy completes.
When the source of a copy operation provides ETags, if there are any changes
to the source while the copy is in progress, the copy will fail. An attempt to
change the destination blob while a copy is in progress will fail with 409

Conflict. If the destination blob has an infinite lease, the lease ID must be
passed to Copy Blob . Finite-duration leases are not allowed.
The ETag for a block blob changes when the Copy Blob operation is initiated
and when the copy finishes. The ETag for a page blob changes when the Copy Blob
operation is initiated, and continues to change frequently during the copy.
The contents of a block blob are only visible using a GET after the full
copy completes.
Copying Blob Properties and Metadata
When a blob is copied, the following system properties are copied to the
destination blob with the same values:

Content-Type

Content-Encoding

Content-Language

Content-Length

Cache-Control

Content-MD5

Content-Disposition

x-ms-blob-sequence-number (for page blobs only)

x-ms-committed-block-count (for append blobs only, and for version 2015-02-21 only)

The source blob's committed block list is also copied to the destination blob, if
the blob is a block blob. Any uncommitted blocks are not copied.
The destination blob is always the same size as the source blob, so the value
of the Content-Length header for the destination blob matches that for the
source blob.
When the source blob and destination blob are the same, Copy Blob removes
any uncommitted blocks. If metadata is specified in this case, the existing
metadata is overwritten with the new metadata.
Copying a Leased Blob
The Copy Blob operation only reads from the source blob so the lease state of
the source blob does not matter. However, the Copy Blob operation saves the
ETag of the source blob when the copy is initiated. If the ETag value changes
before the copy completes, the copy fails. You can prevent changes to the
source blob by leasing it during the copy operation.
If the destination blob has an active infinite lease, you must specify its lease
ID in the call to the Copy Blob operation. If the lease you specify is an active
finite-duration lease, this call fails with a status code 412 (Precondition Failed).
While the copy is pending, any lease operation on the destination blob will fail
with status code 409 (Conflict). An infinite lease on the destination blob is
locked in this way during the copy operation whether you are copying to a
destination blob with a different name from the source, copying to a
destination blob with the same name as the source, or promoting a snapshot
over its base blob. If the client specifies a lease ID on a blob that does not yet
exist, the Blob service will return status code 412 (Precondition Failed) for
requests made against version 2013-08-15 and later; for prior versions the
Blob service will return status code 201 (Created).
Copying Snapshots

When a source blob is copied, any snapshots of the source blob are not copied
to the destination. When a destination blob is overwritten with a copy, any
snapshots associated with the destination blob stay intact under its name.
You can perform a copy operation to promote a snapshot blob over its base
blob. In this way you can restore an earlier version of a blob. The snapshot
remains, but its destination is overwritten with a copy that can be both read
and written.
Working with a Pending Copy (version 2012-02-12 and newer)
The Copy Blob operation completes the copy asynchronously. Use the
following table to determine the next step based on the status code returned
by Copy Blob :
Status Code: Meaning

202 (Accepted), x-ms-copy-status: success
Copy completed successfully.

202 (Accepted), x-ms-copy-status: pending
Copy has not completed. Poll the destination blob using Get Blob Properties to examine the x-ms-copy-status until the copy completes or fails.

4xx, 500, or 503
Copy failed.

During and after a Copy Blob operation, the properties of the destination
blob contain the copy ID of the Copy Blob operation and URL of the source
blob. When the copy completes, the Blob service writes the time and
outcome value (success, failed, or aborted) to the destination blob

properties. If the operation failed, the x-ms-copy-status-description header


contains an error detail string. +
A pending Copy Blob operation has a 2 week timeout. A copy attempt that
has not completed after 2 weeks times out and leaves an empty blob with
the x-ms-copy-status field set to failed and the x-ms-copy-status-description
set to 500 (OperationCancelled). Intermittent, non-fatal errors that can occur
during a copy might impede progress of the copy but not cause it to fail. In
these cases, x-ms-copy-status-description describes the intermittent errors.
Any attempt to modify or snapshot the destination blob during the copy will
fail with 409 (Conflict) Copy Blob in Progress. +
If you call the Abort Copy Blob operation, you will see an x-ms-copy-status: aborted
header, and the destination blob will have intact metadata and a blob length of zero
bytes. You can repeat the original call to Copy Blob to try the copy again.
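In the .NET Storage Client Library, the same pattern looks roughly like the sketch
below: poll the destination blob's copy state through its properties, and call
AbortCopyAsync with the copy ID if you decide to give up. The object names and the
timeout value are placeholders.
Copy

// Assumes destBlob is a CloudBlockBlob on which StartCopyAsync was called,
// and that this code runs inside an async method.
DateTime started = DateTime.UtcNow;
await destBlob.FetchAttributesAsync();           // refreshes CopyState
while (destBlob.CopyState.Status == CopyStatus.Pending)
{
    if (DateTime.UtcNow - started > TimeSpan.FromMinutes(10))   // arbitrary cutoff
    {
        // Abort the pending copy; the destination is left as a zero-length blob.
        await destBlob.AbortCopyAsync(destBlob.CopyState.CopyId);
        break;
    }
    await Task.Delay(TimeSpan.FromSeconds(5));
    await destBlob.FetchAttributesAsync();
}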
Billing +
The destination account of a Copy Blob operation is charged for one
transaction to initiate the copy, and also incurs one transaction for each
request to abort or request the status of the copy operation. +
When the source blob is in another account, the source account incurs
transaction costs. In addition, if the source and destination accounts reside in
different regions (e.g., US North and US South), bandwidth used to transfer
the request is charged to the source storage account as egress. Egress
between accounts within the same region is free. +

When you copy a source blob to a destination blob with a different name
within the same account, you use additional storage resources for the new
blob, so the copy operation results in a charge against the storage account's
capacity usage for those additional resources. However, if the source and
destination blob name are the same within the same account (for example,
when you promote a snapshot to its base blob), no additional charge is
incurred other than the extra copy metadata stored in version 2012-02-12
and newer. +
When you promote a snapshot to replace its base blob, the snapshot and
base blob become identical. They share blocks or pages, so the copy
operation does not result in an additional charge against the storage
account's capacity usage. However, if you copy a snapshot to a destination
blob with a different name, an additional charge is incurred for the storage
resources used by the new blob that results. Two blobs with different names
cannot share blocks or pages even if they are identical. For more information
about snapshot cost scenarios, see Understanding How Snapshots Accrue
Charges.

18 Load balancing for Azure infrastructures


There are two levels of load balancing available for Azure infrastructure
services:+

DNS Level: Load balancing for traffic to different cloud services located in
different data centers, to different Azure websites located in different data
centers, or to external endpoints. This is done with Azure Traffic Manager and the
Round Robin load balancing method.

Network Level: Load balancing of incoming Internet traffic to different


virtual machines of a cloud service, or load balancing of traffic between virtual
machines in a cloud service or virtual network. This is done with the Azure load
balancer.

18.1 Traffic Manager load balancing for cloud services and websites
Traffic Manager allows you to control the distribution of user traffic to
endpoints, which can include cloud services, websites, external sites, and
other Traffic Manager profiles. Traffic Manager works by applying an
intelligent policy engine to Domain Name System (DNS) queries for the
domain names of your Internet resources. Your cloud services or websites
can be running in different datacenters across the world.+
You must use either REST or Windows PowerShell to configure external
endpoints or Traffic Manager profiles as endpoints.+
Traffic Manager uses three load-balancing methods to distribute traffic:+

Failover: Use this method when you want to use a primary endpoint for all
traffic, but provide backups in case the primary becomes unavailable.

Performance: Use this method when you have endpoints in different


geographic locations and you want requesting clients to use the "closest"
endpoint in terms of the lowest latency.

Round Robin: Use this method when you want to distribute load across a set
of cloud services in the same datacenter or across cloud services or websites in
different datacenters.

For more information, see About Traffic Manager Load Balancing Methods.+
The following diagram shows an example of the Round Robin load balancing
method for distributing traffic between different cloud services.+

The basic process is the following:+
1.

An Internet client queries a domain name corresponding to a web service.

2.

DNS forwards the name query request to Traffic Manager.

3.

Traffic Manager chooses the next cloud service in the Round Robin list and
sends back the DNS name. The Internet client's DNS server resolves the name to
an IP address and sends it to the Internet client.

4.

The Internet client connects with the cloud service chosen by Traffic Manager.

For more information, see Traffic Manager.+

18.2 Azure load balancing for virtual machines


Virtual machines in the same cloud service or virtual network can
communicate with each other directly using their private IP addresses.
Computers and services outside the cloud service or virtual network can only

communicate with virtual machines in a cloud service or virtual network with


a configured endpoint. An endpoint is a mapping of a public IP address and
port to that private IP address and port of a virtual machine or web role
within an Azure cloud service.+
The Azure Load Balancer randomly distributes a specific type of incoming
traffic across multiple virtual machines or services in a configuration known
as a load-balanced set. For example, you can spread the load of web request
traffic across multiple web servers or web roles.+
The following diagram shows a load-balanced endpoint for standard
(unencrypted) web traffic that is shared among three virtual machines for
the public and private TCP port of 80. These three virtual machines are in a
load-balanced set.+

For more information, see Azure Load Balancer. For the steps to create a
load-balanced set, see Configure a load-balanced set.+

Azure can also load balance within a cloud service or virtual network. This is
known as internal load balancing and can be used in the following ways:+

To load balance between servers in different tiers of a multi-tier application


(for example, between web and database tiers).

To load balance line-of-business (LOB) applications hosted in Azure without


requiring additional load balancer hardware or software.

To include on-premises servers in the set of computers whose traffic is load


balanced.

Similar to Azure load balancing, internal load balancing is facilitated by


configuring an internal load-balanced set.+
The following diagram shows an example of an internal load-balanced
endpoint for a line of business (LOB) application that is shared among three
virtual machines in a cross-premises virtual network.+

18.3 Load balancer considerations


A load balancer is configured by default to time out an idle session after 4
minutes. If your application behind a load balancer leaves a connection idle
for more than 4 minutes and doesn't have a keep-alive configuration, the
connection will be dropped. You can change the load balancer behavior to
allow a longer idle timeout setting for the Azure load balancer.+
Another consideration is the distribution mode supported by Azure Load
Balancer. You can configure source IP affinity using a 2-tuple hash (source IP,
destination IP) or a 3-tuple hash (source IP, destination IP, and protocol). Check out
Azure Load Balancer distribution mode (source IP affinity) for more information.

19 How to monitor cloud services


To use this feature and other new Azure capabilities, sign up for the free
preview.+
You can monitor key performance metrics for your cloud services in the
Azure classic portal. You can set the level of monitoring to minimal and
verbose for each service role, and can customize the monitoring displays.
Verbose monitoring data is stored in a storage account, which you can
access outside the portal. +
Monitoring displays in the Azure classic portal are highly configurable. You
can choose the metrics you want to monitor in the metrics list on the
Monitor page, and you can choose which metrics to plot in metrics charts
on the Monitor page and the dashboard. +

19.1 Concepts
By default, minimal monitoring is provided for a new cloud service using
performance counters gathered from the host operating system for the role
instances (virtual machines). The minimal metrics are limited to CPU
Percentage, Data In, Data Out, Disk Read Throughput, and Disk Write
Throughput. By configuring verbose monitoring, you can receive additional
metrics based on performance data within the virtual machines (role
instances). The verbose metrics enable closer analysis of issues that occur
during application operations.+
By default performance counter data from role instances is sampled and
transferred from the role instance at 3-minute intervals. When you enable
verbose monitoring, the raw performance counter data is aggregated for
each role instance and across role instances for each role at intervals of 5
minutes, 1 hour, and 12 hours. The aggregated data is purged after 10 days.
+
After you enable verbose monitoring, the aggregated monitoring data is
stored in tables in your storage account. To enable verbose monitoring for a
role, you must configure a diagnostics connection string that links to the
storage account. You can use different storage accounts for different roles.+
Enabling verbose monitoring increases your storage costs related to data
storage, data transfer, and storage transactions. Minimal monitoring does
not require a storage account. The data for the metrics that are exposed at
the minimal monitoring level are not stored in your storage account, even if
you set the monitoring level to verbose.+

19.2 How to: Configure monitoring for cloud services


Use the following procedures to configure verbose or minimal monitoring in
the Azure classic portal. +

19.2.1 Before you begin

Create a classic storage account to store the monitoring data. You can use

different storage accounts for different roles. For more information, see How to
create a storage account.
Enable Azure Diagnostics for your cloud service roles. See Configuring

Diagnostics for Cloud Services.



Ensure that the diagnostics connection string is present in the Role


configuration. You cannot turn on verbose monitoring until you enable Azure
Diagnostics and include a diagnostics connection string in the Role
configuration. +
19.2.1.1.1 Note

Projects targeting Azure SDK 2.5 did not automatically include the
diagnostics connection string in the project template. For these projects, you
need to manually add the diagnostics connection string to the Role
configuration.+
To manually add diagnostics connection string to Role
configuration+
1.

Open the Cloud Service project in Visual Studio

2.

Double-click on the Role to open the Role designer and select the Settings
tab

3.

Look for a setting named


Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString.

4.

If this setting is not present, click on the Add Setting button to add it to the
configuration and change the type for the new setting to ConnectionString

5.

Set the value for the connection string by clicking the ... button. This
will open a dialog that allows you to select a storage account.

19.2.2 To change the monitoring level to verbose or minimal

1.

In the Azure classic portal, open the Configure page for the cloud service
deployment.

2.

In Level, click Verbose or Minimal.

3.

Click Save.

After you turn on verbose monitoring, you should start seeing the monitoring
data in the Azure classic portal within the hour.+
The raw performance counter data and aggregated monitoring data are
stored in the storage account in tables qualified by the deployment ID for the
roles. +

19.3 How to: Receive alerts for cloud service metrics


You can receive alerts based on your cloud service monitoring metrics. On
the Management Services page of the Azure classic portal, you can create
a rule to trigger an alert when the metric you choose reaches a value that
you specify. You can also choose to have email sent when the alert is

triggered. For more information, see How to: Receive Alert Notifications and
Manage Alert Rules in Azure.+

19.4 How to: Add metrics to the metrics table


1.

In the Azure classic portal, open the Monitor page for the cloud service.
By default, the metrics table displays a subset of the available metrics. The
illustration shows the default verbose metrics for a cloud service, which is
limited to the Memory\Available MBytes performance counter, with data
aggregated at the role level. Use Add Metrics to select additional aggregate
and role-level metrics to monitor in the Azure classic portal.

2.

To add metrics to the metrics table:


a.

Click Add Metrics to open Choose Metrics, shown below.


The first available metric is expanded to show options that are available.
For each metric, the top option displays aggregated monitoring data for all
roles. In addition, you can choose individual roles to display data for.

b.

To select metrics to display


Click the down arrow by the metric to expand the monitoring

options.

Select the check box for each monitoring option you want to

display.
You can display up to 50 metrics in the metrics table.

19.4.1.1.1 Tip

In verbose monitoring, the metrics list can contain dozens of metrics. To


display a scrollbar, hover over the right side of the dialog box. To filter the
list, click the search icon, and enter text in the search box, as shown
below.

3.

After you finish selecting metrics, click OK (checkmark).


The selected metrics are added to the metrics table, as shown below.

4.

To delete a metric from the metrics table, click the metric to select it, and
then click Delete Metric. (You only see Delete Metric when you have a metric
selected.)

19.4.2 To add custom metrics to the metrics table
The Verbose monitoring level provides a list of default metrics that you can
monitor on the portal. In addition to these you can monitor any custom
metrics or performance counters defined by your application through the
portal.+
The following steps assume that you have turned on Verbose monitoring
level and have configured your application to collect and transfer custom
performance counters. +
To display the custom performance counters in the portal you need to update
the configuration in wad-control-container:+
1.

Open the wad-control-container blob in your diagnostics storage account.


You can use Visual Studio or any other storage explorer to do this.

2.

Navigate the blob path using the pattern


DeploymentId/RoleName/RoleInstance to find the configuration for your
role instance.

3.

Download the configuration file for your role instance and update it to
include any custom performance counters. For example, to monitor Disk Write
Bytes/sec for the C drive, add the following under the
PerformanceCounters\Subscriptions node:
Copy
xml
<PerformanceCounterConfiguration>

<CounterSpecifier>\LogicalDisk(C:)\Disk Write Bytes/sec</CounterSpecifier>


<SampleRateInSeconds>180</SampleRateInSeconds>
</PerformanceCounterConfiguration>

4.

Save the changes and upload the configuration file back to the same location
overwriting the existing file in the blob.

5.

Toggle to Verbose mode in the Azure classic portal configuration. If you were
in Verbose mode already you will have to toggle to minimal and back to verbose.

6.

The custom performance counter will now be available in the Add Metrics
dialog box.

19.5 How to: Customize the metrics chart


1.

In the metrics table, select up to 6 metrics to plot on the metrics chart. To


select a metric, click the check box on its left side. To remove a metric from
the metrics chart, clear its check box in the metrics table.
As you select metrics in the metrics table, the metrics are added to the
metrics chart. On a narrow display, an n more drop-down list contains metric
headers that won't fit the display.

2.

To switch between displaying relative values (final value only for each
metric) and absolute values (Y axis displayed), select Relative or Absolute at
the top of the chart.

3.

To change the time range the metrics chart displays, select 1 hour, 24
hours, or 7 days at the top of the chart.

On the dashboard metrics chart, the method for plotting metrics is different. A
standard set of metrics is available, and metrics are added or removed by
selecting the metric header.

19.5.1 To customize the metrics chart on the dashboard

1.

Open the dashboard for the cloud service.

2.

Add or remove metrics from the chart:

To plot a new metric, select the check box for the metric in the chart
headers. On a narrow display, click the down arrow by n metrics to plot a
metric that the chart header area can't display.

To delete a metric that is plotted on the chart, clear the check box by
its header.

3.

Switch between Relative and Absolute displays.

4.

Choose 1 hour, 24 hours, or 7 days of data to display.

19.6 How to: Access verbose monitoring data outside the Azure classic portal
Verbose monitoring data is stored in tables in the storage accounts that you
specify for each role. For each cloud service deployment, six tables are
created for the role: two tables for each aggregation interval (5 minutes, 1 hour, and
12 hours). One of these tables stores role-level aggregations; the other table
stores aggregations for role instances. +
The table names have the following format:+

Copy

WAD{deploymentID}PT{aggregation_interval}[R|RI]Table

where:+

deploymentID is the GUID assigned to the cloud service deployment

aggregation_interval = 5M, 1H, or 12H

role-level aggregations = R

aggregations for role instances = RI

For example, the following tables would store verbose monitoring data
aggregated at 1-hour intervals:+
Copy

WAD8b7c4233802442b494d0cc9eb9d8dd9fPT1HRTable (hourly aggregations for the role)

WAD8b7c4233802442b494d0cc9eb9d8dd9fPT1HRITable (hourly aggregations for role


instances)
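For example, you could read the hourly role-level aggregates with the .NET Storage
Client Library. The sketch below uses the example deployment ID shown above and a
placeholder connection string.
Copy

// Read up to 100 rows of hourly role-level aggregates from the verbose
// monitoring table (the table name uses the example deployment ID above).
CloudStorageAccount account =
    CloudStorageAccount.Parse("<diagnostics storage connection string>");
CloudTable table = account.CreateCloudTableClient()
    .GetTableReference("WAD8b7c4233802442b494d0cc9eb9d8dd9fPT1HRTable");

var query = new TableQuery<DynamicTableEntity>().Take(100);
foreach (DynamicTableEntity row in table.ExecuteQuery(query))
{
    Console.WriteLine("{0}  {1}", row.PartitionKey, row.RowKey);
}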

20 Continuous delivery to Azure using Visual Studio Team Services

You can configure your Visual Studio Team Services team projects to
automatically build and deploy to Azure web apps or cloud services. (For
information on how to set up a continuous build and deploy system using an
on-premises Team Foundation Server, see Continuous Delivery for Cloud
Services in Azure.)+
This tutorial assumes you have Visual Studio 2013 and the Azure SDK
installed. If you don't already have Visual Studio 2013, download it by
choosing the Get started for free link at www.visualstudio.com. Install the
Azure SDK from here.+
20.1.1.1.1 Note

You need a Visual Studio Team Services account to complete this tutorial.
You can open a Visual Studio Team Services account for free.+
To set up a cloud service to automatically build and deploy to Azure by using
Visual Studio Team Services, follow these steps.+

20.2 1: Create a team project


Follow the instructions here to create your team project and link it to Visual
Studio. This walkthrough assumes you are using Team Foundation Version
Control (TFVC) as your source control solution. If you want to use Git for
version control, see the Git version of this walkthrough.+

20.3 2: Check in a project to source control


1.

In Visual Studio, open the solution you want to deploy, or create a new
one. You can deploy a web app or a cloud service (Azure Application) by
following the steps in this walkthrough. If you want to create a new solution,
create a new Azure Cloud Service project, or a new ASP.NET MVC project.
Make sure that the project targets .NET Framework 4 or 4.5, and if you are
creating a cloud service project, add an ASP.NET MVC web role and a worker

role, and choose Internet application for the web role. When prompted,
choose Internet Application. If you want to create a web app, choose the
ASP.NET Web Application project template, and then choose MVC. See Create
an ASP.NET web app in Azure App Service.
20.3.1.1.1 Note

Visual Studio Team Services only supports CI deployments of Visual Studio Web
Applications at this time. Web Site projects are out of scope.
2.

Open the context menu for the solution, and choose Add Solution to
Source Control.

3.

Accept or change the defaults and choose the OK button. Once the
process completes, source control icons appear in Solution Explorer.

4.

Open the shortcut menu for the solution, and choose Check In.

5.

In the Pending Changes area of Team Explorer, type a comment for


the check-in and choose the Check In button.

Note the options to include or exclude specific changes when you check in. If
desired changes are excluded, choose the Include All link.

20.4 3: Connect the project to Azure


1.

Now that you have a VS Team Services team project with some source
code in it, you are ready to connect your team project to Azure. In the Azure
classic portal, select your cloud service or web app, or create a new one by
choosing the + icon at the bottom left and choosing Cloud Service or Web
App and then Quick Create. Choose the Set up publishing with Visual
Studio Team Services link.

2.

In the wizard, type the name of your Visual Studio Team Services account
in the textbox and click the Authorize Now link. You might be asked to sign
in.

3.

In the Connection Request pop-up dialog, choose the Accept button to
authorize Azure to configure your team project in VS Team Services.

4.

When authorization succeeds, you see a dropdown containing a list of
your Visual Studio Team Services team projects. Choose the name of the team
project that you created in the previous steps, and then choose the wizard's
checkmark button.

5.

After your project is linked, you will see some instructions for checking in
changes to your Visual Studio Team Services team project. On your next
check-in, Visual Studio Team Services will build and deploy your project to
Azure. Try this now by clicking the Check In from Visual Studio link, and
then the Launch Visual Studio link (or the equivalent Visual Studio button
at the bottom of the portal screen).

20.5 4: Trigger a rebuild and redeploy your project


1.

In Visual Studio's Team Explorer, choose the Source Control Explorer link.

2.

Navigate to your solution file and open it.

3.

In Solution Explorer, open a file and change it. For example, change
the file _Layout.cshtml under the Views\Shared folder in an MVC web role.

4.

Edit the logo for the site and press Ctrl+S to save the file.

5.

In Team Explorer, choose the Pending Changes link.

6.

Enter a comment and then choose the Check In button.

7.

Choose the Home button to return to the Team Explorer home page.

8.

Choose the Builds link to view the builds in progress.

Team Explorer shows that a build has been triggered for your check-in.

9.

Double-click the name of the build in progress to view a detailed log as the
build progresses.

10.

While the build is in progress, take a look at the build definition that was
created when you linked TFS to Azure by using the wizard. Open the shortcut
menu for the build definition and choose Edit Build Definition.

On the Trigger tab, you will see that the build definition is set to build on
every check-in by default.

On the Process tab, you can see the deployment environment is set to the
name of your cloud service or web app. If you are working with web apps, the
properties you see will be different from those shown here.

11.

Specify values for the properties if you want different values than the
defaults. The properties for Azure publishing are in the Deployment section.
The following table shows the available properties in the Deployment
section:

Allow Untrusted Certificates: If false, SSL certificates must be signed by a root
authority.

Allow Upgrade: Allows the deployment to update an existing deployment instead
of creating a new one. Preserves the IP address.

Do Not Delete: If true, do not overwrite an existing unrelated deployment
(upgrade is allowed).

Path to Deployment Settings: The path to your .pubxml file for a web app,
relative to the root folder of the repo. Ignored for cloud services.

Sharepoint Deployment Environment: The same as the service name.

Azure Deployment Environment: The web app or cloud service name.

12.

If you are using multiple service configurations (.cscfg files), you can
specify the desired service configuration in the Build, Advanced, MSBuild
arguments setting. For example, to use ServiceConfiguration.Test.cscfg, set the
MSBuild arguments command-line option /p:TargetProfile=Test.

By this time, your build should be completed successfully.

13.

If you double-click the build name, Visual Studio shows a Build Summary,
including any test results from associated unit test projects.

14.

In the Azure classic portal, you can view the associated deployment on the
Deployments tab when the staging environment is selected.

15.

Browse to your site's URL. For a web app, just click the Browse button on
the command bar. For a cloud service, choose the URL in the Quick Glance
section of the Dashboard page that shows the Staging environment for a
cloud service. Deployments from continuous integration for cloud services are
published to the Staging environment by default. You can change this by
setting the Alternate Cloud Service Environment property to Production.
This screenshot shows where the site URL is on the cloud service's dashboard
page.

A new browser tab will open to reveal your running site.

For cloud services, if you make other changes to your project, you trigger
more builds, and you will accumulate multiple deployments. The latest one is
marked as Active.

20.6 5: Redeploy an earlier build


This step applies to cloud services and is optional. In the Azure classic portal,
choose an earlier deployment and then choose the Redeploy button to
rewind your site to an earlier check-in. Note that this will trigger a new build
in TFS and create a new entry in your deployment history.+

20.7 6: Change the Production deployment


This step applies only to cloud services, not web apps. When you are ready,
you can promote the Staging environment to the production environment by
choosing the Swap button in the Azure classic portal. The newly deployed
Staging environment is promoted to Production, and the previous Production
environment, if any, becomes a Staging environment. The Active deployment
may be different for the Production and Staging environments, but the
deployment history of recent builds is the same regardless of environment.+

20.8 7: Run unit tests


This step applies only to web apps, not cloud services. To put a quality gate
on your deployment, you can run unit tests and if they fail, you can stop the
deployment.+
1.

In Visual Studio, add a unit test project.

2.

Add project references to the project you want to test.

3.

Add some unit tests. To get started, try a dummy test that will always
pass.
Copy

```
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace UnitTestProject1
{
[TestClass]
public class UnitTest1

{
[TestMethod]
[ExpectedException(typeof(NotImplementedException))]
public void TestMethod1()
{
throw new NotImplementedException();
}
}
}
```

4.

Edit the build definition, choose the Process tab, and expand the Test node.

5.

Set the Fail build on test failure to True. This means that the
deployment won't occur unless the tests pass.

6.

Queue a new build.

7.

While the build is proceeding, check on its progress.

8.

When the build is done, check the test results.

9.

Try creating a test that will fail. Add a new test by copying the first one,
renaming it, and commenting out the line of code that states that
NotImplementedException is an expected exception.
Copy

```
[TestMethod]
//[ExpectedException(typeof(NotImplementedException))]
public void TestMethod2()
{

throw new NotImplementedException();


}
```

10.

Check in the change to queue a new build.

11.

View the test results to see details about the failure.

21 Configuring SSL for azure application


Secure Socket Layer (SSL) encryption is the most commonly used method of
securing data sent across the internet. This common task discusses how to
specify an HTTPS endpoint for a web role and how to upload an SSL
certificate to secure your application.+
21.1.1.1.1 Note

The procedures in this task apply to Azure Cloud Services; for App Services,
see this article.+
This task uses a production deployment. Information on using a staging
deployment is provided at the end of this topic.+
Read this article first if you have not yet created a cloud service.+
21.1.1.1.2 Note

Get going faster--use the NEW Azure guided walkthrough! It makes


associating a custom domain name AND securing communication (SSL) with
Azure Cloud Services or Azure App Service Web Apps a snap.+

21.2 Step 1: Get an SSL certificate


To configure SSL for an application, you first need to get an SSL certificate
that has been signed by a Certificate Authority (CA), a trusted third party

who issues certificates for this purpose. If you do not already have one, you
need to obtain one from a company that sells SSL certificates.+
The certificate must meet the following requirements for SSL certificates in
Azure:+

The certificate must contain a private key.

The certificate must be created for key exchange, exportable to a Personal


Information Exchange (.pfx) file.

The certificate's subject name must match the domain used to access the
cloud service. You cannot obtain an SSL certificate from a certificate authority
(CA) for the cloudapp.net domain. You must acquire a custom domain name to
use when accessing your service. When you request a certificate from a CA, the
certificate's subject name must match the custom domain name used to access
your application. For example, if your custom domain name is contoso.com you
would request a certificate from your CA for *.contoso.com or
www.contoso.com.

The certificate must use a minimum of 2048-bit encryption.

For test purposes, you can create and use a self-signed certificate. A self-signed
certificate is not authenticated through a CA and can use the
cloudapp.net domain as the website URL. For example, the following task
uses a self-signed certificate in which the common name (CN) used in the
certificate is sslexample.cloudapp.net.
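For example, here is a minimal C# sketch of generating a self-signed test certificate
and exporting it to a .pfx file. It assumes .NET Framework 4.7.2 or later (or .NET Core),
where the CertificateRequest class is available; the output file name and password are
placeholders. You can also create a test certificate with tools such as IIS or makecert.

```
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class SelfSignedCertificateSketch
{
    static void Main()
    {
        // The subject name must match the URL used to reach the service;
        // for a test deployment that is the cloudapp.net address.
        using (RSA rsa = RSA.Create(2048))   // Azure requires at least 2048-bit keys.
        {
            var request = new CertificateRequest(
                "CN=sslexample.cloudapp.net",
                rsa,
                HashAlgorithmName.SHA256,
                RSASignaturePadding.Pkcs1);

            // Create a certificate that is valid for one year.
            X509Certificate2 certificate = request.CreateSelfSigned(
                DateTimeOffset.UtcNow.AddDays(-1),
                DateTimeOffset.UtcNow.AddYears(1));

            // Export a password-protected .pfx file that can be uploaded to the
            // cloud service. The file name and password are placeholders.
            File.WriteAllBytes(
                "sslexample.pfx",
                certificate.Export(X509ContentType.Pfx, "placeholder-password"));
        }
    }
}
```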
Next, you must include information about the certificate in your service
definition and service configuration files.+

21.3 Step 2: Modify the service definition and configuration files


Your application must be configured to use the certificate, and an HTTPS
endpoint must be added. As a result, the service definition and service
configuration files need to be updated.+
1.

In your development environment, open the service definition file


(CSDEF), add a Certificates section within the WebRole section, and include
the following information about the certificate (and intermediate certificates):
Copy
xml
<WebRole name="CertificateTesting" vmsize="Small">
...
<Certificates>
<Certificate name="SampleCertificate"
storeLocation="LocalMachine"
storeName="My"
permissionLevel="limitedOrElevated" />
<!-- IMPORTANT! Unless your certificate is either
self-signed or signed directly by the CA root, you
must include all the intermediate certificates
here. You must list them here, even if they are
not bound to any endpoints. Failing to list any of
the intermediate certificates may cause hard-to-reproduce
interoperability problems on some clients.-->
<Certificate name="CAForSampleCertificate"
storeLocation="LocalMachine"

storeName="CA"
permissionLevel="limitedOrElevated" />
</Certificates>
...
</WebRole>

The Certificates section defines the name of our certificate, its location, and
the name of the store where it is located.
Permissions (the permissionLevel attribute) can be set to one of the following
values:

limitedOrElevated: (Default) All role processes can access the private key.

elevated: Only elevated processes can access the private key.

2.

In your service definition file, add an InputEndpoint element within the


Endpoints section to enable HTTPS:
Copy
xml
<WebRole name="CertificateTesting" vmsize="Small">
...
<Endpoints>
<InputEndpoint name="HttpsIn" protocol="https" port="443"
certificate="SampleCertificate" />
</Endpoints>

...
</WebRole>

3.

In your service definition file, add a Binding element within the Sites
section. This section adds an HTTPS binding to map the endpoint to your site:
Copy
xml
<WebRole name="CertificateTesting" vmsize="Small">
...
<Sites>
<Site name="Web">
<Bindings>
<Binding name="HttpsIn" endpointName="HttpsIn" />
</Bindings>
</Site>
</Sites>
...
</WebRole>

All the required changes to the service definition file have been completed,
but you still need to add the certificate information to the service
configuration file.
4.

In your service configuration file (CSCFG),


ServiceConfiguration.Cloud.cscfg, add a Certificates section within the Role
section, replacing the sample thumbprint value shown below with that of your
certificate:
Copy

xml
<Role name="Deployment">
...
<Certificates>
<Certificate name="SampleCertificate"
thumbprint="9427befa18ec6865a9ebdc79d4c38de50e6316ff"
thumbprintAlgorithm="sha1" />
<Certificate name="CAForSampleCertificate"
thumbprint="79d4c38de50e6316ff9427befa18ec6865a9ebdc"
thumbprintAlgorithm="sha1" />
</Certificates>
...
</Role>

(The preceding example uses sha1 for the thumbprint algorithm. Specify the
appropriate value for your certificate's thumbprint algorithm.)+
Now that the service definition and service configuration files have been
updated, package your deployment for uploading to Azure. If you are using
cspack, don't use the /generateConfigurationFile flag, as that overwrites
the certificate information you inserted.+

21.4 Step 3: Upload a certificate


Your deployment package has been updated to use the certificate, and an
HTTPS endpoint has been added. Now you can upload the package and
certificate to Azure with the Azure classic portal.+
1.

Log in to the Azure classic portal.

2.

Click Cloud Services on the left-side navigation pane.

3.

Click the desired cloud service.

4.

Click the Certificates tab.

5.

Click the Upload button.

6.

Provide the File, Password, then click Complete (the checkmark).

21.5 Step 4: Connect to the role instance by using HTTPS


Now that your deployment is up and running in Azure, you can connect to it
using HTTPS.+
1.

In the Azure classic portal, select your deployment, then click the link
under Site URL.

2.

In your web browser, modify the link to use https instead of http, and
then visit the page.
21.5.1.1.1 Note

If you are using a self-signed certificate, when you browse to an HTTPS


endpoint that's associated with the self-signed certificate you may see a
certificate error in the browser. Using a certificate signed by a trusted
certification authority eliminates this problem; in the meantime, you can
ignore the error. (Another option is to add the self-signed certificate to the
user's trusted certificate authority certificate store.)

If you want to use SSL for a staging deployment instead of a production


deployment, you first need to determine the URL used for the staging
deployment. Deploy your cloud service to the staging environment without
including a certificate or any certificate information. Once deployed, you can
determine the GUID-based URL, which is listed in the Azure classic portal's
Site URL field. Create a certificate with the common name (CN) equal to the
GUID-based URL (for example, 32818777-6e77-4ced-a8fc-57609d404462.cloudapp.net).
Use the Azure classic portal to add the
certificate to your staged cloud service. Then, add the certificate information
to your CSDEF and CSCFG files, repackage your application, and update your
staged deployment to use the new package.

22 Using shared access signature


22.1 Overview
Using a shared access signature (SAS) is a powerful way to grant limited
access to objects in your storage account to other clients, without having to
expose your account key. In Part 1 of this tutorial on shared access
signatures, we'll provide an overview of the SAS model and review SAS best
practices.+
For additional code examples using SAS beyond those presented here, see
Getting Started with Azure Blob Storage in .NET and other samples available
in the Azure Code Samples library. You can download the sample applications
and run them, or browse the code on GitHub.+

22.2 What is a shared access signature?


A shared access signature provides delegated access to resources in your
storage account. With a SAS, you can grant clients access to resources in
your storage account, without sharing your account keys. This is the key
point of using shared access signatures in your applications: a SAS is a
secure way to share your storage resources without compromising your
account keys.
22.2.1.1.1

Important

Your storage account key is similar to the root password for your storage
account. Always be careful to protect your account key. Avoid distributing it

to other users, hard-coding it, or saving it in a plain-text file that is accessible


to others. Regenerate your account key using the Azure Portal if you believe
it may have been compromised. To learn how to regenerate your account
key, see How to create, manage, or delete a storage account in the Azure
Portal.+
A SAS gives you granular control over what type of access you grant to
clients who have the SAS, including:+

The interval over which the SAS is valid, including the start time and the
expiry time.

The permissions granted by the SAS. For example, a SAS on a blob might
grant a user read and write permissions to that blob, but not delete permissions.

An optional IP address or range of IP addresses from which Azure Storage will


accept the SAS. For example, you might specify a range of IP addresses belonging
to your organization. This provides another measure of security for your SAS.

The protocol over which Azure Storage will accept the SAS. You can use this
optional parameter to restrict access to clients using HTTPS.

22.3 When should you use a shared access signature?


You can use a SAS when you want to provide access to resources in your
storage account to a client that can't be trusted with the account key. Your
storage account keys include both a primary and secondary key, both of
which grant administrative access to your account and all of the resources in
it. Exposing either of your account keys opens your account to the possibility
of malicious or negligent use. Shared access signatures provide a safe
alternative that allows other clients to read, write, and delete data in your
storage account according to the permissions you've granted, and without
need for the account key.+

A common scenario where a SAS is useful is a service where users read and
write their own data to your storage account. In a scenario where a storage
account stores user data, there are two typical design patterns:+
1. Clients upload and download data via a front-end proxy service, which
performs authentication. This front-end proxy service has the advantage of
allowing validation of business rules, but for large amounts of data or
high-volume transactions, creating a service that can scale to match demand may
be expensive or difficult.

2. A lightweight service authenticates the client as needed and then
generates a SAS. Once the client receives the SAS, they can access storage
account resources directly with the permissions defined by the SAS and for
the interval allowed by the SAS. The SAS mitigates the need for routing all
data through the front-end proxy service.+

Many real-world services may use a hybrid of these two approaches,


depending on the scenario involved, with some data processed and validated
via the front-end proxy while other data is saved and/or read directly using
SAS.+
Additionally, you will need to use a SAS to authenticate the source object in a
copy operation in certain scenarios:+

When you copy a blob to another blob that resides in a different storage
account, you must use a SAS to authenticate the source blob (see the sketch after
this list). With version 2015-04-05, you can optionally use a SAS to authenticate
the destination blob as well.

When you copy a file to another file that resides in a different storage
account, you must use a SAS to authenticate the source file. With version
2015-04-05, you can optionally use a SAS to authenticate the destination file as well.

When you copy a blob to a file, or a file to a blob, you must use a SAS to
authenticate the source object, even if the source and destination objects reside
within the same storage account.
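Referring to the first scenario in the list above, the following is a rough C# sketch of
starting a copy into a destination container using a SAS URI for the source blob. It
assumes the Azure Storage Client Library for .NET (6.x or later), where
CloudBlockBlob.StartCopy accepts a source URI; the method and parameter names are
illustrative.

```
using System;
using Microsoft.WindowsAzure.Storage.Blob;

class CrossAccountCopySketch
{
    // Starts a server-side copy into destContainer using a SAS URI for the
    // source blob. The SAS must grant at least read access to the source.
    // The returned string is the copy ID, which can be used to monitor or
    // abort the copy.
    static string StartCopyWithSourceSas(
        CloudBlobContainer destContainer, string destBlobName, Uri sourceBlobSasUri)
    {
        CloudBlockBlob destBlob = destContainer.GetBlockBlobReference(destBlobName);
        return destBlob.StartCopy(sourceBlobSasUri);
    }
}
```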

22.4 Types of shared access signatures


Version 2015-04-05 of Azure Storage introduces a new type of shared access
signature, the account SAS. You can now create either of two types of shared
access signatures:+

Account SAS. The account SAS delegates access to resources in one or


more of the storage services. All of the operations available via a service SAS are
also available via an account SAS. Additionally, with the account SAS, you can
delegate access to operations that apply to a given service, such as Get/Set
Service Properties and Get Service Stats. You can also delegate access to
read, write, and delete operations on blob containers, tables, queues, and file
shares that are not permitted with a service SAS. See Constructing an Account
SAS for in-depth information about constructing the account SAS token.

Service SAS. The service SAS delegates access to a resource in just one of
the storage services: the Blob, Queue, Table, or File service. See Constructing a
Service SAS and Service SAS Examples for in-depth information about
constructing the service SAS token.

22.5 How a shared access signature works


A shared access signature is a signed URI that points to one or more storage
resources and includes a token that contains a special set of query
parameters. The token indicates how the resources may be accessed by the
client. One of the query parameters, the signature, is constructed from the
SAS parameters and signed with the account key. This signature is used by
Azure Storage to authenticate the SAS.+
Here's an example of a SAS URI, showing the resource URI and the SAS
token (see the Examples of SAS URIs section later in this topic for a complete example).
Note that the SAS token is a string generated on the client side (see the SAS
examples section below for code examples). The SAS token generated by the
storage client library is not tracked by Azure Storage in any way. You can
create an unlimited number of SAS tokens on the client side.+
When a client provides a SAS URI to Azure Storage as part of a request, the
service checks the SAS parameters and signature to verify that it is valid for

authenticating the request. If the service verifies that the signature is valid,
then the request is authenticated. Otherwise, the request is declined with
error code 403 (Forbidden).+

22.6 Shared access signature parameters


The account SAS and service SAS tokens include some common parameters,
and also take a few parameters that are different.
22.6.1

Parameters common to account SAS and service SAS tokens

Api version. An optional parameter that specifies the storage service version
to use to execute the request.

Service version. A required parameter that specifies the storage service
version to use to authenticate the request.

Start time. This is the time at which the SAS becomes valid. The start time
for a shared access signature is optional; if omitted, the SAS is effective
immediately. Must be expressed in UTC (Coordinated Universal Time), with a
special UTC designator ("Z") i.e. 1994-11-05T13:15:30Z.

Expiry time. This is the time after which the SAS is no longer valid. Best
practices recommend that you either specify an expiry time for a SAS, or
associate it with a stored access policy. Must be expressed in UTC (Coordinated
Universal Time), with a special UTC designator ("Z") i.e. 1994-11-05T13:15:30Z
(see more below).

Permissions. The permissions specified on the SAS indicate what operations


the client can perform against the storage resource using the SAS. Available
permissions differ for an account SAS and a service SAS.

IP. An optional parameter that specifies an IP address or a range of IP


addresses outside of Azure (see the section Routing session configuration state
for Express Route) from which to accept requests.

Protocol. An optional parameter that specifies the protocol permitted for a


request. Possible values are both HTTPS and HTTP (https,http), which is the
default value, or HTTPS only (https). Note that HTTP only is not a permitted value.

Signature. The signature is constructed from the other parameters specified
as part of the token and then encrypted. It's used to authenticate the SAS (see the
sketch after this list).
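To illustrate how these parameters map onto the storage client library, here is a hedged
C# sketch that generates an ad hoc service SAS for a blob with a start time, an expiry
time, permissions, an IP range, and an HTTPS-only protocol restriction. It assumes the
Azure Storage Client Library for .NET 6.x or later, where GetSharedAccessSignature has
an overload accepting SharedAccessProtocol and IPAddressOrRange.

```
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasParameterSketch
{
    // Generates an ad hoc service SAS token for a blob that carries the
    // parameters described above: start time (st), expiry time (se),
    // permissions (sp), an IP range (sip), and HTTPS only (spr).
    static string GetRestrictedBlobSas(CloudBlockBlob blob)
    {
        SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
        {
            // Back-date the start time slightly to allow for clock skew.
            SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-15),
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1),
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write
        };

        // The last two arguments map to the spr and sip query parameters.
        return blob.GetSharedAccessSignature(
            policy,
            null,                                            // no custom response headers
            null,                                            // no stored access policy
            SharedAccessProtocol.HttpsOnly,
            new IPAddressOrRange("168.1.5.60", "168.1.5.70"));
    }
}
```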

22.6.2

Parameters for an account SAS token

Service or services. An account SAS can delegate access to one or more of

the storage services. For example, you can create an account SAS that delegates
access to the Blob and File service. Or you can create a SAS that delegates
access to all four services (Blob, Queue, Table, and File).
Storage resource types. An account SAS applies to one or more classes of

storage resources, rather than a specific resource. You can create an account SAS
to delegate access to:
Service-level APIs, which are called against the storage account

resource. Examples include Get/Set Service Properties, Get Service Stats,


and List Containers/Queues/Tables/Shares.
Container-level APIs, which are called against the container objects for

each service: blob containers, queues, tables, and file shares. Examples
include Create/Delete Container, Create/Delete Queue, Create/Delete
Table, Create/Delete Share, and List Blobs/Files and Directories.
Object-level APIs, which are called against blobs, queue messages,

table entities, and files. For example, Put Blob, Query Entity, Get
Messages, and Create File.
+

22.6.3

Parameters for a service SAS token

Storage resource. Storage resources for which you can delegate access

with a service SAS include:


Containers and blobs

File shares and files

Queues

Tables and ranges of table entities.

22.7 Examples of SAS URIs


Here is an example of a service SAS URI that provides read and write
permissions to a blob. The table breaks down each part of the URI to show
how it contributes to the SAS:
Copy

https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D

Blob URI: https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt
The address of the blob. Note that using HTTPS is highly recommended.

Storage services version: sv=2015-04-05
For storage services version 2012-02-12 and later, this parameter indicates the
version to use.

Start time: st=2015-04-29T22%3A18%3A26Z
Specified in UTC time. If you want the SAS to be valid immediately, omit the
start time.

Expiry time: se=2015-04-30T02%3A23%3A26Z
Specified in UTC time.

Resource: sr=b
The resource is a blob.

Permissions: sp=rw
The permissions granted by the SAS include Read (r) and Write (w).

IP range: sip=168.1.5.60-168.1.5.70
The range of IP addresses from which a request will be accepted.

Protocol: spr=https
Only requests using HTTPS are permitted.

Signature: sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
Used to authenticate access to the blob. The signature is an HMAC computed
over a string-to-sign and key using the SHA256 algorithm, and then encoded
using Base64 encoding.

And here is an example of an account SAS that uses the same common
parameters on the token. Since these parameters are described above, they
are not described here. Only the parameters that are specific to account SAS
are described in the table below.
Copy

https://myaccount.blob.core.windows.net/?restype=service&comp=properties&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=F%6GRVAZ5Cdj2Pw4tgU7IlSTkWgn7bUkkAg8P6HESXwmf%4B

Resource URI: https://myaccount.blob.core.windows.net/?restype=service&comp=properties
The Blob service endpoint, with parameters for getting service properties
(when called with GET) or setting service properties (when called with SET).

Services: ss=bf
The SAS applies to the Blob and File services.

Resource types: srt=s
The SAS applies to service-level operations.

Permissions: sp=rw
The permissions grant access to read and write operations.

Given that permissions are restricted to the service level, accessible
operations with this SAS are Get Blob Service Properties (read) and Set
Blob Service Properties (write). However, with a different resource URI,
the same SAS token could also be used to delegate access to Get Blob
Service Stats (read).

22.8 Controlling a SAS with a stored access policy


A shared access signature can take one of two forms:+

Ad hoc SAS: When you create an ad hoc SAS, the start time, expiry time,
and permissions for the SAS are all specified on the SAS URI (or implied, in the
case where start time is omitted). This type of SAS may be created as an account
SAS or a service SAS.

SAS with stored access policy: A stored access policy is defined on a


resource container - a blob container, table, queue, or file share - and can be
used to manage constraints for one or more shared access signatures. When you
associate a SAS with a stored access policy, the SAS inherits the constraints - the
start time, expiry time, and permissions - defined for the stored access policy.

+
22.8.1.1.1

Note

Currently, an account SAS must be an ad hoc SAS. Stored access policies are
not yet supported for account SAS.+
The difference between the two forms is important for one key scenario:
revocation. A SAS is a URL, so anyone who obtains the SAS can use it,
regardless of who requested it to begin with. If a SAS is published publicly, it
can be used by anyone in the world. A SAS that is distributed is valid until
one of four things happens:+
1.

The expiry time specified on the SAS is reached.

2.

The expiry time specified on the stored access policy referenced by the SAS is
reached (if a stored access policy is referenced, and if it specifies an expiry time).
This can either occur because the interval elapses, or because you have modified
the stored access policy to have an expiry time in the past, which is one way to
revoke the SAS.

3.

The stored access policy referenced by the SAS is deleted, which is another
way to revoke the SAS. Note that if you recreate the stored access policy with
exactly the same name, all existing SAS tokens will again be valid according to
the permissions associated with that stored access policy (assuming that the
expiry time on the SAS has not passed). If you are intending to revoke the SAS,
be sure to use a different name if you recreate the access policy with an expiry
time in the future.

4.

The account key that was used to create the SAS is regenerated. Note that
doing this will cause all application components using that account key to fail to
authenticate until they are updated to use either the other valid account key or
the newly regenerated account key.

22.8.1.1.2

Important

A shared access signature URI is associated with the account key used to
create the signature, and the associated stored access policy (if any). If no
stored access policy is specified, the only way to revoke a shared access
signature is to change the account key.+
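For example, here is a minimal C# sketch of revoking every SAS that references a
stored access policy by removing the policy from its container, assuming the Azure
Storage Client Library for .NET; the method name is illustrative. Ad hoc SAS tokens are
unaffected by this operation.

```
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

class RevokeSasSketch
{
    // Revokes every SAS that references the given stored access policy by
    // removing the policy from the container. Ad hoc SAS tokens on the
    // container are unaffected; only regenerating the account key revokes those.
    static async Task RevokePolicyAsync(CloudBlobContainer container, string policyName)
    {
        BlobContainerPermissions permissions = await container.GetPermissionsAsync();

        if (permissions.SharedAccessPolicies.Remove(policyName))
        {
            await container.SetPermissionsAsync(permissions);
        }
    }
}
```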

22.9 Authenticating from a client application with a SAS


A client who is in possession of a SAS can use the SAS to authenticate a
request against a storage account for which they do not possess the account
keys. A SAS can be included in a connection string, or used directly from the
appropriate constructor or method.+
22.9.1
Using a SAS in a connection string
If you possess a shared access signature (SAS) URL that grants you access to
resources in a storage account, you can use the SAS in a connection string.
Because the SAS includes, on the URI, the information required to
authenticate the request, the SAS URI provides the protocol, the service
endpoint, and the necessary credentials to access the resource.
To create a connection string that includes a shared access signature, specify
the string in the following format:+
Copy

BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken

Each service endpoint is optional, although the connection string must


contain at least one.+
22.9.1.1.1

Note

Using HTTPS with a SAS is recommended as a best practice.+


If you are specifying a SAS in a connection string in a configuration file, you
may need to encode special characters in the URL.+
22.9.2
Service SAS example
Here's an example of a connection string that includes a service SAS for Blob
storage:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D

And here's an example of the same connection string with encoding of


special characters:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&amp;sr=b&amp;si=tutorial-policy635959936145100803&amp;sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D

22.9.3
Account SAS example
Here's an example of a connection string that includes an account SAS for
Blob and File storage. Note that endpoints for both services are specified:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl

And here's an example of the same connection string with URL encoding:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&amp;sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&amp;spr=https&amp;st=2016-04-12T03%3A24%3A31Z&amp;se=2016-04-13T03%3A29%3A31Z&amp;srt=s&amp;ss=bf&amp;sp=rwl
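Once you have such a connection string, you can pass it to CloudStorageAccount.Parse
just as you would a connection string that contains an account key. The following C#
sketch assumes the Azure Storage Client Library for .NET; the endpoint, container name,
and SAS token value are placeholders.

```
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasConnectionStringSketch
{
    static void Main()
    {
        // A connection string in the format shown above. The endpoint,
        // container name, and SAS token value are placeholders.
        string connectionString =
            "BlobEndpoint=https://storagesample.blob.core.windows.net;" +
            "SharedAccessSignature=sv=2015-07-08&sr=c&sp=rl&se=2016-10-18T21%3A51%3A37Z&sig=<signature>";

        // The credentials are taken from the SAS token rather than an account key.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // The client can perform only the operations permitted by the SAS.
        CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
    }
}
```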

22.9.4
Using a SAS in a constructor or method
Several Azure Storage client library constructors and method overloads offer
a SAS parameter, so that you can authenticate a request to the service with
a SAS.+
For example, here a SAS URI is used to create a reference to a block blob.
The SAS provides the only credentials needed for the request. The block blob
reference is then used for a write operation:+
Copy
C#
string sasUri = "https://storagesample.blob.core.windows.net/sample-container/" +
"sampleBlob.txt?sv=2015-07-08&sr=b&sig=39Up9JzHkxhUIhFEjEH9594DJxe7w6cIRCg0V6lCGSo%3D" +
"&se=2016-10-18T21%3A51%3A37Z&sp=rcw";

CloudBlockBlob blob = new CloudBlockBlob(new Uri(sasUri));

// Create operation: Upload a blob with the specified name to the container.
// If the blob does not exist, it will be created. If it does exist, it will be overwritten.
try
{
MemoryStream msWrite = new MemoryStream(Encoding.UTF8.GetBytes(blobContent));
msWrite.Position = 0;
using (msWrite)
{
await blob.UploadFromStreamAsync(msWrite);
}

Console.WriteLine("Create operation succeeded for SAS {0}", sasUri);


Console.WriteLine();
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
Console.WriteLine("Create operation failed for SAS {0}", sasUri);
Console.WriteLine("Additional error information: " + e.Message);
Console.WriteLine();
}
else

{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}

22.10

Best practices for using SAS

When you use shared access signatures in your applications, you need to be
aware of two potential risks:+

If a SAS is leaked, it can be used by anyone who obtains it, which can
potentially compromise your storage account.

If a SAS provided to a client application expires and the application is unable


to retrieve a new SAS from your service, then the application's functionality may
be hindered.

The following recommendations for using shared access signatures will help
balance these risks:+
1.

Always use HTTPS to create a SAS or to distribute a SAS. If a SAS is passed


over HTTP and intercepted, an attacker performing a man-in-the-middle attack
will be able to read the SAS and then use it just as the intended user could have,
potentially compromising sensitive data or allowing for data corruption by the
malicious user.

2.

Reference stored access policies where possible. Stored access policies


give you the option to revoke permissions without having to regenerate the
storage account keys. Set the expiration on these to be a very long time (or
infinite) and make sure that it is regularly updated to move it farther into the
future.

3.

Use near-term expiration times on an ad hoc SAS. In this way, even if a


SAS is compromised unknowingly, it will only be viable for a short time duration.
This practice is especially important if you cannot reference a stored access
policy. This practice also helps limit the amount of data that can be written to a
blob by limiting the time available to upload to it.

4.

Have clients automatically renew the SAS if necessary. Clients should


renew the SAS well before the expiration, in order to allow time for retries if the
service providing the SAS is unavailable. If your SAS is meant to be used for a
small number of immediate, short-lived operations that are expected to be
completed within the expiration period, then this may be unnecessary as the SAS
is not expected to be renewed. However, if you have client that is routinely
making requests via SAS, then the possibility of expiration comes into play. The
key consideration is to balance the need for the SAS to be short-lived (as stated
above) with the need to ensure that the client is requesting renewal early enough
to avoid disruption due to the SAS expiring prior to successful renewal.

5.

Be careful with SAS start time. If you set the start time for a SAS to now,
then due to clock skew (differences in current time according to different
machines), failures may be observed intermittently for the first few minutes. In
general, set the start time to be at least 15 minutes ago, or don't set it at all,
which will make it valid immediately in all cases. The same generally applies to
expiry time as well - remember that you may observe up to 15 minutes of clock
skew in either direction on any request. Note that for clients using a REST version prior
to 2012-02-12, the maximum duration for a SAS that does not reference a stored
access policy is 1 hour, and any policy specifying a longer term than that will fail.

6.

Be specific with the resource to be accessed. A typical security best


practice is to provide a user with the minimum required privileges. If a user only
needs read access to a single entity, then grant them read access to that single
entity, and not read/write/delete access to all entities. This also helps mitigate the
threat of the SAS being compromised, as the SAS has less power in the hands of
an attacker.

7.

Understand that your account will be billed for any usage, including
that done with SAS. If you provide write access to a blob, a user may choose to
upload a 200 GB blob. If you've given them read access as well, they may choose
to download it 10 times, incurring 2 TB in egress costs for you. Again, provide
limited permissions, to help mitigate the potential of malicious users. Use
short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).

8.

Validate data written using SAS. When a client application writes data to
your storage account, keep in mind that there can be problems with that data. If
your application requires that the data be validated or authorized before it is
ready to use, you should perform this validation after the data is written and
before it is used by your application. This practice also protects against corrupt or
malicious data being written to your account, either by a user who properly
acquired the SAS, or by a user exploiting a leaked SAS.

9.

Don't always use SAS. Sometimes the risks associated with a particular
operation against your storage account outweigh the benefits of SAS. For such
operations, create a middle-tier service that writes to your storage account after
performing business rule validation, authentication, and auditing. Also,
sometimes it's simpler to manage access in other ways. For example, if you want
to make all blobs in a container publicly readable, you can make the container
Public, rather than providing a SAS to every client for access.

10.

Use Storage Analytics to monitor your application. You can use logging
and metrics to observe any spike in authentication failures due to an outage in
your SAS provider service or to the inadvertent removal of a stored access policy.
See the Azure Storage Team Blog for additional information.

22.11

SAS examples

Below are some examples of both types of shared access signatures, account
SAS and service SAS.+
To run these examples, you'll need to download and reference these
packages:+

Azure Storage Client Library for .NET, version 6.x or later (to use account

SAS).
Azure Configuration Manager

For additional examples that show how to create and test a SAS, see Azure
Code Samples for Storage.+
22.11.1
Example: Create and use an account SAS
The following code example creates an account SAS that is valid for the Blob
and File services, and gives the client read, write, and list
permissions to access service-level APIs. The account SAS restricts the
protocol to HTTPS, so the request must be made with HTTPS.
Copy
C#
static string GetAccountSASToken()
{
// To create the account SAS, you need to use your shared key credentials. Modify for your account.
const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=accountname;AccountKey=account-key";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);

// Create a new access policy for the account.


SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
Permissions = SharedAccessAccountPermissions.Read |
SharedAccessAccountPermissions.Write | SharedAccessAccountPermissions.List,
Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,

ResourceTypes = SharedAccessAccountResourceTypes.Service,
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
Protocols = SharedAccessProtocol.HttpsOnly
};

// Return the SAS token.


return storageAccount.GetSharedAccessSignature(policy);
}

To use the account SAS to access service-level APIs for the Blob service,
construct a Blob client object using the SAS and the Blob storage endpoint
for your storage account.+
Copy
C#
static void UseAccountSAS(string sasToken)
{
// Create new storage credentials using the SAS token.
StorageCredentials accountSAS = new StorageCredentials(sasToken);
// Use these credentials and the account name to create a Blob service client.
CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS,
"account-name", endpointSuffix: null, useHttps: true);
CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();

// Now set the service properties for the Blob client created with the SAS.
blobClientWithSAS.SetServiceProperties(new ServiceProperties()
{

HourMetrics = new MetricsProperties()


{
MetricsLevel = MetricsLevel.ServiceAndApi,
RetentionDays = 7,
Version = "1.0"
},
MinuteMetrics = new MetricsProperties()
{
MetricsLevel = MetricsLevel.ServiceAndApi,
RetentionDays = 7,
Version = "1.0"
},
Logging = new LoggingProperties()
{
LoggingOperations = LoggingOperations.All,
RetentionDays = 14,
Version = "1.0"
}
});

// The permissions granted by the account SAS also permit you to retrieve service properties.
ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
Console.WriteLine(serviceProperties.HourMetrics.Version);
}

22.11.2
Example: Create a stored access policy
The following code creates a stored access policy on a container. You can use
the access policy to specify constraints for a service SAS on the container or
its blobs.+
Copy
C#
private static async Task CreateSharedAccessPolicyAsync(CloudBlobContainer container,
string policyName)
{
// Create a new shared access policy and define its constraints.
// The access policy provides create, write, read, list, and delete permissions.
SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
{
// When the start time for the SAS is omitted, the start time is assumed to be the time when the storage service receives the request.
// Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List |
SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create |
SharedAccessBlobPermissions.Delete
};

// Get the container's existing permissions.


BlobContainerPermissions permissions = await container.GetPermissionsAsync();

// Add the new policy to the container's permissions, and set the container's permissions.
permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
await container.SetPermissionsAsync(permissions);
}

22.11.3
Example: Create a service SAS on a container
The following code creates a SAS on a container. If the name of an existing
stored access policy is provided, that policy is associated with the SAS. If no
stored access policy is provided, then the code creates an ad-hoc SAS on the
container.+
Copy
C#
private static string GetContainerSasUri(CloudBlobContainer container, string
storedPolicyName = null)
{
string sasContainerToken;

// If no stored policy is specified, create a new access policy and define its constraints.
if (storedPolicyName == null)
{
// Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS,
// and to construct a shared access policy that is saved to the container's shared access policies.
SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
{
// When the start time for the SAS is omitted, the start time is assumed to be the time when the storage service receives the request.
// Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
Permissions = SharedAccessBlobPermissions.Write |
SharedAccessBlobPermissions.List
};

// Generate the shared access signature on the container, setting the constraints directly on the signature.
sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);

Console.WriteLine("SAS for blob container (ad hoc): {0}", sasContainerToken);


Console.WriteLine();
}
else
{
// Generate the shared access signature on the container. In this case, all of the constraints for the
// shared access signature are specified on the stored access policy, which is provided by name.
// It is also possible to specify some constraints on an ad-hoc SAS and others on the stored access policy.
sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);

Console.WriteLine("SAS for blob container (stored access policy): {0}",


sasContainerToken);
Console.WriteLine();
}

// Return the URI string for the container, including the SAS token.
return container.Uri + sasContainerToken;
}

22.11.4
Example: Create a service SAS on a blob
The following code creates a SAS on a blob. If the name of an existing stored
access policy is provided, that policy is associated with the SAS. If no stored
access policy is provided, then the code creates an ad-hoc SAS on the blob.+
Copy
C#
private static string GetBlobSasUri(CloudBlobContainer container, string blobName, string
policyName = null)
{
string sasBlobToken;

// Get a reference to a blob within the container.


// Note that the blob may not exist yet, but a SAS can still be created for it.
CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

if (policyName == null)
{
// Create a new access policy and define its constraints.
// Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS,
// and to construct a shared access policy that is saved to the container's shared access policies.
SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
{

// When the start time for the SAS is omitted, the start time is assumed to be the time when the storage service receives the request.
// Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
Permissions = SharedAccessBlobPermissions.Read |
SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create
};

// Generate the shared access signature on the blob, setting the constraints directly on the signature.
sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);

Console.WriteLine("SAS for blob (ad hoc): {0}", sasBlobToken);


Console.WriteLine();
}
else
{
// Generate the shared access signature on the blob. In this case, all of the constraints for the
// shared access signature are specified on the container's stored access policy.
sasBlobToken = blob.GetSharedAccessSignature(null, policyName);

Console.WriteLine("SAS for blob (stored access policy): {0}", sasBlobToken);


Console.WriteLine();
}

// Return the URI string for the blob, including the SAS token.
return blob.Uri + sasBlobToken;


}

22.12

Conclusion

Shared access signatures are useful for providing limited permissions to your
storage account to clients that should not have the account key. As such,
they are a vital part of the security model for any application using Azure
Storage. If you follow the best practices listed here, you can use SAS to
provide greater flexibility of access to resources in your storage account,
without compromising the security of your application.

23 Monitor a storage account


You can monitor your storage account from the Azure Portal. When you
configure your storage account for monitoring through the portal, Azure
Storage uses Storage Analytics to track metrics for your account and log
request data.+
23.1.1.1.1

Note

Additional costs are associated with examining monitoring data in the Azure
Portal. For more information, see Storage Analytics and Billing.
+
Azure File storage currently supports Storage Analytics metrics, but does not
yet support logging. You can enable metrics for Azure File storage via the
Azure Portal.+

Storage accounts with a replication type of Zone-Redundant Storage (ZRS)


do not have the metrics or logging capability enabled at this time. +
For an in-depth guide on using Storage Analytics and other tools to identify,
diagnose, and troubleshoot Azure Storage-related issues, see Monitor,
diagnose, and troubleshoot Microsoft Azure Storage.+

23.2 How to: Configure monitoring for a storage account


1.

In the Azure Portal, click Storage, and then click the storage account name
to open the dashboard.

2.

Click Configure, and scroll down to the monitoring settings for the blob,
table, and queue services.

3.

In monitoring, set the level of monitoring and the data retention policy
for each service:

To set the monitoring level, select one of the following:

Minimal - Collects metrics such as ingress/egress, availability, latency, and


success percentages, which are aggregated for the blob, table, and queue
services.
Verbose - In addition to the minimal metrics, collects the same set of
metrics for each storage operation in the Azure Storage Service API.
Verbose metrics enable closer analysis of issues that occur during
application operations.
Off - Turns off monitoring. Existing monitoring data is persisted through the
end of the retention period.
+

To set the data retention policy, in Retention (in days), type the number of
days of data to retain, from 1 to 365 days. If you do not want to set a retention
policy, enter zero. If there is no retention policy, it is up to you to delete the
monitoring data. We recommend setting a retention policy based on how long you
want to retain storage analytics data for your account, so that old and unused
analytics data can be deleted by the system at no cost.

+
1.

When you finish the monitoring configuration, click Save.

You should start seeing monitoring data on the dashboard and the Monitor
page after about an hour.+
Until you configure monitoring for a storage account, no monitoring data is
collected, and the metrics charts on the dashboard and Monitor page are
empty.+

After you set the monitoring levels and retention policies, you can choose
which of the available metrics to monitor in the Azure Portal, and which
metrics to plot on metrics charts. A default set of metrics is displayed at each
monitoring level. You can use Add Metrics to add or remove metrics from
the metrics list.+
Metrics are stored in the storage account in four tables named
$MetricsTransactionsBlob, $MetricsTransactionsTable,
$MetricsTransactionsQueue, and $MetricsCapacityBlob. For more information,
see About Storage Analytics Metrics.+
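As a rough sketch, you can read the metrics data directly from these tables with the
Table service client. The table name used below and the number of rows read are
illustrative, and the exact metrics table names can vary with the Storage Analytics
version in use.

```
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class MetricsTableSketch
{
    // Reads a few rows from the $MetricsTransactionsBlob table using the
    // Table service client. The metrics tables are regular tables whose
    // entities follow the Storage Analytics metrics schema.
    static void DumpBlobTransactionMetrics(CloudStorageAccount account)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        CloudTable metricsTable = tableClient.GetTableReference("$MetricsTransactionsBlob");

        TableQuery query = new TableQuery().Take(10);
        foreach (DynamicTableEntity entity in metricsTable.ExecuteQuery(query))
        {
            Console.WriteLine("{0}  {1}", entity.PartitionKey, entity.RowKey);
        }
    }
}
```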

23.3 How to: Customize the dashboard for monitoring


On the dashboard, you can choose up to six metrics to plot on the metrics
chart from nine available metrics. For each service (blob, table, and queue),
the Availability, Success Percentage, and Total Requests metrics are
available. The metrics available on the dashboard are the same for minimal
or verbose monitoring.+
1.

In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2.

To change the metrics that are plotted on the chart, take one of the
following actions:

To add a new metric to the chart, click the colored check box next to
the metric header in the table below the chart.

To hide a metric that is plotted on the chart, clear the colored check
box next to the metric header.

3.

By default, the chart shows trends, displaying only the current value of each
metric (the Relative option at the top of the chart). To display a Y axis so you can
see absolute values, select Absolute.

4.

To change the time range the metrics chart displays, select 6 hours, 24 hours,
or 7 days at the top of the chart.

23.4 How to: Customize the Monitor page


On the Monitor page, you can view the full set of metrics for your storage
account.+

If your storage account has minimal monitoring configured, metrics such as


ingress/egress, availability, latency, and success percentages are aggregated
from the blob, table, and queue services.

If your storage account has verbose monitoring configured, the metrics are
available at a finer resolution of individual storage operations in addition to the
service-level aggregates.

Use the following procedures to choose which storage metrics to view in the
metrics charts and table that are displayed on the Monitor page. These

settings do not affect the collection, aggregation, and storage of monitoring


data in the storage account.+

23.5 How to: Add metrics to the metrics table


1.

In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2.

Click Monitor.
The Monitor page opens. By default, the metrics table displays a subset of
the metrics that are available for monitoring. The illustration shows the
default Monitor display for a storage account with verbose monitoring
configured for all three services. Use Add Metrics to select the metrics you
want to monitor from all available metrics.

23.5.1.1.1 Note

Consider costs when you select the metrics. There are transaction and egress
costs associated with refreshing monitoring displays. For more information,
see Storage Analytics and Billing.
3.

Click Add Metrics.


The aggregate metrics that are available in minimal monitoring are at the top
of the list. If the check box is selected, the metric is displayed in the metrics
list.

4.

Hover over the right side of the dialog box to display a scrollbar that you
can drag to scroll additional metrics into view.

5.

Click the down arrow by a metric to expand a list of operations the metric
is scoped to include. Select each operation that you want to view in the
metrics table in the Azure Portal.
In the following illustration, the AUTHORIZATION ERROR PERCENTAGE metric
has been expanded.

6.

After you select metrics for all services, click OK (checkmark) to update the
monitoring configuration. The selected metrics are added to the metrics table.

7.

To delete a metric from the table, click the metric to select it, and then
click Delete Metric.

23.6 How to: Customize the metrics chart on the Monitor page
1.

On the Monitor page for the storage account, in the metrics table, select up
to 6 metrics to plot on the metrics chart. To select a metric, click the check box on
its left side. To remove a metric from the chart, clear the check box.

2.

To switch the chart between relative values (final value only displayed) and
absolute values (Y axis displayed), select Relative or Absolute at the top of the
chart.

3.

To change the time range the metrics chart displays, select 6 hours, 24
hours, or 7 days at the top of the chart.

23.7 How to: Configure logging


For each of the storage services available with your storage account (blob,
table, and queue), you can save diagnostics logs for Read Requests, Write
Requests, and/or Delete Requests, and can set the data retention policy for
each of the services.+

1.

In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2.

Click Configure, and use the Down arrow on the keyboard to scroll down
to logging.

3.

For each service (blob, table, and queue), configure the following:

The types of request to log: Read Requests, Write Requests, and Delete
Requests.

The number of days to retain the logged data. Enter zero if you do
not want to set a retention policy. If you do not set a retention policy, it is up to
you to delete the logs.

4.
+

Click Save.

The diagnostics logs are saved in a blob container named $logs in your
storage account. For information about accessing the $logs container, see
About Storage Analytics Logging.
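The same logging settings can also be applied programmatically instead of through the portal. The following is a minimal sketch, not part of the original article, using the classic .NET storage client library (Microsoft.WindowsAzure.Storage); the connection string is a placeholder and the 7-day retention period is only an example value.

C#

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

class ConfigureLoggingSample
{
    static void Main()
    {
        // Placeholder connection string; replace with your storage account's connection string.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("{your-storage-connection-string}");
        var blobClient = account.CreateCloudBlobClient();

        // Read the current Blob service properties, then turn on read/write/delete logging.
        ServiceProperties properties = blobClient.GetServiceProperties();
        properties.Logging.LoggingOperations =
            LoggingOperations.Read | LoggingOperations.Write | LoggingOperations.Delete;
        properties.Logging.RetentionDays = 7;   // example retention policy: delete logs after 7 days
        properties.Logging.Version = "1.0";

        blobClient.SetServiceProperties(properties);
    }
}

The logs land in the same $logs container described above, regardless of whether logging was enabled in the portal or through the client library.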

24 Active directory Graph API Rest


The Azure Active Directory Graph API provides programmatic access to Azure Active Directory
through REST API endpoints. Apps can use the Azure AD Graph API to perform create, read, update,
and delete (CRUD) operations on directory data and directory objects, such as users, groups, and
organizational contacts.
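As a quick illustration of what such a REST call looks like (this sketch is not part of the original article), the following C# snippet lists the users in a tenant with HttpClient; the tenant name and access token are placeholders, and obtaining the token is covered under the prerequisites below.

C#

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GraphApiReadSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // {access-token} is a placeholder: a token issued for the resource
            // https://graph.windows.net (see the prerequisites section).
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "{access-token}");

            // api-version selects the Graph API version; 1.6 is the commonly used one.
            var response = await client.GetAsync(
                "https://graph.windows.net/{tenant}/users?api-version=1.6");

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}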
Important

Azure AD Graph API functionality is also available through Microsoft Graph, a unified API that also
includes APIs from other Microsoft services like Outlook, OneDrive, OneNote, Planner, and Office
Graph, all accessed through a single endpoint with a single access token.

24.1 In This Topic

Documentation Overview
Prerequisites for Using the Graph API in an App
Additional Resources

24.3 Documentation Overview
To learn more about how to use the Graph API, see the following documentation:

Azure Active Directory Graph API topic on Azure.com: Provides a brief overview of Graph
API features and scenarios.

Quickstart for the Azure AD Graph API on Azure.com: Provides essential details and
introduces resources like the Graph Explorer for those who want to jumpstart their
experience with the Graph API.

Azure AD Graph API concepts: Provides conceptual information about versioning, functionality, advanced features, preview features, permission scopes, error handling, and other topics.

Azure AD Graph API reference: Provides explicit examples of Graph API operations
(requests and responses) on users, groups, organizational contacts, directory roles, domains
(preview), functions, actions and others, as well as a reference for the Azure AD entities and
types exposed by the Graph API. The documentation is interactive and many of the topics
contain a Try It feature that you can use to execute Graph API requests against a sample
tenant and see the responses from inside the documentation itself.

24.5 Prerequisites for Using the Graph API in an App

The following list of prerequisites will help you develop cloud apps that consume the Graph API:

An Azure AD Tenant: You need an Azure AD tenant that you can use to develop, configure, and publish your app. This requires a valid subscription to one of Microsoft's cloud services, such as Azure, Office 365, Microsoft Dynamics CRM, etc. If you don't already have a subscription, you can get a free trial for Azure here: Azure Free Trial.

Your App Must be Registered with Azure AD: Your app must be registered with Azure AD.
This can be done through the Azure portal (which requires an Azure subscription), or through
tooling like Visual Studio 2013 or 2015. For information about how to register an app using
the Azure portal, see Adding an Application.

Azure AD Tenant Permissions to Access Directory Data: After your app is registered with Azure AD, in order to call the Graph API against a directory tenant, you must first configure your app to request permissions to the Graph API, and then a user or tenant administrator must grant access to your app (and its configured permissions) during consent. For more information about the Azure AD consent flow and configuring your app for the Graph API, see Understanding the Consent Framework and Accessing the Graph API in Integrating Applications with Azure Active Directory. (A token-acquisition sketch follows this list.)
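Once those prerequisites are in place, a registered app can obtain an access token for the Graph API. The following is a minimal sketch, not from the original article, using the client-credentials flow with ADAL for .NET (Microsoft.IdentityModel.Clients.ActiveDirectory); the tenant, client ID, and client secret are placeholders and must match an app registration that has been granted Graph API permissions.

C#

using System;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class GraphTokenSample
{
    static async Task Main()
    {
        // Placeholder tenant; use your tenant's domain name or id.
        var authContext = new AuthenticationContext(
            "https://login.microsoftonline.com/{tenant}");

        // Placeholder app credentials from your Azure AD app registration.
        var credential = new ClientCredential("{client-id}", "{client-secret}");

        // https://graph.windows.net is the resource identifier of the Azure AD Graph API.
        AuthenticationResult result = await authContext.AcquireTokenAsync(
            "https://graph.windows.net", credential);

        Console.WriteLine(result.AccessToken);
    }
}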

24.7 Additional Resources

The following resources and tools may help you learn more about and use the Graph API:

Azure AD Graph Code Samples: We highly recommend downloading the sample applications
that demonstrate the capabilities of the Azure AD Graph API. For more information about the
code samples available for the Graph API, see Calling Azure AD Graph API.

Graph Explorer: You can use the Graph Explorer to execute read operations against your
own tenant or a sample tenant and view the responses returned by the Graph API. See
Quickstart for the Azure AD Graph API for instructions on how to use the Graph Explorer.

Azure portal: The Azure portal can be used by an administrator to perform administrative tasks on Azure AD directory entities. An administrator (or a developer with sufficient privileges) can also use the portal to register an app with Azure AD and to configure it with the resources and access that it will request during consent. For more information about registering an app and configuring it using the Azure portal, see the following topic: Integrating Applications with Azure Active Directory.

Azure AD Graph API Team blog: Keep up with the latest announcements from the Graph
API team on the Microsoft Azure Active Directory Graph Team blog.

Microsoft Azure Active Directory Windows PowerShell Cmdlets: The Azure AD Windows PowerShell cmdlets can be used by an administrator to perform administrative tasks on Azure AD directory entities. For example, an administrator can use these cmdlets to manage their tenant's users, service principals, and domains. For more information about these cmdlets, see the following topic: Azure AD PowerShell Cmdlets.

25 Notification Hubs monitoring and Telemetry - programmatic access

You can access Notification Hubs telemetry data programmatically, analogous to Microsoft Azure Service Bus metrics (using the REST identifiers provided in the preceding tables to access the respective metrics).

25.1 Step 1: Create a certificate

First, create a certificate to access your Azure subscription resources. In Windows, do the following:
1. Open a Visual Studio administrator command prompt, and type the following command:

makecert -sky exchange -r -n "CN=<CertificateName>" -pe -a sha1 -len 2048 -ss My "<CertificateName>.cer"

2. Run Certmgr.msc, click Personal on the left-hand side, right-click the certificate you created, click All Tasks, and then click Export.
3. Follow the wizard and choose the option to not export the private key. Choose the option to export a CER cert, and then provide a filename ending with .cer.
4. Repeat the export process, this time choosing to export the private key in a PFX file. Then select a name ending with .PFX.

25.3 Step 2: Upload the certificate to Azure

Now upload your .CER file to enable your certificate to perform operations on your Azure resources.
1. In the Azure management portal, click Settings on the left, and then click Management Certificates.
2. Click Upload at the bottom of the screen, and then select your .CER file.
3. Take note of the subscription ID that you want to manage.

Note
The subscription ID must be for the subscription that contains the notification hub.

25.5 Step 3: Access metrics via a REST interface

To read telemetry, you must issue REST calls to a URL constructed according to the rules specified in Microsoft Azure Service Bus metrics (using the metric names reported in the previous section).
The following C# sample retrieves the number of successful pushes aggregated in 5-minute intervals since 2014-08-06T21:30:00Z (remember to replace the subscription ID, namespace name, notification hub name, and PFX certificate path with your own values).

C#
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Runtime.Serialization;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel.Syndication;
using System.Text;
using System.Threading.Tasks;
using System.Xml;

namespace telemetry1
{
    class Program
    {
        [DataContract(Name = "properties", Namespace = "http://schemas.microsoft.com/ado/2007/08/dataservices")]
        public class MetricValue
        {
            [DataMember(Name = "Timestamp")]
            public DateTime Timestamp { get; set; }

            [DataMember(Name = "Min")]
            public long Min { get; set; }

            [DataMember(Name = "Max")]
            public long Max { get; set; }

            [DataMember(Name = "Total")]
            public long Total { get; set; }

            [DataMember(Name = "Average")]
            public float Average { get; set; }
        }

        static void Main(string[] args)
        {
            // Replace {subscriptionId}, {namespaceName}, {hubName}, {pathToPfxCert}, and {certPassword} with your own values.
            string uri = @"https://management.core.windows.net/{subscriptionId}/services/ServiceBus/namespaces/{namespaceName}/NotificationHubs/{hubName}/metrics/outgoing.allpns.success/rollups/PT5M/Values?$filter=Timestamp%20gt%20datetime'2014-08-06T21:30:00Z'";

            HttpWebRequest sendNotificationRequest = (HttpWebRequest)WebRequest.Create(uri);
            sendNotificationRequest.Method = "GET";
            sendNotificationRequest.ContentType = "application/xml";
            sendNotificationRequest.Headers.Add("x-ms-version", "2015-01");

            // Authenticate with the management certificate uploaded in Step 2.
            X509Certificate2 certificate = new X509Certificate2(@"{pathToPfxCert}", "{certPassword}");
            sendNotificationRequest.ClientCertificates.Add(certificate);

            try
            {
                HttpWebResponse response = (HttpWebResponse)sendNotificationRequest.GetResponse();

                using (XmlReader reader = XmlReader.Create(response.GetResponseStream(),
                    new XmlReaderSettings { CloseInput = true }))
                {
                    SyndicationFeed feed = SyndicationFeed.Load<SyndicationFeed>(reader);

                    foreach (SyndicationItem item in feed.Items)
                    {
                        XmlSyndicationContent syndicationContent = item.Content as XmlSyndicationContent;
                        MetricValue value = syndicationContent.ReadContent<MetricValue>();
                        Console.WriteLine(value.Total);
                    }
                }
            }
            catch (WebException exception)
            {
                string error = new StreamReader(exception.Response.GetResponseStream()).ReadToEnd();
                Console.WriteLine(error);
            }
        }
    }
}

26 Staging environment in azure apps

When you deploy your web app, mobile back end, or API app to App Service, you can deploy to a separate deployment slot instead of the default production slot when running in the Standard or Premium App Service plan mode. Deployment slots are actually live apps with their own hostnames. App content and configuration elements can be swapped between two deployment slots, including the production slot. Deploying your application to a deployment slot has the following benefits:

You can validate app changes in a staging deployment slot before swapping it with the production slot.

Deploying an app to a slot first and swapping it into production ensures that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped as a result of swap operations. This entire workflow can be automated by configuring Auto Swap when pre-swap validation is not needed.

After a swap, the slot that previously held the staged app now holds the previous production app. If the changes swapped into the production slot are not as you expected, you can perform the same swap immediately to get your "last known good site" back.

Each App Service plan mode supports a different number of deployment slots. To find out the number of slots your app's mode supports, see App Service Pricing.

When your app has multiple slots, you cannot change the mode.

Scaling is not available for non-production slots.

Linked resource management is not supported for non-production slots. In the Azure Portal only, you can avoid this potential impact on a production slot by temporarily moving the non-production slot to a different App Service plan mode. Note that the non-production slot must once again share the same mode with the production slot before you can swap the two slots.

26.1 Add a deployment slot


The app must be running in the Standard or Premium mode in order for you to enable multiple deployment slots.

1. In the Azure Portal, open your app's resource blade.
2. Choose the Deployment slots option, then click Add Slot.

26.1.1.1.1 Note
If the app is not already in the Standard or Premium mode, you will receive a message indicating the supported modes for enabling staged publishing. At this point, you have the option to select Upgrade and navigate to the Scale tab of your app before continuing.

3. In the Add a slot blade, give the slot a name, and select whether to clone app configuration from another existing deployment slot. Click the check mark to continue.
The first time you add a slot, you will only have two choices: clone configuration from the default slot in production or not at all. After you have created several slots, you will be able to clone configuration from a slot other than the one in production.
4. In your app's resource blade, click Deployment slots, then click a deployment slot to open that slot's resource blade, with a set of metrics and configuration just like any other app. The name of the slot is shown at the top of the blade to remind you that you are viewing the deployment slot.
5. Click the app URL in the slot's blade. Notice that the deployment slot has its own hostname and is also a live app. To limit public access to the deployment slot, see App Service Web App: block web access to non-production deployment slots.

There is no content after deployment slot creation. You can deploy to the slot from a different repository branch, or from an altogether different repository. You can also change the slot's configuration. Use the publish profile or deployment credentials associated with the deployment slot for content updates. For example, you can publish to this slot with git.

26.2 Configuration for deployment slots


When you clone configuration from another deployment slot, the cloned configuration is editable. Furthermore, some configuration elements follow the content across a swap (they are not slot specific), while other configuration elements stay in the same slot after a swap (they are slot specific). The following lists show the configuration that will change when you swap slots.

Settings that are swapped:

General settings - such as framework version, 32/64-bit, Web sockets
App settings (can be configured to stick to a slot)
Connection strings (can be configured to stick to a slot)
Handler mappings
Monitoring and diagnostic settings
WebJobs content

Settings that are not swapped:

Publishing endpoints
Custom Domain Names
SSL certificates and bindings
Scale settings
WebJobs schedulers

To configure an app setting or connection string to stick to a slot (not swapped), access the Application Settings blade for a specific slot, then select the Slot Setting box for the configuration elements that should stick to the slot. Note that marking a configuration element as slot specific has the effect of establishing that element as not swappable across all the deployment slots associated with the app.

26.3 Swap deployment slots


You can swap deployment slots in the Overview or Deployment slots view of your app's resource blade.

26.3.1.1.1 Important
Before you swap an app from a deployment slot into production, make sure that all non-slot-specific settings are configured exactly as you want them to be in the swap target.

1. To swap deployment slots, click the Swap button in the command bar of the app or in the command bar of a deployment slot.
2. Make sure that the swap source and swap target are set properly. Usually, the swap target is the production slot. Click OK to complete the operation. When the operation finishes, the deployment slots have been swapped.

For the Swap with preview swap type, see Swap with preview (multi-phase swap).

26.4 Swap with preview (multi-phase swap)


Swap with preview, or multi-phase swap, simplifies validation of slot-specific configuration elements, such as connection strings. For mission-critical workloads, you want to validate that the app behaves as expected when the production slot's configuration is applied, and you must perform such validation before the app is swapped into production. Swap with preview is what you need.

When you use the Swap with preview option (see Swap deployment slots), App Service does the following:

Keeps the destination slot unchanged, so the existing workload on that slot (e.g. production) is not impacted.

Applies the configuration elements of the destination slot to the source slot, including the slot-specific connection strings and app settings.

Restarts the worker processes on the source slot using these aforementioned configuration elements.

When you complete the swap: Moves the pre-warmed-up source slot into the destination slot. The destination slot is moved into the source slot as in a manual swap.

When you cancel the swap: Reapplies the configuration elements of the source slot to the source slot.

You can preview exactly how the app will behave with the destination slot's configuration. Once you complete validation, you complete the swap in a separate step. This step has the added advantage that the source slot is already warmed up with the desired configuration, and clients will not experience any downtime.

Samples for the Azure PowerShell cmdlets available for multi-phase swap are included in the Azure PowerShell cmdlets for deployment slots section.

26.5 Configure Auto Swap


Auto Swap streamlines DevOps scenarios where you want to continuously deploy your app with zero cold start and zero downtime for the end customers of the app. When a deployment slot is configured for Auto Swap into production, every time you push your code update to that slot, App Service automatically swaps the app into production after it has already warmed up in the slot.

26.5.1.1.1 Important
When you enable Auto Swap for a slot, make sure the slot configuration is exactly the configuration intended for the target slot (usually the production slot).

Configuring Auto Swap for a slot is easy. Follow the steps below:
1. In Deployment Slots, select a non-production slot, and choose Application Settings in that slot's resource blade.
2. Select On for Auto Swap, select the desired target slot in Auto Swap Slot, and click Save in the command bar. Make sure the configuration for the slot is exactly the configuration intended for the target slot. The Notifications tab will flash a green SUCCESS once the operation is complete.

26.5.1.1.2 Note
To test Auto Swap for your app, you can first select a non-production target slot in Auto Swap Slot to become familiar with the feature.

3. Execute a code push to that deployment slot. Auto Swap will happen after a short time, and the update will be reflected at your target slot's URL.

26.6 To rollback a production app after swap


If any errors are identified in production after a slot swap, roll the slots back to their pre-swap states by swapping the same two slots immediately.

26.7 Custom warm-up before swap


Some apps may require custom warm-up actions. The applicationInitialization configuration element in web.config allows you to specify custom initialization actions to be performed before a request is received. The swap operation waits for this custom warm-up to complete. Here is a sample web.config fragment.

<applicationInitialization>
  <add initializationPage="/" hostName="[app hostname]" />
  <add initializationPage="/Home/About" hostName="[app hostname]" />
</applicationInitialization>

26.8 To delete a deployment slot


To delete a deployment slot, open the deployment slot's blade, click Overview (the default page), and click Delete in the command bar.

26.9 Azure PowerShell cmdlets for deployment slots


Azure PowerShell is a module that provides cmdlets to manage Azure through Windows PowerShell, including support for managing deployment slots in Azure App Service.

For information on installing and configuring Azure PowerShell, and on authenticating Azure PowerShell with your Azure subscription, see How to install and configure Microsoft Azure PowerShell.

26.9.1 Create a web app

New-AzureRmWebApp -ResourceGroupName [resource group name] -Name [app name] -Location [location] -AppServicePlan [app service plan name]

26.9.2 Create a deployment slot

New-AzureRmWebAppSlot -ResourceGroupName [resource group name] -Name [app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]

26.9.3 Initiate a swap with preview (multi-phase swap) and apply destination slot configuration to source slot

$ParametersObject = @{targetSlot = "[slot name e.g. production]"}
Invoke-AzureRmResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action applySlotConfig -Parameters $ParametersObject -ApiVersion 2015-07-01

26.9.4 Cancel a pending swap (swap with preview) and restore source slot configuration

Invoke-AzureRmResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action resetSlotConfig -ApiVersion 2015-07-01

26.9.5 Swap deployment slots

$ParametersObject = @{targetSlot = "[slot name e.g. production]"}
Invoke-AzureRmResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action slotsswap -Parameters $ParametersObject -ApiVersion 2015-07-01

26.9.6 Delete deployment slot

Remove-AzureRmResource -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -Name [app name]/[slot name] -ApiVersion 2015-07-01

26.10 Azure Command-Line Interface (Azure CLI) commands for Deployment Slots

The Azure CLI provides cross-platform commands for working with Azure, including support for managing App Service deployment slots.

For instructions on installing and configuring the Azure CLI, including information on how to connect the Azure CLI to your Azure subscription, see Install and Configure the Azure CLI.

To list the commands available for Azure App Service in the Azure CLI, call azure site -h.

26.10.1.1.1 Note
For Azure CLI 2.0 (Preview) commands for deployment slots, see az appservice web deployment slot.

26.10.2 azure site list
For information about the apps in the current subscription, call azure site list, as in the following example.
azure site list webappslotstest

26.10.3 azure site create
To create a deployment slot, call azure site create and specify the name of an existing app and the name of the slot to create, as in the following example.
azure site create webappslotstest --slot staging
To enable source control for the new slot, use the --git option, as in the following example.
azure site create --git webappslotstest --slot staging

26.10.4 azure site swap
To make the updated deployment slot the production app, use the azure site swap command to perform a swap operation, as in the following example. The production app will not experience any downtime, nor will it undergo a cold start.
azure site swap webappslotstest

26.10.5 azure site delete
To delete a deployment slot that is no longer needed, use the azure site delete command, as in the following example.
azure site delete webappslotstest --slot staging

26.10.5.1.1 Note

See a web app in action. Try App Service immediately and create a short-lived starter app. No credit card required, no commitments.

27 How to auto scale a cloud service

On the Scale page of the Azure classic portal, you can manually scale your web role or worker role, or you can enable automatic scaling based on CPU load or a message queue.

27.1.1.1.1 Note
This article focuses on Cloud Service web and worker roles. When you create a virtual machine (classic) directly, it is hosted in a cloud service. Some of this information applies to these types of virtual machines. Scaling an availability set of virtual machines is really just turning them on and off based on the scale rules you configure. For more information about Virtual Machines and availability sets, see Manage the Availability of Virtual Machines.

You should consider the following information before you configure scaling for your application:

Scaling is affected by core usage. Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, if your subscription has a limit of twenty cores and you run an application with two medium sized cloud services (a total of four cores), you can only scale up other cloud service deployments in your subscription by sixteen cores. See Cloud Service Sizes for more information about sizes.

You must create a queue and associate it with a role before you can scale an application based on a message threshold (see the sketch after this list). For more information, see How to use the Queue Storage Service.

You can scale resources that are linked to your cloud service. For more information about linking resources, see How to: Link a resource to a cloud service.

To enable high availability of your application, you should ensure that it is deployed with two or more role instances. For more information, see Service Level Agreements.
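For the queue prerequisite above, a minimal sketch of creating such a queue with the .NET storage client library (Microsoft.WindowsAzure.Storage) might look like the following; this is not part of the original article, and the connection string and queue name are placeholders.

C#

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class CreateScaleQueueSample
{
    static void Main()
    {
        // Placeholder connection string for the storage account the scale rule will watch.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("{your-storage-connection-string}");

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("work-items");   // placeholder queue name

        // Create the queue if it does not exist yet; the autoscale rule keys off
        // the number of messages waiting in this queue.
        queue.CreateIfNotExists();

        // Work producers then enqueue messages like this:
        queue.AddMessage(new CloudQueueMessage("process-order-1234"));
    }
}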

27.2 Schedule scaling


By default, roles do not follow a specific schedule. Therefore, any settings you change apply to all times and all days throughout the year. If you want, you can set up manual or automatic scaling for:

Weekdays
Weekends
Week nights
Week mornings
Specific dates
Specific date ranges

This is configured in the Azure classic portal on the Cloud Services > [Your cloud service] > Scale > [Production or Staging] page.
Click the set up schedule times button for each role you want to change.

27.3 Manual scale


On the Scale page, you can manually increase or decrease the number of running instances in a cloud service. This is configured for each schedule you have created, or for all times if you have not created a schedule.

1. In the Azure classic portal, click Cloud Services, and then click the name of the cloud service to open the dashboard.

27.3.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production to Staging or vice versa.

2. Click Scale.
3. Select the schedule you want to change scaling options for. This defaults to No scheduled times if you have no schedules defined.
4. Find the Scale by metric section and select NONE. This is the default setting for all roles.
5. Each role in the cloud service has a slider for changing the number of instances to use. If you need more instances, you may need to change the cloud service virtual machine size.
6. Click Save. Role instances will be added or removed based on your selections.

27.3.1.1.2 Tip
Whenever you see the help icon, move your mouse over it to get help about what a specific setting does.

27.4 Automatic scale - CPU


This scales the role if the average percentage of CPU usage goes above or below specified thresholds; role instances are created or deleted accordingly.

1. In the Azure classic portal, click Cloud Services, and then click the name of the cloud service to open the dashboard.

27.4.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production to Staging or vice versa.

2. Click Scale.
3. Select the schedule you want to change scaling options for. This defaults to No scheduled times if you have no schedules defined.
4. Find the Scale by metric section and select CPU.
5. Now you can configure a minimum and maximum range of role instances, the target CPU usage (to trigger a scale up), and how many instances to scale up and down by.

27.4.1.1.2 Tip
Whenever you see the help icon, move your mouse over it to get help about what a specific setting does.

27.5 Automatic scale - Queue


This automatically scales the role if the number of messages in a queue goes above or below a specified threshold; role instances are created or deleted accordingly.

1. In the Azure classic portal, click Cloud Services, and then click the name of the cloud service to open the dashboard.

27.5.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production to Staging or vice versa.

2. Click Scale.
3. Find the Scale by metric section and select QUEUE.
4. Now you can configure a minimum and maximum range of role instances, the queue and the number of queue messages to process for each instance, and how many instances to scale up and down by.

27.5.1.1.2 Tip
Whenever you see the help icon, move your mouse over it to get help about what a specific setting does.

27.6 Scale linked resources


Often when you scale a role, it's beneficial to also scale the database that the application is using. If you link the database to the cloud service, you can access the scaling settings for that resource by clicking the appropriate link.

1. In the Azure classic portal, click Cloud Services, and then click the name of the cloud service to open the dashboard.

27.6.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production to Staging or vice versa.

2. Click Scale.
3. Find the linked resources section and click Manage scale for this database.

27.6.1.1.2 Note
If you don't see a linked resources section, you probably do not have any linked resources.

28 https node.js documentation


HTTPS

Stability: 2 - Stable

HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a separate module.

Class: https.Agent
Added in: v0.4.5
An Agent object for HTTPS similar to http.Agent. See https.request() for more information.

Class: https.Server
Added in: v0.3.4
This class is a subclass of tls.Server and emits events the same as http.Server. See http.Server for more information.

server.setTimeout(msecs, callback)
Added in: v0.11.2
See http.Server#setTimeout().

server.timeout
Added in: v0.11.2
See http.Server#timeout.

https.createServer(options[, requestListener])
Added in: v0.3.4
Returns a new HTTPS web server object. The options is similar to tls.createServer(). The requestListener is a function which is automatically added to the 'request' event.
Example:
// curl -k https://localhost:8000/
const https = require('https');
const fs = require('fs');
const options = {
key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};
https.createServer(options, (req, res) => {
res.writeHead(200);
res.end('hello world\n');
}).listen(8000);

Or
const https = require('https');
const fs = require('fs');
const options = {
pfx: fs.readFileSync('server.pfx')
};
https.createServer(options, (req, res) => {
res.writeHead(200);
res.end('hello world\n');
}).listen(8000);

28.1.1 server.close([callback])#
Added in: v0.1.90

See http.close() for details.

28.1.2 server.listen(handle[, callback])#


28.1.3 server.listen(path[, callback])#
28.1.4 server.listen(port[, host][, backlog][,
callback])#
See http.listen() for details.

28.2 https.get(options, callback)#


Added in: v0.3.6

Like http.get() but for HTTPS.


options can be an object or a string. If options is a string, it is automatically
parsed with url.parse().

Example:
const https = require('https');
https.get('https://encrypted.google.com/', (res) => {
console.log('statusCode:', res.statusCode);
console.log('headers:', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
}).on('error', (e) => {
console.error(e);
});

28.3 https.globalAgent#
Added in: v0.5.9

Global instance of https.Agent for all HTTPS client requests.

28.4 https.request(options, callback)#


Added in: v0.3.6

Makes a request to a secure web server.


options can be an object or a string. If options is a string, it is automatically
parsed with url.parse().

All options from http.request() are valid.

Example:
const https = require('https');
var options = {
hostname: 'encrypted.google.com',
port: 443,
path: '/',
method: 'GET'
};
var req = https.request(options, (res) => {
console.log('statusCode:', res.statusCode);
console.log('headers:', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
});
req.on('error', (e) => {
console.error(e);
});
req.end();

The options argument has the following options:

host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.

hostname: Alias for host. To support url.parse(), hostname is preferred over host.

family: IP address family to use when resolving host and hostname. Valid values are 4 or 6. When unspecified, both IP v4 and v6 will be used.

port: Port of remote server. Defaults to 443.

localAddress: Local interface to bind for network connections.

socketPath: Unix Domain Socket (use one of host:port or socketPath).

method: A string specifying the HTTP request method. Defaults to 'GET'.

path: Request path. Defaults to '/'. Should include query string if any, e.g. '/index.html?page=12'. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.

headers: An object containing request headers.

auth: Basic authentication, i.e. 'user:password' to compute an Authorization header.

agent: Controls Agent behavior. When an Agent is used, the request will default to Connection: keep-alive. Possible values:
  undefined (default): use globalAgent for this host and port.
  Agent object: explicitly use the passed-in Agent.
  false: opts out of connection pooling with an Agent, defaults the request to Connection: close.

The following options from tls.connect() can also be specified:

pfx: Certificate, private key and CA certificates to use for SSL. Default null.

key: Private key to use for SSL. Default null.

passphrase: A string of passphrase for the private key or pfx. Default null.

cert: Public x509 certificate to use. Default null.

ca: A string, Buffer or array of strings or Buffers of trusted certificates in PEM format. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.

ciphers: A string describing the ciphers to use or exclude. Consult https://www.openssl.org/docs/man1.0.2/apps/ciphers.html#CIPHER-LIST-FORMAT for details on the format.

rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Verification happens at the connection level, before the HTTP request is sent. Default true.

secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

servername: Servername for SNI (Server Name Indication) TLS extension.

In order to specify these options, use a custom Agent.


Example:
var options = {
hostname: 'encrypted.google.com',
port: 443,
path: '/',
method: 'GET',
key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};
options.agent = new https.Agent(options);
var req = https.request(options, (res) => {
...
});

Alternatively, opt out of connection pooling by not using an Agent .


Example:
var options = {
hostname: 'encrypted.google.com',
port: 443,
path: '/',
method: 'GET',
key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem'),
agent: false
};
var req = https.request(options, (res) => {
...
});

29 Introducing Asynchronous Cross-Account Copy Blob

We are excited to introduce some changes to the Copy Blob API with 2012-02-12 version
that allows you to copy blobs between storage accounts. This enables some interesting
scenarios like:

Back up your blobs to another storage account without having to retrieve the content and save it yourself

Migrate your blobs from one account to another efficiently with respect to cost and time

NOTE: To allow cross-account copy, the destination storage account needs to have been
created on or after June 7th 2012. This limitation is only for cross-account copy, as
accounts created prior can still copy within the same account. If the account is created
before June 7th 2012, a copy blob operation across accounts will fail with HTTP Status
code 400 (Bad Request) and the storage error code will be
CopyAcrossAccountsNotSupported.
In this blog, we will go over some of the changes that were made along with some of
the best practices to use this API. We will also show some sample code on using the new
Copy Blob APIs with SDK 1.7.1 which is available on GitHub.

29.1.1.1 Changes to Copy Blob API

To enable copying between accounts, we have made the following changes:

29.1.1.1.1 Copy Source is now a URL

In versions prior to 2012-02-12, the source request header was specified as /<account
name>/<fully qualified blob name with container name and snapshot time if applicable
>. With 2012-02-12 version, we now require x-ms-copy-source to be specified as a URL.
This is a versioned change, as specifying the old format with this new version will now
fail with 400 (Bad Request). The new format allows users to specify a shared access
signature or use a custom storage domain name. When specifying a source blob from a
different account than the destination, the source blob must either be

A publicly accessible blob (i.e. the container ACL is set to be public)

A private blob, only if the source URL is pre-authenticated with a Shared Access
Signature (i.e. pre-signed URL), allowing read permissions on the source blob

A copy operation preserves the type of the blob: a block blob will be copied as a block
blob and a page blob will be copied to the destination as a page blob. If the destination
blob already exists, it will be overwritten. However, if the destination type (for an
existing blob) does not match the source type, the operation fails with HTTP status code
400 (Bad Request).

Note: The source blob could even be a blob outside of Windows Azure, as long
as it is publicly accessible or accessible via some form of a Signed URL. For
source blobs outside of Windows Azure, they will be copied to block blobs.

29.1.1.1.2 Copy is now asynchronous

Making copy asynchronous is a major change that greatly differs from previous versions. Previously, the Blob service returned a successful response back to the user only when the copy operation had completed. With version 2012-02-12, the Blob service will instead schedule the copy operation to be completed asynchronously: a success response only indicates that the copy operation has been successfully scheduled. As a consequence, a successful response from Copy Blob will now return HTTP status code 202 (Accepted) instead of 201 (Created).
A few important points:
1. There can be only one pending copy operation to a given destination blob name URL at a time. But a source blob can be a source for many outstanding copies at once.
2. The asynchronous copy blob runs in the background using spare bandwidth
capacity, so there is no SLA in terms of how fast a blob will be copied.
3. Currently there is no limit on the number of pending copy blobs that can be
queued up for a storage account, but a pending copy blob operation can live in
the system for at most 2 weeks. If longer than that, then the copy blob operation
will be terminated.
4. If the source storage account is in a different location from the destination
storage account, then the source storage account will be charged egress for the
copy using the bandwidth rates as shown here.
5. When a copy is pending, any attempt to modify, snapshot, or lease the
destination blob will fail.
Below we break down the key concepts of the new Copy Blob API.
Copy Blob Scheduling: when the Blob service receives a Copy Blob request, it will first
ensure that the source exists and it can be accessed. If source does not exist or cannot
be accessed, an HTTP status code 400 (Bad Request) is returned. If any source access
conditions are provided, they will be validated too. If conditions do not match, then an
HTTP status code 412 (Precondition Failed) error is returned. Once the source is
validated, the service then validates any conditions provided for the destination blob (if
it exists). If condition checks fail on destination blob, an HTTP status code 412
(Precondition Failed) is returned. If there is already a pending copy operation, then the
service returns an HTTP status code 409 (Conflict). Once the validations are completed,
the service then initializes the destination blob before scheduling the copy and then
returns a success response to the user. If the source is a page blob, the service will
create a page blob with the same length as the source blob but all the bytes are zeroed
out. If the source blob is a block blob, the service will commit a zero length block blob
for the pending copy blob operation. The service maintains a few copy specific
properties during the copy operation to allow clients to poll the status and progress of
their copy operations.

Copy Blob Response: when a copy blob operation returns success to the client, this
indicates the Blob service has successfully scheduled the copy operation to be
completed. Two new response headers are introduced:
1. x-ms-copy-status: The status of the copy operation at the time the response was sent. It can be one of the following:
   success: Copy operation has completed. This is analogous to the scenario in previous versions where the copy operation has completed synchronously.
   pending: Copy operation is still pending and the user is expected to poll the status of the copy. (See Polling for Copy Blob properties below.)
2. x-ms-copy-id: The string token that is associated with the copy operation. This can be used when polling the copy status, or if the user wishes to abort a pending copy operation.
Polling for Copy Blob properties: we now provide the following additional properties
that allow users to track the progress of the copy, using Get Blob Properties, Get Blob,
or List Blobs:
1. x-ms-copy-status (or CopyStatus): The current status of the copy operation. It can be one of the following:
   pending: Copy operation is pending.
   success: Copy operation completed successfully.
   aborted: Copy operation was aborted by a client.
   failed: Copy operation failed to complete due to an error.

2. x-ms-copy-id (CopyId): The id returned by the copy operation which can be used
to monitor the progress or abort a copy.
3. x-ms-copy-status-description (CopyStatusDescription): Additional error
information that can be used for diagnostics.
4. x-ms-copy-progress (CopyProgress): The amount of the blob copied so far. This
has the format X/Y where X=number of bytes copied and Y is the total number of
bytes.
5. x-ms-copy-completion-time (CopyCompletionTime): The completion time of the
last copy.
These properties can be monitored to track the progress of a copy operation that
returns pending status. However, it is important to note that except for Put Page, Put
Block and Lease Blob operations, any other write operation (i.e., Put Blob, Put Block List,

Set Blob Metadata, Set Blob Properties) on the destination blob will remove the
properties pertaining to the copy operation.
Asynchronous Copy Blob: for the cases where the Copy Blob response returns with x-ms-copy-status set to pending, the copy operation will complete asynchronously.
1. Block blobs: The source block blob will be retrieved using 4 MB chunks and copied to the destination.
2. Page blobs: The source page blob's valid ranges are retrieved and copied to the destination.
Copy Blob operations are retried on any intermittent failures such as network failures, server busy, etc., but any failures are recorded in x-ms-copy-status-description, which lets users know why the copy is still pending.
When the copy operation is pending, any writes to the destination blob are disallowed and the write operation will fail with HTTP status code 409 (Conflict). One would need to abort the copy before writing to the destination.
Data integrity during asynchronous copy: The Blob service will lock onto a version
of the source blob by storing the source blob ETag at the time of copy. This is done to
ensure that any source blob changes can be detected during the course of the copy
operation. If the source blob changes during the copy, the ETag will no longer match its
value at the start of the copy, causing the copy operation to fail.
Aborting the Copy Blob operation: To allow canceling a pending copy, we have introduced the Abort Copy Blob operation in the 2012-02-12 version of the REST API. The Abort operation takes the copy-id returned by the Copy operation and will cancel the operation if it is in the pending state. An HTTP status code 409 (Conflict) is returned if the state is not pending or the copy-id does not match the pending copy. The blob's metadata is retained but the content is zeroed out on a successful abort.
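As a brief hedged illustration (not from the original post), aborting a pending copy from client code might look like the following. It assumes the Microsoft.WindowsAzure.Storage .NET client library, which exposes an AbortCopy method taking the copy id; the connection string, container, and blob names are placeholders.

C#

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class AbortCopySample
{
    static void Main()
    {
        // Placeholders: the destination account, container, and blob of the pending copy.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("{destination-account-connection-string}");
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("{destination-container}");
        CloudBlockBlob destBlob = container.GetBlockBlobReference("{destination-blob}");

        // Refresh the blob's attributes so CopyState reflects the current copy operation.
        destBlob.FetchAttributes();

        if (destBlob.CopyState != null && destBlob.CopyState.Status == CopyStatus.Pending)
        {
            // Abort using the id of the pending copy; a mismatched or completed copy
            // results in HTTP 409 (Conflict), as described above.
            destBlob.AbortCopy(destBlob.CopyState.CopyId);
        }
    }
}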

29.1.1.2 Best Practices

29.1.1.2.1 How to migrate blobs from a source account's container to a destination container in another account?
With asynchronous copy, copying blobs from one account to another is as simple as the following:
1. List blobs in the source container.
2. For each blob in the source container, copy the blob to a destination container.
Once all the blobs are queued for copy, the monitoring component can do the following:
1. List all blobs in the destination container.
2. Check the copy status; if it has failed or has been aborted, start a new copy
operation.
Example: Here is a sample queuing of asynchronous copy. It will ignore snapshots and
only copy base blobs. Error handling is excluded for brevity.

public static void CopyBlobs(


CloudBlobContainer srcContainer,
string policyId,
CloudBlobContainer destContainer)
{
// get the SAS token to use for all blobs
string blobToken = srcContainer.GetSharedAccessSignature(
new SharedAccessBlobPolicy(), policyId);

var srcBlobList = srcContainer.ListBlobs(true, BlobListingDetails.None);


foreach (var src in srcBlobList)
{
var srcBlob = src as CloudBlob;

// Create appropriate destination blob type to match the source blob


CloudBlob destBlob;
if (srcBlob.Properties.BlobType == BlobType.BlockBlob)
{
destBlob = destContainer.GetBlockBlobReference(srcBlob.Name);
}
else
{
destBlob = destContainer.GetPageBlobReference(srcBlob.Name);
}

// copy using src blob as SAS


destBlob.StartCopyFromBlob(new Uri(srcBlob.Uri.AbsoluteUri + blobToken));
}
}

Example: Monitoring code without error handling for brevity. NOTE: This sample
assumes that no one else would start a different copy operation on the same
destination blob. If such assumption is not valid for your scenario, please see How do I
prevent someone else from starting a new copy operation to overwrite my successful
copy? below.
public static void MonitorCopy(CloudBlobContainer destContainer)
{
bool pendingCopy = true;

while (pendingCopy)
{
pendingCopy = false;
var destBlobList = destContainer.ListBlobs(
true, BlobListingDetails.Copy);

foreach (var dest in destBlobList)


{
var destBlob = dest as CloudBlob;

if (destBlob.CopyState.Status == CopyStatus.Aborted ||
destBlob.CopyState.Status == CopyStatus.Failed)
{

// Log the copy status description for diagnostics


// and restart copy
Log(destBlob.CopyState);
pendingCopy = true;
destBlob.StartCopyFromBlob(destBlob.CopyState.Source);
}
else if (destBlob.CopyState.Status == CopyStatus.Pending)
{
// We need to continue waiting for this pending copy
// However, let us log copy state for diagnostics
Log(destBlob.CopyState);

pendingCopy = true;
}
// else we completed this pending copy
}

Thread.Sleep(waitTime);
};
}

29.1.1.2.2 How do I prevent the source from changing until the copy completes?

In an asynchronous copy, once authorization is verified on the source, the service locks to that version of the source by using the ETag value. If the source blob is modified while the copy operation is pending, the service will fail the copy operation with HTTP status code 412 (Precondition Failed). To ensure that the source blob is not modified, the client can acquire and maintain a lease on the source blob. (See the Lease Blob REST API.)

With 2012-02-12 version, we have introduced the concept of lock (i.e. infinite lease)
which makes it easy for a client to hold on to the lease. A good option is for the copy job
to acquire an infinite lease on the source blob before issuing the copy operation. The
monitor job can then break the lease when the copy completes.
Example: Sample code that acquires a lock (i.e. infinite lease) on source.
// Acquire infinite lease on source blob
srcBlob.AcquireLease(null, leaseId);

// copy using source blob as SAS and with infinite lease id


string cid = destBlob.StartCopyFromBlob(
new Uri(srcBlob.Uri.AbsoluteUri + blobToken),
null /* source access condition */,
null /* destination access condition */,
null /* request options */);

29.1.1.2.3 How do I prevent someone else from starting a new copy operation to
overwrite my successful copy?
During a pending copy, the blob service ensures that no client requests can write to the
destination blob. The copy blob properties are maintained on the blob after a copy is
completed (failed/aborted/successful). However, these copy properties are removed
when any write command like Put Blob, Put Block List, Set Blob Metadata or Set Blob
Properties are issued on the destination blob. The following operations will however
retain the copy properties: Lease Blob, Put Page, and Put Block. Hence, a monitoring
component which may require providing confirmation that a copy is completed will need
these properties to be retained until it verifies the copy. To prevent any writes on
destination blob once the copy is completed, the copy job should acquire an infinite
lease on destination blob and provide that as destination access condition when starting
the copy blob operation. The copy operation only allows infinite leases on the
destination blob. This is because the service prevents any writes to the destination blob
and any other granular lease would require client to issue Renew Lease on the
destination blob. Acquiring a lease on destination blob requires the blob to exist and
hence client would need to create an empty blob before the copy operation is issued. To
terminate an infinite lease on a destination blob with pending copy operation, you would
have to abort the copy operation before issuing the break request on the lease.

30 Connecting PowerShell to your Azure Subscription
Windows PowerShell is a task automation and configuration management framework from
Microsoft, consisting of a command-line shell and associated scripting language, built on the .NET
Framework.
You can find it using Windows Search, but most likely it is already installed on your computer.

On top of the standard command-line shell, you can also use the Windows PowerShell ISE, which stands for Integrated Scripting Environment: a graphical user interface that allows you to easily create scripts without having to type all the commands at the command line.

To connect your Azure subscription with PowerShell, follow the steps below.

30.1 Step 1: Install Latest Azure Tools


You can install the latest Azure Tools using Web Platform Installer, which provides an easy way to
download and install all the latest components of the Microsoft Web Platform.

30.2 Step 2: Get Azure Publish Settings File


First of all, you have to configure the connectivity between your computer and Azure. In order to do
that, you need to download the Azure Publish Settings File which contains secure credentials and
additional information on your subscription.
To obtain this file, type the following at the Windows PowerShell command prompt:

Get-AzurePublishSettingsFile

After hitting Enter, a web browser will open at https://manage.windowsazure.com/publishsettings for signing in to Azure. Then, your subscription file will be generated and the download will begin shortly after.

30.3 Step 3: Import publish settings file


Assuming that the file has been saved to the C:\Azure\mysettings.publishsettings path, you can import it with:

Import-AzurePublishSettingsFile -PublishSettingsFile "C:\Azure\mysettings.publishsettings"

You can check that the management certificate is now included in your machine certificates:

Get-ChildItem -Recurse cert:\ | Where-Object {$_.Issuer -like '*Azure*'} | select FriendlyName, Subject

30.4 Step 4: Set default Azure Subscription


Afterwards, in case you have multiple subscriptions linked to your machine, you can list them using:

Get-AzureSubscription | Select SubscriptionName

and select the one you want as the default:

Select-AzureSubscription -SubscriptionName "BizSpark"

To validate your current subscription, simply type:

Get-AzureSubscription -Current

which returns all available details about your current subscription.

30.5 Simple Commands


Now that you have a secure link to your subscription, you can use the command below to get
your available storage accounts

Get-AzureStorageAccount | Select StorageAccountName, AccountType, Endpoints

or

Get-AzureVM

to return Virtual Machines available in your subscription.

30.6 Certificate Management


Every certificate that was generated is now listed under Settings > Management Certificates in the old Windows Azure portal, where you have the option to upload a new one or delete any of the existing ones.

31 Downloading Windows Azure Subscription Files
If you use Azure Management Studio, you can quickly setup your connections by importing your
publish settings.

The publish settings file is just an XML file with your subscription details (id, name, url) as well as a
management certificate for authenticating management API requests. It is available for download
from the Windows Azure Management Portal at:

https://windows.azure.com/download/publishprofile.aspx

31.1.1

Update (October 11, 2014)

You can get to this url via Visual Studio:


1. Open Visual Studio, go to File View > Server Explorer
2. In Server Explorer, right-click on the Azure node and select Manage Subscriptions
3. In the manage subscription dialog, sign in using your Azure account and go to the Certificates tab
4. Click on the Import button
5. Another dialog box will open with a Download subscription file link
Clicking on the link will open your default browser and navigate to the site where you can select
your subscription and download the file.

32 Switch azure website slot

Swaps the production slot for a website with another slot. This works on websites with two slots only.

32.1 Syntax

PowerShell
Switch-AzureWebsiteSlot [[-Name] <String>] [-Force] [-Confirm] [-Profile <AzureSMProfile>] [-Slot1 <String>] [-Slot2 <String>] [-WhatIf] [<CommonParameters>]

32.2 Description
The Switch-AzureWebsiteSlot cmdlet swaps the production slot for a website with another slot. This works on websites with two slots only.

32.3 Examples
32.3.1 Example 1: Switch Website Slot

PowerShell
C:\PS>Switch-AzureWebsiteSlot -Name MyWebsite

This command swaps the backup slot of the Azure website MyWebsite with the production slot.

32.4 Parameters
32.4.1 -Name
The name of the website.
Type: String
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False

32.4.2 -Force
Forces the command to run without asking for user confirmation.
Type: SwitchParameter
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

32.4.3 -Confirm
Prompts you for confirmation before running the cmdlet.
Type: SwitchParameter
Aliases: cf
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False

32.4.4 -Profile
Specifies the Azure profile from which this cmdlet reads. If you do not specify a profile, this cmdlet reads from the local default profile.
Type: AzureSMProfile
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

32.4.5 -Slot1
Specifies the first slot.
Type: String
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False

32.4.6 -Slot2
Specifies the second slot.
Type: String
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False

32.4.7 -WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Type: SwitchParameter
Aliases: wi
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False

Most of you would be aware of the Deployment Slots in Azure Web Sites. For those who
are not familiar, Deployment Slots provide an option to deploy your changes to a
staging environment instead of moving the changes directly to the production
environment. This helps you validate your changes in the Azure Web Sites
environment before you swap them with the production environment. For
details about deployment slots, see
http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/.

The swapping can be done easily in the Azure Management Portal using the
SWAP option under the DASHBOARD of the website/slots. It is also discussed in
http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/#Swap.

But there are many instances where people want to make use of the powerful Azure
PowerShell cmdlets to swap the slots.

Switch-AzureWebsiteSlot is the cmdlet to swap between the slots and the actual
production environment. Below are some examples.

When there is only one slot and you want to swap the slot with the production
environment, the example can be as simple as:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName>

When there are two or more slots and you want to swap between the slots, the example
can be as simple as:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 <slotName> -Slot2 <slotName>

But when there are two or more slots and you want to swap between one of the slots and
the production environment, and you provided only one slot name in the syntax as
below:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 <slotName>

it will then throw an error like "The website has more than 2 slots you must specify which
ones to swap," as shown below:

PS C:\> Switch-AzureWebsiteSlot -Name hkrishaspnet -Slot1 slot1


Switch-AzureWebsiteSlot : The website has more than 2 slots you must specify which
ones to swap
At line:1 char:1
+ Switch-AzureWebsiteSlot -Name hkrishaspnet -Slot1 slot1
+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~
+ CategoryInfo : CloseError: (:) [Switch-AzureWebsiteSlot], PSInvalidOperationException
+ FullyQualifiedErrorId :
Microsoft.WindowsAzure.Commands.Websites.SwitchAzureWebsiteSlotCommand

In this case, it is necessary to provide the slot name for the actual/production
environment, which is Production. The syntax would be something like the following:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 Production -Slot2 <slotName>

This will swap the corresponding slot content with the production slot of the Azure Web
Site.
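
To put the pieces together, here is a small illustrative example (not part of the original article) that previews a swap with -WhatIf and then performs it without a confirmation prompt using -Force; the site name contoso-web and the slot name staging are placeholders.

PowerShell
# Preview the swap without performing it (placeholder site and slot names).
Switch-AzureWebsiteSlot -Name contoso-web -Slot1 Production -Slot2 staging -WhatIf

# Perform the swap non-interactively.
Switch-AzureWebsiteSlot -Name contoso-web -Slot1 Production -Slot2 staging -Force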

33 How to Use Azure Redis Cache


This guide shows you how to get started using Azure Redis Cache.
Microsoft Azure Redis Cache is based on the popular open source Redis
Cache. It gives you access to a secure, dedicated Redis cache, managed by
Microsoft. A cache created using Azure Redis Cache is accessible from any
application within Microsoft Azure.
Microsoft Azure Redis Cache is available in the following tiers:

Basic - Single node. Multiple sizes up to 53 GB.
Standard - Two-node Primary/Replica. Multiple sizes up to 53 GB. 99.9% SLA.
Premium - Two-node Primary/Replica with up to 10 shards. Multiple sizes
from 6 GB to 530 GB (contact us for more). All Standard tier features and more,
including support for Redis cluster, Redis persistence, and Azure Virtual Network.
99.9% SLA.

Each tier differs in terms of features and pricing. For information on pricing,
see Cache Pricing Details.
This guide shows you how to use the StackExchange.Redis client using C#
code. The scenarios covered include creating and configuring a cache,
configuring cache clients, and adding and removing objects from the
cache. For more information on using Azure Redis Cache, refer to the Next
Steps section. For a step-by-step tutorial of building an ASP.NET MVC web
app with Redis Cache, see How to create a Web App with Redis Cache.

33.1 Get Started with Azure Redis Cache

Getting started with Azure Redis Cache is easy. To get started, you provision
and configure a cache. Next, you configure the cache clients so they can
access the cache. Once the cache clients are configured, you can begin
working with them.

Create the cache
Configure the cache clients

33.2 Create a cache


To create a cache, first sign in to the Azure portal, and click New, Data +
Storage, Redis Cache.

33.2.1.1.1 Note
If you don't have an Azure account, you can Open an Azure account for free
in just a couple of minutes.

33.2.1.1.2 Note
In addition to creating caches in the Azure portal, you can also create them
using Resource Manager templates, PowerShell, or Azure CLI.

To create a cache using Resource Manager templates, see Create a Redis
cache using a template.
To create a cache using Azure PowerShell, see Manage Azure Redis Cache
with Azure PowerShell (a short sketch follows this list).
To create a cache using Azure CLI, see How to create and manage Azure
Redis Cache using the Azure Command-Line Interface (Azure CLI).
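
As an illustration of the PowerShell route mentioned in the list above, the following is a rough sketch using the New-AzureRmRedisCache cmdlet from the AzureRM module; the resource group, cache name, location, and size are placeholders, and the exact parameter set may vary with the module version you have installed.

PowerShell
# Sign in, then create a Standard 1 GB cache in an existing resource group (placeholder names).
Login-AzureRmAccount
New-AzureRmRedisCache -ResourceGroupName "myResourceGroup" -Name "contoso5" `
    -Location "North Europe" -Sku "Standard" -Size "1GB"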

In the New Redis Cache blade, specify the desired configuration for the
cache.

In Dns name, enter a cache name to use for the cache endpoint. The cache
name must be a string between 1 and 63 characters and contain only numbers,
letters, and the - character. The cache name cannot start or end with the -
character, and consecutive - characters are not valid.

For Subscription, select the Azure subscription that you want to use for the
cache. If your account has only one subscription, it will be automatically selected
and the Subscription drop-down will not be displayed.

In Resource group, select or create a resource group for your cache. For
more information, see Using Resource groups to manage your Azure resources.

Use Location to specify the geographic location in which your cache is
hosted. For the best performance, Microsoft strongly recommends that you create
the cache in the same region as the cache client application.

Use Pricing Tier to select the desired cache size and features.

Redis cluster allows you to create caches larger than 53 GB and to shard
data across multiple Redis nodes. For more information, see How to configure
clustering for a Premium Azure Redis Cache.

Redis persistence offers the ability to persist your cache to an Azure
Storage account. For instructions on configuring persistence, see How to
configure persistence for a Premium Azure Redis Cache.

Virtual Network provides enhanced security and isolation by restricting
access to your cache to only those clients within the specified Azure Virtual
Network. You can use all the features of VNet such as subnets, access control
policies, and other features to further restrict access to Redis. For more
information, see How to configure Virtual Network support for a Premium Azure
Redis Cache.

Once the new cache options are configured, click Create. It can take a few
minutes for the cache to be created. To check the status, you can monitor
the progress on the startboard. After the cache has been created, your new
cache has a Running status and is ready for use with default settings.

33.2.2 To access your cache after it's created
Caches can be accessed in the Azure portal using the Browse blade.
To view your caches, click More services > Redis Caches. If you have
recently browsed to a Redis Cache, you can click Redis Caches directly from
the list without clicking More services.
Select the desired cache to view and configure the settings for that cache.
You can view and configure your cache from the Redis Cache blade.
For more information about configuring your cache, see How to configure
Azure Redis Cache.

33.3 Configure the cache clients


.NET applications can use the StackExchange.Redis cache client, which
can be configured in Visual Studio using a NuGet package that simplifies the
configuration of cache client applications.

33.3.1.1.1 Note
For more information, see the StackExchange.Redis github page and the
StackExchange.Redis cache client documentation.

To configure a client application in Visual Studio using the
StackExchange.Redis NuGet package, right-click the project in Solution
Explorer and choose Manage NuGet Packages.
Type StackExchange.Redis or StackExchange.Redis.StrongName into
the search text box, select the desired version from the results, and click
Install.

33.3.1.1.2 Note
If you prefer to use a strong-named version of the StackExchange.Redis
client library, choose StackExchange.Redis.StrongName; otherwise
choose StackExchange.Redis.

The NuGet package downloads and adds the required assembly references
for your client application to access Azure Redis Cache with the
StackExchange.Redis cache client.

33.3.1.1.3 Note
If you have previously configured your project to use StackExchange.Redis,
you can check for updates to the package from the NuGet Package
Manager. To check for and install updated versions of the
StackExchange.Redis NuGet package, click Updates in the NuGet
Package Manager window. If an update to the StackExchange.Redis NuGet
package is available, you can update your project to use the updated
version.

Once your client project is configured for caching, you can use the
techniques described in the following sections for working with your cache.

33.4 Working with Caches


The steps in this section describe how to perform common tasks with Cache.

Connect to the cache
Add and retrieve objects from the cache
Work with .NET objects in the cache

33.5 Connect to the cache

In order to programmatically work with a cache, you need a reference to the
cache. Add the following to the top of any file from which you want to use
the StackExchange.Redis client to access an Azure Redis Cache.

using StackExchange.Redis;

33.5.1.1.1 Note
The StackExchange.Redis client requires .NET Framework 4 or higher.


The connection to the Azure Redis Cache is managed by the
ConnectionMultiplexer class. This class is designed to be shared and reused
throughout your client application, and does not need to be created on a per
operation basis.
To connect to an Azure Redis Cache and be returned an instance of a
connected ConnectionMultiplexer, call the static Connect method and pass
in the cache endpoint and key like the following example. Use the key
generated from the Azure Portal as the password parameter.

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");

33.5.1.1.2 Important
Warning: Never store credentials in source code. To keep this sample simple,
I'm showing them in the source code. See How Application Strings and
Connection Strings Work for information on how to store credentials.
If you don't want to use SSL, either set ssl=false or omit the ssl parameter.

33.5.1.1.3 Note
The non-SSL port is disabled by default for new caches. For instructions on
enabling the non-SSL port, see Access Ports.
One approach to sharing a ConnectionMultiplexer instance in your
application is to have a static property that returns a connected instance,
similar to the following example. This provides a thread-safe way to initialize
only a single connected ConnectionMultiplexer instance. In these examples
abortConnect is set to false, which means that the call will succeed even if a
connection to the Azure Redis Cache is not established. One key feature of
ConnectionMultiplexer is that it will automatically restore connectivity to the
cache once the network issue or other causes are resolved.

private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get
    {
        return lazyConnection.Value;
    }
}

For more information on advanced connection configuration options, see
StackExchange.Redis configuration model.
To connect to an Azure Redis Cache instance, cache clients need the host
name, ports, and keys of the cache. Some clients may refer to these items by
slightly different names. To retrieve these items, browse to your cache in the
Azure portal and click Settings or All settings.

33.5.2 Host name and ports
To access the host name and ports, click Properties.

33.5.3 Access keys
To retrieve the access keys, click Access keys.

Once the connection is established, return a reference to the redis cache
database by calling the ConnectionMultiplexer.GetDatabase method. The
object returned from the GetDatabase method is a lightweight pass-through
object and does not need to be stored.
// Connection refers to a property that returns a ConnectionMultiplexer
// as shown in the previous example.
IDatabase cache = Connection.GetDatabase();

// Perform cache operations using the cache object...

// Simple put of integral data types into the cache
cache.StringSet("key1", "value");
cache.StringSet("key2", 25);

// Simple get of data types from the cache
string key1 = cache.StringGet("key1");
int key2 = (int)cache.StringGet("key2");

Now that you know how to connect to an Azure Redis Cache instance and
return a reference to the cache database, let's take a look at working with
the cache.

33.6 Add and retrieve objects from the cache


Items can be stored in and retrieved from a cache by using the StringSet and
StringGet methods.

// If key1 exists, it is overwritten.
cache.StringSet("key1", "value1");
string value = cache.StringGet("key1");

Redis stores most data as Redis strings, but these strings can contain many
types of data, including serialized binary data, which can be used when
storing .NET objects in the cache.
When calling StringGet, if the object exists, it is returned, and if it does not,
null is returned. In this case you can retrieve the value from the desired data
source and store it in the cache for subsequent use. This is known as the
cache-aside pattern.

string value = cache.StringGet("key1");
if (value == null)
{
    // The item keyed by "key1" is not in the cache. Obtain
    // it from the desired data source and add it to the cache.
    value = GetValueFromDataSource();
    cache.StringSet("key1", value);
}

To specify the expiration of an item in the cache, use the TimeSpan
parameter of StringSet.

cache.StringSet("key1", "value1", TimeSpan.FromMinutes(90));

33.7 Work with .NET objects in the cache


Azure Redis Cache can cache .NET objects as well as primitive data types,
but before a .NET object can be cached it must be serialized. This is the
responsibility of the application developer, and gives the developer flexibility
in the choice of the serializer.
One simple way to serialize objects is to use the JsonConvert serialization
methods in Newtonsoft's Json.NET and serialize to and from JSON. The
following example shows a get and set using an Employee object instance.

class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }

    public Employee(int EmployeeId, string Name)
    {
        this.Id = EmployeeId;
        this.Name = Name;
    }
}

// Store to cache
cache.StringSet("e25", JsonConvert.SerializeObject(new Employee(25, "Clayton Gragg")));

// Retrieve from cache
Employee e25 = JsonConvert.DeserializeObject<Employee>(cache.StringGet("e25"));

34 Monitor a storage account in the Azure Portal

34.1 Overview
You can monitor your storage account from the Azure Portal. When you
configure your storage account for monitoring through the portal, Azure
Storage uses Storage Analytics to track metrics for your account and log
request data.

34.1.1.1.1 Note
Additional costs are associated with examining monitoring data in the Azure
Portal. For more information, see Storage Analytics and Billing.

Azure File storage currently supports Storage Analytics metrics, but does not
yet support logging. You can enable metrics for Azure File storage via the
Azure Portal.
Storage accounts with a replication type of Zone-Redundant Storage (ZRS)
do not have the metrics or logging capability enabled at this time.
For an in-depth guide on using Storage Analytics and other tools to identify,
diagnose, and troubleshoot Azure Storage-related issues, see Monitor,
diagnose, and troubleshoot Microsoft Azure Storage.

34.2 How to: Configure monitoring for a storage account


1. In the Azure Portal, click Storage, and then click the storage account name
to open the dashboard.

2. Click Configure, and scroll down to the monitoring settings for the blob,
table, and queue services.

3. In monitoring, set the level of monitoring and the data retention policy
for each service:

To set the monitoring level, select one of the following:

Minimal - Collects metrics such as ingress/egress, availability, latency, and
success percentages, which are aggregated for the blob, table, and queue
services.

Verbose - In addition to the minimal metrics, collects the same set of
metrics for each storage operation in the Azure Storage Service API.
Verbose metrics enable closer analysis of issues that occur during
application operations.

Off - Turns off monitoring. Existing monitoring data is persisted through the
end of the retention period.

To set the data retention policy, in Retention (in days), type the number of
days of data to retain from 1 to 365 days. If you do not want to set a retention
policy, enter zero. If there is no retention policy, it is up to you to delete the
monitoring data. We recommend setting a retention policy based on how long you
want to retain storage analytics data for your account so that old and unused
analytics data can be deleted by the system at no cost.

4. When you finish the monitoring configuration, click Save.

You should start seeing monitoring data on the dashboard and the Monitor
page after about an hour.
Until you configure monitoring for a storage account, no monitoring data is
collected, and the metrics charts on the dashboard and Monitor page are
empty.
After you set the monitoring levels and retention policies, you can choose
which of the available metrics to monitor in the Azure Portal, and which
metrics to plot on metrics charts. A default set of metrics is displayed at each
monitoring level. You can use Add Metrics to add or remove metrics from
the metrics list.

Metrics are stored in the storage account in four tables named
$MetricsTransactionsBlob, $MetricsTransactionsTable,
$MetricsTransactionsQueue, and $MetricsCapacityBlob. For more information,
see About Storage Analytics Metrics.
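
If you prefer to script this configuration rather than click through the portal, the classic Azure PowerShell storage cmdlets can apply comparable settings. The sketch below is an assumption-laden example rather than part of the original walkthrough: it uses Set-AzureStorageServiceMetricsProperty with a placeholder account name and key, and the ServiceAndApi level roughly corresponds to the Verbose level described above.

PowerShell
# Build a storage context from the account name and key (placeholders).
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"

# Enable hourly metrics for the Blob service at a verbose-like level with 7-day retention.
Set-AzureStorageServiceMetricsProperty -ServiceType Blob -MetricsType Hour `
    -MetricsLevel ServiceAndApi -RetentionDays 7 -Context $ctx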

34.3 How to: Customize the dashboard for monitoring


On the dashboard, you can choose up to six metrics to plot on the metrics
chart from nine available metrics. For each service (blob, table, and queue),
the Availability, Success Percentage, and Total Requests metrics are
available. The metrics available on the dashboard are the same for minimal
or verbose monitoring.

1. In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2. To change the metrics that are plotted on the chart, take one of the
following actions:

To add a new metric to the chart, click the colored check box next to
the metric header in the table below the chart.

To hide a metric that is plotted on the chart, clear the colored check
box next to the metric header.

3. By default, the chart shows trends, displaying only the current value of each
metric (the Relative option at the top of the chart). To display a Y axis so you can
see absolute values, select Absolute.

4. To change the time range the metrics chart displays, select 6 hours, 24 hours,
or 7 days at the top of the chart.

34.4 How to: Customize the Monitor page


On the Monitor page, you can view the full set of metrics for your storage
account.

If your storage account has minimal monitoring configured, metrics such as
ingress/egress, availability, latency, and success percentages are aggregated
from the blob, table, and queue services.

If your storage account has verbose monitoring configured, the metrics are
available at a finer resolution of individual storage operations in addition to the
service-level aggregates.

Use the following procedures to choose which storage metrics to view in the
metrics charts and table that are displayed on the Monitor page. These
settings do not affect the collection, aggregation, and storage of monitoring
data in the storage account.

34.5 How to: Add metrics to the metrics table


1. In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2. Click Monitor.
The Monitor page opens. By default, the metrics table displays a subset of
the metrics that are available for monitoring. The illustration shows the
default Monitor display for a storage account with verbose monitoring
configured for all three services. Use Add Metrics to select the metrics you
want to monitor from all available metrics.

34.5.1.1.1 Note
Consider costs when you select the metrics. There are transaction and egress
costs associated with refreshing monitoring displays. For more information,
see Storage Analytics and Billing.

3. Click Add Metrics.
The aggregate metrics that are available in minimal monitoring are at the top
of the list. If the check box is selected, the metric is displayed in the metrics
list.

4. Hover over the right side of the dialog box to display a scrollbar that you
can drag to scroll additional metrics into view.

5. Click the down arrow by a metric to expand a list of operations the metric
is scoped to include. Select each operation that you want to view in the
metrics table in the Azure Portal.
In the following illustration, the AUTHORIZATION ERROR PERCENTAGE metric
has been expanded.

6. After you select metrics for all services, click OK (checkmark) to update the
monitoring configuration. The selected metrics are added to the metrics table.

7. To delete a metric from the table, click the metric to select it, and then
click Delete Metric.

34.6 How to: Customize the metrics chart on the Monitor page
1. On the Monitor page for the storage account, in the metrics table, select up
to 6 metrics to plot on the metrics chart. To select a metric, click the check box on
its left side. To remove a metric from the chart, clear the check box.

2. To switch the chart between relative values (final value only displayed) and
absolute values (Y axis displayed), select Relative or Absolute at the top of the
chart.

3. To change the time range the metrics chart displays, select 6 hours, 24
hours, or 7 days at the top of the chart.

34.7 How to: Configure logging


For each of the storage services available with your storage account (blob,
table, and queue), you can save diagnostics logs for Read Requests, Write
Requests, and/or Delete Requests, and can set the data retention policy for
each of the services.

1. In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.

2. Click Configure, and use the Down arrow on the keyboard to scroll down
to logging.

3. For each service (blob, table, and queue), configure the following:

The types of request to log: Read Requests, Write Requests, and Delete
Requests.

The number of days to retain the logged data. Enter zero if you do
not want to set a retention policy. If you do not set a retention policy, it is up to
you to delete the logs.

4. Click Save.
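
The logging settings can be scripted in a similar way. The following is a hedged sketch using Set-AzureStorageServiceLoggingProperty and the $ctx storage context from the earlier metrics example; the service, operations, and retention value are placeholders you would repeat or adjust per service.

PowerShell
# Log read, write, and delete requests for the Blob service and keep the logs for 7 days.
Set-AzureStorageServiceLoggingProperty -ServiceType Blob `
    -LoggingOperations Read,Write,Delete -RetentionDays 7 -Context $ctx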

35 Update Data Disk

Updated: July 10, 2015


The Update Data Disk operation updates the configuration of the specified data disk that is attached
to the specified Virtual Machine.

35.1 Request
The Update Data Disk request may be specified as follows. Replace <subscription-id> with the
subscription ID, <cloudservice-name> with the name of the cloud service, <deployment-name> with
the name of the deployment, <role-name> with the name of the Virtual Machine, and <lun> with the
logical unit number of the disk.

Method: PUT

Request URI:
https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles/<role-name>/DataDisks/<lun>

35.2.1 URI Parameters
None.

35.2.2 Request Headers
The following table describes the request headers.

Request Header: x-ms-version
Description: Required. Specifies the version of the operation to use for this request. This header
should be set to 2012-03-01 or higher.

35.2.3 Request Body
The format of the request body is as follows:


<DataVirtualHardDisk xmlns="http://schemas.microsoft.com/windowsazure"
xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <HostCaching>caching-mode-of-disk</HostCaching>
  <DiskName>name-of-data-disk</DiskName>
  <Lun>logical-unit-number-of-data-disk</Lun>
  <MediaLink>path-to-vhd</MediaLink>
</DataVirtualHardDisk>

The following table describes the elements of the request body.

HostCaching
Optional. Specifies the caching behavior of the data disk. Possible values are: None, ReadOnly,
ReadWrite. The default value is None.

DiskName
Required. Specifies the name of the data disk to update. This value is only used to
identify the data disk to update and cannot be changed.

Lun
Required. Specifies the Logical Unit Number (LUN) for the data disk. You can use this
element to change the LUN for the data disk. If you do not want to change the LUN,
specify the existing LUN as the value for this element. Valid LUN values are 0 through 31.

MediaLink
Required. Specifies the location of the VHD that is associated with the data disk. This
value is only used to identify the data disk to update and cannot be changed.
Example: http://example.blob.core.windows.net/disks/mydatadisk.vhd

35.3 Response
The response includes an HTTP status code, a set of response headers, and a response body.

35.4.1 Status Code
A successful operation returns status code 202 (Accepted).

35.4.2 Response Headers
The response for this operation includes the following headers. The response may also include
additional standard HTTP headers.

Response Header: x-ms-request-id
Description: A value that uniquely identifies a request made against the management service.

35.4.3 Response Body
None.
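
To make the request format concrete, here is a hedged PowerShell sketch that issues the PUT described above with Invoke-RestMethod and a management certificate. The subscription ID, service, deployment, role, LUN, caching mode, and certificate thumbprint are all placeholders, and error handling is omitted.

PowerShell
# Placeholder values - replace with your own.
$subscriptionId = "<subscription-id>"
$cloudService   = "<cloudservice-name>"
$deployment     = "<deployment-name>"
$roleName       = "<role-name>"
$lun            = 0

# Management certificate used to authenticate against the Service Management API.
$cert = Get-Item "Cert:\CurrentUser\My\<certificate-thumbprint>"

$uri = "https://management.core.windows.net/$subscriptionId/services/hostedservices/$cloudService/deployments/$deployment/roles/$roleName/DataDisks/$lun"

$body = @"
<DataVirtualHardDisk xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <HostCaching>ReadOnly</HostCaching>
  <DiskName>name-of-data-disk</DiskName>
  <Lun>$lun</Lun>
  <MediaLink>path-to-vhd</MediaLink>
</DataVirtualHardDisk>
"@

# A successful call returns HTTP status 202 (Accepted).
Invoke-RestMethod -Uri $uri -Method Put -Certificate $cert `
    -Headers @{ "x-ms-version" = "2012-03-01" } -Body $body -ContentType "application/xml"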

36 Map a custom domain name to an Azure app

This article shows you how to manually map a custom domain name to your
web app, mobile app backend, or API app in Azure App Service.
Your app already comes with a unique subdomain of azurewebsites.net. For
example, if the name of your app is contoso, then its domain name is
contoso.azurewebsites.net. However, you can map a custom domain
name to your app so that its URL, such as www.contoso.com, reflects your brand.

36.1.1.1.1 Note
Get help from Azure experts on the Azure forums. For an even higher level of
support, go to the Azure Support site and click Get Support.

This article is for Azure App Service (Web Apps, API Apps, Mobile Apps, Logic
Apps); for Cloud Services, see Configuring a custom domain name for an
Azure cloud service.

36.1.1.1.2 Note
If your app is load-balanced by Azure Traffic Manager, click the selector at the
top of this article to get specific steps.
Custom domain names are not enabled for the Free tier. You must scale up
to a higher pricing tier, which may change how much you are billed for your
subscription. See App Service Pricing for more information.

36.2 Buy a new custom domain in Azure portal


If you haven't already purchased a custom domain name, you can buy one
and manage it directly in your app's settings in the Azure portal. This option
makes it easy to map a custom domain to your app, whether your app uses
Azure Traffic Manager or not.
For instructions, see Buy a custom domain name for App Service.

36.3 Map a custom domain you purchased externally


If you have already purchased a custom domain from Azure DNS or from a
third-party provider, there are three main steps to map the custom domain
to your app, followed by a verification step:

1. (A record only) Get the app's IP address.

2. Create the DNS records that map your domain to your app.
Where: your domain registrar's own management tool (e.g. Azure DNS, GoDaddy, etc.).
Why: so your domain registrar knows to resolve the desired custom domain to your Azure app.

3. Enable the custom domain name for your Azure app.
Where: the Azure portal.
Why: so your app knows to respond to requests made to the custom domain name.

4. Verify DNS propagation.

36.3.1 Types of domains you can map
Azure App Service lets you map the following categories of custom domains
to your app.

Root domain - the domain name that you reserved with the domain registrar
(represented by the @ host record, typically). For example, contoso.com.

Subdomain - any domain that's under your root domain. For example,
www.contoso.com (represented by the www host record). You can map
different subdomains of the same root domain to different apps in Azure.

Wildcard domain - any subdomain whose leftmost DNS label is * (e.g. host
records * and *.blogs). For example, *.contoso.com.

36.3.2 Types of DNS records you can use
Depending on your need, you can use two different types of standard DNS
records to map your custom domain:

A - maps your custom domain name to the Azure app's virtual IP address
directly.

CNAME - maps your custom domain name to your app's Azure domain name,
<appname>.azurewebsites.net.

The advantage of CNAME is that it persists across IP address changes. If you
delete and recreate your app, or change from a higher pricing tier back to
the Shared tier, your app's virtual IP address may change. Through such a
change, a CNAME record is still valid, whereas an A record requires an
update.
The tutorial shows you steps for using the A record and also for using the
CNAME record.

36.3.2.1.1 Important
Do not create a CNAME record for your root domain (i.e. the "root record").
For more information, see Why can't a CNAME record be used at the root
domain. To map a root domain to your Azure app, use an A record instead.

36.4 Step 1. (A record only) Get app's IP address


To map a custom domain name using an A record, you need your Azure app's
IP address. If you will map using a CNAME record instead, skip this step and
move on to the next section.

1. Log in to the Azure portal.

2. Click App Services on the left menu.

3. Click your app, then click Custom domains.

4. Take note of the IP address above the Hostnames section.

5. Keep this portal blade open. You will come back to it once you create the DNS
records.

36.5 Step 2. Create the DNS record(s)


Log in to your domain registrar and use their tool to add an A record or
CNAME record. Every registrar's UI is slightly different, so you should consult
your provider's documentation. However, here are some general guidelines.

1. Find the page for managing DNS records. Look for links or areas of the site
labeled Domain Name, DNS, or Name Server Management. Often, you can
find the link by viewing your account information, and then looking for a link such
as My domains.

2. Look for a link that lets you add or edit DNS records. This might be a Zone
file or DNS Records link, or an Advanced configuration link.

3. Create the record and save your changes.

Instructions for an A record are here.
Instructions for a CNAME record are here.
36.5.1 Create an A record
To use an A record to map to your Azure app's IP address, you actually need
to create both an A record and a TXT record. The A record is for the DNS
resolution itself, and the TXT record is for Azure to verify that you own the
custom domain name.
Configure your A record as follows (@ typically represents the root domain):

FQDN example | A Host | A Value
contoso.com (root) | @ | IP address from Step 1
www.contoso.com (sub) | www | IP address from Step 1
*.contoso.com (wildcard) | * | IP address from Step 1

Your additional TXT record takes on the convention that maps from
<subdomain>.<rootdomain> to <appname>.azurewebsites.net. Configure
your TXT record as follows:

FQDN example | TXT Host | TXT Value
contoso.com (root) | @ | <appname>.azurewebsites.net
www.contoso.com (sub) | www | <appname>.azurewebsites.net
*.contoso.com (wildcard) | * | <appname>.azurewebsites.net
36.5.2 Create a CNAME record
If you use a CNAME record to map to your Azure app's default domain name,
you don't need an additional TXT record like you do with an A record.

36.5.2.1.1 Important
Do not create a CNAME record for your root domain (i.e. the "root record").
For more information, see Why can't a CNAME record be used at the root
domain. To map a root domain to your Azure app, use an A record instead.
Configure your CNAME record as follows (@ typically represents the root
domain):

FQDN example | CNAME Host | CNAME Value
www.contoso.com (sub) | www | <appname>.azurewebsites.net
*.contoso.com (wildcard) | * | <appname>.azurewebsites.net

36.6 Step 3. Enable the custom domain name for your app
Back in the Custom Domains blade in the Azure portal (see Step 1), you
need to add the fully-qualified domain name (FQDN) of your custom domain
to the list.

1. If you haven't done so, log in to the Azure portal.

2. In the Azure portal, click App Services on the left menu.

3. Click your app, then click Custom domains > Add hostname.

4. Add the FQDN of your custom domain to the list (e.g. www.contoso.com).

36.6.1.1.1 Note
Azure will attempt to verify the domain name that you use here. Be sure that
it is the same domain name for which you created a DNS record in Step 2.

5. Click Validate.

6. Upon clicking Validate, Azure will kick off the domain verification workflow. This
checks for domain ownership as well as hostname availability, and reports
success or a detailed error with prescriptive guidance on how to fix the error.

7. Upon successful validation, the Add hostname button becomes active and
you will be able to assign the hostname.

8. Once Azure finishes configuring your new custom domain name, navigate to
your custom domain name in a browser. The browser should open your Azure
app, which means that your custom domain name is configured properly.
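
If you prefer scripting the hostname assignment instead of using the portal blade, Azure PowerShell offers one way to do it. This is a hedged sketch assuming the AzureRM module; the app name contoso and resource group myResourceGroup are placeholders, and note that -HostNames replaces the entire hostname list, so the default azurewebsites.net hostname is included as well.

PowerShell
# Add www.contoso.com to the web app's hostname list (placeholder names).
Set-AzureRmWebApp -ResourceGroupName "myResourceGroup" -Name "contoso" `
    -HostNames @("www.contoso.com", "contoso.azurewebsites.net")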

36.7 Migrate an active domain with no downtime


When you migrate a live site and its domain name to App Service, that
domain name is already serving live traffic, and you don't want any
downtime in DNS resolution during the migration process. In this case, you
need to preemptively bind the domain name to your Azure app for domain
verification. To do this, follow the modified steps below:

1. First, create a verification TXT record with your DNS registry by following
the steps at Step 2. Create the DNS record(s). Your additional TXT record takes
on the convention that maps from <subdomain>.<rootdomain> to
<appname>.azurewebsites.net. See the following table for examples:

FQDN example | TXT Host | TXT Value
contoso.com (root) | awverify.contoso.com | <appname>.azurewebsites.net
www.contoso.com (sub) | awverify.www.contoso.com | <appname>.azurewebsites.net
*.contoso.com (wildcard) | awverify.*.contoso.com | <appname>.azurewebsites.net

2. Then, add your custom domain name to your Azure app by following the
steps at Step 3. Enable the custom domain name for your app.
Your custom domain is now enabled in your Azure app. The only thing left to
do is to update the DNS record with your domain registrar.

3. Finally, update your domain's DNS record to point to your Azure app as is
shown in Step 2. Create the DNS record(s).
User traffic should be redirected to your Azure app immediately after DNS
propagation happens.

36.8 Verify DNS propagation


After you finish the configuration steps, it can take some time for the
changes to propagate, depending on your DNS provider. You can verify that
the DNS propagation is working as expected by using
http://digwebinterface.com/. After you browse to the site, specify the
hostnames in the textbox and click Dig. Verify the results to confirm if the
recent changes have taken effect.

36.8.1.1.1 Note
The propagation of the DNS entries can take up to 48 hours (sometimes
longer). If you have configured everything correctly, you still need to wait for
the propagation to succeed.
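
If you are working from a Windows machine, you can also spot-check the records locally. A small sketch using the Resolve-DnsName cmdlet is shown below; www.contoso.com and contoso.com are placeholders for your own domain.

PowerShell
# Confirm the CNAME record points to the app's default azurewebsites.net domain (placeholder names).
Resolve-DnsName www.contoso.com -Type CNAME

# For an A record mapping, confirm the returned address matches the IP from Step 1.
Resolve-DnsName contoso.com -Type A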

37 Secure your app's custom domain with HTTPS

This article shows you how to enable HTTPS for a web app, a mobile app backend,
or an API app in Azure App Service that uses a custom domain name. It covers
server-only authentication. If you need mutual authentication (including client
authentication), see How To Configure TLS Mutual Authentication for App Service.
To secure with HTTPS an app that has a custom domain name, you add a certificate
for that domain name. By default, Azure secures the *.azurewebsites.net wildcard
domain with a single SSL certificate, so your clients can already access your app at
https://<appname>.azurewebsites.net. But if you want to use a custom domain, like
contoso.com, www.contoso.com, and *.contoso.com, the default certificate can't
secure that. Furthermore, like all wildcard certificates, the default certificate is not
as secure as using a custom domain and a certificate for that custom domain.

Note
You can get help from Azure experts anytime on the Azure forums. For more
personalized support, go to Azure Support and click Get Support.
What you need
To secure your custom domain name with HTTPS, you bind a custom SSL certificate
to that custom domain in Azure. Before binding a custom certificate, you need to do
the following:

Configure the custom domain - App Service only allows adding a certificate for a
domain name that's already configured in your app. For instructions, see Map a
custom domain name to an Azure app.

Scale up to Basic tier or higher - App Service plans in lower pricing tiers don't support
custom SSL certificates. For instructions, see Scale up an app in Azure.

Get an SSL certificate - If you do not already have one, you need to get one from a
trusted certificate authority (CA). The certificate must meet all the following
requirements:

It is signed by a trusted CA (no private CA servers).
It contains a private key.
It is created for key exchange, and exported to a .PFX file.
It uses a minimum of 2048-bit encryption.
Its subject name matches the custom domain it needs to secure. To secure multiple
domains with one certificate, you need to use a wildcard name (e.g. *.contoso.com)
or specify subjectAltName values.
It is merged with all intermediate certificates used by your CA. Otherwise, you may
run into irreproducible interoperability problems on some clients.

Note
The easiest way to get an SSL certificate that meets all the requirements is to buy
one in the Azure portal directly. This article shows you how to do it manually and
then bind it to your custom domain in App Service.
Elliptic Curve Cryptography (ECC) certificates can work with App Service, but are
outside the scope of this article. Work with your CA on the exact steps to create ECC
certificates.
Step 1. Get an SSL certificate
Because CAs provide the various SSL certificate types at different price points, you
should start by deciding what type of SSL certificate to buy. To secure a single
domain name (www.contoso.com), you just need a basic certificate. To secure
multiple domain names (contoso.com and www.contoso.com and mail.contoso.com),
you need either a wildcard certificate or a certificate with Subject Alternate Name
(subjectAltName).
Once you know which SSL certificate to buy, you submit a Certificate Signing
Request (CSR) to a CA. When you get the requested certificate back from the CA, you
then generate a .pfx file from the certificate. You can perform these steps using the
tool of your choice. Here are instructions for the common tools:

Certreq.exe steps - the Windows utility for creating certificate requests. It has been
part of Windows since Windows XP/Windows Server 2000.
IIS Manager steps - the tool of choice if you're already familiar with it.
OpenSSL steps - an open-source, cross-platform tool. Use it to help you get an SSL
certificate from any platform.
subjectAltName steps using OpenSSL - steps for getting subjectAltName certificates.

If you want to test the setup in App Service before buying a certificate, you can
generate a self-signed certificate. This tutorial gives you two ways to generate it:

Self-signed certificate, Certreq.exe steps
Self-signed certificate, OpenSSL steps
Get a certificate using Certreq.exe
Create a file (e.g. myrequest.txt), copy into it the following text, and save it in a
working directory. Replace the <your-domain> placeholder with the custom domain
name of your app.

[NewRequest]
Subject = "CN=<your-domain>"   ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048               ; Required minimum is 2048
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1          ; Server Authentication

For more information on the options in the CSR, and other available options, see the
Certreq reference documentation.
In a command prompt, CD into your working directory and run the following
command to create the CSR:

certreq -new myrequest.txt myrequest.csr


myrequest.csr is now created in your current working directory.
Submit myrequest.csr to a CA to obtain an SSL certificate. You either upload the file,
or copy its content from a text editor into a web form.
For a list of CAs trusted by Microsoft, see Microsoft Trusted Root Certificate Program:
Participants.
Once the CA has responded to you with a certificate (.CER) file, save it in your
working directory. Then, run the following command to complete the pending CSR.
certreq -accept -user <certificate-name>.cer

This command stores the finished certificate in the Windows certificate store.
If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.
To export your SSL certificate from the certificate store, press Win+R and run
certmgr.msc to launch Certificate Manager. Select Personal > Certificates. In the
Issued To column, you should see an entry with your custom domain name, and the
CA you used to generate the certificate in the Issued By column.

Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.

Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.

Select Password, and then enter and confirm the password. Click Next.

Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.
Get a certificate using the IIS Manager
Generate a CSR with IIS Manager to send to the CA. For more information on
generating a CSR, see Request an Internet Server Certificate (IIS 7).
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Complete the CSR with the certificate that the CA sends back to you. For more
information on completing the CSR, see Install an Internet Server Certificate (IIS 7).

If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.
Export the SSL certificate from IIS Manager. For more information on exporting the
certificate, see Export a Server Certificate (IIS 7).
Important
In the Certificate Export Wizard, make sure that you select Yes, export the private key,
and also select Personal Information Exchange - PKCS #12, Include all certificates in
the certificate path if possible, and Export all extended properties.

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.
Get a certificate using OpenSSL
In a command-line terminal, CD into a working directory and generate a private key and
CSR by running the following command:

openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey rsa:2048

When prompted, enter the appropriate information. For example:

Country Name (2 letter code) []:
State or Province Name (full name) []: Washington
Locality Name (eg, city) []: Redmond
Organization Name (eg, company) []: Microsoft
Organizational Unit Name (eg, section) []: Azure
Common Name (eg, YOUR name) []: www.microsoft.com
Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request

A challenge password []:


When finished, you should have two files in your working directory: myserver.key
and server.csr. The server.csr contains the CSR, and you need myserver.key later.
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Once the CA sends you the requested certificate, save it to a file named
myserver.crt in your working directory. If your CA provides it in a text format, simply
copy the content into myserver.crt in a text editor and save it. Your file should look
like the following:

-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJ
V
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwd
S

ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbT
CC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----

In the command-line terminal, run the following command to export myserver.pfx
from myserver.key and myserver.crt:

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt


When prompted, define a password to secure the .pfx file.
Note
If your CA uses intermediate certificates, you must include them with the -certfile
parameter. They usually come as a separate download from your CA, and in several
formats for different web server types. Select the version with the .pem extension.
Your openssl -export command should look like the following example, which creates
a .pfx file that includes the intermediate certificates from the intermediate-cets.pem
file:
openssl pkcs12 -chain -export -out myserver.pfx -inkey myserver.key -in
myserver.crt -certfile intermediate-cets.pem
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.

Get a SubjectAltName certificate using OpenSSL
Create a file named sancert.cnf, copy the following text into it, and save it in a
working directory:

# -------------- BEGIN custom sancert.cnf -----
HOME = .
oid_section = new_oids
[ new_oids ]
[ req ]
default_days = 730
distinguished_name = req_distinguished_name
encrypt_key = no
string_mask = nombstr
req_extensions = v3_req # Extensions to add to certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default =
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default =
localityName = Locality Name (eg, city)
localityName_default =
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default =
commonName = Your common name (eg, domain name)
commonName_default = www.mydomain.com
commonName_max = 64
[ v3_req ]
subjectAltName=DNS:ftp.mydomain.com,DNS:blog.mydomain.com,DNS:*.mydomain.com
# -------------- END custom sancert.cnf -----

In the line that begins with subjectAltName, replace the value with all domain
names you want to secure (in addition to commonName). For example:

subjectAltName=DNS:sales.contoso.com,DNS:support.contoso.com,DNS:fabrikam.com

You do not need to change any other field, including commonName. You will be
prompted to specify them in the next few steps.
In a command-line terminal, CD into your working directory and run the following
command:

openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey rsa:2048 -config sancert.cnf
When prompted, enter the appropriate information. For example:

Country Name (2 letter code) []: US
State or Province Name (full name) []: Washington
Locality Name (eg, city) []: Redmond
Organizational Unit Name (eg, section) []: Azure
Your common name (eg, domain name) []: www.microsoft.com

Once finished, you should have two files in your working directory: myserver.key
and server.csr. The server.csr contains the CSR, and you need myserver.key later.
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.

Once the CA sends you the requested certificate, save it to a file named
myserver.crt. If your CA provides it in a text format, simply copy the content into
myserver.crt in a text editor and save it. The file should look like the following:

-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJ
V
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwd
S
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbT
CC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----

In the command-line terminal, run the following command to export myserver.pfx
from myserver.key and myserver.crt:

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt


When prompted, define a password to secure the .pfx file.
Note
If your CA uses intermediate certificates, you must include them with the -certfile
parameter. They usually come as a separate download from your CA, and in several
formats for different web server types. Select the version with the .pem extension.
Your openssl -export command should look like the following example, which creates
a .pfx file that includes the intermediate certificates from the intermediate-cets.pem
file:
openssl pkcs12 -chain -export -out myserver.pfx -inkey myserver.key -in
myserver.crt -certfile intermediate-cets.pem
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.

Generate a self-signed certificate using Certreq.exe
Important
Self-signed certificates are for test purposes only. Most browsers return errors when
visiting a website that's secured by a self-signed certificate. Some browsers may
even refuse to navigate to the site.

Create a text file (e.g. mycert.txt), copy into it the following text, and save the file in
a working directory. Replace the <your-domain> placeholder with the custom
domain name of your app.

[NewRequest]
Subject = "CN=<your-domain>"   ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048               ; KeyLength can be 2048, 4096, 8192, or 16384 (required minimum is 2048)
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256
RequestType = Cert             ; Self-signed certificate
ValidityPeriod = Years
ValidityPeriodUnits = 1

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1          ; Server Authentication

The important parameter is RequestType = Cert, which specifies a self-signed
certificate. For more information on the options in the CSR, and other available
options, see the Certreq reference documentation.
In the command prompt, CD to your working directory and run the following
command:

certreq -new mycert.txt mycert.crt

Your new self-signed certificate is now installed in the certificate store.
To export the certificate from the certificate store, press Win+R and run
certmgr.msc to launch Certificate Manager. Select Personal > Certificates. In the
Issued To column, you should see an entry with your custom domain name, and the
CA you used to generate the certificate in the Issued By column.

Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.

Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.

Select Password, and then enter and confirm the password. Click Next.

Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.
Generate a self-signed certificate using OpenSSL
Important
Self-signed certificates are for test purposes only. Most browsers return errors when
visiting a website that's secured by a self-signed certificate. Some browsers may
even refuse to navigate to the site.

Create a text file named serverauth.cnf, then copy the following content into it, and
then save it in a working directory:

[ req ]
default_bits           = 2048
default_keyfile        = privkey.pem
distinguished_name     = req_distinguished_name
attributes             = req_attributes
x509_extensions        = v3_ca

[ req_distinguished_name ]
countryName            = Country Name (2 letter code)
countryName_min        = 2
countryName_max        = 2
stateOrProvinceName    = State or Province Name (full name)
localityName           = Locality Name (eg, city)
0.organizationName     = Organization Name (eg, company)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName             = Common Name (eg, your app's domain name)
commonName_max         = 64
emailAddress           = Email Address
emailAddress_max       = 40

[ req_attributes ]
challengePassword      = A challenge password
challengePassword_min  = 4
challengePassword_max  = 20

[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:false
keyUsage=nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
In a command-line terminal, CD into your working directory and run the following
command:

openssl req -sha256 -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt -config serverauth.cnf

This command creates two files: myserver.crt (the self-signed certificate) and
myserver.key (the private key), based on the settings in serverauth.cnf.
Export the certificate to a .pfx file by running the following command:

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt

When prompted, define a password to secure the .pfx file.

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.
Step 2. Upload and bind the custom SSL certificate
Before you move on, review the What you need section and verify that:+
you have a custom domain that maps to your Azure app,
your app is running in Basic tier or higher, and
you have an SSL certificate for the custom domain from a CA.
+
In your browser, open the Azure portal.
Click the App Service option on the left side of the page.
Click the name of the app to which you want to assign this certificate.
In Settings, click SSL certificates.
Click Upload Certificate.

Select the .pfx file that you exported in Step 1 and specify the password that you created earlier. Then, click Upload to upload the certificate. You should now see your uploaded certificate back in the SSL certificates blade.
In the SSL bindings section, click Add binding.
In the Add SSL Binding blade, use the dropdowns to select the domain name to secure with SSL and the certificate to use. You may also select whether to use Server Name Indication (SNI) or IP based SSL.

IP based SSL associates a certificate with a domain name by mapping the dedicated public IP address of the server to the domain name. This requires each domain name (contoso.com, fabricam.com, etc.) associated with your service to have a dedicated IP address. This is the traditional method of associating SSL certificates with a web server.

SNI based SSL is an extension to SSL and Transport Layer Security (TLS) (http://en.wikipedia.org/wiki/Transport_Layer_Security) that allows multiple domains to share the same IP address, with separate security certificates for each domain. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI; however, older browsers may not. For more information on SNI, see the Server Name Indication article on Wikipedia (http://en.wikipedia.org/wiki/Server_Name_Indication).
Click Add Binding to save the changes and enable SSL.
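If you prefer scripting, the upload and binding can also be done with Azure PowerShell. The following is a minimal sketch using the AzureRM module; the resource group name, app name, host name, certificate path, and password are placeholders you would replace with your own values:

# Sign in first (AzureRM module)
Login-AzureRmAccount

# Upload the .pfx and bind it to the custom domain in one step.
# -Name is the host name to secure; -SslState can be SniEnabled or IpBasedEnabled.
New-AzureRmWebAppSSLBinding -ResourceGroupName "MyResourceGroup" `
                            -WebAppName "<appname>" `
                            -Name "www.contoso.com" `
                            -CertificateFilePath "C:\certs\myserver.pfx" `
                            -CertificatePassword "<pfx-password>" `
                            -SslState SniEnabled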
+
Step 3. Change your domain name mapping (IP based SSL only)
If you use SNI SSL bindings only, skip this section. Multiple SNI SSL bindings can
work together on the existing shared IP address assigned to your app. However, if
you create an IP based SSL binding, App Service creates a dedicated IP address for
the binding because the IP based SSL requires one. Only one dedicated IP address
can be created, therefore only one IP based SSL binding may be added.+
Because of this dedicated IP address, you will need to configure your app further if:
+
You used an A record to map your custom domain to your Azure app, and you just
added an IP based SSL binding. In this scenario, you need to remap the existing A
record to point to the dedicated IP address by following these steps:
After you have configured an IP based SSL binding, a dedicated IP address is
assigned to your app. You can find this IP address on the Custom domain page
under settings of your app, right above the Hostnames section. It will be listed as
External IP Address

Remap the A record for your custom domain name to this new IP address.
You already have one or more SNI SSL bindings in your app, and you just added an IP based SSL binding. Once the binding is complete, your <appname>.azurewebsites.net domain name points to the new IP address. Therefore, any existing CNAME mapping from the custom domain to <appname>.azurewebsites.net, including the ones that SNI SSL secures, also receives traffic on the new address, which is created for the IP based SSL only. In this scenario, you need to send the SNI SSL traffic back to the original shared IP address by following these steps:
Identify all CNAME mappings of custom domains to your app that have an SNI SSL binding.
Remap each CNAME record to sni.<appname>.azurewebsites.net instead of <appname>.azurewebsites.net.
+
Step 4. Test HTTPS for your custom domain
All that's left to do now is to make sure that HTTPS works for your custom domain.
In various browsers, browse to https://<your.custom.domain> to see that it serves
up your app.+
If your app gives you certificate validation errors, you're probably using a self-signed certificate.
If that's not the case, you may have left out intermediate certificates when you exported your .pfx certificate. Go back to What you need to verify that your certificate meets all the requirements of App Service.
+
+
Enforce HTTPS on your app
If you still want to allow HTTP access to your app, skip this step. App Service does
not enforce HTTPS, so visitors can still access your app using HTTP. If you want to
enforce HTTPS for your app, you can define a rewrite rule in the web.config file for
your app. Every App Service app has this file, regardless of the language framework
of your app.+
Note
There is language-specific redirection of requests. ASP.NET MVC can use the
RequireHttps filter instead of the rewrite rule in web.config (see Deploy a secure
ASP.NET MVC 5 app to a web app).+
Follow these steps:+
Navigate to the Kudu debug console for your app. Its address is
https://<appname>.scm.azurewebsites.net/DebugConsole.
In the debug console, CD to D:\home\site\wwwroot.
Open web.config by clicking the pencil button.

If you deploy your app with Visual Studio or Git, App Service automatically generates the appropriate web.config for your .NET, PHP, Node.js, or Python app in the application root. If web.config doesn't exist, run touch web.config in the web-based command prompt to create it. Or, you can create it in your local project and redeploy your code.
If you had to create a web.config, copy the following code into it and save it. If you
opened an existing web.config, then you just need to copy the entire <rule> tag
into your web.config's configuration/system.webServer/rewrite/rules element.
Copy

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- BEGIN rule TAG FOR HTTPS REDIRECT -->
        <rule name="Force HTTPS" enabled="true">
          <match url="(.*)" ignoreCase="false" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
        </rule>
        <!-- END rule TAG FOR HTTPS REDIRECT -->
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
This rule returns an HTTP 301 (permanent redirect) to the HTTPS protocol whenever
the user requests a page using HTTP. It redirects from http://contoso.com to
https://contoso.com.
Important
If there are already other <rule> tags in your web.config, then place the copied
<rule> tag before the other <rule> tags.
Save the file in the Kudu debug console. It should take effect immediately and redirect all requests to HTTPS.

38 What is an endpoint Access Control List (ACL)?
An endpoint Access Control List (ACL) is a security enhancement available for
your Azure deployment. An ACL provides the ability to selectively permit or
deny traffic for a virtual machine endpoint. This packet filtering capability
provides an additional layer of security. You can specify network ACLs for
endpoints only. You can't specify an ACL for a virtual network or a specific
subnet contained in a virtual network.+
38.1.1.1.1 Important

It is recommended to use Network Security Groups (NSGs) instead of ACLs whenever possible. To learn more about NSGs, see What is a Network Security Group?.+
ACLs can be configured by using either PowerShell or the Management
Portal. To configure a network ACL by using PowerShell, see Managing Access
Control Lists (ACLs) for Endpoints by using PowerShell. To configure a

network ACL by using the Management Portal, see How to Set Up Endpoints
to a Virtual Machine.+
Using Network ACLs, you can do the following:+

Selectively permit or deny incoming traffic based on remote subnet IPv4 address range to a virtual machine input endpoint.
Blacklist IP addresses.
Create multiple rules per virtual machine endpoint.
Specify up to 50 ACL rules per virtual machine endpoint.
Use rule ordering to ensure the correct set of rules are applied on a given virtual machine endpoint (lowest to highest).
Specify an ACL for a specific remote subnet IPv4 address.

38.2 How ACLs work


An ACL is an object that contains a list of rules. When you create an ACL and
apply it to a virtual machine endpoint, packet filtering takes place on the
host node of your VM. This means the traffic from remote IP addresses is
filtered by the host node for matching ACL rules instead of on your VM. This
prevents your VM from spending the precious CPU cycles on packet filtering.
+
When a virtual machine is created, a default ACL is put in place to block all incoming traffic. However, if an endpoint is created for port 3389, then the default ACL is modified to allow all inbound traffic for that endpoint. Inbound traffic from any remote subnet is then allowed to that endpoint and no firewall provisioning is required. All other ports are blocked for inbound traffic unless endpoints are created for those ports. Outbound traffic is allowed by default.+

Example Default ACL table+

Rule #    Remote Subnet    Endpoint    Permit/Deny
100       0.0.0.0/0        3389        Permit

38.3 Permit and deny


You can selectively permit or deny network traffic for a virtual machine input
endpoint by creating rules that specify "permit" or "deny". It's important to
note that by default, when an endpoint is created, all traffic is permitted to
the endpoint. For that reason, it's important to understand how to create
permit/deny rules and place them in the proper order of precedence if you
want granular control over the network traffic that you choose to allow to
reach the virtual machine endpoint.+
Points to consider:+
1. No ACL - By default when an endpoint is created, we permit all for the endpoint.
2. Permit - When you add one or more "permit" ranges, you are denying all other ranges by default. Only packets from the permitted IP range will be able to communicate with the virtual machine endpoint.
3. Deny - When you add one or more "deny" ranges, you are permitting all other ranges of traffic by default.
4. Combination of Permit and Deny - You can use a combination of "permit" and "deny" when you want to carve out a specific IP range to be permitted or denied.

38.4 Rules and rule precedence


Network ACLs can be set up on specific virtual machine endpoints. For
example, you can specify a network ACL for an RDP endpoint created on a

virtual machine which locks down access for certain IP addresses. The table
below shows a way to grant access to public virtual IPs (VIPs) of a certain
range to permit access for RDP. All other remote IPs are denied. We follow a
lowest takes precedence rule order.+
38.4.1 Multiple rules
In the example below, if you want to allow access to the RDP endpoint only
from two public IPv4 address ranges (65.0.0.0/8, and 159.0.0.0/8), you can
achieve this by specifying two Permit rules. In this case, since RDP is created
by default for a virtual machine, you may want to lock down access to the
RDP port based on a remote subnet. The example below shows a way to
grant access to public virtual IPs (VIPs) of a certain range to permit access for
RDP. All other remote IPs are denied. This works because network ACLs can
be set up for a specific virtual machine endpoint and access is denied by
default.+
Example Multiple rules+

Rule #    Remote Subnet    Endpoint    Permit/Deny
100       65.0.0.0/8       3389        Permit
200       159.0.0.0/8      3389        Permit
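For reference, the two Permit rules in the example above could be applied with the classic (Service Management) Azure PowerShell cmdlets along the following lines. This is a sketch only; the cloud service name, VM name, and endpoint name are placeholders:

# Build an ACL containing the two permit rules from the example
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -RemoteSubnet "65.0.0.0/8" -Order 100 -Description "Permit RDP from 65.0.0.0/8"
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -RemoteSubnet "159.0.0.0/8" -Order 200 -Description "Permit RDP from 159.0.0.0/8"

# Apply the ACL to the RDP endpoint of an existing VM (names are placeholders)
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Set-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 3389 -ACL $acl |
    Update-AzureVM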

38.4.2 Rule order
Because multiple rules can be specified for an endpoint, there must be a way
to organize rules in order to determine which rule takes precedence. The rule
order specifies precedence. Network ACLs follow a lowest takes precedence
rule order. In the example below, the endpoint on port 80 is selectively
granted access to only certain IP address ranges. To configure this, we have
a deny rule (Rule # 100) for addresses in the 175.1.0.1/24 space. A second

rule is then specified with precedence 200 that permits access to all other
addresses under 175.0.0.0/8.+
Example Rule precedence+

Rule #    Remote Subnet    Endpoint    Permit/Deny
100       175.1.0.1/24     80          Deny
200       175.0.0.0/8      80          Permit

38.5 Network ACLs and load balanced sets


Network ACLs can be specified on a Load balanced set (LB Set) endpoint. If
an ACL is specified for a LB Set, the Network ACL is applied to all Virtual
Machines in that LB Set. For example, if a LB Set is created with "Port 80"
and the LB Set contains 3 VMs, the Network ACL created on endpoint "Port
80" of one VM will automatically apply to the other VMs.+

39 Cost estimates

The following table summarizes the current rates in U.S. dollars for these services. The prices listed
here are accurate for the U.S. market as of July 2012. However, for up-to-date pricing information
see the Azure Pricing Details. You can find the pricing for other regions at the same address.
Service | Description | Cost
1. In/Out Bandwidth | This is the web traffic between the user's browser and the aExpense site. | Inbound: Free. Outbound (North America and Europe): $0.12 per GB.
2. Compute | Virtual machines, for the time each one is running. Cloud Services roles, for the time each role is running. | Small size virtual machine: $0.115 per hour. Medium size virtual machine: $0.23 per hour. Small size role: $0.12 per hour. Medium size role: $0.24 per hour.
3. Azure Storage | In aExpense this will be used to store scanned receipt images. Later, it will also store profile data when Adatum removes the requirement for a relational database. | Up to 1 TB with geo-replication: $0.125 per GB. Up to 1 TB without geo-replication: $0.09 per GB.
4. Transactions | Each interaction with the storage system is billed. | $0.01 per 100,000 transactions.
5. Database | SQL Server hosted in a VM, or Azure SQL Database (cost per month). | SQL Server in a small or medium size VM: $0.55 per hour. Azure SQL Database: up to 100 MB: $4.995; up to 1 GB: $9.99; up to 10 GB: first GB $9.99, each additional GB $3.996; up to 50 GB: first 10 GB $45.954, each additional GB $1.998; up to 150 GB: first 50 GB $125.874, each additional GB $0.999.
6. Connectivity | Virtual Networks and Connect | $0.05 per hour per connection.

40 Adding Trace Statements to Application Code
Visual Studio .NET 2003

The methods used most often for tracing are the methods for writing output to listeners: Write,
WriteIf, WriteLine, WriteLineIf, Assert, and Fail. These methods can be divided into two
categories: Write, WriteLine, and Fail all emit output unconditionally, whereas WriteIf,
WriteLineIf, and Assert test a Boolean condition, and write or do not write based on the value of
the condition. WriteIf and WriteLineIf emit output if the condition is true, and Assert emits output
if the condition is false.
When designing your tracing and debugging strategy, you should think about how you want the
output to look. Multiple Write statements filled with unrelated information will create a log that is
difficult to read. On the other hand, using WriteLine to put related statements on separate lines may
make it difficult to distinguish what information belongs together. In general, use multiple Write
statements when you want to combine information from multiple sources to create a single
informative message, and the WriteLine statement when you want to create a single, complete
message.

To write a complete line

Call the WriteLine or WriteLineIf method.

A carriage return is appended to the end of the message this method returns, so that the next
message returned by Write, WriteIf, WriteLine, or WriteLineIf will begin on the following
line:
Copy
' Visual Basic
Dim errorFlag As Boolean = False
Trace.WriteLine("Error in AppendData procedure.")
Trace.WriteLineIf(errorFlag, "Error in AppendData procedure.")

// C#
bool errorFlag = false;
System.Diagnostics.Trace.WriteLine ("Error in AppendData procedure.");
System.Diagnostics.Trace.WriteLineIf(errorFlag,
"Error in AppendData procedure.");

To write a partial line

Call the Write or WriteIf method.

The next message put out by a Write, WriteIf, WriteLine, or WriteLineIf will begin on the
same line as the message put out by the Write or WriteIf statement:
Copy
' Visual Basic
Dim errorFlag As Boolean = False
Trace.WriteIf(errorFlag, "Error in AppendData procedure.")
Debug.WriteIf(errorFlag, "Transaction abandoned.")
Trace.Write("Invalid value for data request")

// C#
bool errorFlag = false;
System.Diagnostics.Trace.WriteIf(errorFlag,
"Error in AppendData procedure.");
System.Diagnostics.Debug.WriteIf(errorFlag, "Transaction abandoned.");
Trace.Write("Invalid value for data request");

To verify that certain conditions exist either before or after you execute a method

Call the Assert method.

Copy
' Visual Basic
Dim I As Integer = 4
Trace.Assert(I = 5, "I is not equal to 5.")

// C#
int I = 4;
System.Diagnostics.Trace.Assert(I == 5, "I is not equal to 5.");

Note You can use Assert with both tracing and debugging. This example
outputs the call stack to any listener in the Listeners collection. For more
information, see Assertions in Managed Code and Debug.Assert Method.

40.1 Controlling Conditional Writes with Switches


You might want to make a decision about writing trace output based on the status of switches. A
BooleanSwitch is a simple on-off switch, and a TraceSwitch is a switch with multiple level settings.
For more information, see Trace Switches.
You can use the WriteIf and WriteLineIf methods to test a particular switch and write a message if
appropriate. To use BooleanSwitch to write tracing information, use its Enabled field in an If
statement or a WriteIf statement. For some examples, see BooleanSwitch.Enabled Property in the
.NET Framework reference. To use TraceSwitch to write tracing information, use TraceSwitch's
Level property. For examples, see TraceSwitch.Level Property in the .NET Framework reference.
The following example uses the Enabled property of a BooleanSwitch called dataSwitch to
determine whether or not to write a line.
Copy
' Visual Basic
Trace.WriteLineIf(dataSwitch.Enabled, "Starting connection procedure")

// C#
System.Diagnostics.Trace.WriteLineIf(dataSwitch.Enabled,
"Starting connection procedure");

A TraceSwitch provides multiple setting levels, and exposes a set of properties that correspond to
these levels. Thus, the Boolean properties TraceError, TraceWarning, TraceInfo, and
TraceVerbose can be tested as part of a WriteIf or WriteLineIf statement. The code in this example
writes the specified information only if your TraceSwitch is set to trace level Error or higher:
Copy
' Visual Basic
Trace.WriteLineIf(myTraceSwitch.TraceError, "Error 42 occurred")

// C#
System.Diagnostics.Trace.WriteLineIf(myTraceSwitch.TraceError,
"Error 42 occurred");

Note When you test a TraceSwitch, the level is considered to be true if it is


equal to or higher than the level you test for.

The preceding example always calls the WriteLineIf method when tracing is enabled. Therefore, the
example must always execute any code necessary to evaluate the second argument for WriteLineIf.
However, you will usually get better performance by testing a BooleanSwitch first and then calling
the general Trace.Write method only if the test succeeds, using this code:
Copy
' Visual Basic
If MyBooleanSwitch.Enabled Then
Trace.WriteLine("Error 42 occured")
End If

// C#
if (MyBooleanSwitch.Enabled)
{
System.Diagnostics.Trace.WriteLine("Error 42 occurred");
}

If you test the Boolean value before calling the tracing method, you avoid executing unnecessary
code, because tracing always evaluates all parameters of WriteLineIf. Note that this technique
would improve performance only if TraceSwitch is off during the application's normal operating
mode. If TraceSwitch is on, the application must evaluate all parameters of WriteLineIf, adding
time to the overall execution of the application.

41 Trace.TraceInformation Method
Writes an informational message to the trace listeners in the Listeners collection.
Namespace: System.Diagnostics
Assembly: System (in System.dll)

41.1 Overload List

Name | Description
TraceInformation(String) | Writes an informational message to the trace listeners in the Listeners collection using the specified message.
TraceInformation(String, Object[]) | Writes an informational message to the trace listeners in the Listeners collection using the specified array of objects and formatting information.

42 Trace.WriteIf Method
Writes information about the trace to the trace listeners in the Listeners collection if a condition is
true.
Namespace: System.Diagnostics
Assembly: System (in System.dll)

42.1 Overload List

Name | Description
WriteIf(Boolean, Object) | Writes the value of the object's ToString method to the trace listeners in the Listeners collection if a condition is true.
WriteIf(Boolean, Object, String) | Writes a category name and the value of the object's ToString method to the trace listeners in the Listeners collection if a condition is true.
WriteIf(Boolean, String) | Writes a message to the trace listeners in the Listeners collection if a condition is true.
WriteIf(Boolean, String, String) | Writes a category name and message to the trace listeners in the Listeners collection if a condition is true.

43 Windows Azure Remote Debugging


Once you turn on remote debugging, Visual Studio packages msvsmon along with the other components needed to communicate with it, and deploys them to your VM. There are two main components, called the connector and the forwarder:
Connector: The connector is a web service that listens for commands such as starting the forwarder component, getting the list of processes, and so on.
Forwarder: The forwarder listens on an endpoint for incoming requests from Visual Studio and forwards them to msvsmon.
You would see the configuration below once you turn on remote debugging. The endpoint changes are added to the configuration file during the build. In your output directory you'll find Cloud.build.csdef/cscfg (or similarly named) files, in which you can see what actually gets deployed. The debugger uses ports 30400-30424 and 31400-31424, and the configuration looks like the following:
<ConfigurationSetting value="xxxxx" name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.CertificateThumbprint" />
<ConfigurationSetting value="true" name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.Connector.Enabled" />
<Endpoint name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.Connector" protocol="tcp" publicPort="30400" port="30398" address="10.78.78.4" />
<Endpoint name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.Forwarder" protocol="tcp" publicPort="31400" port="31398" address="10.78.78.4" />
The RemoteDebugger.Connector uses an instanceInputEndpoint to listen on the specified port for each instance. Once it receives a request from Visual Studio, it establishes a TLS connection between Visual Studio and the remote forwarder. Remote debugging is done over a custom TLS connection, rather than via msvsmon's built-in connection security, because msvsmon was not designed to be used over the internet. The forwarder is the server end of that connection: it listens on the endpoint for incoming requests from Visual Studio and forwards incoming traffic to the msvsmon instance running on the same box. Once the connection is established, the breakpoint in your code is hit in Visual Studio.

44 How to turn on remote debugging?

To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Services publish dialog wizard.

Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox.

Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code.

Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it. You can also attach to multiple instances if available.

Once the debugger attaches to the Cloud Service and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real time, and see exactly how your app is running in the cloud.

45 Limitations

Instances: Publish will fail if the role has more than 25 instances.
Traffic: The debugger communicates with Visual Studio, and Azure charges for outbound data. However, the amount of data transferred is small and shouldn't be a significant cost.
Native debugging: The CTP tooling does not enable native debugging.
Ports: The debugger uses ports 30400-30424 and 31400-31424. If you use ports that conflict with the debugger ports, you'll see the following message: "Allocation failed. Please retry later, try reducing the VM size or number of role instances, or try deploying to a different region."
VS restart after full deployment: If you do a full deployment and the VIP changes, you need to restart Visual Studio to attach the debugger.

46 Migrate Azure Virtual Machines between Storage Accounts

One common task on Azure is to migrate a Virtual Machine from one storage account to another. Before we dive into these steps, it's helpful to briefly review how Azure Virtual Machines are set up. When you create an Azure Virtual Machine, there are two services that work in tandem to create this machine: Compute and Storage. On the Storage side, a VHD is created in one of your storage accounts within the Azure Storage Service. The physical node that this VHD is stored on is located in the region you specified to place your Virtual Machine. On the compute side, we find a physical node in a second cluster to place your virtual machine. When the VM starts in that cluster, it establishes a connection with the Storage Service and boots from the VHD. When creating a Virtual Machine, we require that the VHD be located in a storage account in the same region where you are creating the VM. This is to ensure there is performance consistency when communicating between the Virtual Machine and the storage account.

With this context in mind, let's walk through the steps to migrate the virtual machine from one region to another:
1. Stop the Virtual Machine.
2. Copy the VHD blob from a storage account in the source region to a storage account in the destination region.
3. Create an Azure Disk from the blob.
4. Boot the Virtual Machine from the Disk.

46.1 Stop the Virtual Machine

Go to the Service Management Portal, select the Virtual Machine that you'd like to migrate, and select Shut Down from the control menu.

Alternatively, you can use the Azure PowerShell cmdlets to accomplish the same task:
$servicename = "KenazTestService"
$vmname = "TestVM1"
Get-AzureVM -ServiceName $servicename -Name $vmname | Stop-AzureVM

Stopping the VM is a required step so that the file system is consistent when you do the copy operation. Azure does not support live migration at this time. This operation implies that you are migrating a specialized VM from one region to another. If you'd like to create a VM from a generalized image, sysprep the Virtual Machine before stopping it.

46.2 Copy the VHD blob

The Azure Storage Service exposes the ability to move a blob from one storage account to another. To do this, we have to perform the following steps:
1. Determine the source storage account information.
2. Determine the destination storage account information.
3. Ensure that the destination container exists in the destination storage account.
4. Perform the blob copy.
NOTE: Copying blobs between storage accounts in different regions can take up to one hour or more depending on the size of the blob. The easiest way to do this is through Azure PowerShell:
Select-AzureSubscription "kenazsubscription"

# VHD blob to copy #
$blobName = "KenazTestService-TestVM1-2014-8-26-15-1-55-658-0.vhd"

# Source Storage Account Information #
$sourceStorageAccountName = "kenazsa"
$sourceKey = "MySourceStorageAccountKey"
$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceKey
$sourceContainer = "vhds"

# Destination Storage Account Information #
$destinationStorageAccountName = "kenazdestinationsa"
$destinationKey = "MyDestinationStorageAccountKey"
$destinationContext = New-AzureStorageContext -StorageAccountName $destinationStorageAccountName -StorageAccountKey $destinationKey

# Create the destination container #
$destinationContainerName = "destinationvhds"
New-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext

# Copy the blob #
$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName `
                                       -DestContext $destinationContext `
                                       -SrcBlob $blobName `
                                       -Context $sourceContext `
                                       -SrcContainer $sourceContainer

This will initiate the blob copy from your source storage account to your destination storage account. At this point, you'll probably have to wait a while for the blob to be fully copied. In order to check the status of the operation, you can try the following commands.
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
Start-Sleep -s 30
$blobCopy | Get-AzureStorageBlobCopyState
}

Once the blob is finished copying, the status of the blob copy will be
Success. For a more comprehensive copy VHD example, see "Azure Virtual
Machine: Copy VHDs Between Storage Accounts."

46.3 Blob copy using AzCopy

Another option is to use the AzCopy utility (download here). Here is the
equivalent blob copy between storage accounts:
AzCopy https://sourceaccount.blob.core.windows.net/mycontainer1
https://destaccount.blob.core.windows.net/mycontainer2 /sourcekey:key1 /destkey:key2
abc.txt

For more details on how to use AzCopy for different scenarios, check out
Getting Started with the AzCopy Command-Line Utility.

46.4 Create an Azure Disk

At this point, the blob that you've copied into your destination storage account is still just a blob. In order to boot from it, you have to create an Azure Disk from this blob. Navigate to the Disks section of Virtual Machines and select Create.
NOTE: These instructions are specific to specialized VMs. If you want to use the VHD as an image, you will need to restart the VM, sysprep it, copy the blob over, and then add it as an Image (not a Disk).

Use the VHD URL explorer to select the blob from the destination container that we copied the blob to. Select the toggle that says "The VHD contains an operating system." This indicates to Azure that the disk object you're creating is meant to be used as the OS disk rather than one of the data disks.

NOTE: If you get an error that states "A lease conflict occurred with the blob", go back to the previous step to validate that the blob has finished copying.

Alternatively, you can use the PowerShell cmdlets to perform the same operation:
Add-AzureDisk -DiskName "myMigratedTestVM" `
    -OS Linux `
    -MediaLocation "https://kenazdestinationsa.blob.core.windows.net/destinationvhds/KenazTestService-TestVM1-2014-8-26-16-16-48-522-0.vhd" `
    -Verbose

Once complete, the Disk should show up under the Disks section of Virtual Machines.

46.5 Create the Virtual Machine

At this point, you can create the Virtual Machine using the disk object you just created. From the Service Management Portal, select Create Virtual Machine from Gallery and select the Disk that you created under My Disks. NOTE: If you are moving a VM that has a storage pool configured (or want the drive letter ordering to remain the same), make a note of the LUN number to VHD mapping on the source VM, and make sure the data disks are attached to the same LUNs on the destination VM.

The Virtual Machine is now running in the destination storage account.
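If you prefer to script this last step instead of using the portal, a rough PowerShell equivalent with the classic cmdlets is shown below; the cloud service name, VM size, and location are placeholders, and the disk name matches the one created with Add-AzureDisk above:

# Build a VM configuration that boots from the existing OS disk
# (no provisioning step, because the disk already contains a specialized OS)
$vmConfig = New-AzureVMConfig -Name "myMigratedTestVM" `
                              -InstanceSize Small `
                              -DiskName "myMigratedTestVM"

# Create the cloud service and the VM in the destination region
New-AzureVM -ServiceName "MyMigratedService" -Location "West US" -VMs $vmConfig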

47 Blob Service REST API


The Blob service stores text and binary data as blobs in the cloud. The Blob
service offers the following three resources: the storage account, containers,
and blobs. Within your storage account, containers provide a way to organize
sets of blobs. +
You can store text and binary data in one of the following types of blobs: +

Block blobs, which are optimized for streaming.

Append blobs, which are optimized for append operations.

Page blobs, which are optimized for random read/write operations and
which provide the ability to write to a range of bytes in a blob.
For more information about block blobs and page blobs, see Understanding
Block Blobs, Append Blobs, and Page Blobs.
The REST API for the Blob service defines HTTP operations against container
and blob resources. The API includes the operations listed in the following
table.

+
Operation | Resource Type | Description
List Containers | Account | Lists all of the containers in a storage account.
Set Blob Service Properties | Account | Sets the properties of the Blob service, including logging and metrics settings, and the default service version.
Get Blob Service Properties | Account | Gets the properties of the Blob service, including logging and metrics settings, and the default service version.
Preflight Blob Request | Account | Queries the Cross-Origin Resource Sharing (CORS) rules for the Blob service prior to sending the actual request.
Get Blob Service Stats | Account | Retrieves statistics related to replication for the Blob service. This operation is only available on the secondary location endpoint when read-access geo-redundant replication is enabled for the storage account.
Create Container | Container | Creates a new container in a storage account.
Get Container Properties | Container | Returns all user-defined metadata and system properties of a container.
Get Container Metadata | Container | Returns only user-defined metadata of a container.
Set Container Metadata | Container | Sets user-defined metadata of a container.
Get Container ACL | Container | Gets the public access policy and any stored access policies for the container.
Set Container ACL | Container | Sets the public access policy and any stored access policies for the container.
Lease Container | Container | Establishes and manages a lock on a container for delete operations.
Delete Container | Container | Deletes the container and any blobs that it contains.
List Blobs | Container | Lists all of the blobs in a container.
Put Blob | Block, append, and page blobs | Creates a new blob or replaces an existing blob within a container.
Get Blob | Block, append, and page blobs | Reads or downloads a blob from the Blob service, including its user-defined metadata and system properties.
Get Blob Properties | Block, append, and page blobs | Returns all system properties and user-defined metadata on the blob.
Set Blob Properties | Block, append, and page blobs | Sets system properties defined for an existing blob.
Get Blob Metadata | Block, append, and page blobs | Retrieves all user-defined metadata of an existing blob or snapshot.
Set Blob Metadata | Block, append, and page blobs | Sets user-defined metadata of an existing blob.
Delete Blob | Block, append, and page blobs | Marks a blob for deletion.
Lease Blob | Block, append, and page blobs | Establishes and manages a lock on write and delete operations. To delete or write to a locked blob, a client must provide the lease ID.
Snapshot Blob | Block, append, and page blobs | Creates a read-only snapshot of a blob.
Copy Blob | Block, append, and page blobs | Copies a source blob to a destination blob in this storage account or in another storage account.
Abort Copy Blob | Block, append, and page blobs | Aborts a pending Copy Blob operation, and leaves a destination blob with zero length and full metadata.
Put Block | Block blobs only | Creates a new block to be committed as part of a block blob.
Put Block List | Block blobs only | Commits a blob by specifying the set of block IDs that comprise the block blob.
Get Block List | Block blobs only | Retrieves the list of blocks that have been uploaded as part of a block blob.
Put Page | Page blobs only | Writes a range of pages into a page blob.
Get Page Ranges | Page blobs only | Returns a list of valid page ranges for a page blob or a snapshot of a page blob.
Incremental Copy Blob | Page blobs only | Copies a snapshot of a source page blob to a destination page blob. Only differential changes are transferred.
Append Block | Append blobs only | Writes a block of data to the end of an append blob.
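To give a feel for how these operations are called, the sketch below invokes the List Containers operation directly over REST from PowerShell. It assumes you already have an account-level shared access signature (SAS) that permits listing containers; the account name and SAS token are placeholders:

# List Containers: GET https://<account>.blob.core.windows.net/?comp=list
$accountName = "mystorageaccount"                         # placeholder
$sasToken    = "sv=2015-12-11&ss=b&srt=sco&sp=l&sig=..."  # placeholder account SAS

$uri = "https://$($accountName).blob.core.windows.net/?comp=list&$($sasToken)"

# The response body is an XML document describing the containers in the account
(Invoke-WebRequest -Uri $uri -Method Get -UseBasicParsing).Content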

48 Creating A Custom Domain Name For Azure Web Sites

Microsoft Azure Web Sites provide a robust and easy-to-use container for hosting your Web applications. This doesn't just pertain to ASP.NET apps, but also to several templates like Drupal, WordPress, Orchard and so on. It also provides first-class support for Node.js Web apps/APIs, PHP and Python. Here, we will discuss ways of creating a custom domain name for Azure Web Sites.
When you create your Azure application, you get both an IP address and a URL. The URL takes the form of [your app].azurewebsites.net. Chances are you'll want your own domain name so that instead of [your app].azurewebsites.net you can point to a specific address. You can do this in four simple steps.
Step 1: Ensure your site is configured for shared or standard mode. The free version doesn't support custom domains, which seems reasonable. If you started with a Web site in free mode, simply click on the Scale option, choose Shared or Standard mode, and click OK.
Step 2: Copy the IP and Azure URL. The next step is to make note of your URL and IP address. You'll need this for the third step in this process. Go to the list of Azure sites and select the site (but don't click on it). Click on the Manage Domains icon at the bottom of the command bar. This will bring up a dialog that includes your current domain record ([your app].azurewebsites.net) and your IP.
Step 3: Update the A record and CNAMEs. Make a note of each and log in to your domain registrar's console. You want to look for DNS Management and either Advanced or Manage Zones or Manage DNS Zone File. You want to get to whichever console lets you configure your A record and CNAMEs. These records let requests to your registered domain name be forwarded to Windows Azure, specifically your Web site's host name.

The result is that your Web site will resolve to both [your app].azurewebsites.net and whatever domain you purchased. The A record needs to point to the IP address you captured in step two. Replace whatever value is there with the IP address provided. When someone calls up your site, your registrar will authoritatively answer that request and pass it on directly to the IP address you provided. For the CNAMEs, there are three entries you need to make:

Point www to [your app].azurewebsites.net. This tells DNS that [your app].azurewebsites.net should be the destination (canonical host) for any DNS queries that begin with www (like your site).

Point awverify AND awverify.www to awverify.[your app].azurewebsites.net. This provides a DNS validation mechanism so WAWS can validate that your domain registrar has been configured to allow WAWS to serve as a canonical domain in the event that a CNAME lookup fails. Be sure to save your file/settings.

Step 4: Enter your custom domain name in the Manage Domains dialog and check for validity. Pull up the Domain Settings for your Web site again. This time, enter your new domain name. If you want Azure to respond to both www.(yoursite).com and (yoursite).com, you'll want to create both entries. You'll likely see a red dot indicating that validation and/or CNAME lookup has failed.
This is simply Azure's way of telling you the records have not yet propagated. You can happily continue using your Azure Web site via the [your app].azurewebsites.net URL. When you come back to the dialog, the verification should succeed and any request for (yoursite).com should automatically resolve to your Azure app.
Interested in more tips and tricks pertaining to the evolution of .NET development, or creating a custom domain name for Azure Web Sites? Join me for my next presentations at VS LIVE!, Redmond, Washington.

49 Create and upload a Windows Server VHD to Azure

This article shows you how to upload your own generalized VM image as a virtual hard disk (VHD) so you can use it to create virtual machines. For more details about disks and VHDs in Microsoft Azure, see About Disks and VHDs for Virtual Machines.+

49.1.1.1.1 Important

Azure has two different deployment models for creating and working with
resources: Resource Manager and Classic. This article covers using the
Classic deployment model. Microsoft recommends that most new
deployments use the Resource Manager model. You can also upload a virtual
machine using the Resource Manager model.+

49.2 Prerequisites
This article assumes you have:+

An Azure subscription - If you don't have one, you can open an Azure account for free.
Microsoft Azure PowerShell - You have the Microsoft Azure PowerShell module installed and configured to use your subscription.
A .VHD file - A supported Windows operating system stored in a .vhd file and attached to a virtual machine. Check to see if the server roles running on the VHD are supported by Sysprep. For more information, see Sysprep Support for Server Roles.
49.2.1.1.1 Important

The VHDX format is not supported in Microsoft Azure. You can convert the disk to VHD format using Hyper-V Manager or the Convert-VHD cmdlet. For details, see this blog post.
+

49.3 Step 1: Prep the VHD

Before you upload the VHD to Azure, it needs to be generalized by using the Sysprep tool. This prepares the VHD to be used as an image. For details about Sysprep, see How to Use Sysprep: An Introduction. Back up the VM before running Sysprep.+
From the virtual machine that the operating system was installed to, complete the following procedure:+
1. Sign in to the operating system.
2. Open a command prompt window as an administrator. Change the directory to %windir%\system32\sysprep, and then run sysprep.exe.
3. The System Preparation Tool dialog box appears.
4. In the System Preparation Tool, select Enter System Out of Box Experience (OOBE) and make sure that Generalize is checked.
5. In Shutdown Options, select Shutdown.
6. Click OK.
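If you prefer to script the generalization rather than use the dialog, the following is a sketch of the equivalent command line, run from an elevated PowerShell prompt inside the VM:

# Equivalent to selecting OOBE + Generalize + Shutdown in the System Preparation Tool
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown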

49.4 Step 2: Create a storage account and a container

You need a storage account in Azure so you have a place to upload the .vhd file. This step shows you how to create an account, or get the info you need from an existing account. Replace the variables in brackets with your own information.+
1. Log in.
Copy
PowerShell
Add-AzureAccount

2. Set your Azure subscription.
Copy
PowerShell
Select-AzureSubscription -SubscriptionName <SubscriptionName>

3. Create a new storage account. The name of the storage account should be unique and 3-24 characters long. The name can be any combination of letters and numbers. You also need to specify a location, like "East US".
Copy
PowerShell
New-AzureStorageAccount -StorageAccountName <StorageAccountName> -Location <Location>

4. Set the new storage account as the default.
Copy
PowerShell
Set-AzureSubscription -CurrentStorageAccountName <StorageAccountName> -SubscriptionName <SubscriptionName>

5. Create a new container.
Copy
PowerShell
New-AzureStorageContainer -Name <ContainerName> -Permission Off

49.5 Step 3: Upload the .vhd file

Use the Add-AzureVhd cmdlet to upload the VHD.+
From the Azure PowerShell window you used in the previous step, type the following command and replace the variables in brackets with your own information.+

Copy
PowerShell
Add-AzureVhd -Destination "https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<vhdName>.vhd" -LocalFilePath <LocalPathtoVHDFile>

49.6 Step 4: Add the image to your list of custom images

Use the Add-AzureVMImage cmdlet to add the image to the list of your custom images.+
Copy
PowerShell
Add-AzureVMImage -ImageName <ImageName> -MediaLocation "https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<vhdName>.vhd" -OS "Windows"

49.7 Next steps


You can now create a custom VM using the image you uploaded.
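For example, a minimal sketch with the classic PowerShell cmdlets looks like the following; the service name, VM name, credentials, size, and location are placeholders, and "<ImageName>" is the name you used with Add-AzureVMImage above:

# Create a VM from the uploaded image; provisioning runs because the image was generalized
New-AzureQuickVM -Windows `
                 -ServiceName "MyNewCloudService" `
                 -Name "MyNewVM" `
                 -ImageName "<ImageName>" `
                 -InstanceSize Small `
                 -AdminUsername "azureadmin" `
                 -Password "<StrongPassword>" `
                 -Location "East US"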

50 Secure your app's custom domain with HTTPS
What you need
Step 1. Get an SSL certificate
Step 2. Upload and bind the custom SSL certificate
Step 3. Change your domain name mapping (IP based SSL only)
Step 4. Test HTTPS for your custom domain
Enforce HTTPS on your app

This article shows you how to enable HTTPS for a web app, a mobile app backend,
or an API app in Azure App Service that uses a custom domain name. It covers
server-only authentication. If you need mutual authentication (including client
authentication), see How To Configure TLS Mutual Authentication for App Service.+

To secure with HTTPS an app that has a custom domain name, you add a certificate
for that domain name. By default, Azure secures the *.azurewebsites.net wildcard
domain with a single SSL certificate, so your clients can already access your app at
https://<appname>.azurewebsites.net. But if you want to use a custom domain, like
contoso.com, www.contoso.com, and *.contoso.com, the default certificate can't
secure that. Furthermore, like all wildcard certificates, the default certificate is not
as secure as using a custom domain and a certificate for that custom domain. +
What you need
To secure your custom domain name with HTTPS, you bind a custom SSL certificate
to that custom domain in Azure. Before binding a custom certificate, you need to do
the following:+
Configure the custom domain - App Service only allows adding a certificate for a
domain name that's already configured in your app. For instructions, see Map a
custom domain name to an Azure app.
Scale up to Basic tier or higher - App Service plans in lower pricing tiers don't support custom SSL certificates. For instructions, see Scale up an app in Azure.
Get an SSL certificate - If you do not already have one, you need to get one from a
trusted certificate authority (CA). The certificate must meet all the following
requirements:
It is signed by a trusted CA (no private CA servers).
It contains a private key.
It is created for key exchange, and exported to a .PFX file.
It uses a minimum of 2048-bit encryption.
Its subject name matches the custom domain it needs to secure. To secure multiple
domains with one certificate, you need to use a wildcard name (e.g. *.contoso.com)
or specify subjectAltName values.
It is merged with all intermediate certificates used by your CA. Otherwise, you may
run into irreproducible interoperability problems on some clients.
Note
The easiest way to get an SSL certificate that meets all the requirements is to buy
one in the Azure portal directly. This article shows you how to do it manually and
then bind it to your custom domain in App Service.
Elliptic Curve Cryptography (ECC) certificates can work with App Service, but are outside the scope of this article. Work with your CA on the exact steps to create ECC certificates.
+
+

Step 1. Get an SSL certificate


Because CAs provide the various SSL certificate types at different price points, you
should start by deciding what type of SSL certificate to buy. To secure a single
domain name (www.contoso.com), you just need a basic certificate. To secure
multiple domain names (contoso.com and www.contoso.com and mail.contoso.com),
you need either a wildcard certificate or a certificate with Subject Alternate Name
(subjectAltName).+
Once you know which SSL certificate to buy, you submit a Certificate Signing Request (CSR) to a CA. When you get the requested certificate back from the CA, you then generate a .pfx file from the certificate. You can perform these steps using the tool of your choice. Here are instructions for the common tools:+
Certreq.exe steps - the Windows utility for creating certificate requests. It has been
part of Windows since Windows XP/Windows Server 2000.
IIS Manager steps - The tool of choice if you're already familiar with it.
OpenSSL steps - an open-source, cross-platform tool. Use it to help you get an SSL
certificate from any platform.
subjectAltName steps using OpenSSL - steps for getting subjectAltName certificates.
+
If you want to test the setup in App Service before buying a certificate, you can
generate a self-signed certificate. This tutorial gives you two ways to generate it:+
Self-signed certificate, Certreq.exe steps
Self-signed certificate, OpenSSL steps
+
+
Get a certificate using Certreq.exe
Create a file (e.g. myrequest.txt), and copy into it the following text, and save it in a
working directory. Replace the <your-domain> placeholder with the custom domain
name of your app.
Copy

[NewRequest]
Subject = "CN=<your-domain>"  ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048              ; Required minimum is 2048
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1         ; Server Authentication

For more information on the options in the CSR, and other available options, see the
Certreq reference documentation.
In a command prompt, CD into your working directory and run the following
command to create the CSR:
Copy

certreq -new myrequest.txt myrequest.csr


myrequest.csr is now created in your current working directory.
Submit myrequest.csr to a CA to obtain an SSL certificate. You either upload the file,
or copy its content from a text editor into a web form.
For a list of CAs trusted by Microsoft, see Microsoft Trusted Root Certificate Program:
Participants.
Once the CA has responded to you with a certificate (.CER) file, save it in your
working directory. Then, run the following command to complete the pending CSR.
Copy

certreq -accept -user <certificate-name>.cer


This command stores the finished certificate in the Windows certificate store.
If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.

To export your SSL certificate from the certificate store, press Win+R and run
certmgr.msc to launch Certificate Manager. Select Personal > Certificates. In the
Issued To column, you should see an entry with your custom domain name, and the
CA you used to generate the certificate in the Issued By column.

Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.

Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.

Select Password, and then enter and confirm the password. Click Next.

Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.

+
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
+
Get a certificate using the IIS Manager
Generate a CSR with IIS Manager to send to the CA. For more information on
generating a CSR, see Request an Internet Server Certificate (IIS 7).
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Complete the CSR with the certificate that the CA sends back to you. For more
information on completing the CSR, see Install an Internet Server Certificate (IIS 7).

If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.
Export the SSL certificate from IIS Manager. For more information on exporting the
certificate, see Export a Server Certificate (IIS 7).
Important
In the Certificate Export Wizard, make sure that you select Yes, export the private
key

and also select Personal Information Exchange - PKCS #12, Include all certificates in
the certificate path if possible, and Export all extended properties.

+
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
+
Get a certificate using OpenSSL
In a command-line terminal, CD into a working directory, and generate a private key and CSR by running the following command:
Copy

openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey
rsa:2048
When prompted, enter the appropriate information. For example:

Copy

Country Name (2 letter code)


State or Province Name (full name) []: Washington
Locality Name (eg, city) []: Redmond
Organization Name (eg, company) []: Microsoft
Organizational Unit Name (eg, section) []: Azure
Common Name (eg, YOUR name) []: www.microsoft.com
Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request

A challenge password []:


When finished, you should have two files in your working directory: myserver.key
and server.csr. The server.csr contains the CSR, and you need myserver.key later.
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Once the CA sends you the requested certificate, save it to a file named
myserver.crt in your working directory. If your CA provides it in a text format, simply
copy the content into myserver.crt in a text editor and save it. Your file should look
like the following:
Copy

-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----
In the command-line terminal, run the following command to export myserver.pfx from myserver.key and myserver.crt:
Copy

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt


When prompted, define a password to secure the .pfx file.
Note
If your CA uses intermediate certificates, you must include them with the -certfile
parameter. They usually come as a separate download from your CA, and in several
formats for different web server types. Select the version with the .pem extension.
Your openssl -export command should look like the following example, which creates
a .pfx file that includes the intermediate certificates from the intermediate-cets.pem
file:
openssl pkcs12 -chain -export -out myserver.pfx -inkey myserver.key -in
myserver.crt -certfile intermediate-cets.pem
+

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
+
Get a SubjectAltName certificate using OpenSSL
Create a file named sancert.cnf, copy the following text into it, and save it in a
working directory:
Copy

# -------------- BEGIN custom sancert.cnf -----
HOME = .
oid_section = new_oids
[ new_oids ]
[ req ]
default_days = 730
distinguished_name = req_distinguished_name
encrypt_key = no
string_mask = nombstr
req_extensions = v3_req # Extensions to add to certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default =
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default =
localityName = Locality Name (eg, city)
localityName_default =
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default =
commonName = Your common name (eg, domain name)
commonName_default = www.mydomain.com
commonName_max = 64
[ v3_req ]
subjectAltName=DNS:ftp.mydomain.com,DNS:blog.mydomain.com,DNS:*.mydomain.com
# -------------- END custom sancert.cnf -----
In the line that begins with subjectAltName, replace the value with all domain names you want to secure (in addition to commonName). For example:
Copy

subjectAltName=DNS:sales.contoso.com,DNS:support.contoso.com,DNS:fabrikam.com
You do not need to change any other field, including commonName. You will be
prompted to specify them in the next few steps.
In a command-line terminal, CD into your working directory and run the following
command:
Copy

openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey
rsa:2048 -config sancert.cnf
When prompted, enter the appropriate information. For example:
Copy

Country Name (2 letter code) []: US


State or Province Name (full name) []: Washington
Locality Name (eg, city) []: Redmond
Organizational Unit Name (eg, section) []: Azure
Your common name (eg, domain name) []: www.microsoft.com
Once finished, you should have two files in your working directory: myserver.key
and server.csr. The server.csr contains the CSR, and you need myserver.key later.
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.

Once the CA sends you the requested certificate, save it to a file named
myserver.crt. If your CA provides it in a text format, simply copy the content into
myserver.crt in a text editor and save it. The file should look like the following:
Copy

-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----

In the command-line terminal, run the following command to export myserver.pfx from myserver.key and myserver.crt:

Copy

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt


When prompted, define a password to secure the .pfx file.
Note
If your CA uses intermediate certificates, you must include them with the -certfile
parameter. They usually come as a separate download from your CA, and in several
formats for different web server types. Select the version with the .pem extension.
Your openssl -export command should look like the following example, which creates
a .pfx file that includes the intermediate certificates from the intermediate-cets.pem
file:
openssl pkcs12 -chain -export -out myserver.pfx -inkey myserver.key -in
myserver.crt -certfile intermediate-cets.pem
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Generate a self-signed certificate using Certreq.exe
Important
Self-signed certificates are for test purposes only. Most browsers return errors when
visiting a website that's secured by a self-signed certificate. Some browsers may
even refuse to navigate to the site. +
Create a text file (e.g. mycert.txt), copy into it the following text, and save the file in
a working directory. Replace the <your-domain> placeholder with the custom
domain name of your app.
Copy

[NewRequest]
Subject = "CN=<your-domain>"   ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048               ; KeyLength can be 2048, 4096, 8192, or 16384 (required minimum is 2048)
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256
RequestType = Cert             ; Self-signed certificate
ValidityPeriod = Years
ValidityPeriodUnits = 1

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1          ; Server Authentication

The important parameter is RequestType = Cert, which specifies a self-signed


certificate. For more information on the options in the CSR, and other available
options, see the Certreq reference documentation.
In the command prompt, CD to your working directory and run the following
command:
Copy

certreq -new mycert.txt mycert.crt


Your new self-signed certificate is now installed in the certificate store.
To export the certificate from the certificate store, press Win+R and run
certmgr.msc to launch Certificate Manager. Select Personal > Certificates. In the
Issued To column, you should see an entry with your custom domain name, and the
CA you used to generate the certificate in the Issued By column.

Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.

Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.

Select Password, and then enter and confirm the password. Click Next.

Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.

You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Generate a self-signed certificate using OpenSSL
Important
Self-signed certificates are for test purposes only. Most browsers return errors when
visiting a website that's secured by a self-signed certificate. Some browsers may
even refuse to navigate to the site. +
Create a text file named serverauth.cnf, then copy the following content into it, and
then save it in a working directory:
Copy

[ req ]
default_bits           = 2048
default_keyfile        = privkey.pem
distinguished_name     = req_distinguished_name
attributes             = req_attributes
x509_extensions        = v3_ca

[ req_distinguished_name ]
countryName            = Country Name (2 letter code)
countryName_min        = 2
countryName_max        = 2
stateOrProvinceName    = State or Province Name (full name)
localityName           = Locality Name (eg, city)
0.organizationName     = Organization Name (eg, company)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName             = Common Name (eg, your app's domain name)
commonName_max         = 64
emailAddress           = Email Address
emailAddress_max       = 40

[ req_attributes ]
challengePassword      = A challenge password
challengePassword_min  = 4
challengePassword_max  = 20

[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:false
keyUsage=nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
In a command-line terminal, CD into your working directory and run the following
command:
Copy

openssl req -sha256 -x509 -nodes -days 365 -newkey rsa:2048 -keyout
myserver.key -out myserver.crt -config serverauth.cnf
This command creates two files: myserver.crt (the self-signed certificate) and
myserver.key (the private key), based on the settings in serverauth.cnf.
Export the certificate to a .pfx file by running the following command:
Copy

openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt


When prompted, define a password to secure the .pfx file.
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Step 2. Upload and bind the custom SSL certificate
Before you move on, review the What you need section and verify that:+
you have a custom domain that maps to your Azure app,
your app is running in Basic tier or higher, and
you have an SSL certificate for the custom domain from a CA.
In your browser, open the Azure Portal.
Click the App Service option on the left side of the page.
Click the name of your app to which you want to assign this certificate.
In Settings, click SSL certificates.
Click Upload Certificate.

Select the .pfx file that you exported in Step 1 and specify the password that you created earlier. Then, click Upload to upload the certificate. You should now see your uploaded certificate back in the SSL certificates blade.
In the SSL bindings section, click Add binding.
In the Add SSL Binding blade, use the dropdowns to select the domain name to secure with SSL and the certificate to use. You may also select whether to use Server Name Indication (SNI) or IP based SSL.

IP based SSL associates a certificate with a domain name by mapping the dedicated public IP address of the server to the domain name. This requires each domain name (contoso.com, fabrikam.com, etc.) associated with your service to have a dedicated IP address. This is the traditional method of associating SSL certificates with a web server.

SNI based SSL is an extension to SSL and Transport Layer Security (TLS) (http://en.wikipedia.org/wiki/Transport_Layer_Security) that allows multiple domains to share the same IP address, with separate security certificates for each domain. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI; however, older browsers may not. For more information on SNI, see the Server Name Indication article on Wikipedia (http://en.wikipedia.org/wiki/Server_Name_Indication).
Click Add Binding to save the changes and enable SSL.
Step 3. Change your domain name mapping (IP based SSL only)
If you use SNI SSL bindings only, skip this section. Multiple SNI SSL bindings can
work together on the existing shared IP address assigned to your app. However, if
you create an IP based SSL binding, App Service creates a dedicated IP address for
the binding because the IP based SSL requires one. Only one dedicated IP address
can be created, therefore only one IP based SSL binding may be added.+
Because of this dedicated IP address, you will need to configure your app further if:
You used an A record to map your custom domain to your Azure app, and you just
added an IP based SSL binding. In this scenario, you need to remap the existing A
record to point to the dedicated IP address by following these steps:
After you have configured an IP based SSL binding, a dedicated IP address is assigned to your app. You can find this IP address on the Custom domain page under Settings for your app, right above the Hostnames section. It will be listed as External IP Address.

Remap the A record for your custom domain name to this new IP address.
You already have one or more SNI SSL bindings in your app, and you just added an
IP based SSL binding. Once the binding is complete, your
<appname>.azurewebsites.net domain name points to the new IP address.
Therefore, any existing CNAME mapping from the custom domain to
<appname>.azurewebsites.net, including the ones that the SNI SSL secure, also

receives traffic on the new address, which is created for the IP based SSL only. In
this scenario, you need to send the SNI SSL traffic back to the original shared IP
address by following these steps:
Identify all CNAME mappings of custom domains to your app that have an SNI SSL binding.
Remap each CNAME record to sni.<appname>.azurewebsites.net instead of <appname>.azurewebsites.net.
Step 4. Test HTTPS for your custom domain
All that's left to do now is to make sure that HTTPS works for your custom domain.
In various browsers, browse to https://<your.custom.domain> to see that it serves
up your app.+
If your app gives you certificate validation errors, you're probably using a self-signed
certificate.
If that's not the case, you may have left out intermediate certificates when you exported your .pfx certificate. Go back to What you need to verify that your CSR meets all the requirements of App Service.
Enforce HTTPS on your app
If you still want to allow HTTP access to your app, skip this step. App Service does
not enforce HTTPS, so visitors can still access your app using HTTP. If you want to
enforce HTTPS for your app, you can define a rewrite rule in the web.config file for
your app. Every App Service app has this file, regardless of the language framework
of your app.+
Note
Some frameworks provide their own way to redirect requests. For example, ASP.NET MVC can use the RequireHttps filter instead of the rewrite rule in web.config (see Deploy a secure ASP.NET MVC 5 app to a web app), as sketched below.
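As a minimal sketch of that approach, assuming a conventional MVC 5 project layout (the FilterConfig class and its registration in Global.asax come from the standard project template, not from App Service itself):

Copy
C#
using System.Web.Mvc;

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Redirects any request made over HTTP to its HTTPS equivalent
        // before a controller action runs.
        filters.Add(new RequireHttpsAttribute());
    }
}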
Follow these steps:+
Navigate to the Kudu debug console for your app. Its address is
https://<appname>.scm.azurewebsites.net/DebugConsole.
In the debug console, CD to D:\home\site\wwwroot.
Open web.config by clicking the pencil button.

If you deploy your app with Visual Studio or Git, App Service automatically generates the appropriate web.config for your .NET, PHP, Node.js, or Python app in the application root. If web.config doesn't exist, run touch web.config in the web-based command prompt to create it. Or, you can create it in your local project and redeploy your code.
If you had to create a web.config, copy the following code into it and save it. If you
opened an existing web.config, then you just need to copy the entire <rule> tag
into your web.config's configuration/system.webServer/rewrite/rules element.
Copy

<?xml version="1.0" encoding="UTF-8"?>


<configuration>
<system.webServer>
<rewrite>
<rules>
<!-- BEGIN rule TAG FOR HTTPS REDIRECT -->
<rule name="Force HTTPS" enabled="true">
<match url="(.*)" ignoreCase="false" />
<conditions>
<add input="{HTTPS}" pattern="off" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
appendQueryString="true" redirectType="Permanent" />
</rule>
<!-- END rule TAG FOR HTTPS REDIRECT -->

</rules>
</rewrite>
</system.webServer>
</configuration>
This rule returns an HTTP 301 (permanent redirect) to the HTTPS protocol whenever
the user requests a page using HTTP. It redirects from http://contoso.com to
https://contoso.com.
Important
If there are already other <rule> tags in your web.config, then place the copied
<rule> tag before the other <rule> tags.
Save the file in the Kudu debug console. It should take effect immediately and redirect all requests to HTTPS.
For more information on the IIS URL Rewrite module, see the URL Rewrite
documentation.

51 Add a Site-to-Site connection to a VNet


with an existing VPN gateway connection
This article walks you through using PowerShell to add Site-to-Site (S2S)
connections to a VPN gateway that has an existing connection. This type of
connection is often referred to as a "multi-site" configuration. +
This article applies to virtual networks created using the classic deployment
model (also known as Service Management). These steps do not apply to
ExpressRoute/Site-to-Site coexisting connection configurations. See
ExpressRoute/S2S coexisting connections for information about coexisting
connections.+

51.1.1
Deployment models and methods
It's important to know that Azure currently works with two deployment
models: Resource Manager and classic. Before you begin your configuration,
make sure that you understand the deployment models and tools. You'll need
to know which model that you want to work in. Not all networking features
are supported yet for both models. For information about the deployment
models, see Understanding Resource Manager deployment and classic
deployment.+
We update this table as new articles and additional tools become available
for this configuration. When an article is available, we link directly to it from
this table.+

Deployment Model/Method    Azure Portal     Classic Portal    PowerShell
Resource Manager           Article          Not Supported     Supported
Classic                    Not Supported    Not Supported     Article

51.2 About connecting


You can connect multiple on-premises sites to a single virtual network. This is
especially attractive for building hybrid cloud solutions. Creating a multi-site
connection to your Azure virtual network gateway is very similar to creating
other Site-to-Site connections. In fact, you can use an existing Azure VPN
gateway, as long as the gateway is dynamic (route-based).+
If you already have a static gateway connected to your virtual network, you
can change the gateway type to dynamic without needing to rebuild the
virtual network in order to accommodate multi-site. Before changing the

routing type, make sure that your on-premises VPN gateway supports route-based VPN configurations.

51.3 Points to consider


You won't be able to use the Azure Classic Portal to make changes
to this virtual network. For this release, you'll need to make changes to
the network configuration file instead of using the Azure Classic Portal. If you
make changes in the Azure Classic Portal, they'll overwrite your multi-site
reference settings for this virtual network. +
You should feel pretty comfortable using the network configuration file by the
time you've completed the multi-site procedure. However, if you have
multiple people working on your network configuration, you'll need to make
sure that everyone knows about this limitation. This doesn't mean that you
can't use the Azure Classic Portal at all. You can use it for everything else,
except making configuration changes to this particular virtual network.+

51.4 Before you begin


Before you begin configuration, verify that you have the following:+

An Azure subscription. If you don't already have an Azure subscription, you


can activate your MSDN subscriber benefits or sign up for a free account.

Compatible VPN hardware for each on-premises location. Check About VPN
Devices for Virtual Network Connectivity to verify if the device that you want to
use is something that is known to be compatible.

An externally facing public IPv4 address for each VPN device. The IP address cannot be located behind a NAT. This is a requirement.

You'll need to install the latest version of the Azure PowerShell cmdlets. See
How to install and configure Azure PowerShell for more information about
installing the PowerShell cmdlets.

Someone who is proficient at configuring your VPN hardware. You won't be


able to use the auto-generated VPN scripts from the Azure Classic Portal to
configure your VPN devices. This means you'll have to have a strong
understanding of how to configure your VPN device, or work with someone who
does.

The IP address ranges that you want to use for your virtual network (if you
haven't already created one).

The IP address ranges for each of the local network sites that you'll be
connecting to. You'll need to make sure that the IP address ranges for each of
the local network sites that you want to connect to do not overlap. Otherwise,
the Azure Classic Portal or the REST API will reject the configuration being
uploaded.
For example, if you have two local network sites that both contain the IP address range 10.2.3.0/24 and you have a packet with a destination address of 10.2.3.3, Azure wouldn't know which site you want to send the packet to because the address ranges are overlapping. To prevent routing issues, Azure doesn't allow you to upload a configuration file that has overlapping ranges.

51.5 1. Create a Site-to-Site VPN


If you already have a Site-to-Site VPN with a dynamic routing gateway, great!
You can proceed to Export the virtual network configuration settings. If not,
do the following:+
51.5.1
If you already have a Site-to-Site virtual network, but it has a
static (policy-based) routing gateway:
1.

Change your gateway type to dynamic routing. A multi-site VPN requires a


dynamic (also known as route-based) routing gateway. To change your gateway
type, you'll need to first delete the existing gateway, then create a new one. For
instructions, see How to change the VPN routing type for your gateway.

2.

Configure your new gateway and create your VPN tunnel. For instructions,
see Configure a VPN Gateway in the Azure Classic Portal. First, change your
gateway type to dynamic routing.

51.5.2
1.

If you don't have a Site-to-Site virtual network:

Create your Site-to-Site virtual network using these instructions: Create a


Virtual Network with a Site-to-Site VPN Connection in the Azure Classic Portal.

2.

Configure a dynamic routing gateway using these instructions: Configure a


VPN Gateway. Be sure to select dynamic routing for your gateway type.

51.6 2. Export the network configuration file


Export your network configuration file. The file that you export will be used to
configure your new multi-site settings. If you need instructions on how to
export a file, see the section in the article: How to create a VNet using a
network configuration file in the Azure Portal. +

51.7 3. Open the network configuration file


Open the network configuration file that you downloaded in the last step.
Use any xml editor that you like. The file should look similar to the following:
+
Copy

<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
<VirtualNetworkConfiguration>
<LocalNetworkSites>
<LocalNetworkSite name="Site1">
<AddressSpace>
<AddressPrefix>10.0.0.0/16</AddressPrefix>
<AddressPrefix>10.1.0.0/16</AddressPrefix>
</AddressSpace>
<VPNGatewayAddress>131.2.3.4</VPNGatewayAddress>
</LocalNetworkSite>
<LocalNetworkSite name="Site2">
<AddressSpace>
<AddressPrefix>10.2.0.0/16</AddressPrefix>
<AddressPrefix>10.3.0.0/16</AddressPrefix>
</AddressSpace>
<VPNGatewayAddress>131.4.5.6</VPNGatewayAddress>
</LocalNetworkSite>
</LocalNetworkSites>

<VirtualNetworkSites>
<VirtualNetworkSite name="VNet1" AffinityGroup="USWest">
<AddressSpace>
<AddressPrefix>10.20.0.0/16</AddressPrefix>
<AddressPrefix>10.21.0.0/16</AddressPrefix>
</AddressSpace>
<Subnets>
<Subnet name="FE">
<AddressPrefix>10.20.0.0/24</AddressPrefix>
</Subnet>
<Subnet name="BE">
<AddressPrefix>10.20.1.0/24</AddressPrefix>
</Subnet>
<Subnet name="GatewaySubnet">
<AddressPrefix>10.20.2.0/29</AddressPrefix>
</Subnet>
</Subnets>
<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1">
<Connection type="IPsec" />
</LocalNetworkSiteRef>
</ConnectionsToLocalNetwork>
</Gateway>
</VirtualNetworkSite>

</VirtualNetworkSites>
</VirtualNetworkConfiguration>
</NetworkConfiguration>

51.8 4. Add multiple site references


When you add or remove site reference information, you'll make
configuration changes to the
ConnectionsToLocalNetwork/LocalNetworkSiteRef. Adding a new local site
reference triggers Azure to create a new tunnel. In the example below, the
network configuration is for a single-site connection. Save the file once you
have finished making your changes.+
Copy

<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1"><Connection type="IPsec"
/></LocalNetworkSiteRef>
</ConnectionsToLocalNetwork>
</Gateway>

To add additional site references (create a multi-site configuration), simply add additional
"LocalNetworkSiteRef" lines, as shown in the example below:

<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1"><Connection type="IPsec"
/></LocalNetworkSiteRef>
<LocalNetworkSiteRef name="Site2"><Connection type="IPsec"
/></LocalNetworkSiteRef>

</ConnectionsToLocalNetwork>
</Gateway>

51.9 5. Import the network configuration file


Import the network configuration file. When you import this file with the
changes, the new tunnels will be added. The tunnels will use the dynamic
gateway that you created earlier. If you need instructions on how to import
the file, see the section in the article: How to create a VNet using a network
configuration file in the Azure Portal. +

51.10

6. Download keys

Once your new tunnels have been added, use the PowerShell cmdlet Get-AzureVNetGatewayKey to get the IPsec/IKE pre-shared keys for each tunnel.
For example:+
Copy

Get-AzureVNetGatewayKey -VNetName "VNet1" -LocalNetworkSiteName "Site1"

Get-AzureVNetGatewayKey -VNetName "VNet1" -LocalNetworkSiteName "Site2"

If you prefer, you can also use the Get Virtual Network Gateway Shared Key
REST API to get the pre-shared keys.+

51.11

7. Verify your connections

Check the multi-site tunnel status. After downloading the keys for each
tunnel, you'll want to verify connections. Use Get-AzureVnetConnection to
get a list of virtual network tunnels, as shown in the example below. VNet1 is
the name of the VNet.+

Copy

Get-AzureVnetConnection -VNetName VNET1

ConnectivityState         : Connected
EgressBytesTransferred    : 661530
IngressBytesTransferred   : 519207
LastConnectionEstablished : 5/2/2014 2:51:40 PM
LastEventID               : 23401
LastEventMessage          : The connectivity state for the local network site 'Site1' changed from Not Connected to Connected.
LastEventTimeStamp        : 5/2/2014 2:51:40 PM
LocalNetworkSiteName      : Site1
OperationDescription      : Get-AzureVNetConnection
OperationId               : 7f68a8e6-51e9-9db4-88c2-16b8067fed7f
OperationStatus           : Succeeded

ConnectivityState         : Connected
EgressBytesTransferred    : 789398
IngressBytesTransferred   : 143908
LastConnectionEstablished : 5/2/2014 3:20:40 PM
LastEventID               : 23401
LastEventMessage          : The connectivity state for the local network site 'Site2' changed from Not Connected to Connected.
LastEventTimeStamp        : 5/2/2014 2:51:40 PM
LocalNetworkSiteName      : Site2
OperationDescription      : Get-AzureVNetConnection
OperationId               : 7893b329-51e9-9db4-88c2-16b8067fed7f
OperationStatus           : Succeeded

51.12 Next steps

To learn more about VPN Gateways, see About VPN Gateways.

52 VM Sizes
52.1 Notes: Standard A0 - A4 using CLI and PowerShell
In the classic deployment model, some VM size names are slightly different
in CLI and PowerShell:+

Standard_A0 is ExtraSmall

Standard_A1 is Small

Standard_A2 is Medium

Standard_A3 is Large

Standard_A4 is ExtraLarge

Size                        CPU cores   Memory: GiB   Local HDD: GiB   Max data disks   Max data disk throughput: IOPS   Max NICs / Network bandwidth
Standard_A0 (ExtraSmall)    1           0.768         20               1                1x500                            1 / low
Standard_A1 (Small)         1           1.75          70               2                2x500                            1 / moderate
Standard_A2 (Medium)        2           3.5           135              4                4x500                            1 / moderate
Standard_A3 (Large)         4           7             285              8                8x500                            2 / high
Standard_A4 (ExtraLarge)    8           14            605              16               16x500                           4 / high
Standard_A5                 2           14            135              4                4x500                            1 / moderate
Standard_A6                 4           28            285              8                8x500                            2 / high
Standard_A7                 8           56            605              16               16x500                           4 / high
53 Database tiers

53.1.1

Standard service tier

Service tier                  S0       S1       S2       S3
Max DTUs                      10       20       50       100
Max database size             250 GB   250 GB   250 GB   250 GB
Max in-memory OLTP storage    N/A      N/A      N/A      N/A
Max concurrent workers        60       90       120      200
Max concurrent logins         60       90       120      200
Max concurrent sessions       600      900      1200     2400

53.1.2

Premium service tier

Service tier                  P1       P2       P4       P6       P11      P15
Max DTUs                      125      250      500      1000     1750     4000
Max database size             500 GB   500 GB   500 GB   500 GB   1 TB     1 TB
Max in-memory OLTP storage    1 GB     2 GB     4 GB     8 GB     14 GB    32 GB
Max concurrent workers        200      400      800      1600     2400     6400
Max concurrent logins         200      400      800      1600     2400     6400
Max concurrent sessions       30000    30000    30000    30000    30000    30000

54 How to configure and run startup tasks


for a cloud service
You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long-running process.
54.1.1.1.1

Note

Startup tasks are not applicable to Virtual Machines, only to Cloud Service
Web and Worker roles.+

54.2 How startup tasks work


Startup tasks are actions that are taken before your roles begin and are
defined in the ServiceDefinition.csdef file by using the Task element within
the Startup element. Frequently startup tasks are batch files, but they can
also be console applications, or batch files that start PowerShell scripts.+
Environment variables pass information into a startup task, and local storage
can be used to pass information out of a startup task. For example, an
environment variable can specify the path to a program you want to install,

and files can be written to local storage that can then be read later by your
roles.+
Your startup task can log information and errors to the directory specified by the TEMP environment variable. During the startup task, the TEMP environment variable resolves to the C:\Resources\temp\[guid].[rolename]\RoleTemp directory when running on the cloud.
Startup tasks can also be executed several times between reboots. For
example, the startup task will be run each time the role recycles, and role
recycles may not always include a reboot. Startup tasks should be written in
a way that allows them to run several times without problems.+
Startup tasks must end with an errorlevel (or exit code) of zero for the
startup process to complete. If a startup task ends with a non-zero
errorlevel, the role will not start.+

54.3 Role startup order


The following lists the role startup procedure in Azure:+
1. The instance is marked as Starting and does not receive traffic.
2. All startup tasks are executed according to their taskType attribute.
   The simple tasks are executed synchronously, one at a time.
   The background and foreground tasks are started asynchronously, parallel to the startup task.
54.3.1.1.1

Warning

IIS may not be fully configured during the startup task stage in the startup process, so role-specific data may not be available. Startup tasks that require role-specific data should use Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart.
3. The role host process is started and the site is created in IIS.
4. The Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart method is called.
5. The instance is marked as Ready and traffic is routed to the instance.
6. The Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.Run method is called.

54.4 Example of a startup task


Startup tasks are defined in the ServiceDefinition.csdef file, in the Task
element. The commandLine attribute specifies the name and parameters of
the startup batch file or console command, the executionContext attribute
specifies the privilege level of the startup task, and the taskType attribute
specifies how the task will be executed.+
In this example, an environment variable, MyVersionNumber, is created for
the startup task and set to the value "1.0.0.0".+
ServiceDefinition.csdef:+
Copy
xml
<Startup>
<Task commandLine="Startup.cmd" executionContext="limited" taskType="simple" >
<Environment>
<Variable name="MyVersionNumber" value="1.0.0.0" />
</Environment>

</Task>
</Startup>

In the following example, the Startup.cmd batch file writes the line "The
current version is 1.0.0.0" to the StartupLog.txt file in the directory specified
by the TEMP environment variable. The EXIT /B 0 line ensures that the
startup task ends with an errorlevel of zero.+
Copy
cmd
ECHO The current version is %MyVersionNumber% >> "%TEMP%\StartupLog.txt" 2>&1
EXIT /B 0

54.4.1.1.1

Note

In Visual Studio, the Copy to Output Directory property for your startup
batch file should be set to Copy Always to be sure that your startup batch
file is properly deployed to your project on Azure (approot\bin for Web
roles, and approot for worker roles).+

54.5 Description of task attributes


The following describes the attributes of the Task element in the
ServiceDefinition.csdef file:+
commandLine - Specifies the command line for the startup task:+

The command, with optional command line parameters, which begins the
startup task.

Frequently this is the filename of a .cmd or .bat batch file.

The task is relative to the AppRoot\Bin folder for the deployment.


Environment variables are not expanded in determining the path and file of the
task. If environment expansion is required, you can create a small .cmd script
that calls your startup task.

Can be a console application or a batch file that starts a PowerShell script.

executionContext - Specifies the privilege level for the startup task. The
privilege level can be limited or elevated:+

limited
The startup task runs with the same privileges as the role. When the
executionContext attribute for the Runtime element is also limited, then user
privileges are used.

elevated
The startup task runs with administrator privileges. This allows startup tasks to
install programs, make IIS configuration changes, perform registry changes, and
other administrator level tasks, without increasing the privilege level of the role
itself.

54.5.1.1.1

Note

The privilege level of a startup task does not need to be the same as the role
itself.+
taskType - Specifies the way a startup task is executed.+

simple
Tasks are executed synchronously, one at a time, in the order specified in the
ServiceDefinition.csdef file. When one simple startup task ends with an
errorlevel of zero, the next simple startup task is executed. If there are no
more simple startup tasks to execute, then the role itself will be started.
54.5.1.1.2 Note

If the simple task ends with a non-zero errorlevel, the instance will be
blocked. Subsequent simple startup tasks, and the role itself, will not start.

To ensure that your batch file ends with an errorlevel of zero, execute the
command EXIT /B 0 at the end of your batch file process.

background
Tasks are executed asynchronously, in parallel with the startup of the role.

foreground
Tasks are executed asynchronously, in parallel with the startup of the role. The
key difference between a foreground and a background task is that a
foreground task prevents the role from recycling or shutting down until the task
has ended. The background tasks do not have this restriction.

54.6 Environment variables


Environment variables are a way to pass information to a startup task. For example, you can put the path to a blob that contains a program to install, or port numbers that your role will use, or settings to control features of your startup task.
There are two kinds of environment variables for startup tasks: static environment variables and environment variables based on members of the RoleEnvironment class. Both are in the Environment section of the ServiceDefinition.csdef file, and both use the Variable element and name attribute.
Static environment variables use the value attribute of the Variable
element. The example above creates the environment variable
MyVersionNumber which has a static value of "1.0.0.0". Another example
would be to create a StagingOrProduction environment variable which you
can manually set to values of "staging" or "production" to perform
different startup actions based on the value of the StagingOrProduction
environment variable.+

Environment variables based on members of the RoleEnvironment class do


not use the value attribute of the Variable element. Instead, the
RoleInstanceValue child element, with the appropriate XPath attribute value,
is used to create an environment variable based on a specific member of
the RoleEnvironment class. Values for the XPath attribute to access various
RoleEnvironment values can be found here.+
For example, to create an environment variable that is "true" when the
instance is running in the compute emulator, and "false" when running in
the cloud, use the following Variable and RoleInstanceValue elements:+
Copy
xml
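<!-- A minimal sketch of the elements described above. The variable name IsEmulated is
     illustrative; the xpath value is the documented RoleEnvironment xpath for the
     emulated flag, so the variable is "true" in the compute emulator and "false" in the cloud. -->
<Variable name="IsEmulated">
  <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
</Variable>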

55 Using Shared Access Signatures (SAS)


Using a shared access signature (SAS) is a powerful way to grant limited access to
objects in your storage account to other clients, without having to expose your
account key. In Part 1 of this tutorial on shared access signatures, we'll provide an
overview of the SAS model and review SAS best practices.+
For additional code examples using SAS beyond those presented here, see Getting
Started with Azure Blob Storage in .NET and other samples available in the Azure
Code Samples library. You can download the sample applications and run them, or
browse the code on GitHub.+
What is a shared access signature?
A shared access signature provides delegated access to resources in your storage
account. With a SAS, you can grant clients access to resources in your storage
account, without sharing your account keys. This is the key point of using shared access signatures in your applications: a SAS is a secure way to share your storage resources without compromising your account keys.
Important

Your storage account key is similar to the root password for your storage account.
Always be careful to protect your account key. Avoid distributing it to other users,
hard-coding it, or saving it in a plain-text file that is accessible to others. Regenerate
your account key using the Azure Portal if you believe it may have been
compromised. To learn how to regenerate your account key, see How to create,
manage, or delete a storage account in the Azure Portal.+
A SAS gives you granular control over what type of access you grant to clients who
have the SAS, including:+
The interval over which the SAS is valid, including the start time and the expiry
time.
The permissions granted by the SAS. For example, a SAS on a blob might grant a
user read and write permissions to that blob, but not delete permissions.
An optional IP address or range of IP addresses from which Azure Storage will accept
the SAS. For example, you might specify a range of IP addresses belonging to your
organization. This provides another measure of security for your SAS.
The protocol over which Azure Storage will accept the SAS. You can use this optional
parameter to restrict access to clients using HTTPS.
When should you use a shared access signature?
You can use a SAS when you want to provide access to resources in your storage
account to a client that can't be trusted with the account key. Your storage account
keys include both a primary and secondary key, both of which grant administrative
access to your account and all of the resources in it. Exposing either of your account
keys opens your account to the possibility of malicious or negligent use. Shared
access signatures provide a safe alternative that allows other clients to read, write,
and delete data in your storage account according to the permissions you've
granted, and without need for the account key.+
A common scenario where a SAS is useful is a service where users read and write
their own data to your storage account. In a scenario where a storage account
stores user data, there are two typical design patterns:+
1. Clients upload and download data via a front-end proxy service, which performs
authentication. This front-end proxy service has the advantage of allowing
validation of business rules, but for large amounts of data or high-volume
transactions, creating a service that can scale to match demand may be expensive
or difficult.+

2. A lightweight service authenticates the client as needed and then generates a
SAS. Once the client receives the SAS, they can access storage account resources
directly with the permissions defined by the SAS and for the interval allowed by the
SAS. The SAS mitigates the need for routing all data through the front-end proxy
service.+

Many real-world services may use a hybrid of these two approaches, depending on
the scenario involved, with some data processed and validated via the front-end
proxy while other data is saved and/or read directly using SAS.+
Additionally, you will need to use a SAS to authenticate the source object in a copy
operation in certain scenarios:+
When you copy a blob to another blob that resides in a different storage account,
you must use a SAS to authenticate the source blob. With version 2015-04-05, you
can optionally use a SAS to authenticate the destination blob as well.
When you copy a file to another file that resides in a different storage account, you
must use a SAS to authenticate the source file. With version 2015-04-05, you can
optionally use a SAS to authenticate the destination file as well.
When you copy a blob to a file, or a file to a blob, you must use a SAS to
authenticate the source object, even if the source and destination objects reside
within the same storage account.
Types of shared access signatures
Version 2015-04-05 of Azure Storage introduces a new type of shared access
signature, the account SAS. You can now create either of two types of shared access
signatures:+

Account SAS. The account SAS delegates access to resources in one or more of the
storage services. All of the operations available via a service SAS are also available
via an account SAS. Additionally, with the account SAS, you can delegate access to
operations that apply to a given service, such as Get/Set Service Properties and Get
Service Stats. You can also delegate access to read, write, and delete operations on
blob containers, tables, queues, and file shares that are not permitted with a service
SAS. See Constructing an Account SAS for in-depth information about constructing
the account SAS token.
Service SAS. The service SAS delegates access to a resource in just one of the
storage services: the Blob, Queue, Table, or File service. See Constructing a Service
SAS and Service SAS Examples for in-depth information about constructing the
service SAS token.
How a shared access signature works
A shared access signature is a signed URI that points to one or more storage
resources and includes a token that contains a special set of query parameters. The
token indicates how the resources may be accessed by the client. One of the query
parameters, the signature, is constructed from the SAS parameters and signed with
the account key. This signature is used by Azure Storage to authenticate the SAS.+
Here's an example of a SAS URI, showing the resource URI and the SAS token:+

Note that the SAS token is a string generated on the client side (see the SAS
examples section below for code examples). The SAS token generated by the
storage client library is not tracked by Azure Storage in any way. You can create an
unlimited number of SAS tokens on the client side.+
When a client provides a SAS URI to Azure Storage as part of a request, the service
checks the SAS parameters and signature to verify that it is valid for authenticating
the request. If the service verifies that the signature is valid, then the request is
authenticated. Otherwise, the request is declined with error code 403 (Forbidden).+
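As a rough illustration of the client-side generation described above, the following sketch uses the classic Microsoft.WindowsAzure.Storage .NET library to create an ad hoc service SAS for a single blob; the container and blob names and the storageConnectionString variable are placeholders for this example:

Copy
C#
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// storageConnectionString is assumed to hold a full account connection string.
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("sascontainer");
CloudBlockBlob blob = container.GetBlockBlobReference("sasblob.txt");

// Ad hoc policy: the start time is omitted, so the SAS is valid immediately.
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
    Permissions = SharedAccessBlobPermissions.Read
};

// The token is computed locally from the account key; Azure Storage does not track it.
string sasToken = blob.GetSharedAccessSignature(policy);
string sasUri = blob.Uri + sasToken;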
Shared access signature parameters
The account SAS and service SAS tokens include some common parameters, and
also take a few parameters that are different.

Parameters common to account SAS and service SAS tokens


Api version. An optional parameter that specifies the storage service version to use to execute the request.
Service version. A required parameter that specifies the storage service version to use to authenticate the request.
Start time. This is the time at which the SAS becomes valid. The start time for a shared access signature is optional; if omitted, the SAS is effective immediately. It must be expressed in UTC (Coordinated Universal Time), with the UTC designator ("Z"), for example 1994-11-05T13:15:30Z.
Expiry time. This is the time after which the SAS is no longer valid. Best practices recommend that you either specify an expiry time for a SAS, or associate it with a stored access policy. It must be expressed in UTC (Coordinated Universal Time), with the UTC designator ("Z"), for example 1994-11-05T13:15:30Z (see more below).
Permissions. The permissions specified on the SAS indicate what operations the
client can perform against the storage resource using the SAS. Available
permissions differ for an account SAS and a service SAS.
IP. An optional parameter that specifies an IP address or a range of IP addresses
outside of Azure (see the section Routing session configuration state for Express
Route) from which to accept requests.
Protocol. An optional parameter that specifies the protocol permitted for a request.
Possible values are both HTTPS and HTTP (https,http), which is the default value, or
HTTPS only (https). Note that HTTP only is not a permitted value.
Signature. The signature is constructed from the other parameters specified as part of the token and then encrypted. It's used to authenticate the SAS.
Parameters for an account SAS token
Service or services. An account SAS can delegate access to one or more of the
storage services. For example, you can create an account SAS that delegates
access to the Blob and File service. Or you can create a SAS that delegates access
to all four services (Blob, Queue, Table, and File).
Storage resource types. An account SAS applies to one or more classes of storage
resources, rather than a specific resource. You can create an account SAS to
delegate access to:
Service-level APIs, which are called against the storage account resource. Examples
include Get/Set Service Properties, Get Service Stats, and List
Containers/Queues/Tables/Shares.
Container-level APIs, which are called against the container objects for each service:
blob containers, queues, tables, and file shares. Examples include Create/Delete

Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, and List


Blobs/Files and Directories.
Object-level APIs, which are called against blobs, queue messages, table entities,
and files. For example, Put Blob, Query Entity, Get Messages, and Create File.
Parameters for a service SAS token
Storage resource. Storage resources for which you can delegate access with a
service SAS include:
Containers and blobs
File shares and files
Queues
Tables and ranges of table entities.
Examples of SAS URIs
Here is an example of a service SAS URI that provides read and write permissions to
a blob. The table breaks down each part of the URI to understand how it contributes
to the SAS:+
Copy

https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
Blob URI: https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt
    The address of the blob. Note that using HTTPS is highly recommended.
Storage services version: sv=2015-04-05
    For storage services version 2012-02-12 and later, this parameter indicates the version to use.
Start time: st=2015-04-29T22%3A18%3A26Z
    Specified in UTC time. If you want the SAS to be valid immediately, omit the start time.
Expiry time: se=2015-04-30T02%3A23%3A26Z
    Specified in UTC time.
Resource: sr=b
    The resource is a blob.
Permissions: sp=rw
    The permissions granted by the SAS include Read (r) and Write (w).
IP range: sip=168.1.5.60-168.1.5.70
    The range of IP addresses from which a request will be accepted.
Protocol: spr=https
    Only requests using HTTPS are permitted.
Signature: sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
    Used to authenticate access to the blob. The signature is an HMAC computed over a string-to-sign and key using the SHA256 algorithm, and then encoded using Base64 encoding.

And here is an example of an account SAS that uses the same common parameters
on the token. Since these parameters are described above, they are not described
here. Only the parameters that are specific to account SAS are described in the
table below.+
Copy

https://myaccount.blob.core.windows.net/?restype=service&comp=properties&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=F%6GRVAZ5Cdj2Pw4tgU7IlSTkWgn7bUkkAg8P6HESXwmf%4B
Resource URI: https://myaccount.blob.core.windows.net/?restype=service&comp=properties
    The Blob service endpoint, with parameters for getting service properties (when called with GET) or setting service properties (when called with SET).
Services: ss=bf
    The SAS applies to the Blob and File services.
Resource types: srt=s
    The SAS applies to service-level operations.
Permissions: sp=rw
    The permissions grant access to read and write operations.

Given that permissions are restricted to the service level, accessible operations with
this SAS are Get Blob Service Properties (read) and Set Blob Service Properties
(write). However, with a different resource URI, the same SAS token could also be
used to delegate access to Get Blob Service Stats (read).+
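For comparison, here is a sketch of generating an account SAS token with parameters similar to the example above, again assuming the classic Microsoft.WindowsAzure.Storage .NET library and a placeholder storageConnectionString:

Copy
C#
using System;
using Microsoft.WindowsAzure.Storage;

CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy
{
    Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write, // sp=rw
    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,           // ss=bf
    ResourceTypes = SharedAccessAccountResourceTypes.Service,                                 // srt=s
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
    Protocols = SharedAccessProtocol.HttpsOnly                                                // spr=https
};

// Append the returned token to any service endpoint URI in the account.
string accountSasToken = account.GetSharedAccessSignature(policy);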
Controlling a SAS with a stored access policy
A shared access signature can take one of two forms:+
Ad hoc SAS: When you create an ad hoc SAS, the start time, expiry time, and
permissions for the SAS are all specified on the SAS URI (or implied, in the case
where start time is omitted). This type of SAS may be created as an account SAS or
a service SAS.
SAS with stored access policy: A stored access policy is defined on a resource
container - a blob container, table, queue, or file share - and can be used to manage
constraints for one or more shared access signatures. When you associate a SAS
with a stored access policy, the SAS inherits the constraints - the start time, expiry
time, and permissions - defined for the stored access policy.
Note
Currently, an account SAS must be an ad hoc SAS. Stored access policies are not
yet supported for account SAS.+

The difference between the two forms is important for one key scenario: revocation.
A SAS is a URL, so anyone who obtains the SAS can use it, regardless of who
requested it to begin with. If a SAS is published publicly, it can be used by anyone in
the world. A SAS that is distributed is valid until one of four things happens:+
The expiry time specified on the SAS is reached.
The expiry time specified on the stored access policy referenced by the SAS is
reached (if a stored access policy is referenced, and if it specifies an expiry time).
This can either occur because the interval elapses, or because you have modified
the stored access policy to have an expiry time in the past, which is one way to
revoke the SAS.
The stored access policy referenced by the SAS is deleted, which is another way to
revoke the SAS. Note that if you recreate the stored access policy with exactly the
same name, all existing SAS tokens will again be valid according to the permissions
associated with that stored access policy (assuming that the expiry time on the SAS
has not passed). If you are intending to revoke the SAS, be sure to use a different
name if you recreate the access policy with an expiry time in the future.
The account key that was used to create the SAS is regenerated. Note that doing
this will cause all application components using that account key to fail to
authenticate until they are updated to use either the other valid account key or the
newly regenerated account key.
Important
A shared access signature URI is associated with the account key used to create the
signature, and the associated stored access policy (if any). If no stored access policy
is specified, the only way to revoke a shared access signature is to change the
account key.+
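As a sketch of the stored-access-policy approach (classic .NET client library assumed; the policy, container, and storageConnectionString names are illustrative), you define the policy on the container and then issue SAS tokens that reference it. Revoking those tokens later is a matter of editing or deleting the policy:

Copy
C#
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobContainer container =
    account.CreateCloudBlobClient().GetContainerReference("sascontainer");

// Register (or overwrite) a stored access policy on the container.
BlobContainerPermissions permissions = container.GetPermissions();
permissions.SharedAccessPolicies["readwrite-policy"] = new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMonths(1),
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write
};
container.SetPermissions(permissions);

// The SAS token names the policy; start time, expiry, and permissions come from the policy,
// so editing or deleting the policy changes or revokes every SAS that references it.
string sasToken = container.GetSharedAccessSignature(null, "readwrite-policy");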
Authenticating from a client application with a SAS
A client who is in possession of a SAS can use the SAS to authenticate a request
against a storage account for which they do not possess the account keys. A SAS
can be included in a connection string, or used directly from the appropriate
constructor or method.+
Using a SAS in a connection string
If you possess a shared access signature (SAS) URL that grants you access to
resources in a storage account, you can use the SAS in a connection string. Because the SAS URI includes the information required to authenticate the request, it provides the protocol, the service endpoint, and the necessary credentials to access the resource.
To create a connection string that includes a shared access signature, specify the
string in the following format:+

Copy

BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken
Each service endpoint is optional, although the connection string must contain at
least one.+
Note
Using HTTPS with a SAS is recommended as a best practice.+
If you are specifying a SAS in a connection string in a configuration file, you may
need to encode special characters in the URL.+
Service SAS example
Here's an example of a connection string that includes a service SAS for Blob
storage:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
And here's an example of the same connection string with encoding of special
characters:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&amp;sr=b&amp;si=tutorial-policy-635959936145100803&amp;sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
Account SAS example
Here's an example of a connection string that includes an account SAS for Blob and
File storage. Note that endpoints for both services are specified:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl
And here's an example of the same connection string with URL encoding:+
Copy

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&amp;sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&amp;spr=https&amp;st=2016-04-12T03%3A24%3A31Z&amp;se=2016-04-13T03%3A29%3A31Z&amp;srt=s&amp;ss=bf&amp;sp=rwl
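A short sketch of consuming such a connection string with the .NET client library follows; the SharedAccessSignature value below is a placeholder, not a working token:

Copy
C#
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Substitute one of the example connection strings above.
string connectionString =
    "BlobEndpoint=https://storagesample.blob.core.windows.net;" +
    "SharedAccessSignature=<sas-token>";

// The parsed account can reach only the services, resources, and operations the SAS permits.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
Console.WriteLine(container.Uri);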
Using a SAS in a constructor or method
Several Azure Storage client library constructors and method overloads offer a SAS
parameter, so that you can authenticate a request to the service with a SAS.+
For example, here a SAS URI is used to create a reference to a block blob. The SAS
provides the only credentials needed for the request. The block blob reference is
then used for a write operation:+
Copy
C#
string sasUri = "https://storagesample.blob.core.windows.net/sample-container/" +
    "sampleBlob.txt?sv=2015-07-08&sr=b&sig=39Up9JzHkxhUIhFEjEH9594DJxe7w6cIRCg0V6lCGSo%3D" +
    "&se=2016-10-18T21%3A51%3A37Z&sp=rcw";

CloudBlockBlob blob = new CloudBlockBlob(new Uri(sasUri));

// Create operation: Upload a blob with the specified name to the container.
// If the blob does not exist, it will be created. If it does exist, it will be overwritten.
try

{
MemoryStream msWrite = new
MemoryStream(Encoding.UTF8.GetBytes(blobContent));
msWrite.Position = 0;
using (msWrite)
{
await blob.UploadFromStreamAsync(msWrite);
}

Console.WriteLine("Create operation succeeded for SAS {0}", sasUri);


Console.WriteLine();
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
Console.WriteLine("Create operation failed for SAS {0}", sasUri);
Console.WriteLine("Additional error information: " + e.Message);
Console.WriteLine();
}
else
{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}
Best practices for using SAS
When you use shared access signatures in your applications, you need to be aware
of two potential risks:+

If a SAS is leaked, it can be used by anyone who obtains it, which can potentially
compromise your storage account.
If a SAS provided to a client application expires and the application is unable to
retrieve a new SAS from your service, then the application's functionality may be
hindered.
The following recommendations for using shared access signatures will help balance
these risks:+
Always use HTTPS to create a SAS or to distribute a SAS. If a SAS is passed over
HTTP and intercepted, an attacker performing a man-in-the-middle attack will be
able to read the SAS and then use it just as the intended user could have,
potentially compromising sensitive data or allowing for data corruption by the
malicious user.
Reference stored access policies where possible. Stored access policies give you the
option to revoke permissions without having to regenerate the storage account
keys. Set the expiration on these to be a very long time (or infinite) and make sure
that it is regularly updated to move it farther into the future.
Use near-term expiration times on an ad hoc SAS. In this way, even if a SAS is
compromised unknowingly, it will only be viable for a short time duration. This
practice is especially important if you cannot reference a stored access policy. This
practice also helps limit the amount of data that can be written to a blob by limiting
the time available to upload to it.
Have clients automatically renew the SAS if necessary. Clients should renew the SAS
well before the expiration, in order to allow time for retries if the service providing
the SAS is unavailable. If your SAS is meant to be used for a small number of
immediate, short-lived operations that are expected to be completed within the
expiration period, then this may be unnecessary, as the SAS is not expected to be
renewed. However, if you have a client that is routinely making requests via SAS, then
the possibility of expiration comes into play. The key consideration is to balance the
need for the SAS to be short-lived (as stated above) with the need to ensure that
the client requests renewal early enough to avoid disruption due to the SAS
expiring before it is successfully renewed.
Be careful with SAS start time. If you set the start time for a SAS to now, then due to
clock skew (differences in current time according to different machines), failures
may be observed intermittently for the first few minutes. In general, set the start
time to be at least 15 minutes in the past, or don't set it at all, which makes the SAS
valid immediately in all cases. The same generally applies to the expiry time as well:
remember that you may observe up to 15 minutes of clock skew in either direction
on any request. Note that for clients using a REST version prior to 2012-02-12, the
maximum duration for a SAS that does not reference a stored access policy is 1
hour, and any SAS specifying a longer duration will fail. A sketch of this guidance
appears after this list.

Be specific with the resource to be accessed. A typical security best practice is to


provide a user with the minimum required privileges. If a user only needs read
access to a single entity, then grant them read access to that single entity, and not
read/write/delete access to all entities. This also helps mitigate the threat of the SAS
being compromised, as the SAS has less power in the hands of an attacker.
Understand that your account will be billed for any usage, including usage done with a
SAS. If you provide write access to a blob, a user may choose to upload a 200 GB
blob. If you've given them read access as well, they may choose to download it 10
times, incurring 2 TB in egress costs for you. Again, provide limited permissions to
help mitigate the potential actions of malicious users. Use a short-lived SAS to reduce
this threat (but be mindful of clock skew on the end time).
Validate data written using SAS. When a client application writes data to your
storage account, keep in mind that there can be problems with that data. If your
application requires the data to be validated or authorized before it is ready to
use, you should perform this validation after the data is written and before it is used
by your application. This practice also protects against corrupt or malicious data
being written to your account, either by a user who properly acquired the SAS, or by
a user exploiting a leaked SAS.
Don't always use SAS. Sometimes the risks associated with a particular operation
against your storage account outweigh the benefits of SAS. For such operations,
create a middle-tier service that writes to your storage account after performing
business rule validation, authentication, and auditing. Also, sometimes it's simpler
to manage access in other ways. For example, if you want to make all blobs in a
container publicly readable, you can make the container Public, rather than
providing a SAS to every client for access.
Use Storage Analytics to monitor your application. You can use logging and metrics
to observe any spike in authentication failures due to an outage in your SAS
provider service or to the inadvertent removal of a stored access policy. See the
Azure Storage Team Blog for additional information.
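As a concrete illustration of the start-time and near-term-expiration guidance above, the following sketch builds an ad hoc blob SAS whose start time is backdated by 15 minutes and whose expiry is one hour out; the blob variable is assumed to be an existing CloudBlockBlob reference:
C#
// Sketch: an ad hoc SAS that tolerates clock skew (backdated start) and expires soon.
// 'blob' is assumed to be an existing CloudBlockBlob reference.
SharedAccessBlobPolicy skewTolerantPolicy = new SharedAccessBlobPolicy()
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-15),  // or omit the start time entirely
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),     // near-term expiry for an ad hoc SAS
    Permissions = SharedAccessBlobPermissions.Read
};

string skewTolerantSasToken = blob.GetSharedAccessSignature(skewTolerantPolicy);
Console.WriteLine(blob.Uri + skewTolerantSasToken);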
SAS examples
Below are some examples of both types of shared access signatures, account SAS
and service SAS.
To run these examples, you'll need to download and reference these packages:

Azure Storage Client Library for .NET, version 6.x or later (to use account SAS)

Azure Configuration Manager

For additional examples that show how to create and test a SAS, see Azure Code
Samples for Storage.

Example: Create and use an account SAS
The following code example creates an account SAS that is valid for the Blob and
File services, and gives the client read, write, and list permissions to
access service-level APIs. The account SAS restricts the protocol to HTTPS, so the
request must be made over HTTPS.
C#
static string GetAccountSASToken()
{
    // To create the account SAS, you need to use your shared key credentials. Modify for your account.
    const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);

    // Create a new access policy for the account.
    SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
    {
        Permissions = SharedAccessAccountPermissions.Read |
            SharedAccessAccountPermissions.Write |
            SharedAccessAccountPermissions.List,
        Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
        ResourceTypes = SharedAccessAccountResourceTypes.Service,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Protocols = SharedAccessProtocol.HttpsOnly
    };

    // Return the SAS token.
    return storageAccount.GetSharedAccessSignature(policy);
}

To use the account SAS to access service-level APIs for the Blob service, construct a
Blob client object using the SAS and the Blob storage endpoint for your storage
account.
C#
static void UseAccountSAS(string sasToken)
{
    // Create new storage credentials using the SAS token.
    StorageCredentials accountSAS = new StorageCredentials(sasToken);
    // Use these credentials and the account name to create a Blob service client.
    CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS, "account-name", endpointSuffix: null, useHttps: true);
    CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();

    // Now set the service properties for the Blob client created with the SAS.
    blobClientWithSAS.SetServiceProperties(new ServiceProperties()
    {
        HourMetrics = new MetricsProperties()
        {
            MetricsLevel = MetricsLevel.ServiceAndApi,
            RetentionDays = 7,
            Version = "1.0"
        },
        MinuteMetrics = new MetricsProperties()
        {
            MetricsLevel = MetricsLevel.ServiceAndApi,
            RetentionDays = 7,
            Version = "1.0"
        },
        Logging = new LoggingProperties()
        {
            LoggingOperations = LoggingOperations.All,
            RetentionDays = 14,
            Version = "1.0"
        }
    });

    // The permissions granted by the account SAS also permit you to retrieve service properties.
    ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
    Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
    Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
    Console.WriteLine(serviceProperties.HourMetrics.Version);
}
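Tying the two methods together, a caller might do something like the following (a usage sketch; the method names are the ones defined above):
C#
// Usage sketch: generate an account SAS, then use it to call service-level APIs.
string accountSasToken = GetAccountSASToken();
Console.WriteLine("Account SAS token: {0}", accountSasToken);
UseAccountSAS(accountSasToken);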
Example: Create a stored access policy
The following code creates a stored access policy on a container. You can use the
access policy to specify constraints for a service SAS on the container or its blobs.
C#
private static async Task CreateSharedAccessPolicyAsync(CloudBlobContainer container, string policyName)
{
    // Create a new shared access policy and define its constraints.
    // The access policy provides create, write, read, list, and delete permissions.
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        // When the start time for the SAS is omitted, the start time is assumed to be
        // the time when the storage service receives the request.
        // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Permissions = SharedAccessBlobPermissions.Read |
            SharedAccessBlobPermissions.List |
            SharedAccessBlobPermissions.Write |
            SharedAccessBlobPermissions.Create |
            SharedAccessBlobPermissions.Delete
    };

    // Get the container's existing permissions.
    BlobContainerPermissions permissions = await container.GetPermissionsAsync();

    // Add the new policy to the container's permissions, and set the container's permissions.
    permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
    await container.SetPermissionsAsync(permissions);
}
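One benefit of a stored access policy, noted in the best practices above, is that it can be revoked without regenerating account keys. A hedged sketch of a revocation helper (the method name is illustrative):
C#
// Sketch: revoke a stored access policy by removing it from the container's permissions.
// Any SAS that references the removed policy stops working once the change takes effect.
private static async Task RevokeSharedAccessPolicyAsync(CloudBlobContainer container, string policyName)
{
    BlobContainerPermissions permissions = await container.GetPermissionsAsync();
    if (permissions.SharedAccessPolicies.Remove(policyName))
    {
        await container.SetPermissionsAsync(permissions);
    }
}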
Example: Create a service SAS on a container
The following code creates a SAS on a container. If the name of an existing stored
access policy is provided, that policy is associated with the SAS. If no stored access
policy is provided, then the code creates an ad hoc SAS on the container.
C#
private static string GetContainerSasUri(CloudBlobContainer container, string storedPolicyName = null)
{
    string sasContainerToken;

    // If no stored policy is specified, create a new access policy and define its constraints.
    if (storedPolicyName == null)
    {
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad hoc SAS, and
        // to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request.
            // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List
        };

        // Generate the shared access signature on the container, setting the constraints directly on the signature.
        sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);

        Console.WriteLine("SAS for blob container (ad hoc): {0}", sasContainerToken);
        Console.WriteLine();
    }
    else
    {
        // Generate the shared access signature on the container. In this case, all of the constraints for the
        // shared access signature are specified on the stored access policy, which is provided by name.
        // It is also possible to specify some constraints on an ad hoc SAS and others on the stored access policy.
        sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);

        Console.WriteLine("SAS for blob container (stored access policy): {0}", sasContainerToken);
        Console.WriteLine();
    }

    // Return the URI string for the container, including the SAS token.
    return container.Uri + sasContainerToken;
}
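A client that receives only the URI returned by GetContainerSasUri can construct a container reference directly from it and use the permissions it grants. A usage sketch, assuming an async context and the write permission granted by the ad hoc policy above:
C#
// Usage sketch: a client builds a container reference from the SAS URI alone.
string containerSasUri = GetContainerSasUri(container);
CloudBlobContainer containerWithSas = new CloudBlobContainer(new Uri(containerSasUri));

// Write a blob into the container using only the SAS for authentication.
CloudBlockBlob newBlob = containerWithSas.GetBlockBlobReference("sas-sample.txt");
await newBlob.UploadTextAsync("Written with a container SAS.");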
Example: Create a service SAS on a blob
The following code creates a SAS on a blob. If the name of an existing stored access
policy is provided, that policy is associated with the SAS. If no stored access policy
is provided, then the code creates an ad hoc SAS on the blob.
C#
private static string GetBlobSasUri(CloudBlobContainer container, string blobName, string policyName = null)
{
    string sasBlobToken;

    // Get a reference to a blob within the container.
    // Note that the blob may not exist yet, but a SAS can still be created for it.
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    if (policyName == null)
    {
        // Create a new access policy and define its constraints.
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad hoc SAS, and
        // to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request.
            // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Read |
                SharedAccessBlobPermissions.Write |
                SharedAccessBlobPermissions.Create
        };

        // Generate the shared access signature on the blob, setting the constraints directly on the signature.
        sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);

        Console.WriteLine("SAS for blob (ad hoc): {0}", sasBlobToken);
        Console.WriteLine();
    }
    else
    {
        // Generate the shared access signature on the blob. In this case, all of the constraints for the
        // shared access signature are specified on the container's stored access policy.
        sasBlobToken = blob.GetSharedAccessSignature(null, policyName);

        Console.WriteLine("SAS for blob (stored access policy): {0}", sasBlobToken);
        Console.WriteLine();
    }

    // Return the URI string for the blob, including the SAS token.
    return blob.Uri + sasBlobToken;
}
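Similarly, a client holding only the blob SAS URI can exercise the read, write, and create permissions granted by the ad hoc SAS above (a usage sketch, assuming an async context):
C#
// Usage sketch: a client builds a blob reference from the SAS URI alone and reads/writes it.
string blobSasUri = GetBlobSasUri(container, "sas-sample.txt");
CloudBlockBlob blobWithSas = new CloudBlockBlob(new Uri(blobSasUri));

await blobWithSas.UploadTextAsync("Written with a blob SAS.");
Console.WriteLine(await blobWithSas.DownloadTextAsync());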
Conclusion
Shared access signatures are useful for providing limited permissions to your
storage account to clients that should not have the account key. As such, they are a
vital part of the security model for any application using Azure Storage. If you follow
the best practices listed here, you can use SAS to provide greater flexibility of
access to resources in your storage account, without compromising the security of
your application.

56 Container related ops

You can't perform container-related operations (with the exception of listing blobs) using a Shared
Access Signature. You need to use the account key to perform operations on a
container. From this page: http://msdn.microsoft.com/en-us/library/azure/jj721951.aspx
Supported operations using shared access signatures include:

Reading and writing page or block blob content, block lists, properties, and metadata

Deleting, leasing, and creating a snapshot of a blob

Listing the blobs within a container
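To make the limitation concrete, the following sketch uses a container SAS that grants List permission: listing blobs succeeds, while a container-level operation such as deleting the container is rejected. The containerSasUri value is assumed to come from something like the GetContainerSasUri method shown earlier:
C#
// Sketch: listing blobs works with a container SAS, but container-level operations do not.
// 'containerSasUri' is assumed to be a container SAS URI that grants List permission.
CloudBlobContainer containerWithSas = new CloudBlobContainer(new Uri(containerSasUri));

// Supported with a SAS: listing the blobs within the container.
BlobResultSegment segment = await containerWithSas.ListBlobsSegmentedAsync(null);
foreach (IListBlobItem item in segment.Results)
{
    Console.WriteLine(item.Uri);
}

try
{
    // Not supported with a SAS: deleting the container requires the account key.
    await containerWithSas.DeleteAsync();
}
catch (StorageException e)
{
    Console.WriteLine("Container delete failed as expected: HTTP {0}", e.RequestInformation.HttpStatusCode);
}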

57 Emulator express
