
Contents

Azure Migrate Documentation


Overview
About Azure Migrate
Tutorials
Discover and assess VMware VMs
Concepts
About the collector
Collector version upgrades
Assessment calculations
About dependency visualization
How-to guides
Migration best practices
Best practices for security and management after migration
Best practices for networking after migration
Best practices for costing and sizing after migration
Contoso migration series
1. Contoso-Migration overview
2. Contoso-Deploy an Azure infrastructure
3. Contoso-Assess on-premises resources for migration to Azure
4. Contoso-Rehost an app on Azure VMs and Azure SQL Database Managed Instance
5. Contoso-Rehost an app on Azure VMs
6. Contoso-Rehost an app on Azure VMs and Azure SQL Server AlwaysOn Availability Groups
7. Contoso-Rehost a Linux app on Azure VMs
8. Contoso-Rehost a Linux app on Azure VMs and Azure MySQL
9. Contoso-Refactor an app on Azure Web Apps and Azure SQL Database
10. Contoso-Refactor a Linux app on Azure Web Apps and Azure MySQL
11. Contoso-Migrate TFS to Azure DevOps Services
12. Contoso-Rearchitect an app on Azure Containers and Azure SQL Database
13. Contoso-Rebuild an app in Azure
14. Contoso-Scale a migration
Discover and assess a large environment
Group machines
Group machines using machine dependencies
Refine a group using group dependencies
Customize an assessment
Migrate machines after assessment
Automate migration of large number of VMs
Troubleshoot Azure Migrate
Resources
FAQ
REST API
Resource Manager template
Pricing
UserVoice
Forum
Blog
Azure Migrate
Azure Roadmap
About Azure Migrate
4/5/2019 • 7 minutes to read

The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the
migration suitability of on-premises machines, performs performance-based sizing, and provides cost
estimations for running on-premises machines in Azure. If you're contemplating lift-and-shift migrations, or are
in the early assessment stages of migration, this service is for you. After the assessment, you can use services
such as Azure Site Recovery and Azure Database Migration Service to migrate the machines to Azure.

Why use Azure Migrate?


Azure Migrate helps you to:
Assess Azure readiness: Assess whether your on-premises machines are suitable for running in Azure.
Get size recommendations: Get size recommendations for Azure VMs based on the performance history of
on-premises VMs.
Estimate monthly costs: Get estimated costs for running on-premises machines in Azure.
Migrate with high confidence: Visualize dependencies of on-premises machines to create groups of
machines that you will assess and migrate together.

Current limitations
You can only assess on-premises VMware virtual machines (VMs) for migration to Azure VMs. The
VMware VMs must be managed by vCenter Server (version 5.5, 6.0, 6.5, or 6.7).
Support for Hyper-V is currently in preview with production support. If you're interested in trying it out,
sign up here.
For assessment of physical servers, you can use partner tools.
You can discover up to 1500 VMs in a single discovery and in a single project. A preview release allows
discovery of up to 10,000 VMware VMs in a single project using a single appliance. If you're interested in
trying it out, sign up here.
If you want to discover a larger environment, you can split the discovery and create multiple projects.
Learn more. Azure Migrate supports up to 20 projects per subscription.
Azure Migrate only supports managed disks for migration assessment.
You can only create an Azure Migrate project in the following geographies. However, this doesn't restrict
your ability to create assessments for other target Azure locations.

GEOGRAPHY STORAGE LOCATION

Azure Government: US Gov Virginia
Asia: Southeast Asia or East Asia
Europe: North Europe or West Europe
United States: East US or West Central US


The geography associated with the migration project is used to store the metadata discovered from the
on-premises environment. Metadata is stored in one of the regions based on the geography specified for
the migration project. If you use dependency visualization by creating a new Log Analytics workspace, the
workspace is created in the same region as the project.
The dependency visualization functionality is not available in Azure Government.

What do I need to pay for?


Learn more about Azure Migrate pricing.

What's in an assessment?
Assessment settings can be customized based on your needs. Assessment properties are summarized in the
following table.

PROPERTY DETAILS

Target location: The Azure location to which you want to migrate. Azure Migrate currently supports 33 regions as migration target locations (check regions). By default, the target region is set to East US.

Storage type: The type of managed disks you want to allocate for all VMs that are part of the assessment. If the sizing criterion is as on-premises sizing, you can specify the target disk type as premium disks (the default), standard SSD disks, or standard HDD disks. For performance-based sizing, you can additionally select Automatic, which bases the disk sizing recommendation on the performance data of the VMs. For example, if you want to achieve a single-instance VM SLA of 99.9%, you can specify the storage type as Premium managed disks, so that all disks in the assessment are recommended as Premium managed disks. Note that Azure Migrate only supports managed disks for migration assessment.

Reserved Instances: Whether you have reserved instances in Azure. Azure Migrate estimates the cost accordingly.

Sizing criterion: Sizing can be based on the performance history of the on-premises VMs (the default), or as on-premises, without considering performance history.

Performance history: By default, Azure Migrate evaluates the performance of on-premises machines using performance history for the last day, with a 95th percentile value.

Comfort factor: Azure Migrate considers a buffer (comfort factor) during assessment. This buffer is applied on top of machine utilization data for VMs (CPU, memory, disk, and network). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM. With a comfort factor of 2.0x, the result is a 4-core VM instead. The default comfort setting is 1.3x.

VM series: The VM series used for size estimations. For example, if you have a production environment that you don't plan to migrate to A-series VMs in Azure, you can exclude A-series from the list of series. Sizing is based on the selected series only.

Currency: Billing currency. The default is US dollars.

Discount (%): Any subscription-specific discount you receive on top of the Azure offer. The default setting is 0%.

VM uptime: If your VMs won't run 24x7 in Azure, you can specify the duration (number of days per month and number of hours per day) for which they will run, and the cost estimates are adjusted accordingly. The default value is 31 days per month and 24 hours per day.

Azure offer: The Azure offer you're enrolled in. Azure Migrate estimates the cost accordingly.

Azure Hybrid Benefit: Whether you have Software Assurance and are eligible for Azure Hybrid Benefit with discounted costs.
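To make the sizing and cost properties above concrete, here is a minimal sketch of how a comfort factor and the VM uptime setting could feed into a core-count and monthly-cost estimate. The function names and the flat hourly rate are illustrative assumptions, not Azure Migrate's actual algorithm or pricing logic, which also considers memory, disk, network, and the selected VM series.

```python
import math

def recommended_cores(cores: int, utilization: float, comfort_factor: float) -> int:
    """Apply average utilization and a comfort buffer to on-premises cores.

    Illustrative only: the real assessment also sizes memory, disk, and
    network, and maps the result to concrete Azure VM series sizes.
    """
    effective = cores * utilization * comfort_factor
    return max(1, math.ceil(effective))

def monthly_compute_cost(hourly_rate: float, days_per_month: int = 31,
                         hours_per_day: int = 24) -> float:
    """Scale a flat hourly rate by the configured VM uptime."""
    return hourly_rate * days_per_month * hours_per_day

# A 10-core VM at 20% utilization with a 2.0x comfort factor, as in the
# table's example:
print(recommended_cores(10, 0.20, 2.0))   # 4 cores
# Uptime of 12 hours/day for 22 days/month at a hypothetical $0.10/hour:
print(round(monthly_compute_cost(0.10, days_per_month=22, hours_per_day=12), 2))  # 26.4
```

With the default 1.3x comfort factor, the same VM would round up from 2.6 to a 3-core recommendation.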

How does Azure Migrate work?


1. You create an Azure Migrate project.
2. Azure Migrate uses an on-premises VM, called the collector appliance, to discover information about your
on-premises machines. To create the appliance, you download a setup file in Open Virtualization
Appliance (.ova) format, and import it as a VM on your on-premises vCenter Server.
3. You connect to the VM from the vCenter Server, and specify a new password for it while connecting.
4. You run the collector on the VM to initiate discovery.
5. The collector collects VM metadata using the VMware PowerCLI cmdlets. Discovery is agentless, and
doesn't install anything on VMware hosts or VMs. The collected metadata includes VM information
(cores, memory, disks, disk sizes, and network adapters). It also collects performance data for VMs,
including CPU and memory usage, disk IOPS, disk throughput (MBps), and network output (MBps).
6. The metadata is pushed to the Azure Migrate project. You can view it in the Azure portal.
7. For the purposes of assessment, you gather the discovered VMs into groups. For example, you might
group VMs that run the same application. For more precise grouping, you can use dependency
visualization to view dependencies of a specific machine, or for all machines in a group and refine the
group.
8. After a group is defined, you create an assessment for it.
9. After the assessment finishes, you can view it in the portal, or download it in Excel format.

What are the port requirements?


The table summarizes the ports needed for Azure Migrate communications.

COMPONENT COMMUNICATES WITH DETAILS

Collector -> Azure Migrate service: The collector connects to the service over SSL port 443.

Collector -> vCenter Server: By default, the collector connects to vCenter Server on port 443. If the server listens on a different port, configure it as an outgoing port on the collector VM.

On-premises VM -> Log Analytics workspace: The Microsoft Monitoring Agent (MMA) uses TCP port 443 to connect to Azure Monitor logs. You only need this port if you're using dependency visualization, which requires the MMA agent.
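A quick way to pre-check these outbound connections from the collector's network is a plain TCP reachability test. This sketch uses only Python's standard library; the host names are examples (replace the vCenter name with your own), and a successful TCP connect only proves the port is open, not that the service will accept the traffic.

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example targets: the Azure portal and a hypothetical on-premises
# vCenter Server name.
for host in ("portal.azure.com", "vcenter.contoso.local"):
    print(host, "reachable" if can_reach(host) else "unreachable")
```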

What happens after assessment?


After you've assessed on-premises machines, you can use a couple of tools to perform the migration:
Azure Site Recovery: You can use Azure Site Recovery to migrate to Azure. To do this, you prepare the
Azure components you need, including a storage account and virtual network. On-premises, you prepare
your VMware environment. When everything's prepared, you set up and enable replication to Azure, and
migrate the VMs. Learn more.
Azure Database Migration Service: If on-premises machines are running a database such as SQL Server,
MySQL, or Oracle, you can use the Azure Database Migration Service to migrate them to Azure.
Want to learn more from community experts?
Visit the Azure Migrate MSDN forum or Stack Overflow

Need help? Contact us.


If you have questions or need help, create a support request. If your support request requires deep technical
guidance, visit Azure Support Plans.

Next steps
Follow the tutorial to create an assessment for an on-premises VMware VM.
Review frequently asked questions about Azure Migrate.
Discover and assess on-premises VMware VMs for
migration to Azure
4/10/2019 • 13 minutes to read

The Azure Migrate service assesses on-premises workloads for migration to Azure.
In this tutorial, you learn how to:
Create an account that Azure Migrate uses to discover on-premises VMs
Create an Azure Migrate project.
Set up an on-premises collector virtual machine (VM), to discover on-premises VMware VMs for assessment.
Group VMs and create an assessment.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
VMware: The VMs that you plan to migrate must be managed by vCenter Server running version 5.5, 6.0, 6.5,
or 6.7. Additionally, you need one ESXi host running version 5.5 or higher to deploy the collector VM.
vCenter Server account: You need a read-only account to access the vCenter Server. Azure Migrate uses this
account to discover the on-premises VMs.
Permissions: On the vCenter Server, you need permissions to create a VM by importing a file in .OVA format.

Create an account for VM discovery


Azure Migrate needs access to VMware servers to automatically discover VMs for assessment. Create a VMware
account with the following properties. You specify this account during Azure Migrate setup.
User type: At least a read-only user
Permissions: Data Center object –> Propagate to Child Object, role=Read-only
Details: User assigned at datacenter level, and has access to all the objects in the datacenter.
To restrict access, assign the No access role, with Propagate to child object, to the child objects (vSphere
hosts, datastores, VMs, and networks).

Sign in to the Azure portal


Sign in to the Azure portal.

Create a project
1. In the Azure portal, click Create a resource.
2. Search for Azure Migrate, and select the service Azure Migrate in the search results. Then click Create.
3. Specify a project name, and the Azure subscription for the project.
4. Create a new resource group.
5. Specify the geography in which you want to create the project, then click Create. You can only create an Azure
Migrate project in the following geographies. However, you can still plan your migration for any target Azure
location. The geography specified for the project is only used to store the metadata gathered from on-premises
VMs.
GEOGRAPHY STORAGE LOCATION

Azure Government: US Gov Virginia
Asia: Southeast Asia
Europe: North Europe or West Europe
United States: East US or West Central US

Download the collector appliance


Azure Migrate creates an on-premises VM known as the collector appliance. This VM discovers on-premises
VMware VMs, and sends metadata about them to the Azure Migrate service. To set up the collector appliance, you
download an .OVA file, and import it to the on-premises vCenter server to create the VM.
1. In the Azure Migrate project, click Getting Started > Discover & Assess > Discover Machines.
2. In Discover machines, click Download to download the appliance.
The Azure Migrate appliance communicates with vCenter Server, and continuously profiles the on-
premises environment to gather real-time utilization data for each VM. It collects peak counters for each
metric (CPU utilization, memory utilization, and so on). This model doesn't depend on the statistics settings
of vCenter Server for performance data collection. You can stop the continuous profiling at any time from
the appliance.

NOTE
The one-time discovery appliance is now deprecated. That method relied on vCenter Server's statistics settings for
performance data point availability, and collected average performance counters, which resulted in under-sizing of
VMs for migration to Azure.

Quick assessments: With the continuous discovery appliance, once the discovery is complete (it takes a
couple of hours, depending on the number of VMs), you can immediately create assessments. Because
performance data collection starts when you kick off discovery, if you're looking for quick assessments,
select the sizing criterion in the assessment as "as on-premises". For performance-based assessments, it's
advised to wait at least a day after kicking off discovery to get reliable size recommendations.
The appliance only collects performance data continuously; it doesn't detect configuration changes in
the on-premises environment (that is, VM addition, deletion, disk addition, and so on). If there is a
configuration change in the on-premises environment, you can do the following to reflect the changes in
the portal:
Addition of items (VMs, disks, cores, and so on): To reflect these changes in the Azure portal, stop
the discovery from the appliance, and then start it again. This ensures that the changes are
updated in the Azure Migrate project.
Deletion of VMs: Because of the way the appliance is designed, deletion of VMs isn't reflected even if
you stop and start the discovery. Data from subsequent discoveries is appended to older
discoveries, not overridden. In this case, you can simply ignore the VM in the portal, by
removing it from your group and recalculating the assessment.
3. In Copy project credentials, copy the project ID and key. You need these when you configure the
collector.

Verify the collector appliance


Check that the .OVA file is secure, before you deploy it.
1. On the machine to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the OVA:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]
Example usage: C:\>CertUtil -HashFile C:\AzureMigrate\AzureMigrate.ova SHA256
3. The generated hash should match these settings.
Continuous discovery
For OVA version 1.0.10.11

ALGORITHM HASH VALUE

MD5 5f6b199d8272428ccfa23543b0b5f600

SHA1 daa530de6e8674a66a728885a7feb3b0a2e8ccb0

SHA256 85da50a21a7a6ca684418a87ccc1dd4f8aab30152c438a17b216ec401ebb3a21

For OVA version 1.0.10.9

ALGORITHM HASH VALUE

MD5 169f6449cc1955f1514059a4c30d138b

SHA1 f8d0a1d40c46bbbf78cd0caa594d979f1b587c8f

SHA256 d68fe7d94be3127eb35dd80fc5ebc60434c8571dcd0e114b87587f24d6b4ee4d

For OVA version 1.0.10.4

ALGORITHM HASH VALUE

MD5 2ca5b1b93ee0675ca794dd3fd216e13d

SHA1 8c46a52b18d36e91daeae62f412f5cb2a8198ee5

SHA256 3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be

One-time discovery (now deprecated)


This model is now deprecated; support for existing appliances will be provided.
For OVA version 1.0.9.15

ALGORITHM HASH VALUE

MD5 e9ef16b0c837638c506b5fc0ef75ebfa

SHA1 37b4b1e92b3c6ac2782ff5258450df6686c89864

SHA256 8a86fc17f69b69968eb20a5c4c288c194cdcffb4ee6568d85ae5ba96835559ba
For OVA version 1.0.9.14

ALGORITHM HASH VALUE

MD5 6d8446c0eeba3de3ecc9bc3713f9c8bd

SHA1 e9f5bdfdd1a746c11910ed917511b5d91b9f939f

SHA256 7f7636d0959379502dfbda19b8e3f47f3a4744ee9453fc9ce548e6682a66f13c

For OVA version 1.0.9.12

ALGORITHM HASH VALUE

MD5 d0363e5d1b377a8eb08843cf034ac28a

SHA1 df4a0ada64bfa59c37acf521d15dcabe7f3f716b

SHA256 f677b6c255e3d4d529315a31b5947edfe46f45e4eb4dbc8019d68d1d1b337c2e
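If you prefer not to use CertUtil, you can compute the hash another way. This Python sketch checks a downloaded file against a published SHA256 value; the file path in the commented example is a placeholder for wherever you saved the .OVA.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA256 hex digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Compare the file's digest, case-insensitively, against a published hash."""
    return sha256_of(path) == expected.strip().lower()

# Example (path depends on where you downloaded the appliance):
# verify(r"C:\AzureMigrate\AzureMigrate.ova",
#        "85da50a21a7a6ca684418a87ccc1dd4f8aab30152c438a17b216ec401ebb3a21")
```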

Create the collector VM


Import the downloaded file to the vCenter Server.
1. In the vSphere Client console, click File > Deploy OVF Template.

2. In the Deploy OVF Template Wizard > Source, specify the location of the .ova file.
3. In Name and Location, specify a friendly name for the collector VM, and the inventory object in which the
VM will be hosted.
4. In Host/Cluster, specify the host or cluster on which the collector VM will run.
5. In Storage, specify the storage destination for the collector VM.
6. In Disk Format, specify the disk type and size.
7. In Network Mapping, specify the network to which the collector VM will connect. The network needs
internet connectivity, to send metadata to Azure.
8. Review and confirm the settings, then click Finish.

Run the collector to discover VMs


1. In the vSphere Client console, right-click the VM > Open Console.
2. Provide the language, time zone, and password preferences for the appliance.
3. On the desktop, click the Run collector shortcut.
4. Click Check for updates in the top bar of the collector UI and verify that the collector is running on the
latest version. If not, you can choose to download the latest upgrade package from the link and update the
collector.
5. In the Azure Migrate Collector, open Set up prerequisites.
Select the Azure cloud to which you plan to migrate (Azure Global or Azure Government).
Accept the license terms, and read the third-party information.
The collector checks that the VM has internet access.
If the VM accesses the internet via a proxy, click Proxy settings, and specify the proxy address and
listening port. Specify credentials if the proxy needs authentication. Learn more about the internet
connectivity requirements and the list of URLs that the collector accesses.

NOTE
The proxy address needs to be entered in the form http://ProxyIPAddress or http://ProxyFQDN. Only HTTP
proxy is supported. If you have an intercepting proxy, the internet connection might initially fail if you
haven't imported the proxy certificate. Learn how to fix this by importing the proxy certificate as a
trusted certificate on the collector VM.

The collector checks that the collector service is running. The service is installed by default on the
collector VM.
Download and install VMware PowerCLI.
6. In Specify vCenter Server details, do the following:
Specify the name (FQDN) or IP address of the vCenter server.
In User name and Password, specify the read-only account credentials that the collector will use to
discover VMs on the vCenter server.
In Collection scope, select a scope for VM discovery. The collector can only discover VMs within
the specified scope. Scope can be set to a specific folder, datacenter, or cluster. It shouldn't contain
more than 1500 VMs. Learn more about how you can discover a larger environment.
NOTE
Collection scope lists only folders of hosts and clusters. Folders of VMs cannot be directly selected as
collection scope. However, you can discover by using a vCenter account that has access to the individual
VMs. Learn more about how to scope to a folder of VMs.

7. In Specify migration project, specify the Azure Migrate project ID and key that you copied from the
portal. If you didn't copy them, open the Azure portal from the collector VM. In the project Overview page,
click Discover Machines, and copy the values.
8. In View collection progress, monitor discovery status. Learn more about what data is collected by the
Azure Migrate collector.

NOTE
The collector only supports "English (United States)" as the operating system language and the collector interface language.
If you change the settings on a machine you want to assess, trigger discovery again before you run the assessment. In the
collector, use the Start collection again option to do this. After the collection is done, select the Recalculate option for the
assessment in the portal, to get updated assessment results.

Verify VMs in the portal


The collector appliance continuously profiles the on-premises environment, and sends performance data at
one-hour intervals. You can view the machines in the portal an hour after kicking off the discovery.
1. In the migration project, click Manage > Machines.
2. Check that the VMs you want to discover appear in the portal.

Create and view an assessment


After VMs are discovered in the portal, you group them and create assessments. You can create as
on-premises assessments immediately after the VMs are discovered. It's recommended to wait at least a day
before creating any performance-based assessments, to get reliable size recommendations.
1. In the project Overview page, click +Create assessment.
2. Click View all to review the assessment properties.
3. Create the group, and specify a group name.
4. Select the machines that you want to add to the group.
5. Click Create Assessment, to create the group and the assessment.
6. After the assessment is created, view it in Overview > Dashboard.
7. Click Export assessment, to download it as an Excel file.

NOTE
It is strongly recommended to wait at least a day after starting discovery before creating an assessment. If you want
to update an existing assessment with the latest performance data, use the Recalculate command on the
assessment.

Assessment details
An assessment includes information about whether the on-premises VMs are compatible with Azure, the right
VM size for running each VM in Azure, and the estimated monthly Azure costs.
Azure readiness
The Azure readiness view in the assessment shows the readiness status of each VM. Depending on the properties
of the VM, each VM can be marked as:
Ready for Azure
Conditionally ready for Azure
Not ready for Azure
Readiness unknown
For VMs that are ready, Azure Migrate recommends a VM size in Azure. The size recommendation done by Azure
Migrate depends on the sizing criterion specified in the assessment properties. If the sizing criterion is
performance-based sizing, the size recommendation is done by considering the performance history of the VMs
(CPU and memory) and disks (IOPS and throughput). If the sizing criterion is 'as on-premises', Azure Migrate
does not consider the performance data for the VM and disks. The recommendation for the VM size in Azure is
done by looking at the size of the VM on-premises and the disk sizing is done based on the Storage type specified
in the assessment properties (default is premium disks). Learn more about how sizing is done in Azure Migrate.
For VMs that aren't ready or conditionally ready for Azure, Azure Migrate explains the readiness issues, and
provides remediation steps.
The VMs for which Azure Migrate cannot identify Azure readiness (due to data unavailability) are marked as
readiness unknown.
In addition to Azure readiness and sizing, Azure Migrate also suggests tools that you can use for migrating the
VM. This requires a deeper discovery of the on-premises environment. Learn more about how you can do a
deeper discovery by installing agents on the on-premises machines. If the agents aren't installed on the on-
premises machines, lift-and-shift migration using Azure Site Recovery is suggested. If the agents are installed,
Azure Migrate looks at the processes running inside the machine and identifies whether
the machine is a database machine. If it is, Azure Database Migration
Service is suggested; otherwise, Azure Site Recovery is suggested as the migration tool.
Monthly cost estimate
This view shows the total compute and storage cost of running the VMs in Azure along with the details for each
machine. Cost estimates are calculated considering the size recommendations done by Azure Migrate for a
machine, its disks, and the assessment properties.

NOTE
The cost estimation provided by Azure Migrate is for running the on-premises VMs as Azure Infrastructure as a service
(IaaS) VMs. Azure Migrate does not consider any Platform as a service (PaaS) or Software as a service (SaaS) costs.

Estimated monthly costs for compute and storage are aggregated for all VMs in the group.
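The aggregation itself is simple addition. This sketch assumes per-machine estimates are already available; the field names are illustrative, not the schema of the assessment export.

```python
def group_monthly_cost(machines: list[dict]) -> dict:
    """Sum per-machine compute and storage estimates for a group.

    Each machine dict is assumed to carry 'compute' and 'storage'
    monthly cost estimates already produced by the assessment.
    """
    compute = sum(m["compute"] for m in machines)
    storage = sum(m["storage"] for m in machines)
    return {"compute": compute, "storage": storage, "total": compute + storage}

group = [
    {"name": "web01", "compute": 120.0, "storage": 15.5},
    {"name": "sql01", "compute": 340.0, "storage": 80.0},
]
print(group_monthly_cost(group))  # {'compute': 460.0, 'storage': 95.5, 'total': 555.5}
```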

Confidence rating
Each performance-based assessment in Azure Migrate is associated with a confidence rating that ranges from 1
star to 5 stars (1 star being the lowest and 5 stars being the highest). The confidence rating is assigned to an
assessment based on the availability of the data points needed to compute the assessment. The confidence rating
helps you estimate the reliability of the size recommendations provided by Azure Migrate.
Confidence rating is not applicable to as on-premises assessments.
For performance-based sizing, Azure Migrate needs CPU and memory utilization data for the VM. Additionally,
for every disk attached to the VM, it needs disk IOPS and throughput data, and for each network adapter
attached to a VM, it needs network in/out data. If any of these utilization numbers aren't available in vCenter
Server, the size recommendations may not be reliable. Depending on the percentage of data points available, the
confidence rating for the assessment is provided as follows:

AVAILABILITY OF DATA POINTS CONFIDENCE RATING

0%-20% 1 Star

21%-40% 2 Star

41%-60% 3 Star

61%-80% 4 Star

81%-100% 5 Star
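The star-rating bands above can be expressed as a simple lookup. This is a sketch of the published mapping, not Azure Migrate's internal code:

```python
def confidence_rating(availability_pct: float) -> int:
    """Map the percentage of available data points to a 1-5 star rating,
    following the published bands (0-20% -> 1 star, ..., 81-100% -> 5 stars)."""
    bands = [(20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]
    for upper, stars in bands:
        if availability_pct <= upper:
            return stars
    raise ValueError("availability must be between 0 and 100")

print(confidence_rating(15))   # 1
print(confidence_rating(75))   # 4
print(confidence_rating(100))  # 5
```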

An assessment may not have all the data points available for one of the following reasons:
You did not profile your environment for the duration for which you are creating the assessment. For
example, if you create the assessment with the performance duration set to one day, you need to wait at
least a day after you start the discovery for all the data points to be collected.
Some VMs were shut down during the period for which the assessment is calculated. If any VMs were
powered off for some duration, performance data can't be collected for that period.
Some VMs were created during the period for which the assessment is calculated. For example, if you
create an assessment for the performance history of the last month, but some VMs were created in the
environment only a week ago, the performance history of the new VMs won't exist for the entire duration.

NOTE
If the confidence rating of any assessment is below 5 stars, wait at least a day for the appliance to profile the
environment, and then recalculate the assessment. If that isn't possible, performance-based sizing may not be
reliable, and it's recommended to switch to as on-premises sizing by changing the assessment properties.

Next steps
Learn how to customize an assessment based on your requirements.
Learn how to create high-confidence assessment groups using machine dependency mapping.
Learn more about how assessments are calculated.
Learn how to discover and assess a large VMware environment.
Learn more in the Azure Migrate FAQ.
About the Collector appliance
4/26/2019 • 13 minutes to read

This article provides information about the Azure Migrate Collector.


The Azure Migrate Collector is a lightweight appliance used to discover an on-premises vCenter
environment for the purposes of assessment with the Azure Migrate service, before migration to Azure.

Discovery method
Previously, there were two options for the collector appliance: one-time discovery and continuous discovery. The
one-time discovery model is now deprecated, because it relied on vCenter Server statistics settings for performance
data collection (it required statistics settings to be set to level 3), and collected average counters (instead of peak),
which resulted in under-sizing. The continuous discovery model ensures granular data collection, and results in
accurate sizing because it collects peak counters. Here's how it works:
The collector appliance is continuously connected to the Azure Migrate project, and continuously collects
performance data of VMs.
The collector continuously profiles the on-premises environment to gather real-time utilization data every 20
seconds.
The appliance rolls up the 20-second samples, and creates a single data point every 15 minutes.
To create the data point, the appliance selects the peak value from the 20-second samples, and sends it to Azure.
This model doesn't depend on the vCenter Server statistics settings to collect performance data.
You can stop continuous profiling at any time from the Collector.
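The rollup described above, 20-second samples reduced to one peak data point every 15 minutes, can be sketched as follows. The sample values are made up for illustration:

```python
def rollup_peaks(samples: list[float], samples_per_point: int = 45) -> list[float]:
    """Reduce 20-second utilization samples to one data point per window
    by taking the peak value: 45 samples * 20 s = one point per 15 minutes.
    A trailing partial window also yields a peak data point."""
    return [max(samples[i:i + samples_per_point])
            for i in range(0, len(samples), samples_per_point)]

# 90 made-up CPU% samples (30 minutes of data) -> two 15-minute peaks.
samples = [10.0] * 44 + [55.0] + [20.0] * 44 + [35.0]
print(rollup_peaks(samples))  # [55.0, 35.0]
```

Taking the peak (rather than the average) of each window is what keeps short utilization spikes visible in the sizing data.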
Quick assessments: With the continuous discovery appliance, once the discovery is complete (it takes a couple of
hours, depending on the number of VMs), you can immediately create assessments. Because performance data
collection starts when you kick off discovery, if you're looking for quick assessments, select the sizing criterion in
the assessment as "as on-premises". For performance-based assessments, it's advised to wait at least a day after
kicking off discovery to get reliable size recommendations.
The appliance only collects performance data continuously; it doesn't detect configuration changes in the on-
premises environment (that is, VM addition, deletion, disk addition, and so on). If there is a configuration change
in the on-premises environment, you can do the following to reflect the changes in the portal:
Addition of items (VMs, disks, cores, and so on): To reflect these changes in the Azure portal, stop the
discovery from the appliance, and then start it again. This ensures that the changes are updated in the
Azure Migrate project.
Deletion of VMs: Because of the way the appliance is designed, deletion of VMs isn't reflected even if you stop
and start the discovery. Data from subsequent discoveries is appended to older discoveries, not
overridden. In this case, you can simply ignore the VM in the portal, by removing it from your
group and recalculating the assessment.

NOTE
The one-time discovery appliance is now deprecated. That method relied on vCenter Server's statistics settings for
performance data point availability, and collected average performance counters, which resulted in under-sizing of VMs for
migration to Azure.
Deploying the Collector
You deploy the Collector appliance using an OVF template:
You download the OVF template from an Azure Migrate project in the Azure portal. You import the
downloaded file to vCenter Server, to set up the Collector appliance VM.
The OVF template sets up a VM with 8 cores, 16 GB of RAM, and one 80-GB disk. The operating
system is Windows Server 2016 (64-bit).
When you run the Collector, a number of prerequisite checks run to make sure that the Collector can
connect to Azure Migrate.
Learn more about creating the Collector.

Collector prerequisites
The Collector must pass a few prerequisite checks to ensure it can connect to the Azure Migrate service over the
internet, and upload discovered data.
Verify Azure cloud: The Collector needs to know the Azure cloud to which you are planning to migrate.
Select Azure Government if you are planning to migrate to Azure Government cloud.
Select Azure Global if you are planning to migrate to commercial Azure cloud.
Based on the cloud specified here, the appliance sends discovered metadata to the respective
endpoints.
Check internet connection: The Collector can connect to the internet directly, or via a proxy.
The prerequisite check verifies connectivity to required and optional URLs.
If you have a direct connection to the internet, no specific action is required, other than making sure that
the Collector can reach the required URLs.
If you're connecting via a proxy, note the requirements below.
Verify time synchronization: The Collector should be synchronized with an internet time server to ensure that
requests to the service are authenticated.
The portal.azure.com URL should be reachable from the Collector so that the time can be validated.
If the machine isn't synchronized, you need to change the clock time on the Collector VM to match the
current time. To do this, open an admin prompt on the VM, run w32tm /tz to check the time zone, and run
w32tm /resync to synchronize the time.
Check collector service running: The Azure Migrate Collector service should be running on the Collector
VM.
This service is started automatically when the machine boots.
If the service isn't running, start it from the Control Panel.
The Collector service connects to vCenter Server, collects the VM metadata and performance data, and
sends it to the Azure Migrate service.
Check VMware PowerCLI 6.5 installed: The VMware PowerCLI 6.5 PowerShell module must be installed on
the Collector VM, so that it can communicate with vCenter Server.
If the Collector can access the URLs required to install the module, it's installed automatically during
Collector deployment.
If the Collector can't install the module during deployment, you must install it manually.
Check connection to vCenter Server: The Collector must be able to connect to vCenter Server and query for VMs,
their metadata, and performance counters. Verify the prerequisites for connecting.
Connect to the internet via a proxy
If the proxy server requires authentication, you can specify the username and password when you set up the
Collector.
The IP address/FQDN of the proxy server should be specified as http://IPaddress or http://FQDN.
Only HTTP proxy is supported. HTTPS-based proxy servers aren't supported by the Collector.
If the proxy server is an intercepting proxy, you must import the proxy certificate to the Collector VM.
1. In the collector VM, go to Start Menu > Manage computer certificates.
2. In the Certificates tool, under Certificates - Local Computer, find Trusted Publishers >
Certificates.

3. Copy the proxy certificate to the collector VM. You might need to obtain it from your network admin.
4. Double-click to open the certificate, and click Install Certificate.
5. In the Certificate Import Wizard > Store Location, choose Local Machine.
6. Select Place all certificates in the following store > Browse > Trusted Publishers. Click Finish
to import the certificate.

7. Verify that the certificate is imported as expected, and that the internet connectivity
prerequisite check now succeeds.
URLs for connectivity
The connectivity check is validated by connecting to a list of URLs.

| URL | Details | Prerequisite check |
|---|---|---|
| *.portal.azure.com | Applicable to Azure Global. Checks connectivity with the Azure service, and time synchronization. | Access to URL required. Prerequisites check fails if there's no connectivity. |
| *.portal.azure.us | Applicable only to Azure Government. Checks connectivity with the Azure service, and time synchronization. | Access to URL required. Prerequisites check fails if there's no connectivity. |
| *.oneget.org:443, *.windows.net:443, *.windowsazure.com:443, *.powershellgallery.com:443, *.msecnd.net:443, *.visualstudio.com:443 | Used to download the PowerShell vCenter PowerCLI module. | Access to URLs is required, but the prerequisites check won't fail. Automatic module installation on the Collector VM will fail; you'll need to install the module manually on a machine that has internet connectivity and then copy the modules to the appliance. Learn more by going to Step#4 in this troubleshooting guide. |
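Conceptually, the prerequisite check treats the two groups of URLs differently, which can be sketched as follows (illustrative Python, not the appliance's implementation; the reachability probe is injected so the logic runs without network access):

```python
# Required URLs fail the prerequisite check when unreachable; the PowerCLI
# download URLs only produce a warning, since the module can be installed
# manually.
REQUIRED = {
    "AzureGlobal": ["portal.azure.com"],
    "AzureGovernment": ["portal.azure.us"],
}
OPTIONAL = ["oneget.org", "windows.net", "windowsazure.com",
            "powershellgallery.com", "msecnd.net", "visualstudio.com"]

def check_connectivity(cloud, reachable):
    """reachable: callable(url) -> bool, injected for testability."""
    failures = [u for u in REQUIRED[cloud] if not reachable(u)]
    warnings = [u for u in OPTIONAL if not reachable(u)]
    return {"passed": not failures, "failures": failures, "warnings": warnings}

# A proxy that blocks the PowerShell Gallery: the check passes with a warning.
result = check_connectivity("AzureGlobal",
                            lambda u: u != "powershellgallery.com")
print(result["passed"], result["warnings"])  # True ['powershellgallery.com']
```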
Install VMware PowerCLI module manually
1. Install the module using these steps. These steps describe both online and offline installation.
2. If the Collector VM is offline and you install the module on a different machine with internet access, you need to
copy the VMware.* files from that machine to the Collector VM.
3. After installation, you can restart the prerequisites checks to confirm that PowerCLI is installed.
Connect to vCenter Server
The Collector connects to the vCenter Server and queries for VM metadata, and performance counters. Here's
what you need for the connection.
Only vCenter Server versions 5.5, 6.0, 6.5 and 6.7 are supported.
You need a read-only account with the permissions summarized below for discovery. Only datacenters
accessible with the account can be accessed for discovery.
By default you connect to vCenter Server with an FQDN or IP address. If vCenter Server listens on a different
port, you connect to it using the form IPAddress:Port_Number or FQDN:Port_Number.
The Collector should have a network line of sight to the vCenter server.
Account permissions

| Account | Permissions |
|---|---|
| At least a read-only user account | Data Center object –> Propagate to Child Object, role=Read-only |

Collector communications
The collector communicates as summarized in the following diagram and table.

| Collector communicates with | Port | Details |
|---|---|---|
| Azure Migrate service | TCP 443 | The Collector communicates with the Azure Migrate service over SSL 443. |
| vCenter Server | TCP 443 | The Collector must be able to communicate with the vCenter Server. By default, it connects to vCenter on 443. If vCenter Server listens on a different port, that port should be available as an outgoing port on the Collector. |
| RDP | TCP 3389 | |

Collected metadata
NOTE
Metadata discovered by the Azure Migrate collector appliance is used to help you right-size your applications as you migrate
them to Azure, perform Azure suitability analysis, application dependency analysis, and cost planning. Microsoft does not use
this data in relation to any license compliance audit.

The collector appliance discovers the following configuration metadata for each VM. The configuration data for the
VMs is available an hour after you start discovery.
VM display name (on vCenter Server)
VM’s inventory path (the host/folder on vCenter Server)
IP address
MAC address
Operating system
Number of cores, disks, NICs
Memory size, Disk sizes
Performance counters of the VM, disk and network.
Performance counters
The collector appliance collects the following performance counters for each VM from the ESXi host at an interval
of 20 seconds. These counters are vCenter counters, and although the terminology says average, the 20-second
samples are real-time counters. The performance data for the VMs starts becoming available in the portal two
hours after you kick off the discovery. It is strongly recommended to wait at least a day before creating
performance-based assessments to get accurate right-sizing recommendations. If you need results sooner, you can
create assessments with the sizing criterion "as on-premises", which does not consider the performance data for
right-sizing.

COUNTER IMPACT ON ASSESSMENT

cpu.usage.average Recommended VM size and cost

mem.usage.average Recommended VM size and cost



virtualDisk.read.average Calculates disk size, storage cost, VM size

virtualDisk.write.average Calculates disk size, storage cost, VM size

virtualDisk.numberReadAveraged.average Calculates disk size, storage cost, VM size

virtualDisk.numberWriteAveraged.average Calculates disk size, storage cost, VM size

net.received.average Calculates VM size

net.transmitted.average Calculates VM size

The complete list of VMware counters collected by Azure Migrate is available below:

| Category | Metadata | vCenter datapoint |
|---|---|---|
| Machine Details | VM ID | vm.Config.InstanceUuid |
| Machine Details | VM name | vm.Config.Name |
| Machine Details | vCenter Server ID | VMwareClient.InstanceUuid |
| Machine Details | VM description | vm.Summary.Config.Annotation |
| Machine Details | License product name | vm.Client.ServiceContent.About.LicenseProductName |
| Machine Details | Operating system type | vm.Summary.Config.GuestFullName |
| Machine Details | Operating system version | vm.Summary.Config.GuestFullName |
| Machine Details | Boot type | vm.Config.Firmware |
| Machine Details | Number of cores | vm.Config.Hardware.NumCPU |
| Machine Details | Megabytes of memory | vm.Config.Hardware.MemoryMB |
| Machine Details | Number of disks | vm.Config.Hardware.Device.ToList().FindAll(x => x is VirtualDisk).count |
| Machine Details | Disk size list | vm.Config.Hardware.Device.ToList().FindAll(x => x is VirtualDisk) |
| Machine Details | Network adapters list | vm.Config.Hardware.Device.ToList().FindAll(x => x is VirtualEthernetCard) |
| Machine Details | CPU utilization | cpu.usage.average |
| Machine Details | Memory utilization | mem.usage.average |
| Disk Details (per disk) | Disk key value | disk.Key |
| Disk Details (per disk) | Disk unit number | disk.UnitNumber |
| Disk Details (per disk) | Disk controller key value | disk.ControllerKey.Value |
| Disk Details (per disk) | Gigabytes provisioned | virtualDisk.DeviceInfo.Summary |
| Disk Details (per disk) | Disk name | Generated using disk.UnitNumber, disk.Key, and disk.ControllerKey.Value |
| Disk Details (per disk) | Number of read operations per second | virtualDisk.numberReadAveraged.average |
| Disk Details (per disk) | Number of write operations per second | virtualDisk.numberWriteAveraged.average |
| Disk Details (per disk) | Megabytes per second of read throughput | virtualDisk.read.average |
| Disk Details (per disk) | Megabytes per second of write throughput | virtualDisk.write.average |
| Network Adapter Details (per NIC) | Network adapter name | nic.Key |
| Network Adapter Details (per NIC) | MAC address | ((VirtualEthernetCard)nic).MacAddress |
| Network Adapter Details (per NIC) | IPv4 addresses | vm.Guest.Net |
| Network Adapter Details (per NIC) | IPv6 addresses | vm.Guest.Net |
| Network Adapter Details (per NIC) | Megabytes per second of read throughput | net.received.average |
| Network Adapter Details (per NIC) | Megabytes per second of write throughput | net.transmitted.average |
| Inventory Path Details | Name | container.GetType().Name |
| Inventory Path Details | Type of child object | container.ChildType |
| Inventory Path Details | Reference details | container.MoRef |
| Inventory Path Details | Complete inventory path | container.Name with complete path |
| Inventory Path Details | Parent details | Container.Parent |
| Inventory Path Details | Folder details for each VM | ((Folder)container).ChildEntity.Type |
| Inventory Path Details | Datacenter details for each VM folder | ((Datacenter)container).VmFolder |
| Inventory Path Details | Datacenter details for each host folder | ((Datacenter)container).HostFolder |
| Inventory Path Details | Cluster details for each host | ((ClusterComputeResource)container).Host |
| Inventory Path Details | Host details for each VM | ((HostSystem)container).Vm |
Securing the Collector appliance


We recommend the following steps to secure the Collector appliance:
Don't share or misplace administrator passwords with unauthorized parties.
Shut down the appliance when not in use.
Place the appliance in a secured network.
After migration is finished, delete the appliance instance.
In addition, after migration, also delete the disk backup files (VMDKs), as the disks might have vCenter
credentials cached on them.

OS license in the collector VM


The collector comes with a Windows Server 2016 evaluation license, which is valid for 180 days. If the evaluation
period is nearing expiration for your collector VM, we recommend downloading a new OVA and creating a new appliance.

Updating the OS of the Collector VM


Although the collector appliance has an evaluation license for 180 days, you need to keep the OS
on the appliance updated to avoid automatic shutdown of the appliance.
If the Collector isn't updated for 60 days, it starts shutting down the machine automatically.
If a discovery is running, the machine won't be turned off, even if 60 days have passed. The machine will be
turned off after the discovery completes.
If you've used the Collector for more than 60 days, we recommend keeping the machine updated at all times by
running Windows update.

Upgrading the Collector appliance version


You can upgrade the Collector to the latest version without downloading the OVA again.
1. Download the latest listed upgrade package
2. To ensure that the downloaded hotfix is secure, open an Administrator command window and run the
following command to generate the hash for the ZIP file. The generated hash should match the hash
listed for the specific version:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]

(example usage C:>CertUtil -HashFile C:\AzureMigrate\CollectorUpdate_release_1.0.9.14.zip SHA256)


3. Copy the zip file to the Azure Migrate collector virtual machine (collector appliance).
4. Right-click on the zip file and select Extract All.
5. Right-click on Setup.ps1 and select Run with PowerShell and follow the instructions on screen to install the
update.
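If you prefer to script the hash check in step 2, the CertUtil step has a straightforward equivalent in, for example, Python (the file path and expected value below are placeholders):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA256 of a file, equivalent to
    `CertUtil -HashFile <file> SHA256` (lowercase hex digest)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published hash before extracting the package.
expected = "fb058205c945a83cc4a31842b9377428ff79b08247f3fb8bb4ff30c125aa47ad"
# assert file_sha256(r"C:\AzureMigrate\CollectorUpdate_release_1.0.10.14.zip") == expected
```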
Discovery process
After the appliance is set up, you can run discovery. Here's how that works:
You run a discovery by scope. All VMs in the specified vCenter inventory path will be discovered.
You set one scope at a time.
The scope can include 1500 VMs or less.
The scope can be a datacenter, folder, or ESXi host.
After connecting to vCenter Server, you specify a migration project for the collection.
VMs are discovered, and their metadata and performance data is sent to Azure. These actions are part of a
collection job.
The Collector appliance is given a specific Collector ID that's persistent for a given machine across
discoveries.
A running collection job is given a specific session ID. The ID changes for each collection job, and can be
used for troubleshooting.

Next steps
Set up an assessment for on-premises VMware VMs
Collector appliance updates
3/29/2019 • 2 minutes to read • Edit Online

This article summarizes upgrade information for the Collector appliance in Azure Migrate.
The Azure Migrate Collector is a lightweight appliance that's used to discover an on-premises vCenter
environment, for the purposes of assessment before migration to Azure. Learn more.

How to upgrade the appliance


You can upgrade the Collector to the latest version without downloading the OVA again.
1. Close all browser windows and any open files/folders in the appliance.
2. Download the latest upgrade package from the list of updates mentioned below in this article.
3. To ensure that the downloaded package is secure, open an Administrator command window and run the
following command to generate the hash for the ZIP file. The generated hash should match the hash
listed for the specific version:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]

Example: C:>CertUtil -HashFile C:\AzureMigrate\CollectorUpdate_release_1.0.9.14.zip SHA256)


4. Copy the zip file to the Collector appliance VM.
5. Right-click on the zip file > Extract All.
6. Right-click on Setup.ps1 > Run with PowerShell, and follow the installation instructions.

Collector update release history


Continuous discovery: Upgrade versions
Version 1.0.10.14 (Released on 03/29/2019)
Contains a few UI enhancements.
Hash values for upgrade package 1.0.10.14

ALGORITHM HASH VALUE

MD5 846b1eb29ef2806bcf388d10519d78e6

SHA1 6243239fa49c6b3f5305f77e9fd4426a392d33a0

SHA256 fb058205c945a83cc4a31842b9377428ff79b08247f3fb8bb4ff30c125aa47ad

Version 1.0.10.12 (Released on 03/13/2019)


Contains fixes for issues in selecting Azure cloud in the appliance.
Hash values for upgrade package 1.0.10.12
ALGORITHM HASH VALUE

MD5 27704154082344c058238000dff9ae44

SHA1 41e9e2fb71a8dac14d64f91f0fd780e0d606785e

SHA256 c6e7504fcda46908b636bfe25b8c73f067e3465b748f77e50027e66f2727c2a9

One-time discovery (now deprecated): Previous upgrade versions

NOTE
The one-time discovery appliance is now deprecated. That method relied on vCenter Server's statistics settings for
performance data point availability, and it collected average performance counters, which resulted in under-sizing of
VMs for migration to Azure.

Version 1.0.9.16 (Released on 10/29/2018)


Contains fixes for PowerCLI issues faced while setting up the appliance.
Hash values for upgrade package 1.0.9.16

ALGORITHM HASH VALUE

MD5 d2c53f683b0ec7aaf5ba3d532a7382e1

SHA1 e5f922a725d81026fa113b0c27da185911942a01

SHA256 a159063ff508e86b4b3b7b9a42d724262ec0f2315bdba8418bce95d973f80cfc

Version 1.0.9.14
Hash values for upgrade package 1.0.9.14

ALGORITHM HASH VALUE

MD5 c5bf029e9fac682c6b85078a61c5c79c

SHA1 af66656951105e42680dfcc3ec3abd3f4da8fdec

SHA256 58b685b2707f273aa76f2e1d45f97b0543a8c4d017cd27f0bdb220e6984cc90e

Version 1.0.9.13
Hash values for upgrade package 1.0.9.13

ALGORITHM HASH VALUE

MD5 739f588fe7fb95ce2a9b6b4d0bf9917e

SHA1 9b3365acad038eb1c62ca2b2de1467cb8eed37f6

SHA256 7a49fb8286595f39a29085534f29a623ec2edb12a3d76f90c9654b2f69eef87e
Next steps
Learn more about the Collector appliance.
Run an assessment for VMware VMs.
Assessment calculations
3/15/2019 • 11 minutes to read • Edit Online

Azure Migrate assesses on-premises workloads for migration to Azure. This article provides information about
how assessments are calculated.

Overview
An Azure Migrate assessment has three stages. Assessment starts with a suitability analysis, followed by sizing,
and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one.
For example, if a machine fails the Azure suitability check, it’s marked as unsuitable for Azure, and sizing and
costing won't be done.

Azure suitability analysis


Not all machines are suitable for running in the cloud, as the cloud has its own limitations and requirements. Azure
Migrate assesses each on-premises machine for migration suitability to Azure and categorizes each machine into
one of the following categories:
Ready for Azure - The machine can be migrated as-is to Azure without any changes. It will boot in Azure with
full Azure support.
Conditionally ready for Azure - The machine may boot in Azure, but may not have full Azure support. For
example, a machine with an older version of Windows Server OS is not supported in Azure. You need to be
careful before migrating these machines to Azure and follow the remediation guidance suggested in the
assessment to fix the readiness issues before you migrate.
Not ready for Azure - The machine will not boot in Azure. For example, if an on-premises machine has a disk
of size more than 4 TB attached to it, it cannot be hosted on Azure. You need to follow the remediation
guidance suggested in the assessment to fix the readiness issue before migrating to Azure. Right-sizing and
cost estimation is not done for machines that are marked as not ready for Azure.
Readiness unknown - Azure Migrate could not find the readiness of the machine due to insufficient data
available in vCenter Server.
Azure Migrate reviews the machine properties and guest operating system to identify the Azure readiness of the
on-premises machine.
Machine properties
Azure Migrate reviews the following properties of the on-premises VM to identify whether a VM can run on
Azure.

| Property | Details | Azure readiness status |
|---|---|---|
| Boot type | Azure supports VMs with boot type BIOS, not UEFI. | Conditionally ready if boot type is UEFI. |
| Cores | The number of cores in the machine must be equal to or less than the maximum number of cores (128 cores) supported for an Azure VM. If performance history is available, Azure Migrate considers the utilized cores for comparison. If a comfort factor is specified in the assessment settings, the number of utilized cores is multiplied by the comfort factor. If there's no performance history, Azure Migrate uses the allocated cores, without applying the comfort factor. | Ready if less than or equal to limits. |
| Memory | The machine memory size must be equal to or less than the maximum memory (3892 GB on Azure M series Standard_M128m) allowed for an Azure VM. Learn more. If performance history is available, Azure Migrate considers the utilized memory for comparison. If a comfort factor is specified, the utilized memory is multiplied by the comfort factor. If there's no history, the allocated memory is used, without applying the comfort factor. | Ready if within limits. |
| Storage disk | Allocated size of a disk must be 4 TB (4096 GB) or less. The number of disks attached to the machine must be 65 or less, including the OS disk. | Ready if within limits. |
| Networking | A machine must have 32 or fewer NICs attached to it. | Ready if within limits. |
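The limits in the table above can be sketched as a simple check (illustrative only; the real assessment also distinguishes utilized from allocated values and applies the comfort factor, and the exact category assigned when core or memory limits are exceeded is an assumption here):

```python
# Documented limits for an Azure VM (cores, memory in GB, disk size in GB,
# disk count including the OS disk, NIC count).
LIMITS = {"cores": 128, "memory_gb": 3892, "disk_gb": 4096,
          "disk_count": 65, "nic_count": 32}

def property_readiness(vm):
    if vm["cores"] > LIMITS["cores"] or vm["memory_gb"] > LIMITS["memory_gb"]:
        return "Not ready"           # assumed category for exceeding limits
    if any(d > LIMITS["disk_gb"] for d in vm["disk_sizes_gb"]):
        return "Not ready"           # a disk larger than 4 TB can't be hosted
    if len(vm["disk_sizes_gb"]) > LIMITS["disk_count"]:
        return "Not ready"
    if vm["nic_count"] > LIMITS["nic_count"]:
        return "Not ready"
    if vm["boot_type"] == "UEFI":    # boots, but without full Azure support
        return "Conditionally ready"
    return "Ready"

vm = {"cores": 8, "memory_gb": 64, "disk_sizes_gb": [512, 5000],
      "nic_count": 2, "boot_type": "BIOS"}
print(property_readiness(vm))  # Not ready (one disk exceeds 4 TB)
```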

Guest operating system


Along with VM properties, Azure Migrate also looks at the guest OS of the on-premises VM to identify if the VM
can run on Azure.

NOTE
Azure Migrate uses the OS specified in vCenter Server for the following analysis. Because the discovery done by Azure
Migrate is appliance-based, it has no way to verify whether the OS running inside the VM is the same as the one
specified in vCenter Server.

The following logic is used by Azure Migrate to identify the Azure readiness of the VM based on the operating
system.
| Operating system | Details | Azure readiness status |
|---|---|---|
| Windows Server 2016 & all SPs | Azure provides full support. | Ready for Azure |
| Windows Server 2012 R2 & all SPs | Azure provides full support. | Ready for Azure |
| Windows Server 2012 & all SPs | Azure provides full support. | Ready for Azure |
| Windows Server 2008 R2 with all SPs | Azure provides full support. | Ready for Azure |
| Windows Server 2008 (32-bit and 64-bit) | Azure provides full support. | Ready for Azure |
| Windows Server 2003, 2003 R2 | These operating systems have passed their end of support date and need a Custom Support Agreement (CSA) for support in Azure. | Conditionally ready for Azure; consider upgrading the OS before migrating to Azure. |
| Windows 2000, 98, 95, NT, 3.1, MS-DOS | These operating systems have passed their end of support date. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure; it is recommended to upgrade the OS before migrating to Azure. |
| Windows Client 7, 8 and 10 | Azure provides support with Visual Studio subscription only. | Conditionally ready for Azure |
| Windows 10 Pro Desktop | Azure provides support with Multitenant Hosting Rights. | Conditionally ready for Azure |
| Windows Vista, XP Professional | These operating systems have passed their end of support date. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure; it is recommended to upgrade the OS before migrating to Azure. |
| Linux | Azure endorses these Linux operating systems. Other Linux operating systems may boot in Azure, but it is recommended to upgrade the OS to an endorsed version before migrating to Azure. | Ready for Azure if the version is endorsed. Conditionally ready if the version is not endorsed. |
| Other operating systems (for example, Oracle Solaris, Apple Mac OS, FreeBSD) | Azure does not endorse these operating systems. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure; it is recommended to install a supported OS before migrating to Azure. |
| OS specified as Other in vCenter Server | Azure Migrate cannot identify the OS in this case. | Unknown readiness. Ensure that the OS running inside the VM is supported in Azure. |
| 32-bit operating systems | The machine may boot in Azure, but Azure may not provide full support. | Conditionally ready for Azure; consider upgrading the OS of the machine from a 32-bit OS to a 64-bit OS before migrating to Azure. |

Sizing
After a machine is marked as ready for Azure, Azure Migrate sizes the VM and its disks for Azure. If the sizing
criterion specified in the assessment properties is to do performance-based sizing, Azure Migrate considers the
performance history of the machine to identify the VM size and disk type in Azure. This method is helpful in
scenarios where you have over-allocated the on-premises VM but the utilization is low and you would like to
right-size the VMs in Azure to save cost.
If you do not want to consider the performance history for VM sizing and want to take the VM as-is to Azure, you
can specify the sizing criterion as "as on-premises", and Azure Migrate will then size the VMs based on the on-
premises configuration, without considering the utilization data. Disk sizing, in this case, is based on the
storage type you specify in the assessment properties (standard disk or premium disk).
Performance -based sizing
For performance-based sizing, Azure Migrate starts with the disks attached to the VM, followed by network
adapters and then maps an Azure VM based on the compute requirements of the on-premises VM.
Storage: Azure Migrate tries to map every disk attached to the machine to a disk in Azure.

NOTE
Azure Migrate supports only managed disks for assessment.

To get the effective disk I/O per second (IOPS ) and throughput (MBps), Azure Migrate multiplies the
disk IOPS and the throughput with the comfort factor. Based on the effective IOPS and throughput
values, Azure Migrate identifies if the disk should be mapped to a standard or premium disk in Azure.
If Azure Migrate can't find a disk with the required IOPS & throughput, it marks the machine as
unsuitable for Azure. Learn more about Azure limits per disk and VM.
If it finds a set of suitable disks, Azure Migrate selects the ones that support the storage redundancy
method, and the location specified in the assessment settings.
If there are multiple eligible disks, it selects the one with the lowest cost.
If performance data for disks is unavailable, all the disks are mapped to standard disks in Azure.
Network: Azure Migrate tries to find an Azure VM that can support the number of network adapters
attached to the on-premises machine and the performance required by these network adapters.
To get the effective network performance of the on-premises VM, Azure Migrate aggregates the data
transmitted per second (MBps) out of the machine (network out), across all network adapters, and
applies the comfort factor. This number is used to find an Azure VM that can support the required
network performance.
Along with network performance, it also considers whether the Azure VM can support the required
number of network adapters.
If no network performance data is available, only the network adapters count is considered for VM
sizing.
Compute: After storage and network requirements are calculated, Azure Migrate considers CPU and
memory requirements to find a suitable VM size in Azure.
Azure Migrate looks at the utilized cores and memory, and applies the comfort factor to get the
effective cores and memory. Based on that number, it tries to find a suitable VM size in Azure.
If no suitable size is found, the machine is marked as unsuitable for Azure.
If a suitable size is found, Azure Migrate applies the storage and networking calculations. It then applies
location and pricing tier settings, for the final VM size recommendation.
If there are multiple eligible Azure VM sizes, the one with the lowest cost is recommended.
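The compute step can be sketched as follows; the VM sizes, prices, and default comfort factor below are hypothetical, not actual Azure SKUs or rates:

```python
# Hypothetical catalog of (name, cores, memory_gb, monthly_cost) entries.
SIZES = [
    ("Small", 2, 8, 60.0),
    ("Medium", 4, 16, 120.0),
    ("Large", 8, 32, 240.0),
]

def recommend_size(utilized_cores, utilized_memory_gb, comfort_factor=1.3):
    """Apply the comfort factor to utilized values, then pick the
    lowest-cost size that satisfies both effective requirements."""
    eff_cores = utilized_cores * comfort_factor
    eff_mem = utilized_memory_gb * comfort_factor
    eligible = [s for s in SIZES if s[1] >= eff_cores and s[2] >= eff_mem]
    if not eligible:
        return None  # the machine would be marked unsuitable for Azure
    return min(eligible, key=lambda s: s[3])[0]  # lowest cost wins

# A VM using 2.5 cores and 10 GB: effective 3.25 cores / 13 GB -> Medium.
print(recommend_size(2.5, 10))  # Medium
```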
As on-premises sizing
If the sizing criterion is as on-premises sizing, Azure Migrate does not consider the performance history of the
VMs and disks, and allocates a VM SKU in Azure based on the size allocated on-premises. Similarly, for disk sizing,
it looks at the storage type specified in the assessment properties (standard/premium) and recommends the disk
type accordingly. The default storage type is premium disks.
Confidence rating
Each performance-based assessment in Azure Migrate is associated with a confidence rating that ranges from 1
star to 5 star (1 star being the lowest and 5 star being the highest). The confidence rating is assigned to an
assessment based on the availability of data points needed to compute the assessment. The confidence rating of
an assessment helps you estimate the reliability of the size recommendations provided by Azure Migrate.
Confidence rating is not applicable to as on-premises assessments.
For performance-based sizing, Azure Migrate needs the utilization data for the CPU and memory of the VM. Additionally,
for every disk attached to the VM, it needs the disk IOPS and throughput data. Similarly, for each network adapter
attached to a VM, Azure Migrate needs the network in/out data to do performance-based sizing. If any of these
utilization numbers are unavailable in vCenter Server, the size recommendation done by Azure Migrate may not
be reliable. Depending on the percentage of data points available, the confidence rating for the assessment is
provided as below:

AVAILABILITY OF DATA POINTS CONFIDENCE RATING

0%-20% 1 Star

21%-40% 2 Star

41%-60% 3 Star

61%-80% 4 Star

81%-100% 5 Star
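The bands in the table map directly to a small lookup (a sketch of the published bands, not Azure Migrate's internal code):

```python
def confidence_rating(available_pct):
    """Map the percentage of available performance data points to a star
    rating (applies to performance-based assessments only)."""
    bands = [(20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]
    for upper, stars in bands:
        if available_pct <= upper:
            return stars
    raise ValueError("percentage must be between 0 and 100")

print(confidence_rating(73))  # 4
```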

Below are the reasons why an assessment could get a low confidence rating:
You did not profile your environment for the duration for which you are creating the assessment. For
example, if you are creating the assessment with performance duration set to 1 day, you need to wait at
least a day after you start the discovery for all the data points to be collected.
A few VMs were shut down during the period for which the assessment is calculated. If any VMs were
powered off for some duration, we cannot collect the performance data for that period.
A few VMs were created during the period for which the assessment is calculated. For example, if you
are creating an assessment for the performance history of the last month, but a few VMs were created in the
environment only a week ago, the performance history of the new VMs will not exist for the
entire duration.

NOTE
If the confidence rating of any assessment is below 5 stars, we recommend you wait at least a day for the
appliance to profile the environment, and then Recalculate the assessment. If that can't be done,
performance-based sizing may not be reliable, and we recommend switching to as on-premises sizing by
changing the assessment properties.

Monthly cost estimation


After sizing recommendations are complete, Azure Migrate calculates post-migration compute and storage costs.
Compute cost: Using the recommended Azure VM size, Azure Migrate uses the Billing API to calculate the
monthly cost for the VM. The calculation takes the operating system, software assurance, reserved instances,
VM uptime, location, and currency settings into account. It aggregates the cost across all machines, to calculate
the total monthly compute cost.
Storage cost: The monthly storage cost for a machine is calculated by aggregating the monthly cost of all
disks attached to the machine. Azure Migrate calculates the total monthly storage costs by aggregating the
storage costs of all machines. Currently, the calculation doesn't take offers specified in the assessment settings
into account.
Costs are displayed in the currency specified in the assessment settings.
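A minimal sketch of the aggregation described above. The per-VM and per-disk monthly rates are hypothetical inputs standing in for values the Billing API would return.

```python
# Hypothetical per-machine monthly rates; in the real service these
# come from the Azure Billing API based on VM size, OS, location, etc.
machines = [
    {"name": "vm1", "compute_monthly": 120.50, "disk_monthly": [9.60, 19.20]},
    {"name": "vm2", "compute_monthly": 241.00, "disk_monthly": [38.40]},
]

def storage_cost(machine):
    # Monthly storage cost for one machine: sum of all attached disks.
    return sum(machine["disk_monthly"])

# Total monthly costs are aggregated across all machines.
total_compute = sum(m["compute_monthly"] for m in machines)
total_storage = sum(storage_cost(m) for m in machines)
print(f"compute: {total_compute:.2f}, storage: {total_storage:.2f}")
# compute: 361.50, storage: 67.20
```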

Next steps
Create an assessment for on-premises VMware VMs
Dependency visualization
3/14/2019 • 3 minutes to read • Edit Online

The Azure Migrate service assesses groups of on-premises machines for migration to Azure. You can use the
dependency visualization functionality in Azure Migrate to create groups. This article provides information about
this feature.

NOTE
The dependency visualization functionality is not available in Azure Government.

Overview
Dependency visualization in Azure Migrate allows you to create high-confidence groups for migration
assessments. Using dependency visualization, you can view the network dependencies of machines and identify
related machines that need to be migrated together to Azure. This functionality is useful when you aren't
completely aware of the machines that constitute your application and that need to be migrated together to Azure.

How does it work?


Azure Migrate uses the Service Map solution in Azure Monitor logs for dependency visualization.
To leverage dependency visualization, you need to associate a Log Analytics workspace, either new or
existing, with an Azure Migrate project.
You can only create or attach a workspace in the same subscription where the migration project is created.
To attach a Log Analytics workspace to a project, go to the Essentials section of the project Overview page and
click Requires configuration.

While associating a workspace, you will get the option to create a new workspace or attach an existing one:
When you create a new workspace, you need to specify a name for the workspace. The workspace is then
created in a region in the same Azure geography as the migration project.
When you attach an existing workspace, you can pick from all the available workspaces in the same
subscription as the migration project. Only workspaces created in a region where Service Map is
supported are listed. To be able to attach a workspace, ensure that you have 'Reader'
access to the workspace.
NOTE
Once you have attached a workspace to a project, you cannot change it later.

The associated workspace is tagged with the key Migration Project, and value Project name, which you
can use to search in the Azure portal.
To navigate to the workspace associated with the project, go to the Essentials section of the project
Overview page and access the workspace.

To use dependency visualization, you need to download and install agents on each on-premises machine that you
want to analyze.
The Microsoft Monitoring Agent (MMA) needs to be installed on each machine.
The Dependency agent needs to be installed on each machine.
In addition, if you have machines with no internet connectivity, you need to download and install the Log
Analytics gateway on them.
You don't need these agents on machines you want to assess unless you're using dependency visualization.

Do I need to pay for it?


Azure Migrate is available at no additional charge. Use of the dependency visualization feature in Azure Migrate
requires Service Map and requires you to associate a Log Analytics workspace, either new or existing, with the
Azure Migrate project. The dependency visualization functionality in Azure Migrate is free for the first 180 days in
Azure Migrate.
1. Use of any solutions other than Service Map within this Log Analytics workspace will incur standard Log
Analytics charges.
2. To support migration scenarios at no additional cost, the Service Map solution will not incur any charges for the
first 180 days from the day of associating the Log Analytics workspace with the Azure Migrate project. After
180 days, standard Log Analytics charges will apply.
When you register agents to the workspace, use the ID and the Key given by the project on the install agent steps
page.
When the Azure Migrate project is deleted, the workspace isn't deleted along with it. After the project is deleted,
Service Map usage is no longer free, and each node is charged per the paid tier of the Log Analytics workspace.
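The 180-day rule above can be expressed as simple date arithmetic; the function name is illustrative.

```python
from datetime import date, timedelta

# Sketch of the pricing rule above: Service Map usage is free for the
# first 180 days after the workspace is associated with the project.
FREE_PERIOD = timedelta(days=180)

def service_map_is_free(associated_on, today):
    return today - associated_on <= FREE_PERIOD

associated = date(2019, 1, 1)
print(service_map_is_free(associated, date(2019, 3, 1)))   # True
print(service_map_is_free(associated, date(2019, 12, 1)))  # False
```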

NOTE
The dependency visualization feature uses Service Map via a Log Analytics workspace. Since 28 February 2018, with the
announcement of Azure Migrate general availability, the feature is available at no extra charge. You need to create a
new project to make use of the free usage workspace. Existing workspaces created before general availability are still
chargeable, so we recommend that you move to a new project.

Learn more about Azure Migrate pricing here.

How do I manage the workspace?


You can use the Log Analytics workspace outside Azure Migrate. It's not deleted if you delete the migration project
in which it was created. If you no longer need the workspace, delete it manually.
Don't delete the workspace created by Azure Migrate, unless you delete the migration project. If you do, the
dependency visualization functionality will not work as expected.

Next steps
Group machines using machine dependencies
Learn more about the FAQs on dependency visualization.
Best practices for securing and managing workloads
migrated to Azure
3/26/2019 • 35 minutes to read • Edit Online

As you plan and design for migration, in addition to thinking about the migration itself, you need to consider your
security and management model in Azure after migration. This article describes planning and best practices for
securing your Azure deployment after migrating, and for ongoing tasks to keep your deployment running at an
optimal level.

IMPORTANT
The best practices and opinions described in this article are based on the Azure platform and service features available at the
time of writing. Features and capabilities change over time.

Secure migrated workloads


After migration, the most critical task is to secure migrated workloads from internal and external threats. These
best practices help you to do that:
Work with Azure Security Center: Learn how to work with the monitoring, assessments, and recommendations
provided by Azure Security Center
Encrypt your data: Get best practices for encrypting your data in Azure.
Set up antimalware: Protect your VMs from malware and malicious attacks.
Secure web apps: Keep sensitive information secure in migrated web apps.
Review subscriptions: Verify who can access your Azure subscriptions and resources after migration.
Work with logs: Review your Azure auditing and security logs on a regular basis.
Review other security features: Understand and evaluate advanced security features that Azure offers.

Best practice: Follow Azure Security Center recommendations


Microsoft works hard to ensure that Azure tenant admins have the information needed to enable security features
that protect workloads from attacks. Azure Security Center provides unified security management. From the
Security Center, you can apply security policies across workloads, limit threat exposure, and detect and respond to
attacks. Security Center analyzes resources and configurations across Azure tenants and makes security
recommendations, including:
Centralized policy management – Ensure compliance with company or regulatory security requirements by
centrally managing security policies across all your hybrid cloud workloads.
Continuous security assessment – Monitor the security posture of machines, networks, storage and data
services, and applications to discover potential security issues.
Actionable recommendations – Remediate security vulnerabilities before they can be exploited by attackers
with prioritized and actionable security recommendations.
Prioritized alerts and incidents - Focus on the most critical threats first with prioritized security alerts and
incidents.
In addition to assessments and recommendations, the Security Center provides a number of other security
features that can be enabled for specific resources.
Just In Time (JIT) access: Reduce your network attack surface with just in time, controlled access to
management ports on Azure VMs.
Having VM RDP port 3389 open on the internet exposes VMs to continual bad actor activity. Azure IP
addresses are well-known, and hackers continually probe them for attacks on open 3389 ports.
Just in time uses network security groups (NSGs) and incoming rules that limit the amount of time that a
specific port is open.
With just in time enabled, Security Center checks that a user has role-based access control (RBAC) write
access permissions for a VM. In addition, specify rules for how users can connect to VMs. If permissions
are OK, an access request is approved and Security Center configures NSGs to allow inbound traffic to
the selected ports for the amount of time you specify. NSGs return to their previous state when the
time expires.
Adaptive application controls: Keep software and malware off VMs by controlling which apps run on them
using dynamic app whitelisting.
Adaptive application controls allow you to whitelist apps, and prevent rogue users or administrators
from installing unapproved or unvetted software apps on your VMs.
You can block or alert attempts to run malicious apps, avoid unwanted or malicious apps, and ensure
compliance with your organization's app security policy.
File Integrity Monitoring: Ensure the integrity of files running on VMs.
Installing software isn't the only way to cause VM issues. Changing a system file can also cause VM failure
or performance degradation. File Integrity Monitoring examines system files and registry settings for
changes, and notifies you if something is updated.
Security Center recommends which files you should monitor.
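The just-in-time behavior described above can be sketched as a small state machine: an approved request opens an inbound port for a limited window, after which the NSG reverts to its previous deny state. The class and method names here are hypothetical illustrations, not the Security Center API.

```python
from datetime import datetime, timedelta

class JitNsgRule:
    """Hypothetical sketch of just-in-time access: an approved request
    opens an inbound port until the window expires, after which the
    NSG returns to its previous (deny) state."""
    def __init__(self, port):
        self.port = port
        self.open_until = None  # closed by default

    def approve_request(self, now, duration_hours):
        # Security Center would also verify RBAC write access first.
        self.open_until = now + timedelta(hours=duration_hours)

    def allows_inbound(self, now):
        return self.open_until is not None and now <= self.open_until

rule = JitNsgRule(port=3389)
t0 = datetime(2019, 4, 5, 9, 0)
assert not rule.allows_inbound(t0)                       # closed before approval
rule.approve_request(t0, duration_hours=3)
assert rule.allows_inbound(t0 + timedelta(hours=1))      # open within the window
assert not rule.allows_inbound(t0 + timedelta(hours=4))  # reverted after expiry
```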
Learn more:
Learn more about Azure Security Center.
Learn more about just in time VM access.
Learn about applying adaptive application controls.
Get started with File Integrity Monitoring.

Best practice: Encrypt data


Encryption is an important part of Azure security practices. Ensuring that encryption is enabled at all levels helps
prevent unauthorized parties from gaining access to sensitive data, including data in transit and at rest.
Encryption for IaaS
VMs: For VMs you can use Azure Disk Encryption to encrypt your Windows and Linux IaaS VM disks.
Disk encryption leverages BitLocker for Windows, and DM-Crypt for Linux to provide volume
encryption for the OS and data disks.
You can use an encryption key created by Azure, or you can supply your own encryption keys,
safeguarded in Azure Key Vault.
With Disk Encryption, IaaS VM data is secured at rest (on the disk) and during VM boot.
Azure Security Center alerts you if you have VMs that aren't encrypted.
Storage: Protect at rest data stored in Azure storage.
Data stored in Azure storage accounts can be encrypted using Microsoft-generated AES keys that are
FIPS 140-2 compliant, or you can use your own keys.
Storage Service Encryption is enabled for all new and existing storage accounts and can't be disabled.
Encryption for PaaS
Unlike IaaS, where you manage your own VMs and infrastructure, in a PaaS model the platform and infrastructure
are managed by the provider, leaving you to focus on core app logic and capabilities. With so many different types of
PaaS services, each service will be evaluated individually for security purposes. As an example, let's see how we
might enable encryption for Azure SQL Database.
Always Encrypted: Use the Always Encrypted Wizard in SQL Server Management Studio to protect data at
rest.
You create an Always Encrypted key to encrypt individual column data.
Always Encrypted keys can be stored as encrypted in database metadata, or stored in trusted key stores
such as Azure Key Vault.
App changes will probably be needed to use this feature.
Transparent data encryption (TDE ): Protect the Azure SQL Database with real-time encryption and
decryption of the database, associated backups, and transaction log files at rest.
TDE allows encryption activities to take place without changes at the app layer.
TDE can use encryption keys provided by Microsoft, or you can provide your own keys using Bring Your
Own Key support.
Learn more:
Learn about Azure Disk Encryption for IaaS VMs.
Enable encryption for IaaS Windows VMs.
Learn about Azure Storage Service Encryption for data at rest.
Read an overview of Always Encrypted.
Read about TDE for Azure SQL Database.
Learn about TDE with Bring Your Own Key.

Best practice: Protect VMs with antimalware


Older VMs migrated to Azure, in particular, might not have the appropriate level of antimalware installed. Azure
provides a free endpoint solution that helps protect VMs from viruses, spyware, and other malware.
Microsoft Antimalware for Azure generates alerts when known malicious or unwanted software tries to install
itself.
It's a single agent solution that runs in the background without human intervention.
In Azure Security Center, you can easily identify VMs that don't have endpoint protection running, and install
Microsoft Antimalware as needed.

Antimalware for VMs


Learn more:
Learn about Microsoft Antimalware.

Best practice: Secure web apps


Migrated web apps face a couple of issues:
Most legacy web applications tend to have sensitive information inside configuration files. Files containing such
information can present security issues when apps are backed up, or when app code is checked into or out of
source control.
In addition, when you migrate web apps residing in a VM, you are likely moving that machine from an on-
premises network and firewall-protected environment to an environment facing the internet. You need to make
sure that you set up a solution that does the same work as your on-premises protection resources.
Azure provides a couple of solutions:
Azure Key Vault: Today web app developers are taking steps to ensure that sensitive information isn't leaked
from these files. One method to secure information is to extract it from files and put it into an Azure Key Vault.
You can use Key Vault to centralize storage of app secrets, and control their distribution. It avoids the
need to store security information in app files.
Apps can securely access information in the vault using URIs, without needing custom code.
Azure Key Vault allows you to lock down access via Azure security controls and to seamlessly implement
'rolling keys'. Microsoft does not see or extract your data.
App Service Environment: If an app you migrate needs extra protection, you can consider adding an App
Service Environment and Web Application Firewall to protect the app resources.
The Azure App Service Environment provides a fully isolated and dedicated environment in which to
run App Service apps such as Windows and Linux web apps, Docker containers, mobile apps, and
functions.
It's useful for apps that are very high scale, require isolation and secure network access, or have high
memory utilization.
Web Application Firewall: A feature of Application Gateway that provides centralized protection for web
apps.
It protects web apps without requiring backend code modifications.
It protects multiple web apps at the same time behind an application gateway.
Web application firewall can be monitored using Azure Monitor, and is integrated into Azure Security
Center.
Azure Key Vault
Learn more:
Get an overview of Azure Key Vault.
Learn about Web application firewall.
Get an introduction to App Service Environments.
Learn how to configure a web app to read secrets from Key Vault.

Best practice: Review subscriptions and resource permissions


As you migrate your workloads and run them in Azure, staff with workload access move around. Your security
team should review access to your Azure tenant and resource groups on a regular basis. Azure provides a number
of offerings for identity management and access control security, including role-based access control (RBAC) to
authorize permissions to access Azure resources.
RBAC assigns access permissions for security principals. Security principals represent users, groups (a set of
users), service principals (identity used by apps and services), and managed identities (an Azure Active
Directory identity automatically managed by Azure).
RBAC can assign roles to security principals, such as owner, contributor, and reader, and role definitions (a
collection of permissions) that define the operations that can be performed by the roles.
RBAC can also set scopes that define the boundary for a role. Scope can be set at a number of levels, including a
management group, subscription, resource group, or resource.
Ensure that admins with Azure access are only able to access resources that you want to allow. If the predefined
roles in Azure aren't granular enough, you can create custom roles to separate and limit access permissions.
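The scope model described above can be sketched as follows. The path-style scope strings and the assignment list are simplified illustrations, not the actual ARM scope format: an assignment at a scope applies to that scope and everything beneath it.

```python
# Simplified sketch of RBAC scope inheritance: an assignment at a scope
# applies to that scope and every child scope beneath it.
ASSIGNMENTS = [
    # (principal, role, scope)
    ("alice", "Owner", "/mg/contoso"),
    ("bob", "Reader", "/mg/contoso/sub1/rg-web"),
]

def has_role(principal, role, resource_scope):
    """True if a matching assignment exists at the resource's scope
    or at any ancestor scope."""
    return any(
        p == principal and r == role and
        (resource_scope == scope or resource_scope.startswith(scope + "/"))
        for p, r, scope in ASSIGNMENTS
    )

assert has_role("alice", "Owner", "/mg/contoso/sub1/rg-web/vm1")  # inherited
assert not has_role("bob", "Reader", "/mg/contoso/sub2")          # outside scope
```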
Access control - IAM
Learn more:
About RBAC.
Learn to manage access using RBAC and the Azure portal.
Learn about custom roles.

Best practice: Review audit and security logs


Azure Active Directory (AD) provides activity logs that appear in Azure Monitor. The logs capture the operations
performed in your Azure tenant, when they occurred, and who performed them.
Audit logs show the history of tasks in the tenant. Sign-in activity logs show who carried out the tasks.
Access to security reports depends on your Azure AD license. In Free and Basic you get a list of risky users and
sign-ins. In Premium 1 and Premium 2 editions you get underlying event information.
You can route activity logs to a number of endpoints for long-term retention and data insights.
Make it a common practice to review the logs, or integrate your security information and event management
(SIEM) tools to automatically review abnormalities. If you're not using Premium 1 or 2, you'll need to do a lot of
analysis yourself or by using your SIEM system. Analysis includes looking for risky sign-ins and events, and other
user attack patterns.
Azure AD
Users and Groups
Learn more:
Learn about Azure AD activity logs in Azure Monitor.
Learn how to audit activity reports in the Azure AD portal.

Best practice: Evaluate other security features


Azure provides a number of other security features that provide advanced security options. Some of these best
practices require add-on licenses and premium options.
Implement Azure AD administrative units (AU): Delegating administrative duties to support staff can be
tricky with just basic Azure access control. Giving support staff access to administer all the groups in Azure AD
might not be the ideal approach for organizational security. Using AUs allows you to segregate Azure resources
into containers in a similar way to on-premises organizational units (OU). To use AUs, the AU admin must have a
premium Azure AD license. Learn more.
Use multifactor authentication (MFA): If you have a premium Azure AD license, you can enable and enforce
MFA on your admin accounts. Phishing is the most common way that account credentials are compromised.
Once a bad actor has admin account credentials, there's no stopping them from far-reaching actions, such as
deleting all your resource groups. You can set up MFA in a number of ways, including with email, an authenticator
app, and phone text messages. As an admin, you can select the least intrusive option. MFA integrates with threat
analytics and conditional access policies to randomly require an MFA challenge response. Learn more about
security guidance, and how to set up MFA.
Implement conditional access: In most small and medium size organizations, Azure admins and the support
team are probably located in a single geography. In this case, most logins will come from the same areas. If the
IP addresses of these locations are fairly static, it makes sense that you shouldn't see administrator logins from
outside these areas. Even if a remote bad actor compromises an admin's credentials, you can
implement security features like conditional access combined with MFA to prevent login from remote locations,
or from spoofed locations from random IP addresses. Learn more about conditional access, and review best
practices for conditional access in Azure AD.
Review Enterprise Application permissions: Over time, admins click Microsoft and third-party links without
knowing their impact on the organization. Links can present consent screens that assign permissions to Azure
apps, and might allow access to read Azure AD data, or even full access to manage your entire Azure
subscription. You should regularly review the apps to which your admins and users have allowed access to
Azure resources. You should ensure that these apps have only the permissions that are necessary. Additionally,
quarterly or semi-annually you can email users with a link to app pages so that they're aware of the apps to
which they've allowed access to their organizational data. Learn more about application types, and how to
control app assignments in Azure AD.

Manage migrated workloads


In this section we'll recommend some best practices for Azure management, including:
Manage resources: Best practices for Azure resource groups and resources, including smart naming, preventing
accidental deletion, managing resource permissions, and effective resource tagging.
Use blueprints: Get a quick overview on using blueprints for building and managing your deployment
environments.
Review architectures: Review sample Azure architectures to learn from as you build your post-migration
deployments.
Set up management groups: If you have multiple subscriptions, you can gather them into management groups,
and apply governance settings to those groups.
Set up access policies: Apply compliance policies to your Azure resources.
Implement a BCDR strategy: Put together a business continuity and disaster recovery (BCDR) strategy to keep
data safe, your environment resilient, and resources up and running when outages occur.
Manage VMs: Group VMs into availability groups for resilience and high availability. Use managed disks for
ease of VM disk and storage management.
Monitor resource usage: Enable diagnostic logging for Azure resources, build alerts and playbooks for
proactive troubleshooting, and use the Azure dashboard for a unified view of your deployment health and
status.
Manage support and updates: Understand your Azure support plan and how to implement it, get best practices
for keeping VMs up-to-date, and put processes in place for change management.

Best practice: Name resource groups


Ensuring that your resource groups have meaningful names that admins and support team members can easily
recognize and navigate will drastically improve productivity and efficiency.
We recommend following Azure naming conventions.
If you're synchronizing your on-premises AD DS to Azure AD using AD Connect, consider matching the names
of security groups on-premises to the names of resource groups in Azure.
Resource group
naming
Learn more:
Learn about naming conventions.

Best practice: Implement delete locks for resource groups


The last thing you need is for a resource group to disappear because it was deleted accidentally. We recommend
that you implement delete locks so that this doesn't happen.

Delete locks
Learn more:
Learn about locking resources to prevent unexpected changes.

Best practice: Understand resource access permissions


A subscription owner has access to all the resource groups and resources in your subscription.
Add people sparingly to this valuable assignment. Understanding the ramifications of these types of
permissions is important in keeping your environment secure and stable.
Make sure you place resources in appropriate resources groups:
Match resources with a similar lifecycle together. Ideally, you shouldn't need to move a resource when
you need to delete an entire resource group.
Resources that support a function or workload should be placed together for simplified management.
Learn more:
Learn about organizing subscriptions and resource groups.

Best practice: Tag resources effectively


Often, a resource group name alone won't provide enough metadata for effective
implementation of mechanisms such as internal billing or management within a subscription.
As a best practice, you should use Azure tags to add useful metadata that can be queried and reported on.
Tags provide a way to logically organize resources with properties that you define.
Tags can be applied to resource groups or to individual resources directly. Resource group tags aren't inherited
by the resources in the group.
You can automate tagging using PowerShell or Azure Automation, or tag individual groups and resources, with
either a request-based tagging approach or a self-service one. If you have a request and change management
system in place, you can easily use the information in the request to populate your company-specific resource
tags.
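Because resource group tags aren't inherited, automation often copies them down onto each resource explicitly. The following is a pure-logic sketch of that merge; in practice you'd read and write the tags with PowerShell, the CLI, or an SDK.

```python
def apply_group_tags(group_tags, resource_tags):
    """Merge resource group tags into a resource's tags. Tags already
    set on the resource win, so resource-specific values are kept."""
    merged = dict(group_tags)
    merged.update(resource_tags)
    return merged

rg_tags = {"CostCenter": "CC-1001", "Environment": "Prod"}
vm_tags = {"Environment": "Staging"}  # resource-level override is preserved
print(apply_group_tags(rg_tags, vm_tags))
# {'CostCenter': 'CC-1001', 'Environment': 'Staging'}
```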

Tagging
Learn more:
Learn about tagging and tag limitations.
Review PowerShell and CLI examples to set up tagging, and to apply tags from a resource group to its
resources.
Read Azure tagging best practices.
Best practice: Implement blueprints
Just as a blueprint allows engineers and architects to sketch a project's design parameters, Azure Blueprints enables
cloud architects and central IT groups to define a repeatable set of Azure resources that implements and adheres
to an organization's standards, patterns, and requirements. Using Azure Blueprints, development teams can rapidly
build and create new environments that meet organizational compliance requirements, and that have a set of built-
in components, such as networking, to speed up development and delivery.
Use blueprints to orchestrate the deployment of resource groups, Azure Resource Manager templates, and
policy and role assignments.
Azure blueprints are stored in a globally distributed Azure Cosmos DB. Blueprint objects are replicated to
multiple Azure regions. Replication provides low latency, high availability, and consistent access to blueprints,
regardless of the region to which a blueprint deploys resources.
Learn more:
Read about blueprints.
Review a blueprint example used to accelerate AI in healthcare.

Best practice: Review Azure reference architectures


Building secure, scalable, and manageable workloads in Azure can be daunting. With continual changes, it can be
difficult to keep up with different features for an optimal environment. Having a reference to learn from can be
helpful when designing and migrating your workloads. Azure and Azure partners have built several sample
reference architectures for various types of environments. These samples are designed to provide ideas that you
can learn from and build on.
Reference architectures are arranged by scenario. They contain recommended practices, and advice on management,
availability, scalability, and security. The Azure App Service Environment provides a fully isolated and dedicated
environment in which to run App Service apps, including Windows and Linux web apps, Docker containers, mobile
apps, and functions. App Service adds the power of Azure to your application, with security, load balancing,
autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as
continuous deployment from Azure DevOps and GitHub, package management, staging environments, custom
domains, and SSL certificates. App Service is useful for apps that need isolation and secure network access, and
those that use high amounts of memory and other resources that need to scale.
Learn more:
Learn about Azure reference architectures.
Review Azure example scenarios.

Best practice: Manage resources with Management Groups


If your organization has multiple subscriptions, you need to manage access, policies, and compliance for them.
Azure management groups provide a level of scope above subscriptions.
You organize subscriptions into containers called management groups, and apply governance conditions to
them.
All subscriptions in a management group automatically inherit the management group conditions.
Management groups provide enterprise-grade management at a large scale, no matter what type of
subscriptions you have.
For example, you can apply a management group policy that limits the regions in which VMs can be created.
This policy is then applied to all management groups, subscriptions, and resources under that management
group.
You can build a flexible structure of management groups and subscriptions, to organize your resources into a
hierarchy for unified policy and access management.
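The inheritance rule above can be sketched with a simple parent map. The hierarchy and condition names here are hypothetical; the point is that conditions applied to a management group flow down to every subscription and child group beneath it.

```python
# Simplified sketch of management group inheritance.
HIERARCHY = {
    # child -> parent
    "sub-prod": "mg-it",
    "sub-dev": "mg-it",
    "mg-it": "mg-root",
}
CONDITIONS = {
    "mg-root": ["allowed-regions: westus2,eastus"],
    "mg-it": ["require-tag: CostCenter"],
}

def effective_conditions(scope):
    """Collect governance conditions from the scope and all its ancestors."""
    conditions = []
    while scope is not None:
        conditions.extend(CONDITIONS.get(scope, []))
        scope = HIERARCHY.get(scope)
    return conditions

print(effective_conditions("sub-prod"))
# ['require-tag: CostCenter', 'allowed-regions: westus2,eastus']
```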
The following diagram shows an example of creating a hierarchy for governance using management groups.

Management groups
Learn more:
Learn more about organizing resources into management groups.

Best practice: Deploy Azure Policy


Azure Policy is a service in Azure that you use to create, assign, and manage policies.
Policies enforce different rules and effects over your resources, so those resources stay compliant with your
corporate standards and service level agreements.
Azure Policy evaluates your resources, scanning for those not compliant with your policies.
For example, you could create a policy that allows only a specific SKU size for VMs in your environment. Azure
Policy will evaluate this setting when creating and updating resources, and when scanning existing resources.
Azure provides some built-in policies that you can assign, or you can create your own.
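A simplified sketch of the SKU example above. Real Azure Policy definitions are JSON documents with effects such as deny or audit; this only illustrates the evaluation idea of scanning resources against an allowed-SKU rule.

```python
# Hypothetical allowed-SKU policy: only these VM sizes are compliant.
ALLOWED_SKUS = {"Standard_D2s_v3", "Standard_D4s_v3"}

def evaluate(resources):
    """Return the names of resources that aren't compliant with the
    allowed-SKU policy (Azure Policy would flag these on scan)."""
    return [r["name"] for r in resources if r["sku"] not in ALLOWED_SKUS]

vms = [
    {"name": "vm-web", "sku": "Standard_D2s_v3"},
    {"name": "vm-old", "sku": "Basic_A0"},
]
print(evaluate(vms))  # ['vm-old']
```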

Azure Policy
Learn more:
Get an overview of Azure Policy.
Learn about creating and managing policies to enforce compliance.
Best practice: Implement a BCDR strategy
Planning for business continuity and disaster recovery (BCDR) is a critical exercise that you should complete
during planning for migration to Azure. In legal terms, your contract includes a force majeure clause that excuses
obligations due to a greater force such as hurricanes or earthquakes. However, you also have obligations around
your ability to ensure that services will continue to run, and recover where necessary, when disaster strikes. Your
ability to do this can make or break your company's future.
Broadly, your BCDR strategy must consider:
Data backup: How to keep your data safe so that you can recover it easily if outages occur.
Disaster recovery: How to keep your apps resilient and available if outages occur.
Azure resiliency features
The Azure platform provides a number of resiliency features.
Region pairing: Azure pairs regions to provide regional protection within data residency boundaries. Azure
ensures physical isolation between region pairs, prioritizes the recovery of one region in the pair in case of a
broad outage, deploys system updates separately in each region, and allows features such as Azure geo-
redundant storage to replicate across the regional pairs.
Availability zones: Availability zones protect against failure of an entire Azure datacenter by establishing
physically separate zones within an Azure region. Each zone has a distinct power source, network infrastructure,
and cooling mechanism.
Availability sets: Availability sets protect against failures within a datacenter. You group VMs in availability sets
to keep them highly available. Within each availability set, Azure implements multiple fault domains that group
together underlying hardware with a common power source and network switch, and update domains that
group together underlying hardware that can undergo maintenance, or be rebooted, at the same time. As an
example, when a workload is spread across Azure VMs, you can put two or more VMs for each app tier into a
set. For example, you can place frontend VMs in one set, and data tier VMs in another. Since only one update
domain is ever rebooted at a time in a set, and Azure ensures that VMs in a set are spread across fault
domains, you ensure that not all VMs in a set will fail at the same time.
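The fault-domain and update-domain spread can be illustrated with an assumed round-robin placement (Azure's actual placement algorithm isn't documented here): VMs in a set are distributed so they don't all share one power source or reboot window.

```python
def place_vms(vm_names, fault_domains=3, update_domains=5):
    """Assign each VM a (fault domain, update domain) pair round-robin,
    an assumed simplification of how availability sets spread VMs."""
    return {
        name: (i % fault_domains, i % update_domains)
        for i, name in enumerate(vm_names)
    }

# Four frontend VMs in one set: no two share the same update domain,
# and fault domains wrap around after the third VM.
placement = place_vms(["web-0", "web-1", "web-2", "web-3"])
print(placement)
# {'web-0': (0, 0), 'web-1': (1, 1), 'web-2': (2, 2), 'web-3': (0, 3)}
```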
Set up BCDR
When migrating to Azure, it's important to understand that although the Azure platform provides these inbuilt
resiliency capabilities, you need to design your Azure deployment to take advantage of Azure features and services
that provide high availability, disaster recovery, and backup.
Your BCDR solution will depend on your company objectives, and is influenced by your Azure deployment strategy.
Infrastructure as a service (IaaS) and platform as a service (PaaS) deployments present different challenges for
BCDR.
Once in place, your BCDR solutions should be tested regularly to check that your strategy remains viable.
Best practice: Back up your data
In most cases an on-premises workload is retired after migration, and your on-premises strategy for backing up
data must be extended or replaced. If you migrate your entire datacenter to Azure, you'll need to design and
implement a full backup solution using Azure technologies, or third-party integrated solutions.
Back up an IaaS deployment
For workloads running on Azure IaaS VMs, consider these backup solutions:
Azure Backup: Provides application-consistent backups for Azure Windows and Linux VMs.
Storage snapshots: Take snapshots of blob storage.
Azure Backup
Azure Backup creates data recovery points that are stored in Azure storage. Azure Backup can back up
Azure VM disks, and Azure Files (preview). Azure Files provides file shares in the cloud, accessible via SMB.
You can use Azure Backup to back up VMs in a couple of ways.
Direct backup from VM settings: You can back up VMs with Azure Backup directly from the VM options in
the Azure portal. You can back up the VM once a day, and restore the VM disk as needed. Azure Backup
takes app-aware data snapshots (VSS); no agent is installed on the VM.
Direct backup in a Recovery Services vault: You can back up your IaaS VMs by deploying an Azure Backup
Recovery Services vault. This provides a single location to track and manage backups, and provides granular
backup and restore options. Backup is up to three times a day, at the file/folder level. It isn't app-aware and
Linux isn't supported. You need to install the Microsoft Azure Recovery Services (MARS ) agent on each VM
you want to back up.
Azure Backup Server: You can protect the VM with Azure Backup Server, which is provided free with
Azure Backup. The VM is backed up to local Azure Backup Server storage. You then back up the Azure Backup
Server to Azure in a vault. Backup is app-aware, with full granularity over backup frequency and retention. You
can back up at the app level, for example, by backing up SQL Server or SharePoint.
For security, Azure Backup encrypts data in flight using AES-256, and sends it over HTTPS to Azure. Backed-up
data at rest in Azure is encrypted using Storage Service Encryption (SSE), so data is encrypted both in transmission and in storage.
Azure Backup
Learn more:
Learn about different types of backups.
Plan a backup infrastructure for Azure VMs.
Storage snapshots
Azure VMs are stored as page blobs in Azure Storage.
Snapshots capture the blob state at a specific point in time.
As an alternative backup method for Azure VM disks, you can take a snapshot of storage blobs and copy them
to another storage account.
You can copy an entire blob, or use an incremental snapshot copy to copy only delta changes and reduce
storage space.
As an extra precaution, you can enable soft delete for blob storage accounts. With this feature enabled, a blob
that's deleted is marked for deletion but not immediately purged. During the interim period the blob can be
restored.
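The incremental snapshot copy mentioned above, copying only changed data rather than the whole blob, can be illustrated with a toy page map. This is a sketch of the concept only; real snapshots are managed by the Azure Storage service, and the dictionaries below are stand-ins for actual page ranges, not service calls.

```python
# Toy illustration of incremental snapshot copy: given two point-in-time
# page maps of a blob, copy only the pages that changed (the delta),
# which reduces the storage space the copy consumes.

def delta_pages(previous_snapshot, current_snapshot):
    """Return only pages that are new or whose content changed."""
    return {
        page: data
        for page, data in current_snapshot.items()
        if previous_snapshot.get(page) != data
    }

snap_monday = {0: b"boot", 1: b"logs-v1", 2: b"data"}
snap_tuesday = {0: b"boot", 1: b"logs-v2", 2: b"data", 3: b"new"}

changed = delta_pages(snap_monday, snap_tuesday)
assert changed == {1: b"logs-v2", 3: b"new"}  # only the delta is copied
```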
Learn more:
Learn about Azure blob storage.
Learn how to create a blob snapshot.
Review a sample scenario for blob storage backup.
Read about soft delete.
Disaster recovery and forced failover (preview) in Azure Storage.
Third-party backup
In addition, you can use third-party solutions to back up Azure VMs and storage containers to local storage or
other cloud providers. Learn more about backup solutions in the Azure marketplace.
Back up a PaaS deployment
Unlike IaaS, where you manage your own VMs and infrastructure, in a PaaS model the platform and infrastructure
are managed by the provider, leaving you to focus on core app logic and capabilities. With so many different types
of PaaS services, each service should be evaluated individually for backup purposes. We'll look at two common
Azure PaaS services: Azure SQL Database, and Azure Functions.
Back up Azure SQL Database
Azure SQL Database is a fully managed PaaS database engine. It provides a number of business continuity
features, including automated backup.
SQL Database automatically performs weekly full database backups, and differential backups every 12 hours.
Transaction log backups are taken every five to ten minutes to protect the database from data loss.
Backups are transparent and don't incur additional cost.
Backups are stored in RA-GRS storage for geo-redundancy, and replicated to the paired geographical region.
Backup retention depends on the purchasing model. DTU-based service tiers range from seven days for the
Basic tier to 35 days for other tiers.
You can restore a database to a point in time within the retention period. You can also restore a deleted
database, restore to a different geographical region, or restore from a long-term backup if the database has a
long-term retention (LTR) policy.
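The retention rules above translate into a simple point-in-time restore (PITR) window check. The sketch below assumes the retention values quoted in the text (7 days for Basic, 35 days for other DTU tiers); verify your tier's actual policy.

```python
# Sketch of the point-in-time restore (PITR) window check described above.
# Retention days per tier follow the text; these are illustrative values.
from datetime import datetime, timedelta

RETENTION_DAYS = {"Basic": 7, "Standard": 35, "Premium": 35}

def can_restore(tier, restore_point, now):
    """Return True if the requested restore point falls inside the
    automated-backup retention window for the given tier."""
    window = timedelta(days=RETENTION_DAYS[tier])
    return now - window <= restore_point <= now

now = datetime(2019, 4, 5, 12, 0)
assert can_restore("Basic", now - timedelta(days=6), now)       # inside 7-day window
assert not can_restore("Basic", now - timedelta(days=10), now)  # outside; needs LTR
assert can_restore("Standard", now - timedelta(days=30), now)
```

Restore points older than the window require a long-term retention (LTR) backup, as the text notes.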
Azure SQL backup
Learn more:
Automated backups for SQL Database.
Recover a database using automated backups.
Back up Azure Functions
Since Azure Functions apps are essentially code, you should back them up using the same methods you
use to protect code, such as source control in GitHub or Azure DevOps Services.
Learn more:
Data protection for Azure DevOps.
Best practice: Set up disaster recovery
In addition to protecting data, BCDR planning must consider how to keep apps and workloads available in case of
disaster.
Set up disaster recovery for IaaS apps
For workloads running on Azure IaaS VMs and Azure storage consider these solutions:
Azure Site Recovery: Orchestrates replication of Azure VMs from a primary to a secondary region. When
outages occur, you fail over from the primary region to the secondary, and users can continue to access apps.
When things are back to normal, you fail back to the primary region.
Azure storage: Azure provides built-in resilience and high availability for different types of storage:
Azure Site Recovery
Azure Site Recovery is the primary Azure service for ensuring that Azure VMs can be brought online and VM apps
made available when outages occur. Site Recovery replicates VMs from a primary to secondary Azure region.
When disaster strikes, you fail VMs over from the primary region, and continue accessing them as normal in the
secondary region. When operations return to normal, you can fail back VMs to the primary region.
Site Recovery
Learn more:
Review disaster recovery scenarios for Azure VMs.
Learn how to set up disaster recovery for an Azure VM after migration.
Azure storage
Azure storage is replicated for built-in resilience and high availability.
Geo-redundant storage (GRS): Protects against a region-wide outage, with at least 99.99999999999999%
(16 9's) durability of objects over a given year.
Storage data replicates to the secondary region with which your primary region is paired.
If the primary region goes down, and Microsoft initiates a failover to the secondary region, you'll have
read access to your data.
Read access geo-redundant storage (RA-GRS): Protects against a region-wide outage.
Storage data replicates to the secondary region.
You have guaranteed read access to replicated data in the secondary region, regardless of whether
Microsoft initiates a failover. Even if two or more datacenters in the primary region have an issue, your
data is still available in a geographically separated region.
Zone-redundant storage (ZRS): Protects against datacenter failure.
ZRS replicates data synchronously across three storage clusters in a single region. Clusters are
physically separated, and each is located in its own availability zone.
If disaster occurs, your storage will still be available. ZRS should be the minimum target for mission-
critical workloads.
Learn more:
Learn about Azure storage replication.
Set up disaster recovery for PaaS workloads
Let's consider disaster recovery options for our PaaS workload examples.
Disaster recovery of Azure SQL Server
There are a number of different options, each affecting data loss, recovery time, and cost.
You can use failover groups and active geo-replication to provide resilience against regional outages and
catastrophic failures.
Active geo-replication: Deploy active geo-replication for quick disaster recovery if a datacenter outage
occurs, or a connection can't be made to a primary database.
Geo-replication continually creates readable replicas of your database in up to four secondaries in the
same or different regions.
In an outage, you fail over to one of the secondary regions, and bring your database back online.
Auto-failover groups: Auto-failover groups extend active geo-replication with transparent failover of multiple
databases.
An auto-failover group provides a powerful abstraction of active geo-replication with group level
database replication and automatic failover.
You create a failover group that contains a primary server hosting one or more primary databases, a
secondary server hosting read-only replicas of the primary databases, listeners that point to each server,
and an automatic failover policy.
The specified listener endpoints remove the need to change the SQL connection string after failover.
Geo-restore:
Geo-restore allows you to recover a database to a different region. The automated backups of all Azure
SQL databases are replicated to a secondary region in the background, and geo-restore always restores the
database from the copy of the backup files stored in the secondary region.
Zone-redundant databases provide built-in support for Azure availability zones.
Zone-redundant databases enhance high availability for Azure SQL Server in the event of a data center
failure.
With zone-redundancy, you can place redundant database replicas within different availability zones in a
region.
Geo-replication
Learn more:
Learn about high availability for Azure SQL Server.
Read Azure SQL Databases 101 for disaster recovery.
Get an overview of active geo-replication and failover groups.
Learn about designing for disaster recovery.
Get best practices for failover groups.
Get best practices for security after geo-restore or failover.
Learn about zone redundancy.
Learn how to perform a disaster recovery drill for SQL database.
Disaster recovery for Azure Functions
If the compute infrastructure in Azure fails, an Azure function app might become unavailable.
To minimize the possibility of such downtime, use two function apps deployed to different regions.
Azure Traffic Manager can be configured to detect problems in the primary function app, and automatically
redirect traffic to the function app in the secondary region.
Traffic Manager with geo-redundant storage allows you to have the same function in multiple regions, in case
of regional failure.
Traffic Manager
Learn more:
Learn about disaster recovery for Azure apps.
Learn about disaster recovery and geo-distribution for durable Azure functions.
Best practice: Use managed disks and availability sets
Azure uses availability sets to logically group VMs together, and to isolate VMs in a set from other resources. VMs
in an availability set are spread across multiple fault domains with separate subsystems, to protect against local
failures, and are also spread across multiple update domains so that not all VMs in a set reboot at the same time.
Azure managed disks simplify disk management for Azure IaaS VMs by managing the storage accounts
associated with the VM disks.
We recommend that you use managed disks where possible. You only have to specify the type of storage you
want to use and the size of disk you need, and Azure creates and manages the disk for you, behind the scenes.
You can convert existing disks to managed.
You should create VMs in availability sets for high resilience and availability. When planned or unplanned
outages occur, availability sets ensure that at least one of your VMs in the set continues to be available.
Managed disks
Learn more:
Get an overview of managed disks.
Learn about converting disks to managed.
Learn how to manage the availability of Windows VMs in Azure.
Best practice: Monitor resource usage and performance
You might have moved your workloads to Azure for its immense scaling capabilities. However, moving your
workload doesn't mean that Azure will automatically implement scaling without your input. As an example:
If your marketing organization pushes a new TV advertisement that drives 300% more traffic, this could cause
site availability issues. Your newly migrated workload might hit assigned limits and crash.
Another example might be a distributed denial-of-service (DDoS) attack on your migrated workload. In this
case you might not want to scale, but rather to prevent the source of the attacks from reaching your resources.
These two cases have different resolutions, but for both you need an insight into what's happening with usage and
performance monitoring.
Azure Monitor can help surface these metrics, and provide response with alerts, autoscaling, event hubs, logic
apps and more.
In addition to Azure monitoring, you can integrate your third-party SIEM application to monitor the Azure logs
for auditing and performance events.
Azure Monitor
Learn more:
Learn about Azure Monitor.
Get best practices for monitoring and diagnostics.
Learn about autoscaling.
Learn how to route Azure data to a SIEM tool.
Best practice: Enable diagnostic logging
Azure resources generate a fair number of logging metrics and telemetry data.
By default, most resource types don't have diagnostic logging enabled.
By enabling diagnostic logging across your resources, you can query logging data, and build alerts and
playbooks based on it.
When you enable diagnostic logging, each resource will have a specific set of categories. You select one or more
logging categories, and a location for the log data. Logs can be sent to a storage account, event hub, or to Azure
Monitor logs.
Diagnostic logging
Learn more:
Learn about collecting and consuming log data.
Learn what's supported for diagnostic logging.
Best practice: Set up alerts and playbooks
With diagnostic logging enabled for Azure resources, you can start to use logging data to create custom alerts.
Alerts proactively notify you when conditions are found in your monitoring data. You can then address issues
before system users notice them. You can alert on things like metric values, log search queries, activity log
events, platform health, and website availability.
When alerts are triggered, you can run a Logic App Playbook. A playbook helps you to automate and
orchestrate a response to a specific alert. Playbooks are based on Azure Logic Apps. You can use Logic App
templates to create playbooks, or create your own.
As a simple example, you can create an alert that triggers when a port scan happens against a network security
group. You can set up a playbook that runs and locks down the IP address of the scan origin.
Another example might be an app with a memory leak. When the memory usage gets to a certain point, a
playbook can recycle the process.
Alerts
Learn more:
Learn about alerts.
Learn about security playbooks that respond to Security Center alerts.
Best practice: Use the Azure dashboard
The Azure portal is a web-based unified console that allows you to build, manage, and monitor everything from
simple web apps to complex cloud applications. It includes a customizable dashboard and accessibility options.
You can create multiple dashboards and share them with others who have access to your Azure subscriptions.
With this shared model, your team has visibility into the Azure environment, allowing them to be proactive
when managing systems in the cloud.
Azure dashboard
Learn more:
Learn how to create a dashboard.
Learn about dashboard structure.
Best practice: Understand support plans
At some point you'll need to collaborate with your support staff, or with Microsoft support staff. Having a set of
policies and procedures for support during scenarios such as disaster recovery is vital, and your admins and
support staff should be trained to implement those policies.
In the unlikely event that an Azure service issue impacts your workload, admins should know how to submit a
support ticket to Microsoft in the most appropriate and efficient way.
Familiarize yourself with the various support plans offered for Azure. They range from Developer support,
suited to trial and non-production environments, to Premier support with a response time of less than 15 minutes.
Support plans
Learn more:
Get an overview of Azure support plans.
Learn about service level agreements (SLAs).
Best practice: Manage updates
Keeping Azure VMs updated with the latest operating system and software updates is a massive chore. The ability
to surface all VMs, to figure out which updates they need, and to automatically push those updates is extremely
valuable.
You can use Update Management in Azure Automation to manage operating system updates for Windows and
Linux machines deployed in Azure, on-premises, and with other cloud providers.
Use Update Management to quickly assess the status of available updates on all agent computers, and manage
update installation.
You can enable Update Management for VMs directly from an Azure Automation account. You can also update
a single VM from the VM page in the Azure portal.
In addition, Azure VMs can be registered with System Center Configuration Manager. You could then migrate
the Configuration Manager workload to Azure, and do reporting and software updates from a single web
interface.
Updates
Learn more:
Learn about update management in Azure.
Learn how to integrate Configuration Manager with update management.
Frequently asked questions about Configuration Manager in Azure.
Implement a change management process
As with any production system, making any type of change can impact your environment. A change management
process that requires requests to be submitted in order to make changes to production systems is a valuable
addition to your migrated environment.
You can build best practice frameworks for change management to raise awareness in administrators and
support staff.
You can use Azure Automation to help with configuration management and change tracking for your migrated
workflows.
When enforcing a change management process, you can use audit logs to link Azure change logs to existing
(or missing) change requests. If you see a change made without a corresponding change request,
you can investigate what went wrong in the process.
Azure has a Change Tracking solution in Azure automation:
The solution tracks changes to Windows and Linux software and files, Windows registry keys, Windows
services, and Linux daemons.
Changes on monitored servers are sent to the Azure Monitor service in the cloud for processing.
Logic is applied to the received data and the cloud service records the data.
On the Change Tracking dashboard, you can easily see the changes that were made in your server
infrastructure.
Change management
Learn more:
Learn about Change Tracking.
Learn about Azure Automation capabilities.
Next steps
Review other best practices:
Best practices for networking after migration.
Best practices for cost management after migration.
Best practices to set up networking for workloads
migrated to Azure
2/4/2019 • 27 minutes to read • Edit Online
As you plan and design for migration, in addition to the migration itself, one of the most critical steps is the design
and implementation of Azure networking. This article describes best practices for networking when migrating to
IaaS and PaaS implementations in Azure.
IMPORTANT
The best practices and opinions described in this article are based on the Azure platform and service features available at the
time of writing. Features and capabilities change over time. Not all recommendations might be applicable for your
deployment, so select those that work for you.
Design virtual networks
Azure provides virtual networks (VNets):
Azure resources communicate privately, directly, and securely with each other over VNets.
You can configure endpoint connections on VNets for VMs and services that require internet communication.
A VNet is a logical isolation of the Azure cloud that's dedicated to your subscription.
You can implement multiple VNets within each Azure subscription and Azure region.
Each VNet is isolated from other VNets.
VNets can contain private IP addresses, defined in RFC 1918 and expressed in CIDR notation, as well as public
IP addresses. Public IP addresses specified in a VNet's address space are not directly accessible from the internet.
VNets can connect to each other using VNet peering. Connected VNets can be in the same or different regions.
Thus resources in one VNet can connect to resources in other VNets.
By default, Azure routes traffic between subnets within a VNet, connected VNets, on-premises networks, and
the internet.
There are a number of things you need to think about when planning your VNet topology, including how to
arrange IP address spaces, how to implement a hub-spoke network, how to segment VNets into subnets, how to
set up DNS, and how to implement Azure availability zones.
Best practice: Plan IP addressing
When you create VNets as part of your migration, it's important to plan out your VNet IP address space.
You should assign an address space that isn't larger than a CIDR range of /16 for each VNet. A /16 allows for
65,536 IP addresses, and assigning a larger address space (a prefix shorter than /16) would result in IP addresses
going to waste. It's important not to waste IP addresses, even if they're in the private ranges defined by RFC 1918.
The VNet address space shouldn't overlap with on-premises network ranges, and Network Address Translation
(NAT) shouldn't be used. Overlapping addresses can cause networks that can't be connected, and routing that
doesn't work properly. If networks overlap, you'll need to redesign the network, or fall back on NAT.
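The sizing and overlap rules above can be checked before deployment with Python's standard ipaddress module. The address ranges below are examples only, not recommendations:

```python
# Checking the VNet planning rules above: a /16 yields 65,536 addresses,
# and candidate VNet ranges can be tested for overlap against on-premises
# ranges before anything is deployed.
import ipaddress

vnet = ipaddress.ip_network("10.245.0.0/16")
assert vnet.num_addresses == 65536

on_premises = ipaddress.ip_network("10.0.0.0/16")
assert not vnet.overlaps(on_premises)           # safe to connect

bad_vnet = ipaddress.ip_network("10.0.128.0/17")
assert bad_vnet.overlaps(on_premises)           # would need redesign or NAT
```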
Learn more:
Get an overview of Azure VNets.
Read the networking FAQ.
Learn about networking limitations.
Best practice: Implement a hub-spoke network topology
A hub-spoke network topology isolates workloads while sharing services such as identity and security.
The hub is an Azure VNet that acts as a central point of connectivity.
The spokes are VNets that connect to the hub VNet using VNet peering.
Shared services are deployed in the hub, while individual workloads are deployed as spokes.
Consider the following:
Implementing a hub and spoke topology in Azure centralizes common services such as connections to on-
premises networks, firewalls, and isolation between VNets. The hub VNet provides a central point of
connectivity to on-premises networks, and a place to host services used by workloads hosted in spoke VNets.
A hub and spoke configuration is typically used by larger enterprises. Smaller networks might consider a
simpler design to save on costs and complexity.
Spoke VNets can be used to isolate workloads, with each spoke managed separately from other spokes. Each
workload can include multiple tiers, and multiple subnets that are connected with Azure load balancers.
Hub and spoke VNets can be implemented in different resource groups, and even in different subscriptions.
When you peer virtual networks in different subscriptions, the subscriptions can be associated with the same, or
different, Azure Active Directory (Azure AD) tenants. This allows for decentralized management of each workload,
while sharing services maintained in the hub network.
Hub and spoke topology
Learn more:
Read about a hub and spoke topology.
Get network recommendations for running Azure Windows and Linux VMs.
Learn about VNet peering.
Best practice: Design subnets
To provide isolation within a VNet, you segment it into one or more subnets, and allocate a portion of the VNet's
address space to each subnet.
You can create multiple subnets within each VNet.
By default, Azure routes network traffic between all subnets in a VNet.
Your subnet decisions are based on your technical and organizational requirements.
You create subnets using CIDR notation.
When deciding on network range for subnets, it's important to note that Azure retains five IP addresses from
each subnet that can't be used. For example, if you create the smallest available subnet of /29 (with eight IP
addresses), Azure will retain five addresses, so you only have three usable addresses that can be assigned to
hosts on the subnet.
In most cases, using /28 as the smallest subnet is recommended.
Example
The table shows an example of a VNet with an address space of 10.245.16.0/20 segmented into subnets, for a
planned migration.
SUBNET          CIDR              ADDRESSES   USE
DEV-FE-EUS2     10.245.16.0/22    1019        Frontend/web tier VMs
DEV-APP-EUS2    10.245.20.0/22    1019        App-tier VMs
DEV-DB-EUS2     10.245.24.0/23    507         Database VMs
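The ADDRESSES column above reflects the five addresses Azure reserves in every subnet. A quick sketch that reproduces those numbers:

```python
# Reproducing the "usable addresses" column of the table above: Azure
# reserves five IP addresses in every subnet, so usable = total - 5.
import ipaddress

def usable_addresses(cidr):
    return ipaddress.ip_network(cidr).num_addresses - 5

assert usable_addresses("10.245.16.0/22") == 1019   # DEV-FE-EUS2
assert usable_addresses("10.245.24.0/23") == 507    # DEV-DB-EUS2
assert usable_addresses("192.168.0.0/29") == 3      # smallest subnet: /29
```

As the text notes, the three usable addresses in a /29 are usually too few in practice, which is why /28 is the recommended minimum.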
Learn more:
Learn about designing subnets.
Learn how a fictitious company (Contoso) prepared their networking infrastructure for migration.
Best practice: Set up a DNS server
Azure adds a DNS server by default when you deploy a VNet. This allows you to rapidly build VNets and deploy
resources. However, this DNS server only provides services to the resources on that VNet. If you want to connect
multiple VNets together, or connect to an on-premises server from VNets, you need additional name resolution
capabilities. For example, you might need Active Directory to resolve DNS names between virtual networks. To do
this, you deploy your own custom DNS server in Azure.
DNS servers in a VNet can forward DNS queries to the recursive resolvers in Azure. This enables you to
resolve host names within that VNet. For example, a domain controller running in Azure can respond to
DNS queries for its own domains, and forward all other queries to Azure.
DNS forwarding allows VMs to see both your on-premises resources (via the domain controller) and
Azure-provided host names (using the forwarder). Access to the recursive resolvers in Azure is provided
using the virtual IP address 168.63.129.16.
DNS forwarding also enables DNS resolution between VNets, and allows on-premises machines to resolve
host names provided by Azure.
To resolve a VM host name, the DNS server VM must reside in the same VNet, and be configured to
forward host name queries to Azure.
Because the DNS suffix is different in each VNet, you can use conditional forwarding rules to send DNS
queries to the correct VNet for resolution.
When you use your own DNS servers, you can specify multiple DNS servers for each VNet. You can also
specify multiple DNS servers per network interface (for Azure Resource Manager), or per cloud service (for
the classic deployment model).
DNS servers specified for a network interface or cloud service take precedence over DNS servers specified
for the VNet.
In the Azure Resource Manager deployment model, you can specify DNS servers for a VNet and a network
interface, but the best practice is to use the setting only on VNets.
DNS servers for VNet
Learn more:
Learn about name resolution when you use your own DNS server.
Learn about DNS naming rules and restrictions.
Best practice: Set up availability zones
Availability zones increase high availability, protecting your apps and data from datacenter failures.
Availability Zones are unique physical locations within an Azure region.
Each zone is made up of one or more datacenters equipped with independent power, cooling, and
networking.
To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
The physical separation of availability zones within a region protects applications and data from datacenter
failures.
Zone-redundant services replicate your applications and data across availability zones to protect from
single points of failure. With availability zones, Azure offers an SLA of 99.99% VM uptime.
Availability zone
You can plan and build high-availability into your migration architecture by colocating compute, storage,
networking, and data resources within a zone, and replicating them in other zones. Azure services that
support availability zones fall into two categories:
Zonal services: You associate a resource with a specific zone (for example, VMs, managed disks, or IP
addresses).
Zone-redundant services: The resource replicates automatically across zones. For example, zone-
redundant storage, Azure SQL Database.
You can deploy a standard Azure load balancer with internet-facing workloads or app tiers, to provide zonal
fault tolerance.
Load balancer
Learn more:
Get an overview of availability zones.
Design hybrid cloud networking
For a successful migration, it's critical to connect on-premises corporate networks to Azure. This creates an always-
on connection known as a hybrid-cloud network, where services are provided from the Azure cloud to corporate
users. There are two options for creating this type of network:
Site-to-site VPN: You establish a site-to-site connection between your compatible on-premises VPN device
and an Azure VPN gateway that's deployed in a VNet. Any authorized on-premises resource can access VNets.
Site-to-site communications are sent through an encrypted tunnel over the internet.
Azure ExpressRoute: You establish an Azure ExpressRoute connection between your on-premises network
and Azure, through an ExpressRoute partner. This connection is private, and traffic doesn't go over the internet.
Learn more:
Learn more about hybrid-cloud networking.
Best practice: Implement a highly available site-to-site VPN
To implement a site-to-site VPN, you set up a VPN gateway in Azure.
A VPN gateway is a specific type of VNet gateway that's used to send encrypted traffic between an Azure VNet
and an on-premises location over the public Internet.
You can also use a VPN gateway to send encrypted traffic between Azure VNets over the Microsoft network.
Each VNet can have only one VPN gateway.
You can create multiple connections to the same VPN gateway. When you create multiple connections, all VPN
tunnels share the available gateway bandwidth.
Every Azure VPN gateway consists of two instances in an active-standby configuration.
If planned maintenance or an unplanned disruption affects the active instance, failover occurs: the standby
instance takes over automatically, and resumes the site-to-site or VNet-to-VNet connection.
The switchover causes a brief interruption.
For planned maintenance, connectivity should be restored within 10 to 15 seconds.
For unplanned issues, the connection recovery will be longer, about one to 1.5 minutes in the worst case.
Point-to-site (P2S) VPN client connections to the gateway will be disconnected, and users will need
to reconnect from client machines.
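The shared-bandwidth point above, where all tunnels on one gateway split its aggregate throughput, can be sketched as a rough even-split estimate. The 650 Mbps figure below is a hypothetical SKU throughput for illustration, not a quoted limit:

```python
# Illustrating shared gateway bandwidth: all site-to-site tunnels on one
# VPN gateway share that gateway's aggregate throughput. The throughput
# value is a hypothetical example, not a published SKU figure.

def per_tunnel_bandwidth_mbps(gateway_throughput_mbps, active_tunnels):
    """Rough even-split estimate of bandwidth available per tunnel."""
    if active_tunnels == 0:
        raise ValueError("no active tunnels")
    return gateway_throughput_mbps / active_tunnels

assert per_tunnel_bandwidth_mbps(650, 1) == 650.0
assert per_tunnel_bandwidth_mbps(650, 5) == 130.0  # each tunnel gets less
```

In practice traffic isn't split evenly, so treat this only as a reminder to size the gateway SKU for the total number of connections.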
When setting up a site-to-site VPN, you do the following:
You need a VNet whose address range doesn't overlap with the on-premises network to which the VPN will
connect.
You create a gateway subnet in the network.
You create a VPN gateway, and specify the gateway type (VPN) and whether the gateway is policy-based or
route-based. A route-based VPN is recommended, as it's more capable and future-proof.
You create a local network gateway on-premises, and configure your on-premises VPN device.
You create a failover site-to-site VPN connection between the VNet gateway and the on-premises device. Using
route-based VPN allows for either active-passive or active-active connections to Azure. Route-based also
supports both site-to-site (from any computer) and point-to-site (from a single computer) connections
concurrently.
You specify the gateway SKU that you want to use. This will depend on your workload requirements,
throughputs, features, and SLAs.
Border gateway protocol (BGP) is an optional feature you can use with Azure ExpressRoute and route-based
VPN gateways to propagate your on-premises BGP routes to your VNets.

Site-to-site VPN


Learn more:
Review compatible on-premises VPN devices.
Get an overview of VPN gateways.
Learn about highly available VPN connections.
Learn about planning and designing a VPN gateway.
Review VPN gateway settings.
Review gateway SKUs.
Read about setting up BGP with Azure VPN gateways.
Best practice: Configure a gateway subnet for VPN gateways
When you create a VPN gateway in Azure, you must use a special subnet named GatewaySubnet. When creating
this subnet note these best practices:
The gateway subnet has a maximum prefix length of /29 (for example,
10.119.255.248/29). The current recommendation is to use a prefix length of /27 (for example,
10.119.255.224/27).
When you define the address space of the gateway subnet, use the very last part of the VNet address space.
Never deploy any VMs or other devices, such as Application Gateway, to the gateway subnet.
Don't assign a network security group (NSG) to this subnet; doing so causes the gateway to stop functioning.
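Assuming the recommendations above, a short Python sketch using the standard ipaddress module can derive a /27 gateway subnet from the very last part of a VNet's address space (the VNet range here matches the article's example):

```python
import ipaddress

def last_subnet(vnet_cidr: str, prefix: int = 27) -> str:
    """Return the last /27 of a VNet address space, the recommended
    placement for GatewaySubnet."""
    vnet = ipaddress.ip_network(vnet_cidr)
    return str(list(vnet.subnets(new_prefix=prefix))[-1])

print(last_subnet("10.119.0.0/16"))  # -> 10.119.255.224/27
```

This keeps the gateway subnet out of the way of address ranges you might later carve out for workload subnets.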
Learn more:
Use this tool to determine your IP address space.

Best practice: Implement Azure Virtual WAN for branch offices


For multiple VPN connections, Azure Virtual WAN is a networking service that provides optimized and automated
branch-to-branch connectivity through Azure.
Virtual WAN lets you connect and configure branch devices to communicate with Azure. This can be done
manually, or by using preferred provider devices through a Virtual WAN partner.
Using preferred provider devices allows for simple use, connectivity, and configuration management.
The Azure WAN built-in dashboard provides instant troubleshooting insights that save time, and provide an
easy way to track large-scale site-to-site connectivity.
Learn more: Learn about Azure Virtual WAN.
Best practice: Implement ExpressRoute for mission critical connections
The Azure ExpressRoute service lets you extend your on-premises infrastructure into the Microsoft cloud by
creating private connections between the virtual Azure datacenter and on-premises networks.
ExpressRoute connections can be made over an any-to-any (IP VPN) network, a point-to-point Ethernet network, or
through a connectivity provider. They don't go over the public internet.
ExpressRoute connections offer higher security, greater reliability, and faster speeds (up to 10 Gbps), along with
consistent latency.
ExpressRoute is useful for virtual datacenters, as customers can get the benefits of compliance rules associated
with private connections.
With ExpressRoute Direct, you can connect directly to Microsoft routers at 100 Gbps for larger bandwidth
needs.
ExpressRoute uses BGP to exchange routes between on-premises networks, Azure instances, and Microsoft
public addresses.
Deploying ExpressRoute connections usually involves engaging with an ExpressRoute service provider. For a quick
start, it's common to initially use a site-to-site VPN to establish connectivity between the virtual datacenter and
on-premises resources, and then migrate to an ExpressRoute connection when a physical interconnection with your
service provider is established.
Learn more:
Get an overview of ExpressRoute.
Learn about ExpressRoute Direct.
Best practice: Optimize ExpressRoute routing with BGP communities
When you have multiple ExpressRoute circuits, you have more than one path to connect to Microsoft. As a result,
suboptimal routing can happen: your traffic might take a longer path to reach Microsoft, and Microsoft's traffic
might take a longer path to reach your network. The longer the network path, the higher the latency. Latency has a
direct impact on app performance and user experience.
Example
Let's take an example.
You have two offices in the US, one in Los Angeles and one in New York.
Your offices are connected on a WAN, which can be either your own backbone network or your service
provider's IP VPN.
You have two ExpressRoute circuits, one in US West and one in US East, that are also connected on the WAN.
As a result, you have two paths to connect to the Microsoft network.
Problem
Now imagine you have an Azure deployment (for example, Azure App Service) in both US West and US East.
You want users in each office to access their nearest Azure services for an optimal experience.
Thus you want to connect users in Los Angeles to Azure US West, and users in New York to Azure US East.
This works for east coast users, but not for those on the west coast. The problem is as follows:
On each ExpressRoute circuit, we advertise both prefixes in Azure US East (23.100.0.0/16) and Azure US
West (13.100.0.0/16).
Without knowing which prefix is from which region, prefixes aren't treated differently.
Your WAN network can assume that both prefixes are closer to US East than US West, and thus route
users from both offices to the ExpressRoute circuit in US East, providing a less than optimal experience
for users in the Los Angeles office.
BGP communities unoptimized connection
Solution
To optimize routing for both office users, you need to know which prefix is from Azure US West and which is from
Azure US East. You can encode this information by using BGP community values.
You assign a unique BGP community value to each Azure region, for example, 12076:51004 for US East and
12076:51006 for US West.
Now that it's clear which prefix belongs to which Azure region, you can configure a preferred ExpressRoute
circuit.
Because you're using BGP to exchange routing info, you can use BGP's local preference to influence routing.
In our example, you assign a higher local preference value to 13.100.0.0/16 in US West than in US East, and
similarly, a higher local preference value to 23.100.0.0/16 in US East than in US West.
This configuration ensures that when both paths to Microsoft are available, users in Los Angeles connect to
Azure US West using the west circuit, and users in New York connect to Azure US East using the east circuit.
Routing is optimized on both sides.
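The local-preference selection described above can be illustrated with a minimal sketch. The circuit names and preference values are hypothetical, and real BGP best-path selection considers more attributes than local preference alone; this only models the tie-breaking step the example relies on.

```python
def best_path(paths):
    """BGP prefers the path with the highest local preference."""
    return max(paths, key=lambda p: p["local_pref"])

# Candidate paths for the US West prefix (13.100.0.0/16), with the higher
# local preference assigned to the west circuit as in the example.
paths_us_west_prefix = [
    {"circuit": "ExpressRoute US West", "local_pref": 200},
    {"circuit": "ExpressRoute US East", "local_pref": 100},
]
print(best_path(paths_us_west_prefix)["circuit"])  # -> ExpressRoute US West
```

Swapping the preference values for the 23.100.0.0/16 prefix would steer New York users onto the east circuit in the same way.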
BGP communities optimized connection
Learn more:
Learn about optimizing routing

Securing VNets
The responsibility for securing VNets is shared between Microsoft and you. Microsoft provides many networking
features, as well as services that help keep resources secure. When designing security for VNets, there are a
number of best practices you should follow, including implementing a perimeter network, using filtering and
security groups, securing access to resources and IP addresses, and implementing attack protection.
Learn more:
Get an overview of best practices for network security.
Learn how to design for secure networks.

Best practice: Implement an Azure perimeter network


Although Microsoft invests heavily in protecting the cloud infrastructure, you must also protect your cloud services
and resource groups. A multilayered approach to security provides the best defense. Putting a perimeter network
in place is an important part of that defense strategy.
A perimeter network protects internal network resources from an untrusted network.
It's the outermost layer that's exposed to the internet. It generally sits between the internet and the enterprise
infrastructure, usually with some form of protection on both sides.
In a typical enterprise network topology, the core infrastructure is heavily fortified at the perimeters, with
multiple layers of security devices. The boundary of each layer consists of devices and policy enforcement
points.
Each layer can include a combination of network security solutions, including firewalls, denial of service
(DoS) prevention, intrusion detection/intrusion prevention systems (IDS/IPS), and VPN devices.
Policy enforcement on the perimeter network can use firewall policies, access control lists (ACLs), or specific
routing.
As incoming traffic arrives from the internet, it's intercepted and handled by a combination of defense solutions
that block attacks and harmful traffic, while allowing legitimate requests into the network.
Incoming traffic can route directly to resources in the perimeter network. The perimeter network resource can
then communicate with other resources deeper in the network, moving traffic forward into the network after
validation.
The following figure shows an example of a single subnet perimeter network in a corporate network, with two
security boundaries.

Perimeter network deployment


Learn more:
Learn about deploying a perimeter network between Azure and your on-premises datacenter.

Best practice: Filter VNet traffic with NSGs


Network security groups (NSGs) contain multiple inbound and outbound security rules that filter traffic going to
and from resources. Filtering can be by source and destination IP address, port, and protocol.
NSGs contain security rules that allow or deny inbound network traffic to (or outbound network traffic from)
several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
NSG rules are evaluated by priority using five-tuple information (source, source port, destination, destination
port, and protocol) to allow or deny the traffic.
A flow record is created for existing connections. Communication is allowed or denied based on the connection
state of the flow record.
A flow record allows an NSG to be stateful. For example, if you specify an outbound security rule to any
address over port 80, you don't need an inbound security rule to respond to the outbound traffic. You only need
to specify an inbound security rule if communication is initiated externally.
The opposite is also true. If inbound traffic is allowed over a port, you don't need to specify an outbound
security rule to respond to traffic over the port.
Existing connections aren't interrupted when you remove a security rule that enabled the flow. Traffic flows are
interrupted when connections are stopped, and no traffic is flowing in either direction, for at least a few
minutes.
When creating NSGs, create as few as possible, but as many as necessary.
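As a rough illustration of priority-ordered, five-tuple rule evaluation, consider the sketch below. This is not the actual NSG engine (and omits the stateful flow-record behavior); the rules, addresses, and ports are invented for the example.

```python
def evaluate(rules, flow):
    """Evaluate NSG-style rules in priority order (lowest number first);
    the first matching rule decides, as with Azure NSGs."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if all(rule[k] in ("*", flow[k]) for k in
               ("src", "src_port", "dst", "dst_port", "protocol")):
            return rule["access"]
    return "Deny"  # treat unmatched traffic as denied in this toy model

rules = [
    {"priority": 100, "src": "*", "src_port": "*", "dst": "10.0.1.4",
     "dst_port": "80", "protocol": "TCP", "access": "Allow"},
    {"priority": 200, "src": "*", "src_port": "*", "dst": "*",
     "dst_port": "*", "protocol": "*", "access": "Deny"},
]
flow = {"src": "203.0.113.7", "src_port": "50000", "dst": "10.0.1.4",
        "dst_port": "80", "protocol": "TCP"}
print(evaluate(rules, flow))  # -> Allow
```

Because the priority-100 rule matches first, the broader priority-200 deny never applies to this flow; reversing the priorities would flip the outcome.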
Best practice: Secure north/south and east/west traffic
When securing VNets, it's important to consider attack vectors.
Using only subnet NSGs simplifies your environment, but only secures traffic into your subnet. This is known
as north/south traffic.
Traffic between VMs on the same subnet is known as east/west traffic.
It's important to leverage both forms of protection, so that if an attacker gains access from the outside, they'll be
stopped when trying to attack machines located in the same subnet.
Use service tags on NSGs
A service tag represents a group of IP address prefixes. Using a service tag helps minimize complexity when you
create NSG rules.
You can use service tags instead of specific IP addresses when you create rules.
Microsoft manages the address prefixes associated with a service tag, and automatically updates the service tag
as addresses change.
You can't create your own service tag, or specify which IP addresses are included within a tag.
Service tags take the manual work out of assigning a rule to groups of Azure services. For example, if you want to
allow a VNet subnet containing web servers access to an Azure SQL Database, you could create an outbound rule
to port 1433, and use the Sql service tag.
This Sql tag denotes the address prefixes of the Azure SQL Database and Azure SQL Data Warehouse
services.
If you specify Sql as the value, traffic is allowed or denied to Sql.
If you only want to allow access to Sql in a specific region, you can specify that region. For example, if you want
to allow access only to Azure SQL Database in the East US region, you can specify Sql.EastUS as a service tag.
The tag represents the service, but not specific instances of the service. For example, the tag represents the
Azure SQL Database service, but doesn't represent a particular SQL database or server.
All address prefixes represented by this tag are also represented by the Internet tag.
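Conceptually, a service tag is just a named, Microsoft-maintained set of address prefixes. The sketch below illustrates the idea with invented prefixes; the real prefixes behind a tag like Sql.EastUS are managed by Microsoft and change over time.

```python
import ipaddress

# Hypothetical prefix sets for illustration only; in Azure, Microsoft
# maintains the real prefixes behind each service tag automatically.
SERVICE_TAGS = {
    "Sql.EastUS": ["191.238.64.0/26", "40.121.158.0/24"],
}

def matches_tag(ip: str, tag: str) -> bool:
    """Check whether a destination IP falls under a service tag's prefixes."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p) for p in SERVICE_TAGS[tag])

print(matches_tag("40.121.158.30", "Sql.EastUS"))  # -> True
```

An NSG rule that names the tag effectively delegates this prefix bookkeeping to the platform, which is why tags remove the manual work of tracking service IP ranges.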
Learn more:
Read about NSGs.
Review the service tags available for NSGs.

Best practice: Use application security groups


Application security groups enable you to configure network security as a natural extension of an app structure.
You can group VMs and define network security policies based on application security groups.
Application security groups enable you to reuse your security policy at scale without manual maintenance of
explicit IP addresses.
Application security groups handle the complexity of explicit IP addresses and multiple rule sets, allowing you
to focus on your business logic.
Example
Application security group example

NETWORK INTERFACE APPLICATION SECURITY GROUP

NIC1 AsgWeb

NIC2 AsgWeb

NIC3 AsgLogic

NIC4 AsgDb

In our example, each network interface belongs to only one application security group, but in fact an
interface can belong to multiple groups, in accordance with Azure limits.
None of the network interfaces have an associated NSG. NSG1 is associated to both subnets and contains
the following rules.

RULE NAME PURPOSE DETAILS

Allow-HTTP-Inbound-Internet
Purpose: Allow traffic from the internet to the web servers. Inbound traffic from the internet is denied by the
DenyAllInbound default security rule, so no additional rule is needed for the AsgLogic or AsgDb application
security groups.
Details: Priority: 100. Source: internet. Source port: *. Destination: AsgWeb. Destination port: 80.
Protocol: TCP. Access: Allow.

Deny-Database-All
Purpose: The AllowVNetInBound default security rule allows all communication between resources in the same
VNet, so this rule is needed to deny traffic from all resources.
Details: Priority: 120. Source: *. Source port: *. Destination: AsgDb. Destination port: 1433.
Protocol: All. Access: Deny.

Allow-Database-BusinessLogic
Purpose: Allow traffic from the AsgLogic application security group to the AsgDb application security group.
The priority for this rule is higher than the Deny-Database-All rule, and it's processed before that rule, so
traffic from the AsgLogic application security group is allowed, and all other traffic is blocked.
Details: Priority: 110. Source: AsgLogic. Source port: *. Destination: AsgDb. Destination port: 1433.
Protocol: TCP. Access: Allow.

The rules that specify an application security group as the source or destination are only applied to the
network interfaces that are members of the application security group. If the network interface is not a
member of an application security group, the rule is not applied to the network interface, even though the
network security group is associated to the subnet.
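The example's rule processing can be mimicked in a few lines. This is an illustrative model of ASG membership matching, not Azure's implementation, and it only covers the two database rules above.

```python
# NIC -> application security group membership from the example above.
ASG_MEMBERS = {"NIC1": "AsgWeb", "NIC2": "AsgWeb",
               "NIC3": "AsgLogic", "NIC4": "AsgDb"}

RULES = [  # subset of NSG1, evaluated in priority order
    {"priority": 110, "src": "AsgLogic", "dst": "AsgDb", "port": 1433, "access": "Allow"},
    {"priority": 120, "src": "*",        "dst": "AsgDb", "port": 1433, "access": "Deny"},
]

def decide(src_nic, dst_nic, port):
    """First rule (lowest priority number) whose source/destination ASGs
    match decides the flow, mirroring how NSG1 treats the example traffic."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if (rule["src"] in ("*", ASG_MEMBERS[src_nic])
                and rule["dst"] == ASG_MEMBERS[dst_nic]
                and rule["port"] == port):
            return rule["access"]
    return "Deny"

print(decide("NIC3", "NIC4", 1433))  # AsgLogic -> AsgDb: Allow
print(decide("NIC1", "NIC4", 1433))  # AsgWeb  -> AsgDb: Deny
```

The key point the model captures: rules written against ASGs apply to whatever interfaces are members, so adding a NIC to AsgLogic grants it database access without touching any rule.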
Learn more:
Learn about application security groups.
Best practice: Secure access to PaaS using VNet service endpoints
VNet service endpoints extend your VNet private address space and identity to Azure services over a direct
connection.
Endpoints allow you to secure critical Azure service resources to your VNets only. Traffic from your VNet to the
Azure service always remains on the Microsoft Azure backbone network.
VNet private address spaces can overlap, so they can't be used to uniquely identify traffic originating
from a VNet.
After service endpoints are enabled in your VNet, you can secure Azure service resources by adding a VNet
rule to the service resources. This provides improved security by fully removing public internet access to
resources, and allowing traffic only from your VNet.

Service endpoints
Learn more:
Learn about VNet service endpoints.

Best practice: Control public IP addresses


Public IP addresses in Azure can be associated with VMs, load balancers, application gateways, and VPN
gateways.
Public IP addresses allow internet resources to communicate inbound to Azure resources, and Azure resources
to communicate outbound to the internet.
Public IP addresses are created with a basic or standard SKU, with a number of differences between them.
Standard SKUs can be assigned to any service, but are most usually configured on VMs, load balancers, and
application gateways.
It's important to note that a basic public IP address doesn't have an NSG automatically configured. You need to
configure your own and assign rules to control access. Standard SKU IP addresses have an NSG and rules
assigned by default.
As a best practice, VMs shouldn't be configured with a public IP address.
If you need a port opened, it should only be for web services such as port 80 or 443.
Standard remote management ports such as SSH (22) and RDP (3389) should be set to deny, along with
all other ports, using NSGs.
A better practice is to put VMs behind an Azure load balancer or application gateway. Then if access to remote
management ports is needed, you can use just-in-time VM access in the Azure Security Center.
Learn more:
Learn about public IP addresses in Azure.
Read more on just-in-time VM access in the Azure Security Center.

Leverage Azure security features for networking


Azure has platform security features that are easy to use, and provide rich countermeasures to common network
attacks. These include Azure Firewall, Web Application Firewall, and Network Watcher.

Best Practice: Deploy Azure Firewall


Azure Firewall is a managed, cloud-based network security service that protects your VNet resources. It is a fully
stateful firewall-as-a-service with built-in high availability, and unrestricted cloud scalability.

Azure Firewall
With Azure Firewall, you can centrally create, enforce, and log application and network connectivity policies across
subscriptions and VNets.
Azure Firewall uses a static public IP address for your VNet resources, allowing outside firewalls to identify
traffic originating from your VNet.
Azure Firewall is fully integrated with Azure Monitor for logging and analytics.
As a best practice when creating Azure Firewall rules, use FQDN tags.
An FQDN tag represents a group of FQDNs associated with well-known Microsoft services.
You can use an FQDN tag to allow the required outbound network traffic through the firewall.
For example, to manually allow Windows Update network traffic through your firewall, you would need to
create multiple application rules. Using FQDN tags, you create an application rule, and include the Windows
Updates tag. With this rule in place, network traffic to Microsoft Windows Update endpoints can flow through
your firewall.
Learn more:
Get an overview of Azure Firewall.
Learn about FQDN tags.

Best practice: Deploy Azure Web Application Firewall (WAF)


Web applications are increasingly targets of malicious attacks that exploit commonly known vulnerabilities.
Exploits include SQL injection attacks and cross-site scripting attacks. Preventing such attacks in application code
can be challenging, and can require rigorous maintenance, patching and monitoring at multiple layers of the
application topology. A centralized web application firewall helps make security management much simpler and
helps app administrators guard against threats or intrusions. A web app firewall can react to security threats faster,
by patching known vulnerabilities at a central location, instead of securing individual web applications. Existing
application gateways can easily be converted to application gateways with a web application firewall enabled.
Azure Web Application Firewall (WAF) is a feature of Azure Application Gateway.
WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
WAF protects without modification to backend code.
It can protect multiple web apps at the same time behind an application gateway.
WAF is integrated with Azure Security Center.
You can customize WAF rules and rule groups to suit your app requirements.
As a best practice, you should use a WAF in front of any web-facing app, including apps on Azure VMs or in
Azure App Service.
Learn more:
Learn about WAF.
Review WAF limitations and exclusions.

Best practice: Implement Azure Network Watcher


Azure Network Watcher provides tools to monitor resources and communications in an Azure VNet. For example,
you can monitor communications between a VM and an endpoint such as another VM or FQDN, view resources
and resource relationships in a VNet, or diagnose network traffic issues.

Network Watcher
With Network Watcher you can monitor and diagnose networking issues without logging into VMs.
You can trigger packet capture by setting alerts, and gain access to real-time performance information at the
packet level. When you see an issue, you can investigate it in detail.
As best practice, you should use Network Watcher to review NSG flow logs.
NSG flow logs in Network Watcher allow you to view information about ingress and egress IP traffic
through an NSG.
Flow logs are written in JSON format.
Flow logs show outbound and inbound flows on a per-rule basis, the network interface (NIC ) to which
the flow applies, 5-tuple information about the flow (source/destination IP, source/destination port, and
protocol), and whether the traffic was allowed or denied.
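A flow log tuple can be unpacked with a few lines of Python. The field layout assumed here (timestamp, source/destination IP, source/destination port, protocol T/U, direction I/O, decision A/D) reflects the version 1 tuple format; verify it against the flow log schema your account actually emits.

```python
def parse_flow_tuple(t: str) -> dict:
    """Parse one NSG flow log tuple (assumed version 1 layout)."""
    ts, src, dst, sport, dport, proto, direction, decision = t.split(",")
    return {
        "timestamp": int(ts),
        "src": src, "dst": dst,
        "src_port": int(sport), "dst_port": int(dport),
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "direction": {"I": "inbound", "O": "outbound"}[direction],
        "decision": {"A": "allowed", "D": "denied"}[decision],
    }

# Sample tuple; the values are illustrative.
flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A")
print(flow["protocol"], flow["decision"])  # -> TCP allowed
```

Parsing tuples this way makes it straightforward to aggregate denied flows per rule or per NIC when reviewing logs at scale.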
Learn more:
Get an overview of Network Watcher.
Learn more about NSG flow Logs.

Use partner tools in the Azure Marketplace


For more complex network topologies, you might use security products from Microsoft partners, in particular
network virtual appliances (NVAs).
An NVA is a VM that performs a network function, such as a firewall, WAN optimization, or other network
function.
NVAs bolster VNet security and network functions. They can be deployed for highly available firewalls,
intrusion prevention, intrusion detection, web application firewalls (WAFs), WAN optimization, routing, load
balancing, VPN, certificate management, Active Directory, and multifactor authentication.
NVAs are available from numerous vendors in the Azure Marketplace.

Best practice: Implement firewalls and NVAs in hub networks


In the hub, the perimeter network (with access to the internet) is normally managed through an Azure Firewall, a
firewall farm, or with Web Application Firewalls (WAFs). Consider the following comparisons.

FIREWALL TYPE DETAILS

WAFs
Web apps are common, and tend to suffer from vulnerabilities and potential exploits. WAFs are designed to
detect attacks against web applications (HTTP/HTTPS) more specifically than a generic firewall. Compared with
traditional firewall technology, WAFs have a set of specific features that protect internal web servers from
threats.

Azure Firewall
Like NVA firewall farms, Azure Firewall uses a common administration mechanism and a set of security rules to
protect workloads hosted in spoke networks, and to control access to on-premises networks. The Azure Firewall
has scalability built in.

NVA firewalls
Like Azure Firewall, NVA firewall farms have a common administration mechanism and a set of security rules to
protect workloads hosted in spoke networks, and to control access to on-premises networks. NVA firewalls can
be manually scaled behind a load balancer. Though an NVA firewall has less specialized software than a WAF, it
has broader application scope to filter and inspect any type of traffic in egress and ingress. If you want to use
NVAs, you can find them in the Azure Marketplace.

We recommend using one set of Azure Firewalls (or NVAs) for traffic originating on the internet, and another for
traffic originating on-premises.
Using only one set of firewalls for both is a security risk, as it provides no security perimeter between the two
sets of network traffic.
Using separate firewall layers reduces the complexity of checking security rules, and it's clear which rules
correspond to which incoming network request.
Learn more:
Learn about using NVAs in an Azure VNet.

Next steps
Review other best practices:
Best practices for security and management after migration.
Best practices for cost management after migration.
Best practices for costing and sizing workloads
migrated to Azure
4/29/2019 • 17 minutes to read • Edit Online

As you plan and design for migration, focusing on costs ensures the long-term success of your Azure migration.
During a migration project, it's critical that all teams (finance, management, app teams, etc.) understand the
associated costs.
Before migration, estimating your migration spend, with a baseline for monthly, quarterly, and yearly budget
targets is critical to success.
After migration, you should optimize costs, continually monitor workloads, and plan for future usage patterns.
Migrated resources might start out as one type of workload, but evolve into another type over time, based on
usage, costs, and shifting business requirements.
This article describes best practices for costing and sizing before and after migration.

IMPORTANT
The best practices and opinions described in this article are based on Azure platform and service features available at the
time of writing. Features and capabilities change over time. Not all recommendations might be applicable for your
deployment, so select what works for you.

Before migration
Before you move your workloads to the cloud, estimate the monthly cost of running them in Azure. Proactively
managing cloud costs helps you adhere to your operating expenses (OpEx) budget. If budget is limited, take this
into account before migration. Consider converting workloads to Azure serverless technologies, where
appropriate, to reduce costs.
The best practices in this section help you estimate costs, right-size VMs and storage, leverage Azure Hybrid
Benefit, use reserved VM instances, and estimate cloud spending across subscriptions.

Best practice: Estimate monthly workload costs


To forecast your monthly bill for migrated workloads, there are a number of tools you can use.
Azure pricing calculator: You select the products you want to estimate, for example VMs and storage. You
input costs into the pricing calculator, to build an estimate.
Azure pricing calculator
Azure Migrate: To estimate costs, you need to review and account for all the resources required to run
your workloads in Azure. To acquire this data, you create an inventory of your assets, including servers, VMs,
databases, and storage. You can use Azure Migrate to collect this information.
Azure Migrate discovers and assesses your on-premises environment to provide an inventory.
Azure Migrate can map and show you dependencies between VMs so that you have a complete
picture.
An Azure Migrate assessment contains estimated cost.
Compute costs: Using the Azure VM size recommended when you create an assessment, Azure
Migrate uses the Billing API to calculate estimated monthly VM costs. The estimation considers
the operating system, software assurance, reserved instances, VM uptime, location, and currency
settings. It aggregates the cost across all VMs in the assessment, and calculates a total monthly
compute cost.
Storage cost: Azure Migrate calculates total monthly storage costs by aggregating the storage
costs of all VMs in an assessment. You can calculate the monthly storage cost for a specific
machine by aggregating the monthly cost of all disks attached to it.
Azure Migrate assessment
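The aggregation Azure Migrate performs can be pictured with a toy calculation. The VM names and monthly unit costs below are invented; real estimates come from the Billing API and account for OS, uptime, location, and currency.

```python
# Hypothetical inventory with illustrative monthly unit costs (USD).
vms = [
    {"name": "web01", "compute_month": 137.24, "disks_month": [9.60, 19.71]},
    {"name": "sql01", "compute_month": 301.00, "disks_month": [76.80]},
]

def assessment_totals(vms):
    """Aggregate per-VM compute costs and per-disk storage costs,
    mirroring the totals an assessment reports."""
    compute = sum(vm["compute_month"] for vm in vms)
    storage = sum(sum(vm["disks_month"]) for vm in vms)
    return round(compute, 2), round(storage, 2)

compute, storage = assessment_totals(vms)
print(compute, storage)  # -> 438.24 106.11
```

The per-machine storage figure is just the sum of its attached disks, which is how you'd break a total back down when reviewing a single VM.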
Learn more:
Use the Azure pricing calculator.
Get an overview of Azure Migrate.
Read about Azure Migrate assessments.
Learn more about the Database Migration Service (DMS).

Best practice: Right-size VMs


You can choose a number of options when you deploy Azure VMs to support workloads. Each VM type has
specific features and different combinations of CPU, memory, and disks. VMs are grouped as follows.

TYPE DETAILS USE

General purpose: Balanced CPU-to-memory. Good for testing and development, small to medium databases, and
low to medium traffic web servers.
Compute-optimized: High CPU-to-memory. Good for medium traffic web servers, network appliances, batch
processes, and app servers.
Memory-optimized: High memory-to-CPU. Good for relational databases, medium to large caches, and in-memory
analytics.
Storage optimized: High disk throughput and IO. Good for big data, and SQL and NoSQL databases.
GPU optimized: Specialized VMs with single or multiple GPUs. Good for heavy graphics and video editing.
High performance: Fastest and most powerful CPUs, with optional high-throughput network interfaces (RDMA).
Good for critical high-performance apps.

It's important to understand the pricing differences between these VMs, and the long-term budget effects.
Each type has a number of VM series within it.
Additionally, when you select a VM within a series, you can only scale the VM up and down within that series.
For example, a DSv2_2 can scale up to DSv2_4, but it can't be changed to a different series such as Fsv2_2.
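The within-series constraint can be expressed as a quick check. This sketch uses the article's shorthand size names (for example, DSv2_2); real Azure size names differ, and resizing has additional nuances (such as needing to deallocate for some moves), so treat this purely as an illustration of the rule stated above.

```python
def same_series(current: str, target: str) -> bool:
    """A VM can be resized within its series (e.g. DSv2_2 -> DSv2_4) but
    not to a different series (e.g. DSv2_2 -> Fsv2_2), using the
    article's 'Series_Size' shorthand naming."""
    return current.rsplit("_", 1)[0] == target.rsplit("_", 1)[0]

print(same_series("DSv2_2", "DSv2_4"))  # -> True
print(same_series("DSv2_2", "Fsv2_2"))  # -> False
```

Checking this before committing to a series helps avoid a migration later if the workload outgrows what the series offers.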
Learn more:
Learn more about VM types and sizing, and map sizes to types.
Plan VM sizing.
Review a sample assessment for the fictitious Contoso company.

Best practice: Select the right storage


Tuning and maintaining on-premises storage (SAN or NAS), and the networks to support it, can be costly and
time-consuming. File (storage) data is commonly migrated to the cloud to help alleviate operational and
management headaches. Microsoft provides several options for moving data to Azure, and you need to make
decisions about those options. Picking the right storage type for data can save your organization several
thousands of dollars every month. A few considerations:
Data that isn't accessed much, and isn't business-critical, doesn't need to be placed on the most expensive
storage.
Conversely, important business-critical data should be located on higher tier storage options.
During migration planning, take an inventory of data and classify it by importance, in order to map it to the
most suitable storage. Consider budget and costs, as well as performance. Cost shouldn't necessarily be the
main decision-making factor. Picking the least expensive option could expose the workload to performance and
availability risks.
Storage data types
Azure provides different types of storage data.

DATA TYPE DETAILS USAGE

Blobs: Optimized to store massive amounts of unstructured objects, such as text or binary data. Access data from
everywhere over HTTP/HTTPS. Use for streaming and random access scenarios, for example, to serve images and
documents directly to a browser, stream video and audio, and store backup and disaster recovery data.
Files: Managed file shares accessed over SMB 3.0. Use when migrating on-premises file shares, and to provide
multiple access/connections to file data.
Disks: Based on page blobs. Disk type (speed): Standard (HDD or SSD) or Premium (SSD). Disk management:
Unmanaged (you manage disk settings and storage) or managed (you select the disk type and Azure manages the
disk for you). Use Premium disks for VMs. Use managed disks for simple management and scaling.
Queues: Store and retrieve large numbers of messages accessed via authenticated calls (HTTP or HTTPS). Use to
connect app components with asynchronous message queueing.
Tables: Store tables. Now part of the Azure Cosmos DB Table API.

Access tiers
Azure storage provides different options for accessing block blob data. Selecting the right access tier helps ensure
that you store block blob data in the most cost-effective manner.

TYPE DETAILS USAGE

Hot: Higher storage cost than Cool; lower access charges than Cool. This is the default tier. Use for data in active
use that's accessed frequently.
Cool: Lower storage cost than Hot; higher access charges than Hot. Store for a minimum of 30 days. Use for
short-term data that's available but accessed infrequently.
Archive: Used for individual block blobs. The most cost-effective option for storage; data access is more
expensive than Hot and Cool. Use for data that can tolerate several hours of retrieval latency, and will remain in
the tier for at least 180 days.
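The Hot/Cool tradeoff is essentially arithmetic: higher storage cost versus higher access cost. The sketch below uses invented per-GB prices to show how the break-even point depends on how often data is read; consult current Azure pricing for real rates.

```python
# Illustrative per-GB monthly prices (not real Azure rates): Hot has a
# higher storage cost but lower access cost; Cool is the reverse.
TIERS = {
    "Hot":  {"storage_gb": 0.0184, "read_gb": 0.0},
    "Cool": {"storage_gb": 0.01,   "read_gb": 0.01},
}

def monthly_cost(tier, stored_gb, read_gb):
    p = TIERS[tier]
    return stored_gb * p["storage_gb"] + read_gb * p["read_gb"]

def cheaper_tier(stored_gb, read_gb):
    """Pick the tier with the lower combined storage + access cost."""
    return min(TIERS, key=lambda t: monthly_cost(t, stored_gb, read_gb))

print(cheaper_tier(1000, 50))    # rarely read -> Cool
print(cheaper_tier(1000, 2000))  # heavily read -> Hot
```

This is why classifying data by access frequency during migration planning pays off: the same terabyte can be cheapest in different tiers depending on its read pattern.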

Storage account types


Azure provides different types of storage accounts and performance tiers.

ACCOUNT TYPE DETAILS USAGE

General Purpose v2 Standard: Supports blobs (block, page, append), files, disks, queues, and tables. Supports the
Hot, Cool, and Archive access tiers. ZRS is supported. Use for most scenarios and most types of data. Standard
storage accounts can be HDD or SSD based.
General Purpose v2 Premium: Supports Blob storage data (page blobs). Supports the Hot, Cool, and Archive
access tiers. ZRS is supported. Stored on SSD. Microsoft recommends using for all VMs.
General Purpose v1: Access tiering isn't supported. Doesn't support ZRS. Use if apps need the Azure classic
deployment model.
Blob: Specialized storage account for storing unstructured objects. Provides block blobs and append blobs only
(no file, queue, table, or disk storage services). Provides the same durability, availability, scalability, and
performance as General Purpose v2. You can't store page blobs in these accounts, and therefore can't store VHD
files. You can set an access tier to Hot or Cool.

Storage redundancy options


Storage accounts can use different types of redundancy for resilience and high availability.

Locally Redundant Storage (LRS)
Details: Protects against a local outage by replicating within a single storage unit to a separate fault domain and update domain. Keeps multiple copies of your data in one datacenter. Provides at least 99.999999999% (11 9's) durability of objects over a given year.
Usage: Consider if your app stores data that can be easily reconstructed.

Zone Redundant Storage (ZRS)
Details: Protects against a datacenter outage by replicating across three storage clusters in a single region. Each storage cluster is physically separated and located in its own availability zone. Provides at least 99.9999999999% (12 9's) durability of objects over a given year by keeping multiple copies of your data across multiple datacenters or regions.
Usage: Consider if you need consistency, durability, and high availability. Might not protect against a regional disaster when multiple zones are permanently affected.

Geographically Redundant Storage (GRS)
Details: Protects against an entire region outage by replicating data to a secondary region hundreds of miles away from the primary. Provides at least 99.99999999999999% (16 9's) durability of objects over a given year.
Usage: Replica data isn't available unless Microsoft initiates a failover to the secondary region. If failover occurs, read and write access is available.

Read-Access Geographically Redundant Storage (RA-GRS)
Details: Similar to GRS. Provides at least 99.99999999999999% (16 9's) durability of objects over a given year.
Usage: Provides 99.99% read availability by allowing read access from the secondary region used for GRS.

Learn more:
Review Azure Storage pricing.
Learn about Azure Import/Export for migrating large amounts of data to Azure blobs and files.
Compare blobs, files, and disk storage data types.
Learn more about access tiers.
Review different types of storage accounts.
Learn about storage redundancy, LRS, ZRS, GRS, and Read-access GRS.
Learn more about Azure Files.
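To put the durability figures above in perspective, the sketch below converts a "number of nines" into an expected annual object loss. This is plain probability arithmetic, not an Azure API; it assumes the quoted durability is an annual per-object loss probability.

```python
# Illustrative durability math for the redundancy tiers above.
# "Eleven 9's" durability means an annual loss probability of 1e-11 per object.
def expected_losses(nines: int, objects: int) -> float:
    """Expected number of objects lost per year at the given durability."""
    loss_probability = 10 ** (-nines)
    return objects * loss_probability

# With 1 billion objects stored:
print(expected_losses(11, 10**9))  # LRS: about 0.01 objects per year
print(expected_losses(16, 10**9))  # GRS/RA-GRS: about 1e-07 objects per year
```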

Best practice: Leverage Azure Hybrid Benefit


Due to years of software investment in systems such as Windows Server and SQL Server, Microsoft is in a unique
position to offer customers value in the cloud, with substantial discounts that other cloud providers can't
necessarily provide.
An integrated Microsoft on-premises/Azure product portfolio generates competitive and cost advantages. If you
currently have an operating system or other software licensing through Software Assurance (SA), you can take
those licenses with you to the cloud with Azure Hybrid Benefit.
Learn more:
Take a look at the Hybrid Benefit Savings Calculator.
Learn more about Hybrid Benefit for Windows Server.
Review pricing guidance for SQL Server Azure VMs.
Best practice: Use reserved VM instances
Most cloud platforms are set up as pay-as-you-go. This model has a disadvantage: you don't necessarily know how
dynamic workloads will be. When you can specify clear intentions for a workload in advance, you contribute to
infrastructure planning and can take advantage of discounted pricing.
Using Azure Reserved VM instances, you prepay for a one or three-year term VM instance.
Prepayment provides a discount on the resources you use.
You can significantly reduce VM, SQL database compute, Azure Cosmos DB, or other resource costs by up to
72% on pay-as-you-go prices.
Reservations provide a billing discount, and don't affect the runtime state of your resources.
You can cancel reserved instances.

Azure reserved VMs


Learn more:
Learn about Azure Reservations.
Read the reserved instances FAQ.
Get pricing guidance for SQL Server Azure VMs.
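The savings from a reserved-instance commitment can be estimated with a quick back-of-the-envelope calculation. The hourly rate and discount below are hypothetical inputs (the "up to 72%" quoted above is a maximum; actual discounts vary by VM size, region, and term).

```python
# Rough reserved-instance savings estimate. Inputs are placeholders, not
# actual Azure rates; actual discounts depend on VM size, region, and term.
def reserved_savings(payg_hourly: float, discount: float, years: int = 3) -> dict:
    hours = years * 365 * 24
    payg_total = payg_hourly * hours
    reserved_total = payg_total * (1 - discount)
    return {"pay_as_you_go": round(payg_total, 2),
            "reserved": round(reserved_total, 2),
            "saved": round(payg_total - reserved_total, 2)}

# Hypothetical $0.10/hour VM over a 3-year term at a 60% discount:
print(reserved_savings(0.10, 0.60))
```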

Best practice: Aggregate cloud spend across subscriptions


It's inevitable that eventually you'll have more than one Azure subscription. For example, you might need an
additional subscription to separate development and production boundaries, or you might have a platform that
requires a separate subscription for each client. Having the ability to aggregate data reporting across all the
subscriptions into a single platform is a valuable feature.
To do this, you can use Azure Cost Management APIs. Then, after aggregating data into a single source such as
Azure SQL, you can use tools like Power BI to surface the aggregated data. You can create aggregated subscription
reports, and granular reports. For example, for users who need proactive insights into cost management, you can
create specific views of costs based on department, resource group, and so on. You don't need to provide them with
full access to Azure billing data.
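The aggregation idea above can be sketched as follows. The record shape here is a hypothetical stand-in for cost data pulled from the Cost Management/Consumption APIs, not the APIs' actual schema.

```python
# Minimal sketch of aggregating cost records pulled from multiple
# subscriptions into per-department or per-subscription totals.
from collections import defaultdict

records = [
    {"subscription": "prod", "department": "IT", "cost": 1200.0},
    {"subscription": "dev", "department": "IT", "cost": 300.0},
    {"subscription": "prod", "department": "Sales", "cost": 450.0},
]

def totals_by(records, key):
    """Sum cost over all records, grouped by the given field."""
    out = defaultdict(float)
    for r in records:
        out[r[key]] += r["cost"]
    return dict(out)

print(totals_by(records, "department"))    # per-department view
print(totals_by(records, "subscription"))  # per-subscription view
```

Grouping by department rather than subscription is what lets you hand a team a cost view without granting access to the underlying billing data.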
Learn more:
Get an overview of the Azure Consumption API.
Learn about connecting to Azure Consumption Insights in Power BI Desktop.
Learn how to manage access to billing information for Azure using role-based access control (RBAC).

After migration
After a successful migration of your workloads, and a few weeks of collecting consumption data, you'll have a clear
idea of resource costs.
As you analyze data, you can start to generate a budget baseline for Azure resource groups and resources.
Then, as you understand where your cloud budget is being spent, you can analyze how to further reduce your
costs.
Best practices in this section include using Azure Cost Management for cost budgeting and analysis, monitoring
resources and implementing resource group budgets, and optimizing monitoring, storage, and VMs.

Best practice: Use Azure Cost Management


Microsoft provides Azure Cost Management to help you track spending, as follows:
Helps you to monitor and control Azure spending, and optimize use of resources.
Reviews your entire subscription and all of its resources, and makes recommendations.
Provides a full API, to integrate external tools and financial systems for reporting.
Tracks resource usage and manages cloud costs with a single, unified view.
Provides rich operational and financial insights to help you make informed decisions.
In Cost Management, you can:
Create a budget: Create a budget for financial accountability.
You can account for the services you consume or subscribe to for a specific period (monthly,
quarterly, annually) and a scope (subscriptions/resource groups). For example, you can create an
Azure subscription budget for a monthly, quarterly, or annual period.
After you create a budget, it's shown in cost analysis. Viewing your budget against current spending
is one of the first steps needed when analyzing your costs and spending.
Email notifications can be sent when budget thresholds are reached.
You can export cost management data to Azure storage, for analysis.
Azure Cost Management budget
Do a cost analysis: Get a cost analysis to explore and analyze your organizational costs, to help you
understand how costs are accrued, and identify spending trends.
Cost analysis is available to EA users.
You can view cost analysis data for a number of scopes, including by department, account,
subscription or resource group.
You can get a cost analysis that shows total costs for the current month, and accumulated daily costs.

Azure Cost Management analysis


Get recommendations: Get Advisor recommendations that show you how you can optimize and improve
efficiency.
Learn more:
Get an overview of Azure Cost Management.
Learn how to optimize your cloud investment with Azure Cost Management.
Learn how to use Azure Cost Management reports.
Get a tutorial on optimizing costs from recommendations.
Review the Azure Consumption API.
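The budget-threshold notifications described above boil down to a simple comparison of current spend against a budget. Here's a hedged sketch of that logic; the 80%/100% thresholds are illustrative, not fixed Azure values.

```python
# Sketch of budget-threshold alerting: report which alert thresholds
# (fractions of the budget) the current spend has crossed.
def crossed_thresholds(spend: float, budget: float, thresholds=(0.8, 1.0)):
    ratio = spend / budget
    return [t for t in thresholds if ratio >= t]

print(crossed_thresholds(850.0, 1000.0))   # 85% spent: [0.8]
print(crossed_thresholds(1200.0, 1000.0))  # over budget: [0.8, 1.0]
```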

Best practice: Monitor resource utilization


In Azure you pay for what you use, when resources are consumed, and you don't pay when they aren't. For VMs,
billing occurs when a VM is allocated, and you aren't charged after a VM is deallocated. With this in mind, you
should monitor VMs in use, and verify VM sizing.
Continually evaluate your VM workloads to determine baselines.
For example, if your workload is used heavily Monday through Friday, 8am to 6pm, but hardly used outside
those hours, you could downgrade VMs outside peak times. This might mean changing VM sizes, or using
virtual machine scale sets to autoscale VMs up or down.
Some companies "snooze" VMs by putting them on a calendar that specifies when they should be available,
and when they're not needed.
In addition to VM monitoring, you should monitor other networking resources such as ExpressRoute and
virtual network gateways for under and over use.
You can monitor VM usage using a number of Microsoft tools including Azure Cost Management, Azure
Monitor, and Azure Advisor. Third-party tools are also available.
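The "snooze" calendar idea above can be sketched as a simple schedule check. The Monday-Friday 8am-6pm window mirrors the example in the text; adjust the window for your own workloads, and note this decides only whether a VM should be running, not how it gets started or stopped.

```python
# Sketch of a snooze-calendar decision: is this VM inside its allowed hours?
from datetime import datetime

def should_be_running(now: datetime, start_hour: int = 8, end_hour: int = 18) -> bool:
    is_weekday = now.weekday() < 5          # Mon=0 .. Fri=4
    in_hours = start_hour <= now.hour < end_hour
    return is_weekday and in_hours

print(should_be_running(datetime(2019, 4, 1, 10, 0)))  # Monday 10:00 -> running
print(should_be_running(datetime(2019, 4, 6, 10, 0)))  # Saturday -> snoozed
```

In practice this check would run in an Azure Automation runbook or similar scheduler that deallocates VMs outside the window.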
Learn more:
Get an overview of Azure Monitor and Azure Advisor.
Get Advisor cost recommendations.
Learn how to optimize costs from recommendations, and prevent unexpected charges.
Learn about the Azure Resource Optimization (ARO) Toolkit.

Best practice: Implement resource group budgets


Often, resource groups are used to represent cost boundaries. Together with this usage pattern, the Azure team
continues to develop new and enhanced ways to track and analyze resource spending at different levels, including
the ability to create budgets at the resource group and resource level.
A resource group budget helps you track the costs associated with a resource group.
You can trigger alerts and run a wide variety of playbooks as the budget is reached or exceeded.
Learn more:
Learn how to manage costs with Azure Budgets.
Follow a tutorial to create and manage an Azure budget.

Best practice: Optimize Azure Monitor retention


As you move resources into Azure and enable diagnostic logging for them, you generate a lot of log data. Typically
this log data is sent to a storage account that's mapped to a Log Analytics workspace.
The longer the log data retention period, the more data you'll have.
Not all log data is equal, and some resources will generate more log data than others.
Due to regulations and compliance, it's likely that you'll need to retain log data for some resources longer than
others.
You should walk a careful line between optimizing your log storage costs, and keeping the log data you need.
We recommend evaluating and setting up the logging immediately after completing a migration, so that you
aren't spending money retaining logs of no importance.
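The retention trade-off above is linear: at steady state, the volume of retained log data grows with the retention period. The sketch below makes that concrete; the per-GB price is a placeholder, not an actual Log Analytics rate.

```python
# Illustrative retention-cost estimate: longer retention means more stored
# log data. The per-GB/month price is a placeholder, not a real Azure rate.
def retention_cost(gb_per_day: float, retention_days: int,
                   price_per_gb_month: float = 0.12) -> float:
    """Approximate steady-state monthly cost of retained log data."""
    retained_gb = gb_per_day * retention_days
    return round(retained_gb * price_per_gb_month, 2)

print(retention_cost(5.0, 30))   # 30-day retention
print(retention_cost(5.0, 365))  # 1-year retention: roughly 12x the cost
```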
Learn more:
Learn about monitoring usage and estimated costs.

Best practice: Optimize storage


If you followed best practices for selecting storage before migration, you are probably reaping some benefits.
However, there are probably additional storage costs that you can still optimize. Over time blobs and files become
stale. Data might not be used anymore, but regulatory requirements might mean that you need to keep it for a
certain period. As such, you might not need to store it on the high-performance storage that you used for the
original migration.
Identifying and moving stale data to cheaper storage areas can have a huge impact on your monthly storage
budget and cost savings. Azure provides many ways to help you identify and then store this stale data.
Take advantage of access tiers for general-purpose v2 storage, moving less important data from Hot to Cool
and Archived tiers.
Use StorSimple to help move stale data based on customized policies.
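A common way to act on the guidance above is to classify blobs by last-modified age and suggest a cheaper tier. This sketch uses the 30-day and 180-day cutoffs that match the Cool and Archive minimums mentioned earlier; the cutoffs and the policy itself are illustrative, not an Azure feature.

```python
# Sketch of stale-data tiering: suggest a tier from last-modified age,
# using cutoffs that mirror the Cool (30-day) and Archive (180-day) minimums.
from datetime import datetime, timedelta

def suggest_tier(last_modified: datetime, now: datetime) -> str:
    age = now - last_modified
    if age > timedelta(days=180):
        return "Archive"
    if age > timedelta(days=30):
        return "Cool"
    return "Hot"

now = datetime(2019, 4, 5)
print(suggest_tier(datetime(2019, 4, 1), now))   # recently used
print(suggest_tier(datetime(2019, 1, 1), now))   # stale
print(suggest_tier(datetime(2018, 1, 1), now))   # very stale
```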
Learn more:
Learn more about access tiers.
Get an overview of StorSimple, and StorSimple pricing.

Best practice: Automate VM optimization


The ultimate goal of running a VM in the cloud is to maximize the CPU, memory, and disk that it uses. If you
discover VMs that aren't optimized, or have frequent periods when VMs aren't used, it makes sense to either shut
them down, or downscale them using VM scale sets.
You can optimize a VM with Azure Automation, VM scale sets, auto-shutdown, and scripted or third-party solutions.
Learn more:
Learn how to use vertical autoscaling.
Schedule a VM autostart.
Learn how to start or stop VMs off hours in Azure Automation.
Get more information about Azure Advisor, and the Azure Resource Optimization (ARO) Toolkit.

Best practice: Use Logic Apps and runbooks with the Budgets API
Azure provides a REST API that has access to your tenant billing information.
You can use the Budgets API to integrate external systems and workflows that are triggered by metrics that you
build from the API data.
You can pull usage and resource data into your preferred data analysis tools.
The Azure Resource Usage and RateCard APIs can help you accurately predict and manage your costs.
The APIs are implemented as a Resource Provider and are included in the APIs exposed by the Azure Resource
Manager.
The Budgets API can be integrated with Azure Logic Apps and Runbooks.
Learn more:
Learn more about the Budgets API.
Get insights into Azure usage with the Billing API.

Best practice: Implement serverless technologies


VM workloads are often migrated "as is" to avoid downtime. But VMs frequently host tasks that are intermittent,
taking a short period to run, or that run for many hours. For example, VMs that run scheduled tasks such as
Windows task scheduler or PowerShell scripts. When these tasks aren't running, you're nevertheless absorbing
VM and disk storage costs.
After migration, and a thorough review of these types of tasks, you might consider migrating them to serverless
technologies such as Azure Functions or Azure Batch jobs. With this solution, you no longer need to manage and
maintain the VMs, bringing an additional cost saving.
Learn more:
Learn about Azure Functions
Learn more about Azure Batch

Next steps
Review other best practices:
Best practices for security and management after migration.
Best practices for networking after migration.
Contoso migration: Overview
3/15/2019 • 7 minutes to read • Edit Online

This article demonstrates how the fictitious organization Contoso migrates on-premises infrastructure to the
Microsoft Azure cloud.
This document is the first in a series of articles that show how the fictitious company Contoso migrates to Azure.
The series includes information and scenarios that illustrate how to set up a migration infrastructure, and run
different types of migrations. Scenarios grow in complexity, and additional articles will be added over time. The
articles show how the Contoso company completes its migration mission, with pointers for general reading and
specific instructions provided throughout.

Introduction
Azure provides access to a comprehensive set of cloud services. As developers and IT professionals, you can use
these services to build, deploy, and manage applications on a range of tools and frameworks, through a global
network of datacenters. As your business faces challenges associated with the digital shift, the Azure cloud helps
you to figure out how to optimize resources and operations, engage with your customers and employees, and
transform your products.
However, Azure recognizes that even with all the advantages that the cloud provides in terms of speed and
flexibility, minimized costs, performance, and reliability, many organizations are going to need to run on-premises
datacenters for some time to come. In response to cloud adoption barriers, Azure provides a hybrid cloud
strategy that builds bridges between your on-premises datacenters, and the Azure public cloud. For example,
using Azure cloud resources like Azure Backup to protect on-premises resources, or using Azure analytics to gain
insights into on-premises workloads.
As part of the hybrid cloud strategy, Azure provides growing solutions for migrating on-premises apps and
workloads to the cloud. With simple steps, you can comprehensively assess your on-premises resources to figure
out how they'll run in the Azure cloud. Then, with a deep assessment in hand, you can confidently migrate
resources to Azure. When resources are up and running in Azure, you can optimize them to retain and improve
access, flexibility, security, and reliability.

Migration strategies
Strategies for migration to the cloud fall into four broad categories: rehost, refactor, rearchitect, or rebuild. The
strategy you adopt depends upon your business drivers, and migration goals. You might adopt multiple
strategies. For example, you could choose to rehost (lift-and-shift) simple apps, or apps that aren't critical to your
business, but rearchitect those that are more complex and business-critical. Let's look at the strategies.

Rehost
Definition: Often referred to as a "lift-and-shift" migration. This option doesn't require code changes, and lets you migrate your existing apps to Azure quickly. Each app is migrated as is, to reap the benefits of the cloud, without the risk and cost associated with code changes.
When to use: When you need to move apps quickly to the cloud. When you want to move an app without modifying it. When your apps are architected so that they can leverage Azure IaaS scalability after migration. When apps are important to your business, but you don't need immediate changes to app capabilities.

Refactor
Definition: Often referred to as "repackaging," refactoring requires minimal changes to apps, so that they can connect to Azure PaaS, and use cloud offerings. For example, you could migrate existing apps to Azure App Service or Azure Kubernetes Service (AKS). Or, you could refactor relational and non-relational databases into options such as Azure SQL Database Managed Instance, Azure Database for MySQL, Azure Database for PostgreSQL, and Azure Cosmos DB.
When to use: If your app can easily be repackaged to work in Azure. If you want to apply innovative DevOps practices provided by Azure, or you're thinking about DevOps using a container strategy for workloads. For refactoring, you need to think about the portability of your existing code base, and available development skills.

Rearchitect
Definition: Rearchitecting for migration focuses on modifying and extending app functionality and the code base to optimize the app architecture for cloud scalability. For example, you could break down a monolithic application into a group of microservices that work together and scale easily. Or, you could rearchitect relational and non-relational databases to fully managed DBaaS solutions, such as Azure SQL Database Managed Instance, Azure Database for MySQL, Azure Database for PostgreSQL, and Azure Cosmos DB.
When to use: When your apps need major revisions to incorporate new capabilities, or to work effectively on a cloud platform. When you want to use existing application investments, meet scalability requirements, apply innovative Azure DevOps practices, and minimize use of virtual machines.

Rebuild
Definition: Rebuild takes things a step further by rebuilding an app from scratch using Azure cloud technologies. For example, you could build green-field apps with cloud-native technologies like Azure Functions, Azure AI, Azure SQL Database Managed Instance, and Azure Cosmos DB.
When to use: When you want rapid development, and existing apps have limited functionality and lifespan. When you're ready to expedite business innovation (including DevOps practices provided by Azure), build new applications using cloud-native technologies, and take advantage of advancements in AI, Blockchain, and IoT.

Migration articles
The articles in the series are summarized in the table below.
Each migration scenario is driven by slightly different business goals that determine the migration strategy.
For each deployment scenario, we provide information about business drivers and goals, a proposed
architecture, steps to perform the migration, and recommendation for cleanup and next steps after migration
is complete.

Article 1: Overview
Details: Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series.
Status: This article

Article 2: Deploy Azure infrastructure
Details: Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series.
Status: Available

Article 3: Assess on-premises resources for migration to Azure
Details: Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.
Status: Available

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance
Details: Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service.
Status: Available

Article 5: Rehost an app on Azure VMs
Details: Contoso migrates its SmartHotel360 app VMs to Azure VMs by using the Site Recovery service.
Status: Available

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group
Details: Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Status: Available

Article 7: Rehost a Linux app on Azure VMs
Details: Contoso completes a lift-and-shift migration of its Linux osTicket app to Azure VMs, using the Site Recovery service.
Status: Available

Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL
Details: Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench.
Status: Available

Article 9: Refactor an app in an Azure web app and Azure SQL Database
Details: Contoso migrates its SmartHotel360 app to an Azure web app and migrates the app database to an Azure SQL Server instance with the Database Migration Assistant.
Status: Available

Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL
Details: Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance.
Status: Available

Article 11: Refactor Team Foundation Server on Azure DevOps Services
Details: Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Status: Available

Article 12: Rearchitect an app in Azure containers and Azure SQL Database
Details: Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database.
Status: Available

Article 13: Rebuild an app in Azure
Details: Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Status: Available

Article 14: Scale a migration to Azure
Details: After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
Status: Available

In this article Contoso sets up all the infrastructure elements it needs to complete all migration scenarios.
Demo apps
The articles use two demo apps - SmartHotel360, and osTicket.
SmartHotel360: This app was developed by Microsoft as a test app that you can use when working with
Azure. It's provided as open source and you can download it from GitHub. It's an ASP.NET app connected to
a SQL Server database. Currently the app is on two VMware VMs running Windows Server 2008 R2, and
SQL Server 2008 R2. The app VMs are hosted on-premises and managed by vCenter Server.
osTicket: An open-source service desk ticketing app that runs on Linux. You can download it from GitHub.
Currently the app is on two VMware VMs running Ubuntu 16.04 LTS, using Apache 2, PHP 7.0, and MySQL 5.7.

Next steps
Learn how Contoso sets up an on-premises and Azure infrastructure to prepare for migration.
Contoso - Deploy a migration infrastructure
3/18/2019 • 37 minutes to read • Edit Online

In this article, Contoso prepares its on-premises infrastructure and sets up an Azure infrastructure, in
preparation for migration, and for running the business in a hybrid environment.
It's a sample architecture that's specific to Contoso.
Whether you need all the elements described in this article depends upon your migration strategy. For
example, if you're building only cloud-native apps in Azure, you might need a less complex networking
structure.
This article is part of a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information and a series of
deployment scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-
premises resources for migration, and run different types of migrations. Scenarios grow in complexity. Articles
will be added to the series over time.

Article 1: Overview
Details: Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series.
Status: Available

Article 2: Deploy an Azure infrastructure
Details: Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series.
Status: This article

Article 3: Assess on-premises resources for migration to Azure
Details: Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.
Status: Available

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance
Details: Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service.
Status: Available

Article 5: Rehost an app on Azure VMs
Details: Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.
Status: Available

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group
Details: Contoso migrates the app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Status: Available

Article 7: Rehost a Linux app on Azure VMs
Details: Contoso completes a lift-and-shift migration of its Linux osTicket app to Azure VMs, using the Site Recovery service.
Status: Available

Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL
Details: Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench.
Status: Available

Article 9: Refactor an app in an Azure web app and Azure SQL Database
Details: Contoso migrates its SmartHotel360 app to an Azure web app and migrates the app database to an Azure SQL Server instance with the Database Migration Assistant.
Status: Available

Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL
Details: Contoso migrates its Linux osTicket app to an Azure web app on multiple sites. The web app is integrated with GitHub for continuous delivery. It migrates the app database to an Azure Database for MySQL instance.
Status: Available

Article 11: Refactor Team Foundation Server on Azure DevOps Services
Details: Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Status: Available

Article 12: Rearchitect an app in Azure containers and Azure SQL Database
Details: Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the app database with Azure SQL Database.
Status: Available

Article 13: Rebuild an app in Azure
Details: Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Status: Available

Article 14: Scale a migration to Azure
Details: After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
Status: Available

In this article Contoso sets up all the infrastructure elements it needs to complete all migration scenarios.

Overview
Before Contoso can migrate to Azure, it's critical to prepare an Azure infrastructure. Generally, there are six
broad areas Contoso needs to think about:
Step 1: Azure subscriptions: How will Contoso purchase Azure, and interact with the Azure platform and
services?
Step 2: Hybrid identity: How will it manage and control access to on-premises and Azure resources after
migration? How does Contoso extend or move identity management to the cloud?
Step 3: Disaster recovery and resilience: How will Contoso ensure that its apps and infrastructure are
resilient if outages and disasters occur?
Step 4: Networking: How should Contoso design a networking infrastructure, and establish connectivity
between its on-premises datacenter and Azure?
Step 5: Security: How will it secure the hybrid/Azure deployment?
Step 6: Governance: How will Contoso keep the deployment aligned with security and governance
requirements?

Before you start


Before we start looking at the infrastructure, you might want to read some background information about the
Azure capabilities we discuss in this article:
There are a number of options available for purchasing Azure access, including Pay-As-You-Go, Enterprise
Agreements (EA), Open Licensing from Microsoft resellers, or from Microsoft Partners known as Cloud
Solution Providers (CSPs). Learn about purchase options, and read about how Azure subscriptions are
organized.
Get an overview of Azure identity and access management. In particular, learn about Azure AD and
extending on-premises AD to the cloud. There's a useful downloadable e-book about identity and access
management (IAM) in a hybrid environment.
Azure provides a robust networking infrastructure with options for hybrid connectivity. Get an overview of
networking and network access control.
Get an introduction to Azure Security, and read about creating a plan for governance.

On-premises architecture
Here's a diagram showing the current Contoso on-premises infrastructure.

Contoso has one main datacenter located in the city of New York in the Eastern United States.
There are three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber metro ethernet connection (500 Mbps).
Each branch is connected locally to the internet using business class connections, with IPSec VPN tunnels
back to the main datacenter. This allows the entire network to be permanently connected, and optimizes
internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts,
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management, and DNS servers on the internal network.
The domain controllers in the datacenter run on VMware VMs. The domain controllers at local branches
run on physical servers.

Step 1: Buy and subscribe to Azure


Contoso needs to figure out how to buy Azure, how to architect subscriptions, and how to license services and
resources.
Buy Azure
Contoso is going with an Enterprise Agreement (EA). This entails an upfront monetary commitment to Azure,
entitling Contoso to earn great benefits, including flexible billing options and optimized pricing.
Contoso estimated what its yearly Azure spend will be. When it signed the agreement, Contoso paid for the
first year in full.
Contoso needs to use all commitments before the year is over, or lose the value for those dollars.
If for some reason Contoso exceeds its commitment and spends more, Microsoft will invoice them for the
difference.
Any cost incurred above the commitment will be at the same rates as those in the Contoso contract. There
are no penalties for going over.
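The commitment mechanics above can be sketched in a few lines (Python for illustration only; the dollar figures are hypothetical, not Contoso's actual spend):

```python
def annual_invoice(commitment: float, actual_spend: float) -> dict:
    """Model EA billing: the commitment is prepaid, overage is invoiced at
    contract rates, and any unused commitment value is lost at year end."""
    return {
        "prepaid": commitment,
        "overage_invoice": max(0.0, actual_spend - commitment),
        "unused_commitment": max(0.0, commitment - actual_spend),
    }

# Spend $560k against a $500k commitment: Microsoft invoices the $60k difference.
print(annual_invoice(500_000, 560_000))
```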
Manage subscriptions
After paying for Azure, Contoso needs to figure out how to manage Azure subscriptions. Contoso has an EA,
and thus no limit on the number of Azure subscriptions it can set up.
An Azure Enterprise Enrollment defines how a company shapes and uses Azure services, and defines a core
governance structure.
As a first step, Contoso has determined a structure (known as an enterprise scaffold) for its Enterprise
Enrollment. Contoso used this article to help understand and design a scaffold.
For now, Contoso has decided to use a functional approach to manage subscriptions.
Inside the enterprise it will use a single IT department that controls the Azure budget. This will be
the only group with subscriptions.
Contoso will extend this model in the future, so that other corporate groups can join as
departments in the Enterprise Enrollment.
Inside the IT department Contoso has structured two subscriptions, Production and
Development.
If Contoso requires additional subscriptions in the future, it needs to manage access, policies and
compliance for those subscriptions. Contoso will do that by introducing Azure management
groups, as an additional layer above subscriptions.
Examine licensing
With subscriptions configured, Contoso can look at Microsoft licensing. The licensing strategy will depend on
the resources that Contoso wants to migrate into Azure, and how Azure VMs and services are selected and
deployed.
Azure Hybrid Benefit
When deploying VMs in Azure, standard images include a license that will charge Contoso by the minute for
the software being used. However, Contoso has been a long-term Microsoft customer, and has maintained EAs
and open licenses with Software Assurance (SA).
Azure Hybrid Benefit provides a cost-effective method for Contoso migration, by allowing it to save on Azure
VMs and SQL Server workloads by converting or reusing Windows Server Datacenter and Standard edition
licenses covered with Software Assurance. This will enable Contoso to pay a lower base compute rate for
VMs and SQL Server. Learn more.
License Mobility
License Mobility through SA gives Microsoft Volume Licensing customers like Contoso the flexibility to deploy
eligible server apps with active SA on Azure. This eliminates the need to purchase new licenses. With no
associated mobility fees, existing licenses can easily be deployed in Azure. Learn more.
Reserve instances for predictable workloads
Predictable workloads are those that always need to be available, with VMs running. For example, line-of-
business apps such as an SAP ERP system. On the other hand, unpredictable workloads are variable. For
example, VMs that are on during high demand and off at non-peak times.
In exchange for committing to maintain specific VM instances for extended durations, Contoso can get both a
discount and prioritized capacity. Using Azure Reserved Instances, together with Azure Hybrid Benefit,
Contoso can save up to 82% off regular pay-as-you-go pricing (April 2018).
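The savings arithmetic can be sketched as follows (Python for illustration; the 82% figure is Microsoft's quoted maximum combined saving, and the hourly rate is hypothetical):

```python
def discounted_monthly_cost(payg_hourly_rate: float, hours: float = 730,
                            combined_discount: float = 0.82) -> float:
    """Monthly cost after applying the maximum quoted combined saving from
    Reserved Instances plus Azure Hybrid Benefit (82%, April 2018)."""
    return payg_hourly_rate * hours * (1 - combined_discount)

# A $0.10/hour VM: roughly $73/month pay-as-you-go vs about $13 at maximum discount.
print(round(discounted_monthly_cost(0.10), 2))
```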

Step 2: Manage hybrid identity


Granting and controlling user access to Azure resources with identity and access management (IAM) is an
important step in pulling together an Azure infrastructure.
Contoso decides to extend its on-premises Active Directory into the cloud, rather than build a new separate
system in Azure.
It creates an Azure-based Active Directory to do this.
Contoso doesn't have Office 365 in place, so it needs to provision a new Azure AD.
Office 365 uses Azure AD for user management. If Contoso was using Office 365, it would already have an
Azure AD tenant, and could use that as the primary AD.
Learn more about Azure AD for Office 365, and learn how to add a subscription to an existing Azure AD
tenant.
Create an Azure AD
Contoso is using the Azure AD Free edition that's included with an Azure subscription. Contoso admins set up
an AD directory as follows:
1. In the Azure portal, they navigate to Create a resource > Identity > Azure Active Directory.
2. In Create Directory, they specify a name for the directory, an initial domain name, and region in which
the Azure AD directory should be created.

NOTE
The directory that's created has an initial domain name in the form domainname.onmicrosoft.com. The name
can't be changed or deleted. Instead, they need to add its registered domain name to Azure AD.

Add the domain name


To use its standard domain name, Contoso admins need to add it as a custom domain name to Azure AD. This
option allows them to assign familiar user names. For example, a user can log in with the email address
billg@contoso.com, rather than needing billg@contosomigration.microsoft.com.
To set up a custom domain name they add it to the directory, add a DNS entry, and then verify the name in
Azure AD.
1. In Custom domain names > Add custom domain, they add the domain.
2. To use a DNS entry in Azure they need to register it with their domain registrar.
In the Custom domain names list, they note the DNS information for the name. It's using an MX
entry.
They need access to the name server to do this. They log into the Contoso.com domain, and create a
new MX record for the DNS entry provided by Azure AD, using the details noted.
3. After the DNS records propagate, in the details for the domain name, they click Verify to check the
custom domain name.

Set up on-premises and Azure groups and users


Now that the Azure AD is up and running, Contoso admins need to add employees to on-premises AD groups
that will synchronize to Azure AD. They should use on-premises group names that match the names of
resource groups in Azure. This makes it easier to identify matches for synchronization purposes.
Create resource groups in Azure
Azure resource groups gather Azure resources together. Using a resource group ID allows Azure to perform
operations on the resources within the group.
An Azure subscription can have multiple resource groups, but a resource group can only exist within a
single subscription.
In addition, a single resource group can have multiple resources, but a resource can only belong to a single
resource group.
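These containment rules are visible in the structure of an ARM resource ID, which names exactly one subscription and exactly one resource group. A minimal parsing sketch (Python for illustration; the subscription ID and VM name are hypothetical):

```python
def parse_resource_id(resource_id: str) -> dict:
    """Split an ARM resource ID into its containment hierarchy:
    subscription -> resource group -> provider/type/name."""
    parts = resource_id.strip("/").split("/")
    parsed = {"subscription": parts[1], "resource_group": parts[3]}
    if len(parts) >= 8:
        parsed.update(provider=parts[5], type=parts[6], name=parts[7])
    return parsed

vm = parse_resource_id(
    "/subscriptions/0000-0000/resourceGroups/ContosoRG"
    "/providers/Microsoft.Compute/virtualMachines/WEBVM")
print(vm["resource_group"], vm["name"])  # ContosoRG WEBVM
```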
Contoso admins set up Azure resource groups as summarized in the following table.

| RESOURCE GROUP | DETAILS |
| --- | --- |
| ContosoCobRG | This group contains all resources related to continuity of business (COB). It includes vaults that Contoso will use for the Azure Site Recovery service, and the Azure Backup service. It will also include resources used for migration, including the Azure Migrate and Database Migration Services. |
| ContosoDevRG | This group contains development and test resources. |
| ContosoFailoverRG | This group serves as a landing zone for failed over resources. |
| ContosoNetworkingRG | This group contains all networking resources. |
| ContosoRG | This group contains resources related to production apps and databases. |
They create resource groups as follows:


1. In the Azure portal > Resource groups, they add a group.
2. For each group they specify a name, the subscription to which the group belongs, and the region.
3. Resource groups appear in the Resource groups list.

Scaling resource groups

In the future, Contoso will add other resource groups based on needs. For example, they could define a resource
group for each app or service, so that they can be managed and secured independently.
Create matching security groups on-premises
1. In the on-premises Active Directory, Contoso admins set up security groups with names that match the
names of the Azure resource groups.
2. For management purposes, they create an additional group that will be added to all of the other groups.
This group will have rights to all resource groups in Azure. A limited number of Global Admins will be
added to this group.
Synchronize AD
Contoso wants to provide a common identity for accessing resources on-premises and in the cloud. To do this,
it will integrate the on-premises Active Directory with Azure AD. With this model:
Users and organizations can take advantage of a single identity to access on-premises applications and
cloud services such as Office 365, or thousands of other sites on the internet.
Admins can leverage the groups in AD to implement Role Based Access Control (RBAC) in Azure.
To facilitate integration, Contoso uses the Azure AD Connect tool. When you install and configure the tool on a
domain controller, it synchronizes the local on-premises AD identities to the Azure AD.
Download the tool
1. In the Azure portal, Contoso admins go to Azure Active Directory > Azure AD Connect, and
download the latest version of the tool to the server they're using for synchronization.

2. They start the AzureADConnect.msi installation, with Use express settings. This is the most
common installation, and can be used for a single-forest topology, with password hash synchronization
for authentication.
3. In Connect to Azure AD, they specify the credentials for connecting to the Azure AD (in the form
CONTOSO\admin or contoso.com\admin).

4. In Connect to AD DS, they specify credentials for the on-premises AD.

5. In Ready to configure, they click Start the synchronization process when configuration
completes to start the sync immediately. Then they install.
Note that:
Contoso has a direct connection to Azure. If your on-premises AD is behind a proxy, read this article.
After the first synchronization, on-premises AD objects can be seen in the Azure AD.
The Contoso IT team is represented in each group, based on its role.

Set up RBAC
Azure Role-Based Access Control (RBAC) enables fine-grained access management for Azure. Using RBAC,
you can grant only the amount of access that users need to perform tasks. You assign the appropriate RBAC
role to users, groups, and applications at a scope level. The scope of a role assignment can be a subscription, a
resource group, or a single resource.
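Scopes nest: an assignment at subscription scope covers every resource group and resource below it, and an assignment at resource group scope covers everything in that group. Since scopes are prefixes of ARM resource IDs, the rule can be sketched as prefix matching (Python for illustration; the IDs are hypothetical):

```python
def assignment_applies(assignment_scope: str, resource_id: str) -> bool:
    """A role assignment applies to everything at or below its scope. The
    trailing '/' check prevents 'ContosoRG' from matching 'ContosoRG2'."""
    scope = assignment_scope.rstrip("/").lower()
    target = resource_id.lower()
    return target == scope or target.startswith(scope + "/")

rg_scope = "/subscriptions/0000/resourceGroups/ContosoCobRG"
vault = rg_scope + "/providers/Microsoft.RecoveryServices/vaults/ContosoVault"
print(assignment_applies(rg_scope, vault))  # True
```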
Contoso admins now assign roles to the AD groups that they synchronized from on-premises.
1. In the ContosoCobRG resource group, they click Access control (IAM) > Add role assignment.
2. In Add role assignment > Role, > Contributor, they select the ContosoCobRG AD group from the
list. The group then appears in the Selected members list.
3. They repeat this with the same permissions for the other resource groups (except for
ContosoAzureAdmins), by adding the Contributor permissions to the AD account that matches the
resource group.
4. For the ContosoAzureAdmins AD group, they assign the Owner role.

Step 3: Design for resilience and disaster


Set up regions
Azure resources are deployed within regions.
Regions are organized into geographies, and data residency, sovereignty, compliance and resiliency
requirements are honored within geographical boundaries.
A region is composed of a set of datacenters. These datacenters are deployed within a latency-defined
perimeter, and connected through a dedicated regional low -latency network.
Each Azure region is paired with a different region for resiliency.
Read about Azure regions, and understand how regions are paired.
Contoso has decided to go with the East US 2 (located in Virginia) as the primary region, and Central US
(located in Iowa) as the secondary region. There are a couple of reasons for this:
The Contoso datacenter is located in New York, and Contoso considered latency to the closest datacenter.
The East US 2 region has all the service and products that Contoso needs to use. Not all Azure regions are
the same in terms of the products and services available. You can review Azure products by region.
Central US is the Azure paired region for East US 2.
As it thinks about the hybrid environment, Contoso needs to consider how to build resilience and a disaster
recovery strategy into the region design. Broadly, strategies range from a single-region deployment, which
relies on Azure platform features such as fault domains and regional pairing for resilience, through to a full
Active-Active model in which cloud services and database are deployed and servicing users from two regions.
Contoso has decided to take a middle road. It will deploy apps and resources in a primary region, and keep a
full copy of the infrastructure in the secondary region, so that it's ready to act as a full backup in case of
complete app disaster, or regional failure.
Set up availability zones
Availability zones help protect apps and data from datacenter failures.
Each availability zone is a unique physical location within an Azure region.
Each zone is made up of one or more datacenters equipped with independent power, cooling, and
networking.
There's a minimum of three separate zones in all enabled regions.
The physical separation of zones within a region protects applications and data from datacenter failures.
Contoso will deploy availability zones as apps call for scalability, high-availability, and resiliency. Learn more.

Step 4: Design a network infrastructure


With the regional design in place, Contoso is ready to consider a networking strategy. It needs to think about
how the on-premises datacenter and Azure connect and communicate with each other, and how to design the
network infrastructure in Azure. Specifically Contoso needs to:
Plan hybrid network connectivity: Figure out how it's going to connect networks across on-premises
and Azure.
Design an Azure network infrastructure: Decide how it will deploy networks over regions. How will
networks communicate within the same region, and across regions?
Design and set up Azure networks: Set up Azure networks and subnets, and decide what will reside in
them.
Plan hybrid network connectivity
Contoso considered a number of architectures for hybrid networking between Azure and the on-premises
datacenter. Read more about comparing options.
As a reminder, the Contoso on-premises network infrastructure currently consists of the datacenter in New
York, and local branches in the eastern portion of the US. All locations have a business class connection to the
internet. Each of the branches is then connected to the datacenter via an IPSec VPN tunnel over the internet.

Here's how Contoso decided to implement hybrid connectivity:


1. Set up a new site-to-site VPN connection between the Contoso datacenter in New York and the two Azure
regions in East US 2 and Central US.
2. Branch office traffic bound for Azure virtual networks will route through the main Contoso datacenter.
3. As Contoso scales up Azure deployment, it will establish an ExpressRoute connection between the
datacenter and the Azure regions. When this happens, Contoso will retain the VPN site-to-site connection
for failover purposes only.
Learn more about choosing between a VPN and ExpressRoute hybrid solution.
Verify ExpressRoute locations and support.
VPN only

VPN and ExpressRoute

Design the Azure network infrastructure


It's critical that Contoso puts networks in place in a way that makes the hybrid deployment secure and scalable.
To do this, Contoso is taking a long-term approach, and is designing virtual networks (VNets) to be resilient
and enterprise ready. Learn more about planning VNets.
To connect the two regions, Contoso has decided to implement a hub-to-hub network model:
Within each region, Contoso will use a hub-and-spoke model.
To connect networks and hubs, Contoso will use Azure network peering.
Network peering
Azure provides network peering to connect VNets and hubs. Global peering allows connections between
VNets/hubs in different regions. Local peering connects VNets in the same region. VNet peering provides a
number of advantages:
Network traffic between peered VNets is private.
Traffic between the VNets is kept on the Microsoft backbone network. No public internet, gateways, or
encryption is required in the communication between the VNets.
Peering provides a default, low -latency, high-bandwidth connection between resources in different VNets.
Learn more about network peering.
Hub-to-hub across regions
Contoso will deploy a hub in each region. A hub is a virtual network (VNet) in Azure that acts as a central point
of connectivity to your on-premises network. The hub VNets will connect to each other using global VNet
peering. Global VNet peering connects VNets across Azure regions.
The hub in each region is peered to its partner hub in the other region.
The hub is peered to every network in its region, and can connect to all network resources.

Hub-and-spoke within a region


Within each region, Contoso will deploy VNets for different purposes, as spoke networks from the region hub.
VNets within a region use peering to connect to their hub, and to each other.
Design the hub network
Within the hub and spoke model that Contoso has chosen, it needs to think about how traffic from the on-
premises datacenter, and from the internet, will be routed. Here's how Contoso has decided to handle routing
for both the East US 2 and Central US hubs:
Contoso is designing a network known as "reverse c", as this is the path that the packets follow from the
inbound to outbound network.
The network architecture has two boundaries, an untrusted front-end perimeter zone and a back-end
trusted zone.
A firewall will have a network adapter in each zone, controlling access to trusted zones.
From the internet:
Internet traffic will hit a load-balanced public IP address on the perimeter network.
This traffic is routed through the firewall, and subject to firewall rules.
After network access controls are implemented, traffic will be forwarded to the appropriate location
in the trusted zone.
Outbound traffic from the VNet will be routed to the internet using user-defined routes (UDRs). The
traffic is forced through the firewall, and inspected in line with Contoso policies.
From the Contoso datacenter:
Incoming traffic over VPN site-to-site (or ExpressRoute) hits the public IP address of the Azure VPN
gateway.
Traffic is routed through the firewall and subject to firewall rules.
After applying firewall rules, traffic is forwarded to an internal load balancer (Standard SKU ) on the
trusted internal zone subnet.
Outbound traffic from the trusted subnet to the on-premises datacenter over VPN is routed through
the firewall, and rules applied, before going over the VPN site-to-site connection.
Design and set up Azure networks
With a network and routing topology in place, Contoso is ready to set up Azure networks and subnets.
Contoso will implement a Class A private network in Azure (10.0.0.0/8). This works, since on-premises it
currently has a Class B private address space (172.16.0.0/16), so Contoso can be sure there
won't be any overlap between address ranges.
It's going to deploy VNets in the primary and secondary regions.
Contoso will use a naming convention that includes the prefix VNET and the region abbreviation EUS2 or
CUS. Using this standard, the hub networks will be named VNET-HUB-EUS2 (East US 2) and
VNET-HUB-CUS (Central US).
Contoso doesn't have an IPAM solution, so it needs to plan for network routing without NAT.
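The no-overlap requirement between the on-premises range and the planned Azure VNets can be verified mechanically with Python's standard `ipaddress` module (a sketch using the East US 2 ranges Contoso plans below):

```python
import ipaddress

on_prem = ipaddress.ip_network("172.16.0.0/16")  # existing Class B space
planned = {
    "VNET-HUB-EUS2":  ipaddress.ip_network("10.240.0.0/20"),
    "VNET-DEV-EUS2":  ipaddress.ip_network("10.245.16.0/20"),
    "VNET-PROD-EUS2": ipaddress.ip_network("10.245.32.0/20"),
}

# Nothing in Azure may overlap the on-premises range...
assert all(not net.overlaps(on_prem) for net in planned.values())
# ...and the VNets must not overlap each other (needed for NAT-free routing).
nets = list(planned.values())
assert all(not a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
print("no overlaps")
```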
Virtual networks in East US 2
East US 2 is the primary region that Contoso will use to deploy resources and services. Here's how Contoso
will architect networks within it:
Hub: The hub VNet in East US 2 is the central point of primary connectivity to the on-premises datacenter.
VNets: Spoke VNets in East US 2 can be used to isolate workloads if required. In addition to the Hub VNet,
Contoso will have two spoke VNets in East US 2:
VNET-DEV-EUS2. This VNet will provide the development and test team with a fully functional
network for dev projects. It will act as a production pilot area, and will rely on the production
infrastructure to function.
VNET-PROD-EUS2: Azure IaaS production components will be located in this network.
Each VNet will have its own unique address space, with no overlap. Contoso intends to configure
routing without requiring NAT.
Subnets:
There will be a subnet in each network for each app tier.
Each subnet in the Production network will have a matching subnet in the Development VNet.
In addition, the Production network has a subnet for domain controllers.
VNets in East US 2 are summarized in the following table.

| VNET | RANGE | PEER |
| --- | --- | --- |
| VNET-HUB-EUS2 | 10.240.0.0/20 | VNET-HUB-CUS, VNET-DEV-EUS2, VNET-PROD-EUS2 |
| VNET-DEV-EUS2 | 10.245.16.0/20 | VNET-HUB-EUS2 |
| VNET-PROD-EUS2 | 10.245.32.0/20 | VNET-HUB-EUS2, VNET-PROD-CUS |


Subnets in the East US 2 Hub network (VNET-HUB-EUS2)

| SUBNET/ZONE | CIDR | USABLE IP ADDRESSES |
| --- | --- | --- |
| IB-UntrustZone | 10.240.0.0/24 | 251 |
| IB-TrustZone | 10.240.1.0/24 | 251 |
| OB-UntrustZone | 10.240.2.0/24 | 251 |
| OB-TrustZone | 10.240.3.0/24 | 251 |
| GatewaySubnet | 10.240.10.0/24 | 251 |
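The usable-address counts in these tables follow from the fact that Azure reserves five addresses in every subnet (network address, broadcast address, default gateway, and two platform DNS addresses). A quick check with the standard `ipaddress` module:

```python
import ipaddress

def usable_addresses(cidr: str) -> int:
    """Azure reserves 5 IPs per subnet: network, broadcast, gateway, and two DNS."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_addresses("10.240.0.0/24"))   # 251 (matches the hub subnets)
print(usable_addresses("10.245.16.0/22"))  # 1019 (the /22 app-tier subnets)
print(usable_addresses("10.245.24.0/23"))  # 507 (the /23 database subnets)
```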

Subnets in the East US 2 Dev network (VNET-DEV-EUS2)


The Development VNet is used by the development team as a production pilot area. It has three subnets.

| SUBNET | CIDR | ADDRESSES | IN SUBNET |
| --- | --- | --- | --- |
| DEV-FE-EUS2 | 10.245.16.0/22 | 1019 | Frontends/web tier VMs |
| DEV-APP-EUS2 | 10.245.20.0/22 | 1019 | App-tier VMs |
| DEV-DB-EUS2 | 10.245.24.0/23 | 507 | Database VMs |

Subnets in the East US 2 Production network (VNET-PROD-EUS2)


Azure IaaS components are located in the Production network. Each app tier has its own subnet. Subnets
match those in the Development network, with the addition of a subnet for domain controllers.

| SUBNET | CIDR | ADDRESSES | IN SUBNET |
| --- | --- | --- | --- |
| PROD-FE-EUS2 | 10.245.32.0/22 | 1019 | Frontends/web tier VMs |
| PROD-APP-EUS2 | 10.245.36.0/22 | 1019 | App-tier VMs |
| PROD-DB-EUS2 | 10.245.40.0/23 | 507 | Database VMs |
| PROD-DC-EUS2 | 10.245.42.0/24 | 251 | Domain controller VMs |


Virtual networks in Central US (secondary region )
Central US is Contoso's secondary region. Here's how Contoso will architect networks within it:
Hub: The hub VNet in Central US is the central point of connectivity to the on-premises datacenter, and the
spoke VNets in Central US can be used to isolate workloads if required, managed separately from other
spokes.
VNets: Contoso will have two VNets in Central US:
VNET-PROD-CUS. This VNet is a production network, similar to VNET-PROD-EUS2.
VNET-ASR-CUS. This VNet will act as a location in which VMs are created after failover from on-
premises, or as a location for Azure VMs that are failed over from the primary to the secondary
region. This network is similar to the production networks, but without any domain controllers on it.
Each VNet in the region will have its own address space, with no overlap. Contoso will configure
routing without NAT.
Subnets: The subnets will be architected in a similar way to those in East US 2. The exception is that
Contoso doesn't need a subnet for domain controllers.
The VNets in Central US are summarized in the following table.

| VNET | RANGE | PEER |
| --- | --- | --- |
| VNET-HUB-CUS | 10.250.0.0/20 | VNET-HUB-EUS2, VNET-ASR-CUS, VNET-PROD-CUS |
| VNET-ASR-CUS | 10.255.16.0/20 | VNET-HUB-CUS, VNET-PROD-CUS |
| VNET-PROD-CUS | 10.255.32.0/20 | VNET-HUB-CUS, VNET-ASR-CUS, VNET-PROD-EUS2 |
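Because peering is configured on both sides, the PEER columns must be symmetric: if one VNet lists a peer, that peer must list it back. A sketch of that consistency check over the Central US table data (Python for illustration):

```python
peerings = {
    "VNET-HUB-CUS":  {"VNET-HUB-EUS2", "VNET-ASR-CUS", "VNET-PROD-CUS"},
    "VNET-ASR-CUS":  {"VNET-HUB-CUS", "VNET-PROD-CUS"},
    "VNET-PROD-CUS": {"VNET-HUB-CUS", "VNET-ASR-CUS", "VNET-PROD-EUS2"},
}

# Every listed peer that is itself in this table must list the VNet back.
# (EUS2 VNets are in a separate table, so they're skipped here.)
for vnet, peers in peerings.items():
    for peer in peers:
        if peer in peerings:
            assert vnet in peerings[peer], f"{peer} missing return peering to {vnet}"
print("peerings symmetric")
```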
Subnets in the Central US Hub network (VNET-HUB-CUS)

| SUBNET | CIDR | USABLE IP ADDRESSES |
| --- | --- | --- |
| IB-UntrustZone | 10.250.0.0/24 | 251 |
| IB-TrustZone | 10.250.1.0/24 | 251 |
| OB-UntrustZone | 10.250.2.0/24 | 251 |
| OB-TrustZone | 10.250.3.0/24 | 251 |
| GatewaySubnet | 10.250.10.0/24 | 251 |

Subnets in the Central US Production network (VNET-PROD-CUS)


In parallel to the production network in the primary East US 2 region, there's a production network in the
secondary Central US region.

| SUBNET | CIDR | ADDRESSES | IN SUBNET |
| --- | --- | --- | --- |
| PROD-FE-CUS | 10.255.32.0/22 | 1019 | Frontends/web tier VMs |
| PROD-APP-CUS | 10.255.36.0/22 | 1019 | App-tier VMs |
| PROD-DB-CUS | 10.255.40.0/23 | 507 | Database VMs |
| PROD-DC-CUS | 10.255.42.0/24 | 251 | Domain controller VMs |

Subnets in the Central US failover/recovery network (VNET-ASR-CUS)


The VNET-ASR-CUS network is used for purposes of failover between regions. Site Recovery will be used to
replicate and fail over Azure VMs between the regions. It also functions as a Contoso datacenter to Azure
network for protected workloads that remain on-premises, but fail over to Azure for disaster recovery.
VNET-ASR-CUS uses the same basic subnet design as the production VNet in East US 2, but without the
need for a domain controller subnet.

| SUBNET | CIDR | ADDRESSES | IN SUBNET |
| --- | --- | --- | --- |
| ASR-FE-CUS | 10.255.16.0/22 | 1019 | Frontends/web tier VMs |
| ASR-APP-CUS | 10.255.20.0/22 | 1019 | App-tier VMs |
| ASR-DB-CUS | 10.255.24.0/23 | 507 | Database VMs |

Configure peered connections


The hub in each region will be peered to the hub in the other region, and to all VNets within the hub region.
This allows for hubs to communicate, and to view all VNets within a region. Note that:
Peering creates a two-sided connection. One from the initiating peer on the first VNet, and another one on
the second VNet.
In a hybrid deployment, traffic that passes between peers needs to be seen from the VPN connection
between the on-premises datacenter and Azure. To enable this, there are some specific settings that must be
set on peered connections.
For any connections from spoke VNets through the hub to the on-premises datacenter, Contoso needs to allow
traffic to be forwarded, and to traverse the VPN gateways.
Domain controller

For the domain controllers in the VNET-PROD-EUS2 network, Contoso wants traffic to flow both between the
EUS2 hub/production network, and over the VPN connection to on-premises. To do this, Contoso admins
must allow the following:
1. Allow forwarded traffic and Allow gateway transit configurations on the peered connection. In
our example this would be the VNET-HUB-EUS2 to VNET-PROD-EUS2 connection.
2. Allow forwarded traffic and Use remote gateways on the other side of the peering, on the
VNET-PROD-EUS2 to VNET-HUB-EUS2 connection.

3. On-premises they'll set up a static route that directs the local traffic to route across the VPN tunnel to
the VNet. The configuration would be completed on the gateway that provides the VPN tunnel from
Contoso to Azure. They use RRAS for this.

Production networks

A spoke network in one region can't see a spoke network in another region via a hub.
For Contoso's production networks in both regions to see each other, Contoso admins need to create a direct
peered connection between VNET-PROD-EUS2 and VNET-PROD-CUS.
Set up DNS
When you deploy resources in virtual networks, you have a couple of choices for domain name resolution. You
can use name resolution provided by Azure, or provide DNS servers for resolution. The type of name
resolution you use depends on how your resources need to communicate with each other. Get more
information about the Azure DNS service.
Contoso admins have decided that the Azure DNS service isn't a good choice in the hybrid environment.
Instead, they're going to leverage the on-premises DNS servers.
Since this is a hybrid network, all the VMs on-premises and in Azure need to be able to resolve names to
function properly. This means that custom DNS settings must be applied to all the VNets.
Contoso currently has DCs deployed in the Contoso datacenter and at the branch offices. The primary
DNS servers are CONTOSODC1 (172.16.0.10) and CONTOSODC2 (172.16.0.1).
When the VNets are deployed, the on-premises domain controllers will be set to be used as DNS
servers in the networks.
To configure this, when using custom DNS on a VNet, the IP address of Azure's recursive resolver (such as
168.63.129.16) must be added to the DNS list. To do this, Contoso configures DNS server settings on
each VNet. For example, the custom DNS settings for the VNET-HUB-EUS2 network would be as
follows:
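A sketch of the resulting DNS server list per VNet (Python for illustration; the regional DC addresses are the ones Contoso plans to deploy, and the ordering is an assumption about preference, not a requirement):

```python
AZURE_RESOLVER = "168.63.129.16"  # Azure's recursive resolver (fixed platform IP)

def custom_dns_servers(regional_dcs, on_prem_dcs=("172.16.0.10", "172.16.0.1")):
    """DNS list for a VNet: the region's Azure DCs first, then the on-premises
    DCs, with the Azure resolver appended so platform names still resolve."""
    return list(regional_dcs) + list(on_prem_dcs) + [AZURE_RESOLVER]

# Example for an EUS2 VNet once the EUS2 domain controllers exist:
print(custom_dns_servers(["10.245.42.4", "10.245.42.5"]))
```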

In addition to the on-premises domain controllers, Contoso is going to implement four more domain
controllers to support the Azure networks, two for each region. Here's what Contoso will deploy in Azure.

| REGION | DC | VNET | SUBNET | IP ADDRESS |
| --- | --- | --- | --- | --- |
| EUS2 | CONTOSODC3 | VNET-PROD-EUS2 | PROD-DC-EUS2 | 10.245.42.4 |
| EUS2 | CONTOSODC4 | VNET-PROD-EUS2 | PROD-DC-EUS2 | 10.245.42.5 |
| CUS | CONTOSODC5 | VNET-PROD-CUS | PROD-DC-CUS | 10.255.42.4 |
| CUS | CONTOSODC6 | VNET-PROD-CUS | PROD-DC-CUS | 10.255.42.5 |

After deploying these domain controllers, Contoso needs to update the DNS settings on networks
in each region to include the new domain controllers in the DNS server list.
Set up domain controllers in Azure
After updating network settings, Contoso admins are ready to build out the domain controllers in Azure.
1. In the Azure portal, they deploy a new Windows Server VM to the appropriate VNet.
2. They create availability sets in each location for the VM. Availability sets do the following:
Ensure that the Azure fabric separates the VMs into different infrastructures in the Azure region.
Allow Contoso to be eligible for the 99.95% SLA for VMs in Azure. Learn more.

3. After the VM is deployed, they open the network interface for the VM. They set the private IP address to
static, and specify a valid address.
4. Now, they attach a new data disk to the VM. This disk contains the Active Directory database, and the
sysvol share.
The size of the disk will determine the number of IOPS that it supports.
Over time the disk size might need to increase as the environment grows.
The drive shouldn't be set to Read/Write for host caching. Active Directory databases don't
support this.

5. After the disk is added, they connect to the VM over Remote Desktop, and open Server Manager.
6. Then in File and Storage Services, they run the New Volume Wizard, ensuring that the drive is given
the letter F: or above on the local VM.
7. In Server Manager, they add the Active Directory Domain Services role. Then, they configure the
VM as a domain controller.

8. After the VM is configured as a DC and rebooted, they open DNS Manager and configure the Azure
DNS resolver as a forwarder. This allows the DC to forward DNS queries it can't resolve in the Azure
DNS.

9. Now, they update the custom DNS settings for each VNet with the appropriate domain controller for
the VNet region. They include on-premises DCs in the list.
Set up Active Directory
AD is a critical service in networking, and must be configured correctly. Contoso admins will build AD sites for
the Contoso datacenter, and for the EUS2 and CUS regions.
1. They create two new sites (AZURE-EUS2 and AZURE-CUS) along with the datacenter site
(ContosoDatacenter).
2. After creating the sites, they create subnets in the sites, to match the VNets and datacenter.

3. Then, they create two site links to connect everything. The domain controllers should then be moved to
their location.

4. After everything is configured, the Active Directory replication topology is in place.

5. With everything complete, a list of the domain controllers and sites is shown in the on-premises Active
Directory Administrative Center.
Step 5: Plan for governance
Azure provides a range of governance controls across services and the Azure platform. Read more for a basic
understanding of options.
As they configure identity and access control, Contoso has already begun to put some aspects of governance
and security in place. Broadly, there are three areas it needs to consider:
Policy: Policy in Azure applies and enforces rules and effects over your resources, so that resources stay
compliant with corporate requirements and SLAs.
Locks: Azure allows you to lock subscriptions, resource groups, and other resources, so that they can only
be modified by those with authority to do so.
Tags: Resources can be controlled, audited, and managed with tags. Tags attach metadata to resources,
providing information about resources or owners.
Set up policies
The Azure Policy service evaluates your resources, scanning for those not compliant with the policy definitions
you have in place. For example, you might have a policy that only allows certain types of VMs, or requires
resources to have a specific tag.
Azure policies specify a policy definition, and a policy assignment specifies the scope in which a policy should
be applied. The scope can range from a management group to a resource group. Learn about creating and
managing policies.
Contoso wants to get started with a couple of policies:
It wants a policy to ensure that resources can only be deployed in the EUS2 and CUS regions.
It wants to limit VM SKUs to approved SKUs only. The intention is to ensure that expensive VM SKUs
aren't used.
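The intent of these two policies can be sketched as a simple pre-deployment check. The allowed lists reflect Contoso's requirements above, but the evaluation logic is illustrative only and is not how Azure Policy is implemented (Azure evaluates policy server-side against policy definitions):

```python
# Illustrative sketch of what the two policy assignments enforce.
ALLOWED_LOCATIONS = {"eastus2", "centralus"}
ALLOWED_VM_SKUS = {"Standard_DS2_v2", "Standard_DS3_v2"}  # example approved SKUs

def policy_effect(resource):
    """Return 'deny' if the resource violates either policy, else 'allow'."""
    if resource.get("location") not in ALLOWED_LOCATIONS:
        return "deny"
    if resource.get("type") == "virtualMachine" and resource.get("sku") not in ALLOWED_VM_SKUS:
        return "deny"
    return "allow"

# A deployment to West US is denied, matching the behavior described above.
print(policy_effect({"type": "virtualMachine", "location": "westus",
                     "sku": "Standard_DS2_v2"}))  # deny
```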
Limit resources to regions
Contoso uses the built-in policy definition Allowed locations to limit resource regions.
1. In the Azure portal, click All Services, and search for Policy.
2. Select Assignments > Assign Policy.
3. In the policy list, select Allowed locations.
4. Set Scope to the name of the Azure subscription, and select the two regions in the allowed list.
5. By default the policy is set to Deny, meaning that if someone starts a deployment in the subscription
that isn't in EUS2 or CUS, the deployment fails. For example, a deployment that someone in the Contoso
subscription tries to set up in West US is rejected.

Allow specific VM SKUs


Contoso will use the built-in policy definition Allowed virtual machine SKUs to limit the type of VMs that can
be created in the subscription.

Check policy compliance


Policies go into effect immediately, and Contoso can check resources for compliance.
1. In the Azure portal, click the Compliance link.
2. The compliance dashboard appears. You can drill down for further details.

Set up locks
Contoso has long been using the ITIL framework for the management of its systems. One of the most
important aspects of the framework is change control, and Contoso wants to make sure that change control is
implemented in the Azure deployment.
Contoso is going to implement locks as follows:
Any production or failover component must be in a resource group that has a ReadOnly lock. This means
that to modify or delete production items, the lock must be removed.
Non-production resource groups will have CanNotDelete locks. This means that authorized users can read
or modify a resource, but can't delete it.
Learn more about locks.
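The behavioral difference between the two lock levels can be sketched in a few lines. This models the effect Contoso is relying on, not the Azure implementation:

```python
# Sketch of Azure resource-lock semantics as Contoso applies them.
# ReadOnly blocks modify and delete; CanNotDelete blocks only delete.
def is_blocked(lock, operation):
    if lock == "ReadOnly":
        return operation in ("modify", "delete")
    if lock == "CanNotDelete":
        return operation == "delete"
    return False

# Production resource groups get ReadOnly locks, so modification is blocked
# until the lock is removed...
assert is_blocked("ReadOnly", "modify")
# ...while non-production groups allow reads and edits but not deletion.
assert not is_blocked("CanNotDelete", "modify")
assert is_blocked("CanNotDelete", "delete")
```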
Set up tagging
To track resources as they're added, it will be increasingly important for Contoso to associate resources with an
appropriate department, customer, and environment.
In addition to providing information about resources and owners, tags will enable Contoso to aggregate and
group resources, and to use that data for chargeback purposes.
Contoso needs to visualize its Azure assets in a way that makes sense for the business, for example by role or
department. Note that resources don't need to reside in the same resource group to share a tag. Contoso will
create a simple tag taxonomy so that everyone uses the same tags.

TAG NAME         VALUE
CostCenter       12345: It must be a valid cost center from SAP.
BusinessUnit     Name of business unit (from SAP). Matches CostCenter.
ApplicationTeam  Email alias of the team that owns support for the app.
CatalogName      Name of the app or ShareServices, per the service catalog that the resource supports.
ServiceManager   Email alias of the ITIL Service Manager for the resource.
COBPriority      Priority set by the business for BCDR. Values of 1-5.
ENV              DEV, STG, or PROD, representing development, staging, and production.

After creating the tags, Contoso will go back and create new Azure policy definitions and assignments, to
enforce the use of the required tags across the organization.
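The planned tag enforcement can be sketched as a validation pass. The required-tag list follows the taxonomy above; the validation code itself is illustrative, not an Azure Policy implementation:

```python
# Required tags from Contoso's taxonomy; ENV is restricted to three values.
REQUIRED_TAGS = {"CostCenter", "BusinessUnit", "ApplicationTeam",
                 "CatalogName", "ServiceManager", "COBPriority", "ENV"}
ALLOWED_ENV = {"DEV", "STG", "PROD"}

def tag_violations(tags):
    """Return a list of problems with a resource's tags."""
    problems = [f"missing tag: {t}" for t in sorted(REQUIRED_TAGS - tags.keys())]
    if "ENV" in tags and tags["ENV"] not in ALLOWED_ENV:
        problems.append(f"ENV must be one of {sorted(ALLOWED_ENV)}")
    return problems

# A resource tagged with only a cost center fails validation.
print(tag_violations({"CostCenter": "12345", "ENV": "PROD"}))
```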

Step 6: Consider security


Security is crucial in the cloud, and Azure provides a wide array of security tools and capabilities. These help
you to create secure solutions, on the secure Azure platform. Read Confidence in the trusted cloud to learn
more about Azure security.
There are a few aspects for Contoso to consider:
Azure Security Center: Azure Security Center provides unified security management and advanced threat
protection across hybrid cloud workloads. With Security Center, you can apply security policies across your
workloads, limit your exposure to threats, and detect and respond to attacks. Learn more.
Network Security Groups (NSGs): An NSG is a filter (firewall) that contains a list of security rules which,
when applied, allow or deny network traffic to resources connected to Azure VNets. Learn more.
Data encryption: Azure Disk Encryption is a capability that helps you encrypt your Windows and Linux
IaaS virtual machine disks. Learn more.
Work with the Azure Security Center
Contoso is looking for a quick view into the security posture of its new hybrid cloud, and specifically its Azure
workloads. As a result, Contoso has decided to implement Azure Security Center starting with the following
features:
Centralized policy management
Continuous assessment
Actionable recommendations
Centralize policy management
With centralized policy management, Contoso will ensure compliance with security requirements by centrally
managing security policies across the entire environment. It can simply and quickly implement a policy which
applies to all of its Azure resources.

Assess and action


Contoso will leverage continuous security assessment, which monitors the security of machines, networks,
storage, data, and applications, to discover potential security issues.
Security Center will analyze the security state of Contoso’s compute, infrastructure, and data resources, and
of Azure apps and services.
Continuous assessment helps the Contoso operations team to discover potential security issues, such as
systems with missing security updates or exposed network ports.
In particular, Contoso wants to make sure all of the VMs are protected. Security Center helps with this,
verifying VM health, and making prioritized and actionable recommendations to remediate security
vulnerabilities before they're exploited.
Work with NSGs
Contoso can limit network traffic to resources in a virtual network using network security groups.
A network security group contains a list of security rules that allow or deny inbound or outbound network
traffic based on source or destination IP address, port, and protocol.
When applied to a subnet, rules are applied to all resources in the subnet. In addition to network interfaces,
this includes instances of Azure services deployed in the subnet.
Application security groups (ASGs) enable you to configure network security as a natural extension of an
app structure, allowing you to group VMs and define network security policies based on those groups.
Application security groups mean that Contoso can reuse the security policy at scale, without manual
maintenance of explicit IP addresses. The platform handles the complexity of explicit IP addresses
and multiple rule sets, allowing you to focus on your business logic.
Contoso can specify an application security group as the source and destination in a security rule.
After a security policy is defined, Contoso can create VMs, and assign the VM NICs to a group.
Contoso will implement a mix of NSGs and ASGs. Contoso is concerned about NSG management. It's also
worried about the overuse of NSGs, and the added complexity for operations staff. Here's what Contoso will
do:
All traffic into and out of all subnets (north-south), will be subject to an NSG rule, except for the
GatewaySubnets in the Hub networks.
Any firewalls or domain controller will be protected by both subnet NSGs and NIC NSGs.
All production applications will have ASGs applied.
Contoso has built a model of how this will look for its applications.
The NSGs associated with the ASGs will be configured with least privilege to ensure that only allowed packets
can flow from one part of the network to its destination.

ACTION  NAME               SOURCE                      TARGET   PORT
Allow   AllowInternetToFE  VNET-HUB-EUS1/IB-TrustZone  APP1-FE  80, 443
Allow   AllowWebToApp      APP1-FE                     APP1-DB  1433
Allow   AllowAppToDB       APP1-APP                    Any      Any
Deny    DenyAllInbound     Any                         Any      Any
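The least-privilege rule set above can be modeled as first-match evaluation ending in a deny-all. This is a sketch of the intended traffic decisions, not the Azure NSG implementation (real NSGs use numeric priorities and default rules):

```python
# First-match sketch of the NSG/ASG rules in the table above.
# Source/target are ASG or trust-zone names; "Any" matches everything.
RULES = [
    ("Allow", "VNET-HUB-EUS1/IB-TrustZone", "APP1-FE", {80, 443}),
    ("Allow", "APP1-FE", "APP1-DB", {1433}),
    ("Allow", "APP1-APP", "Any", "Any"),
    ("Deny", "Any", "Any", "Any"),
]

def evaluate(source, target, port):
    """Return the action of the first matching rule (implicit deny otherwise)."""
    for action, rule_src, rule_dst, ports in RULES:
        src_ok = rule_src == "Any" or rule_src == source
        dst_ok = rule_dst == "Any" or rule_dst == target
        port_ok = ports == "Any" or port in ports
        if src_ok and dst_ok and port_ok:
            return action
    return "Deny"

print(evaluate("APP1-FE", "APP1-DB", 1433))  # Allow
print(evaluate("APP1-FE", "APP1-DB", 3389))  # Deny
```

Because rules reference ASG names rather than IP addresses, adding a VM to a group is enough for the policy to apply to it.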

Encrypt data
Azure Disk Encryption integrates with Azure Key Vault to help control and manage the disk-encryption keys
and secrets in a key vault subscription. It ensures that all data on VM disks is encrypted at rest in Azure
storage.
Contoso has determined that specific VMs require encryption.
Contoso will apply encryption to VMs with customer, confidential, or PII data.

Conclusion
In this article, Contoso set up an Azure infrastructure and policy for its Azure subscription, hybrid identity,
disaster recovery, networking, governance, and security.
Not all of the steps that Contoso completed here are required for a migration to the cloud. In this case, it
wanted to plan a network infrastructure that can be used for all types of migrations, and is secure, resilient, and
scalable.
With this infrastructure in place, Contoso is ready to move on and try out migration.

Next steps
As a first migration scenario, Contoso is going to assess the on-premises SmartHotel360 two-tiered app for
migration to Azure.
Contoso migration: Assess on-premises workloads
for migration to Azure
3/15/2019 • 24 minutes to read • Edit Online

In this article, Contoso assesses its on-premises SmartHotel360 app for migration to Azure.
This article is part of a series that documents how the fictitious company Contoso migrates its on-premises
resources to the Microsoft Azure cloud. The series includes background information, and detailed deployment
scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-premises resources
for migration, and run different types of migrations. Scenarios grow in complexity. Articles will be added to the
series over time.

ARTICLE | DETAILS | STATUS

Article 1: Overview | Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series. | Available

Article 2: Deploy an Azure infrastructure | Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all articles in the series. | Available

Article 3: Assess on-premises resources for migration to Azure | Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. | This article

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance | Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. It migrates the app front-end using the Azure Site Recovery service. It migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service. | Available

Article 5: Rehost an app on Azure VMs | Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. | Available

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group | Contoso migrates the SmartHotel360 app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. | Available

Article 7: Rehost a Linux app on Azure VMs | Contoso completes a lift-and-shift migration of its Linux osTicket app to Azure VMs, using the Site Recovery service. | Available

Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL | Contoso migrates its Linux osTicket app to Azure VMs using Site Recovery. It migrates the app database to Azure Database for MySQL using MySQL Workbench. | Available

Article 9: Refactor an app in an Azure web app and Azure SQL Database | Contoso migrates its SmartHotel360 app to an Azure web app, and migrates the app database to an Azure SQL Server instance with the Database Migration Assistant. | Available

Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL | Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. | Available

Article 11: Refactor Team Foundation Server on Azure DevOps Services | Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. | Available

Article 12: Rearchitect an app in Azure containers and Azure SQL Database | Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. | Available

Article 13: Rebuild an app in Azure | Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. | Available

Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available

Overview
As Contoso considers migrating to Azure, the company wants to run a technical and financial assessment to
determine whether its on-premises workloads are suitable for migration to the cloud. In particular, the Contoso
team wants to assess machine and database compatibility for migration. It wants to estimate capacity and costs
for running Contoso's resources in Azure.
To get started and to better understand the technologies involved, Contoso assesses two of its on-premises apps,
summarized in the following table. The company assesses for migration scenarios that rehost and refactor apps
for migration. Learn more about rehosting and refactoring in the Contoso migration overview.
APP NAME | PLATFORM | APP TIERS | DETAILS

SmartHotel360 (manages Contoso travel requirements) | Runs on Windows with a SQL Server database | Two-tiered app. The front-end ASP.NET website runs on one VM (WEBVM) and the SQL Server runs on another VM (SQLVM). | VMs are VMware, running on an ESXi host managed by vCenter Server. You can download the sample app from GitHub.

osTicket (Contoso service desk app) | Runs on Linux/Apache with MySQL PHP (LAMP) | Two-tiered app. A front-end PHP website runs on one VM (OSTICKETWEB) and the MySQL database runs on another VM (OSTICKETMYSQL). | The app is used by customer service apps to track issues for internal employees and external customers. You can download the sample from GitHub.

Current architecture
This diagram shows the current Contoso on-premises infrastructure:

Contoso has one main datacenter. The datacenter is located in the city of New York in the Eastern United
States.
Contoso has three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber Metro Ethernet connection (500 MBps).
Each branch is connected locally to the internet by using business-class connections with IPsec VPN tunnels
back to the main datacenter. The setup allows Contoso's entire network to be permanently connected and
optimizes internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts that are
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management. Contoso uses DNS servers on the internal network.
The domain controllers in the datacenter run on VMware VMs. The domain controllers at local branches run
on physical servers.

Business drivers
Contoso's IT leadership team has worked closely with the company's business partners to understand what the
business wants to achieve with this migration:
Address business growth: Contoso is growing. As a result, pressure has increased on the company's on-
premises systems and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures and streamline processes for its
developers and users. The business needs IT to be fast and to not waste time or money, so the company can
deliver faster on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes that occur in the marketplace for the company to be successful in a global
economy. IT at Contoso must not get in the way or become a business blocker.
Scale: As the company's business grows successfully, Contoso IT must provide systems that can grow at the
same pace.

Assessment goals
The Contoso cloud team has identified goals for its migration assessments:
After migration, apps in Azure should have the same performance capabilities that apps have today in
Contoso's on-premises VMware environment. Moving to the cloud doesn't mean that app performance is
less critical.
Contoso needs to understand the compatibility of its applications and databases with Azure requirements.
Contoso also needs to understand its hosting options in Azure.
Contoso's database administration should be minimized after apps move to the cloud.
Contoso wants to understand not only its migration options, but also the costs associated with the
infrastructure after it moves to the cloud.

Assessment tools
Contoso uses Microsoft tools for its migration assessment. The tools align with the company's goals and should
provide Contoso with all the information it needs.

TECHNOLOGY | DESCRIPTION | COST

Data Migration Assistant | Contoso uses Data Migration Assistant to assess and detect compatibility issues that might affect its database functionality in Azure. Data Migration Assistant assesses feature parity between SQL sources and targets. It recommends performance and reliability improvements. | Data Migration Assistant is a free, downloadable tool.

Azure Migrate | Contoso uses the Azure Migrate service to assess its VMware VMs. Azure Migrate assesses the migration suitability of the machines. It provides sizing and cost estimates for running in Azure. | As of May 2018, Azure Migrate is a free service.

Service Map | Azure Migrate uses Service Map to show dependencies between machines that the company wants to migrate. | Service Map is part of Azure Monitor logs. Currently, Contoso can use Service Map for 180 days without incurring charges.

In this scenario, Contoso downloads and runs Data Migration Assistant to assess the on-premises SQL Server
database for its travel app. Contoso uses Azure Migrate with dependency mapping to assess the app VMs before
migration to Azure.

Assessment architecture

Contoso is a fictitious name that represents a typical enterprise organization.


Contoso has an on-premises datacenter (contoso-datacenter) and on-premises domain controllers
(CONTOSODC1, CONTOSODC2).
VMware VMs are located on VMware ESXi hosts running version 6.5 (contosohost1, contosohost2).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com, running on a VM).
The SmartHotel360 travel app has these characteristics:
The app is tiered across two VMware VMs (WEBVM and SQLVM).
The VMs are located on VMware ESXi host contosohost1.contoso.com.
The VMs are running Windows Server 2008 R2 Datacenter with SP1.
The VMware environment is managed by vCenter Server (vcenter.contoso.com) running on a VM.
The osTicket service desk app:
The app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are running Ubuntu Linux Server 16.04-LTS.
OSTICKETWEB is running Apache 2 and PHP 7.0.
OSTICKETMYSQL is running MySQL 5.7.22.

Prerequisites
Contoso and other users must meet the following prerequisites for the assessment:
Owner or Contributor permissions for the Azure subscription, or for a resource group in the Azure
subscription.
An on-premises vCenter Server instance running version 6.5, 6.0, or 5.5.
A read-only account in vCenter Server, or permissions to create one.
Permissions to create a VM on the vCenter Server instance by using an .ova template.
At least one ESXi host running version 5.5 or later.
At least two on-premises VMware VMs, one running a SQL Server database.
Permissions to install Azure Migrate agents on each VM.
The VMs should have direct internet connectivity.
You can restrict internet access to the required URLs.
If your VMs don't have internet connectivity, the Azure Log Analytics Gateway must be installed on
them, and agent traffic directed through it.
The FQDN of the VM running the SQL Server instance, for database assessment.
Windows Firewall running on the SQL Server VM should allow external connections on TCP port 1433
(default). This setup allows Data Migration Assistant to connect.
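The firewall prerequisite (Data Migration Assistant reaching SQL Server on TCP port 1433) can be pre-checked with a short connectivity probe; the hostname below is illustrative:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Try a TCP connection; True means a listener accepted it (firewall open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Before running Data Migration Assistant, check the SQL Server VM on the
# default port (hostname below is a placeholder for your environment):
# port_reachable("sqlvm.contoso.com", 1433)
```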

Assessment overview
Here's how Contoso performs its assessment:
Step 1: Download and install Data Migration Assistant: Contoso prepares Data Migration Assistant for
assessment of the on-premises SQL Server database.
Step 2: Assess the database by using Data Migration Assistant: Contoso runs and analyzes the database
assessment.
Step 3: Prepare for VM assessment by using Azure Migrate: Contoso sets up on-premises accounts and
adjusts VMware settings.
Step 4: Discover on-premises VMs by using Azure Migrate: Contoso creates an Azure Migrate collector
VM. Then, Contoso runs the collector to discover VMs for assessment.
Step 5: Prepare for dependency analysis by using Azure Migrate: Contoso installs Azure Migrate agents
on the VMs, so the company can see dependency mapping between VMs.
Step 6: Assess the VMs by using Azure Migrate: Contoso checks dependencies, groups the VMs, and runs
the assessment. When the assessment is ready, Contoso analyzes the assessment in preparation for
migration.

Step 1: Download and install Data Migration Assistant


1. Contoso downloads Data Migration Assistant from the Microsoft Download Center.
Data Migration Assistant can be installed on any machine that can connect to the SQL Server instance.
Contoso doesn't need to run it on the SQL Server machine.
Data Migration Assistant shouldn't be run on the SQL Server host machine.
2. Contoso runs the downloaded setup file (DownloadMigrationAssistant.msi) to begin the installation.
3. On the Finish page, Contoso selects Launch Microsoft Data Migration Assistant before finishing the
wizard.

Step 2: Run and analyze the database assessment for SmartHotel360


Now, Contoso can run an assessment to analyze its on-premises SQL Server database for the SmartHotel360
app.
1. In Data Migration Assistant, Contoso selects New > Assessment, and then gives the assessment a
project name.
2. For Source server type, Contoso selects SQL Server on Azure Virtual Machines.

NOTE
Currently, Data Migration Assistant doesn't support assessment for migrating to an Azure SQL Database Managed
Instance. As a workaround, Contoso uses SQL Server on an Azure VM as the assumed target for the assessment.

3. In Select Target Version, Contoso selects SQL Server 2017 as the target version. Contoso needs to
select this version because it's the version that's used by the SQL Database Managed Instance.
4. Contoso selects reports to help it discover information about compatibility and new features:
Compatibility Issues note changes that might break migration or that require a minor adjustment
before migration. This report keeps Contoso informed about any features currently in use that are
deprecated. Issues are organized by compatibility level.
New feature recommendations note new features in the target SQL Server platform that can
be used for the database after migration. New feature recommendations are organized under the
headings Performance, Security, and Storage.

5. In Connect to a server, Contoso enters the name of the VM that's running the database and credentials
to access it. Contoso selects Trust server certificate to make sure the VM can access SQL Server. Then,
Contoso selects Connect.

6. In Add source, Contoso adds the database it wants to assess, and then selects Next to start the
assessment.
7. The assessment is created.
8. In Review Results, Contoso views the assessment results.
Analyze the database assessment
Results are displayed as soon as they're available. If Contoso fixes issues, it must select Restart Assessment to
rerun the assessment.
1. In the Compatibility issues report, Contoso checks for any issues at each compatibility level.
Compatibility levels map to SQL Server versions as follows:
100: SQL Server 2008/Azure SQL Database
110: SQL Server 2012/Azure SQL Database
120: SQL Server 2014/Azure SQL Database
130: SQL Server 2016/Azure SQL Database
140: SQL Server 2017/Azure SQL Database

2. In the Feature recommendations report, Contoso views performance, security, and storage features
that the assessment recommends after migration. A variety of features are recommended, including In-
Memory OLTP, columnstore indexes, Stretch Database, Always Encrypted, dynamic data masking, and
transparent data encryption.
NOTE
Contoso should enable transparent data encryption for all SQL Server databases. This is even more critical when a
database is in the cloud than when it's hosted on-premises. Transparent data encryption should be enabled only
after migration. If transparent data encryption is already enabled, Contoso must move the certificate or
asymmetric key to the master database of the target server. Learn how to move a transparent data encryption-
protected database to another SQL Server instance.

3. Contoso can export the assessment in JSON or CSV format.

NOTE
For large-scale assessments:
Run multiple assessments concurrently and view the state of the assessments on the All Assessments page.
Consolidate assessments into a SQL Server database.
Consolidate assessments into a Power BI report.
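When post-processing exported results, the compatibility levels listed in step 1 can be mapped back to SQL Server versions with a small lookup (an illustrative convenience, not part of the Data Migration Assistant tooling):

```python
# SQL Server compatibility levels, as listed in the assessment report.
COMPAT_LEVELS = {
    100: "SQL Server 2008/Azure SQL Database",
    110: "SQL Server 2012/Azure SQL Database",
    120: "SQL Server 2014/Azure SQL Database",
    130: "SQL Server 2016/Azure SQL Database",
    140: "SQL Server 2017/Azure SQL Database",
}

def describe_level(level):
    """Translate a numeric compatibility level into its SQL Server version."""
    return COMPAT_LEVELS.get(level, f"unknown level {level}")

print(describe_level(140))
```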

Step 3: Prepare for VM assessment by using Azure Migrate


Contoso needs to create a VMware account that Azure Migrate can use to automatically discover VMs for
assessment, verify rights to create a VM, note the ports that need to be opened, and set the statistics settings
level.
Set up a VMware account
VM discovery requires a read-only account in vCenter Server that has the following properties:
User type: At least a read-only user.
Permissions: For the datacenter object, select the Propagate to Child Objects checkbox. For Role, select
Read-only.
Details: The user is assigned at the datacenter level, with access to all objects in the datacenter.
To restrict access, assign the No access role, with Propagate to child objects selected, to the child objects
(vSphere hosts, datastores, VMs, and networks).
Verify permissions to create a VM
Contoso verifies that it has permissions to create a VM by importing a file in .ova format. Learn how to create
and assign a role with privileges.
Verify ports
The Contoso assessment uses dependency mapping. Dependency mapping requires an agent to be installed on
VMs that will be assessed. The agent must be able to connect to Azure from TCP port 443 on each VM. Learn
about connection requirements.
Set statistics settings
Before Contoso begins the deployment, it must set the statistics settings for the vCenter Server to level 3.
NOTE
After setting the level, Contoso must wait at least a day before it runs the assessment. Otherwise, the assessment
might not work as expected.
If the level is lower than 3, the assessment works, but:
Performance data for disks and networking isn't collected.
For storage, Azure Migrate recommends a standard disk in Azure, with the same size as the on-premises disk.
For networking, for each on-premises network adapter, a network adapter is recommended in Azure.
For compute, Azure Migrate looks at the VM cores and memory size and recommends an Azure VM with the
same configuration. If there are multiple eligible Azure VM sizes, the one with the lowest cost is recommended.
For more information about sizing by using level 3, see Sizing.
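The compute-sizing behavior described in the note (match cores and memory, then pick the cheapest eligible size) can be sketched as follows. The size names and prices are made-up placeholders, not Azure's actual catalog:

```python
# Hypothetical size catalog: (name, cores, memory_gb, cost_per_hour).
SIZES = [
    ("Standard_D2s_v3", 2, 8, 0.096),
    ("Standard_D4s_v3", 4, 16, 0.192),
    ("Standard_E4s_v3", 4, 32, 0.252),
]

def recommend(cores_needed, memory_gb_needed):
    """Pick the lowest-cost size that covers the requested cores and memory."""
    eligible = [s for s in SIZES
                if s[1] >= cores_needed and s[2] >= memory_gb_needed]
    return min(eligible, key=lambda s: s[3])[0] if eligible else None

print(recommend(4, 16))  # Standard_D4s_v3, cheaper than E4s_v3
```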

To set the level:


1. In the vSphere Web Client, Contoso opens the vCenter Server instance.
2. Contoso selects Manage > Settings > General > Edit.
3. In Statistics, Contoso sets the statistic level settings to Level 3.

Step 4: Discover VMs


To discover VMs, Contoso creates an Azure Migrate project. Contoso downloads and sets up the collector VM.
Then, Contoso runs the collector to discover its on-premises VMs.
Create a project
1. In the Azure portal, Contoso searches for Azure Migrate. Then, Contoso creates a project.
2. Contoso specifies a project name (ContosoMigration) and the Azure subscription. It creates a new Azure
resource group (ContosoFailoverRG).

NOTE
You can create an Azure Migrate project only in the West Central US or East US region.
You can plan a migration for any target location.
The project location is used only to store the metadata that's gathered from on-premises VMs.
Download the collector appliance
Azure Migrate creates an on-premises VM known as the collector appliance. The VM discovers on-premises
VMware VMs and sends metadata about the VMs to the Azure Migrate service. To set up the collector appliance,
Contoso downloads an OVA template, and then imports it to the on-premises vCenter Server instance to create
the VM.
1. In the Azure Migrate project, Contoso selects Getting Started > Discover & Assess > Discover
Machines. Contoso downloads the OVA template file.
2. Contoso copies the project ID and key. The project ID and key are required for configuring the collector.
Verify the collector appliance
Before deploying the VM, Contoso checks that the OVA file is secure:
1. On the machine on which the file was downloaded, Contoso opens an administrator Command Prompt
window.
2. Contoso runs the following command to generate the hash for the OVA file:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]

Example
C:\>CertUtil -HashFile C:\AzureMigrate\AzureMigrate.ova SHA256

3. The generated hash should match the hash values listed here.
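CertUtil is Windows-only; an equivalent cross-platform check can be done with Python's hashlib. The file path and expected value below are placeholders, not the published hash:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large OVAs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the hash published for the downloaded appliance
# (path and expected value are placeholders):
# assert sha256_of("AzureMigrate.ova") == "<published-sha256>"
```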
Create the collector appliance
Now, Contoso can import the downloaded file to the vCenter Server instance and provision the collector
appliance VM:
1. In the vSphere Client console, Contoso selects File > Deploy OVF Template.
2. In the Deploy OVF Template Wizard, Contoso selects Source, and then specifies the location of the OVA
file.
3. In Name and Location, Contoso specifies a display name for the collector VM. Then, it selects the
inventory location in which to host the VM. Contoso also specifies the host or cluster on which to run the
collector appliance.
4. In Storage, Contoso specifies the storage location. In Disk Format, Contoso selects how it wants to
provision the storage.
5. In Network Mapping, Contoso specifies the network in which to connect the collector VM. The network
needs internet connectivity to send metadata to Azure.
6. Contoso reviews the settings, and then selects Power on after deployment > Finish. A message that
confirms successful completion appears when the appliance is created.
Run the collector to discover VMs
Now, Contoso runs the collector to discover VMs. Currently, the collector supports only English
(United States) as the operating system language and collector interface language.
1. In the vSphere Client console, Contoso selects Open Console. Contoso specifies the language, time zone,
and password preferences for the collector VM.
2. On the desktop, Contoso selects the Run collector shortcut.
3. In Azure Migrate Collector, Contoso selects Set up prerequisites. Contoso accepts the license terms and
reads the third-party information.
4. The collector checks that the VM has internet access, that the time is synced, and that the collector service
is running. (The collector service is installed by default on the VM.) Contoso also installs VMware
PowerCLI.

NOTE
It's assumed that the VM has direct access to the internet without using a proxy.

5. In Specify vCenter Server details, Contoso enters the name (FQDN ) or IP address of the vCenter
Server instance and the read-only credentials used for discovery.
6. Contoso selects a scope for VM discovery. The collector can discover only VMs that are within the
specified scope. The scope can be set to a specific folder, datacenter, or cluster. The scope shouldn't contain
more than 1,500 VMs.
7. In Specify migration project, Contoso enters the Azure Migrate project ID and key that were copied
from the portal. To get the project ID and key, Contoso can go to the project Overview page > Discover
Machines.

8. In View collection progress, Contoso can monitor discovery and check that metadata collected from the
VMs is in scope. The collector provides an approximate discovery time.
Verify VMs in the portal
When collection is finished, Contoso checks that the VMs appear in the portal:
1. In the Azure Migrate project, Contoso selects Manage > Machines. Contoso checks that the VMs that it
wants to discover are shown.

2. Currently, the machines don't have the Azure Migrate agents installed. Contoso must install the agents to
view dependencies.

Step 5: Prepare for dependency analysis


To view dependencies between VMs that it wants to assess, Contoso downloads and installs agents on the app
VMs. Contoso installs agents on all VMs for its apps, both for Windows and Linux.
Take a snapshot
To keep a copy of the VMs before modifying them, Contoso takes a snapshot before the agents are installed.

Download and install the VM agents


1. In Machines, Contoso selects the machine. In the Dependencies column, Contoso selects Requires
installation.
2. In the Discover Machines pane, Contoso:
Downloads the Microsoft Monitoring Agent (MMA) and Dependency Agent for each Windows VM.
Downloads the MMA and Dependency Agent for each Linux VM.
3. Contoso copies the workspace ID and key. Contoso needs the workspace ID and key when it installs the
MMA.
Install the agents on Windows VMs
Contoso runs the installation on each VM.
Install the MMA on Windows VMs
1. Contoso double-clicks the downloaded agent.
2. In Destination Folder, Contoso keeps the default installation folder, and then selects Next.
3. In Agent Setup Options, Contoso selects Connect the agent to Azure Log Analytics > Next.

4. In Azure Log Analytics, Contoso pastes the workspace ID and key that it copied from the portal.

5. In Ready to Install, Contoso installs the MMA.


Install the Dependency agent on Windows VMs
1. Contoso double-clicks the downloaded Dependency Agent.
2. Contoso accepts the license terms and waits for the installation to finish.
Install the agents on Linux VMs
Contoso runs the installation on each VM.
Install the MMA on Linux VMs
1. Contoso installs the Python ctypes library on each VM by using the following command:
sudo apt-get install python-ctypeslib

2. Contoso must run the command to install the MMA agent as root. To become root, Contoso runs the
following command, and then enters the root password:
sudo -i

3. Contoso installs the MMA:
Contoso enters the workspace ID and key in the command.
Commands are for 64-bit.
The workspace ID and primary key are located in the Log Analytics workspace in the Azure portal.
Select Settings, and then select the Connected Sources tab.
Run the following command to download the Log Analytics agent, validate the checksum, and
install and onboard the agent:

wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w 6b7fcaff-7efb-4356-ae06-516cacf5e25d -s k7gAMAw5Bk8pFVUTZKmk2lG4eUciswzWfYLDTxGcD8pcyc4oT8c6ZRgsMy3MmsQSHuSOcmBUsCjoRiG2x9A8Mg==

Install the Dependency Agent on Linux VMs


After the MMA is installed, Contoso installs the Dependency Agent on the Linux VMs:
1. The Dependency Agent is installed on Linux computers by using InstallDependencyAgent-Linux64.bin, a
shell script that has a self-extracting binary. Contoso runs the file by using sh, or it adds execute
permissions to the file itself.
2. Contoso installs the Linux Dependency Agent as root:
wget --content-disposition https://aka.ms/dependencyagentlinux -O InstallDependencyAgent-Linux64.bin && sudo sh InstallDependencyAgent-Linux64.bin -s

Step 6: Run and analyze the VM assessment


Contoso can now verify machine dependencies and create a group. Then, it runs the assessment for the group.
Verify dependencies and create a group
1. To determine which machines to analyze, Contoso selects View Dependencies.

2. For SQLVM, the dependency map shows the following details:


Process groups or processes that have active network connections running on SQLVM during the
specified time period (an hour, by default).
Inbound (client) and outbound (server) TCP connections to and from all dependent machines.
Dependent machines that have the Azure Migrate agents installed are shown as separate boxes.
Machines that don't have the agents installed show port and IP address information.
3. For machines that have the agent installed (WEBVM ), Contoso selects the machine box to view more
information. The information includes the FQDN, operating system, and MAC address.
4. Contoso selects the VMs to add to the group (SQLVM and WEBVM ). Contoso uses Ctrl+Click to select
multiple VMs.
5. Contoso selects Create Group, and then enters a name (smarthotelapp).

NOTE
To view more granular dependencies, you can expand the time range. You can select a specific duration or select
start and end dates.

Run an assessment
1. In Groups, Contoso opens the group (smarthotelapp), and then selects Create assessment.

2. To view the assessment, Contoso selects Manage > Assessments.


Contoso uses the default assessment settings, but you can customize settings.
Analyze the VM assessment
An Azure Migrate assessment includes information about the compatibility of on-premises machines with
Azure, suggested right-sizing for Azure VMs, and estimated monthly Azure costs.

Review confidence rating

An assessment has a confidence rating of 1 star to 5 stars (1 star is the lowest and 5 stars is the highest).
The confidence rating is assigned to an assessment based on the availability of data points that are
needed to compute the assessment.
The rating helps you estimate the reliability of the size recommendations that are provided by Azure
Migrate.
The confidence rating is useful when you are doing performance-based sizing, because Azure Migrate
might not have enough data points for utilization-based sizing. For as on-premises sizing, the
confidence rating is always 5 stars, because Azure Migrate has all the data points it needs to size
the VM.
The confidence rating is assigned based on the percentage of data points that are available:

AVAILABILITY OF DATA POINTS    CONFIDENCE RATING

0%-20%                         1 star
21%-40%                        2 stars
41%-60%                        3 stars
61%-80%                        4 stars
81%-100%                       5 stars
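The rating thresholds above amount to a simple banded lookup. As a quick illustration only (the confidence_rating helper below is hypothetical, not part of Azure Migrate or any Azure tooling):

```shell
#!/usr/bin/env bash
# Hypothetical helper: map the percentage of available data points
# to a confidence rating, following the table above.
confidence_rating() {
  local pct=$1
  if   [ "$pct" -le 20 ]; then echo "1 star"
  elif [ "$pct" -le 40 ]; then echo "2 stars"
  elif [ "$pct" -le 60 ]; then echo "3 stars"
  elif [ "$pct" -le 80 ]; then echo "4 stars"
  else                         echo "5 stars"
  fi
}

confidence_rating 75   # prints "4 stars"
```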

Verify Azure readiness

The assessment report shows the information that's summarized below. To do performance-based sizing,
Azure Migrate needs the following information. If the information can't be collected, the sizing
assessment might not be accurate.
Utilization data for CPU and memory.
Read/write IOPS and throughput for each disk attached to the VM.
Network in/out information for each network adapter attached to the VM.

Azure VM readiness: Indicates whether the VM is ready for migration. Possible states are Ready for
Azure, Ready with conditions, Not ready for Azure, and Readiness unknown. If a VM isn't ready, Azure
Migrate shows some remediation steps.

Azure VM size: For ready VMs, Azure Migrate provides an Azure VM size recommendation. The sizing
recommendation depends on assessment properties. If you used performance-based sizing, sizing
considers the performance history of the VMs. If you used as on-premises sizing, sizing is based on
the on-premises VM size, and utilization data isn't used.

Suggested tool: Because Azure machines are running the agents, Azure Migrate looks at the processes
that are running inside the machine and identifies whether the machine is a database machine.

VM information: The report shows settings for the on-premises VM, including the operating system,
boot type, and disk and storage information.

Review monthly cost estimates


This view shows the total compute and storage cost of running the VMs in Azure. It also shows details for each
machine.

Cost estimates are calculated by using the size recommendations for a machine.
Estimated monthly costs for compute and storage are aggregated for all VMs in the group.

Clean up after assessment


When the assessment finishes, Contoso retains the Azure Migrate appliance to use in future evaluations.
Contoso turns off the VMware VM. Contoso will use it again when it evaluates additional VMs.
Contoso keeps the Contoso Migration project in Azure. The project currently is deployed in the
ContosoFailoverRG resource group in the East US Azure region.
The collector VM has a 180-day evaluation license. If the license expires, Contoso will need to
download the collector and set it up again.

Conclusion
In this scenario, Contoso assesses its SmartHotel360 app database by using the Data Migration Assessment
tool. It assesses the on-premises VMs by using the Azure Migrate service. Contoso reviews the assessments to
make sure that on-premises resources are ready for migration to Azure.

Next steps
In the next article in the series, Contoso rehosts its SmartHotel360 app in Azure by using a lift-and-shift
migration. Contoso migrates the front-end WEBVM for the app by using Azure Site Recovery. It migrates the
app database to an Azure SQL Database Managed Instance by using the Database Migration Service. Get
started with this deployment.
Contoso migration: Rehost an on-premises app on
an Azure VM and SQL Database Managed Instance
3/15/2019 • 31 minutes to read • Edit Online

In this article, Contoso migrates its SmartHotel360 app front-end VM to an Azure VM by using the Azure Site
Recovery service. Contoso also migrates the app database to Azure SQL Database Managed Instance.

NOTE
Azure SQL Database Managed Instance currently is in preview.

This article is one in a series of articles that documents how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information and a series of
scenarios that illustrate how to set up a migration infrastructure and run different types of migrations. Scenarios
grow in complexity. Articles will be added to the series over time.

ARTICLE DETAILS STATUS

Article 1: Overview Overview of Contoso's migration Available


strategy, the article series, and the
sample apps that are used in the
series.

Article 2: Deploy an Azure Contoso prepares its on-premises Available


infrastructure infrastructure and its Azure
infrastructure for migration. The same
infrastructure is used for all migration
articles in the series.

Article 3: Assess on-premises resources Contoso runs an assessment of its on- Available
for migration to Azure premises two-tier SmartHotel app
running on VMware. Contoso assesses
app VMs by using the Azure Migrate
service. Contoso assesses the app SQL
Server database by using Data
Migration Assistant.

Article 4: Rehost an app on an Azure Contoso runs a lift-and-shift migration This article
VM and SQL Database Managed to Azure for its on-premises
Instance SmartHotel app. Contoso migrates the
app front-end VM by using Azure Site
Recovery. Contoso migrates the app
database to an Azure SQL Database
Managed Instance by using the Azure
Database Migration Service.

Article 5: Rehost an app on Azure VMs Contoso migrates its SmartHotel app Available
VMs to Azure VMs by using the Site
Recovery service.

Article 6: Rehost an app on Azure VMs Contoso migrates the SmartHotel app. Available
and in a SQL Server AlwaysOn Contoso uses Site Recovery to migrate
availability group the app VMs. It uses the Database
Migration Service to migrate the app
database to a SQL Server cluster that's
protected by an AlwaysOn availability
group.

Article 7: Rehost a Linux app on Azure Contoso completes a lift-and-shift Available


VMs migration of its Linux osTicket app to
Azure VMs by using Site Recovery.

Article 8: Rehost a Linux app on Azure Contoso migrates its Linux osTicket Available
VMs and Azure Database for MySQL app to Azure VMs by using Site
Recovery. It migrates the app database
to Azure Database for MySQL by using
MySQL Workbench.

Article 9: Refactor an app in an Azure Contoso migrates its SmartHotel app Available
web app and Azure SQL Database to an Azure web app and migrates the
app database to an Azure SQL Server
instance.

Article 10: Refactor a Linux app in an Contoso migrates its Linux osTicket Available
Azure web app and Azure Database for app to an Azure web app on multiple
MySQL sites. The web app is integrated with
GitHub for continuous delivery.
Contoso migrates the app database to
an Azure Database for MySQL
instance.

Article 11: Refactor Team Foundation Contoso migrates its on-premises Available
Server on Azure DevOps Services Team Foundation Server deployment
by migrating it to Azure DevOps
Services in Azure.

Article 12: Rearchitect an app in Azure Contoso migrates its SmartHotel app Available
containers and Azure SQL Database to Azure, and then rearchitects the
app. Contoso rearchitects the app web
tier as a Windows container, and
rearchitects the app database by using
Azure SQL Database.

Article 13: Rebuild an app in Azure Contoso rebuilds its SmartHotel app Available
by using a range of Azure capabilities
and services, including Azure App
Service, Azure Kubernetes Service,
Azure Functions, Azure Cognitive
Services, and Azure Cosmos DB.

Article 14: Scale a migration to Azure After trying out migration Available
combinations, Contoso prepares to
scale to a full migration to Azure.

You can download the sample SmartHotel360 app that's used in this article from GitHub.

Business drivers
Contoso's IT leadership team has worked closely with the company's business partners to understand what the
business wants to achieve with this migration:
Address business growth: Contoso is growing. As a result, pressure has increased on the company's on-
premises systems and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and to streamline processes for its
developers and users. The business needs IT to be fast and to not waste time or money, so the company can
deliver faster on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes that occur in the marketplace for the company to be successful in a global
economy. IT at Contoso must not get in the way or become a business blocker.
Scale: As the company's business grows successfully, Contoso IT must provide systems that can grow at the
same pace.

Migration goals
The Contoso cloud team has identified goals for this migration. The company uses migration goals to determine
the best migration method.
After migration, the app in Azure should have the same performance capabilities that the app has today in
Contoso's on-premises VMware environment. Moving to the cloud doesn't mean that app performance is
less critical.
Contoso doesn’t want to invest in the app. The app is critical and important to the business, but Contoso
simply wants to move the app in its current form to the cloud.
Database administration tasks should be minimized after the app is migrated.
Contoso doesn't want to use an Azure SQL Database for this app. It's looking for alternatives.

Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that it will use for the migration.
Current architecture
Contoso has one main datacenter (contoso-datacenter) . The datacenter is located in the city of New York in
the Eastern United States.
Contoso has three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber Metro Ethernet connection (500 MBps).
Each branch is connected locally to the internet by using business-class connections with IPsec VPN tunnels
back to the main datacenter. The setup allows Contoso's entire network to be permanently connected and
optimizes internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts that are
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management. Contoso uses DNS servers on the internal network.
Contoso has an on-premises domain controller (contosodc1).
The domain controllers run on VMware VMs. The domain controllers at local branches run on physical
servers.
The SmartHotel360 app is tiered across two VMs (WEBVM and SQLVM ) that are located on a VMware
ESXi version 6.5 host (contosohost1.contoso.com ).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com ) running on a VM.
Proposed architecture
In this scenario, Contoso wants to migrate its two-tier on-premises travel app as follows:
Migrate the app database (SmartHotelDB ) to an Azure SQL Database Managed Instance.
Migrate the frontend WebVM to an Azure VM.
The on-premises VMs in the Contoso datacenter will be decommissioned when the migration is finished.

Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Server Managed Instance. The following considerations helped them to decide to go with Managed
Instance.
Managed Instance aims to deliver almost 100% compatibility with the latest on-premises SQL Server
version. Microsoft recommends Managed Instance for customers running SQL Server on-premises or on
IaaS VMs who want to migrate their apps to a fully managed service with minimal design changes.
Contoso is planning to migrate a large number of apps from on-premises to IaaS. Many of these apps
are ISV provided. Contoso realizes that using Managed Instance will help ensure database
compatibility for these apps, rather than using SQL Database, which might not be supported.
Contoso can simply do a lift-and-shift migration to Managed Instance using the fully automated
Database Migration Service (DMS). With this service in place, Contoso can reuse it for future database migrations.
SQL Managed Instance supports SQL Server Agent, which is an important requirement for the SmartHotel360 app.
Contoso needs this compatibility; otherwise, it will have to redesign the maintenance plans that the app requires.
With Software Assurance, Contoso can exchange their existing licenses for discounted rates on a SQL
Database Managed Instance using the Azure Hybrid Benefit for SQL Server. This can allow Contoso to save
up to 30% on Managed Instance.
Managed Instance is fully contained in the virtual network, so it provides a high level of isolation and security
for Contoso’s data. Contoso can get the benefits of the public cloud, while keeping the environment isolated
from the public Internet.
Managed Instance supports many security features, including Always Encrypted, dynamic data masking,
row-level security, and threat detection.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.

Pros:
WEBVM will be moved to Azure without changes, making the migration simple.
SQL Managed Instance supports Contoso's technical requirements and goals.
Managed Instance will provide 100% compatibility with their current deployment, while moving them
away from SQL Server 2008 R2.
They can leverage their investment in Software Assurance by using the Azure Hybrid Benefit for SQL
Server and Windows Server.
They can reuse the Database Migration Service for additional future migrations.
SQL Managed Instance has built-in fault tolerance that Contoso doesn't need to configure. This
ensures that the data tier is no longer a single point of failure.

Cons:
WEBVM is running Windows Server 2008 R2. Although this operating system is supported by Azure, it is
no longer a supported platform. Learn more.
The web tier remains a single point of failure, with only WEBVM providing services.
Contoso will need to continue supporting the app web tier as a VM, rather than moving to a managed
service such as Azure App Service.
For the data tier, Managed Instance might not be the best solution if Contoso wants to customize the
operating system or the database server, or if they want to run third-party apps along with SQL
Server. Running SQL Server on an IaaS VM could provide this flexibility.

Migration process
Contoso will migrate the web and data tiers of its SmartHotel360 app to Azure by completing these steps:
1. Contoso already has its Azure infrastructure in place, so it just needs to add a couple of specific Azure
components for this scenario.
2. The data tier will be migrated by using the Data Migration Service. The Data Migration Service connects
to the on-premises SQL Server VM across a site-to-site VPN connection between the Contoso datacenter
and Azure. Then, the Data Migration Service migrates the database.
3. The web tier will be migrated through a lift-and-shift migration by using Site Recovery. The
process entails preparing the on-premises VMware environment, setting up and enabling replication,
and migrating the VMs by failing them over to Azure.

Azure services
SERVICE DESCRIPTION COST

Database Migration Service The Database Migration Service Learn about supported regions and
enables seamless migration from Database Migration Service pricing.
multiple database sources to Azure
data platforms with minimal downtime.

Azure SQL Database Managed Managed Instance is a managed Using a SQL Database Managed
Instance database service that represents a fully Instance running in Azure incurs
managed SQL Server instance in the charges based on capacity. Learn more
Azure cloud. It uses the same code as about Managed Instance pricing.
the latest version of SQL Server
Database Engine, and has the latest
features, performance improvements,
and security patches.

Azure Site Recovery The Site Recovery service orchestrates During replication to Azure, Azure
and manages migration and disaster Storage charges are incurred. Azure
recovery for Azure VMs and on- VMs are created and incur charges
premises VMs and physical servers. when failover occurs. Learn more about
Site Recovery charges and pricing.

Prerequisites
Contoso and other users must meet the following prerequisites for this scenario:

REQUIREMENTS DETAILS

Enroll in the Managed Instance preview You must be enrolled in the SQL Database Managed
Instance limited public preview. You need an Azure
subscription to sign up. Signup can take a few days to
complete, so make sure to sign up before you begin to
deploy this scenario.

Azure subscription You should have already created a subscription when you
perform the assessment in the first article in this series. If you
don't have an Azure subscription, create a free account.

If you create a free account, you're the administrator of your


subscription and can perform all actions.

If you use an existing subscription and you're not the


administrator of the subscription, you need to work with the
admin to assign you Owner or Contributor permissions.

If you need more granular permissions, see Use role-based


access control to manage Site Recovery access.

Site Recovery (on-premises) Your on-premises vCenter Server instance should be running
version 5.5, 6.0, or 6.5

An ESXi host running version 5.5, 6.0, or 6.5

One or more VMware VMs running on the ESXi host.

VMs must meet Azure requirements.

Supported network and storage configuration.



Database Migration Service For the Database Migration Service, you need a compatible
on-premises VPN device.

You must be able to configure the on-premises VPN device.


It must have an external-facing public IPv4 address. The
address can't be located behind a NAT device.

Make sure you have access to your on-premises SQL Server


database.

Windows Firewall should be able to access the source


database engine. Learn how to configure Windows Firewall
for Database Engine access.

If there's a firewall in front of your database machine, add


rules to allow access to the database and files via SMB port
445.

The credentials that are used to connect to the source SQL


Server instance and which target Managed Instance must be
members of the sysadmin server role.

You need a network share in your on-premises database that


the Database Migration Service can use to back up the
source database.

Make sure that the service account running the source SQL
Server instance has write permissions on the network share.

Make a note of a Windows user and password that has full


control permissions on the network share. The Database
Migration Service impersonates these user credentials to
upload backup files to the Azure Storage container.

The SQL Server Express installation process sets the TCP/IP


protocol to Disabled by default. Make sure that it's enabled.

Scenario steps
Here's how Contoso plans to set up the deployment:
Step 1: Set up a SQL Database Managed Instance: Contoso needs a pre-created Managed Instance to
which the on-premises SQL Server database will migrate.
Step 2: Prepare the Database Migration Service: Contoso must register the database migration provider,
create an instance, and then create a Database Migration Service project. Contoso also must set up a shared
access signature (SAS ) Uniform Resource Identifier (URI) for the Database Migration Service. An SAS URI
provides delegated access to resources in Contoso's storage account, so Contoso can grant limited
permissions to storage objects. Contoso sets up an SAS URI, so the Database Migration Service can access
the storage account container to which the service uploads the SQL Server backup files.
Step 3: Prepare Azure for Site Recovery: Contoso must create a storage account to hold replicated data
for Site Recovery. It also must create an Azure Recovery Services vault.
Step 4: Prepare on-premises VMware for Site Recovery: Contoso will prepare accounts for VM
discovery and agent installation to connect to Azure VMs after failover.
Step 5: Replicate VMs: To set up replication, Contoso configures the Site Recovery source and target
environments, sets up a replication policy, and starts replicating VMs to Azure Storage.
Step 6: Migrate the database by using the Database Migration Service: Contoso migrates the
database.
Step 7: Migrate the VMs by using Site Recovery: Contoso runs a test failover to make sure everything's
working. Then, Contoso runs a full failover to migrate the VMs to Azure.

Step 1: Prepare a SQL Database Managed Instance


To set up an Azure SQL Database Managed Instance, Contoso needs a subnet that meets the following
requirements:
The subnet must be dedicated. It must be empty, and it can't contain any other cloud service. The subnet can't
be a gateway subnet.
After the Managed Instance is created, Contoso should not add resources to the subnet.
The subnet can't have a network security group associated with it.
The subnet must have a user-defined route (UDR) table. The only route assigned should be 0.0.0.0/0
next hop internet.
Optional custom DNS: If custom DNS is specified on the Azure virtual network, Azure's recursive
resolver IP address (such as 168.63.129.16) must be added to the list. Learn how to configure custom
DNS for a Managed Instance.
The subnet mustn't have a service endpoint (storage or SQL) associated with it. Service endpoints
should be disabled on the virtual network.
The subnet must have a minimum of 16 IP addresses. Learn how to size the Managed Instance subnet.
In Contoso's hybrid environment, custom DNS settings are required. Contoso configures DNS settings to
use one or more of the company's Azure DNS servers. Learn more about DNS customization.
Set up a virtual network for the Managed Instance
Contoso admins set up the virtual network as follows:
1. They create a new virtual network (VNET-SQLMI-EUS2) in the primary East US 2 region. They add the
virtual network to the ContosoNetworkingRG resource group.
2. They assign an address space of 10.235.0.0/24. They ensure that the range doesn't overlap with any other
networks in the enterprise.
3. They add two subnets to the network:
SQLMI-DS-EUS2 (10.235.0.0/25)
SQLMI-SAW-EUS2 (10.235.0.128/29). This subnet is used to attach a directory to the Managed
Instance.
4. After the virtual network and subnets are deployed, they peer the networks as follows:
They peer VNET-SQLMI-EUS2 with VNET-HUB-EUS2 (the hub virtual network for East US 2).
They peer VNET-SQLMI-EUS2 with VNET-PROD-EUS2 (the production network).

5. They set custom DNS settings. DNS points first to Contoso's Azure domain controllers. Azure DNS is
secondary. The Contoso Azure domain controllers are located as follows:
Located in the PROD-DC-EUS2 subnet, in the East US 2 production network (VNET-PROD-EUS2)
CONTOSODC3 address: 10.245.42.4
CONTOSODC4 address: 10.245.42.5
Azure DNS resolver: 168.63.129.16
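Contoso performs these steps in the portal, but the same network setup could be sketched with the Azure CLI. This is an illustrative sketch, not the article's method: the location value, the peering name, and the exact flag spellings (for example, --remote-vnet vs. --remote-vnet-id on older CLI versions) are assumptions to verify against your CLI version and subscription, and peering must also be created in the reverse direction:

```shell
# Create the Managed Instance virtual network with its first subnet.
az network vnet create \
  --resource-group ContosoNetworkingRG \
  --name VNET-SQLMI-EUS2 \
  --location eastus2 \
  --address-prefix 10.235.0.0/24 \
  --subnet-name SQLMI-DS-EUS2 \
  --subnet-prefix 10.235.0.0/25

# Add the second subnet, used to attach a directory to the Managed Instance.
az network vnet subnet create \
  --resource-group ContosoNetworkingRG \
  --vnet-name VNET-SQLMI-EUS2 \
  --name SQLMI-SAW-EUS2 \
  --address-prefixes 10.235.0.128/29

# Peer the new network with the hub network. Only one direction is shown;
# a matching peering must be created from VNET-HUB-EUS2 back to this network.
az network vnet peering create \
  --resource-group ContosoNetworkingRG \
  --name SQLMI-to-Hub \
  --vnet-name VNET-SQLMI-EUS2 \
  --remote-vnet VNET-HUB-EUS2 \
  --allow-vnet-access
```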
Need more help?
Get an overview of SQL Database Managed Instance.
Learn how to create a virtual network for a SQL Database Managed Instance.
Learn how to set up peering.
Learn how to update Azure Active Directory DNS settings.
Set up routing
The Managed Instance is placed in a private virtual network. Contoso needs a route table for the virtual network
to communicate with the Azure Management Service. If the virtual network can't communicate with the service
that manages it, the virtual network becomes inaccessible.
Contoso considers these factors:
The route table contains a set of rules (routes) that specify how packets sent from the Managed Instance
should be routed in the virtual network.
The route table is associated with subnets in which Managed Instances are deployed. Each packet that
leaves a subnet is handled based on the associated route table.
A subnet can be associated with only one route table.
There are no additional charges for creating route tables in Microsoft Azure.
To set up routing, Contoso admins do the following:
1. They create a UDR (route) table in the ContosoNetworkingRG resource group.

2. To comply with Managed Instance requirements, after the route table (MIRouteTable) is deployed, they
add a route that has an address prefix of 0.0.0.0/0. The Next hop type option is set to Internet.

3. They associate the route table with the SQLMI-DS-EUS2 subnet (in the VNET-SQLMI-EUS2 network).
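The same routing setup could be sketched with the Azure CLI. This is a sketch under stated assumptions: the route name ToManagementService is hypothetical, and the subnet and resource group names are taken from the article:

```shell
# Create the route table required by the Managed Instance subnet.
az network route-table create \
  --resource-group ContosoNetworkingRG \
  --name MIRouteTable

# Managed Instance requires a 0.0.0.0/0 route with next hop type Internet.
# The route name here (ToManagementService) is a placeholder.
az network route-table route create \
  --resource-group ContosoNetworkingRG \
  --route-table-name MIRouteTable \
  --name ToManagementService \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type Internet

# Associate the route table with the Managed Instance subnet.
az network vnet subnet update \
  --resource-group ContosoNetworkingRG \
  --vnet-name VNET-SQLMI-EUS2 \
  --name SQLMI-DS-EUS2 \
  --route-table MIRouteTable
```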

Need more help?


Learn how to set up routes for a Managed Instance.
Create a Managed Instance
Now, Contoso admins can provision a SQL Database Managed Instance:
1. Because the Managed Instance serves a business app, they deploy the Managed Instance in the
company's primary East US 2 region. They add the Managed Instance to the ContosoRG resource group.
2. They select a pricing tier, compute size, and storage for the instance. Learn more about Managed Instance
pricing.
3. After the Managed Instance is deployed, two new resources appear in the ContosoRG resource group:
A virtual cluster in case Contoso has multiple Managed Instances.
The SQL Server Database Managed Instance.
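Provisioning can also be sketched with the Azure CLI. This is a hedged sketch, not Contoso's actual command: the instance name, admin credentials, subscription ID, and the tier/capacity values are placeholders, and deployment can take several hours:

```shell
# Provision the Managed Instance in the dedicated subnet.
# contoso-sqlmi, contosoadmin, <subscription-id>, and <strong-password>
# are placeholders; the tier and capacity values are illustrative.
az sql mi create \
  --resource-group ContosoRG \
  --name contoso-sqlmi \
  --location eastus2 \
  --subnet /subscriptions/<subscription-id>/resourceGroups/ContosoNetworkingRG/providers/Microsoft.Network/virtualNetworks/VNET-SQLMI-EUS2/subnets/SQLMI-DS-EUS2 \
  --admin-user contosoadmin \
  --admin-password '<strong-password>' \
  --capacity 8 \
  --storage 256GB \
  --edition GeneralPurpose \
  --family Gen5
```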

Need more help?


Learn how to provision a Managed Instance.

Step 2: Prepare the Database Migration Service


To prepare the Database Migration Service, Contoso admins need to do a few things:
Register the Database Migration Service provider in Azure.
Provide the Database Migration Service with access to Azure Storage for uploading the backup files that are
used to migrate a database. To provide access to Azure Storage, they create an Azure Blob storage container.
They generate an SAS URI for the Blob storage container.
Create a Database Migration Service project.
Then, they complete the following steps:
1. They register the database migration provider under their subscription.

2. They create a Blob storage container. Contoso generates an SAS URI so that the Database Migration
Service can access it.

3. They create a Database Migration Service instance.

4. They place the Database Migration Service instance in the PROD-DC-EUS2 subnet of the VNET-PROD-DC-EUS2 virtual network.
The Database Migration Service is placed here because the service must be in a virtual network
that can access the on-premises SQL Server VM via a VPN gateway.
The VNET-PROD-EUS2 is peered to VNET-HUB-EUS2 and is allowed to use remote gateways.
The Use remote gateways option ensures that the Database Migration Service can communicate
as required.

Need more help?


Learn how to set up the Database Migration Service.
Learn how to create and use SAS.
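The SAS URI that Contoso generates is an ordinary blob container URL with a signed query string appended. The following standard-library sketch shows how the parts of one can be inspected; the account, container, and token values are made up, and the sig value is a placeholder rather than a real signature:

```python
from urllib.parse import urlsplit, parse_qs
from datetime import datetime, timezone

# Hypothetical SAS URI for the backup container; sig is a placeholder.
sas_uri = ("https://contosostorage.blob.core.windows.net/migration-backups"
           "?sv=2018-03-28&sp=rwl&se=2019-06-01T00:00:00Z&sig=REDACTED")

params = parse_qs(urlsplit(sas_uri).query)
permissions = params["sp"][0]  # r = read, w = write, l = list
expiry = datetime.strptime(params["se"][0], "%Y-%m-%dT%H:%M:%SZ").replace(
    tzinfo=timezone.utc)

print(permissions)                                         # rwl
print(expiry > datetime(2019, 5, 1, tzinfo=timezone.utc))  # True (still valid on that date)
```

The token must remain valid, and grant sufficient permissions on the container, for the full duration of the migration.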

Step 3: Prepare Azure for the Site Recovery service


Several Azure elements are required for Contoso to set up Site Recovery for migration of its web tier VM
(WEBVM):
A virtual network in which failed-over resources are located.
A storage account to hold replicated data.
A Recovery Services vault in Azure.
Contoso admins set up Site Recovery as follows:
1. Because the VM is a web front end to the SmartHotel360 app, Contoso fails over the VM to its existing
production network (VNET-PROD-EUS2) and subnet (PROD-FE-EUS2). The network and subnet are
located in the primary East US 2 region. Contoso set up the network when it deployed the Azure
infrastructure.
2. They create a storage account (contosovmsacc20180528). Contoso uses a general-purpose account.
Contoso selects standard storage and locally redundant storage replication.
3. With the network and storage account in place, they create a vault (ContosoMigrationVault). Contoso
places the vault in the ContosoFailoverRG resource group, in the primary East US 2 region.

Need more help?


Learn how to set up Azure for Site Recovery.

Step 4: Prepare on-premises VMware for Site Recovery


To prepare VMware for Site Recovery, Contoso admins must complete these tasks:
Prepare an account on the vCenter Server instance or vSphere ESXi host. The account automates VM
discovery.
Prepare an account that allows automatic installation of the Mobility Service on VMware VMs that Contoso
wants to replicate.
Prepare on-premises VMs to connect to Azure VMs when they're created after failover.
Prepare an account for automatic discovery
Site Recovery needs access to VMware servers to:
Automatically discover VMs. A minimum of a read-only account is required.
Orchestrate replication, failover, and failback. Contoso needs an account that can run operations such as
creating and removing disks and turning on VMs.
Contoso admins set up the account by completing these tasks:
1. They create a role at the vCenter level.
2. They assign the required permissions to that role.
Need more help?
Learn how to create and assign a role for automatic discovery.
Prepare an account for Mobility Service installation
The Mobility Service must be installed on the VM that Contoso wants to replicate. Contoso considers these
factors about the Mobility Service:
Site Recovery can do an automatic push installation of this component when Contoso enables replication for
the VM.
For automatic push installation, Contoso must prepare an account that Site Recovery uses to access the VM.
This account is specified when replication is configured in the Azure console.
Contoso must have a domain or local account with permissions to install on the VM.
Need more help?
Learn how to create an account for push installation of the Mobility Service.
Prepare to connect to Azure VMs after failover
After failover to Azure, Contoso wants to be able to connect to the replicated VMs in Azure. To connect to the
replicated VMs in Azure, Contoso admins must complete a few tasks on the on-premises VM before the
migration:
1. For access over the internet, they enable RDP on the on-premises VM before failover. They ensure that TCP
and UDP rules are added for the Public profile, and that RDP is allowed in Windows Firewall > Allowed
Apps for all profiles.
2. For access over Contoso's site-to-site VPN, they enable RDP on the on-premises machine. They allow RDP
in Windows Firewall > Allowed apps and features for Domain and Private networks.
3. They set the operating system's SAN policy on the on-premises VM to OnlineAll.
Contoso admins also need to check these items when they run a failover:
There should be no Windows updates pending on the VM when a failover is triggered. If Windows updates
are pending, Contoso users can't sign in to the virtual machine until the update is finished.
After failover, admins should check Boot diagnostics to view a screenshot of the VM. If they can't view the
boot diagnostics, they should check that the VM is running, and then review troubleshooting tips.

Step 5: Replicate the on-premises VMs to Azure


Before running a migration to Azure, Contoso admins need to set up and enable replication for the on-premises
VM.
Set a replication goal
1. In the vault, under the vault name (ContosoVMVault), they set a replication goal (Getting Started >
Site Recovery > Prepare infrastructure).
2. They specify that the machines are located on-premises, that they're VMware VMs, replicating to Azure.
Confirm deployment planning
To continue, Contoso admins confirm that they've completed deployment planning. They select Yes, I have
done it. In this deployment, Contoso is migrating only a single VM, so deployment planning isn't needed.
Set up the source environment
Now, Contoso admins configure the source environment. To set up its source environment, they download an
OVF template, and use it to deploy the configuration server and its associated components as a highly available,
on-premises VMware VM. Components on the server include:
The configuration server that coordinates communications between the on-premises infrastructure and
Azure. The configuration server manages data replication.
The process server that acts as a replication gateway. The process server:
Receives replication data.
Optimizes replication data by using caching, compression, and encryption.
Sends replication data to Azure Storage.
The process server also installs the Mobility Service on the VMs that will be replicated. The process server
performs automatic discovery of on-premises VMware VMs.
After the configuration server VM is created and started, Contoso registers the server in the vault.
To set up the source environment, Contoso admins do the following:
1. They download the OVF template from the Azure portal ( Prepare Infrastructure > Source >
Configuration Server).
2. They import the template into VMware to create and deploy the VM.
3. When they turn on the VM for the first time, it starts in a Windows Server 2016 installation experience.
They accept the license agreement and enter an administrator password.
4. When the installation is finished, they sign in to the VM as the administrator. At first sign-in, the
Azure Site Recovery Configuration Tool runs automatically.
5. In the Site Recovery Configuration Tool, they enter a name to use to register the configuration server in
the vault.
6. The tool checks the VM's connection to Azure. After the connection is established, they select Sign in to
sign in to the Azure subscription. The credentials must have access to the vault in which the configuration
server is registered.
7. The tool performs some configuration tasks, and then reboots. They sign in to the machine again. The
Configuration Server Management Wizard starts automatically.
8. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
9. They select the subscription, resource group, and Recovery Services vault in which to register the
configuration server.

10. They download and install MySQL Server and VMware PowerCLI. Then, they validate the server
settings.
11. After validation, they enter the FQDN or IP address of the vCenter Server instance or vSphere host. They
leave the default port, and enter a display name for the vCenter Server instance in Azure.
12. They specify the account created earlier so that Site Recovery can automatically discover VMware VMs
that are available for replication.
13. They enter credentials, so the Mobility Service is automatically installed when replication is enabled. For
Windows machines, the account needs local administrator permissions on the VMs.

14. When registration is finished, in the Azure portal, they verify again that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery connects to VMware servers by using the specified settings, and discovers VMs.
Set up the target
Now, Contoso admins configure the target replication environment:
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's a storage account and network in the specified target.
Create a replication policy
When the source and target are set up, Contoso admins create a replication policy and associate the policy with
the configuration server:
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create the ContosoMigrationPolicy policy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention: Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency: Default of 1 hour. This value specifies the frequency at
which application-consistent snapshots are created.

3. The policy is automatically associated with the configuration server.
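The three default values above can be summarized, and sanity-checked against each other, in a few lines. This is only a sketch; the dictionary keys are illustrative and do not correspond to Site Recovery API field names:

```python
# ContosoMigrationPolicy defaults, expressed in minutes. The key names are
# illustrative, not Site Recovery API fields.
policy = {
    "rpo_threshold_minutes": 60,              # alert if replication lag exceeds this
    "recovery_point_retention_minutes": 24 * 60,
    "app_consistent_snapshot_minutes": 60,
}

# Number of app-consistent snapshots available within the retention window.
snapshots_in_window = (policy["recovery_point_retention_minutes"]
                       // policy["app_consistent_snapshot_minutes"])
print(snapshots_in_window)  # 24
```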


Need more help?
You can read a full walkthrough of these steps in Set up disaster recovery for on-premises VMware VMs.
Detailed instructions are available to help you set up the source environment, deploy the configuration
server, and configure replication settings.
Enable replication
Now, Contoso admins can start replicating WEBVM.
1. In Replicate application > Source > Replicate, they select the source settings.
2. They indicate that they want to enable virtual machines, select the vCenter Server instance, and set the
configuration server.

3. They specify the target settings, including the resource group and network in which the Azure VM will be
located after failover. They specify the storage account in which replicated data will be stored.
4. They select WebVM for replication. Site Recovery installs the Mobility Service on each VM when
replication is enabled.
5. They check that the correct replication policy is selected, and enable replication for WEBVM. They track
replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for failover.
6. In Essentials in the Azure portal, they can see status for the VMs that are replicating to Azure:

Need more help?


You can read a full walkthrough of these steps in Enable replication.

Step 6: Migrate the database


Contoso admins need to create a Database Migration Service project, and then migrate the database.
Create a Database Migration Service project
1. They create a Database Migration Service project. They select the SQL Server source server type, and
Azure SQL Database Managed Instance as the target.

2. The Migration Wizard opens.


Migrate the database
1. In the Migration Wizard, they specify the source VM on which the on-premises database is located. They
enter the credentials to access the database.
2. They select the database to migrate (SmartHotel.Registration):

3. For the target, they enter the name of the Managed Instance in Azure, and the access credentials.

4. In New Activity > Run Migration, they specify settings to run migration:
Source and target credentials.
The database to migrate.
The network share created on the on-premises VM. The Database Migration Service takes source
backups to this share.
The service account that runs the source SQL Server instance must have write permissions on
this share.
The FQDN path to the share must be used.
The SAS URI that provides the Database Migration Service with access to the storage account
container to which the service uploads the backup files for migration.

5. They save the migration settings, and then run the migration.
6. In Overview, they monitor the migration status.
7. When migration is finished, they verify that the target databases exist on the Managed Instance.
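Step 4 above requires the FQDN path to the network share. A small check along these lines can catch short host names before the migration is run; the share paths are hypothetical:

```python
import re

# The Database Migration Service must reference the backup share by its
# FQDN path (for example \\host.domain.com\share), not a short host name.
def uses_fqdn(unc_path: str) -> bool:
    match = re.match(r"\\\\([^\\]+)\\", unc_path)
    return bool(match) and "." in match.group(1)

print(uses_fqdn(r"\\sqlvm.contoso.com\migration-backups"))  # True
print(uses_fqdn(r"\\SQLVM\migration-backups"))              # False
```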

Step 7: Migrate the VM


Contoso admins run a quick test failover, and then migrate the VM.
Run a test failover
Before migrating WEBVM, a test failover helps ensure that everything works as expected. Admins complete the
following steps:
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover. With this option selected, Site Recovery
attempts to shut down the source VM before it triggers the failover. Failover continues, even if shutdown fails.
3. Test failover runs:
a. A prerequisites check runs to make sure that all the conditions required for migration are in place.
b. Failover processes the data so that an Azure VM can be created. If the latest recovery point is selected,
a recovery point is created from the data.
c. Azure VM is created by using the data processed in the preceding step.
4. When the failover is finished, the replica Azure VM appears in the Azure portal. They verify that everything is
working properly: the VM is the appropriate size, it's connected to the correct network, and it's running.
5. After verifying the test failover, they clean up the failover, and record any observations.
Migrate the VM
1. After verifying that the test failover worked as expected, Contoso admins create a recovery plan for
migration, and add WEBVM to the plan:

2. They run a failover on the plan, selecting the latest recovery point. They specify that Site Recovery should
try to shut down the on-premises VM before it triggers the failover.

3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
4. After verifying, they complete the migration to finish the migration process, stop replication for the VM,
and stop Site Recovery billing for the VM.

Update the connection string


As the final step in the migration process, Contoso admins update the connection string of the application to
point to the migrated database that's running on Contoso's Managed Instance.
1. In the Azure portal, they find the connection string by selecting Settings > Connection Strings.

2. They update the string with the user name and password of the SQL Database Managed Instance.
3. After the string is configured, they replace the current connection string in the web.config file of its
application.
4. After updating the file and saving it, they restart IIS on WEBVM by running IISRESET /RESTART in a
Command Prompt window.
5. After IIS is restarted, the app uses the database that's running on the SQL Database Managed Instance.
6. At this point, they can shut down the on-premises SQLVM machine. The migration is complete.
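The web.config edit in this section amounts to swapping the connectionString attribute value. A standard-library sketch of that edit; the connection string name, server FQDN, and credentials below are placeholders, not Contoso's real values:

```python
import xml.etree.ElementTree as ET

# A minimal web.config fragment. "DefaultConnection" and all server and
# credential values are hypothetical placeholders.
config_xml = """<configuration>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=SQLVM;Database=SmartHotel.Registration;User Id=olduser;Password=oldpass" />
  </connectionStrings>
</configuration>"""

root = ET.fromstring(config_xml)
conn = root.find("./connectionStrings/add[@name='DefaultConnection']")
conn.set("connectionString",
         "Server=contoso-mi.example.database.windows.net;"  # placeholder Managed Instance FQDN
         "Database=SmartHotel.Registration;User Id=miadmin;Password=<secret>")

print("contoso-mi" in conn.get("connectionString"))  # True
```

After the updated file is saved, the IISRESET /RESTART step described above reloads the app with the new string.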
Need more help?
Learn how to run a test failover.
Learn how to create a recovery plan.
Learn how to fail over to Azure.

Clean up after migration


With the migration complete, the SmartHotel360 app is running on an Azure VM and the SmartHotel360
database is available in the Azure SQL Database Managed Instance.
Now, Contoso needs to do the following cleanup tasks:
Remove the WEBVM machine from the vCenter Server inventory.
Remove the SQLVM machine from the vCenter Server inventory.
Remove WEBVM and SQLVM from local backup jobs.
Update internal documentation to show the new location and IP address for WEBVM.
Remove SQLVM from internal documentation. Alternatively, Contoso can revise the documentation to show
SQLVM as deleted and no longer in the VM inventory.
Review any resources that interact with the decommissioned VMs. Update any relevant settings or
documentation to reflect the new configuration.

Review the deployment


With the migrated resources in Azure, Contoso needs to fully operationalize and secure its new infrastructure.
Security
The Contoso security team reviews the Azure VMs and SQL Database Managed Instance to check for any
security issues with its implementation:
The team reviews the network security groups that are used to control access for the VM. Network
security groups help ensure that only traffic that is allowed to the app can pass.
Contoso's security team also is considering securing the data on the disk by using Azure Disk Encryption
and Azure Key Vault.
The team enables threat detection on the Managed Instance. Threat detection sends an alert to Contoso's
security team/service desk system to open a ticket if a threat is detected. Learn more about threat
detection for Managed Instance.
To learn more about security practices for VMs, see Security best practices for IaaS workloads in Azure.
BCDR
For business continuity and disaster recovery (BCDR ), Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the VMs using the Azure Backup service. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Contoso learns more about managing SQL Managed Instance, including database backups.
Licensing and cost optimization
Contoso has existing licenses for WEBVM. To take advantage of pricing with the Azure Hybrid Benefit,
Contoso converts the existing Azure VM.
Contoso enables Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. Cost Management is
a multi-cloud cost management solution that helps Contoso use and manage Azure and other cloud
resources. Learn more about Azure Cost Management.

Conclusion
In this article, Contoso rehosts the SmartHotel360 app in Azure by migrating the app front-end VM to Azure by
using the Site Recovery service. Contoso migrates the on-premises database to an Azure SQL Database
Managed Instance by using the Azure Database Migration Service.

Next steps
In the next article in the series, Contoso rehosts the SmartHotel360 app on Azure VMs by using the Azure Site
Recovery service.
Contoso migration: Rehost an on-premises app to
Azure VMs
3/15/2019 • 20 minutes to read • Edit Online

This article demonstrates how Contoso rehosts the on-premises SmartHotel360 app in Azure, by migrating the
app VMs to Azure VMs.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that illustrate
setting up a migration infrastructure, assessing on-premises resources for migration, and running different types
of migrations. Scenarios grow in complexity. We'll add additional articles over time.

Article 1: Overview (Available)
Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series.

Article 2: Deploy Azure infrastructure (Available)
Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same
infrastructure is used for all migration articles in the series.

Article 3: Assess on-premises resources for migration to Azure (Available)
Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses
app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance (Available)
Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates
the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL
Database Managed Instance using the Azure Database Migration Service.

Article 5: Rehost an app on Azure VMs (This article)
Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (Available)
Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the
Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an
AlwaysOn availability group.

Article 7: Rehost a Linux app on Azure VMs (Available)
Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site
Recovery.

Article 8: Rehost a Linux app on Azure VMs and Azure MySQL (Available)
Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app
database to an Azure MySQL Server instance using MySQL Workbench.

Article 9: Refactor an app on Azure Web Apps and Azure SQL database (Available)
Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure
SQL Server instance with Database Migration Assistant.

Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL (Available)
Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic
Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure
Database for MySQL instance.

Article 11: Refactor TFS on Azure DevOps Services (Available)
Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.

Article 12: Rearchitect an app on Azure containers and Azure SQL Database (Available)
Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container
running in Azure Service Fabric, and the database with Azure SQL Database.

Article 13: Rebuild an app in Azure (Available)
Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App
Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.

Article 14: Scale a migration to Azure (Available)
After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
In this article, Contoso will migrate the two-tier Windows .NET SmartHotel360 app running on VMware VMs
to Azure. If you want to use this app, it's provided as open source and you can download it from GitHub.

Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there is pressure on their on-premises
systems and infrastructure.
Limit risk: The SmartHotel360 app is critical for the Contoso business. It wants to move the app to Azure
with zero risk.
Extend: Contoso doesn't want to modify the app, but does want to ensure that it's stable.

Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals are used to determine the best
migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in VMware.
The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Contoso doesn't want to change the ops model for this app. Contoso does want to interact with the app in the
cloud in the same way that they do now.
Contoso doesn't want to change any app functionality. Only the app location will change.

Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The app is tiered across two VMs (WEBVM and SQLVM ).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com ), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Since the app is a production workload, the app VMs in Azure will reside in the production resource group
ContosoRG.
The app VMs will be migrated to the primary Azure region (East US 2) and placed in the production network
(VNET-PROD-EUS2).
The web frontend VM will reside in the frontend subnet (PROD-FE-EUS2) in the production network.
The database VM will reside in the database subnet (PROD-DB-EUS2) in the production network.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Server. The following considerations helped them to decide to go with SQL Server running on an Azure
IaaS VM:
Using an Azure VM running SQL Server seems to be an optimal solution if Contoso needs to customize the
operating system or the database server, or if it might want to colocate and run third-party apps on the same
VM.
With Software Assurance, Contoso can in the future exchange existing licenses for discounted rates on a SQL
Database Managed Instance using the Azure Hybrid Benefit for SQL Server. This can save up to 30% on
Managed Instance costs.
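The savings claim can be illustrated with simple arithmetic. Only the up-to-30% rate comes from the text above; the monthly list price below is an invented figure:

```python
# Hypothetical monthly Managed Instance list price; only the "up to 30%"
# discount rate is taken from the article.
list_price_per_month = 1000.00
max_hybrid_benefit_discount = 0.30

discounted_price = round(list_price_per_month * (1 - max_hybrid_benefit_discount), 2)
monthly_savings = round(list_price_per_month - discounted_price, 2)

print(discounted_price)  # 700.0
print(monthly_savings)   # 300.0
```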
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.

Pros:
Both the app VMs will be moved to Azure without changes, making the migration simple.
Since Contoso is using lift-and-shift for both app VMs, no special configuration or migration tools are
needed for the app database.
Contoso can leverage their investment in Software Assurance, using the Azure Hybrid Benefit.
Contoso will retain full control of the app VMs in Azure.

Cons:
WEBVM and SQLVM are running Windows Server 2008 R2. The operating system is supported by Azure
for specific roles (July 2018). Learn more.
The web and data tiers of the app will remain a single point of failover.
SQLVM is running SQL Server 2008 R2, which isn't in mainstream support. However, it is supported for
Azure VMs (July 2018). Learn more.
Contoso will need to continue supporting the app as Azure VMs rather than moving to a managed service
such as Azure App Service and Azure SQL Database.

Migration process
Contoso will migrate the app frontend and database VMs to Azure VMs with Site Recovery:
As a first step, Contoso prepares and sets up Azure components for Site Recovery, and prepares the on-
premises VMware infrastructure.
They already have the Azure infrastructure in place, so Contoso just needs to add a couple of Azure
components specifically for Site Recovery.
With everything prepared, Contoso can start replicating the VMs.
After replication is enabled and working, Contoso will migrate the VM by failing it over to Azure.

Azure services
Azure Site Recovery: The service orchestrates and manages migration and disaster recovery for Azure VMs,
and for on-premises VMs and physical servers.
Cost: During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur
charges, when failover occurs. Learn more about charges and pricing.

Prerequisites
Here's what Contoso needs to run this scenario.
Azure subscription: Contoso created subscriptions in an earlier article in this series. If you don't have an
Azure subscription, create a free account. If you create a free account, you're the administrator of your
subscription and can perform all actions. If you use an existing subscription and you're not the administrator,
you need to work with the admin to assign you Owner or Contributor permissions. If you need more granular
permissions, review this article.

Azure infrastructure: Learn how Contoso set up an Azure infrastructure. Learn more about specific network
and storage requirements for Site Recovery.

On-premises servers: On-premises vCenter Servers should be running version 5.5, 6.0, or 6.5. ESXi hosts
should run version 5.5, 6.0, or 6.5. One or more VMware VMs should be running on the ESXi host.

On-premises VMs: VMs must meet Azure requirements.

Scenario steps
Here's how Contoso admins will run the migration:
Step 1: Prepare Azure for Site Recovery: They create an Azure storage account to hold replicated data,
and a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: They prepare accounts for VM discovery and
agent installation, and prepare to connect to Azure VMs after failover.
Step 3: Replicate VMs: They set up replication, and start replicating VMs to Azure storage.
Step 4: Migrate the VMs with Site Recovery: They run a test failover to make sure everything's working,
and then run a full failover to migrate the VMs to Azure.

Step 1: Prepare Azure for the Site Recovery service


Here are the Azure components Contoso needs to migrate the VMs to Azure:
A VNet in which Azure VMs will be located when they're created during failover.
An Azure storage account to hold replicated data.
A Recovery Services vault in Azure.
They set these up as follows:
1. Set up a network. Contoso already set up a network that can be used for Site Recovery when it deployed
the Azure infrastructure.
The SmartHotel360 app is a production app, and the VMs will be migrated to the Azure production
network (VNET-PROD-EUS2) in the primary East US 2 region.
Both VMs will be placed in the ContosoRG resource group, which is used for production resources.
The app frontend VM (WEBVM) will migrate to the frontend subnet (PROD-FE-EUS2), in the
production network.
The app database VM (SQLVM) will migrate to the database subnet (PROD-DB-EUS2), in the
production network.
2. Set up a storage account. Contoso creates an Azure storage account (contosovmsacc20180528) in the
primary region.
The storage account must be in the same region as the Recovery Services vault.
They use a general-purpose account, with standard storage and LRS replication.

3. Create a vault. With the network and storage account in place, Contoso now creates a Recovery Services
vault (ContosoMigrationVault), and places it in the ContosoFailoverRG resource group in the primary
East US 2 region.

Need more help?


Learn about setting up Azure for Site Recovery.

Step 2: Prepare on-premises VMware for Site Recovery


Here's what Contoso prepares on-premises:
An account on the vCenter server or vSphere ESXi host, to automate VM discovery.
An account that allows automatic installation of the Mobility service on the VMware VMs.
On-premises VM settings, so that Contoso can connect to the replicated Azure VMs after failover.
Prepare an account for automatic discovery
Site Recovery needs access to VMware servers to:
Automatically discover VMs.
Orchestrate replication, failover, and failback for VMs.
At least a read-only account is required. The account should be able to run operations such as creating and
removing disks, and turning on VMs.
Contoso admins set up the account as follows:
1. They create a role at the vCenter level.
2. They assign that role the required permissions.
Prepare an account for Mobility service installation
The Mobility service must be installed on each VM.
Site Recovery can do an automatic push installation of the Mobility service when VM replication is enabled.
An account is required, so that Site Recovery can access the VMs for the push installation. You specify this
account when you set up replication.
The account can be domain or local, with permissions to install on the VMs.
Prepare to connect to Azure VMs after failover
After failover, Contoso wants to connect to the Azure VMs. To do this, Contoso admins do the following before
migration:
1. For access over the internet they:
Enable RDP on the on-premises VM before failover.
Ensure that TCP and UDP rules are added for the Public profile.
Check that RDP is allowed in Windows Firewall > Allowed Apps for all profiles.
2. For access over site-to-site VPN, they:
Enable RDP on the on-premises machine.
Allow RDP in the Windows Firewall -> Allowed apps and features, for Domain and Private
networks.
Set the operating system's SAN policy on the on-premises VM to OnlineAll.
In addition, when they run a failover they need to check the following:
There should be no Windows updates pending on the VM when triggering a failover. If there are, they won't
be able to log into the VM until the update completes.
After failover, they can check Boot diagnostics to view a screenshot of the VM. If this doesn't work, they
should verify that the VM is running, and review these troubleshooting tips.
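The pre-failover checks above amount to a short checklist. The sketch below is illustrative Python only, with assumed dictionary keys (`rdp_enabled`, `san_policy`, and so on) that stand in for the real VM settings; it is not an Azure API.

```python
# Illustrative pre-failover checklist for connecting to a migrated VM over RDP.
# The keys below are assumptions for this sketch, not Azure properties.
def ready_for_rdp(vm):
    checks = [
        vm.get("rdp_enabled", False),              # RDP enabled before failover
        vm.get("firewall_allows_rdp", False),      # Windows Firewall allows RDP
        vm.get("san_policy") == "OnlineAll",       # SAN policy set to OnlineAll
        not vm.get("pending_windows_updates", True),  # no pending updates at failover
    ]
    return all(checks)

webvm = {"rdp_enabled": True, "firewall_allows_rdp": True,
         "san_policy": "OnlineAll", "pending_windows_updates": False}
print(ready_for_rdp(webvm))  # True
```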
Need more help?
Learn about creating and assigning a role for automatic discovery.
Learn about creating an account for push installation of the Mobility service.

Step 3: Replicate the on-premises VMs


Before Contoso admins can run a migration to Azure, they need to set up and enable replication.
Set a replication goal
1. In the vault, under the vault name (ContosoVMVault), they select a replication goal (Getting Started >
Site Recovery > Prepare infrastructure).
2. They specify that their machines are located on-premises, running on VMware, and replicating to Azure.

Confirm deployment planning


To continue, they confirm that they have completed deployment planning, by selecting Yes, I have done it. In
this scenario, Contoso is migrating only two VMs, and doesn't need deployment planning.
Set up the source environment
Contoso admins need to configure the source environment. To do this, they download an OVF template and use
it to deploy the Site Recovery configuration server as a highly available, on-premises VMware VM. After the
configuration server is up and running, they register it in the vault.
The configuration server runs a number of components:
The configuration server component that coordinates communications between on-premises and Azure and
manages data replication.
The process server that acts as a replication gateway. It receives replication data; optimizes it with caching,
compression, and encryption; and sends it to Azure storage.
The process server also installs Mobility Service on VMs you want to replicate and performs automatic
discovery of on-premises VMware VMs.
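The process server's pipeline (receive, cache, compress, encrypt, send) can be pictured with a minimal Python sketch. This is purely illustrative: the XOR step is a placeholder for real encryption, and the staging is an assumption for demonstration, not the actual Site Recovery implementation.

```python
import zlib

def process_replication_data(chunk: bytes) -> bytes:
    """Illustrative sketch of the process server stages, not the real component."""
    cached = chunk                       # stage 1: cache the received replication data
    compressed = zlib.compress(cached)   # stage 2: compress before upload
    # stage 3: placeholder "encryption" (XOR) purely for illustration
    encrypted = bytes(b ^ 0x5A for b in compressed)
    return encrypted                     # stage 4: this payload would go to Azure storage

payload = b"disk block data" * 100
out = process_replication_data(payload)
print(len(out) < len(payload))  # True: compression shrinks repetitive data
```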
Contoso admins perform these steps as follows:
1. In the vault, they download the OVF template from Prepare Infrastructure > Source > Configuration
Server.
2. They import the template into VMware to create and deploy the VM.
3. When they turn on the VM for the first time, it boots up into a Windows Server 2016 installation
experience. They accept the license agreement, and enter an administrator password.
4. After the installation finishes, they sign in to the VM as the administrator. At first sign-in, the Azure Site
Recovery Configuration Tool runs by default.
5. In the tool, they specify a name to register the configuration server in the vault.
6. The tool checks that the VM can connect to Azure. After the connection is established, they sign in to the
Azure subscription. The credentials must have access to the vault in which they'll register the
configuration server.
7. The tool performs some configuration tasks and then reboots.
8. They sign in to the machine again, and the Configuration Server Management Wizard starts
automatically.
9. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
10. They select the subscription, resource group, and the vault in which to register the configuration server.

11. They download and install MySQL Server and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the server in Azure.
13. They specify the account that they created for automatic discovery, and the credentials that are used to
automatically install the Mobility Service. For Windows machines, the account needs local administrator
privileges on the VMs.

14. After registration finishes, in the Azure portal, they double check that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers using the specified settings, and discovers VMs.
Set up the target
Now Contoso admins specify the target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target location.
Create a replication policy
Now Contoso admins can create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.

3. The policy is automatically associated with the configuration server.
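The interplay of the retention window and snapshot frequency above can be sketched in Python. This is a simplified model under the stated defaults (24-hour retention, hourly app-consistent snapshots); the real service also creates more frequent crash-consistent points.

```python
from datetime import datetime, timedelta

def recoverable_points(now, retention_hours=24, app_snapshot_hours=1):
    """List the app-consistent recovery points inside the retention window
    (illustrative model of the policy defaults, not the service itself)."""
    window_start = now - timedelta(hours=retention_hours)
    points = []
    t = now
    while t >= window_start:
        points.append(t)                       # each point is recoverable
        t -= timedelta(hours=app_snapshot_hours)
    return points

points = recoverable_points(datetime(2019, 4, 5, 12, 0))
print(len(points))  # 25: one per hour across the 24-hour window, endpoints inclusive
```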


Enable replication for WEBVM
With everything in place, Contoso admins can now enable replication for the VMs. They start with WEBVM.
1. In Replicate application > Source > +Replicate they select the source settings.
2. They indicate that they want to enable VMs, select the vCenter server, and the configuration server.

3. They select the target settings, including the resource group and Azure network, and the storage account.

4. They select WebVM for replication, check the replication policy, and enable replication.
At this stage they only selects WEBVM because VNet and subnet must be selected, and the app
VMs will be placed in different subnets.
Site Recovery automatically installs the Mobility service on the VM when replication is enabled.

5. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
6. In Essentials in the Azure portal, they can see the structure for the VMs replicating to Azure.
Enable replication for SQLVM
Now Contoso admins can start replicating the SQLVM machine, using the same process as above.
1. They select source settings.
2. They then specify the target settings.

3. They select SQLVM for replication.


4. They apply the same replication policy that was used for WEBVM, and enable replication.

Need more help?


You can read a full walkthrough of all these steps in Set up disaster recovery for on-premises VMware VMs.
Detailed instructions are available to help you set up the source environment, deploy the configuration
server, and configure replication settings.
You can learn more about enabling replication.

Step 4: Migrate the VMs


Contoso admins run a quick test failover, and then a full failover to migrate the VMs.
Run a test failover
A test failover helps to ensure that everything's working as expected.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut
down the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If they select the latest recovery point,
a recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is
the appropriate size, connected to the right network, and is running.
5. After verifying the test failover, they clean up the failover, and record and save any observations.
Create and customize a recovery plan
After verifying that the test failover worked as expected, Contoso admins create a recovery plan for migration.
A recovery plan specifies the order in which failover occurs, and indicates how Azure VMs will be brought
online in Azure.
Since the app is two-tier, they customize the recovery plan so that the data VM (SQLVM) starts before the
frontend (WEBVM).
1. In Recovery Plans (Site Recovery) > +Recovery Plan, they create a plan and add the VMs to it.

2. After creating the plan, they customize it (Recovery Plans > SmartHotelMigrationPlan >
Customize).
3. They remove WEBVM from Group 1: Start. This ensures that the first start action affects SQLVM only.
4. In +Group > Add protected items, they add WEBVM to Group 2: Start. The VMs need to be in two
different groups.
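The group ordering above can be expressed as a tiny Python sketch: groups run sequentially, so everything in Group 1 is up before Group 2 starts. The dictionary shape is an assumption for illustration, not a Site Recovery data structure.

```python
# Illustrative model of the customized recovery plan: groups boot in order.
recovery_plan = {
    "Group 1: Start": ["SQLVM"],   # data tier boots first
    "Group 2: Start": ["WEBVM"],   # frontend boots only after Group 1 completes
}

# Flatten the groups in order to get the effective boot sequence.
boot_order = [vm for group in recovery_plan.values() for vm in group]
print(boot_order)  # ['SQLVM', 'WEBVM']
```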
Migrate the VMs
Now Contoso admins run a full failover to complete the migration.
1. They select the recovery plan > Failover.
2. They select to fail over to the latest recovery point, and that Site Recovery should try to shut down the on-
premises VM before triggering the failover. They can follow the failover progress on the Jobs page.
3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.

4. After verification, they complete the migration for each VM. This stops replication for the VM, and stops
Site Recovery billing for it.
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.

Clean up after migration


With migration complete, the SmartHotel360 app tiers are now running on Azure VMs.
Now, Contoso needs to complete these cleanup steps:
Remove the WEBVM machine from the vCenter inventory.
Remove the SQLVM machine from the vCenter inventory.
Remove WEBVM and SQLVM from local backup jobs.
Update internal documentation to show the new location, and IP addresses for the VMs.
Review any resources that interact with the VMs, and update any relevant settings or documentation to
reflect the new configuration.

Review the deployment


With the app now running, Contoso now needs to fully operationalize and secure it in Azure.
Security
The Contoso security team reviews the Azure VMs, to determine any security issues.
To control access, the team reviews the Network Security Groups (NSGs) for the VMs. NSGs are used to
ensure that only traffic allowed to the app can reach it.
The team also considers securing the data on the disk using Azure Disk Encryption and Key Vault.
Read more about security practices for VMs.

BCDR
For business continuity and disaster recovery (BCDR), Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the VMs using the Azure Backup service. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
1. Contoso has existing licensing for their VMs, and will leverage the Azure Hybrid Benefit. Contoso will convert
the existing Azure VMs, to take advantage of this pricing.
2. Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.

Conclusion
In this article, Contoso rehosted the SmartHotel360 app in Azure by migrating the app VMs to Azure VMs using
the Site Recovery service.

Next steps
In the next article in the series, we'll show you how Contoso rehosts the SmartHotel360 app frontend VM on an
Azure VM, and migrates the database to a SQL Server AlwaysOn Availability Group in Azure.
Contoso migration: Rehost an on-premises app on
Azure VMs and SQL Server AlwaysOn Availability
Group
3/15/2019 • 30 minutes to read • Edit Online

This article demonstrates how Contoso rehosts the SmartHotel360 app in Azure. Contoso migrates the app
frontend VM to an Azure VM, and the app database to an Azure SQL Server VM, running in a Windows Server
failover cluster with SQL Server AlwaysOn availability groups.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that illustrate
setting up a migration infrastructure, assessing on-premises resources for migration, and running different types
of migrations. Scenarios grow in complexity. We'll add additional articles over time.

Article 1: Overview (Available)
Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series.

Article 2: Deploy Azure infrastructure (Available)
Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same
infrastructure is used for all migration articles in the series.

Article 3: Assess on-premises resources for migration to Azure (Available)
Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app
VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance (Available)
Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates
the app front-end VM using Azure Site Recovery, and migrates the app database to an Azure SQL Database
Managed Instance using the Azure Database Migration Service.

Article 5: Rehost an app on Azure VMs (Available)
Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (This article)
Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the
Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an
AlwaysOn availability group.

Article 7: Rehost a Linux app on Azure VMs (Available)
Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site
Recovery.

Article 8: Rehost a Linux app on Azure VMs and Azure MySQL Server (Available)
Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app
database to an Azure MySQL Server instance using MySQL Workbench.

Article 9: Refactor an app on Azure Web Apps and Azure SQL database (Available)
Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure SQL
Server instance with Database Migration Assistant.

Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL (Available)
Contoso migrates its Linux osTicket app to an Azure web app in multiple Azure regions using Azure Traffic
Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure
Database for MySQL instance.

Article 11: Refactor TFS on Azure DevOps Services (Available)
Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.

Article 12: Rearchitect an app on Azure containers and Azure SQL Database (Available)
Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container
running in Azure Service Fabric, and the database with Azure SQL Database.

Article 13: Rebuild an app in Azure (Available)
Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure
App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.

Article 14: Scale a migration to Azure (Available)
After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.

In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there is pressure on on-premises systems
and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster on
customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes in the marketplace, to enable success in a global economy. IT mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.

Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in VMware.
The app will remain as critical in the cloud as it is on-premises.
Contoso doesn't want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
The on-premises database for the app has had availability issues. Contoso would like to deploy it in Azure as
a high-availability cluster, with failover capabilities.
Contoso wants to upgrade from their current SQL Server 2008 R2 platform, to SQL Server 2017.
Contoso doesn't want to use an Azure SQL Database for this app, and is looking for alternatives.

Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that it will use for the migration.
Current architecture
The app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on a VMware ESXi host, contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
In this scenario:
Contoso will migrate the app frontend WEBVM to an Azure IaaS VM.
The frontend VM in Azure will be deployed in the ContosoRG resource group (used for production
resources).
It will be located in the Azure production network (VNET-PROD-EUS2) in the primary East US 2
region.
The app database will be migrated to an Azure SQL Server VM.
It will be located in Contoso's Azure database network (PROD-DB-EUS2) in the primary East US 2
region.
It will be placed in a Windows Server failover cluster with two nodes, that uses SQL Server Always On
Availability Groups.
In Azure the two SQL Server VM nodes in the cluster will be deployed in the ContosoRG resource
group.
The VM nodes will be located in the Azure production network (VNET-PROD-EUS2) in the primary
East US 2 region.
VMs will run Windows Server 2016 with SQL Server 2017 Enterprise Edition. Contoso doesn't have
licenses for this operating system, so it will use an image in the Azure Marketplace that provides the
license as a charge to their Azure EA commitment.
Apart from unique names, both VMs use the same settings.
Contoso will deploy an internal load balancer which listens for traffic on the cluster, and directs it to the
appropriate cluster node.
The internal load balancer will be deployed in the ContosoNetworkingRG (used for networking
resources).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.

Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Server. The following considerations helped them to decide to go with an Azure IaaS VM running SQL
Server:
Using an Azure VM running SQL Server seems to be an optimal solution if Contoso needs to customize the
operating system or the database server, or if it might want to colocate and run third-party apps on the same
VM.
Using the Data Migration Assistant, Contoso can easily assess and migrate to an Azure SQL Database.
Solution review
Contoso evaluates their proposed design by putting together a pros and cons list.

Pros:
WEBVM will be moved to Azure without changes, making the migration simple.
The SQL Server tier will run on SQL Server 2017 and Windows Server 2016. This retires their current
Windows Server 2008 R2 operating system, and running SQL Server 2017 supports Contoso's technical
requirements and goals. It provides 100% compatibility while moving away from SQL Server 2008 R2.
Contoso can leverage their investment in Software Assurance, using the Azure Hybrid Benefit.
A high availability SQL Server deployment in Azure provides fault tolerance, so that the app data tier is
no longer a single point of failure.

Cons:
WEBVM is running Windows Server 2008 R2. The operating system is supported by Azure for specific roles
(July 2018). Learn more.
The web tier of the app will remain a single point of failure.
Contoso will need to continue supporting the web tier as an Azure VM rather than moving to a managed
service such as Azure App Service.
With the chosen solution, Contoso will need to continue managing two SQL Server VMs rather than moving to
a managed platform such as Azure SQL Database Managed Instance. In addition, with Software Assurance,
Contoso could exchange their existing licenses for discounted rates on Azure SQL Database Managed Instance.

Azure services
Database Migration Assistant: DMA runs locally from the on-premises SQL Server machine, and migrates the
database across a site-to-site VPN to Azure. Cost: DMA is a free, downloadable tool.

Azure Site Recovery: Site Recovery orchestrates and manages migration and disaster recovery for Azure VMs,
and on-premises VMs and physical servers. Cost: During replication to Azure, Azure Storage charges are
incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and
pricing.

Migration process
Contoso admins will migrate the app VMs to Azure.
They'll migrate the frontend VM to Azure VM using Site Recovery:
As a first step, they'll prepare and set up Azure components, and prepare the on-premises VMware
infrastructure.
With everything prepared, they can start replicating the VM.
After replication is enabled and working, they migrate the VM by failing it over to Azure.
They'll migrate the database to a SQL Server cluster in Azure, using the Data Migration Assistant (DMA).
As a first step they'll need to provision SQL Server VMs in Azure, set up the cluster and an internal
load balancer, and configure AlwaysOn availability groups.
With this in place, they can migrate the database.
After the migration, they'll enable AlwaysOn protection for the database.

Prerequisites
Here's what Contoso needs to do for this scenario.

Azure subscription: Contoso already created a subscription in an earlier article in this series. If you
don't have an Azure subscription, create a free account. If you create a free account, you're the
administrator of your subscription and can perform all actions. If you use an existing subscription and
you're not the administrator, you need to work with the admin to assign you Owner or Contributor
permissions. If you need more granular permissions, review this article.

Azure infrastructure: Learn how Contoso set up an Azure infrastructure. Learn more about specific network
and storage requirements for Site Recovery.

Site Recovery (on-premises): The on-premises vCenter server should be running version 5.5, 6.0, or 6.5,
with an ESXi host running version 5.5, 6.0, or 6.5, and one or more VMware VMs running on the ESXi host.
VMs you want to replicate must meet Azure requirements, with a supported network and storage configuration.

Scenario steps
Here's how Contoso will run the migration:
Step 1: Prepare a cluster: Create a cluster for deploying two SQL Server VM nodes in Azure.
Step 2: Deploy and set up the cluster: Prepare an Azure SQL Server cluster. Databases are migrated into
this pre-created cluster.
Step 3: Deploy the load balancer: Deploy a load balancer to balance traffic to the SQL Server nodes.
Step 4: Prepare Azure for Site Recovery: Create an Azure storage account to hold replicated data, and a
Recovery Services vault.
Step 5: Prepare on-premises VMware for Site Recovery: Prepare accounts for VM discovery and agent
installation. Prepare on-premises VMs so that users can connect to Azure VMs after migration.
Step 6: Replicate VMs: Enable VM replication to Azure.
Step 7: Install DMA: Download and install the Database Migration Assistant.
Step 8: Migrate the database with DMA: Migrate the database to Azure.
Step 9: Protect the database: Create an Always On availability group for the cluster.
Step 10: Migrate the web app VM: Run a test failover to make sure everything's working as expected.
Then run a full failover to Azure.
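The scenario steps above form a strictly ordered checklist; a minimal Python sketch makes the sequencing explicit (the step names are taken from the list, nothing else is assumed):

```python
# The ten scenario steps, in the order Contoso runs them.
steps = [
    "Prepare a cluster",
    "Deploy and set up the cluster",
    "Deploy the load balancer",
    "Prepare Azure for Site Recovery",
    "Prepare on-premises VMware for Site Recovery",
    "Replicate VMs",
    "Install DMA",
    "Migrate the database with DMA",
    "Protect the database",
    "Migrate the web app VM",
]
for number, step in enumerate(steps, start=1):
    print(f"Step {number}: {step}")
```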

Step 1: Prepare a SQL Server AlwaysOn availability group cluster


Contoso admins set up the cluster as follows:
1. They create two SQL Server VMs by selecting the SQL Server 2017 Enterprise Windows Server 2016 image
in the Azure Marketplace.

2. In the Create virtual machine Wizard > Basics, they configure:


Names for the VMs: SQLAOG1 and SQLAOG2.
Since machines are business-critical, they enable SSD for the VM disk type.
They specify machine credentials.
They deploy the VMs in the primary EAST US 2 region, in the ContosoRG resource group.
3. In Size, they start with D2s_V3 SKU for both VMs. They'll scale later as they need to.
4. In Settings, they do the following:
Since these VMs are critical databases for the app, they use managed disks.
They place the machines in the production network of the EAST US 2 primary region (VNET-PROD-EUS2),
in the database subnet (PROD-DB-EUS2).
They create a new availability set: SQLAOGAVSET, with two fault domains and five update
domains.

5. In SQL Server settings, they limit SQL connectivity to the virtual network (private), on default port
1433. For authentication they use the same credentials as they use onsite (contosoadmin).
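The availability set created above (two fault domains, five update domains) spreads the two nodes so that a single rack or maintenance event can't take both down. Here is a hedged Python sketch of round-robin placement; the real Azure allocator is more involved, this only illustrates the idea.

```python
def assign_domains(vms, fault_domains=2, update_domains=5):
    """Round-robin placement sketch, mirroring how an availability set
    spreads VMs across fault and update domains (illustrative only)."""
    return {vm: {"fault_domain": i % fault_domains,
                 "update_domain": i % update_domains}
            for i, vm in enumerate(vms)}

placement = assign_domains(["SQLAOG1", "SQLAOG2"])
# The two nodes land in different fault domains, so one hardware
# failure can't stop both cluster nodes at once.
print(placement["SQLAOG1"]["fault_domain"], placement["SQLAOG2"]["fault_domain"])  # 0 1
```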

Need more help?


Get help provisioning a SQL Server VM.
Learn about configuring VMs for different SQL Server SKUs.

Step 2: Deploy and set up the cluster


Here's how Contoso admins set up the cluster:
1. They set up an Azure storage account to act as the cloud witness.
2. They add the SQL Server VMs to the Active Directory domain in the Contoso on-premises datacenter.
3. They create the cluster in Azure.
4. They configure the cloud witness.
5. Lastly, they enable SQL Always On availability groups.
Set up a storage account as cloud witness
To set up a cloud witness, Contoso needs an Azure Storage account that will hold the blob file used for cluster
arbitration. The same storage account can be used to set up cloud witness for multiple clusters.
Contoso admins create a storage account as follows:
1. They specify a recognizable name for the account (contosocloudwitness).
2. They deploy a general-purpose account, with LRS.
3. They place the account in a third region - South Central US. They place it outside the primary and
secondary region so that it remains available in case of regional failure.
4. They place it in their resource group that holds infrastructure resources - ContosoInfraRG.

5. When they create the storage account, primary and secondary access keys are generated for it. They need
the primary access key to create the cloud witness. The key appears under the storage account name >
Access Keys.
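The cloud witness matters because it gives the two-node cluster a third quorum vote: the cluster stays up as long as a majority of votes is reachable. A hedged Python sketch of the majority rule (illustrative only, not the Windows clustering implementation):

```python
def has_quorum(node_votes_up, witness_up):
    """Majority voting for a two-node cluster plus a cloud witness."""
    total_votes = 2 + 1  # two SQL Server nodes plus the cloud witness
    votes_present = node_votes_up + (1 if witness_up else 0)
    return votes_present > total_votes // 2  # strict majority of 3 is 2

print(has_quorum(node_votes_up=1, witness_up=True))   # True: survives one node failure
print(has_quorum(node_votes_up=1, witness_up=False))  # False: one vote of three is no majority
```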
Add SQL Server VMs to Contoso domain
1. Contoso adds SQLAOG1 and SQLAOG2 to the contoso.com domain.
2. Then, on each VM they install the Windows Failover Cluster Feature and Tools.
Set up the cluster
Before setting up the cluster, Contoso admins take a snapshot of the OS disk on each machine.

1. Then, they run a script they've put together to create the Windows Failover Cluster.
2. After they've created the cluster, they verify that the VMs appear as cluster nodes.

Configure the cloud witness


1. Contoso admins configure the cloud witness using the Quorum Configuration Wizard in Failover
Cluster Manager.
2. In the wizard they select to create a cloud witness with the storage account.
3. After the cloud witness is configured, it appears in the Failover Cluster Manager snap-in.

Enable SQL Server Always On availability groups


Contoso admins can now enable Always On:
1. In SQL Server Configuration Manager, they enable AlwaysOn Availability Groups for the SQL Server
(MSSQLSERVER) service.
2. They restart the service for changes to take effect.
With AlwaysOn enabled, Contoso can set up the AlwaysOn availability group that will protect the
SmartHotel360 database.
Need more help?
Read about cloud witness and setting up a storage account for it.
Get instructions for setting up a cluster and creating an availability group.

Step 3: Deploy the Azure Load Balancer


Contoso admins now want to deploy an internal load balancer that sits in front of the cluster nodes. The load
balancer listens for traffic, and directs it to the appropriate node.
They create the load balancer as follows:
1. In Azure portal > Networking > Load Balancer, they set up a new internal load balancer:
ILB-PROD-DB-EUS2-SQLAOG.
2. They place the load balancer in the production network VNET-PROD-EUS2, in the database subnet
PROD-DB-EUS2.
3. They assign it a static IP address: 10.245.40.100.
4. As a networking element, they deploy the load balancer in the networking resource group
ContosoNetworkingRG.
After the internal load balancer is deployed, they need to set it up. They create a backend address pool, set up a
health probe, and configure a load balancing rule.
Add a backend pool
To distribute traffic to the VMs in the cluster, Contoso admins set up a backend address pool that contains the IP
addresses of the NICs for VMs that will receive network traffic from the load balancer.
1. In the load balancer settings in the portal, Contoso admins add a backend pool: ILB-PROD-DB-EUS-SQLAOG-
BEPOOL.
2. They associate the pool with availability set SQLAOGAVSET. The VMs in the set (SQLAOG1 and
SQLAOG2) are added to the pool.
Create a health probe
Contoso admins create a health probe so that the load balancer can monitor the app health. The probe
dynamically adds or removes VMs from the load balancer rotation, based on how they respond to health checks.
They create the probe as follows:
1. In the load balancer settings in the portal, Contoso creates a health probe:
SQLAlwaysOnEndPointProbe.
2. They set the probe to monitor VMs on TCP port 59999.
3. They set an interval of 5 seconds between probes, and a threshold of 2. If two probes fail, the VM will be
considered unhealthy.
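The threshold behavior described above (a VM is unhealthy only after two consecutive failed probes) can be sketched in Python. This is an illustrative model of the counting rule, not the load balancer itself.

```python
def probe_healthy(results, unhealthy_threshold=2):
    """results: most-recent-last list of True/False TCP probe outcomes.
    A VM is marked unhealthy once consecutive failures reach the threshold;
    any successful probe resets the count (illustrative model)."""
    consecutive_failures = 0
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
    return consecutive_failures < unhealthy_threshold

print(probe_healthy([True, True, False]))   # True: one failure is below the threshold
print(probe_healthy([True, False, False]))  # False: two consecutive failures
```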
Configure the load balancer to receive traffic
Now, Contoso admins set up a load balancer rule to define how traffic is distributed to the VMs.
The front-end IP address handles incoming traffic.
The back-end IP pool receives the traffic.
They create the rule as follows:
1. In the load balancer settings in the portal, they add a new load balancing rule:
SQLAlwaysOnEndPointListener.
2. They set a front-end listener to receive incoming SQL client traffic on TCP 1433.
3. They specify the backend pool to which traffic will be routed, and the port on which VMs listen for traffic.
4. They enable floating IP (direct server return). This is always required for SQL AlwaysOn.
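Floating IP changes what destination address the backend node sees: with it enabled, traffic arrives addressed to the listener's frontend IP (10.245.40.100) rather than the node's own NIC address. The sketch below is a simplified illustration; the node IP used is hypothetical, and real load balancer NAT is more involved.

```python
# Illustrative model of floating IP (direct server return) addressing.
# node_ip is a hypothetical backend NIC address, for demonstration only.
def rewrite_destination(packet, floating_ip=True,
                        frontend_ip="10.245.40.100", node_ip="10.245.40.4"):
    dest = frontend_ip if floating_ip else node_ip
    return {**packet, "dst": dest}

pkt = {"src": "10.245.32.5", "dst": "10.245.40.100", "port": 1433}
# With floating IP, the SQL listener on the node binds the frontend IP itself.
print(rewrite_destination(pkt)["dst"])  # 10.245.40.100
```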
Need more help?
Get an overview of Azure Load Balancer.
Learn about creating a load balancer.

Step 4: Prepare Azure for the Site Recovery service


Here are the Azure components Contoso needs to deploy Site Recovery:
A VNet in which VMs will be located when they're created during failover.
An Azure storage account to hold replicated data.
A Recovery Services vault in Azure.
Contoso admins set these up as follows:
1. Contoso already created a network/subnet they can use for Site Recovery when they deployed the Azure
infrastructure.
The SmartHotel360 app is a production app, and WEBVM will be migrated to the Azure production
network (VNET-PROD-EUS2) in the primary East US 2 region.
WEBVM will be placed in the ContosoRG resource group, which is used for production resources, and
in the production subnet (PROD-FE-EUS2).
2. Contoso admins create an Azure storage account (contosovmsacc20180528) in the primary region.
They use a general-purpose account, with standard storage, and LRS replication.
The account must be in the same region as the vault.
3. With the network and storage account in place, they now create a Recovery Services vault
(ContosoMigrationVault), and place it in the ContosoFailoverRG resource group, in the primary East
US 2 region.
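As a sketch, the storage account and vault could also be set up with the Azure CLI (`az backup vault create` provisions a Recovery Services vault). The resource group for the storage account isn't named in the article, so ContosoFailoverRG is an assumption here.

```shell
# Hypothetical Azure CLI sketch. The storage account's resource group
# (ContosoFailoverRG) is an assumption; the article only states that the
# account must be in the same region as the vault.
az storage account create \
  --name contosovmsacc20180528 \
  --resource-group ContosoFailoverRG \
  --location eastus2 \
  --sku Standard_LRS

# "az backup vault create" provisions a Recovery Services vault
az backup vault create \
  --name ContosoMigrationVault \
  --resource-group ContosoFailoverRG \
  --location eastus2
```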

Need more help?


Learn about setting up Azure for Site Recovery.

Step 5: Prepare on-premises VMware for Site Recovery


Here's what Contoso admins prepare on-premises:
An account on the vCenter server or vSphere ESXi host, to automate VM discovery.
An account that allows automatic installation of the Mobility service on the VMware VMs that they want to
replicate.
On-premises VM settings, so that Contoso can connect to the replicated Azure VM after failover.
Prepare an account for automatic discovery
Site Recovery needs access to VMware servers to:
Automatically discover VMs. At least a read-only account is required for discovery.
Orchestrate replication, failover, and failback. This requires an account that can run operations such as
creating and removing disks, and turning on VMs.
Contoso admins set up the account as follows:
1. They create a role at the vCenter level.
2. They then assign that role the required permissions.
Prepare an account for Mobility service installation
The Mobility service must be installed on each VM.
Site Recovery can do an automatic push installation of this component when replication is enabled for the
VM.
You need an account that Site Recovery can use to access the VM for the push installation. You specify this
account when you set up replication in the Azure console.
The account can be domain or local, with permissions to install on the VM.
Prepare to connect to Azure VMs after failover
After failover, Contoso wants to be able to connect to Azure VMs. To do this, Contoso admins do the following
before migration:
1. For access over the internet they:
Enable RDP on the on-premises VM before failover
Ensure that TCP and UDP rules are added for the Public profile.
Check that RDP is allowed in Windows Firewall > Allowed Apps for all profiles.
2. For access over site-to-site VPN, they:
Enable RDP on the on-premises machine.
Allow RDP in Windows Firewall > Allowed apps and features, for Domain and Private
networks.
Set the operating system's SAN policy on the on-premises VM to OnlineAll.
In addition, when they run a failover they need to check the following:
There should be no Windows updates pending on the VM when triggering a failover. If there are, users won't
be able to log into the VM until the update completes.
After failover, they can check Boot diagnostics to view a screenshot of the VM. If this doesn't work, they
should verify that the VM is running, and review these troubleshooting tips.
Need more help?
Learn about creating and assigning a role for automatic discovery.
Learn about creating an account for push installation of the Mobility service.

Step 6: Replicate the on-premises VMs to Azure with Site Recovery


Before they can run a migration to Azure, Contoso admins need to set up and enable replication.
Set a replication goal
1. In the vault, under the vault name (ContosoVMVault), they select a replication goal (Getting Started >
Site Recovery > Prepare infrastructure).
2. They specify that their machines are located on-premises, running on VMware, and replicating to Azure.
Confirm deployment planning
To continue, they need to confirm that they have completed deployment planning, by selecting Yes, I have done
it. In this scenario Contoso is only migrating a single VM, and doesn't need deployment planning.
Set up the source environment
Contoso admins need to configure their source environment. To do this, they download an OVF template and
use it to deploy the Site Recovery configuration server as a highly available, on-premises VMware VM. After the
configuration server is up and running, they register it in the vault.
The configuration server runs a number of components:
The configuration server component that coordinates communications between on-premises and Azure and
manages data replication.
The process server that acts as a replication gateway. It receives replication data; optimizes it with caching,
compression, and encryption; and sends it to Azure storage.
The process server also installs Mobility Service on VMs you want to replicate and performs automatic
discovery of on-premises VMware VMs.
Contoso admins perform these steps as follows:
1. In the vault, they download the OVF template from Prepare Infrastructure > Source > Configuration
Server.
2. They import the template into VMware to create and deploy the VM.
3. When they turn on the VM for the first time, it boots up into a Windows Server 2016 installation
experience. They accept the license agreement, and enter an administrator password.
4. After the installation finishes, they sign in to the VM as the administrator. At first sign-in, the Azure Site
Recovery Configuration Tool runs by default.
5. In the tool, they specify a name to use for registering the configuration server in the vault.
6. The tool checks that the VM can connect to Azure. After the connection is established, they sign in to the
Azure subscription. The credentials must have access to the vault in which you want to register the
configuration server.
7. The tool performs some configuration tasks and then reboots.
8. They sign in to the machine again, and the Configuration Server Management Wizard starts
automatically.
9. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
10. They select the subscription, resource group, and vault in which to register the configuration server.

11. They then download and install MySQL Server and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
13. They specify the account that they created for automatic discovery, and the credentials that are used to
automatically install the Mobility Service. For Windows machines, the account needs local administrator
privileges on the VMs.

14. After registration finishes, in the Azure portal, they double check that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers using the specified settings, and discovers VMs.
Set up the target
Now Contoso admins specify target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
Now, Contoso admins can create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate,
they create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.

3. The policy is automatically associated with the configuration server.


Enable replication
Now Contoso admins can start replicating WEBVM.
1. In Replicate application > Source > +Replicate, they select the source settings.
2. They indicate that they want to enable replication for VMs, and select the vCenter server and the configuration server.

3. Now, they specify the target settings, including the resource group and VNet, and the storage account in
which replicated data will be stored.

4. They select WEBVM for replication, check the replication policy, and enable replication. Site
Recovery installs the Mobility Service on the VM when replication is enabled.

5. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
6. In Essentials in the Azure portal, they can see the structure for the VMs replicating to Azure.

Need more help?


You can read a full walkthrough of all these steps in Set up disaster recovery for on-premises VMware VMs.
Detailed instructions are available to help you set up the source environment, deploy the configuration
server, and configure replication settings.
You can learn more about enabling replication.

Step 7: Install the Database Migration Assistant (DMA)


Contoso admins will migrate the SmartHotel360 database to Azure VM SQLAOG1 using the DMA. They set up
DMA as follows:
1. They download the tool from the Microsoft Download Center to the on-premises SQL Server VM (SQLVM).
2. They run setup (DownloadMigrationAssistant.msi) on the VM.
3. On the Finish page, they select Launch Microsoft Data Migration Assistant before finishing the wizard.

Step 8: Migrate the database with DMA


1. In the DMA they run a new migration - SmartHotel.
2. They select the Target server type as SQL Server on Azure Virtual Machines.

3. In the migration details, they add SQLVM as the source server, and SQLAOG1 as the target. They specify
credentials for each machine.
4. They create a local share for the database and configuration information. It must be accessible with write
access by the SQL Service account on SQLVM and SQLAOG1.

5. Contoso selects the logins that should be migrated, and starts the migration. After it finishes, DMA shows
the migration as successful.
6. They verify that the database is running on SQLAOG1.

DMA connects to the on-premises SQL Server VM across the site-to-site VPN connection between the Contoso
datacenter and Azure, and then migrates the database.

Step 9: Protect the database with AlwaysOn


With the app database running on SQLAOG1, Contoso admins can now protect it using AlwaysOn availability
groups. They configure AlwaysOn using SQL Management Studio, and then assign a listener using Windows
clustering.
Create an AlwaysOn availability group
1. In SQL Management Studio, they right-click on Always on High Availability to start the New
Availability Group Wizard.
2. In Specify Options, they name the availability group SHAOG. In Select Databases, they select the
SmartHotel360 database.
3. In Specify Replicas, they add the two SQL nodes as availability replicas, and configure them to provide
automatic failover with synchronous commit.

4. They configure a listener for the group (SHAOG) and port. The IP address of the internal load balancer is
added as a static IP address (10.245.40.100).

5. In Select Data Synchronization, they enable automatic seeding. With this option, SQL Server
automatically creates the secondary replicas for every database in the group, so Contoso doesn't have to
manually back up and restore them. After validation, the availability group is created.
6. Contoso ran into an issue when creating the group. The admins aren't using Active Directory Windows
Integrated security, and thus needed to grant permissions to the SQL login to create the Windows Failover
Cluster roles.

7. After the group is created, Contoso can see it in SQL Management Studio.
Configure a listener on the cluster
As a last step in setting up the SQL deployment, Contoso admins configure the internal load balancer as the
listener on the cluster, and bring the listener online. They use a script to do this.

Verify the configuration


With everything set up, Contoso now has a functional availability group in Azure that uses the migrated
database. Admins verify this by connecting to the internal load balancer in SQL Management Studio.

Need more help?


Learn about creating an availability group and listener.
Manually set up the cluster to use the load balancer IP address.
Learn more about creating and using SAS.

Step 10: Migrate the VM with Site Recovery


Contoso admins run a quick test failover, and then migrate the VM.
Run a test failover
Running a test failover helps ensure that everything's working as expected before the migration.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut
down the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If they select the latest recovery point, a
recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is
the appropriate size, that it's connected to the right network, and that it's running.
5. After verifying, they clean up the failover, and record and save any observations.
Run a failover
1. After verifying that the test failover worked as expected, Contoso admins create a recovery plan for
migration, and add WEBVM to the plan.
2. They run a failover on the plan. They select the latest recovery point, and specify that Site Recovery
should try to shut down the on-premises VM before triggering the failover.

3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.

4. After verifying the VM in Azure, they complete the migration to finish the migration process, stop
replication for the VM, and stop Site Recovery billing for the VM.

Update the connection string


As the final step in the migration process, Contoso admins update the connection string of the application to
point to the migrated database running on the SHAOG listener. This configuration will be changed on the
WEBVM now running in Azure. This configuration is located in the web.config of the ASP.NET application.
1. They locate the file at C:\inetpub\SmartHotelWeb\web.config, and change the server name to reflect the
FQDN of the AOG: shaog.contoso.com.

2. After updating the file and saving it, they restart IIS on WEBVM. They do this using the IISRESET
/RESTART from a cmd prompt.
3. After IIS has been restarted, the application uses the database running on the SQL Server AlwaysOn availability group.
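The connection-string change above can be sketched as a one-line edit. This is a hypothetical example: the sample connection string and the original server name (SQLVM) are assumptions for illustration, since the article doesn't show the file contents; only the listener FQDN (shaog.contoso.com) comes from the scenario.

```shell
# Hypothetical sketch of the web.config edit. The sample connection
# string below is an assumption; only the listener FQDN
# (shaog.contoso.com) comes from the article.
cat > web.config <<'EOF'
<configuration>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=SQLVM;Database=SmartHotel360;Integrated Security=true" />
  </connectionStrings>
</configuration>
EOF

# Point the app at the AlwaysOn listener; keep a .bak copy of the file.
sed -i.bak 's/Server=SQLVM/Server=shaog.contoso.com/' web.config
grep connectionString web.config
```

On WEBVM the file lives at C:\inetpub\SmartHotelWeb\web.config, and IIS is restarted afterwards with IISRESET /RESTART so the change takes effect.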
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.

Clean up after migration


After migration, the SmartHotel360 app is running on an Azure VM, and the SmartHotel360 database is located
in the Azure SQL cluster.
Now, Contoso needs to complete these cleanup steps:
Remove the on-premises VMs from the vCenter inventory.
Remove the VMs from local backup jobs.
Update internal documentation to show the new locations and IP addresses for VMs.
Review any resources that interact with the decommissioned VMs, and update any relevant settings or
documentation to reflect the new configuration.
Add the two new VMs (SQLAOG1 and SQLAOG2) to production monitoring systems.

Review the deployment


With the migrated resources in Azure, Contoso needs to fully operationalize and secure their new infrastructure.
Security
The Contoso security team reviews the Azure VMs WEBVM, SQLAOG1 and SQLAOG2 to determine any
security issues.
The team reviews the network security groups (NSGs) for the VMs to control access. NSGs are used to
ensure that only traffic allowed to the application can pass.
The team considers securing the data on the disk using Azure Disk Encryption and Azure Key Vault.
The team should evaluate transparent data encryption (TDE), and then enable it on the SmartHotel360
database running on the new SQL AOG. Learn more.
Read more about security practices for VMs.
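As a hedged sketch, TDE could be enabled with sqlcmd against the SHAOG listener. The certificate name and password below are placeholders; the T-SQL follows the standard TDE sequence (master key and certificate in master, then a database encryption key in the user database).

```shell
# Hypothetical sketch: enable TDE on the migrated database via sqlcmd.
# The listener name comes from this scenario; the certificate name and
# password are placeholders.
sqlcmd -S shaog.contoso.com -d master -Q "
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
  CREATE CERTIFICATE SmartHotelTDECert WITH SUBJECT = 'SmartHotel360 TDE';"

sqlcmd -S shaog.contoso.com -d SmartHotel360 -Q "
  CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE SmartHotelTDECert;
  ALTER DATABASE SmartHotel360 SET ENCRYPTION ON;"
```

In an availability group, the certificate also needs to be backed up from the primary and restored on the secondary replica, so that both nodes can open the encrypted database.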

BCDR
For business continuity and disaster recovery (BCDR), Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the WEBVM, SQLAOG1, and SQLAOG2 VMs using the Azure
Backup service. Learn more.
Contoso will also learn how to use Azure Storage to back up SQL Server directly to blob
storage. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
1. Contoso has existing licensing for their WEBVM and will leverage the Azure Hybrid Benefit. Contoso will
convert the existing Azure VMs to take advantage of this pricing.
2. Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.

Conclusion
In this article, Contoso rehosted the SmartHotel360 app in Azure by migrating the app frontend VM to Azure
using the Site Recovery service. Contoso migrated the app database to a SQL Server cluster provisioned in
Azure, and protected it in a SQL Server AlwaysOn availability group.

Next steps
In the next article in the series, we'll show how Contoso rehosts its service desk osTicket app, which runs on
Linux and is deployed with a MySQL database.
Contoso migration: Rehost an on-premises Linux
app to Azure VMs
4/4/2019 • 20 minutes to read • Edit Online

This article shows how Contoso is rehosting an on-premises Linux-based service desk app (osTicket), to Azure
IaaS VMs.
This document is one in a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and a set of
scenarios that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios
grow in complexity. We'll add additional articles over time.

| Article | Details | Status |
| --- | --- | --- |
| Article 1: Overview | Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series. | Available |
| Article 2: Deploy Azure infrastructure | Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series. | Available |
| Article 3: Assess on-premises resources for migration to Azure | Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. | Available |
| Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance | Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service. | Available |
| Article 5: Rehost an app on Azure VMs | Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. | Available |
| Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group | Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. | Available |
| Article 7: Rehost a Linux app on Azure VMs | Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site Recovery. | This article |
| Article 8: Rehost a Linux app on Azure VMs and Azure MySQL | Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. | Available |
| Article 9: Refactor an app on Azure Web Apps and Azure SQL database | Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure SQL Server instance with Database Migration Assistant. | Available |
| Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL | Contoso migrates its Linux osTicket app to an Azure web app in multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. | Available |
| Article 11: Refactor TFS on Azure DevOps Services | Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. | Available |
| Article 12: Rearchitect an app on Azure containers and Azure SQL Database | Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. | Available |
| Article 13: Rebuild an app in Azure | Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. | Available |
| Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available |
In this article, Contoso will migrate the two-tier osTicket app, running on Linux Apache MySQL PHP (LAMP), to
Azure. The app VMs will be migrated using the Azure Site Recovery service. If you'd like to use this open-source
app, you can download it from GitHub.

Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there's pressure on the on-premises systems
and infrastructure.
Limit risk: The service desk app is critical for the Contoso business. Contoso wants to move it to Azure with
zero risk.
Extend: Contoso doesn't want to change the app right now. It simply wants to ensure that the app is stable.

Migration goals
The Contoso cloud team has pinned down goals for this migration, to determine the best migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in the on-
premises VMware environment. The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Contoso doesn't want to change the ops model for this app. It wants to interact with the app in the cloud in
the same way that it does now.
Contoso doesn't want to change app functionality. Only the app location will change.
Having completed a couple of Windows app migrations, Contoso wants to learn how to use a Linux-based
infrastructure in Azure.

Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The osTicket app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Since the app is a production workload, the VMs in Azure will reside in the production resource group
ContosoRG.
The VMs will be migrated to the primary region (East US 2) and placed in the production network (VNET-
PROD-EUS2):
The web VM will reside in the frontend subnet (PROD-FE-EUS2).
The database VM will reside in the database subnet (PROD-DB-EUS2).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.

| Consideration | Details |
| --- | --- |
| Pros | Both the app VMs will be moved to Azure without changes, making the migration simple. Since Contoso is using lift-and-shift for both app VMs, no special configuration or migration tools are needed for the app database. Contoso will retain full control of the app VMs in Azure. The app VMs are running Ubuntu 16.04 LTS, which is an endorsed Linux distribution. Learn more. |
| Cons | The web and data tiers of the app will remain a single point of failure. Contoso will need to continue supporting the app as Azure VMs rather than moving to a managed service such as Azure App Service and Azure Database for MySQL. Contoso is aware that by keeping things simple with a lift-and-shift VM migration, they're not taking full advantage of the features provided by Azure Database for MySQL (built-in high availability, predictable performance, simple scaling, automatic backups, and built-in security). |

Migration process
Contoso will migrate as follows:
1. As a first step, Contoso sets up the Azure and on-premises infrastructure needed to deploy Site Recovery.
2. After preparing Azure and on-premises components, Contoso sets up and enables replication for the VMs.
3. After replication is working, Contoso migrates the VMs by failing them over to Azure.

Azure services
| Service | Description | Cost |
| --- | --- | --- |
| Azure Site Recovery | The service orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers. | During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing. |

Prerequisites
Here's what Contoso needs for this scenario.

| Requirements | Details |
| --- | --- |
| Azure subscription | Contoso created subscriptions in an early article in this series. If you don't have an Azure subscription, create a free account. If you create a free account, you're the administrator of your subscription and can perform all actions. If you use an existing subscription and you're not the administrator, you need to work with the admin to assign you Owner or Contributor permissions. If you need more granular permissions, review this article. |
| Azure infrastructure | Contoso set up their Azure infrastructure as described in Azure infrastructure for migration. Learn more about specific network and storage requirements for Site Recovery. |
| On-premises servers | The on-premises vCenter server should be running version 5.5, 6.0, or 6.5. An ESXi host running version 5.5, 6.0, or 6.5. One or more VMware VMs running on the ESXi host. |
| On-premises VMs | Review Linux machines that are supported for migration with Site Recovery. Verify supported Linux file and storage systems. VMs must meet Azure requirements. |

Scenario steps
Here's how Contoso will complete the migration:
Step 1: Prepare Azure for Site Recovery: Contoso creates an Azure storage account to hold replicated
data, and creates a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: Contoso prepares accounts to be used for VM
discovery and agent installation, and prepares to connect to Azure VMs after failover.
Step 3: Replicate VMs: Contoso sets up the source and target migration environment, creates a replication
policy, and starts replicating VMs to Azure storage.
Step 4: Migrate the VMs with Site Recovery: Contoso runs a test failover to make sure everything's
working, and then runs a full failover to migrate the VMs to Azure.

Step 1: Prepare Azure for the Site Recovery service


Contoso needs a couple of Azure components for Site Recovery:
A new Azure storage account to hold replicated data.
A Recovery Services vault in Azure.
A VNet in which failed over resources are located. Contoso already created the VNet during Azure
infrastructure deployment, so they just need to create a storage account and vault.
1. Contoso admins create an Azure storage account (contosovmsacc20180528) in the East US 2 region.
The storage account must be in the same region as the Recovery Services vault.
They use a general purpose account, with standard storage, and LRS replication.

2. With the network and storage account in place, they create a vault (ContosoMigrationVault), and place it
in the ContosoFailoverRG resource group, in the primary East US 2 region.

Need more help?


Learn about setting up Azure for Site Recovery.

Step 2: Prepare on-premises VMware for Site Recovery


Contoso admins prepare the on-premises VMware infrastructure as follows:
They create an account on the vCenter server or vSphere ESXi host, to automate VM discovery.
They create an account that allows automatic installation of the Mobility service on VMware VMs that you
want to replicate.
They prepare on-premises VMs, so that they can connect to Azure VMs when they're created after migration.
Prepare an account for automatic discovery
Site Recovery needs access to VMware servers to:
Automatically discover VMs. At least a read-only account is required.
Orchestrate replication, failover, and failback. You need an account that can run operations such as creating
and removing disks, and turning on VMs.
Contoso admins set up the account as follows:
1. They create a role at the vCenter level.
2. They assign that role the required permissions.
Prepare an account for Mobility service installation
The Mobility service must be installed on the Linux VMs that will be migrated.
Site Recovery can do an automatic push installation of this component when replication is enabled for VMs.
For automatic push installation, they need to prepare an account that Site Recovery will use to access the
VMs.
Account details are entered during replication setup.
The account can be a domain or local account, with permissions to install on VMs.
Prepare to connect to Azure VMs after failover
After failover to Azure, Contoso wants to be able to connect to the replicated VMs in Azure. To do this, there's a
couple of things that the Contoso admins need to do:
To access Azure VMs over the internet, they enable SSH on the on-premises Linux VM before migration. For
Ubuntu, this can be done with the command: sudo apt-get install -y openssh-server.
After they run the migration (failover), they can check Boot diagnostics to view a screenshot of the VM.
If this doesn't work, they'll need to check that the VM is running, and review these troubleshooting tips.
Need more help?
Learn about creating and assigning a role for automatic discovery.
Learn about creating an account for push installation of the Mobility service.

Step 3: Replicate the on-premises VMs


Before they can migrate the web VM to Azure, Contoso admins set up and enable replication.
Set a protection goal
1. In the vault, under the vault name (ContosoVMVault), they set a replication goal (Getting Started > Site
Recovery > Prepare infrastructure).
2. They specify that their machines are located on-premises, that they're VMware VMs, and that they want to
replicate to Azure.

Confirm deployment planning


To continue, they confirm that they've completed deployment planning, by selecting Yes, I have done it.
Contoso is only migrating a single VM in this scenario and doesn't need deployment planning.
Set up the source environment
Contoso admins now need to configure the source environment. To do this, they download an OVF template and
use it to deploy the Site Recovery configuration server as a highly available, on-premises VMware VM. After the
configuration server is up and running, they register it in the vault.
The configuration server runs a number of components:
The configuration server component that coordinates communications between on-premises and Azure and
manages data replication.
The process server that acts as a replication gateway. It receives replication data; optimizes it with caching,
compression, and encryption; and sends it to Azure storage.
The process server also installs Mobility Service on VMs you want to replicate and performs automatic
discovery of on-premises VMware VMs.
Contoso admins perform these steps as follows:
1. They download the OVF template from Prepare Infrastructure > Source > Configuration Server.

2. They import the template into VMware to create the VM, and deploy the VM.
3. When they turn on the VM for the first time, it boots up into a Windows Server 2016 installation
experience. They accept the license agreement, and enter an administrator password.
4. After the installation finishes, they sign in to the VM as an administrator. At first sign-in, the Azure Site
Recovery Configuration Tool runs by default.
5. In the tool, they specify a name to use for registering the configuration server in the vault.
6. The tool checks that the VM can connect to Azure. After the connection is established, they sign in to the
Azure subscription. The credentials must have access to the vault in which they want to register the
configuration server.
7. The tool performs some configuration tasks and then reboots.
8. They sign in to the machine again, and the Configuration Server Management Wizard starts
automatically.
9. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
10. They select the subscription, resource group, and vault in which to register the configuration server.

11. They then download and install MySQL Server and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
13. They specify the account that they created for automatic discovery, and the credentials that should be
used to automatically install the Mobility Service.

14. After registration finishes, in the Azure portal, they check that the configuration server and VMware
server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers, and discovers VMs.
Set up the target
Now Contoso admins configure the target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
After the source and target are set up, they're ready to create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.

3. The policy is automatically associated with the configuration server.
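Taken at face value, the defaults above lend themselves to a quick sanity check. The figures in the sketch below are the documented defaults, not values queried from Azure, and the measured replication lag is a hypothetical example:

```shell
# Documented defaults from the replication policy above (illustrative only).
RPO_THRESHOLD_MIN=60      # alert if replication lags beyond this
RETENTION_HOURS=24        # recovery window for each recovery point
APP_SNAPSHOT_MIN=60       # app-consistent snapshot frequency

# How many app-consistent snapshots fall inside one retention window.
SNAPSHOTS_PER_WINDOW=$(( RETENTION_HOURS * 60 / APP_SNAPSHOT_MIN ))
echo "App-consistent snapshots per retention window: ${SNAPSHOTS_PER_WINDOW}"

# Hypothetical replication lag observed in the portal, checked against the RPO.
MEASURED_LAG_MIN=45
if [ "${MEASURED_LAG_MIN}" -gt "${RPO_THRESHOLD_MIN}" ]; then
  echo "RPO threshold exceeded: an alert would be raised"
else
  echo "Within RPO threshold: no alert"
fi
```

With the defaults, 24 app-consistent snapshots fit in the 24-hour window, and a 45-minute lag stays under the 60-minute alert threshold.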

Need more help?


You can read a full walkthrough of all these steps in Set up disaster recovery for on-premises VMware VMs.
Detailed instructions are available to help you set up the source environment, deploy the configuration
server, and configure replication settings.
Learn more about the Azure Guest agent for Linux.
Enable replication for OSTICKETWEB
Now Contoso admins can start replicating the OSTICKETWEB VM.
1. In Replicate application > Source > +Replicate, they select the source settings.
2. They indicate that they want to enable virtual machines, and select the source settings, including the vCenter
server and the configuration server.

3. They specify the target settings, including the resource group and VNet in which the Azure VM will be
located after failover, and the storage account in which replicated data will be stored.
4. They select the OSTICKETWEB VM for replication.
At this stage they select OSTICKETWEB only, because the VNet and subnet must both be
selected, and the VMs aren't in the same subnet.
Site Recovery automatically installs the Mobility service when replication is enabled for the VM.
5. In the VM properties, they select the account that's used by the process server to automatically install
Mobility Service on the machine.

6. In Replication settings > Configure replication settings, they check that the correct replication policy
is applied, and select Enable Replication.
7. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
Enable replication for OSTICKETMYSQL
Now Contoso admins can start replicating OSTICKETMYSQL.
1. In Replicate application > Source > +Replicate they select the source and target settings.
2. They select the OSTICKETMYSQL VM for replication, and select the account to use for Mobility service
installation.

3. They apply the same replication policy that was used for OSTICKETWEB, and enable replication.
Need more help?
You can read a full walkthrough of all these steps in Enable replication.
Step 4: Migrate the VMs
Contoso admins run a quick test failover, and then migrate the VMs.
Run a test failover
Running a test failover helps ensure that everything's working as expected before the migration.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut down
the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If the latest recovery point is selected,
a recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is the
appropriate size, that it's connected to the right network, and that it's running.
5. After verifying, they clean up the failover, and record and save any observations.
Create and customize a recovery plan
After verifying that the test failover worked as expected, Contoso admins create a recovery plan for migration.
A recovery plan specifies the order in which failover occurs, and how Azure VMs will be brought up in Azure.
Since they want to migrate a two-tier app, they'll customize the recovery plan so that the data VM
(OSTICKETMYSQL) starts before the frontend (OSTICKETWEB).
1. In Recovery Plans (Site Recovery) > +Recovery Plan, they create a plan and add the VMs to it.

2. After creating the plan, they select it for customization (Recovery Plans > OsTicketMigrationPlan >
Customize).
3. They remove OSTICKETWEB from Group 1: Start. This ensures that the first start action affects
OSTICKETMYSQL only.
4. In +Group > Add protected items, they add OSTICKETWEB to Group 2: Start. The two VMs need to be in
two different groups.
Migrate the VMs
Contoso admins are now ready to run a failover on the recovery plan, to migrate the VMs.
1. They select the plan > Failover.
2. They select to fail over to the latest recovery point, and specify that Site Recovery should try to shut down
the on-premises VM before triggering the failover. They can follow the failover progress on the Jobs
page.

3. During the failover, vCenter Server issues commands to stop the two VMs running on the ESXi host.
4. After the failover, they verify that the Azure VM appears as expected in the Azure portal.

5. After verifying the VM in Azure, they complete the migration for each VM. This stops replication for the VM,
and stops Site Recovery billing for it.
Connect the VM to the database
As the final step in the migration process, Contoso admins update the connection string of the application to
point to the app database running on the OSTICKETMYSQL VM.
1. They make an SSH connection to the OSTICKETWEB VM using PuTTY or another SSH client. The VM is
private so they connect using the private IP address.
2. They need to make sure that the OSTICKETWEB VM can communicate with the OSTICKETMYSQL
VM. Currently the configuration is hardcoded with the on-premises IP address 172.16.0.43.
(Screenshots show the configuration before and after the update.)

3. They restart the service with systemctl restart apache2.

4. Finally, they update the DNS records for OSTICKETWEB and OSTICKETMYSQL, on one of the
Contoso domain controllers.
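The configuration edit in step 2 can be scripted. The sketch below assumes osTicket's default layout, where the database host is set by a DBHOST define in include/ost-config.php; the file path, the define name, and the replacement host are assumptions to verify on the actual VM. It's demonstrated here against a scratch copy of the relevant line:

```shell
# Swap the hardcoded on-premises IP for the new database host in osTicket's
# config. On a real VM the file typically lives at <web root>/include/ost-config.php
# (an assumption to verify); here we operate on a scratch copy of the line.
update_db_host () {
  config="$1"; old="$2"; new="$3"
  sed -i "s/${old}/${new}/g" "${config}"
}

cfg="$(mktemp)"
echo "define('DBHOST', '172.16.0.43');" > "${cfg}"

# Point the app at the OSTICKETMYSQL VM's new address (name or private IP).
update_db_host "${cfg}" "172.16.0.43" "OSTICKETMYSQL"
cat "${cfg}"
```

On the real VM the same sed command runs under sudo against the live config file, followed by the service restart in step 3.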
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.

Clean up after migration


With migration complete, the osTicket app tiers are now running on Azure VMs.
Now, Contoso needs to clean up as follows:
Remove the on-premises VMs from the vCenter inventory.
Remove the on-premises VMs from local backup jobs.
Update their internal documentation to show the new location, and IP addresses for OSTICKETWEB and
OSTICKETMYSQL.
Review any resources that interact with the VMs, and update any relevant settings or documentation to
reflect the new configuration.
Contoso used the Azure Migrate service with dependency mapping to assess the VMs for migration. Admins
should remove the Microsoft Monitoring Agent and the Dependency Agent they installed for this purpose
from the VMs.

Review the deployment


With the app now running, Contoso needs to fully operationalize and secure their new infrastructure.
Security
The Contoso security team reviews the OSTICKETWEB and OSTICKETMYSQL VMs to determine any security
issues.
The team reviews the Network Security Groups (NSGs) for the VMs to control access. NSGs are used to
ensure that only traffic allowed to the application can pass.
The team also considers securing the data on the VM disks using disk encryption and Azure Key Vault.
Read more about security practices for VMs.
BCDR
For business continuity and disaster recovery, Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the VMs using the Azure Backup service. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
After deploying resources, Contoso assigns Azure tags as defined during the Azure infrastructure
deployment.
Contoso has no licensing issues with the Ubuntu servers.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.

Next steps
In this article we showed how Contoso migrated an on-premises service desk app tiered on two Linux VMs to
Azure IaaS VMs, using Azure Site Recovery.
In the next article in the series, we'll show you how Contoso migrates the same service desk app to Azure. This
time Contoso uses Site Recovery to migrate the frontend VM for the app, and migrates the app database using
backup and restore to Azure Database for MySQL, using the MySQL workbench tool. Get started.
Contoso migration: Rehost an on-premises Linux
app to Azure VMs and Azure MySQL
3/15/2019 • 20 minutes to read • Edit Online

This article shows how Contoso rehosts its on-premises two-tier Linux service desk app (osTicket), by migrating
it to Azure and Azure MySQL.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.

| ARTICLE | DETAILS | STATUS |
| --- | --- | --- |
| Article 1: Overview | Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series. | Available |
| Article 2: Deploy Azure infrastructure | Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series. | Available |
| Article 3: Assess on-premises resources for migration to Azure | Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. | Available |
| Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance | Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service. | Available |
| Article 5: Rehost an app on Azure VMs | Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. | Available |
| Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group | Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. | Available |
| Article 7: Rehost a Linux app on Azure VMs | Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site Recovery. | Available |
| Article 8: Rehost a Linux app on Azure VMs and Azure MySQL | Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. | This article |
| Article 9: Refactor an app on Azure Web Apps and Azure SQL database | Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure SQL Server instance with Database Migration Assistant. | Available |
| Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL | Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. | Available |
| Article 11: Refactor TFS on Azure DevOps Services | Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. | Available |
| Article 12: Rearchitect an app on Azure containers and Azure SQL Database | Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. | Available |
| Article 13: Rebuild an app in Azure | Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. | Available |
| Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available |
In this article, Contoso migrates a two-tier Linux Apache MySQL PHP (LAMP) service desk app (osTicket) to
Azure. If you'd like to use this open-source app, you can download it from GitHub.

Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve:
Address business growth: Contoso is growing, and as a result there's pressure on the on-premises systems
and infrastructure.
Limit risk: The service desk app is critical for the business. Contoso wants to move it to Azure with zero risk.
Extend: Contoso doesn't want to change the app right now. It simply wants to keep the app stable.

Migration goals
The Contoso cloud team has pinned down goals for this migration, in order to determine the best migration
method:
After migration, the app in Azure should have the same performance capabilities as it does today in their on-
premises VMware environment. The app will remain as critical in the cloud as it is on-premises.
Contoso doesn't want to invest in this app. It's important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Having completed a couple of Windows app migrations, Contoso wants to learn how to use a Linux-based
infrastructure in Azure.
Contoso wants to minimize database admin tasks after the application is moved to the cloud.

Proposed architecture
In this scenario:
The app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The web tier app on OSTICKETWEB will be migrated to an Azure IaaS VM.
The app database will be migrated to the Azure Database for MySQL PaaS service.
Since Contoso is migrating a production workload, the resources will reside in the production resource
group ContosoRG.
The resources will be replicated to the primary region (East US 2), and placed in the production network
(VNET-PROD-EUS2):
The web VM will reside in the frontend subnet (PROD-FE-EUS2).
The database instance will reside in the database subnet (PROD-DB-EUS2).
The app database will be migrated to Azure MySQL using MySQL tools.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Migration process
Contoso will complete the migration process as follows:
To migrate the web VM:
1. As a first step, Contoso sets up the Azure and on-premises infrastructure needed to deploy Site Recovery.
2. After preparing the Azure and on-premises components, Contoso sets up and enables replication for the web
VM.
3. After replication is up-and-running, Contoso migrates the VM by failing it over to Azure.
To migrate the database:
1. Contoso provisions a MySQL instance in Azure.
2. Contoso sets up MySQL workbench, and backs up the database locally.
3. Contoso then restores the database from the local backup to Azure.

Azure services
| SERVICE | DESCRIPTION | COST |
| --- | --- | --- |
| Azure Site Recovery | The service orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers. | During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing. |
| Azure Database for MySQL | The database is based on the open-source MySQL Server engine. It provides a fully managed, enterprise-ready community MySQL database, as a service for app development and deployment. | |

Prerequisites
Here's what Contoso needs for this scenario.

| REQUIREMENTS | DETAILS |
| --- | --- |
| Azure subscription | Contoso created subscriptions during an earlier article. If you don't have an Azure subscription, create a free account. If you create a free account, you're the administrator of your subscription and can perform all actions. If you use an existing subscription and you're not the administrator, you need to work with the admin to assign you Owner or Contributor permissions. If you need more granular permissions, review this article. |
| Azure infrastructure | Contoso set up the Azure infrastructure as described in Azure infrastructure for migration. Learn more about specific network and storage requirements for Site Recovery. |
| On-premises servers | The on-premises vCenter server should be running version 5.5, 6.0, or 6.5. An ESXi host running version 5.5, 6.0, or 6.5. One or more VMware VMs running on the ESXi host. |
| On-premises VMs | Review Linux VM requirements that are supported for migration with Site Recovery. Verify supported Linux file and storage systems. VMs must meet Azure requirements. |

Scenario steps
Here's how Contoso admins will complete the migration:
Step 1: Prepare Azure for Site Recovery: They create an Azure storage account to hold replicated data,
and create a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: They prepare accounts for VM discovery and
agent installation, and prepare to connect to Azure VMs after failover.
Step 3: Provision the database: In Azure, they provision an instance of Azure MySQL database.
Step 4: Replicate VMs: They configure the Site Recovery source and target environment, set up a
replication policy, and start replicating VMs to Azure storage.
Step 5: Migrate the database: They set up migration with MySQL tools.
Step 6: Migrate the VMs with Site Recovery: Lastly, they run a test failover to make sure everything's
working, and then run a full failover to migrate the VMs to Azure.

Step 1: Prepare Azure for the Site Recovery service


Contoso needs a couple of Azure components for Site Recovery:
A VNet in which failed over resources are located. Contoso already created the VNet during Azure
infrastructure deployment.
A new Azure storage account to hold replicated data.
A Recovery Services vault in Azure.
The Contoso admins create a storage account and vault as follows:
1. They create a storage account (contosovmsacc20180528) in the East US 2 region.
The storage account must be in the same region as the Recovery Services vault.
They use a general purpose account, with standard storage, and LRS replication.

2. With the network and storage account in place, they create a vault (ContosoMigrationVault), and place it
in the ContosoFailoverRG resource group, in the primary East US 2 region.
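The storage account name chosen in step 1 has to satisfy Azure's naming rule: globally unique, 3-24 characters, lowercase letters and digits only. A quick local check of a candidate name (the rule is Azure's; the helper function itself is just an illustration):

```shell
# Check a candidate storage account name against Azure's naming rule:
# 3-24 characters, lowercase letters and digits only.
valid_storage_name () {
  case "$1" in
    *[!a-z0-9]*) return 1 ;;   # contains an illegal character
  esac
  len=${#1}
  [ "${len}" -ge 3 ] && [ "${len}" -le 24 ]
}

if valid_storage_name "contosovmsacc20180528"; then
  echo "name ok"
else
  echo "name invalid"
fi
```

Global uniqueness still has to be checked against Azure itself when the account is created.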
Need more help?
Learn about setting up Azure for Site Recovery.

Step 2: Prepare on-premises VMware for Site Recovery


Contoso admins prepare the on-premises VMware infrastructure as follows:
They create an account on the vCenter server, to automate VM discovery.
They create an account that allows automatic installation of the Mobility service on VMware VMs that will be
replicated.
They prepare on-premises VMs, so that they can connect to Azure VMs when they're created after the
migration.
Prepare an account for automatic discovery
Site Recovery needs access to VMware servers to:
Automatically discover VMs. At least a read-only account is required.
Orchestrate replication, failover, and failback. You need an account that can run operations such as creating
and removing disks, and turning on VMs.
Contoso admins set up the account as follows:
1. They create a role at the vCenter level.
2. They then assign that role the required permissions.
Prepare an account for Mobility service installation
The Mobility service must be installed on each VM that Contoso wants to migrate.
Site Recovery can do an automatic push installation of this component when you enable replication for the
VMs.
For automatic installation. Site Recovery needs an account with permissions to access the VM.
Account details are input during replication setup.
The account can be domain or local account, as long as it has installation permissions.
Prepare to connect to Azure VMs after failover
After failover to Azure, Contoso wants to be able to connect to the Azure VMs. To do this, Contoso admins need
to do the following:
To access the VMs over the internet, they enable SSH on the on-premises Linux VM before the migration. For Ubuntu
this can be completed using the following command: sudo apt-get install ssh -y.
After the failover, they should check Boot diagnostics to view a screenshot of the VM.
If this doesn't work, they need to verify that the VM is running, and review these troubleshooting tips.
Need more help?
Learn about creating and assigning a role for automatic discovery.
Learn about creating an account for push installation of the Mobility service.

Step 3: Provision Azure Database for MySQL


Contoso admins provision a MySQL database instance in the primary East US 2 region.
1. In the Azure portal, they create an Azure Database for MySQL resource.

2. They add the name contosoosticket for the Azure database. They add the database to the production
resource group ContosoRG, and specify credentials for it.
3. The on-premises MySQL database is version 5.7, so they select this version for compatibility. They use
the default sizes, which match their database requirements.
4. For Backup Redundancy Options, they select to use Geo-Redundant. This option allows them to
restore the database in their secondary Central US region if an outage occurs. They can only configure
this option when they provision the database.

5. In the VNET-PROD -EUS2 network > Service endpoints, they add a service endpoint (a database
subnet) for the SQL service.
6. After adding the subnet, they create a virtual network rule that allows access from the database subnet in
the production network.

Step 4: Replicate the on-premises VMs


Before they can migrate the web VM to Azure, Contoso admins set up and enable replication.
Set a protection goal
1. In the vault, under the vault name (ContosoVMVault) they set a replication goal (Getting Started > Site
Recovery > Prepare infrastructure).
2. They specify that their machines are located on-premises, that they're VMware VMs, and that they want
to replicate to Azure.
Confirm deployment planning
To continue, they confirm that they've completed deployment planning, by selecting Yes, I have done it.
Contoso is only migrating a single VM in this scenario and doesn't need deployment planning.
Set up the source environment
Contoso admins now configure the source environment. To do this, they use an OVF template to deploy a Site
Recovery configuration server as a highly available, on-premises VMware VM. After the configuration server is
up and running, they register it in the vault.
The configuration server runs a number of components:
The configuration server component that coordinates communications between on-premises and Azure and
manages data replication.
The process server that acts as a replication gateway. It receives replication data; optimizes it with caching,
compression, and encryption; and sends it to Azure storage.
The process server also installs Mobility Service on VMs you want to replicate and performs automatic
discovery of on-premises VMware VMs.
Contoso admins do this as follows:
1. They download the OVF template from Prepare Infrastructure > Source > Configuration Server.
2. They import the template into VMware to create the VM, and deploy the VM.
3. When they turn on the VM for the first time, it boots up into a Windows Server 2016 installation
experience. They accept the license agreement, and enter an administrator password.
4. After the installation finishes, they sign in to the VM as the administrator. At first sign-in, the Azure Site
Recovery Configuration Tool runs by default.
5. In the tool, they specify a name to use for registering the configuration server in the vault.
6. The tool checks that the VM can connect to Azure.
7. After the connection is established, they sign in to the Azure subscription. The credentials must have
access to the vault in which they'll register the configuration server.
8. The tool performs some configuration tasks and then reboots.
9. They sign in to the machine again, and the Configuration Server Management Wizard starts
automatically.
10. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
11. They select the subscription, resource group, and vault in which to register the configuration server.

12. Now, they download and install MySQL Server and VMware PowerCLI.
13. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
14. They input the account that they created for automatic discovery, and the credentials that Site Recovery
will use to automatically install the Mobility Service.

15. After registration finishes, in the Azure portal, they check that the configuration server and VMware
server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
16. With everything in place, Site Recovery connects to VMware servers, and discovers VMs.
Set up the target
Now Contoso admins input target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
With the source and target set up, Contoso admins are ready to create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate,
they create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.

3. The policy is automatically associated with the configuration server.

Need more help?


You can read a full walkthrough of all these steps in Set up disaster recovery for on-premises VMware VMs.
Detailed instructions are available to help you set up the source environment, deploy the configuration
server, and configure replication settings.
Learn more about the Azure Guest agent for Linux.
Enable replication for the Web VM
Now Contoso admins can start replicating the OSTICKETWEB VM.
1. In Replicate application > Source > +Replicate they select the source settings.
2. They indicate that they want to enable virtual machines, and select the source settings, including the
vCenter server, and the configuration server.

3. Now they specify the target settings. These include the resource group and network in which the Azure
VM will be located after failover, and the storage account in which replicated data will be stored.
4. They select OSTICKETWEB for replication.

5. In the VM properties, they select the account that should be used to automatically install the Mobility
Service on the VM.
6. In Replication settings > Configure replication settings, they check that the correct replication policy
is applied, and select Enable Replication. The Mobility service will be automatically installed.
7. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
Need more help?
You can read a full walkthrough of all these steps in Enable replication.

Step 5: Migrate the database


Contoso admins migrate the database using backup and restore, with MySQL tools. They install MySQL
Workbench, back up the database from OSTICKETMYSQL, and then restore it to Azure Database for MySQL
Server.
Install MySQL Workbench
1. They check the prerequisites and download MySQL Workbench.
2. They install MySQL Workbench for Windows in accordance with the installation instructions.
3. In MySQL Workbench, they create a MySQL connection to OSTICKETMYSQL.

4. They export the database as osticket, to a local self-contained file.


5. After the database has been backed up locally, they create a connection to the Azure Database for
MySQL instance.

6. Now, they can import (restore) the database in the Azure MySQL instance, from the self-contained file. A
new schema (osticket) is created for the instance.
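The same backup and restore can be sketched from the command line instead of MySQL Workbench. This is an illustrative sketch only: the host names, user names, and dump file name are placeholders, not values from the Contoso environment.

```shell
# Back up the on-premises osticket database to a self-contained dump file.
# --databases includes the CREATE DATABASE statement, so the restore
# recreates the osticket schema on the target.
mysqldump -h OSTICKETMYSQL -u contosoadmin -p \
  --databases osticket > osticket-backup.sql

# Restore the dump into Azure Database for MySQL. On the single-server
# offering, the user name takes the form user@servername, and SSL
# should be required for the connection.
mysql -h contosoosticket.mysql.database.azure.com \
  -u contosoadmin@contosoosticket -p \
  --ssl-mode=REQUIRED < osticket-backup.sql
```

The dump file produced this way is the same kind of self-contained export that Workbench creates in the steps above.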
Step 6: Migrate the VMs with Site Recovery
Finally, Contoso admins run a quick test failover, and then migrate the VM.
Run a test failover
Running a test failover helps verify that everything's working as expected, before the migration.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut
down the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If they select the latest recovery
point, a recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is
the appropriate size, that it's connected to the right network, and that it's running.
5. After verifying, they clean up the failover, and record and save any observations.
Migrate the VM
To migrate the VM, Contoso admins create a recovery plan that includes the VM, and fail the plan over to
Azure.
1. They create a plan, and add OSTICKETWEB to it.
2. They run a failover on the plan. They select the latest recovery point, and specify that Site Recovery
should try to shut down the on-premises VM before triggering the failover. They can follow the failover
progress on the Jobs page.

3. During the failover, vCenter Server issues commands to stop the two VMs running on the ESXi host.
4. After the failover, they verify that the Azure VM appears as expected in the Azure portal.

5. After checking the VM, they complete the migration. This stops replication for the VM, and stops Site
Recovery billing for the VM.
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.
Connect the VM to the database
As the final step in the migration process, Contoso admins update the connection string of the app to point to
the Azure Database for MySQL.
1. They make an SSH connection to the OSTICKETWEB VM using Putty or another SSH client. The VM is
private so they connect using the private IP address.
2. They update settings so that the OSTICKETWEB VM can communicate with the OSTICKETMYSQL
database. Currently the configuration is hardcoded with the on-premises IP address 172.16.0.43.
Before the update
After the update

3. They restart the service with systemctl restart apache2.

4. Finally, they update the DNS records for OSTICKETWEB, on one of the Contoso domain controllers.
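The configuration update and service restart (steps 2–3) can be sketched as follows. The osTicket configuration file path and the Azure Database for MySQL host name are assumptions for this example — verify both on the VM before running anything.

```shell
# Point osTicket at the Azure Database for MySQL instance by replacing
# the hard-coded on-premises IP address in the config file.
# The path below is a common osTicket layout, not confirmed for this VM.
CONFIG=/var/www/osticket/include/ost-config.php
sudo sed -i 's/172\.16\.0\.43/contosoosticket.mysql.database.azure.com/g' "$CONFIG"

# Restart Apache so the new connection settings take effect.
sudo systemctl restart apache2
```

Remember that the database user name and SSL settings may also need updating in the same file, since Azure Database for MySQL expects the user@servername format.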

Clean up after migration


With migration complete, the osTicket app tiers are running on Azure VMs.
Now, Contoso needs to do the following:
Remove the VMware VMs from the vCenter inventory.
Remove the on-premises VMs from local backup jobs.
Update internal documentation to show the new locations and IP addresses.
Review any resources that interact with the on-premises VMs, and update any relevant settings or
documentation to reflect the new configuration.
Contoso used the Azure Migrate service with dependency mapping to assess the OSTICKETWEB VM for
migration. They should now remove the agents (Microsoft Monitoring Agent/Dependency Agent) they
installed for this purpose, from the VM.

Review the deployment


With the app now running, Contoso needs to fully operationalize and secure their new infrastructure.
Security
The Contoso security team reviews the VM and database to determine any security issues.
They review the network security groups (NSGs) for the VM, to control access. NSGs are used to ensure
that only traffic allowed to the application can pass.
They consider securing the data on the VM disks using Azure Disk Encryption and Azure Key Vault.
Communication between the VM and database instance isn't configured for SSL. They'll need to configure
SSL to ensure that database traffic is encrypted in transit and can't be intercepted.
Read more about security practices for VMs.
BCDR
For business continuity and disaster recovery, Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the app VM using the Azure Backup service. Learn more.
They don't need to configure backup for the database. Azure Database for MySQL automatically creates and
stores server backups. They chose geo-redundant backup storage for the database, so it's resilient and
production-ready.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
After deploying resources, Contoso assigns Azure tags, in accordance with decisions they made during the
Azure infrastructure deployment.
There are no licensing issues for the Contoso Ubuntu servers.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn
more about Azure Cost Management.

Next steps
In this article, we showed the final rehost scenario. Contoso migrated the frontend VM of the on-premises
Linux osTicket app to an Azure VM, and migrated the app database to an Azure Database for MySQL instance.
In the next set of tutorials in the migration series, we're going to show you how Contoso performed a more
complex set of migrations, involving app refactoring, rather than simple lift-and-shift migrations.
Contoso migration: Refactor an on-premises app to
an Azure Web App and Azure SQL database
3/15/2019 • 17 minutes to read • Edit Online

This article demonstrates how Contoso refactors their SmartHotel360 app in Azure. They migrate the app
frontend VM to an Azure Web App, and the app database to an Azure SQL database.
This document is one in a series of articles that show how the fictitious company Contoso migrates their on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. We'll add additional articles over time.

Article 1: Overview. Provides an overview of Contoso's migration strategy, the article series, and the sample apps we use. (Available)
Article 2: Deploy an Azure infrastructure. Describes how Contoso prepares its on-premises and Azure infrastructure for migration. The same infrastructure is used for all migration articles. (Available)
Article 3: Assess on-premises resources. Shows how Contoso runs an assessment of an on-premises two-tier SmartHotel app running on VMware. Contoso assesses app VMs with the Azure Migrate service, and the app SQL Server database with the Database Migration Assistant. (Available)
Article 4: Rehost an app to Azure VMs and a SQL Managed Instance. Demonstrates how Contoso runs a lift-and-shift migration to Azure for the SmartHotel app. Contoso migrates the app frontend VM using Azure Site Recovery, and the app database to a SQL Managed Instance, using the Azure Database Migration Service. (Available)
Article 5: Rehost an app to Azure VMs. Shows how Contoso migrates the SmartHotel app VMs using Site Recovery only. (Available)
Article 6: Rehost an app to Azure VMs and SQL Server Always On Availability Group. Shows how Contoso migrates the SmartHotel app. Contoso uses Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster protected by an AlwaysOn availability group. (Available)
Article 7: Rehost a Linux app to Azure VMs. Shows how Contoso does a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Site Recovery. (Available)
Article 8: Rehost a Linux app to Azure VMs and Azure MySQL Server. Demonstrates how Contoso migrates the Linux osTicket app to Azure VMs using Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. (Available)
Article 9: Refactor an app to an Azure Web App and Azure SQL database. Demonstrates how Contoso migrates the SmartHotel app to an Azure Web App, and migrates the app database to an Azure SQL Server instance. (This article)
Article 10: Refactor a Linux app to Azure Web Apps and Azure MySQL. Shows how Contoso migrates the Linux osTicket app to Azure Web Apps in multiple sites, integrated with GitHub for continuous delivery. They migrate the app database to an Azure MySQL instance. (Available)
Article 11: Refactor TFS on Azure DevOps Services. Shows how Contoso migrates their on-premises Team Foundation Server (TFS) deployment to Azure DevOps Services in Azure. (Available)
Article 12: Rearchitect an app on Azure containers and Azure SQL Database. Shows how Contoso migrates and rearchitects their SmartHotel app to Azure. They rearchitect the app web tier as a Windows container, and the app database in an Azure SQL Database. (Available)
Article 13: Rebuild an app in Azure. Shows how Contoso rebuilds their SmartHotel app using a range of Azure capabilities and services, including App Service, Azure Kubernetes Service, Azure Functions, Cognitive Services, and Cosmos DB. (Available)
Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. (Available)

In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.

Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and there is pressure on on-premises systems and
infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster on
customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster to changes in the marketplace, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.
Costs: Contoso wants to minimize licensing costs.

Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method.

App:
The app in Azure will remain as critical as it is today.
It should have the same performance capabilities as it currently does in VMware.
The team doesn't want to invest in the app. For now, admins will simply move the app safely to the cloud.
The team wants to stop supporting Windows Server 2008 R2, on which the app currently runs.
The team also wants to move away from SQL Server 2008 R2 to a modern PaaS database platform, which will minimize the need for management.
Contoso wants to leverage its investment in SQL Server licensing and Software Assurance where possible.
In addition, Contoso wants to mitigate the single point of failure on the web tier.
Limitations:
The app consists of an ASP.NET app and a WCF service running on the same VM. They want to split this across two web apps using the Azure App Service.
Azure:
Contoso wants to move the app to Azure, but doesn't want to run it on VMs. Contoso wants to leverage Azure PaaS services for both the web and data tiers.
DevOps:
Contoso wants to move to a DevOps model, using Azure DevOps for their builds and release pipelines.

Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that will be used for migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM ).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed solution
For the database tier of the app, Contoso compared Azure SQL Database with SQL Server using this article.
Contoso decided to go with Azure SQL Database for a few reasons:
Azure SQL Database is a relational-database managed service. It delivers predictable performance at
multiple service levels, with near-zero administration. Advantages include dynamic scalability with no
downtime, built-in intelligent optimization, and global scalability and availability.
Contoso can leverage the lightweight Data Migration Assistant (DMA) to assess and migrate the on-
premises database to Azure SQL.
With Software Assurance, Contoso can exchange existing licenses for discounted rates on a SQL
Database, using the Azure Hybrid Benefit for SQL Server. This could provide savings of up to 30%.
SQL Database provides a number of security features, including Always Encrypted, dynamic data
masking, row-level security, and threat detection.
For the app web tier, Contoso has decided to use Azure App Service. This PaaS service enables them to deploy
the app with just a few configuration changes. Contoso will use Visual Studio to make the changes, and deploy
two web apps: one for the website, and one for the WCF service.
To meet requirements for a DevOps pipeline, Contoso has selected Azure DevOps for source code
management (SCM) with Git repos. Automated builds and releases will be used to build the code, and deploy
it to the Azure Web Apps.
Solution review
Contoso evaluates their proposed design by putting together a pros and cons list.

Pros:
The SmartHotel360 app code won't need to be altered for migration to Azure.
Contoso can leverage their investment in Software Assurance using the Azure Hybrid Benefit for both SQL Server and Windows Server.
After the migration, Windows Server 2008 R2 won't need to be supported. Learn more.
Contoso can configure the web tier of the app with multiple instances, so that it's no longer a single point of failure.
The database will no longer be dependent on the aging SQL Server 2008 R2.
SQL Database supports the technical requirements. Contoso assessed the on-premises database using the Database Migration Assistant and found that it's compatible.
SQL Database has built-in fault tolerance that Contoso doesn't need to set up. This ensures that the data tier is no longer a single point of failure.
Cons:
Azure App Service only supports one app deployment for each web app. This means that two web apps must be provisioned (one for the website and one for the WCF service).
If Contoso uses the Data Migration Assistant instead of the Database Migration Service to migrate their database, it won't have the infrastructure ready for migrating databases at scale.
Contoso will need to build another region to ensure failover if the primary region is unavailable.

Proposed architecture

Migration process
1. Contoso provisions an Azure SQL instance, and migrates the SmartHotel360 database to it.
2. Contoso provisions and configures Web Apps, and deploys the SmartHotel360 app to them.

Azure services
Database Migration Assistant (DMA). Contoso will use DMA to assess and detect compatibility issues that might impact their database functionality in Azure. DMA assesses feature parity between SQL sources and targets, and recommends performance and reliability improvements. Cost: a downloadable tool, free of charge.
Azure SQL Database. An intelligent, fully managed relational cloud database service. Cost: based on features, throughput, and size. Learn more.
Azure App Service - Web Apps. Create powerful cloud apps using a fully managed platform. Cost: based on size, location, and usage duration. Learn more.
Azure DevOps. Provides a continuous integration and continuous deployment (CI/CD) pipeline for app development. The pipeline starts with a Git repository for managing app code, a build system for producing packages and other build artifacts, and a Release Management system to deploy changes in dev, test, and production environments.

Prerequisites
Here's what Contoso needs to run this scenario:

Azure subscription. Contoso created subscriptions in an earlier article. If you don't have an Azure subscription, create a free account. If you create a free account, you're the administrator of your subscription and can perform all actions. If you use an existing subscription and you're not the administrator, you need to work with the admin to assign you Owner or Contributor permissions.
Azure infrastructure. Learn how Contoso set up an Azure infrastructure.

Scenario steps
Here's how Contoso will run the migration:
Step 1: Provision a SQL Database instance in Azure: Contoso provisions a SQL instance in Azure. After
the app website is migrated to Azure, the WCF service web app will point to this instance.
Step 2: Migrate the database with DMA: Contoso migrates the app database with the Database Migration
Assistant.
Step 3: Provision Web Apps: Contoso provisions the two web apps.
Step 4: Set up Azure DevOps: Contoso creates a new Azure DevOps project, and imports the Git repo.
Step 5: Configure connection strings: Contoso configures connection strings so that the web tier web app,
the WCF service web app, and the SQL instance can communicate.
Step 6: Set up build and release pipelines: As a final step, Contoso sets up build and release pipelines to
create the app, and deploys them to two separate Azure Web Apps.

Step 1: Provision an Azure SQL Database


1. Contoso admins select to create a SQL Database in Azure.

2. They specify a database name to match the database running on the on-premises VM
(SmartHotel.Registration). They place the database in the ContosoRG resource group. This is the
resource group they use for production resources in Azure.
3. They set up a new SQL Server instance (sql-smarthotel-eus2) in the primary region.

4. They set the pricing tier to match their server and database needs. And they select to save money with
Azure Hybrid Benefit because they already have a SQL Server license.
5. For sizing, they use vCore-based purchasing, and set the limits for their expected requirements.

6. Then they create the database instance.


7. After the instance is created, they open the database, and note the details they need when they use the
Database Migration Assistant for migration.
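The portal-based provisioning above could also be scripted with the Azure CLI. This is a sketch under assumptions: the admin credentials and the vCore service objective are illustrative placeholders, while the server, database, and resource group names follow the article.

```shell
# Create the logical SQL server in the primary region (East US 2).
# Admin user and password are placeholders; use your own secure values.
az sql server create \
  --name sql-smarthotel-eus2 \
  --resource-group ContosoRG \
  --location eastus2 \
  --admin-user contosoadmin \
  --admin-password "<strong-password>"

# Create the database with vCore-based purchasing.
# GP_Gen5_2 (General Purpose, 2 vCores) is an illustrative sizing choice.
az sql db create \
  --resource-group ContosoRG \
  --server sql-smarthotel-eus2 \
  --name SmartHotel-Registration \
  --service-objective GP_Gen5_2
```

Note the database is named SmartHotel-Registration (with a hyphen) here, anticipating the workaround for the period in the name that Contoso discovers in step 2.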

Need more help?


Get help provisioning a SQL Database.
Learn about vCore resource limits.

Step 2: Migrate the database with DMA


Contoso admins will migrate the SmartHotel360 database using DMA.
Install DMA
1. They download the tool from the Microsoft Download Center to the on-premises SQL Server VM (SQLVM).
2. They run setup (DownloadMigrationAssistant.msi) on the VM.
3. On the Finish page, they select Launch Microsoft Data Migration Assistant before finishing the wizard.
Migrate the database with DMA
1. In the DMA, they create a new project (SmartHotelDB), and select Migration.
2. They select the source server type as SQL Server, and the target as Azure SQL Database.

3. In the migration details, they add SQLVM as the source server, and the SmartHotel.Registration
database.

4. They receive an error that seems to be associated with authentication. However, after investigating, they
find that the issue is the period (.) in the database name. As a workaround, they provision a new SQL
database named SmartHotel-Registration to resolve the issue. When they run DMA again,
they're able to select SmartHotel-Registration, and continue with the wizard.
5. In Select Objects, they select the database tables, and generate a SQL script.

6. After DMA generates the script, they click Deploy schema.


7. DMA confirms that the deployment succeeded.

8. Now they start the migration.


9. After the migration finishes, Contoso admins can verify that the database is running on the Azure SQL
instance.

10. They delete the extra SQL database SmartHotel.Registration in the Azure portal.

Step 3: Provision Web Apps


With the database migrated, Contoso admins can now provision the two web apps.
1. They select Web App in the portal.

2. They provide an app name (SHWEB-EUS2), run it on Windows, and place it in the production resource
group ContosoRG. They create a new app service and plan.

3. After the web app is provisioned, they repeat the process to create a web app for the WCF service
(SHWCF-EUS2).
4. After they're done, they browse to the address of the apps to check they've been created successfully.
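The same pair of web apps could be provisioned with the Azure CLI. This is an illustrative sketch: the App Service plan name and SKU are assumptions, while the web app names and resource group follow the article.

```shell
# Create an App Service plan in the production resource group.
# Plan name and S1 (Standard) SKU are illustrative choices.
az appservice plan create \
  --name ContosoSmartHotelPlan \
  --resource-group ContosoRG \
  --sku S1

# Create the two web apps on the plan:
# one for the website, one for the WCF service.
az webapp create --name SHWEB-EUS2 --resource-group ContosoRG --plan ContosoSmartHotelPlan
az webapp create --name SHWCF-EUS2 --resource-group ContosoRG --plan ContosoSmartHotelPlan
```

After creation, browsing to https://SHWEB-EUS2.azurewebsites.net (and the WCF equivalent) confirms the apps exist, matching the verification step above.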

Step 4: Set up Azure DevOps


Contoso needs to build the DevOps infrastructure and pipelines for the application. To do this, Contoso admins
create a new DevOps project, import the code, and then set up build and release pipelines.
1. In the Contoso Azure DevOps account, they create a new project (ContosoSmartHotelRefactor), and
select Git for version control.
2. They import the Git Repo that currently holds their app code. It's in a public repo and you can download
it.

3. After the code is imported, they connect Visual Studio to the repo, and clone the code using Team
Explorer.
4. After the repo is cloned to the developer machine, they open the Solution file for the app. The web app
and WCF service each have a separate project within the file.
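The project creation and repo import in steps 1–2 can also be scripted. This sketch assumes the Azure DevOps CLI extension is installed and that the organization URL and source repo URL are placeholders — the article doesn't give the public repo address, so it's left as a stand-in.

```shell
# One-time: add the Azure DevOps extension to the Azure CLI.
az extension add --name azure-devops

# Create the DevOps project with Git version control (the default).
az devops project create \
  --name ContosoSmartHotelRefactor \
  --organization https://dev.azure.com/contoso

# Import the public repo that holds the app code into the project's
# default Git repo. The source URL below is a placeholder.
az repos import create \
  --organization https://dev.azure.com/contoso \
  --project ContosoSmartHotelRefactor \
  --repository ContosoSmartHotelRefactor \
  --git-source-url "https://github.com/<public-repo-url>"
```

Cloning the imported repo into Visual Studio via Team Explorer then proceeds as in step 3 above.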

Step 5: Configure connection strings


Contoso admins need to make sure the web apps and database can all communicate. To do this, they configure
connection strings in the code and in the web apps.
1. In the web app for the WCF service (SHWCF-EUS2) > Settings > Application settings, they add a
new connection string named DefaultConnection.
2. The connection string is pulled from the SmartHotel-Registration database, and should be updated
with the correct credentials.
3. Using Visual Studio, they open the SmartHotel.Registration.wcf project from the solution file. The
connectionStrings section of the web.config file for the WCF service SmartHotel.Registration.Wcf
should be updated with the connection string.

4. The client section of the web.config file for the SmartHotel.Registration.Web should be changed to point
to the new location of the WCF service. This is the URL of the WCF web app hosting the service
endpoint.

5. After the changes are in the code, admins need to commit the changes. Using Team Explorer in Visual
Studio, they commit and sync.
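The app-setting side of step 1 can be sketched with the Azure CLI as well. The connection string below is illustrative: the server and database names follow the article, but the user and password are placeholders that must be replaced with the real credentials.

```shell
# Add the DefaultConnection string to the WCF web app's configuration.
# Credentials are placeholders; Encrypt=true keeps SQL traffic encrypted.
az webapp config connection-string set \
  --resource-group ContosoRG \
  --name SHWCF-EUS2 \
  --connection-string-type SQLAzure \
  --settings DefaultConnection="Server=tcp:sql-smarthotel-eus2.database.windows.net,1433;Database=SmartHotel-Registration;User ID=<user>;Password=<password>;Encrypt=true;"
```

Settings applied this way override the matching entry in web.config at runtime, which is why the code-side connection string in step 3 still needs to be valid for local development.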

Step 6: Set up build and release pipelines in Azure DevOps


Contoso admins now configure Azure DevOps to perform the build and release process.
1. In Azure DevOps, they click Build and release > New pipeline.
2. They select Azure Repos Git and the relevant repo.

3. In Select a template, they select the ASP.NET template for their build.
4. The name ContosoSmartHotelRefactor-ASP.NET-CI is used for the build. They click Save & Queue.

5. This kicks off the first build. They click on the build number to watch the process. After it's finished they
can see the process feedback, and click Artifacts to review the build results.

6. The folder Drop contains the build results.


The two zip files are the packages that contain the apps.
These files are used in the release pipeline for deployment to Azure Web Apps.
7. They click Releases > +New pipeline.

8. They select the Azure App Service deployment template.


9. They name the release pipeline ContosoSmartHotel360Refactor, and specify the name of the WCF
web app (SHWCF-EUS2) for the Stage name.

10. Under the stages, they click 1 job, 1 task to configure deployment of the WCF service.

11. They verify the subscription is selected and authorized, and select the App service name.
12. On the pipeline > Artifacts, they select +Add an artifact, and select to build with the
ContosoSmarthotel360Refactor pipeline.
13. They click the lightning bolt on the artifact to enable the continuous deployment trigger.

14. The continuous deployment trigger should be set to Enabled.


15. Now, they move back to Stage 1, open 1 job, 1 task, and click Deploy Azure App Service.

16. In Select a file or folder, they locate the SmartHotel.Registration.Wcf.zip file that was created
during the build, and click Save.

17. They click Pipeline > Stages +Add, to add an environment for SHWEB-EUS2. They select another
Azure App Service deployment.

18. They repeat the process to publish the web app (SmartHotel.Registration.Web.zip) file to the correct
web app.
19. After it's saved, the release pipeline will show as follows.

20. They move back to Build, and click Triggers > Enable continuous integration. This enables the
pipeline so that when changes are committed to the code, a full build and release occurs.
21. They click Save & Queue to run the full pipeline. A new build is triggered that in turn creates the first
release of the app to the Azure App Service.

22. Contoso admins can follow the build and release pipeline process from Azure DevOps. After the build
completes, the release will start.

23. After the pipeline finishes, both sites have been deployed and the app is up and running online.
At this point, the app is successfully migrated to Azure.

Clean up after migration


After migration, Contoso needs to complete these cleanup steps:
Remove the on-premises VMs from the vCenter inventory.
Remove the VMs from local backup jobs.
Update internal documentation to show the new locations for the SmartHotel360 app. Show the database as
running in Azure SQL database, and the front end as running in two web apps.
Review any resources that interact with the decommissioned VMs, and update any relevant settings or
documentation to reflect the new configuration.

Review the deployment


With the migrated resources in Azure, Contoso needs to fully operationalize and secure their new infrastructure.
Security
Contoso needs to ensure that their new SmartHotel-Registration database is secure. Learn more.
In particular, Contoso should update the web apps to use SSL with certificates.
Backups
Contoso needs to review backup requirements for the Azure SQL Database. Learn more.
Contoso also needs to learn about managing SQL Database backups and restores. Learn more about
automatic backups.
Contoso should consider implementing failover groups to provide regional failover for the database. Learn
more.
Contoso needs to consider deploying the web app in the main East US 2 region and in the Central US
region for resilience. Contoso could configure Traffic Manager to ensure failover in case of regional outages.
Licensing and cost optimization
After all resources are deployed, Contoso should assign Azure tags based on their infrastructure planning.
All licensing is built into the cost of the PaaS services that Contoso is consuming. This will be deducted from
the EA.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.

Conclusion
In this article, Contoso refactored the SmartHotel360 app in Azure by migrating the app frontend VM to two
Azure Web Apps. The app database was migrated to an Azure SQL database.
Contoso migration: Refactor a Contoso Linux
service desk app to multiple regions with Azure App
Service, Traffic Manager, and Azure MySQL
3/15/2019 • 14 minutes to read • Edit Online

This article shows how Contoso refactors their on-premises two-tier Linux service desk app (osTicket), by
migrating it to Azure App Service with GitHub integration, and Azure MySQL.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.

Article 1: Overview. Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series. (Available)
Article 2: Deploy Azure infrastructure. Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series. (Available)
Article 3: Assess on-premises resources for migration to Azure. Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. (Available)
Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance. Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service. (Available)
Article 5: Rehost an app on Azure VMs. Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. (Available)
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group. Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. (Available)
Article 7: Rehost a Linux app on Azure VMs. Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site Recovery. (Available)
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL. Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. (Available)
Article 9: Refactor an app on Azure Web Apps and Azure SQL database. Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure SQL Server instance with Database Migration Assistant. (Available)
Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL. Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. (This article)
Article 11: Refactor TFS on Azure DevOps Services. Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. (Available)
Article 12: Rearchitect an app on Azure containers and Azure SQL Database. Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. (Available)
Article 13: Rebuild an app in Azure. Contoso rebuilds its SmartHotel360 app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. (Available)
Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. (Available)

In this article, Contoso migrates a two-tier Linux Apache MySQL PHP (LAMP) service desk app (osTicket) to
Azure. If you'd like to use this open-source app, you can download it from GitHub.
Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve:
Address business growth: Contoso is growing and moving into new markets. It needs additional customer
service agents.
Scale: The solution should be built so that Contoso can add more customer service agents as the business
scales.
Increase resiliency: In the past, issues with the system affected internal users only. With the new business
model, external users will be affected, and Contoso needs the app up and running at all times.

Migration goals
The Contoso cloud team has pinned down goals for this migration, in order to determine the best migration
method:
The application should scale beyond current on-premises capacity and performance. Contoso is moving the
application to take advantage of Azure's on-demand scaling.
Contoso wants to move the app code base to a continuous delivery pipeline. As app changes are pushed to
GitHub, Contoso wants to deploy those changes without tasks for operations staff.
The application must be resilient, with capabilities for growth and failover. Contoso wants to deploy the app in
two different Azure regions, and set it up to scale automatically.
Contoso wants to minimize database admin tasks after the app is moved to the cloud.

Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that will be used for the migration.

Current architecture
The app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on the VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Here's the proposed architecture:
The web tier app on OSTICKETWEB will be migrated by building an Azure App Service in two Azure
regions. Azure App Service for Linux will be implemented using the PHP 7.0 Docker container.
The app code will be moved to GitHub, and Azure Web App will be configured for continuous delivery with
GitHub.
The web apps will be deployed in both the primary (East US 2) and secondary (Central US) regions.
Traffic Manager will be set up in front of the two Azure Web Apps in both regions.
Traffic Manager will be configured in priority mode to force the traffic through East US 2.
If the web app in East US 2 goes offline, users can access the failed over app in Central US.
The app database will be migrated to the Azure Database for MySQL PaaS service using MySQL Workbench
tools. The on-premises database will be backed up locally, and restored directly to Azure Database for MySQL.
The database will reside in the primary East US 2 region, in the database subnet (PROD-DB-EUS2) in the
production network (VNET-PROD-EUS2).
Since they're migrating a production workload, Azure resources for the app will reside in the production
resource group ContosoRG.
The Traffic Manager resource will be deployed in Contoso's infrastructure resource group ContosoInfraRG.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.

Migration process
Contoso will complete the migration process as follows:
1. As a first step, Contoso admins set up the Azure infrastructure, including provisioning Azure App Services,
setting up Traffic Manager, and provisioning an Azure MySQL instance.
2. After preparing the Azure infrastructure, they migrate the database using MySQL Workbench.
3. After the database is running in Azure, they set up a private GitHub repo for the Azure App Service with
continuous delivery, and load it with the osTicket app.
4. In the Azure portal, they load the app from GitHub to the Docker container running Azure App Service.
5. They tweak DNS settings, and configure autoscaling for the app.
Azure services
Azure App Service: The service runs and scales applications using the Azure PaaS service for websites. Pricing
is based on the size of the instances, and the features required. Learn more.

Traffic Manager: A load balancer that uses DNS to direct users to Azure, or external websites and services.
Pricing is based on the number of DNS queries received, and the number of monitored endpoints. Learn more.

Azure Database for MySQL: The database is based on the open-source MySQL Server engine. It provides a
fully managed, enterprise-ready community MySQL database, as a service for app development and
deployment. Pricing is based on compute, storage, and backup requirements. Learn more.

Prerequisites
Here's what Contoso needs to run this scenario.

Azure subscription: Contoso created subscriptions earlier in this article series. If you don't have an Azure
subscription, create a free account. If you create a free account, you're the administrator of your subscription
and can perform all actions. If you use an existing subscription and you're not the administrator, you need to
work with the admin to assign you Owner or Contributor permissions.

Azure infrastructure: Contoso set up their Azure infrastructure as described in Azure infrastructure for
migration.

Scenario steps
Here's how Contoso will complete the migration:
Step 1: Provision Azure App Services: Contoso admins will provision Web Apps in the primary and
secondary regions.
Step 2: Set up Traffic Manager: They set up Traffic Manager in front of the Web Apps, for routing and load
balancing traffic.
Step 3: Provision MySQL: In Azure, they provision an instance of Azure MySQL database.
Step 4: Migrate the database: They migrate the database using MySQL Workbench.
Step 5: Set up GitHub: They set up a private GitHub repository for the app web sites/code.
Step 6: Deploy the web apps: They deploy the web apps from GitHub.

Step 1: Provision Azure App Services


Contoso admins provision two web apps (one in each region) using Azure App Service.
1. They create a Web App resource in the primary East US 2 region (osticket-eus2) from the Azure
Marketplace.
2. They put the resource in the production resource group ContosoRG.
3. They create a new App Service plan in the primary region (APP-SVP-EUS2), using the standard size.
4. They select a Linux OS with PHP 7.0 runtime stack, which is a Docker container.
5. They create a second web app (osticket-cus), and an App Service plan for the Central US region.
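The portal steps above can also be scripted. Here's a minimal Azure CLI sketch under the names used in this scenario; the Central US plan name (APP-SVP-CUS), the S1 SKU, and the "PHP|7.0" runtime string are assumptions to verify against your CLI version:

```shell
# Create a Linux App Service plan and web app in the primary region.
az appservice plan create --name APP-SVP-EUS2 --resource-group ContosoRG \
    --location eastus2 --is-linux --sku S1
az webapp create --name osticket-eus2 --plan APP-SVP-EUS2 \
    --resource-group ContosoRG --runtime "PHP|7.0"

# Repeat for the secondary Central US region (plan name is hypothetical).
az appservice plan create --name APP-SVP-CUS --resource-group ContosoRG \
    --location centralus --is-linux --sku S1
az webapp create --name osticket-cus --plan APP-SVP-CUS \
    --resource-group ContosoRG --runtime "PHP|7.0"
```

This mirrors the portal flow only; the article itself uses the Azure Marketplace experience.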
Need more help?
Learn about Azure App Service Web apps.
Learn about Azure App Service on Linux.

Step 2: Set up Traffic Manager


Contoso admins set up Traffic Manager to direct inbound web requests to Web Apps running on the osTicket
web tier.
1. They create a Traffic Manager resource (osticket.trafficmanager.net) from the Azure Marketplace. They
use priority routing so that East US 2 is the primary site. They place the resource in their infrastructure
resource group (ContosoInfraRG). Note that Traffic Manager is global, and not bound to a specific
location.
2. Now, they configure Traffic Manager with endpoints. They add the East US 2 web app as the primary site
(osticket-eus2), and the Central US app as secondary (osticket-cus).

3. After adding the endpoints, they can monitor them.
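For reference, the same profile and endpoints can be sketched with the Azure CLI (a hedged equivalent of the portal steps; names follow the scenario):

```shell
# Create the Traffic Manager profile with priority routing.
az network traffic-manager profile create --name osticket \
    --resource-group ContosoInfraRG --routing-method Priority \
    --unique-dns-name osticket

# Look up the resource IDs of the two web apps.
eus2_id=$(az webapp show --name osticket-eus2 --resource-group ContosoRG --query id -o tsv)
cus_id=$(az webapp show --name osticket-cus --resource-group ContosoRG --query id -o tsv)

# Priority 1 makes East US 2 the primary site; Central US is the failover.
az network traffic-manager endpoint create --name osticket-eus2 \
    --profile-name osticket --resource-group ContosoInfraRG \
    --type azureEndpoints --target-resource-id "$eus2_id" --priority 1
az network traffic-manager endpoint create --name osticket-cus \
    --profile-name osticket --resource-group ContosoInfraRG \
    --type azureEndpoints --target-resource-id "$cus_id" --priority 2
```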


Need more help?
Learn about Traffic Manager.
Learn about routing traffic to a priority endpoint.

Step 3: Provision Azure Database for MySQL


Contoso admins provision a MySQL database instance in the primary East US 2 region.
1. In the Azure portal, they create an Azure Database for MySQL resource.

2. They add the name contosoosticket for the Azure database. They add the database to the production
resource group ContosoRG, and specify credentials for it.
3. The on-premises MySQL database is version 5.7, so they select this version for compatibility. They use
the default sizes, which match their database requirements.
4. For Backup Redundancy Options, they select to use Geo-Redundant. This option allows them to
restore the database in their secondary Central US region if an outage occurs. They can only configure
this option when they provision the database.

5. They set up connection security. In the database > Connection Security, they set up firewall rules to
allow Azure services to access the database.
6. They add the local workstation client IP address to the start and end IP addresses. This allows the web
apps to access the MySQL database, along with the database client that's performing the migration.
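The provisioning and firewall steps can be sketched with the Azure CLI as well. The SKU, admin user name, and client IP below are assumptions for illustration:

```shell
# Provision the Azure Database for MySQL server: version 5.7 for compatibility,
# geo-redundant backup so it can be restored in Central US after an outage.
az mysql server create --name contosoosticket --resource-group ContosoRG \
    --location eastus2 --admin-user osticketadmin --admin-password '<password>' \
    --sku-name GP_Gen5_2 --version 5.7 --geo-redundant-backup Enabled

# Allow Azure services to reach the server (the special 0.0.0.0 rule).
az mysql server firewall-rule create --server-name contosoosticket \
    --resource-group ContosoRG --name AllowAzureServices \
    --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

# Allow the workstation running the migration (example IP).
az mysql server firewall-rule create --server-name contosoosticket \
    --resource-group ContosoRG --name AllowMigrationClient \
    --start-ip-address 203.0.113.25 --end-ip-address 203.0.113.25
```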
Step 4: Migrate the database
Contoso admins migrate the database using backup and restore, with MySQL tools. They install MySQL
Workbench, back up the database from OSTICKETMYSQL, and then restore it to Azure Database for MySQL
Server.
Install MySQL Workbench
1. They check the prerequisites and download MySQL Workbench.
2. They install MySQL Workbench for Windows in accordance with the installation instructions. The
machine on which they install must be accessible to the OSTICKETMYSQL VM, and Azure via the
internet.
3. In MySQL Workbench, they create a MySQL connection to OSTICKETMYSQL.

4. They export the database as osticket, to a local self-contained file.


5. After the database has been backed up locally, they create a connection to the Azure Database for
MySQL instance.

6. Now, they can import (restore) the database in the Azure MySQL instance, from the self-contained file. A
new schema (osticket) is created for the instance.
7. After data is restored, it can be queried using Workbench, and appears in the Azure portal.

8. Finally, they need to update the database information on the web apps. On the MySQL instance, they
open Connection Strings.
9. In the strings list, they locate the Web App settings, and click to copy them.

10. They open a Notepad window and paste the string into a new file, and update it to match the osticket
database, MySQL instance, and credentials settings.

11. They can verify the server name and login from Overview in the MySQL instance in the Azure portal.
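The Workbench export/import above also has a command-line equivalent, and the connection string assembled in Notepad follows a predictable shape. A sketch, with hypothetical credentials (the user@servername login form is how Azure Database for MySQL expects logins):

```shell
# Command-line alternative to the MySQL Workbench backup/restore (shown as
# comments because it needs access to both database servers):
#
#   mysqldump -h OSTICKETMYSQL -u root -p osticket > osticket.sql
#   mysql -h contosoosticket.mysql.database.azure.com \
#         -u osticketadmin@contosoosticket -p --ssl-mode=REQUIRED osticket < osticket.sql
#
# After the restore, the web apps need a connection string pointing at the
# Azure instance. Assembling it from the values on the server's Overview page:
server="contosoosticket.mysql.database.azure.com"
database="osticket"
user="osticketadmin@contosoosticket"
password="<password>"

connstring="Database=${database};Data Source=${server};User Id=${user};Password=${password}"
echo "$connstring"
```

The exact key names in the string should be taken from the Connection Strings blade, as in step 9; the sketch only illustrates substituting the osticket values.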

Step 5: Set up GitHub


Contoso admins create a new private GitHub repo, and set up a connection to the osTicket database in Azure
MySQL. Then, they load the Azure Web App with the app.
1. They browse to the OsTicket software public GitHub repo, and fork it to the Contoso GitHub account.

2. After forking, they navigate to the include folder, and find the ost-config.php file.
3. The file opens in the browser and they edit it.

4. In the editor, they update the database details, specifically DBHOST and DBUSER.

5. Then they commit the changes.

6. For each web app (osticket-eus2 and osticket-cus), they modify the Application settings in the Azure
portal.
7. They enter the connection string with the name osticket, and copy the string from notepad into the
value area. They select MySQL in the dropdown list next to the string, and save the settings.
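The ost-config.php edit in steps 2-4 can also be done from a local clone instead of the GitHub web editor. A sketch using sed; the file contents and target values below are illustrative stand-ins (DBHOST and DBUSER are real osTicket settings, the host and user values are this scenario's):

```shell
# Stand-in for the forked repo's include/ost-config.php (values are examples).
cat > ost-config.php <<'EOF'
<?php
define('DBHOST','localhost');
define('DBUSER','root');
EOF

# Point the app at the Azure Database for MySQL instance.
sed -i.bak \
  -e "s/define('DBHOST',.*/define('DBHOST','contosoosticket.mysql.database.azure.com');/" \
  -e "s/define('DBUSER',.*/define('DBUSER','osticketadmin@contosoosticket');/" \
  ost-config.php

grep DBHOST ost-config.php
```

After committing the change, the rest of step 5 continues as described.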

Step 6: Configure the Web Apps


As the final step in the migration process, Contoso admins configure the web apps with the osTicket web sites.
1. In the primary web app (osticket-eus2) they open Deployment options, and set the source to GitHub.

2. They select the deployment options.


3. After setting the options, the configuration shows as pending in the Azure portal.

4. After the configuration is updated and the osTicket web app is loaded from GitHub to the Docker
container running the Azure App Service, the site shows as Active.

5. They repeat the above steps for the secondary web app ( osticket-cus).
6. After the site is configured, it's accessible via the Traffic Manager profile. The DNS name is the new
location of the osTicket app. Learn more.
7. Contoso wants a DNS name that's easy to remember. They create an alias record (CNAME),
osticket.contoso.com, which points to the Traffic Manager name, in the DNS on their domain
controllers.

8. They configure both the osticket-eus2 and osticket-cus web apps to allow the custom hostnames.
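The deployment source and hostname configuration can be sketched with the Azure CLI too (the repo URL is a placeholder for Contoso's fork):

```shell
# Point each web app at the GitHub repo for continuous delivery.
az webapp deployment source config --name osticket-eus2 --resource-group ContosoRG \
    --repo-url https://github.com/contoso/osTicket --branch master
az webapp deployment source config --name osticket-cus --resource-group ContosoRG \
    --repo-url https://github.com/contoso/osTicket --branch master

# Allow the friendly DNS name (the CNAME that points at Traffic Manager).
az webapp config hostname add --webapp-name osticket-eus2 \
    --resource-group ContosoRG --hostname osticket.contoso.com
az webapp config hostname add --webapp-name osticket-cus \
    --resource-group ContosoRG --hostname osticket.contoso.com
```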

Set up autoscaling
Finally, they set up automatic scaling for the app. This ensures that as agents use the app, the app instances
increase and decrease according to business needs.
1. In the App Service plan APP-SVP-EUS2, they open Scale Unit.
2. They configure a new autoscale setting with a single rule that increases the instance count by one when
the CPU percentage for the current instance is above 70% for 10 minutes.
3. They configure the same setting on APP-SVP-CUS to ensure that the same behavior applies if the app
fails over to the secondary region. The only difference is that they set the instance limit to 1, since this is
for failovers only.
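The autoscale rule above maps naturally to the Azure CLI. A sketch, assuming the plan names used in this scenario and an example instance range of 1-5:

```shell
# Create an autoscale setting on the App Service plan (serverfarm).
az monitor autoscale create --resource-group ContosoRG \
    --resource APP-SVP-EUS2 --resource-type Microsoft.Web/serverfarms \
    --name osticket-autoscale-eus2 --min-count 1 --max-count 5 --count 1

# Scale out by one instance when average CPU exceeds 70% over 10 minutes.
az monitor autoscale rule create --resource-group ContosoRG \
    --autoscale-name osticket-autoscale-eus2 \
    --condition "CpuPercentage > 70 avg 10m" --scale out 1
```

For the secondary region, the same commands would target the Central US plan with a max count of 1.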

Clean up after migration


With migration complete, the osTicket app is refactored to run in an Azure web app with continuous
delivery, using a private GitHub repo. The app's running in two regions for increased resilience. The osTicket
database is running in Azure Database for MySQL after migration to the PaaS platform.
For clean up, Contoso needs to do the following:
Remove the VMware VMs from the vCenter inventory.
Remove the on-premises VMs from local backup jobs.
Update internal documentation to show the new locations and IP addresses.
Review any resources that interact with the on-premises VMs, and update any relevant settings or
documentation to reflect the new configuration.
Reconfigure monitoring to point at the osticket.trafficmanager.net URL, to track that the app is up and
running.

Review the deployment


With the app now running, Contoso needs to fully operationalize and secure their new infrastructure.
Security
The Contoso security team reviewed the app to determine any security issues. They identified that the
communication between the osTicket app and the MySQL database instance isn't configured for SSL. They will
need to do this to ensure that database traffic can't be hacked. Learn more.
Backups
The osTicket web apps don't contain state data and thus don't need to be backed up.
They don't need to configure backup for the database. Azure Database for MySQL automatically creates
and stores server backups. They selected geo-redundancy for the database, so it's resilient and
production-ready. Backups can be used to restore the server to a point in time. Learn more.
Licensing and cost optimization
There are no licensing issues for the PaaS deployment.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn
more about Azure Cost Management.
Contoso migration: Refactor a Team Foundation
Server deployment to Azure DevOps Services
3/21/2019 • 17 minutes to read • Edit Online

This article shows how Contoso is refactoring their on-premises Team Foundation Server (TFS) deployment
by migrating it to Azure DevOps Services in Azure. Contoso's development team has used TFS for team
collaboration and source control for the past five years. Now, they want to move to a cloud-based solution for
dev and test work, and for source control. Azure DevOps Services will play a role as they move to an Azure
DevOps model, and develop new cloud-native apps.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.

Article 1: Overview (Available). Provides an overview of Contoso's migration strategy, the article series, and
the sample apps we use.

Article 2: Deploy an Azure infrastructure (Available). Describes how Contoso prepares its on-premises and
Azure infrastructure for migration. The same infrastructure is used for all Contoso migration scenarios.

Article 3: Assess on-premises resources (Available). Shows how Contoso runs an assessment of their
on-premises two-tier SmartHotel app running on VMware. They assess app VMs with the Azure Migrate
service, and the app SQL Server database with the Azure Database Migration Assistant.

Article 4: Rehost to Azure VMs and a SQL Managed Instance (Available). Demonstrates how Contoso migrates
the SmartHotel app to Azure. They migrate the app web VM using Azure Site Recovery, and the app database
using the Azure Database Migration Service, to migrate to a SQL Managed Instance.

Article 5: Rehost to Azure VMs (Available). Shows how Contoso migrates their SmartHotel app to Azure IaaS
VMs, using the Site Recovery service.

Article 6: Rehost to Azure VMs and SQL Server Availability Groups (Available). Shows how Contoso migrates
the SmartHotel app. They use Site Recovery to migrate the app VMs, and the Database Migration Service to
migrate the app database to a SQL Server Availability Group.

Article 7: Rehost a Linux app to Azure VMs (Available). Shows how Contoso migrates their osTicket Linux app
to Azure IaaS VMs using Azure Site Recovery.

Article 8: Rehost a Linux app to Azure VMs and Azure MySQL Server (Available). Demonstrates how Contoso
migrates the osTicket Linux app. They use Site Recovery for VM migration, and MySQL Workbench to migrate
to an Azure MySQL Server instance.

Article 9: Refactor an app to an Azure Web App and Azure SQL Database (Available). Demonstrates how
Contoso migrates the SmartHotel app to an Azure container-based web app, and migrates the app database to
Azure SQL Server.

Article 10: Refactor a Linux app to Azure App Service and Azure MySQL Server (Available). Shows how
Contoso migrates the osTicket Linux app to Azure App Service using a PHP 7.0 Docker container. The code
base for the deployment is migrated to GitHub. The app database is migrated to Azure MySQL.

Article 11: Refactor a TFS deployment in Azure DevOps Services (This article). Migrates the dev app TFS to
Azure DevOps Services in Azure.

Article 12: Rearchitect an app on Azure containers and Azure SQL Database (Available). Shows how Contoso
migrates and rearchitects their SmartHotel app to Azure. They rearchitect the app web tier as a Windows
container, and the app database in an Azure SQL Database.

Article 13: Rebuild an app in Azure (Available). Shows how Contoso rebuilds their SmartHotel app using a
range of Azure capabilities and services, including App Services, Azure Kubernetes, Azure Functions, Cognitive
services, and Cosmos DB.

Article 14: Scale a migration to Azure (Available). After trying out migration combinations, Contoso prepares
to scale to a full migration to Azure.

Business drivers
The IT Leadership team has worked closely with business partners to identify future goals. Partners aren't overly
concerned with dev tools and technologies, but they have captured these points:
Software: Regardless of the core business, all companies are now software companies, including Contoso.
Business leadership is interested in how IT can help lead the company with new working practices for users,
and experiences for their customers.
Efficiency: Contoso needs to streamline processes and remove unnecessary procedures for developers and
users. This will allow the company to deliver on customer requirements more efficiently. The business needs
IT to move fast, without wasting time or money.
Agility: Contoso IT needs to respond to business needs, and react more quickly than the marketplace to
enable success in a global economy. IT mustn't be a blocker for the business.

Migration goals
The Contoso cloud team has pinned down goals for the migration to Azure DevOps Services:
The team needs a tool to migrate the data to the cloud. Few manual processes should be needed.
Work item data and history for the last year must be migrated.
They don't want to set up new user names and passwords. All current system assignments must be
maintained.
They want to move away from Team Foundation Version Control (TFVC) to Git for source control.
The cutover to Git will be a "tip migration" that imports only the latest version of the source code. It will
happen during a downtime when all work will be halted as the codebase shifts. They understand that only
the current master branch history will be available after the move.
They're concerned about the change and want to test it before doing a full move. They want to retain access
to TFS even after the move to Azure DevOps Services.
They have multiple collections, and want to start with one that has only a few projects to better understand
the process.
They understand that TFS collections are a one-to-one relationship with Azure DevOps Services
organizations, so they'll have multiple URLs. However, this matches their current model of separation for
code bases and projects.

Proposed architecture
Contoso will move their TFS projects to the cloud, and no longer host their projects or source control on-
premises.
TFS will be migrated to Azure DevOps Services.
Currently Contoso has one TFS collection named ContosoDev, which will be migrated to an Azure DevOps
Services organization called contosodevmigration.visualstudio.com.
The projects, work items, bugs and iterations from the last year will be migrated to Azure DevOps Services.
Contoso will leverage their Azure Active Directory, which they set up when they deployed their Azure
infrastructure at the beginning of their migration planning.
Migration process
Contoso will complete the migration process as follows:
1. There's a lot of preparation involved. As a first step, Contoso needs to upgrade their TFS implementation to
a supported level. Contoso is currently running TFS 2017 Update 3, but to use database migration it needs
to run a supported 2018 version with the latest updates.
2. After upgrading, Contoso will run the TFS migration tool, and validate their collection.
3. Contoso will build a set of preparation files, and perform a migration dry run for testing.
4. Contoso will then run another migration, this time a full migration that includes work items, bugs, sprints,
and code.
5. After the migration, Contoso will move their code from TFVC to Git.

Scenario steps
Here's how Contoso will complete the migration:
Step 1: Create an Azure storage account: This storage account will be used during the migration process.
Step 2: Upgrade TFS: Contoso will upgrade their deployment to TFS 2018 Update 2.
Step 3: Validate collection: Contoso will validate the TFS collection in preparation for migration.
Step 4: Build preparation file: Contoso will create the migration files using the TFS Migration Tool.

Step 1: Create a storage account


1. In the Azure portal, Contoso admins create a storage account (contosodevmigration).
2. They place the account in the secondary region they use for failover (Central US). They use a general-
purpose standard account with locally-redundant storage.
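The same account can be created with the Azure CLI. A sketch; the resource group is a placeholder, since the article doesn't name it:

```shell
# Create the migration storage account: general-purpose, locally-redundant,
# in the Central US failover region.
az storage account create --name contosodevmigration \
    --resource-group <resource-group> --location centralus \
    --sku Standard_LRS --kind StorageV2
```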
Need more help?
Introduction to Azure storage.
Create a storage account.

Step 2: Upgrade TFS


Contoso admins upgrade the TFS server to TFS 2018 Update 2. Before they start:
They download TFS 2018 Update 2
They verify the hardware requirements, and read through the release notes and upgrade gotchas.
They upgrade as follows:
1. To start, they back up their TFS server (running on a VMware VM) and take a VMware snapshot.
2. The TFS installer starts, and they choose the install location. The installer needs internet access.

3. After the installation finishes, the Server Configuration Wizard starts.


4. After verification, the Wizard completes the upgrade.

5. They verify the TFS installation by reviewing projects, work items, and code.
NOTE
Some TFS upgrades need to run the Configure Features Wizard after the upgrade completes. Learn more.

Need more help?


Learn about upgrading TFS.

Step 3: Validate the TFS collection


Contoso admins run the TFS Migration Tool against the ContosoDev collection database to validate it before
migration.
1. They download and unzip the TFS Migration Tool. It's important to download the version for the TFS
update that's running. The version can be checked in the admin console.

2. They run the tool to perform the validation, by specifying the URL of the project collection:
TfsMigrator validate /collection:http://contosotfs:8080/tfs/ContosoDev
3. The tool shows an error.
4. The log files are located in the Logs folder, just above the tool location. A log file is
generated for each major validation. TfsMigration.log holds the main information.

5. They find this entry, related to identity.

6. They run TfsMigrator validate /help at the command line, and see that the /tenantDomainName
parameter is required to validate identities.

7. They run the validation command again, and include this value, along with their Azure AD name:
TfsMigrator validate /collection:http://contosotfs:8080/tfs/ContosoDev
/tenantDomainName:contosomigration.onmicrosoft.com.

8. An Azure AD Sign In screen appears, and they enter the credentials of a Global Admin user.
9. The validation passes, and is confirmed by the tool.

Step 4: Create the migration files


With the validation complete, Contoso admins can use the TFS Migration Tool to build the migration files.
1. They run the prepare step in the tool.
TfsMigrator prepare /collection:http://contosotfs:8080/tfs/ContosoDev
/tenantDomainName:contosomigration.onmicrosoft.com /accountRegion:cus

Prepare does the following:


Scans the collection to find a list of all users, and populates the identity map log
(IdentityMapLog.csv).
Prepares the connection to Azure Active Directory to find a match for each identity.
Contoso has already deployed Azure AD and synchronized it using AD Connect, so Prepare should be
able to find the matching identities and mark them as Active.
2. An Azure AD Sign In screen appears, and they enter the credentials of a Global Admin.

3. Prepare completes, and the tool reports that the import files have been generated successfully.

4. They can now see that both the IdentityMapLog.csv and the import.json file have been created in a new
folder.
5. The import.json file provides import settings. It includes information such as the desired organization
name, and storage account information. Most of the fields are populated automatically. Some fields
required user input. Contoso opens the file, and adds the Azure DevOps Services organization name to
be created: contosodevmigration. With this name, their Azure DevOps Services URL will be
contosodevmigration.visualstudio.com.

NOTE
The organization must be created before the migration. It can be changed after the migration is done.

6. They review the identity log map file that shows the accounts that will be brought into Azure DevOps
Services during the import.
Active identities refer to identities that will become users in Azure DevOps Services after the
import.
On Azure DevOps Services, these identities will be licensed, and show up as a user in the
organization after migration.
These identities are marked as Active in the Expected Import Status column in the file.
Step 5: Migrate to Azure DevOps Services
With preparation in place, Contoso admins can now focus on the migration. After running the migration, they'll
switch from using TFVC to Git for version control.
Before they start, the admins schedule downtime with the dev team, to take the collection offline for migration.
These are the steps for the migration process:
1. Detach the collection: Identity data for the collection resides in the TFS server configuration database
while the collection is attached and online. When a collection is detached from the TFS server, it takes a copy
of that identity data, and packages it with the collection for transport. Without this data, the identity portion
of the import cannot be executed. It's recommended that the collection stay detached until the import has
been completed, as there's no way to import the changes which occurred during the import.
2. Generate a backup: The next step of the migration process is to generate a backup that can be imported
into Azure DevOps Services. Data-tier Application Component Packages (DACPAC ), is a SQL Server feature
that allows database changes to be packaged into a single file, and deployed to other instances of SQL. It can
also be restored directly to Azure DevOps Services, and is therefore used as the packaging method for
getting collection data into the cloud. Contoso will use the SqlPackage.exe tool to generate the DACPAC. This
tool is included in SQL Server Data Tools.
3. Upload to storage: After the DACPAC is created, they upload it to Azure Storage. After it's uploaded, they
get a shared access signature (SAS ), to allow the TFS Migration Tool access to the storage.
4. Fill out the import: Contoso can then fill out missing fields in the import file, including the DACPAC setting.
To start with they'll specify that they want to do a DryRun import, to check that everything's working
properly before the full migration.
5. Do a dry run: Dry run imports help test collection migration. Dry runs have limited life, and are deleted
before a production migration runs. They're deleted automatically after a set period of time. A note about
when the dry run will be deleted is included in the success email received after the import finishes. Take note
and plan accordingly.
6. Complete the production migration: With the Dry Run migration completed, Contoso admins do the
final migration by updating the import.json, and running import again.
Detach the collection
Before starting, Contoso admins take a local SQL Server backup, and VMware snapshot of the TFS server,
before detaching.
1. In the TFS Admin console, they select the collection they want to detach (ContosoDev).

2. In General, they select Detach Collection


3. In the Detach Team Project Collection Wizard > Servicing Message, they provide a message for users
who might try to connect to projects in the collection.

4. In Detach Progress, they monitor progress and click Next when the process finishes.

5. In Readiness Checks, when checks finish they click Detach.


6. They click Close to finish up.

7. The collection is no longer referenced in the TFS Admin console.


Generate a DACPAC
Contoso creates a backup (DACPAC ) for import into Azure DevOps Services.
SqlPackage.exe in SQL Server Data Tools is used to create the DACPAC. There are multiple versions of
SqlPackage.exe installed with SQL Server Data Tools, located under folders with names such as 120, 130,
and 140. It's important to use the right version to prepare the DACPAC.
TFS 2018 imports need to use SqlPackage.exe from the 140 folder or higher. For CONTOSOTFS, this file is
located in folder: C:\Program Files (x86)\Microsoft Visual
Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\140.
Contoso admins generate the DACPAC as follows:
1. They open a command prompt and navigate to the SQLPackage.exe location. They type this following
command to generate the DACPAC:
SqlPackage.exe /sourceconnectionstring:"Data
Source=SQLSERVERNAME\INSTANCENAME;Initial Catalog=Tfs_ContosoDev;Integrated
Security=True" /targetFile:C:\TFSMigrator\Tfs_ContosoDev.dacpac /action:extract
/p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true
/p:Storage=Memory

2. The following message appears after the command runs.

3. They verify the properties of the DACPAC file


Update the file to storage
After the DACPAC is created, Contoso uploads it to Azure Storage.
1. They download and install Azure Storage Explorer.

2. They connect to their subscription and locate the storage account they created for the migration
(contosodevmigration). They create a new blob container, azuredevopsmigration.

3. They specify the DACPAC file for upload as a block blob.


4. After the file's uploaded, they click the file name > Generate SAS. They expand the blob containers
under the storage account, select the container with the import files, and click Get Shared Access
Signature.

5. They accept the defaults and click Create. This enables access for 24 hours.
6. They copy the Shared Access Signature URL, so that it can be used by the TFS Migration Tool.
NOTE
The migration must happen within the allowed time window, or permissions will expire. Don't generate a SAS key
from the Azure portal. Keys generated like this are account-scoped, and won't work with the import.

Fill in the import settings


Earlier, Contoso admins partially filled out the import specification file (import.json). Now, they need to add the
remaining settings.
They open the import.json file, and fill out the following fields:
Location: Location of the SAS key that was generated above.
Dacpac: Set the name to the DACPAC file you uploaded to the storage account. Include the ".dacpac" extension.
ImportType: Set to DryRun for now.
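The three fields can also be filled programmatically. The flat key layout in this sketch is illustrative only; keep the nesting of your actual import.json:

```python
def fill_import_settings(spec, sas_url, dacpac_name, import_type="DryRun"):
    """Fill the remaining import settings named above (flat layout is illustrative)."""
    spec["Location"] = sas_url        # SAS URL for the storage container
    spec["Dacpac"] = dacpac_name      # must include the ".dacpac" extension
    spec["ImportType"] = import_type  # "DryRun" first; "ProductionRun" later
    return spec
```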

Do a dry run migration


Contoso admins start with a dry run migration, to make sure everything's working as expected.
1. They open a command prompt, and navigate to the TfsMigrator location (C:\TFSMigrator).
2. As a first step, they validate the import file, to be sure the file is formatted properly and that
the SAS key is working.
TfsMigrator import /importFile:C:\TFSMigrator\import.json /validateonly
3. The validation returns an error that the SAS key needs a longer expiry time.

4. They use Azure Storage Explorer to create a new SAS key with expiry set to seven days.

5. They update the import.json file and run the validation again. This time it completes successfully.
TfsMigrator import /importFile:C:\TFSMigrator\import.json /validateonly
6. They start the dry run:
TfsMigrator import /importFile:C:\TFSMigrator\import.json
7. A message is issued to confirm the migration. Note the length of time for which the staged data will be
maintained after the dry run.

8. The Azure AD sign-in prompt appears, and they complete it with a Contoso admin sign-in.
9. A message shows information about the import.

10. After 15 minutes or so, they browse to the URL, and review the import status information.
11. After the migration finishes, a Contoso dev lead signs into Azure DevOps Services to check that the dry
run worked properly. After authentication, Azure DevOps Services needs a few details to confirm the
organization.
12. In Azure DevOps Services, the Dev Lead can see that the projects have been migrated to Azure DevOps
Services. There's a notice that the organization will be deleted in 15 days.

13. The Dev Lead opens one of the projects and opens Work Items > Assigned to me. This shows that
work item data has been migrated, along with identity.
14. The lead also checks other projects and code, to confirm that the source code and history have been
migrated.
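The validate-then-import sequence used above can be wrapped in a small helper that builds the TfsMigrator command line. The flags match the article's commands; the wrapper itself is a hypothetical convenience:

```python
def tfs_migrator_args(import_file, validate_only=False):
    """Build the TfsMigrator argument list used in the dry run steps above."""
    args = ["TfsMigrator", "import", f"/importFile:{import_file}"]
    if validate_only:
        args.append("/validateonly")  # validation pass before the real import
    return args
```

Running the validation form first, as Contoso did, catches formatting and SAS problems before any data moves.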

Run the production migration


With the dry run complete, Contoso admins move on to the production migration. They delete the dry run,
update the import settings, and run import again.
1. In the Azure DevOps Services portal, they delete the dry run organization.
2. They update the import.json file to set the ImportType to ProductionRun.
3. They start the migration as they did for the dry run: TfsMigrator import
/importFile:C:\TFSMigrator\import.json.
4. A message shows to confirm the migration, and warns that data could be held in a secure location as a
staging area for up to seven days.

5. In Azure AD Sign In, they specify a Contoso Admin sign-in.


6. A message shows information about the import.

7. After around 15 minutes, they browse to the URL, and review the import status information.
8. After the migration finishes, a Contoso Dev Lead logs onto Azure DevOps Services to check that the
migration worked properly. After login, he can see that projects have been migrated.

9. The Dev Lead opens one of the projects and opens Work Items > Assigned to me. This shows that
work item data has been migrated, along with identity.
10. The lead checks other work item data to confirm.

11. The lead also checks other projects and code, to confirm that the source code and history have been
migrated.
Move source control from TFVC to GIT
With migration complete, Contoso wants to move from TFVC to Git for source code management. They need to
import the source code currently in their Azure DevOps Services organization as Git repos in the same
organization.
1. In the Azure DevOps Services portal, they open one of the TFVC repos ($/PolicyConnect) and review it.

2. They click the Source dropdown > Import.


3. In Source type they select TFVC, and specify the path to the repo. They've decided not to migrate the
history.

NOTE
Due to differences in how TFVC and Git store version control information, we recommend that Contoso doesn't
migrate history. This is the approach that Microsoft took when it migrated Windows and other products from
centralized version control to Git.

4. After the import, admins review the code.


5. They repeat the process for the second repository ($/SmartHotelContainer).

6. After reviewing the source, the Dev Leads agree that the migration to Azure DevOps Services is done.
Azure DevOps Services now becomes the source for all development within teams involved in the
migration.
Need more help?
Learn more about importing from TFVC.

Clean up after migration


With migration complete, Contoso needs to do the following:
Review the post-import article for information about additional import activities.
Either delete the TFVC repos, or place them in read-only mode. The code bases mustn't be used, but can be
referenced for their history.

Next steps
Contoso will need to provide Azure DevOps Services and Git training for relevant team members.
Contoso migration: Rearchitect an on-premises app
to an Azure container and Azure SQL Database
3/15/2019 • 23 minutes to read • Edit Online

This article demonstrates how Contoso migrates and rearchitects its SmartHotel360 app in Azure. Contoso
migrates the app frontend VM to an Azure Windows container, and the app database to an Azure SQL
database.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that
illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. Additional articles will be added over time.

Article 1: Overview. Overview of the article series, Contoso's migration strategy, and the sample apps that
are used in the series. Status: Available.

Article 2: Deploy Azure infrastructure. Contoso prepares its on-premises infrastructure and its Azure
infrastructure for migration. The same infrastructure is used for all migration articles in the series.
Status: Available.

Article 3: Assess on-premises resources for migration to Azure. Contoso runs an assessment of its on-premises
SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app
SQL Server database using Data Migration Assistant. Status: Available.

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance. Contoso runs a lift-and-shift
migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure
Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance using the Azure
Database Migration Service. Status: Available.

Article 5: Rehost an app on Azure VMs. Contoso migrates its SmartHotel360 app VMs to Azure VMs using the
Site Recovery service. Status: Available.

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group. Contoso migrates the
SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service
to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Status: Available.

Article 7: Rehost a Linux app on Azure VMs. Contoso completes a lift-and-shift migration of the Linux
osTicket app to Azure VMs, using Azure Site Recovery. Status: Available.

Article 8: Rehost a Linux app on Azure VMs and Azure MySQL. Contoso migrates the Linux osTicket app to Azure
VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL
Workbench. Status: Available.

Article 9: Refactor an app on Azure Web Apps and Azure SQL database. Contoso migrates the SmartHotel360 app
to an Azure Web App, and migrates the app database to an Azure SQL Server instance with Database Migration
Assistant. Status: Available.

Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL. Contoso migrates its Linux osTicket app
to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for
continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance.
Status: Available.

Article 11: Refactor TFS on Azure DevOps Services. Contoso migrates its on-premises Team Foundation Server
deployment to Azure DevOps Services in Azure. Status: Available.

Article 12: Rearchitect an app on Azure Containers and Azure SQL Database. Contoso migrates its SmartHotel
app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric,
and the database with Azure SQL Database. Status: This article.

Article 13: Rebuild an app in Azure. Contoso rebuilds its SmartHotel app by using a range of Azure
capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions,
Azure Cognitive Services, and Azure Cosmos DB. Status: Available.

Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale
to a full migration to Azure. Status: Available.

In this article, Contoso migrates the two-tier Windows WPF, XAML forms SmartHotel360 app running on
VMware VMs to Azure. If you'd like to use this app, it's provided as open source and you can download it from
GitHub.

Business drivers
The Contoso IT leadership team has worked closely with business partners to understand what they want to
achieve with this migration:
Address business growth: Contoso is growing, and as a result there is pressure on its on-premises systems
and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, so it can deliver faster
on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster to changes in the marketplace, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.
Costs: Contoso wants to minimize licensing costs.

Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method.

App reqs:
The app in Azure will remain as critical as it is today.
It should have the same performance capabilities as it currently does in VMware.
Contoso wants to stop supporting Windows Server 2008 R2, on which the app currently runs, and is willing to
invest in the app.
Contoso wants to move away from SQL Server 2008 R2 to a modern PaaS database platform, which will minimize
the need for management.
Contoso wants to leverage its investment in SQL Server licensing and Software Assurance where possible.
Contoso wants to be able to scale up the app web tier.

Limitations:
The app consists of an ASP.NET app and a WCF service running on the same VM. Contoso wants to split this
across two web apps using the Azure App Service.

Azure reqs:
Contoso wants to move the app to Azure, and run it in a container to extend app life. It doesn't want to
start completely from scratch to implement the app in Azure.

DevOps:
Contoso wants to move to a DevOps model using Azure DevOps Services for code builds and release pipeline.

Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM ).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed architecture
For the database tier of the app, Contoso compared Azure SQL Database with SQL Server using this
article. It decided to go with Azure SQL Database for a few reasons:
Azure SQL Database is a relational-database managed service. It delivers predictable performance at
multiple service levels, with near-zero administration. Advantages include dynamic scalability with no
downtime, built-in intelligent optimization, and global scalability and availability.
Contoso leverages the lightweight Data Migration Assistant (DMA) to assess and migrate the on-
premises database to Azure SQL.
With Software Assurance, Contoso can exchange its existing licenses for discounted rates on a SQL
Database, using the Azure Hybrid Benefit for SQL Server. This could provide savings of up to 30%.
SQL Database provides a number of security features, including Always Encrypted, dynamic data
masking, and row-level security/threat detection.
For the app web tier, Contoso has decided to convert it to a Windows container using Azure DevOps
Services.
Contoso will deploy the app using Azure Service Fabric, and pull the Windows container image from
the Azure Container Registry (ACR ).
A prototype for extending the app to include sentiment analysis will be implemented as another
service in Service Fabric, connected to Cosmos DB. This will read information from Tweets, and
display on the app.
To implement a DevOps pipeline, Contoso will use Azure DevOps for source code management (SCM ),
with Git repos. Automated builds and releases will be used to build code, and deploy it to the Azure
Container Registry and Azure Service Fabric.

Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.

Pros:
The SmartHotel360 app code will need to be altered for migration to Azure Service Fabric. However, the
effort is minimal, using the Service Fabric SDK tools for the changes.
With the move to Service Fabric, Contoso can start to develop microservices to add to the application
quickly over time, without risk to the original code base.
Windows Containers offer the same benefits as containers in general. They improve agility, portability,
and control.
Contoso can leverage its investment in Software Assurance using the Azure Hybrid Benefit for both SQL Server
and Windows Server.
After the migration it will no longer need to support Windows Server 2008 R2. Learn more.
Contoso can configure the web tier of the app with multiple instances, so that it's no longer a single point
of failure.
It will no longer be dependent on the aging SQL Server 2008 R2.
SQL Database supports Contoso's technical requirements. Contoso admins assessed the on-premises database
using the Database Migration Assistant and found it compatible.
SQL Database has built-in fault tolerance that Contoso doesn't need to set up. This ensures that the data
tier is no longer a single point of failure.

Cons:
Containers are more complex than other migration options. The learning curve on containers could be an issue
for Contoso. They introduce a new level of complexity that provides a lot of value in spite of the curve.
The operations team at Contoso will need to ramp up to understand and support Azure, containers, and
microservices for the app.
If Contoso uses the Data Migration Assistant instead of the Azure Database Migration Service to migrate the
database, it won't have the infrastructure ready for migrating databases at scale.

Migration process
1. Contoso provisions the Azure Service Fabric cluster for Windows.
2. It provisions an Azure SQL instance, and migrates the SmartHotel360 database to it.
3. Contoso converts the web tier VM to a Docker container using the Service Fabric SDK tools.
4. It connects the Service Fabric cluster and the ACR, and deploys the app using Azure Service Fabric.
Azure services
Database Migration Assistant (DMA): Assesses and detects compatibility issues that might impact database
functionality in Azure. DMA assesses feature parity between SQL sources and targets, and recommends
performance and reliability improvements. Cost: It's a downloadable tool, free of charge.

Azure SQL Database: Provides an intelligent, fully managed relational cloud database service. Cost: Based on
features, throughput, and size. Learn more.

Azure Container Registry: Stores images for all types of container deployments. Cost: Based on features,
storage, and usage duration. Learn more.

Azure Service Fabric: Builds and operates always-on, scalable, and distributed apps. Cost: Based on size,
location, and duration of the compute nodes. Learn more.

Azure DevOps: Provides a continuous integration and continuous deployment (CI/CD) pipeline for app
development. The pipeline starts with a Git repository for managing app code, a build system for producing
packages and other build artifacts, and a Release Management system to deploy changes in dev, test, and
production environments.

Prerequisites
Here's what Contoso needs to run this scenario:

Azure subscription: Contoso created subscriptions earlier in this article series. If you don't have an Azure
subscription, create a free account. If you create a free account, you're the administrator of your
subscription and can perform all actions. If you use an existing subscription and you're not the
administrator, you need to work with the admin to assign you Owner or Contributor permissions.

Azure infrastructure: Learn how Contoso previously set up an Azure infrastructure.

Developer prerequisites: Contoso needs the following tools on a developer workstation:
- Visual Studio 2017 Community Edition: Version 15.5
- .NET workload enabled
- Git
- Service Fabric SDK v 3.0 or later
- Docker CE (Windows 10) or Docker EE (Windows Server) set to use Windows Containers

Scenario steps
Here's how Contoso runs the migration:
Step 1: Provision a SQL Database instance in Azure: Contoso provisions a SQL instance in Azure. After
the frontend web VM is migrated to an Azure container, the container instance with the app web frontend
will point to this database.
Step 2: Create an Azure Container Registry (ACR): Contoso provisions an enterprise container registry
for the docker container images.
Step 3: Provision Azure Service Fabric: It provisions a Service Fabric Cluster.
Step 4: Manage service fabric certificates: Contoso sets up certificates for Azure DevOps Services access
to the cluster.
Step 5: Migrate the database with DMA: It migrates the app database with the Database Migration
Assistant.
Step 6: Set up Azure DevOps Services: Contoso sets up a new project in Azure DevOps Services, and
imports the code into the Git Repo.
Step 7: Convert the app: Contoso converts the app to a container using Azure DevOps and SDK tools.
Step 8: Set up build and release: Contoso sets up the build and release pipelines to create and publish the
app to the ACR and Service Fabric Cluster.
Step 9: Extend the app: After the app is public, Contoso extends it to take advantage of Azure capabilities,
and republishes it to Azure using the pipeline.

Step 1: Provision an Azure SQL Database


Contoso admins provision an Azure SQL database.
1. They select to create a SQL Database in Azure.
2. They specify a database name to match the database running on the on-premises VM
(SmartHotel.Registration). They place the database in the ContosoRG resource group. This is the
resource group they use for production resources in Azure.

3. They set up a new SQL Server instance (sql-smarthotel-eus2) in the primary region.
4. They set the pricing tier to match server and database needs. And they select to save money with Azure
Hybrid Benefit because they already have a SQL Server license.
5. For sizing, they use vCore-based purchasing, and set the limits for the expected requirements.

6. Then they create the database instance.


7. After the instance is created, they open the database, and note details they need when they use the
Database Migration Assistant for migration.

Need more help?


Get help provisioning a SQL Database.
Learn about vCore resource limits.

Step 2: Create an ACR and provision an Azure Container


The Azure container is created using the exported files from the web VM. The container is housed in the Azure
Container Registry (ACR).
1. Contoso admins create a Container Registry in the Azure portal.

2. They provide a name for the registry (contosoacreus2), and place it in the primary region, in the
resource group they use for their infrastructure resources. They enable access for admin users, and set it
as a premium SKU so that they can leverage geo-replication.
Step 3: Provision Azure Service Fabric
The SmartHotel360 container will run in the Azure Service Fabric Cluster. Contoso admins create the Service
Fabric Cluster as follows:
1. They create a Service Fabric resource from the Azure Marketplace.
2. In Basics, they provide a unique DNS name for the cluster, and credentials for accessing the cluster
VMs. They place the resource in the production resource group (ContosoRG) in the primary East US 2
region.

3. In Node type configuration, they input a node type name, durability settings, VM size, and app
endpoints.
4. In Create key vault, they create a new key vault in their infrastructure resource group, to house the
certificate.
5. In Access policies, they enable access to Azure Virtual Machines for deployment, so that the cluster VMs can retrieve the certificate from the key vault.

6. They specify a name for the certificate.


7. In the summary page, they copy the link that's used to download the certificate. They need this to
connect to the Service Fabric Cluster.
8. After validation passes, they provision the cluster.
9. In the Certificate Import Wizard, they import the downloaded certificate to dev machines. The certificate
is used to authenticate to the cluster.

10. After the cluster is provisioned, they connect to the Service Fabric Cluster Explorer.
11. They need to select the correct certificate.

12. The Service Fabric Explorer loads, and the Contoso Admin can manage the cluster.

Step 4: Manage Service Fabric certificates


Contoso needs cluster certificates to allow Azure DevOps Services access to the cluster. Contoso admins set this
up.
1. They open the Azure portal and browse to the KeyVault.
2. They open the certificates, and copy the thumbprint of the certificate that was created during the
provisioning process.

3. They copy it to a text file for later reference.


4. Now, they add a client certificate that will become an Admin client certificate on the cluster. This allows
Azure DevOps Services to connect to the cluster for the app deployment in the release pipeline. To do
this, they open the key vault in the portal, and select Certificates > Generate/Import.

5. They enter the name of the certificate, and provide an X.509 distinguished name in Subject.

6. After the certificate is created, they download it locally in PFX format.

7. Now, they go back to the certificates list in the KeyVault, and copy the thumbprint of the client certificate
that's just been created. They save it in the text file.

8. For Azure DevOps Services deployment, they need to determine the Base64 value of the certificate. They
do this on the local developer workstation using PowerShell. They paste the output into a text file for
later use.

[System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("C:\path\to\certificate.pfx"))

9. Finally, they add the new certificate to the Service Fabric cluster. To do this, in the portal they open the
cluster, and click Security.

10. They click Add > Admin Client, and paste in the thumbprint of the new client certificate. Then they click
Add. This can take up to 15 minutes.
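Two of the certificate chores above lend themselves to scripting: the Base64 step (shown in PowerShell in step 8) and checking a thumbprint, which for X.509 is the SHA-1 hash of the DER-encoded certificate. A hedged Python sketch, with hypothetical file paths:

```python
import base64
import hashlib

def pfx_to_base64(path):
    """Base64-encode a PFX file; equivalent to the PowerShell one-liner in step 8."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def cert_thumbprint(der_bytes):
    """An X.509 thumbprint is the uppercase hex SHA-1 of the DER-encoded certificate."""
    return hashlib.sha1(der_bytes).hexdigest().upper()
```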

Step 5: Migrate the database with DMA


Contoso admins can now migrate the SmartHotel360 database using DMA.
Install DMA
1. They download the tool from the Microsoft Download Center to the on-premises SQL Server VM (SQLVM).
2. They run setup (DownloadMigrationAssistant.msi) on the VM.
3. On the Finish page, they select Launch Microsoft Data Migration Assistant before finishing the wizard.
Configure the firewall
To connect to the Azure SQL Database, Contoso admins set up a firewall rule to allow access.
1. In the Firewall and virtual networks properties for the database, they allow access to Azure services,
and add a rule for the client IP address of the on-premises SQL Server VM.
2. A server-level firewall rule is created.
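As a local sanity check (not an Azure API call), you can verify that the on-premises VM's client IP actually falls inside the start/end range of the server-level rule; the addresses below are documentation examples:

```python
import ipaddress

def ip_allowed(client_ip, start_ip, end_ip):
    """Check whether a client address falls within a firewall rule's IP range."""
    ip = ipaddress.ip_address(client_ip)
    return ipaddress.ip_address(start_ip) <= ip <= ipaddress.ip_address(end_ip)
```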

Need more help?


Learn about creating and managing firewall rules for Azure SQL Database.
Migrate
Contoso admins now migrate the database.
1. In DMA, they create a new project (SmartHotelDB), and select Migration.
2. They select the source server type as SQL Server, and the target as Azure SQL Database.

3. In the migration details, they add SQLVM as the source server, and the SmartHotel.Registration
database.

4. They receive an error which seems to be associated with authentication. However, after investigating, they
find the issue is the period (.) in the database name. As a workaround, they provision a new SQL
database named SmartHotel-Registration to resolve the issue. When they run DMA again,
they're able to select SmartHotel-Registration, and continue with the wizard.
5. In Select Objects, they select the database tables, and generate a SQL script.

6. After DMA creates the script, they click Deploy schema.


7. DMA confirms that the deployment succeeded.

8. Now they start the migration.


9. After the migration finishes, Contoso can verify that the database is running on the Azure SQL instance.

10. They delete the extra SQL database SmartHotel.Registration in the Azure portal.
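The workaround in step 4 (the period in SmartHotel.Registration broke the DMA target connection) can be generalized with a tiny hypothetical helper when scripting database provisioning:

```python
def dma_safe_db_name(name):
    """Replace periods, which broke the DMA target connection, with hyphens."""
    return name.replace(".", "-")
```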

Step 6: Set up Azure DevOps Services


Contoso needs to build the DevOps infrastructure and pipelines for the application. To do this, Contoso admins
create a new Azure DevOps project, import their code, and then build and release pipelines.
1. In the Contoso Azure DevOps account, they create a new project (ContosoSmartHotelRearchitect),
and select Git for version control.

2. They import the Git Repo that currently holds their app code. It's in a public repo and you can download
it.

3. After the code is imported, they connect Visual Studio to the repo, and clone the code using Team
Explorer.
4. After the repo is cloned to the developer machine, they open the Solution file for the app. The web app
and WCF service each have a separate project within the file.
Step 7: Convert the app to a container
The on-premises app is a traditional three-tier app:
It contains WebForms and a WCF Service connecting to SQL Server.
It uses Entity Framework to integrate with the data in the SQL database, exposing it through a WCF service.
The WebForms application interacts with the WCF service.
Contoso admins will convert the app to a container using Visual Studio and the SDK Tools, as follows:
1. Using Visual Studio, they open the solution file (SmartHotel.Registration.sln) in the
SmartHotel360-internal-booking-apps\src\Registration directory of the local repo. Two apps are
shown: the web frontend SmartHotel.Registration.Web, and the WCF service app
SmartHotel.Registration.WCF.

2. They right-click the web app > Add > Container Orchestrator Support.
3. In Add Container Orchestrator Support, they select Service Fabric.

4. They repeat the process for SmartHotel.Registration.WCF app.


5. Now, they check how the solution has changed.
The new app is SmartHotel.RegistrationApplication.
It contains two services: SmartHotel.Registration.WCF and SmartHotel.Registration.Web.
6. Visual Studio created the Dockerfile, and pulled down the required images locally to the developer
machine.

7. A manifest file (ServiceManifest.xml) is created and opened by Visual Studio. This file tells Service
Fabric how to configure the container when it's deployed to Azure.

8. Another manifest file (ApplicationManifest.xml) contains the application-level configuration for the
containers.

9. They open the ApplicationParameters/Cloud.xml file, and update the connection string to connect the
app to the Azure SQL database. The connection string can be located in the database in the Azure portal.
10. They commit the updated code and push to Azure DevOps Services.

Step 8: Build and release pipelines in Azure DevOps Services


Contoso admins now configure Azure DevOps Services to perform the build and release process that supports
their DevOps practices.
1. In Azure DevOps Services, they click Build and release > New pipeline.

2. They select Azure DevOps Services Git and the relevant repo.
3. In Select a template, they select the Service Fabric with Docker support template.

4. They change the action for Tag images to Build an image, and configure the task to use the provisioned
ACR.
5. In the Push images task, they configure the image to be pushed to the ACR, and select to include the
latest tag.
6. In Triggers, they enable continuous integration, and add the master branch.

7. They click Save and Queue to start a build.


8. After the build succeeds, they move onto the release pipeline. In Azure DevOps Services they click
Releases > New pipeline.

9. They select the Azure Service Fabric deployment template, and name the Stage (SmartHotelSF).

10. They provide a pipeline name (ContosoSmartHotel360Rearchitect). For the stage, they click 1 job, 1
task to configure the Service Fabric deployment.

11. Now, they click New to add a new cluster connection.


12. In Add Service Fabric service connection, they configure the connection, and the authentication
settings that will be used by Azure DevOps Services to deploy the app. The cluster endpoint can be
located in the Azure portal, and they add tcp:// as a prefix.
13. The certificate information they collected is input in Server Certificate Thumbprint and Client
Certificate.

14. They click the pipeline > Add an artifact.

15. They select the project and build pipeline, using the latest version.
16. Note that the lightning bolt on the artifact is checked.

17. In addition, note that the continuous deployment trigger is enabled.

18. They click Save > Create a release.


19. After the deployment finishes, SmartHotel360 will now be running in Service Fabric.

20. To connect to the app, they direct traffic to the public IP address of the Azure load balancer in front of the
Service Fabric nodes.
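The cluster connection endpoint assembled in step 12 (the portal value plus a tcp:// prefix) can be built with a small helper; 19000 is the default Service Fabric client connection port, though your cluster may use a different one:

```python
def sf_connection_endpoint(cluster_fqdn, port=19000):
    """Build the service connection endpoint: tcp:// prefix plus the client port."""
    return f"tcp://{cluster_fqdn}:{port}"
```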

Step 9: Extend the app and republish


After the SmartHotel360 app and database are running in Azure, Contoso wants to extend the app.
Contoso's developers are prototyping a new .NET Core application that will run on the Service Fabric
cluster. The app will be used to pull sentiment data from Cosmos DB. This data takes the form of tweets
that are processed using a serverless Azure function and the Cognitive Services Text Analytics API.
Provision Azure Cosmos DB
As a first step, Contoso admins provision an Azure Cosmos database.
1. They create an Azure Cosmos DB resource from the Azure Marketplace.
2. They provide a database name (contososmarthotel), select the SQL API, and place the resource in the
production resource group, in the primary East US 2 region.

3. In Getting Started, they select Data Explorer, and add a new collection.
4. In Add Collection they provide IDs and set storage capacity and throughput.

5. In the portal, they open the new database > Collection > Documents and click New Document.
6. They paste the following JSON code into the document window. This is sample data in the form of a
single tweet.

{
    "id": "2ed5e734-8034-bf3a-ac85-705b7713d911",
    "tweetId": 927750234331580911,
    "tweetUrl": "https://twitter.com/status/927750237331580911",
    "userName": "CoreySandersWA",
    "userAlias": "@CoreySandersWA",
    "userPictureUrl": "",
    "text": "This is a tweet about #SmartHotel360",
    "language": "en",
    "sentiment": 0.5,
    "retweet_count": 1,
    "followers": 500,
    "hashtags": [
        ""
    ]
}
7. They locate the Cosmos DB endpoint, and the authentication key. These are used in the app to connect to
the collection. In the database, they click Keys, and copy the URI and primary key to Notepad.
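The same values can be read with the Azure CLI instead of being copied from the portal. This is a hedged sketch — the account name (contososmarthotel) matches the name used above, but the resource group name (ContosoRG) is an assumption; adjust both to your deployment:

```shell
# Read the Cosmos DB endpoint (URI); names are assumptions, adjust to your deployment
az cosmosdb show \
    --name contososmarthotel \
    --resource-group ContosoRG \
    --query documentEndpoint --output tsv

# Read the primary key used to authenticate against the account
az cosmosdb keys list \
    --name contososmarthotel \
    --resource-group ContosoRG \
    --query primaryMasterKey --output tsv
```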

Update the sentiment app


With the Cosmos DB provisioned, Contoso admins can configure the app to connect to it.
1. In Visual Studio, they open the file ApplicationModern\ApplicationParameters\cloud.xml in Solution
Explorer.
2. They fill in the following two parameters:

<Parameter Name="SentimentIntegration.CosmosDBEndpoint" Value="[URI]" />

<Parameter Name="SentimentIntegration.CosmosDBAuthKey" Value="[Key]" />

Republish the app


After extending the app, Contoso admins republish it to Azure using the pipeline.
1. They commit and push their code to Azure DevOps Services. This kicks off the build and release
pipelines.
2. After the build and deployment finishes, SmartHotel360 will now be running on Service Fabric. The Service
Fabric Management console now shows three services.

3. They can now click through the services to see that the SentimentIntegration app is up and running.
Clean up after migration
After migration, Contoso needs to complete these cleanup steps:
Remove the on-premises VMs from the vCenter inventory.
Remove the VMs from local backup jobs.
Update internal documentation to show the new locations for the SmartHotel360 app. Show the database as
running in Azure SQL database, and the front end as running in Service Fabric.
Review any resources that interact with the decommissioned VMs, and update any relevant settings or
documentation to reflect the new configuration.

Review the deployment


With the migrated resources in Azure, Contoso needs to fully operationalize and secure their new infrastructure.
Security
Contoso admins need to ensure that their new SmartHotel-Registration database is secure. Learn more.
In particular, they should update the container to use SSL with certificates.
They should consider using KeyVault to protect secrets for their Service Fabric apps. Learn more.
Backups
Contoso needs to review backup requirements for the Azure SQL Database. Learn more.
Contoso admins should consider implementing failover groups to provide regional failover for the database.
Learn more.
They can leverage geo-replication for the ACR premium SKU. Learn more.
Contoso needs to consider deploying the web app in both the East US 2 and Central US regions when Web
App for Containers becomes available. Contoso admins could configure Traffic Manager to ensure failover in
case of regional outages.
Cosmos DB backs up automatically. Contoso can learn more about this process.
Licensing and cost optimization
After all resources are deployed, Contoso should assign Azure tags based on infrastructure planning.
All licensing is built into the cost of the PaaS services that Contoso is consuming. This will be deducted from
the EA.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn
more about Azure Cost Management.

Conclusion
In this article, Contoso refactored the SmartHotel360 app in Azure by migrating the app frontend VM to
Service Fabric. The app database was migrated to an Azure SQL database.
Contoso migration: Rebuild an on-premises app to
Azure
4/4/2019 • 22 minutes to read • Edit Online

This article demonstrates how Contoso migrates and rebuilds the SmartHotel360 app in Azure. Contoso
migrates the app's front-end VM to Azure App Service web apps. The app back end is built using
microservices deployed to containers managed by Azure Kubernetes Service (AKS). The site interacts with
Azure Functions to provide pet photo functionality.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. We'll add additional articles over time.

ARTICLE | DETAILS | STATUS

Article 1: Overview | Provides an overview of Contoso's migration strategy, the article series, and the sample apps we use. | Available

Article 2: Deploy an Azure infrastructure | Describes how Contoso prepares its on-premises and Azure infrastructure for migration. The same infrastructure is used for all migration articles. | Available

Article 3: Assess on-premises resources | Shows how Contoso runs an assessment of an on-premises two-tier SmartHotel360 app running on VMware. Contoso assesses app VMs with the Azure Migrate service, and the app SQL Server database with the Database Migration Assistant. | Available

Article 4: Rehost an app on Azure VMs and a SQL Managed Instance | Demonstrates how Contoso runs a lift-and-shift migration to Azure for the SmartHotel360 app. Contoso migrates the app frontend VM using Azure Site Recovery, and the app database to a SQL Managed Instance, using the Azure Database Migration Service. | Available

Article 5: Rehost an app on Azure VMs | Shows how Contoso migrates the SmartHotel360 app VMs using Site Recovery only. | Available

Article 6: Rehost an app to Azure VMs and SQL Server Always On Availability Group | Shows how Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster protected by an AlwaysOn availability group. | Available

Article 7: Rehost a Linux app on Azure VMs | Shows how Contoso does a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Site Recovery. | Available

Article 8: Rehost a Linux app on Azure VMs and Azure MySQL Server | Demonstrates how Contoso migrates the Linux osTicket app to Azure VMs using Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. | Available

Article 9: Refactor an app on Azure Web Apps and Azure SQL database | Demonstrates how Contoso migrates the SmartHotel360 app to an Azure Web App, and migrates the app database to an Azure SQL Server instance. | Available

Article 10: Refactor a Linux app to Azure Web Apps and Azure MySQL | Shows how Contoso migrates the Linux osTicket app to Azure Web Apps in multiple sites, integrated with GitHub for continuous delivery. They migrate the app database to an Azure MySQL instance. | Available

Article 11: Refactor TFS on Azure DevOps Services | Shows how Contoso migrates the on-premises Team Foundation Server (TFS) deployment by migrating it to Azure DevOps Services in Azure. | Available

Article 12: Rearchitect an app to Azure containers and SQL Database | Shows how Contoso migrates and rearchitects their SmartHotel app to Azure. They rearchitect the app web tier as a Windows container, and the app database in an Azure SQL Database. | Available

Article 13: Rebuild an app to Azure | Shows how Contoso rebuilds their SmartHotel app using a range of Azure capabilities and services, including App Service, Azure Kubernetes Service, Azure Functions, Cognitive Services, and Cosmos DB. | This article

Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available

In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.

NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which
will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install
Azure PowerShell.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and wants to provide differentiated experiences for
customers on Contoso websites.
Agility: Contoso must be able to react faster than changes in the marketplace, to enable success in a
global economy.
Scale: As the business grows successfully, the Contoso IT team must provide systems that are able to grow
at the same pace.
Costs: Contoso wants to minimize licensing costs.

Migration goals
The Contoso cloud team has pinned down app requirements for this migration. These requirements were used
to determine the best migration method:
The app in Azure will remain as critical as it is today. It should perform well and scale easily.
The app shouldn't use IaaS components. Everything should be built to use PaaS or serverless services.
The app builds should run in cloud services, and containers should reside in a private Enterprise-wide
container registry in the cloud.
The API service used for pet photos should be accurate and reliable in the real world, since decisions made
by the app must be honored in their hotels. Any pet granted access is allowed to stay at the hotels.
To meet requirements for a DevOps pipeline, Contoso will use Azure DevOps for Source Code Management
(SCM ), with Git Repos. Automated builds and releases will be used to build code, and deploy it to the Azure
Web Apps, Azure Functions and AKS.
Different CI/CD pipelines are needed for microservices on the backend, and for the web site on the frontend.
The backend services have a different release cycle from the frontend web app. To meet this requirement,
they will deploy two different DevOps pipelines.
Contoso needs management approval for all front end website deployment, and the CI/CD pipeline must
provide this.

Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that will be used for the migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed architecture
The frontend of the app is deployed as an Azure App Services Web app, in the primary Azure region.
An Azure function provides uploads of pet photos, and the site interacts with this functionality.
The pet photo function leverages Cognitive Services Vision API, and CosmosDB.
The back end of the site is built using microservices. These will be deployed to containers managed on
the Azure Kubernetes Service (AKS).
Containers will be built using Azure DevOps, and pushed to the Azure Container Registry (ACR).
For now, Contoso will manually deploy the Web app and function code using Visual Studio.
Microservices will be deployed using a PowerShell script that calls Kubernetes command-line tools.

Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.

CONSIDERATION | DETAILS

Pros | Using PaaS and serverless solutions for the end-to-end deployment significantly reduces the management time that Contoso must provide.
Moving to a microservice architecture allows Contoso to easily extend the solution over time.
New functionality can be brought online without disrupting any of the existing solution's code bases.
The Web App will be configured with multiple instances, with no single point of failure.
Autoscaling will be enabled so that the app can handle differing traffic volumes.
With the move to PaaS services, Contoso can retire out-of-date solutions running on the Windows Server 2008 R2 operating system.
Cosmos DB has built-in fault tolerance, which requires no configuration by Contoso. This means that the data tier is no longer a single point of failure.

Cons | Containers are more complex than other migration options. The learning curve could be an issue for Contoso. They introduce a new level of complexity that provides a lot of value in spite of the curve.
The operations team at Contoso needs to ramp up to understand and support Azure, containers, and microservices for the app.
Contoso hasn't fully implemented DevOps for the entire solution. Contoso needs to think about that for the deployment of services to AKS, Functions, and App Service.

Migration process
1. Contoso provisions the ACR, AKS, and Cosmos DB.
2. They provision the infrastructure for the deployment, including the Azure Web App, storage account,
function, and API.
3. After the infrastructure is in place, they'll build their microservices container images using Azure
DevOps, which pushes them to the ACR.
4. Contoso will deploy these microservices to AKS using a PowerShell script.
5. Finally, they'll deploy the Azure function and Web App.

Azure services
SERVICE | DESCRIPTION | COST

AKS | Simplifies Kubernetes management, deployment, and operations. Provides a fully managed Kubernetes container orchestration service. | AKS is a free service. Pay for only the virtual machines, and associated storage and networking resources consumed. Learn more.

Azure Functions | Accelerates development with an event-driven, serverless compute experience. Scale on demand. | Pay only for consumed resources. Plan is billed based on per-second resource consumption and executions. Learn more.

Azure Container Registry | Stores images for all types of container deployments. | Cost based on features, storage, and usage duration. Learn more.

Azure App Service | Quickly build, deploy, and scale enterprise-grade web, mobile, and API apps running on any platform. | App Service plans are billed on a per-second basis. Learn more.

Prerequisites
Here's what Contoso needs for this scenario:

REQUIREMENTS | DETAILS

Azure subscription | Contoso created subscriptions during an earlier article. If you don't have an Azure subscription, create a free account. If you create a free account, you're the administrator of your subscription and can perform all actions. If you use an existing subscription and you're not the administrator, you need to work with the admin to assign you Owner or Contributor permissions.

Azure infrastructure | Learn how Contoso set up an Azure infrastructure.

Developer prerequisites | Contoso needs the following tools on a developer workstation:
- Visual Studio 2017 Community Edition: Version 15.5, with the .NET workload enabled.
- Git
- Azure PowerShell
- Azure CLI
- Docker CE (Windows 10) or Docker EE (Windows Server), set to use Windows containers.

Scenario steps
Here's how Contoso will run the migration:
Step 1: Provision AKS and ACR: Contoso provisions the managed AKS cluster and Azure Container
Registry using PowerShell.
Step 2: Build Docker containers: They set up CI for Docker containers using Azure DevOps, and push
them to the ACR.
Step 3: Deploy back-end microservices: They deploy the rest of the infrastructure that will be leveraged
by back-end microservices.
Step 4: Deploy front-end infrastructure: They deploy the front-end infrastructure, including blob storage
for the pet photos, the Cosmos DB, and Vision API.
Step 5: Migrate the back end: They deploy microservices and run on AKS, to migrate the back end.
Step 6: Publish the front end: They publish the SmartHotel360 app to the Azure App Service, and the
Function App that will be called by the pet service.

Step 1: Provision back-end resources


Contoso admins run a deployment script to create the managed Kubernetes cluster using AKS and the Azure
Container Registry (ACR).
The instructions for this section use the SmartHotel360-Azure-backend repository.
The SmartHotel360-Azure-backend GitHub repo contains all of the software for this part of the
deployment.
Prerequisites
1. Before they start, Contoso admins ensure that all prerequisite software is installed on the dev machine
they're using for the deployment.
2. They clone the repo locally to the dev machine using Git: git clone
https://github.com/Microsoft/SmartHotel360-Azure-backend.git
Provision AKS and ACR
The Contoso admins provision as follows:
1. They open the folder using Visual Studio Code, and move to the /deploy/k8s directory, which contains
the script gen-aks-env.ps1.
2. They review the script, which creates the managed Kubernetes cluster using AKS and ACR.

3. With the file open, they update the $location parameter to eastus2, and save the file.
4. They click View > Integrated Terminal to open the integrated terminal in Code.

5. In the PowerShell Integrated terminal, they sign into Azure using the Connect-AzAccount command.
Learn more about getting started with PowerShell.

6. They authenticate Azure CLI by running the az login command, and following the instructions to
authenticate using their web browser. Learn more about logging in with Azure CLI.

7. They run the following command, passing the resource group name of ContosoRG, the name of the AKS
cluster smarthotel-aks-eus2, and the new registry name.

.\gen-aks-env.ps1 -resourceGroupName ContosoRg -orchestratorName smarthotelakseus2 -registryName smarthotelacreus2

8. Azure creates another resource group, containing the resources for the AKS cluster.
9. After the deployment is finished, they install the kubectl command-line tool. The tool is already installed
on the Azure CloudShell.
az aks install-cli
10. They verify the connection to the cluster by running the kubectl get nodes command. The node has the
same name as the VM in the automatically created resource group.

11. They run the following command to start the Kubernetes Dashboard:
az aks browse --resource-group ContosoRG --name smarthotelakseus2
12. A browser tab opens to the Dashboard. This is a tunneled connection using the Azure CLI.
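Steps 9 through 11 above can be sketched as a short CLI sequence. Note that kubectl needs cluster credentials before kubectl get nodes will work; the resource group and cluster names below match the ones used in step 7:

```shell
# Install the kubectl CLI (already present in Azure Cloud Shell)
az aks install-cli

# Merge credentials for the new cluster into ~/.kube/config
az aks get-credentials --resource-group ContosoRG --name smarthotelakseus2

# Verify the connection; the node should report a Ready status
kubectl get nodes

# Open the Kubernetes dashboard over a tunneled connection
az aks browse --resource-group ContosoRG --name smarthotelakseus2
```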
Step 2: Configure the back-end pipeline
Create an Azure DevOps project and build
Contoso creates an Azure DevOps project, and configures a CI build to create the container and then push it
to the ACR. The instructions in this section use the SmartHotel360-Azure-Backend repository.
1. From visualstudio.com, they create a new organization (contosodevops360.visualstudio.com ), and
configure it to use Git.
2. They create a new project (SmartHotelBackend) using Git for version control, and Agile for the
workflow.

3. They import the GitHub repo.


4. In Pipelines, they click Build, and create a new pipeline using Azure Repos Git as a source, from the
repository.

5. They select to start with an empty job.


6. They select Hosted Linux Preview for the build pipeline.

7. In Phase 1, they add a Docker Compose task. This task builds the Docker compose.

8. They repeat and add another Docker Compose task. This one pushes the containers to ACR.
9. They select the first task (to build), and configure the build with the Azure subscription, authorization, and
the ACR.

10. They specify the path of the docker-compose.yaml file, in the src folder of the repo. They select to build
service images and include the latest tag. When the action changes to Build service images, the name
of the Azure DevOps task changes to Build services automatically.
11. Now, they configure the second Docker task (to push). They select the subscription and the
smarthotelacreus2 ACR.

12. Again, they enter the path to the docker-compose.yaml file, and select Push service images and include
the latest tag. When the action changes to Push service images, the name of the Azure DevOps task
changes to Push services automatically.
13. With the Azure DevOps tasks configured, Contoso saves the build pipeline, and starts the build process.

14. They click on the build job to check progress.


15. After the build finishes, the ACR shows the new repos, which are populated with the containers used by
the microservices.
Deploy the back-end infrastructure
With the AKS cluster created and the Docker images built, Contoso admins now deploy the rest of the
infrastructure that will be leveraged by back-end microservices.
Instructions in the section use the SmartHotel360-Azure-Backend repo.
In the /deploy/k8s/arm folder, there's a single script to create all items.
They deploy as follows:
1. They open a developer command prompt, and sign in to their Azure subscription using the az login command.
2. They use the deploy.cmd file to deploy the Azure resources in the ContosoRG resource group and EUS2
region, by typing the following command:
.\deploy.cmd azuredeploy ContosoRG -c eastus2
3. In the Azure portal, they capture the connection string for each database, to be used later.
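As an alternative to copying each connection string from the portal, the admins could list them with the CLI. A hedged sketch — the server and database names below are placeholders, since the actual names are generated by the deployment script:

```shell
# Placeholder names; substitute the SQL server and database created by deploy.cmd
az sql db show-connection-string \
    --client ado.net \
    --server <sql-server-name> \
    --name <database-name>
```

The returned string is a template; the user name and password still need to be filled in before use.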

Create the back-end release pipeline


Now, Contoso admins do the following:
Deploy the NGINX ingress controller to allow inbound traffic to the services.
Deploy the microservices to the AKS cluster.
As a first step they update the connection strings to the microservices using Azure DevOps. They then
configure a new Azure DevOps Release pipeline to deploy the microservices.
The instructions in this section use the SmartHotel360-Azure-Backend repo.
Note that some of the configuration settings (for example, Active Directory B2C) aren't covered in this
article. Read more about these settings in the repo.
They create the pipeline:
1. Using Visual Studio they update the /deploy/k8s/config_local.yml file with the database connection
information they noted earlier.

2. They open Azure DevOps, and in the SmartHotel360 project, in Releases, they click +New Pipeline.
3. They click Empty Job to start the pipeline without a template.
4. They provide the stage and pipeline names.

5. They add an artifact.

6. They select Git as the source type, and specify the project, source, and master branch for the
SmartHotel360 app.
7. They click the task link.

8. They add a new Azure PowerShell task so that they can run a PowerShell script in an Azure environment.

9. They select the Azure subscription for the task, and select the deploy.ps1 script from the Git repo.
10. They add arguments to the script. The script will delete all cluster content (except ingress and ingress
controller), and deploy the microservices.

11. They set the preferred Azure PowerShell version to the latest, and save the pipeline.
12. They move back to the Release page, and manually create a new release.
13. They click the release after creating it, and in Actions, they click Deploy.

14. When the deployment is complete, they run the following command to check the status of services, using
the Azure Cloud Shell: kubectl get services.

Step 3: Provision front-end services


Contoso admins need to deploy the infrastructure that will be used by the front end apps. They create a blob
storage container for storing the pet images; the Cosmos database to store documents with the pet information;
and the Vision API for the website.
Instructions for this section use the SmartHotel360-public-web repo.
Create blob storage containers
1. In the Azure portal, they open the storage account that was created, and click Blobs.
2. They create a new container (Pets) with the public access level set to container. Users will upload their
pet photos to this container.
3. They create a second new container named settings. A file with all the front end app settings will be
placed in this container.

4. They capture the access details for the storage account in a text file, for future reference.
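The two containers can also be created with the Azure CLI. A hedged sketch — the storage account name is a placeholder, and because Azure container names must be lowercase, the Pets container is created as pets:

```shell
# Placeholder account name; substitute the storage account created earlier
ACCOUNT=<storage-account-name>
KEY=$(az storage account keys list --account-name "$ACCOUNT" \
      --resource-group ContosoRG --query "[0].value" --output tsv)

# Publicly readable container for uploaded pet photos
az storage container create --name pets --public-access container \
    --account-name "$ACCOUNT" --account-key "$KEY"

# Container for the front-end settings file
az storage container create --name settings \
    --account-name "$ACCOUNT" --account-key "$KEY"
```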

Provision a Cosmos database


Contoso admins provision a Cosmos database to be used for pet information.
1. They create an Azure Cosmos DB in the Azure Marketplace.
2. They specify a name (contosomarthotel), select the SQL API, and place it in the production resource
group ContosoRG, in the main East US 2 region.

3. They add a new collection to the database, with default capacity and throughput.
4. They note the connection information for the database, for future reference.

Provision Computer Vision


Contoso admins provision the Computer Vision API. The API will be called by the function, to evaluate pictures
uploaded by users.
1. They create a Computer Vision instance in the Azure Marketplace.
2. They provision the API (smarthotelpets) in the production resource group ContosoRG, in the main East
US 2 region.

3. They save the connection settings for the API to a text file for later reference.
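The Computer Vision provisioning can be sketched with the CLI as well. The pricing tier (S1) and the resource group name are assumptions:

```shell
# Create the Computer Vision instance; the S1 tier is an assumption
az cognitiveservices account create \
    --name smarthotelpets \
    --resource-group ContosoRG \
    --kind ComputerVision \
    --sku S1 \
    --location eastus2 \
    --yes

# Retrieve the key the function will use to call the API
az cognitiveservices account keys list \
    --name smarthotelpets \
    --resource-group ContosoRG \
    --query key1 --output tsv
```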
Provision the Azure Web App
Contoso admins provision the web app using the Azure portal.
1. They select Web App in the portal.

2. They provide an app name (smarthotelcontoso), run it on Windows, and place it in the production
resource group ContosoRG. They create a new Application Insights instance for app monitoring.
3. After they're done, they browse to the address of the app to check it's been created successfully.
4. Now, in the Azure portal they create a staging slot for the code. The pipeline will deploy to this slot. This
ensures that code isn't put into production until admins perform a release.
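The portal steps above map to a short CLI sequence. This is a hedged sketch; the App Service plan name and SKU are assumptions (deployment slots require the Standard tier or above):

```shell
# Plan name and SKU are assumptions; slots require Standard (S1) or higher
az appservice plan create --name smarthotelplan --resource-group ContosoRG --sku S1

# Create the web app on the plan
az webapp create --name smarthotelcontoso --resource-group ContosoRG --plan smarthotelplan

# Create the staging slot the release pipeline will deploy to
az webapp deployment slot create --name smarthotelcontoso --resource-group ContosoRG --slot staging
```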
Provision the Azure function app
In the Azure portal, Contoso admins provision the Function App.
1. They select Function App.

2. They provide an app name (smarthotelpetchecker). They place the app in the production resource
group ContosoRG. They set the hosting plan to Consumption plan, and place the app in the East US 2
region. A new storage account is created, along with an Application Insights instance for monitoring.
3. After the app is deployed, they browse to the app address to check it's been created successfully.
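The Function App provisioning can be sketched with the CLI. The storage account name below is a placeholder:

```shell
# Consumption-plan function app; the storage account name is a placeholder
az functionapp create \
    --name smarthotelpetchecker \
    --resource-group ContosoRG \
    --consumption-plan-location eastus2 \
    --storage-account <storage-account-name>
```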

Step 4: Set up the front-end pipeline


Contoso admins create two different projects for the front-end site.
1. In Azure DevOps, they create a project SmartHotelFrontend.
2. They import the SmartHotel360 front-end Git repo into the new project.
3. For the Function App, they create another Azure DevOps project (SmartHotelPetChecker), and import
the PetChecker Git repo into this project.
Configure the Web App
Now Contoso admins configure the Web App to use Contoso resources.
1. They connect to the Azure DevOps project, and clone the repo locally to the dev machine.
2. In Visual Studio, they open the folder to show all the files in the repo.

3. They update the configuration changes as required.


When the web app starts up, it looks for the SettingsUrl app setting. This variable must contain a URL pointing to a configuration file. By default, the setting used is a public endpoint.
4. They update the /config-sample.json/sample.json file. This is the configuration file for the web app when
using the public endpoint. They edit the urls and pets_config sections with the values for the AKS API
endpoints, storage accounts, and Cosmos database. The URLs should match the DNS name of the new
web app that Contoso will create. For Contoso, this is smarthotelcontoso.eastus2.cloudapp.azure.com.

5. After the file is updated, they rename it smarthotelsettingsurl, and upload it to the blob storage they
created earlier.

6. They click the file to get the URL. The URL is used by the app when it pulls down the configuration files.

7. In the appsettings.Production.json file, they update the SettingsURL to the URL of the new file.
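The settings file edited in step 4 takes roughly the following shape. This is an illustrative sketch only — the key names and values below are assumptions, not the repo's exact schema; config-sample.json in the SmartHotel360-public-web repo is the authoritative reference:

```json
{
  "urls": {
    "hotels_api": "http://<aks-ingress-dns>/hotels-api",
    "bookings_api": "http://<aks-ingress-dns>/bookings-api"
  },
  "pets_config": {
    "blobContainerUrl": "https://<storage-account>.blob.core.windows.net/pets",
    "cosmosDbEndpoint": "https://<cosmos-account>.documents.azure.com:443/"
  }
}
```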
Deploy the website to the Azure App Service
Contoso admins can now publish the website.
1. They open Azure DevOps, and in the SmartHotelFrontend project, in Builds and Releases, they click
+New Pipeline.
2. They select Azure DevOps Git as a source.
3. They select the ASP.NET Core template.
4. They review the pipeline, and check that Publish Web Projects and Zip Published Projects are
selected.

5. In Triggers, they enable continuous integration, and add the master branch. This ensures that each time
the solution has new code committed to the master branch, the build pipeline starts.
6. They click Save & queue to start a build.
7. After the build completes, they configure a release pipeline using the Azure App Service Deployment.
8. They provide a Stage name Staging.

9. They add an artifact and select the build they just configured.
10. They click the lightning bolt icon on the artifact, and enable continuous deployment.
11. In Environment, they click 1 job, 1 task under Staging.
12. After selecting the subscription, and app name, they open the Deploy Azure App Service task. The
deployment is configured to use the staging deployment slot. This automatically builds code for review
and approval in this slot.
13. In the Pipeline, they add a new stage.
14. They select Azure App Service deployment with slot, and name the environment Prod.
15. They click on 1 job, 2 tasks, and select the subscription, app service name, and the staging slot.
16. They remove the Deploy Azure App Service to Slot from the pipeline. It was placed there by the
previous steps.
17. They save the pipeline. On the pipeline, they click on Post-deployment conditions.
18. They enable Post-deployment approvals, and add a dev lead as the approver.

19. In the Build pipeline, they manually kick off a build. This triggers the new release pipeline, which deploys
the site to the staging slot. For Contoso, the URL for the slot is https://smarthotelcontoso-
staging.azurewebsites.net/.
20. After the build finishes, and the release deploys to the slot, Azure DevOps emails the dev lead for
approval.
21. The dev lead clicks View approval, and can approve or reject the request in the Azure DevOps portal.

22. The lead makes a comment and approves. This starts the swap of the staging and prod slots, and moves
the build into production.
23. The pipeline completes the swap.

24. The team checks the prod slot to verify that the web app is in production at
https://smarthotelcontoso.azurewebsites.net/.
Deploy the PetChecker Function App
Contoso admins deploy the app as follows.
1. They clone the repo locally to the dev machine by connecting to the Azure DevOps project.
2. In Visual Studio, they open the folder to show all the files in the repo.
3. They open the src/PetCheckerFunction/local.settings.json file, and add the app settings for storage,
the Cosmos database, and the Computer Vision API.

4. They commit the code, and sync it back to Azure DevOps, pushing their changes.
5. They add a new Build pipeline, and select Azure DevOps Git for the source.
6. They select the ASP.NET Core (.NET Framework) template.
7. They accept the defaults for the template.
8. In Triggers, they select Enable continuous integration, and click Save & Queue to start a build.
9. After the build succeeds, they build a Release pipeline, adding the Azure App Service deployment
with slot.
10. They name the environment Prod, and select the subscription. They set the App type to Function App,
and the app service name as smarthotelpetchecker.
11. They add an artifact Build.
12. They enable Continuous deployment trigger, and click Save.
13. They click Queue new build to run the full CI/CD pipeline.
14. After the function is deployed, it appears in the Azure portal, with the Running status.
15. They browse to the app to test that the Pet Checker app is working as expected, at
http://smarthotel360public.azurewebsites.net/Pets.
16. They click on the avatar to upload a picture.

17. The first photo they want to check is of a small dog.


18. The app returns a message of acceptance.

Review the deployment


With the migrated resources in Azure, Contoso now needs to fully operationalize and secure the new
infrastructure.
Security
Contoso needs to ensure that the new databases are secure. Learn more.
The app needs to be updated to use SSL with certificates. The container instance should be redeployed to
answer on 443.
Contoso should consider using Azure Key Vault to protect secrets for its Service Fabric apps. Learn more.
Backups and disaster recovery
Contoso needs to review backup requirements for the Azure SQL Database. Learn more.
Contoso should consider implementing SQL failover groups to provide regional failover for the database.
Learn more.
Contoso can leverage geo-replication for the ACR premium SKU. Learn more.
Cosmos DB backs up automatically. Contoso can learn more about this process.
Licensing and cost optimization
After all resources are deployed, Contoso should assign Azure tags based on their infrastructure planning.
All licensing is built into the cost of the PaaS services that Contoso is consuming. This will be deducted from
the EA.
Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn
more about Azure Cost Management.

Conclusion
In this article, Contoso rebuilds the SmartHotel360 app in Azure. The on-premises app front-end VM is rebuilt
to Azure App Services Web Apps. The app back end is built using microservices deployed to containers
managed by Azure Kubernetes Service (AKS). Contoso enhanced app functionality with a pet photo app.
Contoso - Scale a migration to Azure
3/14/2019 • 23 minutes to read • Edit Online

In this article, Contoso performs a migration at scale to Azure. They consider how to plan and perform a
migration of more than 3000 workloads, 8000 databases, and over 10,000 VMs.
This article is one in a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background and planning information,
and deployment scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-
premises resources for migration, and run different types of migrations. Scenarios grow in complexity. We'll
add articles to the series over time.

Article 1: Overview
Overview of the article series, Contoso's migration strategy, and the sample apps that are used in the series. (Available)

Article 2: Deploy an Azure infrastructure
Contoso prepares its on-premises infrastructure and its Azure infrastructure for migration. The same infrastructure is used for all migration articles in the series. (Available)

Article 3: Assess on-premises resources for migration to Azure
Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. (Available)

Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance
Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel360 app. Contoso migrates the app front-end VM using Azure Site Recovery, and the app database to an Azure SQL Database Managed Instance using the Azure Database Migration Service. (Available)

Article 5: Rehost an app on Azure VMs
Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. (Available)

Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group
Contoso migrates the app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. (Available)

Article 7: Rehost a Linux app on Azure VMs
Contoso completes a lift-and-shift migration of its Linux osTicket app to Azure VMs, using the Site Recovery service. (Available)

Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL
Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench. (Available)

Article 9: Refactor an app in an Azure web app and Azure SQL Database
Contoso migrates its SmartHotel360 app to an Azure web app and migrates the app database to an Azure SQL Server instance with the Database Migration Assistant. (Available)

Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL
Contoso migrates its Linux osTicket app to an Azure web app on multiple sites. The web app is integrated with GitHub for continuous delivery. It migrates the app database to an Azure Database for MySQL instance. (Available)

Article 11: Refactor Team Foundation Server on Azure DevOps Services
Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. (Available)

Article 12: Rearchitect an app in Azure containers and Azure SQL Database
Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the app database with Azure SQL Database. (Available)

Article 13: Rebuild an app in Azure
Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. (Available)

Article 14: Scale a migration to Azure
After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. (This article)

Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve
with this migration:
Address business growth: Contoso is growing, causing pressure on on-premises systems and
infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster
on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes in the marketplace, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, the Contoso IT team must provide systems that are able to grow
at the same pace.
Improve cost models: Contoso wants to lessen capital requirements in the IT budget. Contoso wants to
use cloud abilities to scale and reduce the need for expensive hardware.
Lower licensing costs: Contoso wants to minimize cloud costs.

Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the
best migration method.

Move to Azure quickly: Contoso wants to start moving apps and VMs to Azure as quickly as possible.

Compile a full inventory: Contoso wants a complete inventory of all apps, databases, and VMs in the organization.

Assess and classify apps: Contoso wants to fully leverage the cloud. As a default, Contoso assumes that all services will run as PaaS; IaaS will be used where PaaS isn't appropriate.

Train and move to DevOps: Contoso wants to move to a DevOps model. Contoso will provide Azure and DevOps training, and reorganize teams as necessary.

After pinning down goals and requirements, Contoso reviews the IT footprint, and identifies the migration
process.

Current deployment
After planning and setting up an Azure infrastructure and trying out different proof-of-concept (POC)
migration combinations as detailed in the table above, Contoso is ready to embark on a full migration to Azure
at scale. Here's what Contoso wants to migrate.

Workloads: More than 3,000 apps. Apps run on VMs; they are Windows, SQL-based, and OSS LAMP.

Databases: Around 8,500. Databases include SQL Server, MySQL, and PostgreSQL.

VMs: More than 10,000. VMs run on VMware hosts and are managed by vCenter Servers.

Migration process
Now that Contoso has pinned down business drivers and migration goals, it determines a four-pronged
approach for the migration process:
Phase 1-Assess: Discover the current assets, and figure out whether they're suitable for migration to Azure.
Phase 2-Migrate: Move the assets to Azure. How they move apps and objects to Azure will depend upon
the app, and what they want to achieve.
Phase 3-Optimize: After moving resources to Azure, Contoso needs to improve and streamline them for
maximum performance and efficiency.
Phase 4-Secure & Manage: With everything in place, Contoso now uses Azure security and management
resources and services to govern, secure, and monitor its cloud apps in Azure.
These phases aren't serial across the organization. Each piece of Contoso's migration project will be at a
different stage of the assessment and migration process. Optimization, security, and management will be
ongoing over time.

Phase 1: Assess
Contoso kicks off the process by discovering and assessing on-premises apps, data, and infrastructure. Here's
what Contoso will do:
Contoso needs to discover apps, map dependencies across apps, and decide on migration order and
priority.
As Contoso assesses, it will build out a comprehensive inventory of apps and resources. Along with the new
inventory, Contoso will use and update the existing Configuration Management Database (CMDB) and
Service Catalog.
The CMDB holds technical configurations for Contoso apps.
The Service Catalog documents the operational details of apps, including associated business
partners, and Service Level Agreements (SLAs).
Discover apps
Contoso runs thousands of apps across a range of servers. In addition to the CMDB and Service Catalog,
Contoso needs discovery and assessment tools.
The tools must provide a mechanism that can feed assessment data into the migration process.
Assessment tools must provide data that helps build up an intelligent inventory of Contoso's physical and
virtual resources. Data should include profile information, and performance metrics.
When discovery is complete, Contoso should have a complete inventory of assets, and metadata associated
with them. This inventory will be used to define the migration plan.
Identify classifications
Contoso identifies some common categories to classify assets in the inventory. These classifications are critical
to Contoso’s decision making for migration. The classification list helps to establish migration priorities, and
identify complex issues.

Business group (list of business group names): Which group is responsible for the inventory item?

POC candidate (Y/N): Can the app be used as a POC or early adopter for cloud migration?

Technical debt (None/Some/Severe): Is the inventory item running or using an out-of-support product, platform, or operating system?

Firewall implications (Y/N): Does the app communicate with the Internet/outside traffic? Does it integrate with a firewall?

Security issues (Y/N): Are there known security issues with the app? Does the app use unencrypted data or out-of-date platforms?

Discover app dependencies


As part of the assessment process, Contoso needs to identify where apps are running, and figure out the
dependencies and connections between app servers. Contoso maps the environment in steps.
1. As a first step, Contoso discovers how servers and machines map to individual apps, network locations, and
groups.
2. With this information, Contoso can clearly identify apps that have few dependencies, and are thus suitable
for a quick migration.
3. Contoso can use mapping to help them identify more complex dependencies and communications between
app servers. Contoso can then group these servers logically to represent apps, and plan a migration strategy
based on these groups.
With mapping completed, Contoso can ensure that all app components are identified and accounted for when
building the migration plan.
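The grouping step above can be sketched as a connected-components pass over discovered dependency data. This is an illustrative sketch only (the server names and edges are hypothetical, not output from any Azure tool):

```python
from collections import defaultdict

def group_by_dependency(servers, dependencies):
    """Group servers into candidate app groups: servers that communicate
    (directly or transitively) land in the same group."""
    adj = defaultdict(set)
    for a, b in dependencies:
        adj[a].add(b)
        adj[b].add(a)
    groups, seen = [], set()
    for s in servers:
        if s in seen:
            continue
        # Walk the dependency graph outward from this server.
        group, queue = set(), [s]
        while queue:
            node = queue.pop()
            if node in group:
                continue
            group.add(node)
            queue.extend(adj[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Hypothetical discovery output: web1/app1/db1 talk to each other;
# util1 has no dependencies, so it's a quick-migration candidate.
groups = group_by_dependency(
    ["web1", "app1", "db1", "util1"],
    [("web1", "app1"), ("app1", "db1")])
print(groups)  # [['app1', 'db1', 'web1'], ['util1']]
```

Servers that end up in a singleton group have few dependencies and map to the "quick migration" candidates in step 2 above.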
Evaluate apps
As the last step in the discovery and assessment process, Contoso can evaluate assessment and mapping
results to figure out how to migrate each app in the Service Catalog.
To capture this evaluation process, they add a couple of additional classifications to the inventory.

Business group (list of business group names): Which group is responsible for the inventory item?

POC candidate (Y/N): Can the app be used as a POC or early adopter for cloud migration?

Technical debt (None/Some/Severe): Is the inventory item running or using an out-of-support product, platform, or operating system?

Firewall implications (Y/N): Does the app communicate with the Internet/outside traffic? Does it integrate with a firewall?

Security issues (Y/N): Are there known security issues with the app? Does the app use unencrypted data or out-of-date platforms?

Migration strategy (Rehost/Refactor/Rearchitect/Rebuild): What kind of migration is needed for the app? How will the app be deployed in Azure? Learn more.

Technical complexity (1-5): How complex is the migration? This value should be defined by Contoso DevOps and relevant partners.

Business criticality (1-5): How important is the app for the business? For example, a small workgroup app might be assigned a score of one, while a critical app used across the org might be assigned a score of five. This score will impact the migration priority level.

Migration priority (1/2/3): What's the migration priority for the app?

Migration risk (1-5): What's the risk level for migrating the app? This value should be agreed upon by Contoso DevOps and relevant partners.
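One way to turn these classifications into an ordered backlog is a simple scoring pass over the inventory. The weighting and the 1/2/3 cutoffs below are illustrative assumptions, not Contoso's actual rules:

```python
def migration_priority(business_criticality, technical_complexity, migration_risk):
    """Map 1-5 classification scores to a 1/2/3 priority bucket.
    Higher criticality raises priority; complexity and risk lower it."""
    score = business_criticality * 2 - technical_complexity - migration_risk
    if score >= 4:
        return 1  # migrate early
    if score >= 0:
        return 2
    return 3      # defer until complex or risky issues are worked through

print(migration_priority(5, 2, 1))  # critical, simple app -> 1
print(migration_priority(2, 4, 4))  # low-value, risky app -> 3
```

In practice the weights would be agreed with DevOps and business partners, like the risk and complexity values themselves.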

Figure out costs


To figure out costs and the potential savings of Azure migration, Contoso can use the Total Cost of Ownership
(TCO) calculator to calculate and compare the TCO for Azure to a comparable on-premises deployment.
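The TCO comparison reduces to simple arithmetic once cost inputs are gathered. The figures below are placeholder assumptions for illustration only; the real inputs come from the TCO calculator:

```python
def three_year_tco(monthly_run_cost, upfront_cost=0):
    """Total cost of ownership over a 3-year (36-month) horizon."""
    return upfront_cost + monthly_run_cost * 36

# Hypothetical inputs: an on-premises hardware refresh with ongoing
# run costs, versus Azure pay-as-you-go with no upfront spend.
on_prem = three_year_tco(monthly_run_cost=4000, upfront_cost=120000)
azure = three_year_tco(monthly_run_cost=6500)
print(on_prem, azure)  # 264000 234000
```

With these sample numbers the Azure option is cheaper over three years despite a higher monthly rate, because it avoids the upfront capital cost; real comparisons would also factor in power, facilities, and staffing.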
Identify assessment tools
Contoso decides which tool to use for discovery, assessment, and building the inventory. Contoso identifies a
mix of Azure tools and services, native app tools and scripts, and partner tools. In particular, Contoso is
interested in how Azure Migrate can be used to assess at scale.
Azure Migrate
The Azure Migrate service helps you to discover and assess on-premises VMware VMs, in preparation for
migration to Azure. Here's what Azure Migrate does:
1. Discover: Discover on-premises VMware VMs.
Azure Migrate supports discovery from multiple vCenter Servers (serially), and can run discoveries in
separate Azure Migrate projects.
Azure Migrate performs discovery by means of a VMware VM running the Migrate Collector. The
same collector can discover VMs on different vCenter servers, and send data to different projects.
2. Assess readiness: Assess whether on-premises machines are suitable for running in Azure. Assessment
includes:
Size recommendations: Get size recommendations for Azure VMs, based on the performance history
of on-premises VMs.
Estimated monthly costs: Get estimated costs for running on-premises machines in Azure.
3. Identify dependencies: Visualize dependencies of on-premises machines, to create optimal machines groups
for assessment and migration.
Migrate at scale

Contoso needs to use Azure Migrate correctly, given the scale of this migration.
Contoso will do an app-by-app assessment with Azure Migrate. This ensures that Azure Migrate returns
timely data to the Azure portal.
Contoso admins read about deploying Azure Migrate at scale.
Contoso notes the Azure Migrate limits summarized in the following table.

Create Azure Migrate project: 1,500 VMs

Discovery: 1,500 VMs

Assessment: 1,500 VMs
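Given the per-project limits above, an estate of more than 10,000 VMs must be split across multiple Azure Migrate projects. A sketch of the batching arithmetic (the limits match the table; the batch layout is one reasonable choice, not a prescribed one):

```python
import math

def plan_projects(vm_count, per_project_limit=1500):
    """Return the number of Azure Migrate projects needed,
    and the VM count assigned to each project."""
    projects = math.ceil(vm_count / per_project_limit)
    sizes = [per_project_limit] * (vm_count // per_project_limit)
    if vm_count % per_project_limit:
        sizes.append(vm_count % per_project_limit)
    return projects, sizes

projects, sizes = plan_projects(10000)
print(projects)  # 7
print(sizes)     # [1500, 1500, 1500, 1500, 1500, 1500, 1000]
```

In practice Contoso would align these batches with vCenter folders (see below) so each project's discovery scope is well-defined.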

Contoso will use Azure Migrate as follows:


In vCenter, Contoso will organize VMs into folders. This will make it easy for them to focus as they run an
assessment against VMs in a specific folder.
Azure Migrate uses Azure Service Map to assess dependencies between machines. This requires agents to
be installed on VMs to be assessed.
Contoso will use automated scripts to install the required Windows or Linux agents.
By scripting, Contoso can push the installation to VMs within a vCenter folder.
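The scripted agent push can be driven from the inventory's OS field. A minimal sketch; the installer file names and the deployment share are hypothetical placeholders, not the actual Service Map agent package names:

```python
def agent_install_command(vm_name, os_type, share=r"\\deploy\agents"):
    """Pick the right dependency-agent install command per OS.
    Installer names and share path are placeholders for illustration."""
    os_type = os_type.lower()
    if os_type == "windows":
        return f"{share}\\InstallAgent-Windows.exe /S /RemoteHost:{vm_name}"
    if os_type == "linux":
        return f"ssh {vm_name} 'sh InstallAgent-Linux64.bin -s'"
    raise ValueError(f"unsupported OS for {vm_name}: {os_type}")

# Hypothetical inventory slice for one vCenter folder:
for vm, os_type in [("web1", "Windows"), ("db1", "Linux")]:
    print(agent_install_command(vm, os_type))
```

Iterating over the VMs in a vCenter folder and emitting one command per machine is what lets Contoso push the install folder-by-folder.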
Database tools
In addition to Azure Migrate, Contoso will focus on using tools specifically for database assessment. Tools such
as the Data Migration Assistant will help assess SQL Server databases for migration.
The Data Migration Assistant (DMA) can help Contoso to figure out whether on-premises databases are
compatible with a range of Azure database solutions, such as Azure SQL Database, SQL Server running on an
Azure IaaS VM, and Azure SQL Managed Instance.
In addition to the DMA, Contoso has some other scripts that it uses to discover and document the SQL Server
databases. These are located in the GitHub repo.
Partner tools
There are several other partner tools which can help Contoso in assessing the on-premises environment for
migration to Azure. Learn more about Azure Migration partners.
Phase 2: Migrate
With the assessment complete, Contoso needs to identify tools to move its apps, data, and infrastructure to
Azure.
Migration strategies
There are four broad migration strategies that Contoso can consider.

Rehost
Often referred to as "lift and shift" migration, this is a no-code option for migrating existing apps to Azure quickly. An app is migrated as-is, with the benefits of the cloud, without the risks or costs associated with code changes.
Usage: Contoso can rehost less-strategic apps, requiring no code changes.

Refactor
Also referred to as "repackaging", this strategy requires minimal app code or configuration changes needed to connect the app to Azure PaaS, and take better advantage of cloud capabilities.
Usage: Contoso can refactor strategic apps to retain the same basic functionality, but move them to run on an Azure platform such as Azure App Service. This requires minimum code changes. On the other hand, Contoso will have to maintain a VM platform, since this won't be managed by Microsoft.

Rearchitect
This strategy modifies or extends an app code base to optimize the app architecture for cloud capabilities and scale. It modernizes an app into a resilient, highly scalable, independently deployable architecture. Azure services can accelerate the process, scale applications with confidence, and manage apps with ease.

Rebuild
This strategy rebuilds an app from scratch using cloud-native technologies. Azure platform as a service (PaaS) provides a complete development and deployment environment in the cloud. It eliminates some expense and complexity of software licenses, and removes the need for an underlying app infrastructure, middleware, and other resources.
Usage: Contoso can rewrite critical apps from the ground up, to take advantage of cloud technologies such as serverless compute or microservices. Contoso will manage the app and services it develops, and Azure manages everything else.

Data must also be considered, especially with the volume of databases that Contoso has. Contoso's default
approach is to use PaaS services such as Azure SQL Database to take full advantage of cloud features. By
moving to a PaaS service for databases, Contoso will only have to maintain data, leaving the underlying
platform to Microsoft.
Evaluate migration tools
Contoso is primarily using a couple of Azure services and tools for the migration:
Azure Site Recovery: Orchestrates disaster recovery, and migrates on-premises VMs to Azure.
Azure Database Migration Service: Migrates on-premises databases such as SQL Server, MySQL, and
Oracle to Azure.
Azure Site Recovery
Azure Site Recovery is the primary Azure service for orchestrating disaster recovery and migration from within
Azure, and from on-premises sites to Azure.
1. Site Recovery enables and orchestrates replication from your on-premises sites to Azure.
2. When replication is set up and running, on-premises machines can be failed over to Azure, completing the
migration.
Contoso already completed a POC to see how Site Recovery can help them to migrate to the cloud.
Using Site Recovery at scale

Contoso plans on running multiple lift-and-shift migrations. To ensure this works, Site Recovery will be
replicating batches of around 100 VMs at a time. To figure out how this will work, Contoso needs to perform
capacity planning for the proposed Site Recovery migration.
Contoso needs to gather information about their traffic volumes. In particular:
Contoso needs to determine the rate of change for VMs it wants to replicate.
Contoso also needs to take network connectivity from the on-premises site to Azure into account.
In response to capacity and volume requirements, Contoso will need to allocate sufficient bandwidth based
on the daily data change rate for the required VMs, to meet its recovery point objective (RPO).
Lastly, they need to figure out how many servers are needed to run the Site Recovery components that are
needed for the deployment.
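The bandwidth requirement above is, to a first approximation, the daily churn spread over the replication window. A back-of-envelope sketch (the churn figure is illustrative; the Site Recovery Deployment Planner gives the authoritative numbers):

```python
def required_bandwidth_mbps(daily_churn_gb, replication_window_hours=24):
    """Average bandwidth (megabits/sec) needed to replicate a day's churn.
    A tighter RPO effectively shrinks the window, raising the requirement."""
    bits = daily_churn_gb * 1024 ** 3 * 8        # churn expressed in bits
    seconds = replication_window_hours * 3600
    return bits / seconds / 1_000_000            # -> megabits per second

# Hypothetical batch of ~100 VMs churning about 1 TB/day in total:
print(round(required_bandwidth_mbps(1024), 1))  # 101.8
```

The same calculation run per batch shows why Contoso replicates in groups of around 100 VMs rather than all at once.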
Gather on-premises information

Contoso can use the Site Recovery Deployment Planner tool to complete these steps:
Contoso can use the tool to remotely profile VMs without an impact on the production environment. This
helps pinpoint bandwidth and storage requirements for replication and failover.
Contoso can run the tool without installing any Site Recovery components on-premises.
The tool gathers information about compatible and incompatible VMs, disks per VM, and data churn per
disk. It also identifies network bandwidth requirements, and the Azure infrastructure needed for successful
replication and failover.
Contoso needs to ensure that they run the planner tool on a Windows Server machine that matches the
minimum requirements for the Site Recovery configuration server. The configuration server is a Site
Recovery machine that's needed in order to replicate on-premises VMware VMs.
Identify Site Recovery requirements

In addition to the VMs being replicated, Site Recovery requires a number of components for VMware
migration.

Configuration server: Usually a VMware VM set up using an OVF template. The configuration server component coordinates communications between on-premises and Azure, and manages data replication.

Process server: Installed by default on the configuration server. The process server component receives replication data; optimizes it with caching, compression, and encryption; and sends it to Azure storage. The process server also installs the Azure Site Recovery Mobility Service on VMs you want to replicate, and performs automatic discovery of on-premises machines. Scaled deployments need additional, standalone process servers to handle large volumes of replication traffic.

Mobility Service: The Mobility Service agent is installed on each VMware VM that will be migrated with Site Recovery.

Contoso needs to figure out how to deploy these components, based on capacity considerations.

Maximum daily change rate: A single process server can handle a daily change rate up to 2 TB. Since a VM can only use one process server, the maximum daily data change rate that's supported for a replicated VM is 2 TB.

Maximum throughput: A standard Azure storage account can handle a maximum of 20,000 requests per second, and input/output operations per second (IOPS) across a replicating VM should be within this limit. For example, if a VM has 5 disks, and each disk generates 120 IOPS (8K size) on the VM, then it will be within the Azure per-disk IOPS limit of 500. Note that the number of storage accounts needed is equal to the total source machine IOPS, divided by 20,000. A replicated machine can only belong to a single storage account in Azure.

Configuration server: Based on Contoso's estimate of replicating 100-200 VMs together, and the configuration server sizing requirements, Contoso estimates it needs a configuration server machine as follows:
CPU: 16 vCPUs (2 sockets * 8 cores @ 2.5 GHz)
Memory: 32 GB
Cache disk: 1 TB
Data change rate: 1 TB to 2 TB
In addition to sizing requirements, Contoso will need to make sure that the configuration server is optimally located, on the same network and LAN segment as the VMs that will be migrated.

Process server: Contoso will deploy a standalone dedicated process server with the ability to replicate 100-200 VMs:
CPU: 16 vCPUs (2 sockets * 8 cores @ 2.5 GHz)
Memory: 32 GB
Cache disk: 1 TB
Data change rate: 1 TB to 2 TB
The process server will be working hard, and as such should be located on an ESXi host that can handle the disk I/O, network traffic, and CPU required for the replication. Contoso will consider a dedicated host for this purpose.

Networking: Contoso has reviewed the current site-to-site VPN infrastructure, and decided to implement Azure ExpressRoute. The implementation is critical because it will lower latency, and improve bandwidth to Contoso's primary East US 2 Azure region. Monitoring: Contoso will need to carefully monitor data flowing from the process server. If the data overloads the network bandwidth, Contoso will consider throttling the process server bandwidth.

Azure storage: For migration, Contoso must identify the right type and number of target Azure storage accounts. Site Recovery replicates VM data to Azure storage, and can replicate to standard or premium (SSD) storage accounts. To decide about storage, Contoso must review storage limits, and factor in expected growth and increased usage over time. Given the speed and priority of migrations, Contoso has decided to use premium SSDs.

Contoso has made the decision to use Managed disks for all VMs that are deployed to Azure. The IOPS
required will determine whether the disks will be Standard HDD, Standard SSD, or Premium (SSD).
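The storage-account sizing described above can be sketched directly: sum the source IOPS, divide by the 20,000-request account limit, and round up. The VM figures below are hypothetical:

```python
import math

def storage_accounts_needed(vm_disk_iops, per_account_limit=20000):
    """vm_disk_iops: list of per-VM lists of per-disk IOPS values.
    A replicated VM maps to exactly one storage account, so sizing
    is based on total source IOPS divided by the account limit."""
    total_iops = sum(sum(disks) for disks in vm_disk_iops)
    return math.ceil(total_iops / per_account_limit)

# The worked example from the table: a VM with 5 disks at 120 IOPS each
# is well under the 500-IOPS-per-disk limit. A batch of 100 such VMs
# totals 60,000 IOPS, needing three standard storage accounts.
fleet = [[120] * 5 for _ in range(100)]
print(storage_accounts_needed(fleet))  # 3
```

A real plan would also respect the one-account-per-VM constraint when distributing VMs, which the ceiling division alone does not enforce.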

Data Migration Service


The Azure Database Migration Service (DMS) is a fully managed service that enables seamless migrations
from multiple database sources to Azure data platforms, with minimal downtime.
DMS integrates functionality of existing tools and services. It uses the Data Migration Assistant (DMA), to
generate assessment reports that pinpoint recommendations about database compatibility and any required
modifications.
DMS uses a simple, self-guided migration process, with intelligent assessment that helps address potential
issues before the migration.
DMS can migrate at scale from multiple sources to the target Azure database.
DMS provides support from SQL Server 2005 to SQL Server 2017.
DMS isn't the only Microsoft database migration tool. Get a comparison of tools and services.
Using DMS at scale
Contoso will use DMS when migrating from SQL Server.
When provisioning DMS, Contoso needs to ensure that it's sized correctly, and set to optimize
performance for data migrations. Contoso will select the "business-critical tier with 4 vCores" option,
thus allowing the service to take advantage of multiple vCPUs for parallelization and faster data transfer.

Another scaling tactic for Contoso is to temporarily scale up the Azure SQL or MySQL Database target
instance to the Premium tier SKU during the data migration. This minimizes database throttling that
could impact data transfer activities when using lower-level SKUs.
Using other tools

In addition to DMS, Contoso can use other tools and services to identify VM information.
They have scripts to help with manual migrations. These are available in the GitHub repo.
A number of partner tools can also be used for migration.

Phase 3: Optimize
After Contoso moves resources to Azure, they need to streamline them to improve performance, and maximize
ROI with cost management tools. Given that Azure is a pay-for-use service, it's critical for Contoso to
understand how systems are performing, and to ensure they're sized properly.
Azure cost management
To make the most of their cloud investment, Contoso will leverage the free Azure Cost Management tool.
This licensed solution built by Cloudyn, a Microsoft subsidiary, lets Contoso manage cloud spending with
transparency and accuracy. It provides tools to monitor, allocate, and trim cloud costs.
Azure Cost Management provides simple dashboard reports to help with cost allocation, showbacks and
chargebacks.
Cost Management can optimize cloud spending by identifying underutilized resources that Contoso can
then manage and adjust.
Learn more about Azure Cost Management.
Native Tools
Contoso will also use scripts to locate unused resources.
During large migrations, there are often leftover pieces of data such as virtual hard drives (VHDs), which
incur a charge, but provide no value to the company. Scripts are available in the GitHub repo.
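A cleanup script like those Contoso keeps in its repo amounts to a filter over disk metadata. The sample records below are hypothetical; a real script would pull them from the Azure APIs:

```python
def find_orphaned_disks(disks):
    """Return names of disks that still incur storage charges but are
    not attached to any VM -- typical migration leftovers."""
    return [d["name"] for d in disks if d.get("attached_to") is None]

disks = [
    {"name": "web1-osdisk", "attached_to": "web1"},
    {"name": "temp-replica-vhd", "attached_to": None},  # leftover VHD
    {"name": "db1-datadisk", "attached_to": "db1"},
]
print(find_orphaned_disks(disks))  # ['temp-replica-vhd']
```

Flagged disks would then be reviewed before deletion, since detached does not always mean unwanted.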
Contoso will leverage work done by Microsoft’s IT department, and consider implementing the Azure
Resource Optimization (ARO) Toolkit.
Contoso can deploy an Azure Automation account with preconfigured runbooks and schedules to its
subscription, and start saving money. Azure resource optimization happens automatically on a subscription
after a schedule is enabled or created, including optimization on new resources.
This provides decentralized automation capabilities to reduce costs. Features include:
Auto-snooze Azure VMs based on low CPU.
Schedule Azure VMs to snooze and unsnooze.
Schedule Azure VMs to snooze or unsnooze in ascending and descending order using Azure tags.
Bulk deletion of resource groups on-demand.
Get started with the ARO toolkit in this GitHub repo.
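The CPU-based auto-snooze feature described above amounts to a threshold check over recent utilization. An illustrative sketch; the threshold and sample data are assumptions, not the ARO toolkit's actual defaults:

```python
def vms_to_snooze(cpu_samples, threshold_pct=5.0):
    """Flag VMs whose average CPU over the sample window sits below
    the threshold -- candidates for automatic deallocation."""
    return [vm for vm, samples in cpu_samples.items()
            if sum(samples) / len(samples) < threshold_pct]

# Hypothetical recent CPU readings (percent) per VM:
cpu_samples = {
    "build-agent-01": [2.0, 1.5, 3.0],    # idle -> snooze candidate
    "web-frontend":   [40.0, 55.0, 61.0],
}
print(vms_to_snooze(cpu_samples))  # ['build-agent-01']
```

The toolkit runs this kind of check on a schedule via Azure Automation runbooks, including against newly created resources.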
Partner Tools
Partner tools such as Hanu and Scalr can be leveraged.

Phase 4: Secure & manage


In this phase, Contoso uses Azure security and management resources to govern, secure, and monitor cloud
apps in Azure. These resources help you run a secure and well-managed environment while using products
available in the Azure portal. Contoso begins using these services during migration and, with Azure hybrid
support, continues using many of them for a consistent experience across the hybrid cloud.
Security
Contoso will rely on the Azure Security Center for unified security management and advanced threat protection
across hybrid cloud workloads.
The Security Center provides full visibility into, and control over, the security of cloud apps in Azure.
Contoso can quickly detect and take action in response to threats, and reduce security exposure by enabling
adaptive threat protection.
Learn more about the Security Center.
Monitoring
Contoso needs visibility into the health and performance of the newly migrated apps, infrastructure, and data
now running in Azure. Contoso will leverage built-in Azure cloud monitoring tools such as Azure Monitor, Log
Analytics workspace, and Application Insights.
Using these tools, Contoso can easily collect data from sources and gain rich insights. For example, Contoso
can gauge CPU, disk, and memory utilization for VMs, view applications and network dependencies across
multiple VMs, and track application performance.
Contoso will use these cloud monitoring tools to take action and integrate with service solutions.
Learn more about Azure monitoring.
BCDR
Contoso will need a business continuity and disaster recovery (BCDR) strategy for their Azure resources.
Azure provides built-in BCDR features to keep data safe and apps/services up and running.
In addition to built-in features, Contoso wants to ensure that it can recover from failures, avoid costly
business disruptions, meet compliance goals, and protect data against ransomware and human error. To do this:
Contoso will deploy Azure Backup as a cost-efficient solution for backup of Azure resources. Because
it's built in, Contoso can set up cloud backups in a few simple steps.
Contoso will set up disaster recovery for Azure VMs using Azure Site Recovery for replication,
failover, and failback between Azure regions that it specifies. This ensures that apps running on Azure
VMs will remain available in a secondary region of Contoso's choosing if an outage occurs in the
primary region. Learn more.

Conclusion
In this article, Contoso planned for an Azure migration at scale, dividing the migration process into four
stages: from assessment and migration through to optimization, security, and management after migration
was complete. It's important to plan a migration project as a whole process, but to migrate systems
within an organization by breaking sets down into classifications and numbers that make sense for the
business. By assessing data and applying classifications, a project can be broken down into a series of smaller
migrations, which can run safely and rapidly. The sum of these smaller migrations quickly turns into a large,
successful migration to Azure.
Discover and assess a large VMware environment
4/10/2019 • 14 minutes to read • Edit Online

Azure Migrate has a limit of 1,500 machines per project. This article describes how to assess large numbers of on-
premises virtual machines (VMs) by using Azure Migrate.

NOTE
We have a preview release available that allows discovery of up to 10,000 VMware VMs in a single project using a single
appliance. If you are interested in trying it out, please sign up here.

Prerequisites
VMware: The VMs that you plan to migrate must be managed by vCenter Server version 5.5, 6.0, 6.5 or 6.7.
Additionally, you need one ESXi host running version 5.5 or later to deploy the collector VM.
vCenter account: You need a read-only account to access vCenter Server. Azure Migrate uses this account to
discover the on-premises VMs.
Permissions: In vCenter Server, you need permissions to create a VM by importing a file in OVA format.
Statistics settings: This requirement applies only to the one-time discovery model, which is now
deprecated. For the one-time discovery model, the statistics settings for vCenter Server must be set to level 3
before you start deployment, for each of the day, week, and month collection intervals. If the level is lower
than 3 for any of the three collection intervals, the assessment will work, but the performance data for storage
and network won't be collected. The size recommendations will then be based on performance data for CPU and
memory, and configuration data for disk and network adapters.
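As a quick sanity check, the level-3 requirement above can be expressed as a small predicate. This Python sketch is illustrative only; the function name and input shape are ours, not part of any Azure or VMware tooling:

```python
# Illustrative check for the deprecated one-time discovery requirement:
# storage and network performance data is collected only if the vCenter
# statistics level is 3 or higher for ALL three collection intervals.

def storage_network_perf_collected(levels):
    """levels: dict mapping interval name ('day', 'week', 'month') to its statistics level."""
    required = ("day", "week", "month")
    return all(levels.get(interval, 0) >= 3 for interval in required)

print(storage_network_perf_collected({"day": 3, "week": 3, "month": 3}))  # True
print(storage_network_perf_collected({"day": 3, "week": 2, "month": 3}))  # False
```

If the check fails, sizing falls back to CPU/memory performance data plus configuration data for disk and network adapters, as described above.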

NOTE
The one-time discovery appliance is now deprecated as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters which resulted in under-sizing of VMs for
migration to Azure.

Set up permissions
Azure Migrate needs access to VMware servers to automatically discover VMs for assessment. The VMware
account needs the following permissions:
User type: At least a read-only user
Permissions: Data Center object –> Propagate to Child Object, role=Read-only
Details: User assigned at datacenter level, and has access to all the objects in the datacenter.
To restrict access, assign the No access role with the Propagate to child object, to the child objects (vSphere
hosts, datastores, VMs, and networks).
If you are deploying in a multi-tenant environment and would like to scope by folder of VMs for a single tenant,
you cannot directly select the VM folder when scoping collection in Azure Migrate. Following are instructions on
how to scope discovery by folder of VMs:
1. Create a user per tenant and assign read-only permissions to all the VMs belonging to a particular tenant.
2. Grant this user read-only access to all the parent objects where the VMs are hosted. All parent objects - host,
folder of hosts, cluster, folder of clusters - in the hierarchy up to the data center are to be included. You do not
need to propagate the permissions to all child objects.
3. Use the credentials for discovery, selecting datacenter as Collection Scope. The RBAC setup ensures that the
corresponding vCenter user has access to only tenant-specific VMs.

Plan your migration projects and discoveries


Based on the number of VMs you are planning to discover, you can create multiple projects and deploy multiple
appliances in your environment. An appliance can be connected to a single vCenter Server and a single project
(unless you stop the discovery and start afresh).
With one-time discovery (now deprecated), discovery works in a fire-and-forget model: once a discovery
is done, you can use the same collector to collect data from a different vCenter Server or send it to a different
migration project.

NOTE
The one-time discovery appliance is now deprecated as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters which resulted in under-sizing of VMs for
migration to Azure. It is recommended to move to the continuous discovery appliance.

Plan your discoveries and assessments based on the following limits:

ENTITY MACHINE LIMIT

Project 1,500

Discovery 1,500

Assessment 1,500
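A back-of-the-envelope calculation helps when budgeting projects and appliances against these limits. This Python sketch is illustrative; the helper name, and the assumption that each vCenter Server's inventory can be split cleanly by scope, are ours:

```python
import math

PROJECT_LIMIT = 1500  # machines per Azure Migrate project (also the discovery/assessment limit)

def projects_needed(vm_counts):
    """vm_counts: list of VM totals, one entry per vCenter Server.

    Each appliance connects to a single vCenter Server and a single project,
    so a vCenter Server with more VMs than the limit must be split (by folder,
    cluster, or host scope) across ceil(n / limit) projects.
    """
    return sum(math.ceil(n / PROJECT_LIMIT) for n in vm_counts)

# Two vCenter Servers with 1,000 and 800 VMs -> 2 projects (continuous discovery)
print(projects_needed([1000, 800]))  # 2
# One vCenter Server with 1,800 VMs -> 2 projects, split by scope
print(projects_needed([1800]))       # 2
```

The scenarios below walk through the same arithmetic for the common environment shapes.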

Keep these planning considerations in mind:


When you do a discovery by using the Azure Migrate collector, you can set the discovery scope to a vCenter
Server folder, datacenter, cluster, or host.
To do more than one discovery from the same vCenter Server, verify in vCenter Server that the VMs you want
to discover are in folders, datacenters, clusters, or hosts that support the limitation of 1,500 machines.
We recommend that for assessment purposes, you keep machines with interdependencies within the same
project and assessment. In vCenter Server, make sure that dependent machines are in the same folder,
datacenter, or cluster for the assessment.
Depending on your scenario, you can split your discoveries as prescribed below:
Multiple vCenter Servers with fewer than 1,500 VMs
If you have multiple vCenter Servers in your environment, and the total number of virtual machines is fewer than
1,500, you can use the following approach based on your scenario:
Continuous discovery: With continuous discovery, one appliance can be connected to only a single project,
so you need to deploy one appliance for each of your vCenter Servers, create one project for each appliance,
and trigger discoveries accordingly.
One-time discovery (deprecated now): You can use a single collector and a single migration project to
discover all the virtual machines across all vCenter Servers. Since the one-time discovery collector discovers one
vCenter Server at a time, you can run the same collector against all the vCenter Servers, one after another, and
point the collector to the same migration project. Once all the discoveries are complete, you can then create
assessments for the machines.
Multiple vCenter Servers with more than 1,500 VMs
If you have multiple vCenter Servers with fewer than 1,500 virtual machines per vCenter Server, but more than
1,500 VMs across all vCenter Servers, you need to create multiple migration projects (one migration project can
hold only 1,500 VMs). You can achieve this by creating a migration project per vCenter Server and splitting the
discoveries.
Continuous discovery: You need to create multiple collector appliances (one for each vCenter Server) and
connect each appliance to a project and trigger discovery accordingly.
One-time discovery (deprecated now): You can use a single collector to discover each vCenter Server (one
after another). If you want the discoveries to start at the same time, you can also deploy multiple appliances and
run the discoveries in parallel.
More than 1,500 machines in a single vCenter Server
If you have more than 1,500 virtual machines in a single vCenter Server, you need to split the discovery into
multiple migration projects. To split discoveries, you can use the Scope field in the appliance and specify the
host, cluster, folder of hosts, folder of clusters, or datacenter that you want to discover. For example, if you have
two folders in vCenter Server, one with 1,000 VMs (Folder1) and the other with 800 VMs (Folder2), you can use the
scope field to split the discoveries between these folders.
Continuous discovery: In this case, you need to create two collector appliances. For the first collector, specify the
scope as Folder1 and connect it to the first migration project. In parallel, you can start the discovery of Folder2
using the second collector appliance and connect it to the second migration project.
One-time discovery (deprecated now): You can use the same collector to trigger both discoveries. In the
first discovery, specify Folder1 as the scope and point it to the first migration project. Once the first
discovery is complete, use the same collector, change its scope to Folder2 and the migration project details to
the second migration project, and do the second discovery.
Multi-tenant environment
If you have an environment that is shared across tenants and you do not want to discover the VMs of one tenant
in another tenant's subscription, you can use the Scope field in the collector appliance to scope the discovery. If
the tenants are sharing hosts, create a credential that has read-only access to only the VMs belonging to the
specific tenant and then use this credential in the collector appliance and specify the Scope as the host to do the
discovery.

Discover on-premises environment


Once you are ready with your plan, you can then start discovery of the on-premises virtual machines:
Create a project
Create an Azure Migrate project in accordance with your requirements:
1. In the Azure portal, select Create a resource.
2. Search for Azure Migrate, and select the service Azure Migrate in the search results. Then select Create.
3. Specify a project name and the Azure subscription for the project.
4. Create a new resource group.
5. Specify the location in which you want to create the project, and then select Create. Note that you can still
assess your VMs for a different target location. The location specified for the project is used to store the
metadata gathered from on-premises VMs.
Set up the collector appliance
Azure Migrate creates an on-premises VM known as the collector appliance. This VM discovers on-premises
VMware VMs, and it sends metadata about them to the Azure Migrate service. To set up the collector appliance,
you download an OVA file and import it to the on-premises vCenter Server instance.
Download the collector appliance
If you have multiple projects, you need to download the collector appliance only once to vCenter Server. After you
download and set up the appliance, you run it for each project, and you specify the unique project ID and key.
1. In the Azure Migrate project, click Getting Started > Discover & Assess > Discover Machines.
2. In Discover machines, click Download to download the appliance.
The Azure Migrate appliance communicates with vCenter Server and continuously profiles the on-
premises environment to gather real-time utilization data for each VM. It collects peak counters for each
metric (CPU utilization, memory utilization, and so on). This model does not depend on the statistics settings of
vCenter Server for performance data collection. You can stop continuous profiling at any time from the
appliance.

NOTE
The one-time discovery appliance is now deprecated as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters which resulted in under-sizing of
VMs for migration to Azure.

Instant gratification: With the continuous discovery appliance, once the discovery is complete (which
takes a couple of hours, depending on the number of VMs), you can immediately create assessments. Because
performance data collection starts when you kick off discovery, if you are looking for instant gratification,
select the sizing criterion in the assessment as As on-premises. For performance-based
assessments, it is advised to wait for at least a day after kicking off discovery to get reliable size
recommendations.
Note that the appliance only collects performance data continuously; it does not detect any configuration
change in the on-premises environment (for example, VM addition or deletion, or disk addition). If there is a
configuration change in the on-premises environment, you can do the following to reflect the changes in
the portal:
Addition of items (VMs, disks, cores etc.): To reflect these changes in the Azure portal, you can stop
the discovery from the appliance and then start it again. This will ensure that the changes are
updated in the Azure Migrate project.
Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if
you stop and start the discovery. This is because data from subsequent discoveries is appended to
older discoveries, not overwritten. In this case, you can simply ignore the VM in the portal by
removing it from your group and recalculating the assessment.
3. In Copy project credentials, copy the ID and key for the project. You need these when you configure the
collector.
Verify the collector appliance
Check that the OVA file is secure before you deploy it:
1. On the machine to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the OVA:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]

Example usage: C:\>CertUtil -HashFile C:\AzureMigrate\AzureMigrate.ova SHA256

3. Make sure that the generated hash matches the following values.
Continuous discovery
For OVA version 1.0.10.4

ALGORITHM HASH VALUE

MD5 2ca5b1b93ee0675ca794dd3fd216e13d

SHA1 8c46a52b18d36e91daeae62f412f5cb2a8198ee5

SHA256 3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be

One-time discovery (deprecated now)


For OVA version 1.0.9.15 (Released on 10/23/2018)

ALGORITHM HASH VALUE

MD5 e9ef16b0c837638c506b5fc0ef75ebfa

SHA1 37b4b1e92b3c6ac2782ff5258450df6686c89864

SHA256 8a86fc17f69b69968eb20a5c4c288c194cdcffb4ee6568d85ae5ba96835559ba

For OVA version 1.0.9.14 (Released on 8/24/2018)

ALGORITHM HASH VALUE

MD5 6d8446c0eeba3de3ecc9bc3713f9c8bd

SHA1 e9f5bdfdd1a746c11910ed917511b5d91b9f939f

SHA256 7f7636d0959379502dfbda19b8e3f47f3a4744ee9453fc9ce548e6682a66f13c

For OVA version 1.0.9.12

ALGORITHM HASH VALUE

MD5 d0363e5d1b377a8eb08843cf034ac28a

SHA1 df4a0ada64bfa59c37acf521d15dcabe7f3f716b

SHA256 f677b6c255e3d4d529315a31b5947edfe46f45e4eb4dbc8019d68d1d1b337c2e

For OVA version 1.0.9.8

ALGORITHM HASH VALUE

MD5 b5d9f0caf15ca357ac0563468c2e6251

SHA1 d6179b5bfe84e123fabd37f8a1e4930839eeb0e5

SHA256 09c68b168719cb93bd439ea6a5fe21a3b01beec0e15b84204857061ca5b116ff

For OVA version 1.0.9.7

ALGORITHM HASH VALUE

MD5 d5b6a03701203ff556fa78694d6d7c35

SHA1 f039feaa10dccd811c3d22d9a59fb83d0b01151e

SHA256 e5e997c003e29036f62bf3fdce96acd4a271799211a84b34b35dfd290e9bea9c
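If CertUtil isn't convenient, the same verification can be done with a short script. This Python sketch is an illustrative alternative; the expected value shown is the continuous-discovery SHA256 from the table above, and the file path is a placeholder you substitute for your own download:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA256 of a file in chunks (equivalent to CertUtil -HashFile <file> SHA256)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large OVA files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Published SHA256 for the continuous discovery OVA version 1.0.10.4
EXPECTED = "3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be"
# Placeholder path; uncomment and point at your downloaded file:
# print(file_sha256(r"C:\AzureMigrate\AzureMigrate.ova") == EXPECTED)
```

Only deploy the appliance if the computed hash matches the published value exactly.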

Create the collector VM


Import the downloaded file to vCenter Server:
1. In the vSphere Client console, select File > Deploy OVF Template.

2. In the Deploy OVF Template Wizard > Source, specify the location of the OVA file.
3. In Name and Location, specify a friendly name for the collector VM, and the inventory object in which the
VM will be hosted.
4. In Host/Cluster, specify the host or cluster on which the collector VM will run.
5. In storage, specify the storage destination for the collector VM.
6. In Disk Format, specify the disk type and size.
7. In Network Mapping, specify the network to which the collector VM will connect. The network needs
internet connectivity to send metadata to Azure.
8. Review and confirm the settings, and then select Finish.
Identify the ID and key for each project
If you have multiple projects, be sure to identify the ID and key for each one. You need the key when you run the
collector to discover the VMs.
1. In the project, select Getting Started > Discover & Assess > Discover Machines.
2. In Copy project credentials, copy the ID and key for the project.

Run the collector to discover VMs


For each discovery that you need to perform, you run the collector to discover VMs in the required scope. Run the
discoveries one after the other. Concurrent discoveries aren't supported, and each discovery must have a different
scope.
1. In the vSphere Client console, right-click the VM > Open Console.
2. Provide the language, time zone, and password preferences for the appliance.
3. On the desktop, select the Run collector shortcut.
4. In the Azure Migrate collector, open Set up prerequisites and then:
a. Accept the license terms, and read the third-party information.
The collector checks that the VM has internet access.
b. If the VM accesses the internet via a proxy, select Proxy settings, and specify the proxy address and
listening port. Specify credentials if the proxy needs authentication.
The collector checks that the collector service is running. The service is installed by default on the collector
VM.
c. Download and install VMware PowerCLI.
5. In Specify vCenter Server details, do the following:
Specify the name (FQDN) or IP address of vCenter Server.
In User name and Password, specify the read-only account credentials that the collector will use to
discover VMs in vCenter Server.
In Select scope, select a scope for VM discovery. The collector can discover only VMs within the
specified scope. Scope can be set to a specific folder, datacenter, or cluster. It shouldn't contain more than
1,000 VMs.
6. In Specify migration project, specify the ID and key for the project. If you didn't copy them, open the
Azure portal from the collector VM. On the project's Overview page, select Discover Machines and copy
the values.
7. In View collection progress, monitor the discovery process and check that metadata collected from the
VMs is in scope. The collector provides an approximate discovery time.
Verify VMs in the portal
The collector continuously profiles the on-premises environment and sends performance data at one-hour
intervals. You can review the machines in the portal an hour after kicking off the discovery. It is
strongly recommended to wait for at least a day before creating any performance-based assessments for the
VMs.
1. In the migration project, click Manage > Machines.
2. Check that the VMs you want to discover appear in the portal.
Data collected from on-premises environment
The collector appliance discovers the following configuration data about the selected virtual machines.
1. VM Display name (on vCenter)
2. VM’s inventory path (host/folder in vCenter)
3. IP address
4. MAC address
5. Operating system
6. Number of cores, disks, NICs
7. Memory size, Disk sizes
8. Performance counters of the VM, disk, and network, as listed in the table below.
The collector appliance collects the following performance counters for each VM from the ESXi host at an interval
of 20 seconds. These counters are vCenter counters; although the terminology says average, the 20-second
samples are real-time counters. The appliance then rolls up the 20-second samples to create a single data point
for every 15 minutes by selecting the peak value from the 20-second samples, and sends it to Azure. The
performance data for the VMs starts becoming available in the portal two hours after you kick off the
discovery. It is strongly recommended to wait for at least a day before creating performance-based assessments
to get accurate right-sizing recommendations. If you are looking for instant gratification, you can create
assessments with the sizing criterion As on-premises, which does not consider performance data for right-sizing.
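The peak rollup described above can be sketched as follows. This Python snippet is illustrative only; the appliance's actual implementation isn't public, and the function and variable names are ours:

```python
# Illustrative sketch: 20-second samples are collected per counter, and every
# 15-minute window is reduced to its peak value before being sent to Azure.

def rollup_peaks(samples, sample_interval=20, window=900):
    """samples: ordered list of counter values, one per 20-second interval.

    Returns one peak value per 15-minute window (900 s / 20 s = 45 samples).
    """
    per_window = window // sample_interval
    return [max(samples[i:i + per_window])
            for i in range(0, len(samples), per_window)]

# 90 samples = two 15-minute windows; the peak of each window is kept
samples = [10] * 44 + [95] + [20] * 44 + [60]
print(rollup_peaks(samples))  # [95, 60]
```

Keeping the peak rather than the average is why this model avoids the under-sizing seen with the deprecated one-time discovery method.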

COUNTER IMPACT ON ASSESSMENT

cpu.usage.average Recommended VM size and cost

mem.usage.average Recommended VM size and cost

virtualDisk.read.average Calculates disk size, storage cost, VM size

virtualDisk.write.average Calculates disk size, storage cost, VM size

virtualDisk.numberReadAveraged.average Calculates disk size, storage cost, VM size

virtualDisk.numberWriteAveraged.average Calculates disk size, storage cost, VM size

net.received.average Calculates VM size

net.transmitted.average Calculates VM size


WARNING
The one-time discovery method that relied on vCenter Server's statistic settings for performance data collection is now
deprecated.

Next steps
Learn how to create a group for assessment.
Learn more about how assessments are calculated.
Group machines for assessment
12/11/2018 • 2 minutes to read • Edit Online

This article describes how to create a group of machines for assessment by Azure Migrate. Azure Migrate assesses
machines in the group to check whether they're suitable for migration to Azure, and provides sizing and cost
estimations for running the machines in Azure. If you know which machines need to be migrated together, you can
manually create the group in Azure Migrate using the following method. If you are not sure which
machines need to be grouped together, you can use the dependency visualization functionality in Azure
Migrate to create groups. Learn more.

NOTE
The dependency visualization functionality is not available in Azure Government.

Create a group
1. In the Overview of the Azure Migrate project, under Manage, click Groups > +Group, and specify a group
name.
2. Add one or more machines to the group, and click Create.
3. You can optionally select to run a new assessment for the group.
After the group is created, you can modify it by selecting the group on the Groups page, and then adding or
removing machines.

Next steps
Learn how to use machine dependency mapping to create high confidence groups.
Learn more about how assessments are calculated.
Group machines using machine dependency
mapping
4/9/2019 • 7 minutes to read • Edit Online

This article describes how to create a group of machines for Azure Migrate assessment by visualizing
dependencies of machines. You typically use this method when you want to assess groups of VMs with higher
levels of confidence by cross-checking machine dependencies, before you run an assessment. Dependency
visualization can help you effectively plan your migration to Azure. It helps you ensure that nothing is left behind
and surprise outages do not occur when you are migrating to Azure. You can discover all interdependent systems
that need to migrate together and identify whether a running system is still serving users or is a candidate for
decommissioning instead of migration.

NOTE
The dependency visualization functionality is not available in Azure Government.

Prepare for dependency visualization


Azure Migrate leverages Service Map solution in Azure Monitor logs to enable dependency visualization of
machines.
Associate a Log Analytics workspace
To leverage dependency visualization, you need to associate a Log Analytics workspace, either new or existing,
with an Azure Migrate project. You can only create or attach a workspace in the same subscription where the
migration project is created.
To attach a Log Analytics workspace to a project, in Overview, go to the Essentials section of the project and click
Requires configuration

While associating a workspace, you will get the option to create a new workspace or attach an existing one:
When you create a new workspace, you need to specify a name for the workspace. The workspace is
then created in a region in the same Azure geography as the migration project.
When you attach an existing workspace, you can pick from all the available workspaces in the same
subscription as the migration project. Note that only those workspaces are listed which were created in
a region where Service Map is supported. To be able to attach a workspace, ensure that you have
'Reader' access to the workspace.
NOTE
You cannot change the workspace associated to a migration project.

Download and install the VM agents


Once you configure a workspace, you need to download and install agents on each on-premises machine that you
want to evaluate. In addition, if you have machines with no internet connectivity, you need to download and install
Log Analytics gateway on them.
1. In Overview, click Manage > Machines, and select the required machine.
2. In the Dependencies column, click Install agents.
3. On the Dependencies page, download and install the Microsoft Monitoring Agent (MMA), and the
Dependency agent on each VM you want to evaluate.
4. Copy the workspace ID and key. You need these when you install the MMA on the on-premises machine.

NOTE
To automate the installation of agents you can use any deployment tool like System Center Configuration Manager or use
our partner tool, Intigua, that has an agent deployment solution for Azure Migrate.

Install the MMA


Install the agent on a Windows machine
To install the agent on a Windows machine:
1. Double-click the downloaded agent.
2. On the Welcome page, click Next. On the License Terms page, click I Agree to accept the license.
3. In Destination Folder, keep or modify the default installation folder > Next.
4. In Agent Setup Options, select Azure Log Analytics > Next.
5. Click Add to add a new Log Analytics workspace. Paste in the workspace ID and key that you copied from the
portal. Click Next.
You can install the agent from the command line or using an automated method such as System Center
Configuration Manager. Learn more about using these methods to install the MMA agent.
Install the agent on a Linux machine
To install the agent on a Linux machine:
1. Transfer the appropriate bundle (x86 or x64) to your Linux computer using scp/sftp.
2. Install the bundle by using the --install argument.
sudo sh ./omsagent-<version>.universal.x64.sh --install -w <workspace id> -s <workspace key>

Learn more about the list of Linux operating systems support by MMA.
Install the agent on a machine monitored by SCOM
For machines monitored by System Center Operations Manager 2012 R2 or later, there is no need to install the
MMA agent. Service Map has an integration with SCOM that leverages the SCOM MMA to gather the necessary
dependency data. You can enable the integration using the guidance here. Note, however, that the Dependency
agent will need to be installed on these machines.
Install the Dependency agent
1. To install the Dependency agent on a Windows machine, double-click the setup file and follow the wizard.
2. To install the Dependency agent on a Linux machine, install as root using the following command:
sh InstallDependencyAgent-Linux64.bin

Learn more about the Dependency agent support for the Windows and Linux operating systems.
Learn more about how you can use scripts to install the Dependency agent.

Create a group
1. After you install the agents, go to the portal and click Manage > Machines.
2. Search for the machine where you installed the agents.
3. The Dependencies column for the machine should now show as View Dependencies. Click the column
to view the dependencies of the machine.
4. The dependency map for the machine shows the following details:
Inbound (Clients) and outbound (Servers) TCP connections to/from the machine
The dependent machines that do not have the MMA and dependency agent installed are
grouped by port numbers
The dependent machines that have the MMA and the dependency agent installed are shown as
separate boxes
Processes running inside the machine; you can expand each machine box to view the processes
Properties like fully qualified domain name, operating system, and MAC address of each
machine; you can click each machine box to view these details

5. You can look at dependencies for different time durations by clicking on the time duration in the time range
label. By default the range is an hour. You can modify the time range, or specify start and end dates, and
duration.
NOTE
Currently, the dependency visualization UI does not support selection of a time range longer than an hour. Use
Azure Monitor logs to query the dependency data over a longer duration.

6. After you've identified dependent machines that you want to group together, use Ctrl+Click to select
multiple machines on the map, and click Group machines.
7. Specify a group name. Verify that the dependent machines are discovered by Azure Migrate.

NOTE
If a dependent machine is not discovered by Azure Migrate, you cannot add it to the group. To add such machines
to the group, you need to run the discovery process again with the right scope in vCenter Server and ensure that
the machine is discovered by Azure Migrate.

8. If you want to create an assessment for this group, select the checkbox to create a new assessment for the
group.
9. Click OK to save the group.
Once the group is created, it is recommended to install agents on all the machines of the group and refine the
group by visualizing the dependency of the entire group.

Query dependency data from Azure Monitor logs


Dependency data captured by Service Map is available for querying in the Log Analytics workspace associated
with your Azure Migrate project. Learn more about the Service Map data tables to query in Azure Monitor logs.
To run the Kusto queries:
1. After you install the agents, go to the portal and click Overview.
2. In Overview, go to the Essentials section of the project and click the workspace name provided next to OMS
Workspace.
3. On the Log Analytics workspace page, click General > Logs.
4. Write your query to gather dependency data using Azure Monitor logs. Find sample queries in the next
section.
5. Run your query by clicking Run.
Learn more about how to write Kusto queries.
Sample Azure Monitor logs queries
Following are sample queries you can use to extract dependency data. You can modify the queries to extract your
preferred data points. An exhaustive list of the fields in dependency data records is available here. Find more
sample queries here.
Summarize inbound connections on a set of machines
Note that the records in the table for connection metrics, VMConnection, do not represent individual physical
network connections. Multiple physical network connections are grouped into a logical connection. Learn more
about how physical network connection data is aggregated into a single logical record in VMConnection.
// the machines of interest
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort

Summarize volume of data sent and received on inbound connections between a set of machines

// the machines of interest
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(BytesSent), sum(BytesReceived) by Computer, Direction, SourceIp, DestinationIp, DestinationPort

Next steps
Learn more about the FAQs on dependency visualization.
Learn how to refine the group by visualizing group dependencies.
Learn more about how assessments are calculated.
Refine a group using group dependency mapping
4/9/2019 • 7 minutes to read • Edit Online

This article describes how to refine a group by visualizing dependencies of all machines in the group. You typically
use this method when you want to refine membership for an existing group, by cross-checking group
dependencies, before you run an assessment. Refining a group using dependency visualization can help you
effectively plan your migration to Azure. You can discover all interdependent systems that need to migrate
together. It helps you ensure that nothing is left behind and surprise outages do not occur when you are migrating
to Azure.

NOTE
Groups for which you want to visualize dependencies shouldn't contain more than 10 machines. If you have more than 10 machines in the group, we recommend splitting it into smaller groups to use the dependency visualization functionality.

NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a Log
Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the terminology to
better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.

Prepare for dependency visualization


Azure Migrate uses the Service Map solution in Azure Monitor logs to enable dependency visualization of machines.

NOTE
The dependency visualization functionality is not available in Azure Government.

Associate a Log Analytics workspace


To leverage dependency visualization, you need to associate a Log Analytics workspace, either new or existing, with
an Azure Migrate project. You can only create or attach a workspace in the same subscription where the migration
project is created.
To attach a Log Analytics workspace to a project, in Overview, go to the Essentials section of the project and click Requires configuration.
While associating a workspace, you will get the option to create a new workspace or attach an existing one:
When you create a new workspace, you need to specify a name for the workspace. The workspace is then
created in a region in the same Azure geography as the migration project.
When you attach an existing workspace, you can pick from all the available workspaces in the same subscription as the migration project. Only workspaces created in a region where Service Map is supported are listed. To attach a workspace, ensure that you have 'Reader' access to it.

NOTE
You cannot change the workspace associated with a migration project.

Download and install the VM agents


To view dependencies of a group, you need to download and install agents on each on-premises machine that is
part of the group. In addition, if you have machines with no internet connectivity, you need to download and install
the Log Analytics gateway on them.
1. In Overview, click Manage > Groups, go to the required group.
2. In the list of machines, in the Dependency agent column, click Requires installation to see instructions
regarding how to download and install the agents.
3. On the Dependencies page, download and install the Microsoft Monitoring Agent (MMA), and the
Dependency agent on each VM that is part of the group.
4. Copy the workspace ID and key. You need these when you install the MMA on the on-premises machines.
Install the MMA
Install the agent on a Windows machine
To install the agent on a Windows machine:
1. Double-click the downloaded agent.
2. On the Welcome page, click Next. On the License Terms page, click I Agree to accept the license.
3. In Destination Folder, keep or modify the default installation folder > Next.
4. In Agent Setup Options, select Azure Log Analytics > Next.
5. Click Add to add a new Log Analytics workspace. Paste in the workspace ID and key that you copied from the
portal. Click Next.
You can install the agent from the command line or using an automated method such as System Center
Configuration Manager. Learn more about using these methods to install the MMA agent.
Install the agent on a Linux machine
To install the agent on a Linux machine:
1. Transfer the appropriate bundle (x86 or x64) to your Linux computer using scp/sftp.
2. Install the bundle by using the --install argument.
sudo sh ./omsagent-<version>.universal.x64.sh --install -w <workspace id> -s <workspace key>

Install the agent on a machine monitored by System Center Operations Manager


For machines monitored by Operations Manager 2012 R2 or later, there is no need to install the MMA agent.
Service Map integrates with Operations Manager and uses the Operations Manager MMA to gather the
necessary dependency data. You can enable the integration using the guidance here. Note, however, that the
Dependency agent still needs to be installed on these machines.
Install the Dependency agent
1. To install the Dependency agent on a Windows machine, double-click the setup file and follow the wizard.
2. To install the Dependency agent on a Linux machine, install as root using the following command:
sh InstallDependencyAgent-Linux64.bin

Learn more about the Dependency agent support for the Windows and Linux operating systems.
Learn more about how you can use scripts to install the Dependency agent.

Refine the group based on dependency visualization


Once you have installed agents on all the machines of the group, you can visualize the dependencies of the group
and refine it by following these steps.
1. In the Azure Migrate project, under Manage, click Groups, and select the group.
2. On the group page, click View Dependencies to open the group dependency map.
3. The dependency map for the group shows the following details:
Inbound (Clients) and outbound (Servers) TCP connections to/from all the machines that are part of
the group
The dependent machines that do not have the MMA and dependency agent installed are grouped
by port numbers
The dependent machines that have the MMA and the dependency agent installed are shown as
separate boxes
Processes running inside each machine; you can expand each machine box to view the processes
Properties such as the fully qualified domain name, operating system, and MAC address of each
machine; you can click a machine box to view these details

4. To view more granular dependencies, click the time range to modify it. By default, the range is an hour. You
can modify the time range, or specify start and end dates, and duration.

NOTE
Currently, the dependency visualization UI does not support selection of a time range longer than an hour. Use Azure
Monitor logs to query the dependency data over a longer duration.

5. Verify the dependent machines and the processes running inside each machine, and identify the machines
that should be added to or removed from the group.
6. Use Ctrl+Click to select machines on the map to add or remove them from the group.
You can only add machines that have been discovered.
Adding and removing machines from a group invalidates past assessments for it.
You can optionally create a new assessment when you modify the group.
7. Click OK to save the group.

If you want to check the dependencies of a specific machine that appears in the group dependency map, set up
machine dependency mapping.

Query dependency data from Azure Monitor logs


Dependency data captured by Service Map is available for querying in the Log Analytics workspace associated
with your Azure Migrate project. Learn more about the Service Map data tables to query in Azure Monitor logs.
To run the Kusto queries:
1. After you install the agents, go to the portal and click Overview.
2. In Overview, go to the Essentials section of the project and click the workspace name provided next to OMS
Workspace.
3. On the Log Analytics workspace page, click General > Logs.
4. Write your query to gather dependency data using Azure Monitor logs. Find sample queries in the next section.
5. Run the query by clicking Run.
Learn more about how to write Kusto queries.

Sample Azure Monitor logs queries


Following are sample queries you can use to extract dependency data. You can modify the queries to extract your
preferred data points. An exhaustive list of the fields in dependency data records is available here. Find more
sample queries here.
Summarize inbound connections on a set of machines
Note that the records in the table for connection metrics, VMConnection, do not represent individual physical
network connections. Multiple physical network connections are grouped into a logical connection. Learn more
about how physical network connection data is aggregated into a single logical record in VMConnection.

let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort
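The note above says VMConnection rows are logical aggregates of many physical network connections. As an illustration only, the sketch below groups hypothetical per-connection records by direction, source, destination, and port (simplified stand-in fields, not the exact VMConnection schema or aggregation rules):

```python
from collections import defaultdict

def aggregate_connections(physical_connections):
    """Roll up physical connections sharing the same direction, source,
    destination, and port into one logical record, in the spirit of
    how VMConnection aggregates connection data."""
    logical = defaultdict(lambda: {"LinksEstablished": 0, "BytesSent": 0, "BytesReceived": 0})
    for conn in physical_connections:
        key = (conn["Direction"], conn["SourceIp"], conn["DestinationIp"], conn["DestinationPort"])
        rec = logical[key]
        rec["LinksEstablished"] += 1          # each physical connection counts as one link
        rec["BytesSent"] += conn["BytesSent"]
        rec["BytesReceived"] += conn["BytesReceived"]
    return dict(logical)

# Two physical connections between the same endpoints collapse into one record
conns = [
    {"Direction": "inbound", "SourceIp": "10.0.0.4", "DestinationIp": "10.0.0.5",
     "DestinationPort": 443, "BytesSent": 100, "BytesReceived": 2000},
    {"Direction": "inbound", "SourceIp": "10.0.0.4", "DestinationIp": "10.0.0.5",
     "DestinationPort": 443, "BytesSent": 50, "BytesReceived": 1000},
]
summary = aggregate_connections(conns)
```

This is why `sum(LinksEstablished)` in the queries above counts links rather than rows.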

Summarize volume of data sent and received on inbound connections between a set of machines

// the machines of interest
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(BytesSent), sum(BytesReceived) by Computer, Direction, SourceIp, DestinationIp, DestinationPort

Next steps
Learn more about the FAQs on dependency visualization.
Learn more about how assessments are calculated.
Customize an assessment
1/10/2019 • 5 minutes to read • Edit Online

Azure Migrate creates assessments with default properties. After creating an assessment, you can modify the
default properties using the instructions in this article.

Edit assessment properties


1. In the Assessments page of the migration project, select the assessment, and click Edit properties.
2. Customize the assessment properties based on the following details:

Target location
The Azure location to which you want to migrate. Azure Migrate currently supports 30 regions including Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, China East, China North, East Asia, East US, Germany Central, Germany Northeast, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, South India, UK South, UK West, US Gov Arizona, US Gov Texas, US Gov Virginia, West Central US, West Europe, West India, West US, and West US 2.
Default: West US 2.

Storage type
Use this property to specify the type of disks you want to move to in Azure. For as on-premises sizing, you can specify the target disk type as either Premium-managed disks or Standard-managed disks. For performance-based sizing, you can specify the target disk type as Automatic, Premium-managed disks, or Standard-managed disks. When you specify the storage type as Automatic, the disk recommendation is based on the performance data of the disks (IOPS and throughput). For example, if you want to achieve a single-instance VM SLA of 99.9%, you may want to specify the storage type as Premium-managed disks; this ensures that all disks in the assessment are recommended as Premium-managed disks. Note that Azure Migrate only supports managed disks for migration assessment.
Default: Premium-managed disks (with sizing criterion as on-premises sizing).

Reserved Instances
Specify whether you have reserved instances in Azure, and Azure Migrate estimates the cost accordingly. Reserved instances are currently only supported for the Pay-As-You-Go offer in Azure Migrate.
Default: 3-year reserved instances.

Sizing criterion
The criterion used by Azure Migrate to right-size VMs for Azure. You can either do performance-based sizing or size the VMs as on-premises, without considering the performance history.
Default: Performance-based sizing.

Performance history
The duration to consider for evaluating the performance of the VMs. This property is only applicable when the sizing criterion is performance-based sizing.
Default: One day.

Percentile utilization
The percentile value of the performance sample set to be considered for right-sizing. This property is only applicable when the sizing criterion is performance-based sizing.
Default: 95th percentile.

VM series
The VM series you would like to consider for right-sizing. For example, if you have a production environment that you do not plan to migrate to A-series VMs in Azure, you can exclude A-series from the list of series, and right-sizing is done only in the selected series.
Default: All VM series are selected.

Comfort factor
Azure Migrate considers a buffer (comfort factor) during assessment. This buffer is applied on top of machine utilization data for VMs (CPU, memory, disk, and network). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM. However, with a comfort factor of 2.0x, the result is a 4-core VM instead.
Default: 1.3x.

Offer
The Azure offer that you are enrolled in.
Default: Pay-as-you-go.

Currency
Billing currency.
Default: US dollars.

Discount (%)
Any subscription-specific discount you receive on top of the Azure offer.
Default: 0%.

VM uptime
If your VMs are not going to be running 24x7 in Azure, you can specify the duration (number of days per month and number of hours per day) for which they would be running, and the cost estimations are done accordingly.
Default: 31 days per month and 24 hours per day.

Azure Hybrid Benefit
Specify whether you have Software Assurance and are eligible for Azure Hybrid Benefit. If set to Yes, non-Windows Azure prices are considered for Windows VMs.
Default: Yes.

3. Click Save to update the assessment.
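Two of the properties above, comfort factor and VM uptime, lend themselves to a quick numeric sketch. The functions and the hourly rate below are hypothetical illustrations of the described behavior, not Azure Migrate's actual implementation or pricing:

```python
import math

def size_with_comfort(utilized_cores, comfort_factor):
    """Apply the comfort-factor buffer on top of utilized capacity,
    then round up to whole cores."""
    return math.ceil(utilized_cores * comfort_factor)

def monthly_compute_cost(hourly_rate, days_per_month, hours_per_day):
    """Scale the cost estimate by the VM uptime setting instead of 24x7."""
    return hourly_rate * days_per_month * hours_per_day

# The document's example: a 10-core VM at 20% utilization -> 2 effective
# cores; with a 2.0x comfort factor, 4 cores are required.
effective_cores = 10 * 0.20
required_cores = size_with_comfort(effective_cores, 2.0)

# Hypothetical $0.10/hour VM running only 20 days/month, 12 hours/day
cost = monthly_compute_cost(0.10, 20, 12)
```

With the default 1.3x comfort factor, the same 2 effective cores would round up to 3 cores.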

FAQs on assessment properties


What is the difference between as-on-premises sizing and performance-based sizing?
When you specify the sizing criterion to be as-on-premises sizing, Azure Migrate does not consider the
performance data of the VMs and sizes the VMs based on the on-premises configuration. If the sizing criterion is
performance-based, the sizing is done based on utilization data. For example, consider an on-premises VM with 4
cores and 8 GB of memory, at 50% CPU utilization and 50% memory utilization. If the sizing criterion is as
on-premises sizing, an Azure VM SKU with 4 cores and 8 GB of memory is recommended. If the sizing criterion is
performance-based, a VM SKU of 2 cores and 4 GB is recommended, because the utilization percentage is
considered while recommending the size.
Similarly, for disks, the disk sizing depends on two assessment properties: sizing criterion and storage type. If the
sizing criterion is performance-based and the storage type is Automatic, the IOPS and throughput values of the disk
are considered to identify the target disk type (Standard or Premium). If the sizing criterion is performance-based
and the storage type is Premium, a premium disk is recommended; the premium disk SKU in Azure is selected
based on the size of the on-premises disk. The same logic is used to do disk sizing when the sizing criterion is
as on-premises sizing and the storage type is Standard or Premium.
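The disk-type decision described above can be sketched as follows. The IOPS and throughput thresholds here are made-up placeholders; the real service compares against the actual Standard and Premium disk SKU limits:

```python
def recommend_disk_type(sizing_criterion, storage_type, iops=0, throughput_mbps=0):
    """Pick the target disk type from the two assessment properties.

    An explicit storage type (Premium or Standard) applies for either sizing
    criterion; with performance-based sizing and 'automatic' storage, the
    disk's performance data drives the choice.
    """
    if storage_type in ("premium", "standard"):
        return storage_type  # explicit setting wins
    # storage_type == "automatic" (only meaningful with performance-based sizing)
    STANDARD_MAX_IOPS = 500        # placeholder threshold, not a real SKU limit
    STANDARD_MAX_THROUGHPUT = 60   # placeholder MB/s threshold
    if iops > STANDARD_MAX_IOPS or throughput_mbps > STANDARD_MAX_THROUGHPUT:
        return "premium"
    return "standard"
```

For example, a disk observed at 1,200 IOPS under automatic storage type would come back as Premium, while a lightly used disk would stay Standard.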
What impact does performance history and percentile utilization have on the size recommendations?
These properties are only applicable for performance-based sizing. Azure Migrate collects performance history of
on-premises machines and uses it to recommend the VM size and disk type in Azure.
The collector appliance continuously profiles the on-premises environment to gather real-time utilization data
every 20 seconds.
The appliance rolls up the 20-second samples, and creates a single data point for every 15 minutes. To create
the single data point, the appliance selects the peak value from all the 20-second samples, and sends it to
Azure.
When you create an assessment in Azure, based on the performance duration and performance history
percentile value, Azure Migrate calculates the effective utilization value and uses it for sizing.
For example, if you have set the performance duration to 1 day and the percentile value to the 95th percentile, Azure
Migrate uses the 15-minute sample points sent by the collector for the last day, sorts them in ascending order, and
picks the 95th percentile value as the effective utilization. The 95th percentile value ensures that you ignore
any outliers, which might be included if you picked the 99th percentile. If you want to pick the peak usage for the
period and do not want to miss any outliers, select the 99th percentile.
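The effective-utilization calculation described above can be sketched with a simple nearest-rank percentile; the sample data and the exact percentile method are illustrative stand-ins for the service's internal logic:

```python
import math

def effective_utilization(samples, percentile):
    """Sort the 15-minute peak samples in ascending order and pick the
    requested percentile (nearest-rank); sizing uses this value."""
    ordered = sorted(samples)
    idx = math.ceil(percentile / 100.0 * len(ordered)) - 1
    return ordered[idx]

# 20 hypothetical 15-minute CPU samples (percent) with a single spike at 95
cpu_samples = [10, 11, 12, 13, 14, 15, 10, 11, 12, 13,
               14, 15, 10, 11, 12, 13, 14, 15, 12, 95]
p95 = effective_utilization(cpu_samples, 95)  # ignores the lone outlier
p99 = effective_utilization(cpu_samples, 99)  # captures the outlier
```

Here the 95th percentile lands on the steady-state value (15%) while the 99th percentile picks up the one-off spike (95%), which is exactly the trade-off the paragraph above describes.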

Next steps
Learn more about how assessments are calculated.
Migrate machines after assessment
12/11/2018 • 2 minutes to read • Edit Online

Azure Migrate assesses on-premises machines to check whether they're suitable for migration to Azure, and
provides sizing and cost estimations for running the machine in Azure. Currently, Azure Migrate only assesses
machines for migration. The migration itself is currently performed using other Azure services.
This article describes how to get suggestions for a migration tool after you've run a migration assessment.

NOTE
The migration tool suggestion is not available in Azure Government.

Migration tool suggestion


To get suggestions regarding migration tools, you need to do a deep discovery of the on-premises environment.
The deep discovery is done by installing agents on the on-premises machines.
1. Create an Azure Migrate project, discover on-premises machines, and create a migration assessment. Learn
more.
2. Download and install the Azure Migrate agents on each on-premises machine for which you want to see a
recommended migration method. Follow this procedure to install the agents.
3. Identify your on-premises machines that are suitable for lift-and-shift migration. These are the VMs that don't
require any changes to apps running on them, and can be migrated as is.
4. For lift-and-shift migration, we suggest using Azure Site Recovery. Learn more. Alternately, you can use third-
party tools that support migration to Azure.
5. If you have on-premises machines that aren't suitable for a lift-and-shift migration, that is, if you want to
migrate a specific app rather than an entire VM, you can use other migration tools. For example, we suggest the
Azure Database Migration Service if you want to migrate on-premises databases such as SQL Server, MySQL,
or Oracle to Azure.

Review suggested migration methods


1. Before you can get a suggested migration method, you need to create an Azure Migrate project, discover
on-premises machines, and run a migration assessment. Learn more.
2. After the assessment is created, view it in the project > Overview > Dashboard. Click Assessment
Readiness

3. In Suggested Tool, review the suggestions for tools you can use for migration.
Next steps
Learn more about how assessments are calculated.
Scale migration of VMs using Azure Site Recovery
4/1/2019 • 3 minutes to read • Edit Online

This article helps you understand the process of using scripts to migrate a large number of VMs using Azure Site
Recovery. The scripts are available for download in the Azure PowerShell Samples repo on GitHub. They can be
used to migrate VMware, AWS, and GCP VMs, and physical servers, to Azure, and they support migration to
managed disks. You can also use these scripts to migrate Hyper-V VMs if you migrate the VMs as physical servers.
The scripts use the Azure Site Recovery PowerShell module documented here.

Current limitations:
The scripts support specifying a static IP address only for the primary NIC of the target VM.
The scripts do not take Azure Hybrid Benefit-related inputs; you need to manually update the properties of the
replicated VM in the portal.

How does it work?


Prerequisites
Before you get started, you need to do the following steps:
Ensure that the Site Recovery vault is created in your Azure subscription
Ensure that the Configuration Server and Process Server are installed in the source environment and the vault
is able to discover the environment
Ensure that a Replication Policy is created and associated with the Configuration Server
Ensure that you have added the VM admin account to the configuration server (it is used to replicate the
on-premises VMs)
Ensure that the target artifacts in Azure are created
Target Resource Group
Target Storage Account (and its Resource Group) - Create a premium storage account if you plan to
migrate to premium-managed disks
Cache Storage Account (and its Resource Group) - Create a standard storage account in the same region
as the vault
Target Virtual Network for failover (and its Resource Group)
Target Subnet
Target Virtual Network for Test failover (and its Resource Group)
Availability Set (if needed)
Target Network Security Group and its Resource Group
Ensure that you have decided on the properties of the target VM
Target VM name
Target VM size in Azure (can be decided using Azure Migrate assessment)
Private IP Address of the primary NIC in the VM
Download the scripts from Azure PowerShell Samples repo on GitHub
CSV Input file
Once you have all the pre-requisites completed, you need to create a CSV file, which has data for each source
machine that you want to migrate. The input CSV must have a header line with the input details and a row with
details for each machine that needs to be migrated. All the scripts are designed to work on the same CSV file. A
sample CSV template is available in the scripts folder for your reference.
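Because the exact column headers come from the sample template in the scripts folder, the sketch below uses hypothetical column names purely to show the shape of a per-machine input file with one header line and one row per machine:

```python
import csv
import io

# Hypothetical columns; take the real header line from the sample CSV
# template shipped with the scripts.
FIELDS = ["SOURCE_MACHINE_NAME", "TARGET_MACHINE_NAME", "TARGET_VM_SIZE", "TARGET_PRIVATE_IP"]

rows = [
    {"SOURCE_MACHINE_NAME": "web01", "TARGET_MACHINE_NAME": "web01-az",
     "TARGET_VM_SIZE": "Standard_D2s_v3", "TARGET_PRIVATE_IP": "10.1.0.4"},
    {"SOURCE_MACHINE_NAME": "db01", "TARGET_MACHINE_NAME": "db01-az",
     "TARGET_VM_SIZE": "Standard_E4s_v3", "TARGET_PRIVATE_IP": "10.1.0.5"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()        # header line with the input details
writer.writerows(rows)      # one row per machine to be migrated
csv_text = buf.getvalue()
```

All the scripts read the same file, so generating it once (from an inventory export, for example) keeps every stage of the migration consistent.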
Script execution
Once the CSV is ready, you can execute the following steps to perform migration of the on-premises VMs:

1. asr_startmigration.ps1: Enables replication for all the VMs listed in the CSV. The script creates a CSV output with the job details for each VM.
2. asr_replicationstatus.ps1: Checks the status of replication. The script creates a CSV with the status for each VM.
3. asr_updateproperties.ps1: Once the VMs are replicated/protected, use this script to update the target properties of the VMs (compute and network properties).
4. asr_propertiescheck.ps1: Verifies that the properties are appropriately updated.
5. asr_testmigration.ps1: Starts the test failover of the VMs listed in the CSV. The script creates a CSV output with the job details for each VM.
6. asr_cleanuptestmigration.ps1: Once you manually validate the VMs that were test failed over, use this script to clean up the test failover VMs.
7. asr_migration.ps1: Performs an unplanned failover for the VMs listed in the CSV. The script creates a CSV output with the job details for each VM. The script does not shut down the on-premises VMs before triggering the failover; for application consistency, it is recommended that you manually shut down the VMs before executing the script.
8. asr_completemigration.ps1: Performs the commit operation on the VMs and deletes the Azure Site Recovery entities.
9. asr_postmigration.ps1: If you plan to assign network security groups to the NICs post-failover, use this script. It assigns an NSG to any one NIC in the target VM.

How to migrate to managed disks?


The script, by default, migrates the VMs to managed disks in Azure. If the target storage account provided is a
premium storage account, premium-managed disks are created post migration. The cache storage account can still
be a standard account. If the target storage account is a standard storage account, standard disks are created post
migration.
Next steps
Learn more about migrating servers to Azure using Azure Site Recovery
Troubleshoot Azure Migrate
3/29/2019 • 20 minutes to read • Edit Online

Troubleshoot common errors


Azure Migrate assesses on-premises workloads for migration to Azure. Use this article to troubleshoot issues
when deploying and using Azure Migrate.
I am using the OVA that continuously discovers my on-premises environment, but the VMs that are deleted in
my on-premises environment are still being shown in the portal.
The continuous discovery appliance only collects performance data continuously; it does not detect configuration
changes in the on-premises environment (for example, VM addition or deletion, or disk addition). If there is a
configuration change in the on-premises environment, you can do the following to reflect the changes in the portal:
Addition of items (VMs, disks, cores, and so on): To reflect these changes in the Azure portal, stop the
discovery from the appliance and then start it again. This ensures that the changes are updated in the
Azure Migrate project.

Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if you stop
and start the discovery. This is because data from subsequent discoveries is appended to older discoveries
and not overridden. In this case, you can simply ignore the VM in the portal by removing it from your group
and recalculating the assessment.
Deletion of Azure Migrate projects and associated Log Analytics workspace
When you delete an Azure Migrate project, it deletes the migration project along with all the groups and
assessments. However, if you have attached a Log Analytics workspace to the project, it does not automatically
delete the Log Analytics workspace. This is because the same Log Analytics workspace might be used for multiple
use cases. If you would like to delete the Log Analytics workspace as well, you need to do it manually.
1. Browse to the Log Analytics workspace attached to the project. a. If you have not deleted the migration
project yet, you can find the link to the workspace from the project overview page in the Essentials section.
b. If you already deleted the migration project, click Resource Groups in the left pane in Azure portal and
go to the Resource Group in which the workspace was created and then browse to it.
2. Follow the instructions in this article to delete the workspace.
Migration project creation failed with error Requests must contain user identity headers
This issue can happen for users who do not have access to the Azure Active Directory (Azure AD) tenant of the
organization. When a user is added to an Azure AD tenant for the first time, they receive an email invitation to join
the tenant. The user needs to accept the invitation in that email to be successfully added to the tenant. If you are
unable to find the email, reach out to a user who already has access to the tenant and ask them to resend the
invitation to you using the steps specified here.
Once the invitation email is received, open the email and click the link in it to accept the invitation. After that, sign
out of the Azure portal and sign in again; refreshing the browser will not work. You can then try creating the
migration project.
I am unable to export the assessment report
If you are unable to export the assessment report from the portal, try using the below REST API to get a download
URL for the assessment report.
1. Install armclient on your computer (if you don’t have it already installed):
a. In an administrator Command Prompt window, run the following command:
@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object
System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET
"PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

b. In an administrator Windows PowerShell window, run the following command: choco install armclient

2. Get the download URL for the assessment report using Azure Migrate REST API
a. In an administrator Windows PowerShell window, run the following command: armclient login

This opens the Azure login pop-up where you need to sign in to Azure.

b. In the same PowerShell window, run the following command to get the download URL for the
assessment report (replace the URI parameters with the appropriate values; see the sample API request below):

armclient POST https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Migrate/projects/{projectName}/groups/{groupName}/assessments/{assessmentName}/downloadUrl?api-version=2018-02-02

Sample request and output:

.../resourceGroups/ContosoDemo/providers/Microsoft.Migrate/projects/Demo/groups/contosopayroll/assessments/assessment_11_16_2018_12_16_21/downloadUrl?api-version=2018-02-02
{
"assessmentReportUrl": "https://migsvcstoragewcus.blob.core.windows.net/4f7dddac-f33b-4368-8e6a-45afcbd9d4df/contosopayrollassessment_11_16_2018_12_16_21?sv=2016-05-31&sr=b&sig=litQmHuwi88WV%2FR%2BDZX0%2BIttlmPMzfVMS7r7dULK7Oc%3D&st=2018-11-20T16%3A09%3A30Z&se=2018-11-20T16%3A19%3A30Z&sp=r",
"expirationTime": "2018-11-20T22:09:30.5681954+05:30"
}

3. Copy the URL from the response and open it in a browser to download the assessment report.
4. Once the report is downloaded, browse to the downloaded folder and open the file in Excel to view it.
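If you script this step rather than pasting the URL into a browser, the response shown above can be parsed first. This sketch only parses a response shaped like the sample (the URL value below is a placeholder, and no API call is made):

```python
import json

def parse_download_response(response_text):
    """Extract the short-lived report URL and its expiration time from the
    downloadUrl API response body."""
    body = json.loads(response_text)
    # The SAS link in assessmentReportUrl stops working after expirationTime
    return body["assessmentReportUrl"], body["expirationTime"]

# Placeholder response shaped like the documented sample
sample = ('{"assessmentReportUrl": "https://example.blob.core.windows.net/report.xlsx?sv=2016-05-31&sp=r",'
          ' "expirationTime": "2018-11-20T22:09:30.5681954+05:30"}')
url, expires = parse_download_response(sample)
```

Because the URL is time-limited, download the report promptly after requesting it.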
Performance data for CPU, memory and disks is showing up as zeroes
Azure Migrate continuously profiles the on-premises environment to collect performance data of the on-premises
VMs. If you have just started the discovery of your environment, you need to wait at least a day for the
performance data collection to complete. If an assessment is created without waiting for one day, the performance
metrics show up as zeroes. After waiting for a day, you can either create a new assessment or update the
existing assessment by using the Recalculate option in the assessment report.
I specified an Azure geography, while creating a migration project, how do I find out the exact Azure region
where the discovered metadata would be stored?
You can go to the Essentials section in the Overview page of the project to identify the exact location where the
metadata is stored. The location is selected randomly within the geography by Azure Migrate and you cannot
modify it. If you want to create a project in a specific region only, you can use the REST APIs to create the
migration project and pass the desired region.

Collector issues
Deployment of Azure Migrate Collector failed with the error: The provided manifest file is invalid: Invalid OVF
manifest entry.
1. Verify if Azure Migrate Collector OVA file is downloaded correctly by checking its hash value. Refer to the article
to verify the hash value. If the hash value is not matching, download the OVA file again and retry the
deployment.
2. If it still fails and if you are using VMware vSphere Client to deploy the OVF, try deploying it through vSphere
Web Client. If it still fails, try using different web browser.
3. If you are using vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA
directly on ESXi host by following the below steps:
Connect to the ESXi host directly (instead of vCenter Server) using the web client (https://<host IP
Address>/ui)
Go to Home > Inventory
Click File > Deploy OVF template > Browse to the OVA and complete the deployment
4. If the deployment still fails, contact Azure Migrate support.
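Step 1 above relies on comparing the OVA's hash against the published value. As an illustrative sketch (the sample file and its contents here are placeholders, not the real OVA), the SHA-256 of a downloaded file can be computed like this:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a small sample file. For the collector, point this at the
# downloaded OVA and compare the result against the published hash value.
with open("sample.ova", "wb") as f:
    f.write(b"sample ova bytes")
print(file_sha256("sample.ova"))
```

If the computed value doesn't match the published hash, re-download the OVA before retrying the deployment.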
Unable to select the Azure cloud in the appliance, fails with error "Azure cloud selection failed"
This is a known issue with a fix available. Download the latest upgrade bits for the appliance
and update the appliance to apply the fix.
Collector is not able to connect to the internet
This can happen when the machine you are using is behind a proxy. Make sure you provide authorization
credentials if the proxy requires them. If you are using a URL-based firewall proxy to control outbound connectivity,
be sure to whitelist these required URLs:

URL                    PURPOSE
*.portal.azure.com     Required to check connectivity with the Azure service and validate time synchronization issues.
*.oneget.org           Required to download the PowerShell-based vCenter PowerCLI module.

The collector can't connect to the internet because of a certificate validation failure
This can happen if you are using an intercepting proxy to connect to the internet and have not imported the
proxy certificate onto the collector VM. You can import the proxy certificate using the steps detailed here.
The collector can't connect to the project using the project ID and key I copied from the portal.
Make sure you've copied and pasted the right information. To troubleshoot, install the Microsoft Monitoring Agent
(MMA) and verify if the MMA can connect to the project as follows:
1. On the collector VM, download the MMA.
2. To start the installation, double-click the downloaded file.
3. In setup, on the Welcome page, click Next. On the License Terms page, click I Agree to accept the license.
4. In Destination Folder, keep or modify the default installation folder > Next.
5. In Agent Setup Options, select Azure Log Analytics > Next.
6. Click Add to add a new Log Analytics workspace. Paste in project ID and key that you copied. Then click Next.
7. Verify that the agent can connect to the project. If it can't, verify the settings. If the agent can connect but the
collector can't, contact Support.
Error 802: Date and time synchronization error
The server clock might be out-of-synchronization with the current time by more than five minutes. Change the
clock time on the collector VM to match the current time, as follows:
1. Open an admin command prompt on the VM.
2. To check the time zone, run w32tm /tz.
3. To synchronize the time, run w32tm /resync.
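The five-minute tolerance behind error 802 can be expressed as a small check. This is only an illustration of the rule described above; in practice the reference time would come from an internet time server:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # error 802 fires beyond this skew

def clock_in_sync(local_time, reference_time, max_skew=MAX_SKEW):
    """Return True when the local clock is within the allowed skew of a reference."""
    return abs(local_time - reference_time) <= max_skew

reference = datetime(2019, 4, 15, 12, 0, tzinfo=timezone.utc)
print(clock_in_sync(reference + timedelta(minutes=3), reference))  # True: within tolerance
print(clock_in_sync(reference + timedelta(minutes=7), reference))  # False: resync needed
```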
VMware PowerCLI installation failed
Azure Migrate collector downloads PowerCLI and installs it on the appliance. A PowerCLI installation failure could
be due to unreachable endpoints for the PowerCLI repository. To troubleshoot, try manually installing PowerCLI on
the collector VM using the following steps:
1. Open Windows PowerShell in administrator mode
2. Go to the directory C:\ProgramFiles\ProfilerService\VMWare\Scripts\
3. Run the script InstallPowerCLI.ps1
Error UnhandledException Internal error occurred: System.IO.FileNotFoundException
This issue could occur due to an issue with VMware PowerCLI installation. Follow the below steps to resolve the
issue:
1. If you are not on the latest version of the collector appliance, upgrade your Collector to the latest version
and check if the issue is resolved.
2. If you already have the latest collector version, follow the below steps to do a clean installation of PowerCLI :
a. Close the web browser in the appliance.
b. Stop the 'Azure Migrate Collector' service by going to Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Stop.
c. Delete all folders starting with 'VMware' from the following locations: C:\Program
Files\WindowsPowerShell\Modules
C:\Program Files (x86)\WindowsPowerShell\Modules
d. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
e. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector
application should automatically download and install the required version of PowerCLI.
3. If the above does not resolve the issue, follow steps a to c above and then manually install PowerCLI in the
appliance using the following steps:
a. Clean up all incomplete PowerCLI installation files by following steps #a to #c in step #2 above.
b. Go to Start > Run > Open Windows PowerShell(x86) in administrator mode
c. Run the command: Install-Module "VMWare.VimAutomation.Core" -RequiredVersion "6.5.2.6234650"
(type 'A' when it asks for confirmation)
d. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
e. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector
application should automatically download and install the required version of PowerCLI.
4. If you are unable to download the module in the appliance due to firewall issues, download and install the
module in a machine that has access to internet using the following steps:
a. Clean up all incomplete PowerCLI installation files by following steps #a to #c in step #2 above.
b. Go to Start > Run > Open Windows PowerShell(x86) in administrator mode
c. Run the command: Install-Module "VMWare.VimAutomation.Core" -RequiredVersion "6.5.2.6234650"
(type 'A' when it asks for confirmation)
d. Copy all modules starting with "VMware" from “C:\Program Files (x86)\WindowsPowerShell\Modules”
to the same location on the collector VM.
e. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
f. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector application
should automatically download and install the required version of PowerCLI.
Error UnableToConnectToServer
Unable to connect to vCenter Server "Servername.com:9443" due to error: There was no endpoint listening at
https://Servername.com:9443/sdk that could accept the message.
Check whether you are running the latest version of the collector appliance; if not, upgrade the appliance to the latest
version.
If the issue persists in the latest version, it could be because the collector machine is unable to resolve the
vCenter Server name specified, or the specified port is wrong. By default, if no port is specified, the collector
tries to connect to port 443.
1. Try to ping the Servername.com from the collector machine.
2. If step 1 fails, try to connect to the vCenter server over IP address.
3. Identify the correct port number to connect to the vCenter.
4. Finally check if the vCenter server is up and running.
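The reachability checks in steps 1-3 can be scripted. The sketch below (the host name and port are placeholders; the real values come from your environment) tests whether a TCP connection to the vCenter endpoint succeeds:

```python
import socket

def can_connect(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 'vcenter.example.com' and 9443 are placeholders for your server and port;
# when no port is specified, the collector defaults to 443.
print(can_connect("vcenter.example.com", 9443))
```

A False result points at name resolution, a wrong port, or the server being down; retrying with the IP address instead of the name helps distinguish DNS issues from connectivity issues.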
Antivirus exclusions
To harden the Azure Migrate appliance, you need to exclude the following folders in the appliance from antivirus
scanning:
Folder that has the binaries for Azure Migrate Service. Exclude all sub-folders. %ProgramFiles%\ProfilerService
Azure Migrate Web Application. Exclude all sub-folders. %SystemDrive%\inetpub\wwwroot
Local Cache for Database and log files. Azure migrate service needs RW access to this folder.
%SystemDrive%\Profiler

Dependency visualization issues


I am unable to find the dependency visualization functionality for Azure Government projects.
Azure Migrate depends on Service Map for the dependency visualization functionality and since Service Map is
currently unavailable in Azure Government, this functionality is not available in Azure Government.
I installed the Microsoft Monitoring Agent (MMA) and the dependency agent on my on-premises VMs, but the
dependencies are not showing up in the Azure Migrate portal.
Once you have installed the agents, Azure Migrate typically takes 15-30 minutes to display the dependencies in the
portal. If you have waited for more than 30 minutes, ensure that the MMA agent can talk to the OMS
workspace by following the below steps:
For Windows VM:
1. Go to Control Panel and launch Microsoft Monitoring Agent
2. Go to the Azure Log Analytics (OMS) tab in the MMA properties pop-up
3. Ensure that the Status for the workspace is green.
4. If the status is not green, try removing the workspace and adding it again to MMA.
For Linux VM, ensure that the installation commands for MMA and dependency agent had succeeded.
What are the operating systems supported by MMA?
The list of Windows operating systems supported by MMA is here. The list of Linux operating systems supported
by MMA is here.
What are the operating systems supported by dependency agent?
The list of Windows operating systems supported by dependency agent is here. The list of Linux operating systems
supported by dependency agent is here.
I am unable to visualize dependencies in Azure Migrate for more than an hour.
Azure Migrate lets you visualize dependencies for a duration of up to one hour. Although Azure Migrate allows you to
go back to a particular date in the history for up to the last month, the maximum duration for which you can
visualize the dependencies is one hour. For example, you can use the time duration functionality in the
dependency map to view dependencies for yesterday, but only for a one-hour window. However, you
can use Azure Monitor logs to query the dependency data over a longer duration.
I am unable to visualize dependencies for groups with more than 10 VMs.
You can visualize dependencies for groups that have up to 10 VMs. If you have a group with more than 10 VMs, we
recommend splitting it into smaller groups and visualizing the dependencies.
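Splitting an oversized group into visualizable chunks is mechanical; a minimal sketch, with hypothetical VM names:

```python
def split_group(vms, max_size=10):
    """Split a list of VM names into groups of at most max_size members."""
    return [vms[i:i + max_size] for i in range(0, len(vms), max_size)]

vms = [f"vm-{n:02d}" for n in range(23)]  # hypothetical 23-VM group
groups = split_group(vms)
print([len(g) for g in groups])  # [10, 10, 3]
```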
I installed agents and used the dependency visualization to create groups. Now post failover, the machines show
"Install agent" action instead of "View dependencies"
Post planned or unplanned failover, on-premises machines are turned off and equivalent machines are spun up
in Azure. These machines acquire a different MAC address. They may acquire a different IP address based on
whether the user chose to retain the on-premises IP address or not. If both MAC and IP addresses differ, Azure
Migrate does not associate the on-premises machines with any Service Map dependency data and asks the user to
install agents instead of viewing dependencies.
Post test failover, the on-premises machines remain turned on as expected. Equivalent machines spun up in
Azure acquire a different MAC address and may acquire a different IP address. Unless the user blocks outgoing
Azure Monitor logs traffic from these machines, Azure Migrate does not associate the on-premises machines
with any Service Map dependency data and asks the user to install agents instead of viewing dependencies.
Troubleshoot Azure readiness issues
ISSUE: Unsupported boot type
FIX: Azure does not support VMs with an EFI boot type. It is recommended to convert the boot type to BIOS before you run a migration. You can use Azure Site Recovery to migrate such VMs, as it converts the boot type of the VM to BIOS during the migration.

ISSUE: Conditionally supported Windows OS
FIX: The OS has passed its end-of-support date and needs a Custom Support Agreement (CSA) for support in Azure. Consider upgrading the OS before migrating to Azure.

ISSUE: Unsupported Windows OS
FIX: Azure supports only selected Windows OS versions. Consider upgrading the OS of the machine before migrating to Azure.

ISSUE: Conditionally endorsed Linux OS
FIX: Azure endorses only selected Linux OS versions. Consider upgrading the OS of the machine before migrating to Azure.

ISSUE: Unendorsed Linux OS
FIX: The machine may boot in Azure, but Azure provides no OS support. Consider upgrading the OS to an endorsed Linux version before migrating to Azure.

ISSUE: Unknown operating system
FIX: The operating system of the VM was specified as 'Other' in vCenter Server, so Azure Migrate cannot identify the Azure readiness of the VM. Ensure that the OS running inside the machine is supported by Azure before you migrate the machine.

ISSUE: Unsupported OS bitness
FIX: VMs with a 32-bit OS may boot in Azure, but it is recommended to upgrade the OS of the VM from 32-bit to 64-bit before migrating to Azure.

ISSUE: Requires Visual Studio subscription
FIX: The machine has a Windows client OS running inside it, which is supported only with a Visual Studio subscription.

ISSUE: VM not found for the required storage performance
FIX: The storage performance (IOPS/throughput) required for the machine exceeds Azure VM support. Reduce the storage requirements for the machine before migration.

ISSUE: VM not found for the required network performance
FIX: The network performance (in/out) required for the machine exceeds Azure VM support. Reduce the networking requirements for the machine.

ISSUE: VM not found in the specified location
FIX: Use a different target location before migration.

ISSUE: One or more unsuitable disks
FIX: One or more disks attached to the VM do not meet the Azure requirements. For each disk attached to the VM, ensure that the size of the disk is < 4 TB; if not, shrink the disk size before migrating to Azure. Ensure that the performance (IOPS/throughput) needed by each disk is supported by Azure managed virtual machine disks.

ISSUE: One or more unsuitable network adapters
FIX: Remove unused network adapters from the machine before migration.

ISSUE: Disk count exceeds limit
FIX: Remove unused disks from the machine before migration.

ISSUE: Disk size exceeds limit
FIX: Azure supports disks of up to 4 TB. Shrink disks to less than 4 TB before migration.

ISSUE: Disk unavailable in the specified location
FIX: Make sure the disk is in your target location before you migrate.

ISSUE: Disk unavailable for the specified redundancy
FIX: The disk should use the redundancy storage type defined in the assessment settings (LRS by default).

ISSUE: Could not determine disk suitability due to an internal error
FIX: Try creating a new assessment for the group.

ISSUE: VM with required cores and memory not found
FIX: Azure couldn't find a suitable VM type. Reduce the memory and number of cores of the on-premises machine before you migrate.

ISSUE: Could not determine VM suitability due to an internal error
FIX: Try creating a new assessment for the group.

ISSUE: Could not determine suitability for one or more disks due to an internal error
FIX: Try creating a new assessment for the group.

ISSUE: Could not determine suitability for one or more network adapters due to an internal error
FIX: Try creating a new assessment for the group.

Collect logs
How do I collect logs on the collector VM?
Logging is enabled by default. Logs are located as follows:
C:\Profiler\ProfilerEngineDB.sqlite
C:\Profiler\Service.log
C:\Profiler\WebApp.log
To collect Event Tracing for Windows, do the following:
1. On the collector VM, open a PowerShell command window.
2. Run Get-EventLog -LogName Application | export-csv eventlog.csv.
How do I collect portal network traffic logs?
1. Open the browser and navigate and log in to the portal.
2. Press F12 to start the Developer Tools. If needed, clear the setting Clear entries on navigation.
3. Click the Network tab, and start capturing network traffic:
In Chrome, select Preserve log. The recording should start automatically; a red circle indicates that
traffic is being captured. If it doesn't appear, click the black circle to start.
In Microsoft Edge/IE, recording should start automatically. If it doesn't, click the green play button.
4. Try to reproduce the error.
5. After you've encountered the error while recording, stop recording, and save a copy of the recorded activity:
In Chrome, right-click and click Save as HAR with content. This zips and exports the logs as a .har file.
In Microsoft Edge/IE, click the Export captured traffic icon. This zips and exports the log.
6. Navigate to the Console tab to check for any warnings or errors. To save the console log:
In Chrome, right-click anywhere in the console log. Select Save as, to export and zip the log.
In Microsoft Edge/IE, right-click on the errors and select Copy all.
7. Close Developer Tools.

Collector error codes and recommended actions


Error 601 (CollectorExpired)
Message: Collector has expired.
Possible causes: Collector expired.
Recommended action: Download a new version of the collector and retry.

Error 751 (UnableToConnectToServer)
Message: Unable to connect to vCenter Server '%Name;' due to error: %ErrorMessage;
Possible causes: Check the error message for more details.
Recommended action: Resolve the issue and try again.

Error 752 (InvalidvCenterEndpoint)
Message: The server '%Name;' is not a vCenter Server.
Possible causes: Provide vCenter Server details.
Recommended action: Retry the operation with correct vCenter Server details.

Error 753 (InvalidLoginCredentials)
Message: Unable to connect to the vCenter Server '%Name;' due to error: %ErrorMessage;
Possible causes: Connection to the vCenter Server failed due to invalid login credentials.
Recommended action: Ensure that the login credentials provided are correct.

Error 754 (NoPerfDataAvailable)
Message: Performance data not available.
Possible causes: Check the Statistics Level in vCenter Server. It should be set to 3 for performance data to be available.
Recommended action: Change the Statistics Level to 3 (for the 5 minutes, 30 minutes, and 2 hours durations) and try after waiting at least a day.

Error 756 (NullInstanceUUID)
Message: Encountered a machine with null InstanceUUID.
Possible causes: vCenter Server may have an inappropriate object.
Recommended action: Resolve the issue and try again.

Error 757 (VMNotFound)
Message: Virtual machine is not found.
Possible causes: Virtual machine may be deleted: %VMID;
Recommended action: Ensure that the virtual machines selected while scoping the vCenter inventory exist during the discovery.

Error 758 (GetPerfDataTimeout)
Message: vCenter request timed out. Message: %Message;
Possible causes: vCenter Server credentials are incorrect.
Recommended action: Check vCenter Server credentials and ensure that vCenter Server is reachable. Retry the operation. If the issue persists, contact support.

Error 759 (VmwareDllNotFound)
Message: VMWare.Vim DLL not found.
Possible causes: PowerCLI is not installed properly.
Recommended action: Check whether PowerCLI is installed properly. Retry the operation. If the issue persists, contact support.

Error 800 (ServiceError)
Message: Azure Migrate Collector service is not running.
Possible causes: Azure Migrate Collector service is not running.
Recommended action: Use services.msc to start the service and retry the operation.

Error 801 (PowerCLIError)
Message: VMware PowerCLI installation failed.
Possible causes: VMware PowerCLI installation failed.
Recommended action: Retry the operation. If the issue persists, install it manually and retry the operation.

Error 802 (TimeSyncError)
Message: Time is not in sync with the internet time server.
Possible causes: Time is not in sync with the internet time server.
Recommended action: Ensure that the time on the machine is set accurately for the machine's time zone and retry the operation.

Error 702 (OMSInvalidProjectKey)
Message: Invalid project key specified.
Possible causes: Invalid project key specified.
Recommended action: Retry the operation with the correct project key.

Error 703 (OMSHttpRequestException)
Message: Error while sending request. Message: %Message;
Possible causes: Check the project ID and key and ensure that the endpoint is reachable.
Recommended action: Retry the operation. If the issue persists, contact Microsoft Support.

Error 704 (OMSHttpRequestTimeoutException)
Message: HTTP request timed out. Message: %Message;
Possible causes: Check the project ID and key and ensure that the endpoint is reachable.
Recommended action: Retry the operation. If the issue persists, contact Microsoft Support.
Azure Migrate - Frequently Asked Questions (FAQ)
4/15/2019 • 13 minutes to read • Edit Online

This article includes frequently asked questions about Azure Migrate. If you have any further queries after reading
this article, post them on the Azure Migrate forum.

General
Does Azure Migrate support assessment of only VMware workloads?
Yes, Azure Migrate currently only supports assessment of VMware workloads. Support for Hyper-V is in preview;
sign up here to get access to the preview. Support for physical servers will be enabled in the future.
Does Azure Migrate need vCenter Server to discover a VMware environment?
Yes, Azure Migrate requires vCenter Server to discover a VMware environment. It does not support discovery of
ESXi hosts that are not managed by a vCenter Server.
How is Azure Migrate different from Azure Site Recovery?
Azure Migrate is an assessment service that helps you discover your on-premises workloads and plan your
migration to Azure. Azure Site Recovery, along with being a disaster recovery solution, helps you migrate on-
premises workloads to IaaS VMs in Azure.
What's the difference between using Azure Migrate for assessments and the Map Toolkit?
Azure Migrate provides migration assessment specifically to assist with migration readiness and evaluation of on-
premises workloads into Azure. Microsoft Assessment and Planning (MAP) Toolkit has other functionalities such
as migration planning for newer versions of Windows client and server operating systems and software usage
tracking. For those scenarios, continue to use the MAP Toolkit.
How is Azure Migrate different from Azure Site Recovery Deployment Planner?
Azure Migrate is a migration planning tool and Azure Site Recovery Deployment Planner is a disaster recovery
(DR) planning tool.
Migration from VMware to Azure: If you intend to migrate your on-premises workloads to Azure, use Azure
Migrate for migration planning. Azure Migrate assesses on-premises workloads and provides guidance, insights,
and mechanisms to assist you in migrating to Azure. Once you are ready with your migration plan, you can use
services such as Azure Site Recovery and Azure Database Migration Service to migrate the machines to Azure.
Migration from Hyper-V to Azure: The Generally Available version of Azure Migrate currently supports
assessment of VMware virtual machines for migration to Azure. Support for Hyper-V is currently in preview with
production support. If you are interested in trying out the preview, please sign up here.
Disaster Recovery from VMware/Hyper-V to Azure: If you intend to do disaster recovery (DR) on Azure using
Azure Site Recovery (Site Recovery), use Site Recovery Deployment Planner for DR planning. Site Recovery
Deployment Planner does a deep, ASR-specific assessment of your on-premises environment. It provides
recommendations that are required by Site Recovery for successful DR operations, such as replication and failover of
your virtual machines.
Which Azure geographies are supported by Azure Migrate?
Azure Migrate currently supports Europe, United States, and Azure Government as the project geographies. Even
though you can only create migration projects in these geographies, you can still assess your machines for
multiple target locations. The project geography is only used to store the discovered metadata.
GEOGRAPHY METADATA STORAGE LOCATION

Azure Government US Gov Virginia

Asia Southeast Asia

Europe North Europe or West Europe

United States East US or West Central US

How does the on-premises site connect to Azure Migrate?


The connection can be over the internet or use ExpressRoute with public peering.
What network connectivity requirements are needed for Azure Migrate?
For the URLs and ports needed for Azure Migrate to communicate with Azure, see URLs for connectivity.
Can I harden the VM set up with the OVA template?
Additional components (for example anti-virus) can be added into the OVA template as long as the
communication and firewall rules required for the Azure Migrate appliance to work are left as is.
To harden the Azure Migrate appliance, what are the recommended Antivirus (AV) exclusions?
You need to exclude the following folders in the appliance from antivirus scanning:
Folder that has the binaries for Azure Migrate Service. Exclude all sub-folders. %ProgramFiles%\ProfilerService
Azure Migrate Web Application. Exclude all sub-folders. %SystemDrive%\inetpub\wwwroot
Local Cache for Database and log files. Azure migrate service needs RW access to this folder.
%SystemDrive%\Profiler

Discovery
What data is collected by Azure Migrate?
Azure Migrate supports two kinds of discovery, appliance-based discovery and agent-based discovery. The
appliance-based discovery collects metadata about the on-premises VMs, the complete list of metadata collected
by the appliance is listed below:
Configuration data of the VM
VM display name (on vCenter)
VM inventory path (host/cluster/folder in vCenter)
IP address
MAC address
Operating system
Number of cores, disks, NICs
Memory size, Disk sizes
Performance data of the VM
CPU usage
Memory usage
For each disk attached to the VM:
Disk read throughput
Disk writes throughput
Disk read operations per sec
Disk writes operations per sec
For each network adapter attached to the VM:
Network in
Network out
The agent-based discovery is an option available on top of the appliance-based discovery and helps customers
visualize dependencies of the on premises VMs. The dependency agents collect details like, FQDN, OS, IP address,
MAC address, processes running inside the VM and the incoming/outgoing TCP connections from the VM. The
agent-based discovery is optional and you can choose to not install the agents if you do not want to visualize the
dependencies of the VMs.
Would there be any performance impact on the analyzed ESXi host environment?
With continuous profiling of performance data, there is no need to change the vCenter Server statistics level to run
a performance-based assessment. The collector appliance will profile the on-premises machines to measure the
performance data of the virtual machines. This would have almost zero performance impact on the ESXi hosts as
well as on the vCenter Server.
Where is the collected data stored and for how long?
The data collected by the collector appliance is stored in the Azure location that you specify while creating the
migration project. The data is securely stored in a Microsoft subscription and is deleted when the user deletes the
Azure Migrate project.
For dependency visualization, if you install agents on the VMs, the data collected by the dependency agents is
stored in the US in a Log Analytics workspace created in user’s subscription. This data is deleted when you delete
the Log Analytics workspace in your subscription. Learn more.
What is the volume of data uploaded by Azure Migrate in the case of continuous profiling?
The volume of data sent to Azure Migrate varies based on several parameters. To give an indicative
number, a project with ten machines (each having one disk and one NIC) would send around 50 MB per day.
This is an approximate value that changes based on the number of data points for the NICs and disks (the
data sent is non-linear as the number of machines, NICs, or disks increases).
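For rough capacity planning, the indicative figure above can be turned into a simple estimate. This is an assumption-laden linear extrapolation (5 MB per machine per day, derived from the 50 MB / 10 machines example); real volumes grow non-linearly with additional disks and NICs:

```python
def estimated_daily_upload_mb(machine_count, per_machine_mb=5.0):
    """Linear lower-bound estimate of daily metadata upload volume.

    Calibrated from the indicative figure of ~50 MB/day for ten machines,
    each with one disk and one NIC. Treat the result as a floor, not a cap.
    """
    return machine_count * per_machine_mb

print(estimated_daily_upload_mb(10))   # 50.0, matching the example above
print(estimated_daily_upload_mb(250))  # 1250.0, a rough floor for a larger site
```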
Is the data encrypted at rest and while in transit?
Yes, the collected data is encrypted both at rest and while in transit. The metadata collected by the appliance is
securely sent to the Azure Migrate service over internet via https. The collected metadata is stored in Cosmos DB
and in Azure blob storage in a Microsoft subscription and is encrypted at rest.
The data collected by the dependency agents is also encrypted in transit (secure https channel) and is stored in a
Log Analytics workspace in the user’s subscription. It is also encrypted at rest.
How does the collector communicate with the vCenter Server and the Azure Migrate service?
The collector appliance connects to the vCenter Server (port 443) using the credentials provided by the user in the
appliance. It queries the vCenter Server using VMware PowerCLI to collect metadata about the VMs managed by
vCenter Server. It collects both configuration data about VMs (cores, memory, disks, NIC etc.) as well as
performance history of each VM for the last one month from vCenter Server. The collected metadata is then sent
to the Azure Migrate service (over internet via https) for assessment. Learn more
Can I connect the same collector appliance to multiple vCenter servers?
Yes, a single collector appliance can be used to discover multiple vCenter Servers, but not concurrently. You need
to run the discovery one after another.
Is the OVA template used by Site Recovery integrated with the OVA used by Azure Migrate?
Currently there is no integration. The .OVA template in Site Recovery is used to set up a Site Recovery
configuration server for VMware VM/physical server replication. The .OVA used by Azure Migrate is used to
discover VMware VMs managed by a vCenter server, for the purposes of migration assessment.
I changed my machine size. Can I rerun the assessment?
If you change the settings on a VM you want to assess, trigger discover again using the collector appliance. In the
appliance, use the Start collection again option to do this. After the collection is done, select the Recalculate
option for the assessment in the portal, to get updated assessment results.
How can I discover a multi-tenant environment in Azure Migrate?
If you have an environment that is shared across tenants and you do not want to discover the VMs of one tenant
in another tenant's subscription, you can use the Scope field in the collector appliance to scope the discovery. If the
tenants are sharing hosts, create a credential that has read-only access to only the VMs belonging to the specific
tenant and then use this credential in the collector appliance and specify the Scope as the host to do the discovery.
Alternatively, you can also create folders in vCenter Server (let's say folder1 for tenant1 and folder2 for tenant2),
under the shared host, move the VMs for tenant1 into folder1 and for tenant2 into folder2 and then scope the
discoveries in the collector accordingly by specifying the appropriate folder.
How many virtual machines can be discovered in a single migration project?
You can discover 1500 virtual machines in a single migration project. If you have more machines in your on-
premises environment, learn more about how you can discover a large environment in Azure Migrate.

Assessment
Does Azure Migrate support Enterprise Agreement (EA) based cost estimation?
Azure Migrate currently does not support cost estimation for the Enterprise Agreement offer. The workaround is to
specify Pay-As-You-Go as the offer and manually specify the discount percentage (applicable to the
subscription) in the 'Discount' field of the assessment properties.

What is the difference between as-on-premises sizing and performance-based sizing?
When you specify the sizing criterion as as-on-premises sizing, Azure Migrate does not consider the
performance data of the VMs and sizes them based on the on-premises configuration. If the sizing criterion is
performance-based, the sizing is done based on utilization data. For example, consider an on-premises VM with 4
cores and 8 GB memory, at 50% CPU utilization and 50% memory utilization. If the sizing criterion is as-on-premises
sizing, an Azure VM SKU with 4 cores and 8 GB memory is recommended; if the sizing criterion is
performance-based, a VM SKU of 2 cores and 4 GB is recommended, as the utilization percentage is
considered while recommending the size. Similarly, for disks, the disk sizing depends on two assessment
properties: sizing criterion and storage type. If the sizing criterion is performance-based and the storage type is
automatic, the IOPS and throughput values of the disk are considered to identify the target disk type (Standard or
Premium). If the sizing criterion is performance-based and the storage type is premium, a premium disk is
recommended; the premium disk SKU in Azure is selected based on the size of the on-premises disk. The same
logic is used to do disk sizing when the sizing criterion is as-on-premises sizing and the storage type is standard or
premium.
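The cores/memory half of the example can be sketched as follows. This is an illustration of the idea only, not the exact Azure Migrate algorithm (which also maps the result onto an available Azure VM SKU):

```python
import math

def performance_based_size(cores, memory_gb, cpu_util, mem_util):
    """Scale the on-premises configuration by observed utilization, rounding up.

    Illustrative only; the real service additionally matches the result
    to the closest suitable Azure VM SKU.
    """
    return math.ceil(cores * cpu_util), math.ceil(memory_gb * mem_util)

# The example from the text: 4 cores / 8 GB at 50% CPU and 50% memory utilization.
print(performance_based_size(4, 8, 0.5, 0.5))  # (2, 4)
```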
What impact does performance history and percentile utilization have on the size recommendations?
These properties are only applicable for performance-based sizing. Azure Migrate collects performance history of
on-premises machines and uses it to recommend the VM size and disk type in Azure. The collector appliance
continuously profiles the on-premises environment to gather real-time utilization data every 20 seconds. The
appliance rolls up the 20-second samples, and creates a single data point for every 15 minutes. To create the single
data point, the appliance selects the peak value from all the 20-second samples, and sends it to Azure. When you
create an assessment in Azure, Azure Migrate calculates an effective utilization value based on the performance
duration and the performance history percentile value you specify, and uses that value for sizing. For example, if
you set the performance duration to 1 day and the percentile value to the 95th percentile, Azure Migrate takes the
15-minute sample points sent by the collector over the last day, sorts them in ascending order, and picks the 95th
percentile value as the effective utilization. Using the 95th percentile ensures that outliers, which the 99th
percentile would include, are ignored. If you want to capture the peak usage for the period and not miss any
outliers, select the 99th percentile.
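The effective-utilization calculation can be reproduced in a few lines of Python. This is a sketch of the percentile selection described above, not Azure Migrate's implementation:

```python
import math

def effective_utilization(samples, percentile=95):
    """Pick the Nth-percentile value from 15-minute peak samples.

    `samples` holds one peak utilization value per 15-minute window,
    e.g. 96 values for a 1-day performance duration.
    """
    ordered = sorted(samples)
    # 0-based index of the percentile value in the ascending-sorted list.
    rank = math.ceil(percentile / 100 * len(ordered)) - 1
    return ordered[rank]

# One day of 15-minute CPU peaks (96 samples): mostly ~40%, a few spikes.
day = [40.0] * 90 + [55.0, 60.0, 70.0, 85.0, 99.0, 100.0]
print(effective_utilization(day, 95))  # 60.0 -- the top outliers are ignored
print(effective_utilization(day, 99))  # 100.0 -- close to the peak
```

With the 95th percentile, the handful of spike samples are dropped and a much smaller VM SKU can be recommended; with the 99th percentile, the recommendation tracks the observed peak.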

Dependency visualization
NOTE
The dependency visualization functionality is not available in Azure Government.

What is dependency visualization?
Dependency visualization enables you to assess groups of VMs for migration with greater confidence by cross-
checking machine dependencies before you run an assessment. Dependency visualization helps you to ensure that
nothing is left behind, avoiding unexpected outages when you migrate to Azure. Azure Migrate leverages the
Service Map solution in Azure Monitor logs to enable dependency visualization.
Do I need to pay to use the dependency visualization feature?
No. Learn more about Azure Migrate pricing here.
Do I need to install anything for dependency visualization?
To use dependency visualization, you need to download and install agents on each on-premises machine that you
want to evaluate.
The Microsoft Monitoring Agent (MMA) needs to be installed on each machine.
The Dependency agent needs to be installed on each machine.
In addition, if you have machines with no internet connectivity, you need to download and install the Log
Analytics gateway on them.
You don't need these agents on machines you want to assess unless you're using dependency visualization.
Can I use an existing workspace for dependency visualization?
Yes, Azure Migrate now allows you to attach an existing workspace to the migration project and leverage it for
dependency visualization. Learn more.
Can I export the dependency visualization report?
No, the dependency visualization cannot be exported. However, since Azure Migrate uses Service Map for
dependency visualization, you can use the Service Map REST APIs to get the dependencies in a json format.
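As a sketch, the Service Map "list machines" endpoint can be called with Python's standard library. The URL path and api-version below are taken from the Service Map REST API reference and may change, so verify them against the current documentation; the subscription, resource group, workspace, and bearer token are placeholders you must supply:

```python
import json
import urllib.request

# Assumed api-version for the Service Map REST API; check the current reference.
API_VERSION = "2015-11-01-preview"

def machines_url(subscription, resource_group, workspace):
    """Build the Service Map 'list machines' URL for a Log Analytics workspace."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
        f"/features/serviceMap/machines?api-version={API_VERSION}"
    )

def list_machines(subscription, resource_group, workspace, bearer_token):
    """GET the machine list, with its dependency data, as parsed JSON."""
    request = urllib.request.Request(
        machines_url(subscription, resource_group, workspace),
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Calling `list_machines(...)` with a valid Azure AD bearer token returns the dependency data in JSON format, which you can then process or archive.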
How can I automate the installation of the Microsoft Monitoring Agent (MMA) and the Dependency agent?
Here is a script that you can use to install the Dependency agent. Here are instructions on how you can install
MMA using the command line or automated methods. For MMA, you can also use a script available on TechNet.
In addition to scripts, you can use deployment tools such as System Center Configuration Manager (SCCM) and
Intigua to deploy the agents.
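For example, a minimal Python wrapper around the MMA silent-install command line might look like the following. The setup properties (workspace ID, workspace key, license acceptance) come from the MMA command-line installation documentation, but treat the exact installer invocation as an assumption and verify it against the version you download:

```python
import subprocess

def mma_silent_install_cmd(workspace_id, workspace_key):
    """Build the MMA silent-install command line.

    Assumes the installer has already been extracted so that setup.exe is
    available; the property names come from the MMA command-line install
    documentation and should be verified against your installer version.
    """
    return [
        "setup.exe", "/qn", "NOAPM=1",
        "ADD_OPINSIGHTS_WORKSPACE=1",
        f"OPINSIGHTS_WORKSPACE_ID={workspace_id}",
        f"OPINSIGHTS_WORKSPACE_KEY={workspace_key}",
        "AcceptEndUserLicenseAgreement=1",
    ]

def install_mma(workspace_id, workspace_key):
    """Run the silent install, raising if the installer exits non-zero."""
    subprocess.run(mma_silent_install_cmd(workspace_id, workspace_key), check=True)
```

A deployment tool such as SCCM would then push this script (or the equivalent command line) to each machine in the group you want to assess.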
What are the operating systems supported by MMA?
The list of Windows operating systems supported by MMA is here. The list of Linux operating systems supported
by MMA is here.
What are the operating systems supported by the Dependency agent?
The list of Windows operating systems supported by the Dependency agent is here. The list of Linux operating
systems supported by the Dependency agent is here.
Can I visualize dependencies in Azure Migrate for more than one hour duration?
No. Azure Migrate lets you visualize dependencies for a duration of up to one hour. You can go back to a
particular date up to one month in the past, but the maximum window for which you can visualize dependencies
is one hour. For example, you can use the time-duration functionality in the dependency map to view
dependencies for yesterday, but only for a one-hour window. However, you can use Azure Monitor logs to query
the dependency data over a longer duration.
Is dependency visualization supported for groups with more than 10 VMs?
You can visualize dependencies for groups with up to 10 VMs. If a group has more than 10 VMs, we recommend
splitting it into smaller groups to visualize the dependencies.

Next steps
Read the Azure Migrate overview
Learn how you can discover and assess a VMware environment
