The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the
migration suitability of on-premises machines, performs performance-based sizing, and provides cost
estimations for running on-premises machines in Azure. If you're contemplating lift-and-shift migrations, or are
in the early assessment stages of migration, this service is for you. After the assessment, you can use services
such as Azure Site Recovery and Azure Database Migration Service to migrate the machines to Azure.
Current limitations
You can only assess on-premises VMware virtual machines (VMs) for migration to Azure VMs. The
VMware VMs must be managed by vCenter Server (version 5.5, 6.0, 6.5 or 6.7).
Support for Hyper-V is currently in preview with production support. If you are interested in trying it out,
sign up here.
For assessment of physical servers, you can use partner tools.
You can discover up to 1,500 VMs in a single discovery and in a single project. A preview release is
available that allows discovery of up to 10,000 VMware VMs in a single project using a single appliance; if
you are interested in trying it out, sign up here.
If you want to discover a larger environment, you can split the discovery and create multiple projects.
Learn more. Azure Migrate supports up to 20 projects per subscription.
Azure Migrate only supports managed disks for migration assessment.
You can only create an Azure Migrate project in the following geographies. However, this does not restrict
your ability to create assessments for other target Azure locations.
What's in an assessment?
Assessment settings can be customized based on your needs. Assessment properties are summarized in the
following table.
PROPERTY DETAILS
Storage type The type of managed disks you want to allocate for all VMs
that are part of the assessment. If the sizing criterion is as
on-premises sizing, you can specify the target disk type
as premium disks (the default), standard SSD disks, or
standard HDD disks. For performance-based sizing, along
with these options, you can also select Automatic, which
bases the disk sizing recommendation on the performance
data of the VMs. For example, if you want to achieve a
single-instance VM SLA of 99.9%, you may want to specify
the storage type as Premium managed disks, which ensures
that all disks in the assessment are recommended as
Premium managed disks. Note that Azure Migrate only
supports managed disks for migration assessment.
VM series The VM series used for size estimations. For example, if you
have a production environment that you do not plan to
migrate to A-series VMs in Azure, you can exclude A-series
from the list of series. Sizing is based on the selected series
only.
VM uptime If your VMs are not going to be running 24x7 in Azure, you
can specify the duration (number of days per month and
number of hours per day) for which they would be running
and the cost estimations will be done accordingly. The default
value is 31 days per month and 24 hours per day.
Azure offer The Azure offer you're enrolled to. Azure Migrate estimates
the cost accordingly.
Azure Hybrid Benefit Whether you have software assurance and are eligible for
Azure Hybrid Benefit with discounted costs.
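The VM uptime property above drives the cost estimate arithmetically; a minimal sketch of that scaling (the hourly rate and uptime figures are made-up examples, not Azure prices):

```python
def estimate_monthly_compute_cost(hourly_rate, days_per_month=31, hours_per_day=24):
    """Scale a VM's compute cost by the uptime specified in the assessment
    properties (defaults match Azure Migrate's: 31 days/month, 24 hours/day)."""
    return hourly_rate * days_per_month * hours_per_day

# A VM at a hypothetical $0.10/hour, running around the clock vs. only
# on weekdays during business hours (about 22 days/month, 10 hours/day):
full_time = estimate_monthly_compute_cost(0.10)          # 31 * 24 * 0.10
part_time = estimate_monthly_compute_cost(0.10, 22, 10)  # 22 * 10 * 0.10
```

Reducing the stated uptime reduces only the compute portion of the estimate; storage is billed regardless of VM uptime.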
Next steps
Follow the tutorial to create an assessment for an on-premises VMware VM.
Review frequently asked questions about Azure Migrate.
Discover and assess on-premises VMware VMs for
migration to Azure
4/10/2019 • 13 minutes to read
The Azure Migrate service assesses on-premises workloads for migration to Azure.
In this tutorial, you learn how to:
Create an account that Azure Migrate uses to discover on-premises VMs
Create an Azure Migrate project.
Set up an on-premises collector virtual machine (VM), to discover on-premises VMware VMs for assessment.
Group VMs and create an assessment.
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
VMware: The VMs that you plan to migrate must be managed by vCenter Server running version 5.5, 6.0, 6.5,
or 6.7. Additionally, you need one ESXi host running version 5.5 or higher to deploy the collector VM.
vCenter Server account: You need a read-only account to access the vCenter Server. Azure Migrate uses this
account to discover the on-premises VMs.
Permissions: On the vCenter Server, you need permissions to create a VM by importing a file in .OVA format.
Create a project
1. In the Azure portal, click Create a resource.
2. Search for Azure Migrate, and select the service Azure Migrate in the search results. Then click Create.
3. Specify a project name, and the Azure subscription for the project.
4. Create a new resource group.
5. Specify the geography in which you want to create the project, then click Create. You can only create an Azure
Migrate project in the following geographies. However, you can still plan your migration for any target Azure
location. The geography specified for the project is only used to store the metadata gathered from on-premises
VMs.
NOTE
The one-time discovery appliance is now deprecated as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters which resulted in under-sizing of
VMs for migration to Azure.
Quick assessments: With the continuous discovery appliance, once the discovery is complete (it takes a
couple of hours, depending on the number of VMs), you can immediately create assessments. Because
performance data collection starts when you kick off discovery, if you are looking for quick assessments,
you should select as on-premises as the sizing criterion in the assessment. For performance-based
assessments, it is advised to wait for at least a day after kicking off discovery to get reliable size
recommendations.
The appliance only collects performance data continuously; it does not detect any configuration change in
the on-premises environment (that is, VM addition, deletion, disk addition, and so on). If there is a configuration
change in the on-premises environment, you can do the following to reflect the changes in the portal:
Addition of items (VMs, disks, cores etc.): To reflect these changes in the Azure portal, you can stop
the discovery from the appliance and then start it again. This will ensure that the changes are
updated in the Azure Migrate project.
Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if
you stop and start the discovery. This is because data from subsequent discoveries are appended to
older discoveries and not overridden. In this case, you can simply ignore the VM in the portal, by
removing it from your group and recalculating the assessment.
3. In Copy project credentials, copy the project ID and key. You need these when you configure the
collector.
MD5 5f6b199d8272428ccfa23543b0b5f600
SHA1 daa530de6e8674a66a728885a7feb3b0a2e8ccb0
SHA256 85da50a21a7a6ca684418a87ccc1dd4f8aab30152c438a17b216ec401ebb3a21
MD5 169f6449cc1955f1514059a4c30d138b
SHA1 f8d0a1d40c46bbbf78cd0caa594d979f1b587c8f
SHA256 d68fe7d94be3127eb35dd80fc5ebc60434c8571dcd0e114b87587f24d6b4ee4d
MD5 2ca5b1b93ee0675ca794dd3fd216e13d
SHA1 8c46a52b18d36e91daeae62f412f5cb2a8198ee5
SHA256 3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be
MD5 e9ef16b0c837638c506b5fc0ef75ebfa
SHA1 37b4b1e92b3c6ac2782ff5258450df6686c89864
SHA256 8a86fc17f69b69968eb20a5c4c288c194cdcffb4ee6568d85ae5ba96835559ba
For OVA version 1.0.9.14
MD5 6d8446c0eeba3de3ecc9bc3713f9c8bd
SHA1 e9f5bdfdd1a746c11910ed917511b5d91b9f939f
SHA256 7f7636d0959379502dfbda19b8e3f47f3a4744ee9453fc9ce548e6682a66f13c
MD5 d0363e5d1b377a8eb08843cf034ac28a
SHA1 df4a0ada64bfa59c37acf521d15dcabe7f3f716b
SHA256 f677b6c255e3d4d529315a31b5947edfe46f45e4eb4dbc8019d68d1d1b337c2e
2. In the Deploy OVF Template Wizard > Source, specify the location of the .ova file.
3. In Name and Location, specify a friendly name for the collector VM, and the inventory object in which the
VM will be hosted.
4. In Host/Cluster, specify the host or cluster on which the collector VM will run.
5. In storage, specify the storage destination for the collector VM.
6. In Disk Format, specify the disk type and size.
7. In Network Mapping, specify the network to which the collector VM will connect. The network needs
internet connectivity, to send metadata to Azure.
8. Review and confirm the settings, then click Finish.
NOTE
The proxy address needs to be entered in the form http://ProxyIPAddress or http://ProxyFQDN. Only HTTP
proxy is supported. If you have an intercepting proxy, the internet connection might initially fail if you have
not imported the proxy certificate; you can fix this by importing the proxy certificate as a trusted certificate
on the collector VM.
The collector checks that the collector service is running. The service is installed by default on the
collector VM.
Download and install VMware PowerCLI.
6. In Specify vCenter Server details, do the following:
Specify the name (FQDN) or IP address of the vCenter server.
In User name and Password, specify the read-only account credentials that the collector will use to
discover VMs on the vCenter server.
In Collection scope, select a scope for VM discovery. The collector can only discover VMs within
the specified scope. Scope can be set to a specific folder, datacenter, or cluster. It shouldn't contain
more than 1500 VMs. Learn more about how you can discover a larger environment.
NOTE
Collection scope lists only folders of hosts and clusters. Folders of VMs cannot be directly selected as
collection scope. However, you can discover by using a vCenter account that has access to the individual
VMs. Learn more about how to scope to a folder of VMs.
7. In Specify migration project, specify the Azure Migrate project ID and key that you copied from the
portal. If you didn't copy them, open the Azure portal from the collector VM. In the project Overview page,
click Discover Machines, and copy the values.
8. In View collection progress, monitor discovery status. Learn more about what data is collected by the
Azure Migrate collector.
NOTE
The collector only supports "English (United States)" as the operating system language and the collector interface language.
If you change the settings on a machine you want to assess, trigger discovery again before you run the assessment. In the
collector, use the Start collection again option to do this. After the collection is done, select the Recalculate option for the
assessment in the portal, to get updated assessment results.
NOTE
It is strongly recommended to wait for at least a day, after starting discovery, before creating an assessment. If you would
like to update an existing assessment with the latest performance data, you can use the Recalculate command on the
assessment to update it.
Assessment details
An assessment includes information about whether the on-premises VMs are compatible with Azure, the right
VM size for running each VM in Azure, and the estimated monthly Azure costs.
Azure readiness
The Azure readiness view in the assessment shows the readiness status of each VM. Depending on the properties
of the VM, each VM can be marked as:
Ready for Azure
Conditionally ready for Azure
Not ready for Azure
Readiness unknown
For VMs that are ready, Azure Migrate recommends a VM size in Azure. The size recommendation done by Azure
Migrate depends on the sizing criterion specified in the assessment properties. If the sizing criterion is
performance-based sizing, the size recommendation is done by considering the performance history of the VMs
(CPU and memory) and disks (IOPS and throughput). If the sizing criterion is 'as on-premises', Azure Migrate
does not consider the performance data for the VM and disks. The recommendation for the VM size in Azure is
done by looking at the size of the VM on-premises and the disk sizing is done based on the Storage type specified
in the assessment properties (default is premium disks). Learn more about how sizing is done in Azure Migrate.
For VMs that aren't ready or conditionally ready for Azure, Azure Migrate explains the readiness issues, and
provides remediation steps.
The VMs for which Azure Migrate cannot identify Azure readiness (due to data unavailability) are marked as
readiness unknown.
In addition to Azure readiness and sizing, Azure Migrate also suggests tools that you can use for migrating the
VM. This requires a deeper discovery of the on-premises environment. Learn more about how you can do a
deeper discovery by installing agents on the on-premises machines. If the agents are not installed on the on-
premises machines, lift-and-shift migration is suggested using Azure Site Recovery. If the agents are installed on
the on-premises machine, Azure Migrate looks at the processes running inside the machine and identifies whether
the machine is a database machine or not. If the machine is a database machine, Azure Database Migration
Service is suggested, else Azure Site Recovery is suggested as the migration tool.
Monthly cost estimate
This view shows the total compute and storage cost of running the VMs in Azure along with the details for each
machine. Cost estimates are calculated considering the size recommendations done by Azure Migrate for a
machine, its disks, and the assessment properties.
NOTE
The cost estimation provided by Azure Migrate is for running the on-premises VMs as Azure Infrastructure as a service
(IaaS) VMs. Azure Migrate does not consider any Platform as a service (PaaS) or Software as a service (SaaS) costs.
Estimated monthly costs for compute and storage are aggregated for all VMs in the group.
Confidence rating
Each performance-based assessment in Azure Migrate is associated with a confidence rating that ranges from 1
star to 5 star (1 star being the lowest and 5 star being the highest). The confidence rating is assigned to an
assessment based on the availability of data points needed to compute the assessment. The confidence rating of
an assessment helps you estimate the reliability of the size recommendations provided by Azure Migrate.
Confidence rating is not applicable to "as-is" on-premises assessments.
For performance-based sizing, Azure Migrate needs the CPU and memory utilization data of the VM. Additionally,
for every disk attached to the VM, it needs the disk IOPS and throughput data. Similarly, for each network adapter
attached to a VM, Azure Migrate needs the network in/out data to do performance-based sizing. If any of these
utilization numbers are unavailable in vCenter Server, the size recommendation done by Azure Migrate may not
be reliable. Depending on the percentage of data points available, the confidence rating for the assessment is
provided as below:
DATA POINT AVAILABILITY CONFIDENCE RATING
0%-20% 1 Star
21%-40% 2 Stars
41%-60% 3 Stars
61%-80% 4 Stars
81%-100% 5 Stars
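The banding above is a simple percentage lookup; a sketch of it follows. The behavior exactly at band boundaries is an assumption here, since the document only gives the ranges:

```python
def confidence_rating(available_points, expected_points):
    """Map the share of available performance data points to a 1-5 star
    confidence rating, following the bands in the table above."""
    pct = 100.0 * available_points / expected_points
    for upper_bound, stars in [(20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]:
        if pct <= upper_bound:
            return stars
    return 5  # defensively cap at 5 stars

confidence_rating(45, 100)  # 45% availability falls in 41%-60%: 3 stars
```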
An assessment may not have all the data points available due to one of the following reasons:
You did not profile your environment for the duration for which you are creating the assessment. For
example, if you are creating the assessment with performance duration set to 1 day, you need to wait for at
least a day after you start the discovery for all the data points to get collected.
A few VMs were shut down during the period for which the assessment is calculated. If any VMs were
powered off for some duration, we cannot collect the performance data for that period.
A few VMs were created during the period for which the assessment is calculated. For example, if you
are creating an assessment for the performance history of the last month, but a few VMs were created in
the environment only a week ago, the performance history of the new VMs will not exist for the entire
duration.
NOTE
If the confidence rating of any assessment is below 5 Stars, wait for at least a day for the appliance to profile the
environment and then Recalculate the assessment. If this cannot be done, performance-based sizing may not be
reliable, and it is recommended to switch to as on-premises sizing by changing the assessment properties.
Next steps
Learn how to customize an assessment based on your requirements.
Learn how to create high-confidence assessment groups using machine dependency mapping
Learn more about how assessments are calculated.
Learn how to discover and assess a large VMware environment.
Learn more about the FAQs on Azure Migrate
About the Collector appliance
4/26/2019 • 13 minutes to read
Discovery method
Previously, there were two options for the collector appliance: one-time discovery and continuous discovery. The
one-time discovery model is now deprecated, as it relied on the vCenter Server statistics settings for performance
data collection (statistics settings had to be set to level 3) and collected average counters (instead of peak),
which resulted in under-sizing. The continuous discovery model ensures granular data collection and results in
accurate sizing due to collection of peak counters. Here's how it works:
The collector appliance is continuously connected to the Azure Migrate project and continuously collects
performance data of VMs.
The collector continuously profiles the on-premises environment to gather real-time utilization data every 20
seconds.
The appliance rolls up the 20-second samples, and creates a single data point every 15 minutes.
To create the data point the appliance selects the peak value from the 20-second samples, and sends it to Azure.
This model doesn't depend on the vCenter Server statistics settings to collect performance data.
You can stop continuous profiling at any time from the Collector.
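The rollup described in the list above can be sketched as follows; a 15-minute window holds 45 of the 20-second samples:

```python
def roll_up(samples_20s):
    """Collapse a 15-minute window of 20-second utilization samples into the
    single data point sent to Azure: the peak value, not the average (averaging
    is what caused under-sizing in the deprecated one-time discovery model)."""
    return max(samples_20s)

# 45 samples * 20 seconds = 900 seconds (15 minutes). A short CPU spike
# still surfaces in the rolled-up data point:
window = [12.0] * 44 + [87.5]
peak = roll_up(window)  # 87.5, whereas the window's average would be about 13.7
```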
Quick assessments: With the continuous discovery appliance, once the discovery is complete (it takes a couple of
hours, depending on the number of VMs), you can immediately create assessments. Because the performance data
collection starts when you kick off discovery, if you are looking for quick assessments, you should select as
on-premises as the sizing criterion in the assessment. For performance-based assessments, it is advised to wait for
at least a day after kicking off discovery to get reliable size recommendations.
The appliance only collects performance data continuously; it does not detect any configuration change in the on-
premises environment (that is, VM addition, deletion, disk addition, and so on). If there is a configuration change
in the on-premises environment, you can do the following to reflect the changes in the portal:
Addition of items (VMs, disks, cores etc.): To reflect these changes in the Azure portal, you can stop the
discovery from the appliance and then start it again. This will ensure that the changes are updated in the
Azure Migrate project.
Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if you stop
and start the discovery. This is because data from subsequent discoveries are appended to older discoveries
and not overridden. In this case, you can simply ignore the VM in the portal, by removing it from your
group and recalculating the assessment.
Deploying the Collector
You deploy the Collector appliance using an OVF template:
You download the OVF template from an Azure Migrate project in the Azure portal. You import the
downloaded file to vCenter Server, to set up the Collector appliance VM.
From the OVF, VMware sets up a VM with 8 cores, 16 GB RAM, and one disk of 80 GB. The operating
system is Windows Server 2016 (64 bit).
When you run the Collector, a number of prerequisite checks run to make sure that the Collector can
connect to Azure Migrate.
Learn more about creating the Collector.
Collector prerequisites
The Collector must pass a few prerequisite checks to ensure it can connect to the Azure Migrate service over the
internet, and upload discovered data.
Verify Azure cloud: The Collector needs to know the Azure cloud to which you are planning to migrate.
Select Azure Government if you are planning to migrate to Azure Government cloud.
Select Azure Global if you are planning to migrate to commercial Azure cloud.
Based on the cloud specified here, the appliance will send discovered metadata to the respective end
points.
Check internet connection: The Collector can connect to the internet directly, or via a proxy.
The prerequisite check verifies connectivity to required and optional URLs.
If you have a direct connection to the internet, no specific action is required, other than making sure that
the Collector can reach the required URLs.
If you're connecting via a proxy, note the requirements below.
Verify time synchronization: The Collector should be synchronized with an internet time server to ensure that
requests to the service are authenticated.
The portal.azure.com URL should be reachable from the Collector so that the time can be validated.
If the machine isn't synchronized, you need to change the clock time on the Collector VM to match the
current time. To do this, open an admin command prompt on the VM and run w32tm /tz to check the time
zone. Then run w32tm /resync to synchronize the time.
Check collector service running: The Azure Migrate Collector service should be running on the Collector
VM.
This service is started automatically when the machine boots.
If the service isn't running, start it from the Control Panel.
The Collector service connects to vCenter Server, collects the VM metadata and performance data, and
sends it to the Azure Migrate service.
Check VMware PowerCLI 6.5 installed: The VMware PowerCLI 6.5 PowerShell module must be installed on
the Collector VM, so that it can communicate with vCenter Server.
If the Collector can access the URLs required to install the module, it's installed automatically during
Collector deployment.
If the Collector can't install the module during deployment, you must install it manually.
Check connection to vCenter Server: The Collector must be able to connect to vCenter Server and query for VMs,
their metadata, and performance counters. Verify the prerequisites for connecting.
Connect to the internet via a proxy
If the proxy server requires authentication, you can specify the username and password when you set up the
Collector.
The IP address/FQDN of the proxy server should be specified as http://IPaddress or http://FQDN.
Only HTTP proxy is supported. HTTPS-based proxy servers aren't supported by the Collector.
If the proxy server is an intercepting proxy, you must import the proxy certificate to the Collector VM.
1. In the collector VM, go to Start Menu > Manage computer certificates.
2. In the Certificates tool, under Certificates - Local Computer, find Trusted Publishers >
Certificates.
3. Copy the proxy certificate to the collector VM. You might need to obtain it from your network admin.
4. Double-click to open the certificate, and click Install Certificate.
5. In the Certificate Import Wizard > Store Location, choose Local Machine.
6. Select Place all certificates in the following store > Browse > Trusted Publishers. Click Finish
to import the certificate.
7. Check that the certificate is imported as expected, and check that the internet connectivity
prerequisite check works as expected.
URLs for connectivity
The connectivity check is validated by connecting to a list of URLs.
ACCOUNT PERMISSIONS
At least a read-only user account Data Center object –> Propagate to Child Object, role=Read-only
Collector communications
The collector communicates as summarized in the following diagram and table.
Collected metadata
NOTE
Metadata discovered by the Azure Migrate collector appliance is used to help you right-size your applications as you migrate
them to Azure, perform Azure suitability analysis, application dependency analysis, and cost planning. Microsoft does not use
this data in relation to any license compliance audit.
The collector appliance discovers the following configuration metadata for each VM. The configuration data for the
VMs is available an hour after you start discovery.
VM display name (on vCenter Server)
VM’s inventory path (the host/folder on vCenter Server)
IP address
MAC address
Operating system
Number of cores, disks, NICs
Memory size, Disk sizes
Performance counters of the VM, disk and network.
Performance counters
The collector appliance collects the following performance counters for each VM from the ESXi host at an interval
of 20 seconds. These counters are vCenter counters, and although the terminology says average, the 20-second
samples are real-time counters. The performance data for the VMs starts becoming available in the portal two
hours after you kick off the discovery. It is strongly recommended to wait for at least a day before creating
performance-based assessments to get accurate right-sizing recommendations. If you need results immediately,
you can create assessments with the sizing criterion set to as on-premises, which will not consider the
performance data for right-sizing.
The complete list of VMware counters collected by Azure Migrate is available below:
CATEGORY METADATA VCENTER DATAPOINT
Disk Details (per disk) Disk name This value is generated using disk.UnitNumber, disk.Key and disk.ControllerKey.Value
Disk Details (per disk) Number of read operations per second virtualDisk.numberReadAveraged.average
Disk Details (per disk) Number of write operations per second virtualDisk.numberWriteAveraged.average
Network Adapter Details (per NIC) Megabytes per second of read throughput net.received.average
Network Adapter Details (per NIC) Megabytes per second of write throughput net.transmitted.average
Inventory Path Details Complete inventory path container.Name with complete path
Inventory Path Details Datacenter details for each Host Folder ((Datacenter)container).HostFolder
Next steps
Set up an assessment for on-premises VMware VMs
Collector appliance updates
3/29/2019 • 2 minutes to read
This article summarizes upgrade information for the Collector appliance in Azure Migrate.
The Azure Migrate Collector is a lightweight appliance that's used to discover an on-premises vCenter
environment, for the purposes of assessment before migration to Azure. Learn more.
MD5 846b1eb29ef2806bcf388d10519d78e6
SHA1 6243239fa49c6b3f5305f77e9fd4426a392d33a0
SHA256 fb058205c945a83cc4a31842b9377428ff79b08247f3fb8bb4ff30c125aa47ad
MD5 27704154082344c058238000dff9ae44
SHA1 41e9e2fb71a8dac14d64f91f0fd780e0d606785e
SHA256 c6e7504fcda46908b636bfe25b8c73f067e3465b748f77e50027e66f2727c2a9
MD5 d2c53f683b0ec7aaf5ba3d532a7382e1
SHA1 e5f922a725d81026fa113b0c27da185911942a01
SHA256 a159063ff508e86b4b3b7b9a42d724262ec0f2315bdba8418bce95d973f80cfc
Version 1.0.9.14
Hash values for upgrade package 1.0.9.14
MD5 c5bf029e9fac682c6b85078a61c5c79c
SHA1 af66656951105e42680dfcc3ec3abd3f4da8fdec
SHA256 58b685b2707f273aa76f2e1d45f97b0543a8c4d017cd27f0bdb220e6984cc90e
Version 1.0.9.13
Hash values for upgrade package 1.0.9.13
MD5 739f588fe7fb95ce2a9b6b4d0bf9917e
SHA1 9b3365acad038eb1c62ca2b2de1467cb8eed37f6
SHA256 7a49fb8286595f39a29085534f29a623ec2edb12a3d76f90c9654b2f69eef87e
Next steps
Learn more about the Collector appliance.
Run an assessment for VMware VMs.
Assessment calculations
3/15/2019 • 11 minutes to read
Azure Migrate assesses on-premises workloads for migration to Azure. This article provides information about
how assessments are calculated.
Overview
An Azure Migrate assessment has three stages. Assessment starts with a suitability analysis, followed by sizing,
and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one.
For example, if a machine fails the Azure suitability check, it’s marked as unsuitable for Azure, and sizing and
costing won't be done.
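The three-stage flow can be sketched as follows (the function names and placeholder stage checks are illustrative, not Azure Migrate's actual implementation):

```python
def assess(vm, is_suitable, size_vm, estimate_cost):
    """Sketch of the three-stage pipeline: suitability analysis, then sizing,
    then monthly cost estimation. A machine that fails the suitability check
    is marked not ready and never reaches the later stages."""
    result = {"readiness": "Not ready for Azure", "size": None, "monthly_cost": None}
    if not is_suitable(vm):
        return result  # sizing and costing are skipped
    result["readiness"] = "Ready for Azure"
    result["size"] = size_vm(vm)
    result["monthly_cost"] = estimate_cost(result["size"])
    return result

# The stage functions are placeholders standing in for the real checks:
failed = assess({}, lambda v: False, lambda v: "Standard_D2", lambda s: 70.0)
ok = assess({}, lambda v: True, lambda v: "Standard_D2", lambda s: 70.0)
```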
PROPERTY DETAILS AZURE READINESS STATUS
Boot type Azure supports VMs with boot type as BIOS, and not UEFI. Conditionally ready if boot type is UEFI.
Cores The number of cores in the machines must be equal to or less than the maximum number of cores (128 cores) supported for an Azure VM. Ready if less than or equal to limits.
NOTE
Azure Migrate considers the OS specified in vCenter Server to do the following analysis. Because the discovery
done by Azure Migrate is appliance-based, it does not have a way to verify whether the OS running inside the VM
is the same as the one specified in vCenter Server.
The following logic is used by Azure Migrate to identify the Azure readiness of the VM based on the operating
system.
OPERATING SYSTEM DETAILS AZURE READINESS STATUS
Windows Server 2016 & all SPs Azure provides full support. Ready for Azure
Windows Server 2012 R2 & all SPs Azure provides full support. Ready for Azure
Windows Server 2012 & all SPs Azure provides full support. Ready for Azure
Windows Server 2008 R2 with all SPs Azure provides full support. Ready for Azure
Windows Server 2008 (32-bit and 64-bit) Azure provides full support. Ready for Azure
Windows Server 2003, 2003 R2 These operating systems have passed their end of support date and need a Custom Support Agreement (CSA) for support in Azure. Conditionally ready for Azure, consider upgrading the OS before migrating to Azure.
Windows 2000, 98, 95, NT, 3.1, MS-DOS These operating systems have passed their end of support date; the machine may boot in Azure, but no OS support is provided by Azure. Conditionally ready for Azure, it is recommended to upgrade the OS before migrating to Azure.
Windows Client 7, 8 and 10 Azure provides support with Visual Studio subscription only. Conditionally ready for Azure
Windows 10 Pro Desktop Azure provides support with Multitenant Hosting Rights. Conditionally ready for Azure
Windows Vista, XP Professional These operating systems have passed their end of support date; the machine may boot in Azure, but no OS support is provided by Azure. Conditionally ready for Azure, it is recommended to upgrade the OS before migrating to Azure.
Linux Azure endorses these Linux operating systems. Other Linux operating systems may boot in Azure, but it is recommended to upgrade the OS to an endorsed version before migrating to Azure. Ready for Azure if the version is endorsed; conditionally ready if the version is not endorsed.
Other operating systems (e.g., Oracle Solaris, Apple Mac OS, FreeBSD, etc.) Azure does not endorse these operating systems. The machine may boot in Azure, but no OS support is provided by Azure. Conditionally ready for Azure, it is recommended to install a supported OS before migrating to Azure.
OS specified as Other in vCenter Server Azure Migrate cannot identify the OS in this case. Unknown readiness. Ensure that the OS running inside the VM is supported in Azure.
32-bit operating systems The machine may boot in Azure, but Azure may not provide full support. Conditionally ready for Azure, consider upgrading the OS of the machine from 32-bit OS to 64-bit OS before migrating to Azure.
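The table's logic amounts to a lookup keyed on the OS string reported by vCenter Server; a condensed sketch with a few representative entries (the dictionary is illustrative, not Azure Migrate's actual rule set):

```python
READINESS_BY_OS = {
    # Representative entries condensed from the table above.
    "Windows Server 2016": "Ready for Azure",
    "Windows Server 2003": "Conditionally ready for Azure",
    "Windows 10 Pro Desktop": "Conditionally ready for Azure",
}

def os_readiness(vcenter_os):
    """Azure Migrate trusts the OS string from vCenter Server; appliance-based
    discovery cannot verify what actually runs inside the VM, so 'Other' or
    any unrecognized value maps to unknown readiness."""
    return READINESS_BY_OS.get(vcenter_os, "Readiness unknown")
```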
Sizing
After a machine is marked as ready for Azure, Azure Migrate sizes the VM and its disks for Azure. If the sizing
criterion specified in the assessment properties is to do performance-based sizing, Azure Migrate considers the
performance history of the machine to identify the VM size and disk type in Azure. This method is helpful in
scenarios where you have over-allocated the on-premises VM but the utilization is low and you would like to
right-size the VMs in Azure to save cost.
If you do not want to consider the performance history for VM sizing and want to take the VM as-is to Azure, you
can specify the sizing criterion as 'as on-premises'. Azure Migrate then sizes the VMs based on the on-premises
configuration, without considering utilization data. In this case, disk sizing is based on the Storage type you
specify in the assessment properties (standard or premium disks).
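As a rough illustration of how the two sizing criteria differ, the following sketch (illustrative Python, not Azure Migrate's actual implementation; all function and field names are assumptions) selects the inputs a sizer would work from:

```python
# Illustrative sketch of the two sizing criteria (not Azure Migrate's real code).
def choose_sizing_inputs(criterion, perf_history, onprem_config):
    """Return the data a sizer would use for each criterion.

    criterion     -- "performance-based" or "as-on-premises"
    perf_history  -- dict of observed utilization data (may be None)
    onprem_config -- dict of allocated cores/memory on-premises
    """
    if criterion == "performance-based" and perf_history:
        # Size from observed utilization, so over-allocated VMs get right-sized.
        return {"cores": perf_history["used_cores"],
                "memory_gb": perf_history["used_memory_gb"]}
    # "as on-premises": take the allocated configuration as-is.
    return {"cores": onprem_config["cores"],
            "memory_gb": onprem_config["memory_gb"]}
```

An over-allocated VM (8 cores allocated, 2 used) would be sized from 2 cores under performance-based sizing, but from 8 cores under as on-premises sizing.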
Performance-based sizing
For performance-based sizing, Azure Migrate starts with the disks attached to the VM, followed by network
adapters and then maps an Azure VM based on the compute requirements of the on-premises VM.
Storage: Azure Migrate tries to map every disk attached to the machine to a disk in Azure.
NOTE
Azure Migrate supports only managed disks for assessment.
To get the effective disk I/O per second (IOPS ) and throughput (MBps), Azure Migrate multiplies the
disk IOPS and the throughput with the comfort factor. Based on the effective IOPS and throughput
values, Azure Migrate identifies if the disk should be mapped to a standard or premium disk in Azure.
If Azure Migrate can't find a disk with the required IOPS and throughput, it marks the machine as
unsuitable for Azure. Learn more about Azure limits per disk and VM.
If it finds a set of suitable disks, Azure Migrate selects the ones that support the storage redundancy
method, and the location specified in the assessment settings.
If there are multiple eligible disks, it selects the one with the lowest cost.
If performance data for disks is unavailable, all the disks are mapped to standard disks in Azure.
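The disk-mapping steps above can be sketched as follows. The comfort factor value, disk catalog, limits, and prices below are hypothetical placeholders for illustration, not real Azure SKUs:

```python
# Sketch of the disk-mapping logic described above. The catalog is invented,
# not real Azure disk SKUs or prices.
COMFORT_FACTOR = 1.3  # hypothetical value

DISK_CATALOG = [
    # name, tier, max IOPS, max MBps, monthly cost (all hypothetical)
    {"name": "Std_A",  "tier": "standard", "iops": 500,  "mbps": 60,  "cost": 5},
    {"name": "Prem_A", "tier": "premium",  "iops": 5000, "mbps": 200, "cost": 35},
    {"name": "Prem_B", "tier": "premium",  "iops": 7500, "mbps": 250, "cost": 50},
]

def map_disk(observed_iops, observed_mbps):
    """Apply the comfort factor, then pick the cheapest disk that fits.
    Returning None models 'unsuitable for Azure'."""
    eff_iops = observed_iops * COMFORT_FACTOR
    eff_mbps = observed_mbps * COMFORT_FACTOR
    eligible = [d for d in DISK_CATALOG
                if d["iops"] >= eff_iops and d["mbps"] >= eff_mbps]
    if not eligible:
        return None  # no disk meets the effective requirements
    return min(eligible, key=lambda d: d["cost"])
```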
Network: Azure Migrate tries to find an Azure VM that can support the number of network adapters
attached to the on-premises machine and the performance required by these network adapters.
To get the effective network performance of the on-premises VM, Azure Migrate aggregates the data
transmitted per second (MBps) out of the machine (network out), across all network adapters, and
applies the comfort factor. This number is used to find an Azure VM that can support the required
network performance.
Along with network performance, it also considers whether the Azure VM can support the required
number of network adapters.
If no network performance data is available, only the network adapters count is considered for VM
sizing.
Compute: After storage and network requirements are calculated, Azure Migrate considers CPU and
memory requirements to find a suitable VM size in Azure.
Azure Migrate looks at the utilized cores and memory, and applies the comfort factor to get the
effective cores and memory. Based on that number, it tries to find a suitable VM size in Azure.
If no suitable size is found, the machine is marked as unsuitable for Azure.
If a suitable size is found, Azure Migrate applies the storage and networking calculations. It then applies
location and pricing tier settings, for the final VM size recommendation.
If there are multiple eligible Azure VM sizes, the one with the lowest cost is recommended.
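The compute step follows the same pattern as the disk step: apply the comfort factor to utilized cores and memory, keep the sizes that satisfy both, and recommend the cheapest. The VM sizes, costs, and comfort factor below are invented for illustration:

```python
# Sketch of the compute-sizing step. VM sizes and costs are hypothetical.
COMFORT_FACTOR = 1.3  # hypothetical value

VM_SIZES = [
    {"name": "S1", "cores": 2, "memory_gb": 8,  "cost": 70},
    {"name": "S2", "cores": 4, "memory_gb": 16, "cost": 140},
    {"name": "S3", "cores": 8, "memory_gb": 32, "cost": 280},
]

def size_vm(used_cores, used_memory_gb):
    """Apply the comfort factor to utilized cores/memory, then pick the
    cheapest VM size satisfying both; None models 'unsuitable for Azure'."""
    eff_cores = used_cores * COMFORT_FACTOR
    eff_mem = used_memory_gb * COMFORT_FACTOR
    eligible = [s for s in VM_SIZES
                if s["cores"] >= eff_cores and s["memory_gb"] >= eff_mem]
    if not eligible:
        return None
    return min(eligible, key=lambda s: s["cost"])
```

Note how a high memory requirement alone can push the recommendation up a size, even when few cores are used.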
As on-premises sizing
If the sizing criterion is as on-premises sizing, Azure Migrate does not consider the performance history of the
VMs and disks and allocates a VM SKU in Azure based on the size allocated on-premises. Similarly for disk sizing,
it looks at the Storage type specified in the assessment properties (standard/premium) and recommends the disk
type accordingly. The default storage type is premium disks.
Confidence rating
Each performance-based assessment in Azure Migrate is associated with a confidence rating that ranges from 1
star (lowest) to 5 stars (highest). The confidence rating is assigned based on the availability of the data points
needed to compute the assessment, and helps you estimate the reliability of the size recommendations provided
by Azure Migrate. Confidence rating is not applicable to as on-premises assessments.
For performance-based sizing, Azure Migrate needs utilization data for the CPU and memory of the VM.
Additionally, for every disk attached to the VM, it needs the disk IOPS and throughput data, and for each network
adapter attached to the VM, it needs the network in/out data. If any of these utilization numbers are not available
in vCenter Server, the size recommendation may not be reliable. Depending on the percentage of data points
available, the confidence rating for the assessment is assigned as follows:
0%-20% of data points available: 1 star
21%-40%: 2 stars
41%-60%: 3 stars
61%-80%: 4 stars
81%-100%: 5 stars
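The percentage-to-stars mapping above is a simple banding, which can be sketched as:

```python
# The data-point-availability-to-stars banding shown above.
def confidence_stars(pct_data_points_available):
    """Map the percentage of available data points (0-100) to a 1-5 star rating."""
    bands = [(20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]
    for upper, stars in bands:
        if pct_data_points_available <= upper:
            return stars
    raise ValueError("percentage must be between 0 and 100")
```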
An assessment can get a low confidence rating for the following reasons:
You did not profile your environment for the duration for which you are creating the assessment. For
example, if you are creating the assessment with performance duration set to 1 day, you need to wait for at
least a day after you start the discovery for all the data points to get collected.
Some VMs were shut down during the period for which the assessment is calculated. If any VMs were
powered off for some duration, performance data cannot be collected for that period.
Some VMs were created during the period for which the assessment is calculated. For example, if you
create an assessment for the performance history of the last month, but some VMs were created in the
environment only a week ago, the performance history of the new VMs will not cover the entire duration.
NOTE
If the confidence rating of any assessment is below 5 stars, we recommend that you wait at least a day for the
appliance to profile the environment, and then recalculate the assessment. If that isn't possible,
performance-based sizing may not be reliable, and we recommend switching to as on-premises sizing by
changing the assessment properties.
Next steps
Create an assessment for on-premises VMware VMs
Dependency visualization
3/14/2019 • 3 minutes to read
The Azure Migrate service assesses groups of on-premises machines for migration to Azure. You can use the
dependency visualization functionality in Azure Migrate to create groups. This article provides information about
this feature.
NOTE
The dependency visualization functionality is not available in Azure Government.
Overview
Dependency visualization in Azure Migrate allows you to create high-confidence groups for migration
assessments. Using dependency visualization, you can view the network dependencies of machines and identify
related machines that need to be migrated together to Azure. This functionality is useful in scenarios where you
are not completely sure which machines constitute your application and need to be migrated together to Azure.
While associating a workspace, you will get the option to create a new workspace or attach an existing one:
When you create a new workspace, you need to specify a name for the workspace. The workspace is then
created in a region in the same Azure geography as the migration project.
When you attach an existing workspace, you can pick from all the available workspaces in the same
subscription as the migration project. Only workspaces created in a region where Service Map is
supported are listed. To attach a workspace, ensure that you have 'Reader' access to it.
NOTE
Once you have attached a workspace to a project, you cannot change it later.
The associated workspace is tagged with the key Migration Project, and value Project name, which you
can use to search in the Azure portal.
To navigate to the workspace associated with the project, go to the Essentials section of the project
Overview page and access the workspace.
To use dependency visualization, you need to download and install agents on each on-premises machine that you
want to analyze.
The Microsoft Monitoring Agent (MMA) needs to be installed on each machine.
The Dependency agent needs to be installed on each machine.
In addition, if you have machines with no internet connectivity, you need to download and install the Log
Analytics gateway on them.
You don't need these agents on machines you want to assess unless you're using dependency visualization.
NOTE
The dependency visualization feature uses Service Map via a Log Analytics workspace. Since 28 February 2018, with the
announcement of Azure Migrate general availability, the feature is available at no extra charge. You need to create a
new project to make use of the free usage workspace. Workspaces created before general availability are still chargeable,
so we recommend that you move to a new project.
Next steps
Group machines using machine dependencies
Learn more about the FAQs on dependency visualization.
Best practices for securing and managing workloads
migrated to Azure
3/26/2019 • 35 minutes to read
As you plan and design for migration, in addition to thinking about the migration itself, you need to consider your
security and management model in Azure after migration. This article describes planning and best practices for
securing your Azure deployment after migrating, and for ongoing tasks to keep your deployment running at an
optimal level.
IMPORTANT
The best practices and opinions described in this article are based on the Azure platform and service features available at the
time of writing. Features and capabilities change over time.
Delete locks
Learn more:
Learn about locking resources to prevent unexpected changes.
Tagging
Learn more:
Learn about tagging and tag limitations.
Review PowerShell and CLI examples to set up tagging, and to apply tags from a resource group to its
resources.
Read Azure tagging best practices.
Best practice: Implement blueprints
Just as a blueprint allows engineers and architects to sketch a project's design parameters, Azure Blueprints
enables cloud architects and central IT groups to define a repeatable set of Azure resources that implements and
adheres to an organization's standards, patterns, and requirements. Using Azure Blueprints, development teams
can rapidly build and create new environments that meet organizational compliance requirements, and that have
a set of built-in components, such as networking, to speed up development and delivery.
Use blueprints to orchestrate the deployment of resource groups, Azure Resource Manager templates, and
policy and role assignments.
Azure blueprints are stored in a globally distributed Azure Cosmos DB instance. Blueprint objects are replicated
to multiple Azure regions. Replication provides low latency, high availability, and consistent access to blueprint
objects, regardless of the region to which a blueprint deploys resources.
Learn more:
Read about blueprints.
Review a blueprint example used to accelerate AI in healthcare.
Management groups
Learn more:
Learn more about organizing resources into management groups.
Azure Policy
Learn more:
Get an overview of Azure Policy.
Learn about creating and managing policies to enforce compliance.
Best practice: Implement a BCDR strategy
Planning for business continuity and disaster recovery (BCDR) is a critical exercise that you should complete
while planning for migration to Azure. In legal terms, your contract might include a force majeure clause that
excuses obligations due to a greater force such as a hurricane or earthquake. However, you also have obligations
to ensure that services continue to run, and recover where necessary, when disaster strikes. Your
ability to do this can make or break your company's future.
Broadly, your BCDR strategy must consider:
Data backup: How to keep your data safe so that you can recover it easily if outages occur.
Disaster recovery: How to keep your apps resilient and available if outages occur.
Azure resiliency features
The Azure platform provides a number of resiliency features.
Region pairing: Azure pairs regions to provide regional protection within data residency boundaries. Azure
ensures physical isolation between region pairs, prioritizes the recovery of one region in the pair in case of a
broad outage, deploys system updates separately in each region, and allows features such as Azure geo-
redundant storage to replicate across the regional pairs.
Availability zones: Availability zones protect against the failure of an entire Azure datacenter by establishing
physically separate zones within an Azure region. Each zone has a distinct power source, network infrastructure,
and cooling mechanism.
Availability sets: Availability sets protect against failures within a datacenter. You group VMs in availability sets
to keep them highly available. Within each availability set, Azure implements multiple fault domains that group
together underlying hardware with a common power source and network switch, and update domains that
group together underlying hardware that can undergo maintenance, or be rebooted, at the same time. As an
example, when a workload is spread across Azure VMs, you can put two or more VMs for each app tier into a
set. For example, you can place frontend VMs in one set, and data-tier VMs in another. Since only one update
domain is ever rebooted at a time in a set, and Azure ensures that VMs in a set are spread across fault
domains, not all VMs in a set will fail at the same time.
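To illustrate why spreading a set across fault domains protects availability, here's a hypothetical round-robin placement model (Azure performs the actual placement; this sketch only shows that no single fault domain holds every VM once the set has more VMs than one domain can claim):

```python
# Illustrative round-robin spread of VMs across fault domains in an
# availability set. This is a model, not Azure's real placement algorithm.
def spread_across_fault_domains(vm_names, fault_domain_count=3):
    """Assign VMs to fault domains round-robin and return the placement."""
    placement = {fd: [] for fd in range(fault_domain_count)}
    for i, vm in enumerate(vm_names):
        placement[i % fault_domain_count].append(vm)
    return placement
```

With four frontend VMs in a set, a single fault-domain failure takes out at most two of them, so the tier stays available.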
Set up BCDR
When migrating to Azure, it's important to understand that although the Azure platform provides these inbuilt
resiliency capabilities, you need to design your Azure deployment to take advantage of Azure features and services
that provide high availability, disaster recovery, and backup.
Your BCDR solution will depend on your company's objectives, and is influenced by your Azure deployment
strategy. Infrastructure as a service (IaaS) and platform as a service (PaaS) deployments present different
challenges for BCDR.
Once in place, your BCDR solutions should be tested regularly to check that your strategy remains viable.
Azure Backup
Learn more:
Learn about different types of backups.
Plan a backup infrastructure for Azure VMs.
Storage snapshots
Azure VMs are stored as page blobs in Azure Storage.
Snapshots capture the blob state at a specific point in time.
As an alternative backup method for Azure VM disks, you can take a snapshot of storage blobs and copy them
to another storage account.
You can copy an entire blob, or use an incremental snapshot copy to copy only delta changes and reduce
storage space.
As an extra precaution, you can enable soft delete for blob storage accounts. With this feature enabled, a blob
that's deleted is marked for deletion but not immediately purged. During the interim period the blob can be
restored.
Learn more:
Learn about Azure blob storage.
Learn how to create a blob snapshot.
Review a sample scenario for blob storage backup.
Read about soft delete.
Disaster recovery and forced failover (preview) in Azure Storage
Third-party backup
In addition, you can use third-party solutions to back up Azure VMs and storage containers to local storage or
other cloud providers. Learn more about backup solutions in the Azure marketplace.
Back up a PaaS deployment
Unlike IaaS, where you manage your own VMs and infrastructure, in a PaaS model the platform and infrastructure
are managed by the provider, leaving you to focus on core app logic and capabilities. With so many different types
of PaaS services, each service needs to be evaluated individually for backup purposes. We'll look at two common
Azure PaaS services: Azure SQL Database and Azure Functions.
Back up Azure SQL Database
Azure SQL Database is a fully managed PaaS database engine. It provides a number of business continuity
features, including automated backups.
SQL Database automatically performs weekly full database backups, and differential backups every 12 hours.
Transaction log backups are taken every five to ten minutes to protect the database from data loss.
Backups are transparent and don't incur additional cost.
Backups are stored in RA-GRS storage for geo-redundancy, and replicated to the paired geographical region.
Backup retention depends on the purchasing model. DTU-based service tiers range from seven days of retention
for the Basic tier to 35 days for other tiers.
You can restore a database to a point in time within the retention period. You can also restore a deleted
database, restore to a different geographical region, or restore from a long-term backup if the database has a
long-term retention (LTR) policy.
Azure SQL backup
Learn more:
Automated backups for SQL Database.
Recover a database using automated backups.
Back up Azure Functions
Because Azure Functions apps are essentially code, you should back them up using the same methods that you
use to protect code, such as source control in GitHub or Azure DevOps Services.
Learn more:
Data protection for Azure DevOps.
Site Recovery
Learn more:
Review disaster recovery scenarios for Azure VMs.
Learn how to set up disaster recovery for an Azure VM after migration.
Azure storage
Azure storage is replicated for built-in resilience and high availability.
Geo-redundant storage (GRS): Protects against a region-wide outage, with at least 99.99999999999999%
(16 9's) durability of objects over a given year.
Storage data replicates to the secondary region with which your primary region is paired.
If the primary region goes down, and Microsoft initiates a failover to the secondary region, you'll have
read access to your data.
Read-access geo-redundant storage (RA-GRS): Protects against a region-wide outage.
Storage data replicates to the secondary region.
You have guaranteed read access to replicated data in the secondary region, regardless of whether
Microsoft initiates a failover. Even if two or more datacenters in the same region have an issue, your
data is still available in a geographically separated region.
Zone-redundant storage (ZRS): Protects against datacenter failure.
ZRS replicates data synchronously across three storage clusters in a single region. Clusters are
physically separated, each located in its own availability zone.
If disaster occurs, your storage is still available. ZRS should be the minimum target for mission-
critical workloads.
Learn more:
Learn about Azure storage replication.
Set up disaster recovery for PaaS workloads
Let's consider disaster recovery options for our PaaS workload examples.
Disaster recovery of Azure SQL Server
There are a number of different options, each with a different impact on data loss, recovery time, and cost.
You can use failover groups and active geo-replication to provide resilience against regional outages and
catastrophic failures.
Active geo-replication: Deploy active geo-replication for quick disaster recovery if a datacenter outage
occurs, or a connection can't be made to a primary database.
Geo-replication continually creates readable replicas of your database in up to four secondaries in the
same or different regions.
In an outage, you fail over to one of the secondary regions, and bring your database back online.
Auto-failover groups: Auto-failover groups extend active geo-replication with transparent failover of multiple
databases.
An auto-failover group provides a powerful abstraction of active geo-replication with group level
database replication and automatic failover.
You create a failover group that contains a primary server hosting one or more primary databases, a
secondary server hosting read-only replicas of the primary databases, listeners that point to each server,
and an automatic failover policy.
The specified listener endpoints remove the need to change the SQL connection string after failover.
Geo-restore:
Geo-restore allows you to recover a database to a different region. The automated backups of all Azure
SQL databases are replicated to a secondary region in the background. Geo-restore always restores the
database from the copy of the backup files stored in the secondary region.
Zone-redundant databases provide built-in support for Azure availability zones.
Zone-redundant databases enhance high availability for Azure SQL Server in the event of a data center
failure.
With zone-redundancy, you can place redundant database replicas within different availability zones in a
region.
Geo-replication
Learn more:
Learn about high availability for Azure SQL Server.
Read Azure SQL Databases 101 for disaster recovery.
Get an overview of active geo-replication and failover groups.
Learn about designing for disaster recovery.
Get best practices for failover groups.
Get best practices for security after geo-restore or failover.
Learn about zone redundancy.
Learn how to perform a disaster recovery drill for SQL database.
Disaster recovery for Azure Functions
If the compute infrastructure in Azure fails, an Azure function app might become unavailable.
To minimize the possibility of such downtime, use two function apps deployed to different regions.
Azure Traffic Manager can be configured to detect problems in the primary function app, and automatically
redirect traffic to the function app in the secondary region.
Traffic Manager with geo-redundant storage allows you to have the same function in multiple regions in case
of a regional failure.
Traffic Manager
Learn more:
Learn about disaster recovery for Azure apps.
Learn about disaster recovery and geo-distribution for durable Azure functions.
Alerts
Learn more:
Learn about alerts.
Learn about security playbooks that respond to Security Center alerts.
Azure dashboard
Learn more:
Learn how to create a dashboard.
Learn about dashboard structure.
Next steps
Review other best practices:
Best practices for networking after migration.
Best practices for cost management after migration.
Best practices to set up networking for workloads
migrated to Azure
2/4/2019 • 27 minutes to read
As you plan and design for migration, in addition to the migration itself, one of the most critical steps is the design
and implementation of Azure networking. This article describes best practices for networking when migrating to
IaaS and PaaS implementations in Azure.
IMPORTANT
The best practices and opinions described in this article are based on the Azure platform and service features available at the
time of writing. Features and capabilities change over time. Not all recommendations might be applicable for your
deployment, so select those that work for you.
Learn more:
Learn about designing subnets.
Learn how a fictitious company (Contoso) prepared their networking infrastructure for migration.
Securing VNets
The responsibility for securing VNets is shared between Microsoft and you. Microsoft provides many networking
features, as well as services that help keep resources secure. When designing security for VNets, best practices
you should follow include implementing a perimeter network, using filtering and security groups, securing
access to resources and IP addresses, and implementing attack protection.
Learn more:
Get an overview of best practices for network security.
Learn how to design for secure networks.
NIC1 AsgWeb
NIC2 AsgWeb
NIC3 AsgLogic
NIC4 AsgDb
In our example, each network interface belongs to only one application security group, but in fact an
interface can belong to multiple groups, in accordance with Azure limits.
None of the network interfaces have an associated NSG. NSG1 is associated to both subnets and contains
the following rules.
Rule 1: Destination port 80; protocol TCP; access Allow.
Rule 2: Destination AsgDb; protocol All; access Deny.
Rule 3: Protocol TCP; access Allow.
The rules that specify an application security group as the source or destination are only applied to the
network interfaces that are members of the application security group. If the network interface is not a
member of an application security group, the rule is not applied to the network interface, even though the
network security group is associated to the subnet.
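This membership check can be sketched as follows (illustrative only; not how the Azure fabric evaluates rules internally, and the group names are from the example above):

```python
# Sketch of the rule-application behavior described above: a rule whose
# source/destination names an application security group only applies to
# network interfaces that are members of that group.
def rule_applies_to_nic(rule_asgs, nic_asgs):
    """rule_asgs: set of ASGs the rule names (empty = no ASG in the rule).
    nic_asgs: set of ASGs the network interface belongs to."""
    if not rule_asgs:
        return True  # rule doesn't reference an ASG; applies via the subnet
    return bool(rule_asgs & nic_asgs)
```

For example, a Deny rule targeting AsgDb applies to NIC4 (a member of AsgDb) but not to NIC1 (AsgWeb), even though the NSG is associated with both subnets.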
Learn more:
Learn about application security groups.
Best practice: Secure access to PaaS using VNet service endpoints
VNet service endpoints extend your VNet private address space and identity to Azure services over a direct
connection.
Endpoints allow you to secure critical Azure service resources to your VNets only. Traffic from your VNet to the
Azure service always remains on the Microsoft Azure backbone network.
VNet private address spaces can overlap, so they cannot be used to uniquely identify traffic originating
from a VNet.
After service endpoints are enabled in your VNet, you can secure Azure service resources by adding a VNet
rule to the service resources. This provides improved security by fully removing public internet access to
resources, and allowing traffic only from your VNet.
Service endpoints
Learn more:
Learn about VNet service endpoints.
Azure Firewall
With Azure Firewall, you can centrally create, enforce, and log application and network connectivity policies
across subscriptions and VNets.
Azure Firewall uses a static public IP address for your VNet resources, allowing outside firewalls to identify
traffic originating from your VNet.
Azure Firewall is fully integrated with Azure Monitor for logging and analytics.
As a best practice when creating Azure Firewall rules, use the FQDN tags to create rules.
An FQDN tag represents a group of FQDNs associated with well-known Microsoft services.
You can use an FQDN tag to allow the required outbound network traffic through the firewall.
For example, to manually allow Windows Update network traffic through your firewall, you would need to
create multiple application rules. Using FQDN tags, you create an application rule, and include the Windows
Updates tag. With this rule in place, network traffic to Microsoft Windows Update endpoints can flow through
your firewall.
Learn more:
Get an overview of Azure Firewall.
Learn about FQDN tags.
Network Watcher
With Network Watcher you can monitor and diagnose networking issues without logging into VMs.
You can trigger packet capture by setting alerts, and gain access to real-time performance information at the
packet level. When you see an issue, you can investigate it in detail.
As best practice, you should use Network Watcher to review NSG flow logs.
NSG flow logs in Network Watcher allow you to view information about ingress and egress IP traffic
through an NSG.
Flow logs are written in JSON format.
Flow logs show outbound and inbound flows on a per-rule basis, the network interface (NIC ) to which
the flow applies, 5-tuple information about the flow (source/destination IP, source/destination port, and
protocol), and whether the traffic was allowed or denied.
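As an example, here is a minimal parser for the comma-separated flow tuples found in the log records, sketched from the version 1 tuple layout (timestamp, source/destination IP and port, protocol T/U, direction I/O, decision A/D). Verify field positions against the current flow-log schema before relying on this:

```python
# Minimal parser for an NSG flow-log tuple (version 1 layout assumed).
def parse_flow_tuple(raw):
    ts, src_ip, dst_ip, src_port, dst_port, proto, direction, decision = raw.split(",")
    return {
        "timestamp": int(ts),                      # Unix epoch seconds
        "src": (src_ip, int(src_port)),            # source IP and port
        "dst": (dst_ip, int(dst_port)),            # destination IP and port
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "direction": {"I": "inbound", "O": "outbound"}[direction],
        "allowed": decision == "A",                # A = allowed, D = denied
    }
```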
Learn more:
Get an overview of Network Watcher.
Learn more about NSG flow Logs.
WAFs: Web apps are common, and tend to suffer from vulnerabilities and potential exploits.
Azure Firewall: Like NVA firewall farms, Azure Firewall uses a common administration mechanism and a set of
security rules to protect workloads hosted in spoke networks, and to control access to on-premises networks.
NVA firewalls: Like Azure Firewall, NVA firewall farms have a common administration mechanism and a set of
security rules to protect workloads hosted in spoke networks, and to control access to on-premises networks.
If you want to use an NVA, you can find them in the Azure Marketplace.
We recommend using one set of Azure Firewalls (or NVAs) for traffic originating on the internet, and another for
traffic originating on-premises.
Using only one set of firewalls for both is a security risk, as it provides no security perimeter between the two
sets of network traffic.
Using separate firewall layers reduces the complexity of checking security rules, and it's clear which rules
correspond to which incoming network request.
Learn more:
Learn about using NVAs in an Azure VNet.
Next steps
Review other best practices:
Best practices for security and management after migration.
Best practices for cost management after migration.
Best practices for costing and sizing workloads
migrated to Azure
4/29/2019 • 17 minutes to read
As you plan and design for migration, focusing on costs ensures the long-term success of your Azure migration.
During a migration project, it's critical that all teams (finance, management, app teams, and so on) understand
the associated costs.
Before migration, estimating your migration spend, with a baseline for monthly, quarterly, and yearly budget
targets is critical to success.
After migration, you should optimize costs, continually monitor workloads, and plan for future usage patterns.
Migrated resources might start out as one type of workload, but evolve into another type over time, based on
usage, costs, and shifting business requirements.
This article describes best practices for costing and sizing before and after migration.
IMPORTANT
The best practices and opinions described in this article are based on Azure platform and service features available at the
time of writing. Features and capabilities change over time. Not all recommendations might be applicable for your
deployment, so select what works for you.
Before migration
Before you move your workloads to the cloud, estimate the monthly cost of running them in Azure. Proactively
managing cloud costs helps you adhere to your operating expenses (OpEx) budget. If budget is limited, take this
into account before migration. Consider converting workloads to Azure serverless technologies, where
appropriate, to reduce costs.
The best practices in this section help you estimate costs, perform right-sizing for VMs and storage, take
advantage of Azure Hybrid Benefit, use reserved VM instances, and estimate cloud spending across subscriptions.
Storage optimized: High disk throughput and IO. Good for big data, and SQL and NoSQL databases.
GPU optimized: Specialized VMs with single or multiple GPUs. Good for heavy graphics and video editing.
High performance: The fastest and most powerful CPU VMs, with optional high-throughput network interfaces
(RDMA). Good for critical high-performance apps.
It's important to understand the pricing differences between these VMs, and the long-term budget effects.
Each type has a number of VM series within it.
Additionally, when you select a VM within a series, you can only scale the VM up and down within that series.
For example, a DSv2_2 can scale up to DSv2_4, but it can't be changed to a different series such as Fsv2_2.
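The within-series constraint can be modeled with a naive check that treats the SKU name up to the trailing size number as the series. This is a simplification for illustration using the names from the example above, not how Azure actually models VM SKUs:

```python
# Naive model of the scale-within-series constraint described above.
def _series(sku):
    # Strip the trailing size number (and separator) to get the series name,
    # e.g. "DSv2_2" -> "DSv2". Illustrative only.
    return sku.rstrip("0123456789").rstrip("_")

def can_resize(current, target):
    """A VM can be resized only to another size in the same series."""
    return _series(current) == _series(target)
```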
Learn more:
Learn more about VM types and sizing, and map sizes to types.
Plan VM sizing.
Review a sample assessment for the fictitious Contoso company.
Blobs: Optimized to store massive amounts of unstructured objects, such as text or binary data. Access data from
everywhere over HTTP/HTTPS. Use for streaming and random access scenarios: for example, to serve images and
documents directly to a browser, stream video and audio, and store backup and disaster recovery data.
Files: Managed file shares accessed over SMB 3.0. Use when migrating on-premises file shares, and to provide
multiple access/connections to file data.
Disks: Based on page blobs. Disk type (speed): standard (HDD or SSD) or premium (SSD). Use premium disks for
VMs. Use managed disks for simple management and scaling.
Queues: Store and retrieve large numbers of messages accessed via authenticated calls (HTTP or HTTPS). Use to
connect app components with asynchronous message queueing.
Access tiers
Azure storage provides different options for accessing block blob data. Selecting the right access tier helps ensure
that you store block blob data in the most cost-effective manner.
Hot: Higher storage cost than Cool. Lower access charges than Cool. Use for data in active use that's accessed frequently.
Cool: Lower storage cost than Hot. Higher access charges than Hot. Use to store short-term data that's available but accessed infrequently.
Archive: Used for individual block blobs. Most cost-effective option for storage, but data access is more expensive than Hot and Cool. Use for data that can tolerate several hours of retrieval latency and will remain in the tier for at least 180 days.
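The tier tradeoff is quantitative: the cooler tier trades cheaper storage for pricier access. A minimal sketch, using made-up placeholder prices (not published Azure rates), shows how monthly cost flips between tiers as read volume grows:

```python
# Placeholder monthly prices; NOT real Azure rates.
HOT  = {"storage_per_gb": 0.0184, "reads_per_10k": 0.004}
COOL = {"storage_per_gb": 0.0100, "reads_per_10k": 0.010}

def monthly_cost(tier, gb, reads):
    """Storage cost plus read-transaction cost for one month."""
    return gb * tier["storage_per_gb"] + (reads / 10_000) * tier["reads_per_10k"]

# 100 GB read 5,000 times a month vs. 10 million times a month.
rarely_read = (monthly_cost(COOL, 100, 5_000), monthly_cost(HOT, 100, 5_000))
heavily_read = (monthly_cost(COOL, 100, 10_000_000), monthly_cost(HOT, 100, 10_000_000))
```

With these placeholder rates, Cool is cheaper while the data is rarely read, and Hot wins once reads dominate the bill.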
General Purpose v2 Standard: Supports blobs (block, page, append), files, disks, queues, and tables. Supports Hot, Cool, and Archive access tiers. ZRS is supported. Use for most scenarios and most types of data. Standard storage accounts can be HDD or SSD based.
General Purpose v2 Premium: Supports Blob storage data (page blobs). Supports Hot, Cool, and Archive access tiers. ZRS is supported. Stored on SSD. Microsoft recommends using for all VMs.
General Purpose v1: Access tiering isn't supported. Doesn't support ZRS. Use if apps need the Azure classic deployment model.
Blob: Specialized storage account for storing unstructured objects. Provides block blobs and append blobs only (no File, Queue, Table, or Disk storage services). Provides the same durability, availability, scalability, and performance as General Purpose v2. You can't store page blobs in these accounts, and therefore can't store VHD files. You can set an access tier to Hot or Cool.
Locally Redundant Storage (LRS): Protects against a local outage by replicating within a single storage unit to a separate fault domain and update domain. Keeps multiple copies of your data in one datacenter. Provides at least 99.999999999% (11 9's) durability of objects over a given year. Consider if your app stores data that can be easily reconstructed.
Zone Redundant Storage (ZRS): Protects against a datacenter outage by replicating across three storage clusters in a single region. Each storage cluster is physically separated and located in its own availability zone. Provides at least 99.9999999999% (12 9's) durability of objects over a given year by keeping multiple copies of your data across multiple datacenters or regions. Consider if you need consistency, durability, and high availability. Might not protect against a regional disaster when multiple zones are permanently affected.
Geographically Redundant Storage (GRS): Protects against an entire region outage by replicating data to a secondary region hundreds of miles away from the primary. Provides at least 99.99999999999999% (16 9's) durability of objects over a given year. Replica data isn't available unless Microsoft initiates a failover to the secondary region. If failover occurs, read and write access is available.
Read-Access Geographically Redundant Storage (RA-GRS): Similar to GRS. Provides at least 99.99999999999999% (16 9's) durability of objects over a given year. Provides 99.99% read availability by allowing read access from the secondary region used for GRS.
Learn more:
Review Azure Storage pricing.
Learn about Azure Import/Export for migrating large amounts of data to Azure Blob storage and Azure Files.
Compare blobs, files, and disk storage data types.
Learn more about access tiers.
Review different types of storage accounts.
Learn about storage redundancy, LRS, ZRS, GRS, and Read-access GRS.
Learn more about Azure Files.
After migration
After a successful migration of your workloads, and a few weeks of collecting consumption data, you'll have a clear
idea of resource costs.
As you analyze data, you can start to generate a budget baseline for Azure resource groups and resources.
Then, as you understand where your cloud budget is being spent, you can analyze how to further reduce your
costs.
Best practices in this section include using Azure Cost Management for cost budgeting and analysis, monitoring
resources and implementing resource group budgets, and optimizing monitoring, storage, and VMs.
Best practices: Use Logic Apps and runbooks with Budgets API
Azure provides a REST API that has access to your tenant billing information.
You can use the Budgets API to integrate external systems and workflows that are triggered by metrics that you
build from the API data.
You can pull usage and resource data into your preferred data analysis tools.
The Azure Resource Usage and RateCard APIs can help you accurately predict and manage your costs.
The APIs are implemented as a Resource Provider and are included in the APIs exposed by the Azure Resource
Manager.
The Budgets API can be integrated with Azure Logic Apps and Runbooks.
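As a sketch, a Budgets API call is an authenticated GET against Azure Resource Manager. The endpoint path below follows the Microsoft.Consumption provider; the api-version and subscription ID are placeholders you should verify against current documentation.

```python
def budgets_url(subscription_id, api_version="2019-10-01"):
    """Build the ARM URL that lists budgets for a subscription."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Consumption/budgets"
        f"?api-version={api_version}"
    )

# A real call would attach an Azure AD bearer token, for example with requests:
#   requests.get(budgets_url(sub_id), headers={"Authorization": f"Bearer {token}"})
```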
Learn more:
Learn more about the Budgets API.
Get insights into Azure usage with the Billing API.
Next steps
Review other best practices:
Best practices for security and management after migration.
Best practices for networking after migration.
Contoso migration: Overview
3/15/2019 • 7 minutes to read
This article demonstrates how the fictitious organization Contoso migrates on-premises infrastructure to the
Microsoft Azure cloud.
This document is the first in a series of articles that show how the fictitious company Contoso migrates to Azure.
The series includes information and scenarios that illustrate how to set up a migration of infrastructure, and run
different types of migrations. Scenarios grow in complexity, and we'll add additional articles over time. The
articles show how the Contoso company completes its migration mission, but pointers for general reading and
specific instructions are provided throughout.
Introduction
Azure provides access to a comprehensive set of cloud services. As developers and IT professionals, you can use
these services to build, deploy, and manage applications on a range of tools and frameworks, through a global
network of datacenters. As your business faces challenges associated with the digital shift, the Azure cloud helps
you to figure out how to optimize resources and operations, engage with your customers and employees, and
transform your products.
However, Azure recognizes that even with all the advantages that the cloud provides in terms of speed and
flexibility, minimized costs, performance, and reliability, many organizations are going to need to run on-premises
datacenters for some time to come. In response to cloud adoption barriers, Azure provides a hybrid cloud
strategy that builds bridges between your on-premises datacenters, and the Azure public cloud. For example,
using Azure cloud resources like Azure Backup to protect on-premises resources, or using Azure analytics to gain
insights into on-premises workloads.
As part of the hybrid cloud strategy, Azure provides growing solutions for migrating on-premises apps and
workloads to the cloud. With simple steps, you can comprehensively assess your on-premises resources to figure
out how they'll run in the Azure cloud. Then, with a deep assessment in hand, you can confidently migrate
resources to Azure. When resources are up and running in Azure, you can optimize them to retain and improve
access, flexibility, security, and reliability.
Migration strategies
Strategies for migration to the cloud fall into four broad categories: rehost, refactor, rearchitect, or rebuild. The
strategy you adopt depends upon your business drivers, and migration goals. You might adopt multiple
strategies. For example, you could choose to rehost (lift-and-shift) simple apps, or apps that aren't critical to your
business, but rearchitect those that are more complex and business-critical. Let's look at the strategies.
Rehost: Often referred to as a "lift-and-shift" migration. This option doesn't require code changes, and lets you migrate your existing apps to Azure quickly. Each app is migrated as is, to reap the benefits of the cloud, without the risk and cost associated with code changes. Use when you need to move apps quickly to the cloud, when you want to move an app without modifying it, or when your apps are architected so that they can leverage Azure IaaS scalability after migration.
Rearchitect: Rearchitecting for migration focuses on modifying and extending app functionality and the code base to optimize the app architecture for cloud scalability. For example, you could break down a monolithic application into a group of microservices that work together and scale easily. Use when your apps need major revisions to incorporate new capabilities, or to work effectively on a cloud platform, and when you want to use existing application investments, meet scalability requirements, apply innovative Azure DevOps practices, and minimize use of virtual machines.
Rebuild: Rebuild takes things a step further by rebuilding an app from scratch using Azure cloud technologies. For example, you could build green-field apps with cloud-native technologies like Azure Functions, Azure AI, Azure SQL Database Managed Instance, and Azure Cosmos DB. Use when you want rapid development and existing apps have limited functionality and lifespan, or when you're ready to expedite business innovation (including DevOps practices provided by Azure), build new applications using cloud-native technologies, and take advantage of advancements in AI, blockchain, and IoT.
Migration articles
The articles in the series are summarized in the table below.
Each migration scenario is driven by slightly different business goals that determine the migration strategy.
For each deployment scenario, we provide information about business drivers and goals, a proposed architecture, steps to perform the migration, and recommendations for cleanup and next steps after migration is complete.
Article 3: Assess on-premises resources for migration to Azure. Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. Available.
Article 5: Rehost an app on Azure VMs. Contoso migrates its SmartHotel360 app VMs to Azure VMs by using the Site Recovery service. Available.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group. Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. Available.
Article 7: Rehost a Linux app on Azure VMs.
Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL. Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench. Available.
ARTICLE DETAILS STATUS
Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL. Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. Available.
Article 11: Refactor Team Foundation Server on Azure DevOps Services. Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. Available.
Article 12: Rearchitect an app in Azure containers and Azure SQL Database. Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. Available.
Article 13: Rebuild an app in Azure. Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. Available.
Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. Available.
In this article Contoso sets up all the infrastructure elements it needs to complete all migration scenarios.
Demo apps
The articles use two demo apps - SmartHotel360, and osTicket.
SmartHotel360: This app was developed by Microsoft as a test app that you can use when working with
Azure. It's provided as open source and you can download it from GitHub. It's an ASP.NET app connected to
a SQL Server database. Currently the app is on two VMware VMs running Windows Server 2008 R2, and
SQL Server 2008 R2. The app VMs are hosted on-premises and managed by vCenter Server.
osTicket: An open-source service desk ticketing app that runs on Linux. You can download it from GitHub.
Currently the app is on two VMware VMs running Ubuntu 16.04 LTS, using Apache 2, PHP 7.0, and MySQL 5.7.
Next steps
Learn how Contoso sets up an on-premises and Azure infrastructure to prepare for migration.
Contoso - Deploy a migration infrastructure
3/18/2019 • 37 minutes to read
In this article, Contoso prepares its on-premises infrastructure for migration, and sets up an Azure infrastructure, both in preparation for migration and for running the business in a hybrid environment.
It's a sample architecture that's specific to Contoso.
Whether you need all the elements described in this article depends upon your migration strategy. For
example, if you're building only cloud-native apps in Azure, you might need a less complex networking
structure.
This article is part of a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information and a series of
deployment scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-
premises resources for migration, and run different types of migrations. Scenarios grow in complexity. Articles
will be added to the series over time.
Article 5: Rehost an app on Azure VMs. Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. Available.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group. Contoso migrates the app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. Available.
Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL. Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench. Available.
Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL. Contoso migrates its Linux osTicket app to an Azure web app on multiple sites. The web app is integrated with GitHub for continuous delivery. It migrates the app database to an Azure Database for MySQL instance. Available.
Article 11: Refactor Team Foundation Server on Azure DevOps Services. Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. Available.
Article 12: Rearchitect an app in Azure containers and Azure SQL Database. Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the app database with Azure SQL Database. Available.
Article 13: Rebuild an app in Azure. Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. Available.
Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. Available.
In this article Contoso sets up all the infrastructure elements it needs to complete all migration scenarios.
Overview
Before Contoso can migrate to Azure, it's critical to prepare an Azure infrastructure. Generally, there are six broad areas Contoso needs to think about:
Step 1: Azure subscriptions: How will Contoso purchase Azure, and interact with the Azure platform and
services?
Step 2: Hybrid identity: How will it manage and control access to on-premises and Azure resources after
migration? How does Contoso extend or move identity management to the cloud?
Step 3: Disaster recovery and resilience: How will Contoso ensure that its apps and infrastructure are
resilient if outages and disasters occur?
Step 4: Networking: How should Contoso design a networking infrastructure, and establish connectivity
between its on-premises datacenter and Azure?
Step 5: Security: How will it secure the hybrid/Azure deployment?
Step 6: Governance: How will Contoso keep the deployment aligned with security and governance
requirements?
On-premises architecture
Here's a diagram showing the current Contoso on-premises infrastructure.
Contoso has one main datacenter located in the city of New York in the Eastern United States.
There are three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber Metro Ethernet connection (500 Mbps).
Each branch is connected locally to the internet using business class connections, with IPSec VPN tunnels
back to the main datacenter. This allows the entire network to be permanently connected, and optimizes
internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts,
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management, and DNS servers on the internal network.
The domain controllers in the datacenter run on VMware VMs. The domain controllers at local branches
run on physical servers.
NOTE
The directory that's created has an initial domain name in the form domainname.onmicrosoft.com. The name can't be changed or deleted. Instead, Contoso needs to add its registered domain name to Azure AD.
Scaling resource groups
In the future, Contoso will add other resource groups based on need. For example, it could define a resource group for each app or service, so that each can be managed and secured independently.
Create matching security groups on-premises
1. In the on-premises Active Directory, Contoso admins set up security groups with names that match the
names of the Azure resource groups.
2. For management purposes, they create an additional group that will be added to all of the other groups.
This group will have rights to all resource groups in Azure. A limited number of Global Admins will be
added to this group.
Synchronize AD
Contoso wants to provide a common identity for accessing resources on-premises and in the cloud. To do this,
it will integrate the on-premises Active Directory with Azure AD. With this model:
Users and organizations can take advantage of a single identity to access on-premises applications and
cloud services such as Office 365, or thousands of other sites on the internet.
Admins can leverage the groups in AD to implement Role-Based Access Control (RBAC) in Azure.
To facilitate integration, Contoso uses the Azure AD Connect tool. When you install and configure the tool on a
domain controller, it synchronizes the local on-premises AD identities to the Azure AD.
Download the tool
1. In the Azure portal, Contoso admins go to Azure Active Directory > Azure AD Connect, and
download the latest version of the tool to the server they're using for synchronization.
2. They start the AzureADConnect.msi installation, with Use express settings. This is the most
common installation, and can be used for a single-forest topology, with password hash synchronization
for authentication.
3. In Connect to Azure AD, they specify the credentials for connecting to the Azure AD (in the form
CONTOSO\admin or contoso.com\admin).
5. In Ready to configure, they click Start the synchronization process when configuration
completes to start the sync immediately. Then they install.
Note that:
Contoso has a direct connection to Azure. If your on-premises AD is behind a proxy, read this article.
After the first synchronization, on-premises AD objects can be seen in the Azure AD.
The Contoso IT team is represented in each group, based on its role.
Set up RBAC
Azure Role-Based Access Control (RBAC) enables fine-grained access management for Azure. Using RBAC,
you can grant only the amount of access that users need to perform tasks. You assign the appropriate RBAC
role to users, groups, and applications at a scope level. The scope of a role assignment can be a subscription, a
resource group, or a single resource.
Contoso admins now assign roles to the AD groups that they synchronized from on-premises.
1. In the ContosoCobRG resource group, they click Access control (IAM) > Add role assignment.
2. In Add role assignment > Role, > Contributor, they select the ContosoCobRG AD group from the
list. The group then appears in the Selected members list.
3. They repeat this with the same permissions for the other resource groups (except for
ContosoAzureAdmins), by adding the Contributor permissions to the AD account that matches the
resource group.
4. For the ContosoAzureAdmins AD group, they assign the Owner role.
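Every role assignment pairs a role with a scope string. The sketch below shows the scope formats involved; the subscription ID is a placeholder, and the resource group name is the Contoso example from this article.

```python
def subscription_scope(sub_id):
    """Scope string for a whole subscription."""
    return f"/subscriptions/{sub_id}"

def resource_group_scope(sub_id, rg_name):
    """Scope at which Contoso grants Contributor to each matching AD group."""
    return f"{subscription_scope(sub_id)}/resourceGroups/{rg_name}"

scope = resource_group_scope("00000000-0000-0000-0000-000000000000", "ContosoCobRG")
```

With the Azure CLI, such a scope would be passed as, for example, `az role assignment create --assignee <group-object-id> --role Contributor --scope <scope>`.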
For the domain controllers in the VNET-PROD-EUS2 network, Contoso wants traffic to flow both between the EUS2 hub/production network, and over the VPN connection to on-premises. To do this, Contoso admins must allow the following:
1. Allow forwarded traffic and Allow gateway transit configurations on the peered connection. In our example this would be the VNET-HUB-EUS2 to VNET-PROD-EUS2 connection.
2. Allow forwarded traffic and Use remote gateways on the other side of the peering, on the VNET-PROD-EUS2 to VNET-HUB-EUS2 connection.
3. On-premises, they'll set up a static route that directs the local traffic to route across the VPN tunnel to the VNet. The configuration would be completed on the gateway that provides the VPN tunnel from Contoso to Azure. They use RRAS for this.
Production networks
A spoked peer network can't see a spoked peer network in another region via a hub.
For Contoso's production networks in both regions to see each other, Contoso admins need to create a direct peered connection between VNET-PROD-EUS2 and VNET-PROD-CUS.
Set up DNS
When you deploy resources in virtual networks, you have a couple of choices for domain name resolution. You
can use name resolution provided by Azure, or provide DNS servers for resolution. The type of name
resolution you use depends on how your resources need to communicate with each other. Get more
information about the Azure DNS service.
Contoso admins have decided that the Azure DNS service isn't a good choice in the hybrid environment.
Instead, they're going to leverage the on-premises DNS servers.
Since this is a hybrid network all the VMs on-premises and in Azure need to be able to resolve names to
function properly. This means that custom DNS settings must be applied to all the VNets.
Contoso currently has DCs deployed in the Contoso datacenter and at the branch offices. The primary DNS servers are CONTOSODC1 (172.16.0.10) and CONTOSODC2 (172.16.0.1).
When the VNets are deployed, the on-premises domain controllers will be set to be used as DNS
servers in the networks.
To configure this, when using custom DNS on the VNet, Azure's recursive resolver IP address (such as 168.63.129.16) must be added to the DNS list. To do this, Contoso configures DNS server settings on each VNet. For example, the custom DNS settings for the VNET-HUB-EUS2 network would be as follows:
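As a sketch, these settings correspond to a Resource Manager fragment like the one below; dhcpOptions is the standard place for custom VNet DNS servers, and the addresses are the ones named in this article.

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "name": "VNET-HUB-EUS2",
  "properties": {
    "dhcpOptions": {
      "dnsServers": [
        "172.16.0.10",
        "172.16.0.1",
        "168.63.129.16"
      ]
    }
  }
}
```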
In addition to the on-premises domain controllers, Contoso is going to implement four more to support the Azure networks, two for each region. Here's what Contoso will deploy in Azure.
After deploying the new domain controllers, Contoso needs to update the DNS settings on the networks in either region to include the new domain controllers in the DNS server list.
Set up domain controllers in Azure
After updating network settings, Contoso admins are ready to build out the domain controllers in Azure.
1. In the Azure portal, they deploy a new Windows Server VM to the appropriate VNet.
2. They create availability sets in each location for the VM. Availability sets do the following:
Ensure that the Azure fabric separates the VMs into different infrastructures in the Azure region.
Allow Contoso to be eligible for the 99.95% SLA for VMs in Azure. Learn more.
3. After the VM is deployed, they open the network interface for the VM. They set the private IP address to
static, and specify a valid address.
4. Now, they attach a new data disk to the VM. This disk contains the Active Directory database, and the
sysvol share.
The size of the disk will determine the number of IOPS that it supports.
Over time the disk size might need to increase as the environment grows.
The drive shouldn't be set to Read/Write for host caching. Active Directory databases don't
support this.
5. After the disk is added, they connect to the VM over Remote Desktop, and open Server Manager.
6. Then in File and Storage Services, they run the New Volume Wizard, ensuring that the drive is given
the letter F: or above on the local VM.
7. In Server Manager, they add the Active Directory Domain Services role. Then, they configure the
VM as a domain controller.
8. After the VM is configured as a DC and rebooted, they open DNS Manager and configure the Azure
DNS resolver as a forwarder. This allows the DC to forward DNS queries it can't resolve in the Azure
DNS.
9. Now, they update the custom DNS settings for each VNet with the appropriate domain controller for
the VNet region. They include on-premises DCs in the list.
Set up Active Directory
AD is a critical service in networking, and must be configured correctly. Contoso admins will build AD sites for
the Contoso datacenter, and for the EUS2 and CUS regions.
1. They create two new sites (AZURE-EUS2 and AZURE-CUS) along with the datacenter site (ContosoDatacenter).
2. After creating the sites, they create subnets in the sites, to match the VNets and datacenter.
3. Then, they create two site links to connect everything. The domain controllers should then be moved to
their location.
5. With everything complete, a list of the domain controllers and sites is shown in the on-premises Active Directory Administrative Center.
Step 5: Plan for governance
Azure provides a range of governance controls across services and the Azure platform. Read more for a basic
understanding of options.
As they configure identity and access control, Contoso has already begun to put some aspects of governance
and security in place. Broadly, there are three areas it needs to consider:
Policy: Policy in Azure applies and enforces rules and effects over your resources, so that resources stay
compliant with corporate requirements and SLAs.
Locks: Azure allows you to lock subscriptions, resource groups, and other resources, so that they can only be modified by those with authority to do so.
Tags: Resources can be controlled, audited, and managed with tags. Tags attach metadata to resources,
providing information about resources or owners.
Set up policies
The Azure Policy service evaluates your resources, scanning for those not compliant with the policy definitions
you have in place. For example, you might have a policy that only allows certain types of VMs, or requires
resources to have a specific tag.
Azure policies specify a policy definition, and a policy assignment specifies the scope in which a policy should
be applied. The scope can range from a management group to a resource group. Learn about creating and
managing policies.
Contoso wants to get started with a couple of policies:
It wants a policy to ensure that resources can only be deployed in the EUS2 and CUS regions.
It wants to limit VM SKUs to approved SKUs only. The intention is to ensure that expensive VM SKUs
aren't used.
Limit resources to regions
Contoso uses the built-in policy definition Allowed locations to limit resource regions.
1. In the Azure portal, click All Services, and search for Policy.
2. Select Assignments > Assign Policy.
3. In the policy list, select Allowed locations.
4. Set Scope to the name of the Azure subscription, and select the two regions in the allowed list.
5. By default the policy is set with Deny, meaning that if someone starts a deployment in the subscription
that isn't in EUS2 or CUS, the deployment will fail. Here's what happens if someone in the Contoso
subscription tries to set up a deployment in West US.
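The effect of the Allowed locations assignment can be sketched as the policy rule below. This is simplified: the built-in definition parameterizes the location list, and the region names here are the standard short forms for East US 2 and Central US.

```json
{
  "if": {
    "not": {
      "field": "location",
      "in": [ "eastus2", "centralus" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```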
Set up locks
Contoso has long been using the ITIL framework for the management of its systems. One of the most
important aspects of the framework is change control, and Contoso wants to make sure that change control is
implemented in the Azure deployment.
Contoso is going to implement locks as follows:
Any production or failover component must be in a resource group that has a ReadOnly lock. This means
that to modify or delete production items, the lock must be removed.
Non-production resource groups will have CanNotDelete locks. This means that authorized users can read
or modify a resource, but can't delete it.
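As a sketch, a ReadOnly lock deployed with a Resource Manager template looks like the fragment below; the lock name and notes are hypothetical.

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "contoso-prod-readonly",
  "properties": {
    "level": "ReadOnly",
    "notes": "Remove this lock through change control before modifying production resources."
  }
}
```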
Learn more about locks.
Set up tagging
To track resources as they're added, it will be increasingly important for Contoso to associate resources with an
appropriate department, customer, and environment.
In addition to providing information about resources and owners, tags will enable Contoso to aggregate and
group resources, and to use that data for chargeback purposes.
Contoso needs to visualize its Azure assets in a way that makes sense for the business, for example by role or department. Note that resources don't need to reside in the same resource group to share a tag. Contoso will create a simple tag taxonomy so that everyone uses the same tags.
ApplicationTeam: Email alias of the team that owns support for the app.
ServiceManager: Email alias of the ITIL Service Manager for the resource.
For example:
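A tag set following this taxonomy might look like the fragment below; the email aliases are hypothetical examples, not values from Contoso's actual environment.

```json
{
  "ApplicationTeam": "smarthotel-support@contoso.com",
  "ServiceManager": "itil-manager@contoso.com"
}
```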
After creating the tag, Contoso will go back and create new Azure policy definitions and assignments, to
enforce the use of the required tags across the organization.
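One way to enforce the taxonomy is a policy rule that denies resources missing a required tag. A simplified sketch follows; the built-in "Require a tag" definitions parameterize the tag name rather than hard-coding it.

```json
{
  "if": {
    "field": "tags['ApplicationTeam']",
    "exists": "false"
  },
  "then": {
    "effect": "deny"
  }
}
```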
Encrypt data
Azure Disk Encryption integrates with Azure Key Vault to help control and manage the disk-encryption keys and secrets in a key vault subscription. It ensures that all data on VM disks is encrypted at rest in Azure Storage.
Contoso has determined that specific VMs require encryption.
Contoso will apply encryption to VMs with customer, confidential, or PII data.
Conclusion
In this article, Contoso set up an Azure infrastructure and policies for Azure subscriptions, hybrid identity, disaster recovery, networking, governance, and security.
Not all of the steps that Contoso completed here are required for a migration to the cloud. In this case, it
wanted to plan a network infrastructure that can be used for all types of migrations, and is secure, resilient, and
scalable.
With this infrastructure in place, Contoso is ready to move on and try out migration.
Next steps
As a first migration scenario, Contoso is going to assess the on-premises SmartHotel360 two-tiered app for
migration to Azure.
Contoso migration: Assess on-premises workloads
for migration to Azure
3/15/2019 • 24 minutes to read
In this article, Contoso assesses its on-premises SmartHotel360 app for migration to Azure.
This article is part of a series that documents how the fictitious company Contoso migrates its on-premises
resources to the Microsoft Azure cloud. The series includes background information, and detailed deployment
scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-premises resources
for migration, and run different types of migrations. Scenarios grow in complexity. Articles will be added to the
series over time.
Article 3: Assess on-premises resources for migration to Azure. Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. This article.
Article 5: Rehost an app on Azure VMs. Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. Available.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group. Contoso migrates the SmartHotel360 app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. Available.
Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL. Contoso migrates its Linux osTicket app to Azure VMs using Site Recovery. It migrates the app database to Azure Database for MySQL using MySQL Workbench. Available.
Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL. Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. Available.
Article 11: Refactor Team Foundation Server on Azure DevOps Services. Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. Available.
Article 12: Rearchitect an app in Azure containers and Azure SQL Database. Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. Available.
Article 13: Rebuild an app in Azure. Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. Available.
Article 14: Scale a migration to Azure. After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. Available.
Overview
As Contoso considers migrating to Azure, the company wants to run a technical and financial assessment to
determine whether its on-premises workloads are suitable for migration to the cloud. In particular, the Contoso
team wants to assess machine and database compatibility for migration. It wants to estimate capacity and costs
for running Contoso's resources in Azure.
To get started and to better understand the technologies involved, Contoso assesses two of its on-premises apps,
summarized in the following table. The company assesses for migration scenarios that rehost and refactor apps
for migration. Learn more about rehosting and refactoring in the Contoso migration overview.
SmartHotel360 (manages Contoso travel requirements)
Platform: Runs on Windows with a SQL Server database.
App tiers: Two-tiered app. The front-end ASP.NET website runs on one VM (WEBVM) and the SQL Server instance runs on another VM (SQLVM).
Details: VMs are VMware, running on an ESXi host managed by vCenter Server. You can download the sample app from GitHub.

osTicket (Contoso service desk app)
Platform: Runs on Linux/Apache with MySQL PHP (LAMP).
App tiers: Two-tiered app. A front-end PHP website runs on one VM (OSTICKETWEB) and the MySQL database runs on another VM (OSTICKETMYSQL).
Details: The app is used by customer service apps to track issues for internal employees and external customers. You can download the sample from GitHub.
Current architecture
This diagram shows the current Contoso on-premises infrastructure:
Contoso has one main datacenter. The datacenter is located in the city of New York in the Eastern United
States.
Contoso has three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber Metro Ethernet connection (500 Mbps).
Each branch is connected locally to the internet by using business-class connections with IPsec VPN tunnels
back to the main datacenter. The setup allows Contoso's entire network to be permanently connected and
optimizes internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts that are
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management. Contoso uses DNS servers on the internal network.
The domain controllers in the datacenter run on VMware VMs. The domain controllers at local branches run
on physical servers.
Business drivers
Contoso's IT leadership team has worked closely with the company's business partners to understand what the
business wants to achieve with this migration:
Address business growth: Contoso is growing. As a result, pressure has increased on the company's on-
premises systems and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures and streamline processes for its
developers and users. The business needs IT to be fast and to not waste time or money, so the company can
deliver faster on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes that occur in the marketplace for the company to be successful in a global
economy. IT at Contoso must not get in the way or become a business blocker.
Scale: As the company's business grows successfully, Contoso IT must provide systems that can grow at the
same pace.
Assessment goals
The Contoso cloud team has identified goals for its migration assessments:
After migration, apps in Azure should have the same performance capabilities that apps have today in
Contoso's on-premises VMware environment. Moving to the cloud doesn't mean that app performance is
less critical.
Contoso needs to understand the compatibility of its applications and databases with Azure requirements.
Contoso also needs to understand its hosting options in Azure.
Contoso's database administration should be minimized after apps move to the cloud.
Contoso wants to understand not only its migration options, but also the costs associated with the
infrastructure after it moves to the cloud.
Assessment tools
Contoso uses Microsoft tools for its migration assessment. The tools align with the company's goals and should
provide Contoso with all the information it needs.
Data Migration Assistant: Contoso uses Data Migration Assistant to assess and detect compatibility issues that might affect its database functionality in Azure. Data Migration Assistant assesses feature parity between SQL sources and targets, and recommends performance and reliability improvements. Cost: Data Migration Assistant is a free, downloadable tool.
Azure Migrate: Contoso uses the Azure Migrate service to assess its VMware VMs. Azure Migrate assesses the migration suitability of the machines, and provides sizing and cost estimates for running in Azure. Cost: As of May 2018, Azure Migrate is a free service.
Service Map: Azure Migrate uses Service Map to show dependencies between machines that the company wants to migrate. Cost: Service Map is part of Azure Monitor logs. Currently, Contoso can use Service Map for 180 days without incurring charges.
In this scenario, Contoso downloads and runs Data Migration Assistant to assess the on-premises SQL Server
database for its travel app. Contoso uses Azure Migrate with dependency mapping to assess the app VMs before
migration to Azure.
Assessment architecture
Prerequisites
Contoso and other users must meet the following prerequisites for the assessment:
Owner or Contributor permissions for the Azure subscription, or for a resource group in the Azure
subscription.
An on-premises vCenter Server instance running version 6.5, 6.0, or 5.5.
A read-only account in vCenter Server, or permissions to create one.
Permissions to create a VM on the vCenter Server instance by using an .ova template.
At least one ESXi host running version 5.5 or later.
At least two on-premises VMware VMs, one running a SQL Server database.
Permissions to install Azure Migrate agents on each VM.
The VMs should have direct internet connectivity.
You can restrict internet access to the required URLs.
If your VMs don't have internet connectivity, the Azure Log Analytics Gateway must be installed on
them, and agent traffic directed through it.
The FQDN of the VM running the SQL Server instance, for database assessment.
Windows Firewall running on the SQL Server VM should allow external connections on TCP port 1433
(default). This setup allows Data Migration Assistant to connect.
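As a sketch of how that firewall rule might be created, run in an elevated prompt on the Windows SQL Server VM (the rule name below is our choice, not from the article):

```shell
:: Hedged sketch (Windows elevated prompt): allow inbound TCP 1433 so
:: Data Migration Assistant can reach the SQL Server instance.
:: "Allow-DMA-SQL-1433" is a hypothetical rule name.
netsh advfirewall firewall add rule name="Allow-DMA-SQL-1433" dir=in action=allow protocol=TCP localport=1433
```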
Assessment overview
Here's how Contoso performs its assessment:
Step 1: Download and install Data Migration Assistant: Contoso prepares Data Migration Assistant for
assessment of the on-premises SQL Server database.
Step 2: Assess the database by using Data Migration Assistant: Contoso runs and analyzes the database
assessment.
Step 3: Prepare for VM assessment by using Azure Migrate: Contoso sets up on-premises accounts and
adjusts VMware settings.
Step 4: Discover on-premises VMs by using Azure Migrate: Contoso creates an Azure Migrate collector
VM. Then, Contoso runs the collector to discover VMs for assessment.
Step 5: Prepare for dependency analysis by using Azure Migrate: Contoso installs Azure Migrate agents
on the VMs, so the company can see dependency mapping between VMs.
Step 6: Assess the VMs by using Azure Migrate: Contoso checks dependencies, groups the VMs, and runs
the assessment. When the assessment is ready, Contoso analyzes the assessment in preparation for
migration.
NOTE
Currently, Data Migration Assistant doesn't support assessment for migrating to an Azure SQL Database Managed
Instance. As a workaround, Contoso uses SQL Server on an Azure VM as the assumed target for the assessment.
3. In Select Target Version, Contoso selects SQL Server 2017 as the target version. Contoso needs to
select this version because it's the version that's used by the SQL Database Managed Instance.
4. Contoso selects reports to help it discover information about compatibility and new features:
Compatibility Issues note changes that might break migration or that require a minor adjustment
before migration. This report keeps Contoso informed about any features currently in use that are
deprecated. Issues are organized by compatibility level.
New feature recommendations note new features in the target SQL Server platform that can
be used for the database after migration. New feature recommendations are organized under the
headings Performance, Security, and Storage.
5. In Connect to a server, Contoso enters the name of the VM that's running the database and credentials
to access it. Contoso selects Trust server certificate to make sure the VM can access SQL Server. Then,
Contoso selects Connect.
6. In Add source, Contoso adds the database it wants to assess, and then selects Next to start the
assessment.
7. The assessment is created.
8. In Review Results, Contoso views the assessment results.
Analyze the database assessment
Results are displayed as soon as they're available. If Contoso fixes issues, it must select Restart Assessment to
rerun the assessment.
1. In the Compatibility issues report, Contoso checks for any issues at each compatibility level.
Compatibility levels map to SQL Server versions as follows:
100: SQL Server 2008/Azure SQL Database
110: SQL Server 2012/Azure SQL Database
120: SQL Server 2014/Azure SQL Database
130: SQL Server 2016/Azure SQL Database
140: SQL Server 2017/Azure SQL Database
2. In the Feature recommendations report, Contoso views performance, security, and storage features
that the assessment recommends after migration. A variety of features are recommended, including In-
Memory OLTP, columnstore indexes, Stretch Database, Always Encrypted, dynamic data masking, and
transparent data encryption.
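The compatibility-level mapping listed in step 1 can be expressed as a small lookup. This helper is purely illustrative (it is not part of Data Migration Assistant):

```shell
# Illustrative helper: map a database compatibility level to the
# SQL Server version it corresponds to, per the list above.
compat_level_version() {
  case "$1" in
    100) echo "SQL Server 2008" ;;
    110) echo "SQL Server 2012" ;;
    120) echo "SQL Server 2014" ;;
    130) echo "SQL Server 2016" ;;
    140) echo "SQL Server 2017" ;;
    *)   echo "unknown" ;;
  esac
}
compat_level_version 130   # prints "SQL Server 2016"
```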
NOTE
Contoso should enable transparent data encryption for all SQL Server databases. This is even more critical when a
database is in the cloud than when it's hosted on-premises. Transparent data encryption should be enabled only
after migration. If transparent data encryption is already enabled, Contoso must move the certificate or
asymmetric key to the master database of the target server. Learn how to move a transparent data encryption-
protected database to another SQL Server instance.
NOTE
For large-scale assessments:
Run multiple assessments concurrently and view the state of the assessments on the All Assessments page.
Consolidate assessments into a SQL Server database.
Consolidate assessments into a Power BI report.
NOTE
You can create an Azure Migrate project only in the West Central US or East US region.
You can plan a migration for any target location.
The project location is used only to store the metadata that's gathered from on-premises VMs.
Download the collector appliance
Azure Migrate creates an on-premises VM known as the collector appliance. The VM discovers on-premises
VMware VMs and sends metadata about the VMs to the Azure Migrate service. To set up the collector appliance,
Contoso downloads an OVA template, and then imports it to the on-premises vCenter Server instance to create
the VM.
1. In the Azure Migrate project, Contoso selects Getting Started > Discover & Assess > Discover
Machines. Contoso downloads the OVA template file.
2. Contoso copies the project ID and key. The project ID and key are required for configuring the collector.
Verify the collector appliance
Before deploying the VM, Contoso checks that the OVA file is secure:
1. On the machine on which the file was downloaded, Contoso opens an administrator Command Prompt
window.
2. Contoso runs the following command to generate the hash for the OVA file:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]
Example
C:\>CertUtil -HashFile C:\AzureMigrate\AzureMigrate.ova SHA256
3. The generated hash should match the hash values listed here.
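On a Linux or macOS workstation, the same check could be scripted. This is a hedged sketch: verify_ova_hash is our helper name, and the expected hash must be the published value from Microsoft's documentation:

```shell
# Hedged sketch: compare a downloaded OVA's SHA256 against the published
# value. The path and expected hash in the example are placeholders.
verify_ova_hash() {
  # $1 = path to OVA file, $2 = expected SHA256 (lowercase hex)
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo "match"; else echo "MISMATCH"; fi
}
# verify_ova_hash ./AzureMigrate.ova "<published-sha256-value>"
```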
Create the collector appliance
Now, Contoso can import the downloaded file to the vCenter Server instance and provision the collector
appliance VM:
1. In the vSphere Client console, Contoso selects File > Deploy OVF Template.
2. In the Deploy OVF Template Wizard, Contoso selects Source, and then specifies the location of the OVA
file.
3. In Name and Location, Contoso specifies a display name for the collector VM. Then, it selects the
inventory location in which to host the VM. Contoso also specifies the host or cluster on which to run the
collector appliance.
4. In Storage, Contoso specifies the storage location. In Disk Format, Contoso selects how it wants to
provision the storage.
5. In Network Mapping, Contoso specifies the network in which to connect the collector VM. The network
needs internet connectivity to send metadata to Azure.
6. Contoso reviews the settings, and then selects Power on after deployment > Finish. A message that
confirms successful completion appears when the appliance is created.
Run the collector to discover VMs
Now, Contoso runs the collector to discover VMs. Currently, the collector supports only English
(United States) as the operating system language and collector interface language.
1. In the vSphere Client console, Contoso selects Open Console. Contoso specifies the language, time zone,
and password preferences for the collector VM.
2. On the desktop, Contoso selects the Run collector shortcut.
3. In Azure Migrate Collector, Contoso selects Set up prerequisites. Contoso accepts the license terms and
reads the third-party information.
4. The collector checks that the VM has internet access, that the time is synced, and that the collector service
is running. (The collector service is installed by default on the VM.) Contoso also installs VMware
PowerCLI.
NOTE
It's assumed that the VM has direct access to the internet without using a proxy.
5. In Specify vCenter Server details, Contoso enters the name (FQDN) or IP address of the vCenter
Server instance and the read-only credentials used for discovery.
6. Contoso selects a scope for VM discovery. The collector can discover only VMs that are within the
specified scope. The scope can be set to a specific folder, datacenter, or cluster. The scope shouldn't contain
more than 1,500 VMs.
7. In Specify migration project, Contoso enters the Azure Migrate project ID and key that were copied
from the portal. To get the project ID and key, Contoso can go to the project Overview page > Discover
Machines.
8. In View collection progress, Contoso can monitor discovery and check that metadata collected from the
VMs is in scope. The collector provides an approximate discovery time.
Verify VMs in the portal
When collection is finished, Contoso checks that the VMs appear in the portal:
1. In the Azure Migrate project, Contoso selects Manage > Machines. Contoso checks that the VMs that it
wants to discover are shown.
2. Currently, the machines don't have the Azure Migrate agents installed. Contoso must install the agents to
view dependencies.
4. In Azure Log Analytics, Contoso pastes the workspace ID and key that it copied from the portal.
2. Contoso must run the command to install the MMA agent as root. To become root, Contoso runs the
following command, and then enters the root password:
sudo -i
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w 6b7fcaff-7efb-4356-ae06-516cacf5e25d -s k7gAMAw5Bk8pFVUTZKmk2lG4eUciswzWfYLDTxGcD8pcyc4oT8c6ZRgsMy3MmsQSHuSOcmBUsCjoRiG2x9A8Mg==
NOTE
To view more granular dependencies, you can expand the time range. You can select a specific duration or select
start and end dates.
Run an assessment
1. In Groups, Contoso opens the group (smarthotelapp), and then selects Create assessment.
An assessment has a confidence rating from 1 star to 5 stars (1 star is the lowest and 5 stars is the highest).
The confidence rating is assigned to an assessment based on the availability of data points that are
needed to compute the assessment.
The rating helps you estimate the reliability of the size recommendations that are provided by Azure
Migrate.
The confidence rating is useful when you are doing performance-based sizing, because Azure Migrate might
not have enough data points for utilization-based sizing. For the as on-premises sizing criterion, the
confidence rating is always 5 stars because Azure Migrate has all the data points it needs to size the VM.
Depending on the percentage of data points available, the confidence rating for the assessment is
assigned as follows:
0%-20% of data points available: 1 star
21%-40%: 2 stars
41%-60%: 3 stars
61%-80%: 4 stars
81%-100%: 5 stars
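These thresholds can be sketched as a lookup. The function below is ours, for illustration only; Azure Migrate computes the rating internally:

```shell
# Illustrative only: map the percentage of available data points to the
# confidence rating Azure Migrate would assign per the thresholds above.
confidence_stars() {
  pct=$1   # integer percentage, 0-100
  if   [ "$pct" -le 20 ]; then echo "1 star"
  elif [ "$pct" -le 40 ]; then echo "2 stars"
  elif [ "$pct" -le 60 ]; then echo "3 stars"
  elif [ "$pct" -le 80 ]; then echo "4 stars"
  else echo "5 stars"
  fi
}
confidence_stars 85   # prints "5 stars"
```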
The assessment report shows the information that's summarized in the table. To show performance-based sizing,
Azure Migrate needs the following information. If the information can't be collected, sizing assessment might not
be accurate.
Utilization data for CPU and memory.
Read/write IOPS and throughput for each disk attached to the VM.
Network in/out information for each network adapter attached to the VM.
One possible readiness value is Readiness unknown.
Azure VM size: For ready VMs, Azure Migrate provides an Azure VM size recommendation. The sizing
recommendation depends on the assessment properties.
Cost estimates are calculated by using the size recommendations for a machine.
Estimated monthly costs for compute and storage are aggregated for all VMs in the group.
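As an illustration of that aggregation (the VM names match this scenario, but the dollar figures below are invented; real values come from the assessment report):

```shell
# Sum invented per-VM monthly estimates for a group.
# Columns: VM name, compute $/month, storage $/month.
group_total=$(printf '%s\n' \
  "WEBVM 110.45 9.60" \
  "SQLVM 240.12 21.30" \
  | awk '{sum += $2 + $3} END {printf "%.2f", sum}')
echo "Estimated monthly cost for the group: \$$group_total"
```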
Conclusion
In this scenario, Contoso assesses its SmartHotel360 app database by using Data Migration Assistant. It
assesses the on-premises VMs by using the Azure Migrate service. Contoso reviews the assessments to
make sure that on-premises resources are ready for migration to Azure.
Next steps
In the next article in the series, Contoso rehosts its SmartHotel360 app in Azure by using a lift-and-shift
migration. Contoso migrates the front-end WEBVM for the app by using Azure Site Recovery. It migrates the
app database to an Azure SQL Database Managed Instance by using the Database Migration Service. Get
started with this deployment.
Contoso migration: Rehost an on-premises app on
an Azure VM and SQL Database Managed Instance
3/15/2019 • 31 minutes to read
In this article, Contoso migrates its SmartHotel360 app front-end VM to an Azure VM by using the Azure Site
Recovery service. Contoso also migrates the app database to Azure SQL Database Managed Instance.
NOTE
Azure SQL Database Managed Instance currently is in preview.
This article is one in a series of articles that documents how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information and a series of
scenarios that illustrate how to set up a migration infrastructure and run different types of migrations. Scenarios
grow in complexity. Articles will be added to the series over time.
Article 3: Assess on-premises resources for migration to Azure (Available). Contoso runs an assessment of its on-premises two-tier SmartHotel app running on VMware. Contoso assesses app VMs by using the Azure Migrate service. Contoso assesses the app SQL Server database by using Data Migration Assistant.
Article 4: Rehost an app on an Azure VM and SQL Database Managed Instance (This article). Contoso runs a lift-and-shift migration to Azure for its on-premises SmartHotel app. Contoso migrates the app front-end VM by using Azure Site Recovery. Contoso migrates the app database to an Azure SQL Database Managed Instance by using the Azure Database Migration Service.
Article 5: Rehost an app on Azure VMs (Available). Contoso migrates its SmartHotel app VMs to Azure VMs by using the Site Recovery service.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (Available). Contoso migrates the SmartHotel app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL (Available). Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench.
Article 9: Refactor an app in an Azure web app and Azure SQL Database (Available). Contoso migrates its SmartHotel app to an Azure web app and migrates the app database to an Azure SQL Server instance.
Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL (Available). Contoso migrates its Linux osTicket app to an Azure web app on multiple sites. The web app is integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance.
Article 11: Refactor Team Foundation Server on Azure DevOps Services (Available). Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Article 12: Rearchitect an app in Azure containers and Azure SQL Database (Available). Contoso migrates its SmartHotel app to Azure, and then rearchitects the app. Contoso rearchitects the app web tier as a Windows container, and rearchitects the app database by using Azure SQL Database.
Article 13: Rebuild an app in Azure (Available). Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service, Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Article 14: Scale a migration to Azure (Available). After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
You can download the sample SmartHotel360 app that's used in this article from GitHub.
Business drivers
Contoso's IT leadership team has worked closely with the company's business partners to understand what the
business wants to achieve with this migration:
Address business growth: Contoso is growing. As a result, pressure has increased on the company's on-
premises systems and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and to streamline processes for its
developers and users. The business needs IT to be fast and to not waste time or money, so the company can
deliver faster on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes that occur in the marketplace for the company to be successful in a global
economy. IT at Contoso must not get in the way or become a business blocker.
Scale: As the company's business grows successfully, Contoso IT must provide systems that can grow at the
same pace.
Migration goals
The Contoso cloud team has identified goals for this migration. The company uses migration goals to determine
the best migration method.
After migration, the app in Azure should have the same performance capabilities that the app has today in
Contoso's on-premises VMware environment. Moving to the cloud doesn't mean that app performance is
less critical.
Contoso doesn’t want to invest in the app. The app is critical and important to the business, but Contoso
simply wants to move the app in its current form to the cloud.
Database administration tasks should be minimized after the app is migrated.
Contoso doesn't want to use Azure SQL Database for this app. It's looking for alternatives.
Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that it will use for the migration.
Current architecture
Contoso has one main datacenter (contoso-datacenter). The datacenter is located in the city of New York in
the Eastern United States.
Contoso has three additional local branches across the United States.
The main datacenter is connected to the internet with a fiber Metro Ethernet connection (500 Mbps).
Each branch is connected locally to the internet by using business-class connections with IPsec VPN tunnels
back to the main datacenter. The setup allows Contoso's entire network to be permanently connected and
optimizes internet connectivity.
The main datacenter is fully virtualized with VMware. Contoso has two ESXi 6.5 virtualization hosts that are
managed by vCenter Server 6.5.
Contoso uses Active Directory for identity management. Contoso uses DNS servers on the internal network.
Contoso has an on-premises domain controller (contosodc1).
The domain controllers run on VMware VMs. The domain controllers at local branches run on physical
servers.
The SmartHotel360 app is tiered across two VMs (WEBVM and SQLVM) that are located on a VMware
ESXi version 6.5 host (contosohost1.contoso.com).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com) running on a VM.
Proposed architecture
In this scenario, Contoso wants to migrate its two-tier on-premises travel app as follows:
Migrate the app database (SmartHotelDB) to an Azure SQL Database Managed Instance.
Migrate the front-end WEBVM to an Azure VM.
The on-premises VMs in the Contoso datacenter will be decommissioned when the migration is finished.
Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Database Managed Instance. The following considerations helped it decide to go with Managed
Instance.
Managed Instance aims to deliver almost 100% compatibility with the latest on-premises SQL Server
version. Microsoft recommends Managed Instance for customers running SQL Server on-premises or on an
IaaS VM who want to migrate their apps to a fully managed service with minimal design changes.
Contoso is planning to migrate a large number of apps from on-premises to IaaS. Many of these are ISV
provided. Contoso realizes that using Managed Instance will help ensure database compatibility for these
apps, rather than using SQL Database which might not be supported.
Contoso can simply do a lift-and-shift migration to Managed Instance using the fully automated Database
Migration Service (DMS). With this service in place, Contoso can reuse it for future database migrations.
SQL Managed Instance supports SQL Server Agent, which is important for the SmartHotel360 app.
Contoso needs this compatibility; otherwise, it will have to redesign the maintenance plans required by the app.
With Software Assurance, Contoso can exchange their existing licenses for discounted rates on a SQL
Database Managed Instance using the Azure Hybrid Benefit for SQL Server. This can allow Contoso to save
up to 30% on Managed Instance.
Managed Instance is fully contained in the virtual network, so it provides a high level of isolation and security
for Contoso’s data. Contoso can get the benefits of the public cloud, while keeping the environment isolated
from the public Internet.
Managed Instance supports many security features, including Always Encrypted, dynamic data masking,
row-level security, and threat detection.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.
For the data tier, Managed Instance might not be the best solution if Contoso wants to customize the
operating system or the database server, or if it wants to run third-party apps along with SQL Server.
Running SQL Server on an IaaS VM could provide this flexibility.
Migration process
Contoso will migrate the web and data tiers of its SmartHotel360 app to Azure by completing these steps:
1. Contoso already has its Azure infrastructure in place, so it just needs to add a couple of specific Azure
components for this scenario.
2. The data tier will be migrated by using the Data Migration Service. The Data Migration Service connects
to the on-premises SQL Server VM across a site-to-site VPN connection between the Contoso datacenter
and Azure. Then, the Data Migration Service migrates the database.
3. The web tier will be migrated by using a lift-and-shift migration by using Site Recovery. The process
entails preparing the on-premises VMware environment, setting up and enabling replication, and
migrating the VMs by failing them over to Azure.
Azure services
Database Migration Service: The Database Migration Service enables seamless migration from multiple database sources to Azure data platforms with minimal downtime. Cost: Learn about supported regions and Database Migration Service pricing.
Azure SQL Database Managed Instance: Managed Instance is a managed database service that represents a fully managed SQL Server instance in the Azure cloud. It uses the same code as the latest version of the SQL Server Database Engine, and has the latest features, performance improvements, and security patches. Cost: Using a SQL Database Managed Instance running in Azure incurs charges based on capacity. Learn more about Managed Instance pricing.
Azure Site Recovery: The Site Recovery service orchestrates and manages migration and disaster recovery for Azure VMs, and for on-premises VMs and physical servers. Cost: During replication to Azure, Azure Storage charges are incurred. Azure VMs are created and incur charges when failover occurs. Learn more about Site Recovery charges and pricing.
Prerequisites
Contoso and other users must meet the following prerequisites for this scenario:
Enroll in the Managed Instance preview: You must be enrolled in the SQL Database Managed Instance limited public preview. You need an Azure subscription to sign up. Signup can take a few days to complete, so make sure to sign up before you begin to deploy this scenario.
Azure subscription: You should have already created a subscription when you performed the assessment in the first article in this series. If you don't have an Azure subscription, create a free account.
Site Recovery (on-premises): Your on-premises vCenter Server instance should be running version 5.5, 6.0, or 6.5.
Database Migration Service: For the Database Migration Service, you need a compatible on-premises VPN device. Make sure that the service account running the source SQL Server instance has write permissions on the network share.
Scenario steps
Here's how Contoso plans to set up the deployment:
Step 1: Set up a SQL Database Managed Instance: Contoso needs a pre-created Managed Instance to
which the on-premises SQL Server database will migrate.
Step 2: Prepare the Database Migration Service: Contoso must register the database migration provider,
create an instance, and then create a Database Migration Service project. Contoso also must set up a shared
access signature (SAS) Uniform Resource Identifier (URI) for the Database Migration Service. An SAS URI
provides delegated access to resources in Contoso's storage account, so Contoso can grant limited
permissions to storage objects. Contoso sets up an SAS URI, so the Database Migration Service can access
the storage account container to which the service uploads the SQL Server backup files.
Step 3: Prepare Azure for Site Recovery: Contoso must create a storage account to hold replicated data
for Site Recovery. It also must create an Azure Recovery Services vault.
Step 4: Prepare on-premises VMware for Site Recovery: Contoso will prepare accounts for VM
discovery and agent installation to connect to Azure VMs after failover.
Step 5: Replicate VMs: To set up replication, Contoso configures the Site Recovery source and target
environments, sets up a replication policy, and starts replicating VMs to Azure Storage.
Step 6: Migrate the database by using the Database Migration Service: Contoso migrates the
database.
Step 7: Migrate the VMs by using Site Recovery: Contoso runs a test failover to make sure everything's
working. Then, Contoso runs a full failover to migrate the VMs to Azure.
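The SAS URI prepared in step 2 is worth unpacking: it is a container URL plus query parameters (permissions, expiry, a signature) that grant limited, delegated access without handing over the account key. The sketch below shows the general shape of that signing scheme; the string-to-sign layout is simplified, and the real Azure Storage format has more fields and varies by service version, so treat the helper and names here as illustrative only.

```python
# Hedged sketch of SAS URI construction: HMAC-SHA256 over a string-to-sign,
# appended to the container URL as query parameters. Simplified layout;
# not the exact Azure Storage string-to-sign format.
import base64, hmac, hashlib, urllib.parse

def make_sas_uri(container_url, permissions, expiry, account_key_b64):
    # The real Azure format includes more fields (signed version, resource,
    # protocol, etc.); three fields are enough to show the idea.
    string_to_sign = "\n".join([permissions, expiry, container_url])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    query = urllib.parse.urlencode({"sp": permissions, "se": expiry, "sig": sig})
    return f"{container_url}?{query}"

# Example with a made-up account name and key: the Database Migration
# Service would be handed this URI instead of the account key itself.
uri = make_sas_uri(
    "https://contosostorage.blob.core.windows.net/backups",
    "rwl",                        # read, write, list
    "2019-04-01T00:00:00Z",
    base64.b64encode(b"demo-key").decode(),
)
```

Because the signature covers the permissions and expiry, tampering with either invalidates the token, which is what makes the delegation "limited".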
5. They set custom DNS settings. DNS points first to Contoso's Azure domain controllers. Azure DNS is
secondary. The Contoso Azure domain controllers are located as follows:
Located in the PROD-DC-EUS2 subnet, in the East US 2 production network (VNET-PROD-EUS2)
CONTOSODC3 address: 10.245.42.4
CONTOSODC4 address: 10.245.42.5
Azure DNS resolver: 168.63.129.16
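The DNS ordering above can be sketched as a simple resolver list: Contoso's Azure domain controllers are tried first, and the Azure-provided DNS virtual IP answers only when they don't. This is a local model of the fallback behavior, not how Windows DNS client resolution is actually implemented; the query function is a stand-in for a real lookup.

```python
# Resolver ordering sketch: domain controllers first, Azure DNS secondary.
DNS_SERVERS = [
    "10.245.42.4",    # CONTOSODC3 (primary)
    "10.245.42.5",    # CONTOSODC4
    "168.63.129.16",  # Azure-provided DNS (secondary)
]

def resolve(name, query):
    """Try each configured server in order; return the first answer."""
    for server in DNS_SERVERS:
        answer = query(server, name)
        if answer is not None:
            return server, answer
    raise LookupError(f"{name} did not resolve on any configured server")

# Example: if the domain controllers are unreachable, the Azure DNS
# virtual IP ends up answering the query. The hostname and answer
# below are made up for illustration.
def demo_query(server, name):
    return "52.0.0.1" if server == "168.63.129.16" else None

server, ip = resolve("smarthotel360.contoso.com", demo_query)
```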
Need more help?
Get an overview of SQL Database Managed Instance.
Learn how to create a virtual network for a SQL Database Managed Instance.
Learn how to set up peering.
Learn how to update Azure Active Directory DNS settings.
Set up routing
The Managed Instance is placed in a private virtual network. Contoso needs a route table for the virtual network
to communicate with the Azure Management Service. If the virtual network can't communicate with the service
that manages it, the virtual network becomes inaccessible.
Contoso considers these factors:
The route table contains a set of rules (routes) that specify how packets sent from the Managed Instance
should be routed in the virtual network.
The route table is associated with subnets in which Managed Instances are deployed. Each packet that
leaves a subnet is handled based on the associated route table.
A subnet can be associated with only one route table.
There are no additional charges for creating route tables in Microsoft Azure.
To set up routing, Contoso admins do the following:
1. They create a UDR (route) table in the ContosoNetworkingRG resource group.
2. To comply with Managed Instance requirements, after the route table (MIRouteTable) is deployed, they
add a route that has an address prefix of 0.0.0.0/0. The Next hop type option is set to Internet.
3. They associate the route table with the SQLMI-DB-EUS2 subnet (in the VNET-SQLMI-EUS2 network).
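The Managed Instance requirement in step 2 boils down to one check: the route table associated with the subnet must contain a route whose address prefix covers all traffic (0.0.0.0/0) with next hop type Internet. A minimal sketch of that check, using a plain dict rather than an Azure SDK type (the layout is illustrative; only the names MIRouteTable and SQLMI-DB-EUS2 come from the text):

```python
# Validate that a route table has the default route Managed Instance needs.
import ipaddress

route_table = {
    "name": "MIRouteTable",
    "subnet": "SQLMI-DB-EUS2",
    "routes": [
        {"address_prefix": "0.0.0.0/0", "next_hop_type": "Internet"},
    ],
}

def has_default_internet_route(table):
    """True if some route covers all addresses and hops to the internet."""
    for route in table["routes"]:
        net = ipaddress.ip_network(route["address_prefix"])
        if net.prefixlen == 0 and route["next_hop_type"] == "Internet":
            return True
    return False

assert has_default_internet_route(route_table)
```

Because a subnet can be associated with only one route table, this single table governs every packet leaving the Managed Instance subnet.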
2. They create a Blob storage container. Contoso generates an SAS URI so that the Database Migration
Service can access it.
4. They place the Database Migration Service instance in the PROD-DC-EUS2 subnet of the VNET-PROD-DC-EUS2 virtual network.
The Database Migration Service is placed here because the service must be in a virtual network
that can access the on-premises SQL Server VM via a VPN gateway.
The VNET-PROD-EUS2 network is peered to VNET-HUB-EUS2 and is allowed to use remote gateways.
The Use remote gateways option ensures that the Database Migration Service can communicate
as required.
10. They download and install MySQL Server and VMware PowerCLI. Then, they validate the server
settings.
11. After validation, they enter the FQDN or IP address of the vCenter Server instance or vSphere host. They
leave the default port, and enter a display name for the vCenter Server instance in Azure.
12. They specify the account created earlier so that Site Recovery can automatically discover VMware VMs
that are available for replication.
13. They enter credentials, so the Mobility Service is automatically installed when replication is enabled. For
Windows machines, the account needs local administrator permissions on the VMs.
14. When registration is finished, in the Azure portal, they verify again that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery connects to VMware servers by using the specified settings, and discovers VMs.
Set up the target
Now, Contoso admins configure the target replication environment:
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's a storage account and network in the specified target.
Create a replication policy
When the source and target are set up, Contoso admins create a replication policy and associate the policy with
the configuration server:
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create the ContosoMigrationPolicy policy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention: Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency: Default of 1 hour. This value specifies the frequency at
which application-consistent snapshots are created.
3. They specify the target settings, including the resource group and network in which the Azure VM will be
located after failover. They specify the storage account in which replicated data will be stored.
4. They select WebVM for replication. Site Recovery installs the Mobility Service on each VM when
replication is enabled.
5. They check that the correct replication policy is selected, and enable replication for WEBVM. They track
replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for failover.
6. In Essentials in the Azure portal, they can see status for the VMs that are replicating to Azure:
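The three policy defaults in step 2 interact in a simple way: the RPO threshold decides when an alert fires, and the retention window decides which recovery points are still usable. The sketch below models that with datetime arithmetic; it is a local illustration of the semantics described above, not a Site Recovery API.

```python
# Model of the replication policy defaults and what they mean in practice.
from datetime import datetime, timedelta

RPO_THRESHOLD = timedelta(minutes=60)        # alert if replication lags more
RETENTION_WINDOW = timedelta(hours=24)       # recoverable window per VM
APP_SNAPSHOT_INTERVAL = timedelta(hours=1)   # app-consistent snapshot cadence

def rpo_alert(last_recovery_point, now):
    """An alert fires when the newest recovery point is older than the RPO."""
    return now - last_recovery_point > RPO_THRESHOLD

def recoverable(point, now):
    """A recovery point is usable while it sits inside the retention window."""
    return now - point <= RETENTION_WINDOW

now = datetime(2019, 3, 15, 12, 0)
assert not rpo_alert(now - timedelta(minutes=30), now)  # within RPO
assert rpo_alert(now - timedelta(minutes=90), now)      # lagging: alert
assert recoverable(now - timedelta(hours=23), now)      # still in window
assert not recoverable(now - timedelta(hours=25), now)  # aged out
```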
3. For the target, they enter the name of the Managed Instance in Azure, and the access credentials.
4. In New Activity > Run Migration, they specify settings to run migration:
Source and target credentials.
The database to migrate.
The network share created on the on-premises VM. The Database Migration Service takes source
backups to this share.
The service account that runs the source SQL Server instance must have write permissions on
this share.
The FQDN path to the share must be used.
The SAS URI that provides the Database Migration Service with access to the storage account
container to which the service uploads the backup files for migration.
5. They save the migration settings, and then run the migration.
6. In Overview, they monitor the migration status.
7. When migration is finished, they verify that the target databases exist on the Managed Instance.
2. They run a failover on the plan, selecting the latest recovery point. They specify that Site Recovery should
try to shut down the on-premises VM before it triggers the failover.
3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
4. After verifying, they complete the migration to finish the migration process, stop replication for the VM,
and stop Site Recovery billing for the VM.
2. They update the string with the user name and password of the SQL Database Managed Instance.
3. After the string is configured, they replace the current connection string in the web.config file of its
application.
4. After updating the file and saving it, they restart IIS on WEBVM by running IISRESET /RESTART in a
Command Prompt window.
5. After IIS is restarted, the app uses the database that's running on the SQL Database Managed Instance.
6. At this point, they can shut down the on-premises SQLVM machine. The migration is complete.
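The connection-string swap in steps 2 and 3 is a small XML edit to web.config. A minimal sketch using the stdlib parser is below; the connection-string name, server names, and credentials are illustrative stand-ins, not values from Contoso's actual file.

```python
# Swap a named connection string in a web.config-style XML document.
import xml.etree.ElementTree as ET

WEB_CONFIG = """\
<configuration>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=SQLVM;Database=SmartHotel;Integrated Security=true" />
  </connectionStrings>
</configuration>
"""

def swap_connection_string(config_xml, name, new_value):
    """Replace the connectionString attribute of the <add> entry with `name`."""
    root = ET.fromstring(config_xml)
    for add in root.iter("add"):
        if add.get("name") == name:
            add.set("connectionString", new_value)
    return ET.tostring(root, encoding="unicode")

updated = swap_connection_string(
    WEB_CONFIG,
    "DefaultConnection",
    # Hypothetical Managed Instance endpoint and credentials.
    "Server=contoso-mi.database.windows.net;Database=SmartHotel;"
    "User Id=miadmin;Password=***",
)
```

After writing the updated file back, restarting IIS (IISRESET /RESTART, as in step 4) makes the app pick up the new database endpoint.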
Need more help?
Learn how to run a test failover.
Learn how to create a recovery plan.
Learn how to fail over to Azure.
Conclusion
In this article, Contoso rehosts the SmartHotel360 app in Azure by migrating the app front-end VM to Azure using the Site Recovery service. Contoso migrates the on-premises database to an Azure SQL Database Managed Instance by using the Azure Database Migration Service.
Next steps
In the next article in the series, Contoso rehosts the SmartHotel360 app on Azure VMs by using the Azure Site
Recovery service.
Contoso migration: Rehost an on-premises app to
Azure VMs
3/15/2019 • 20 minutes to read
This article demonstrates how Contoso rehosts the on-premises SmartHotel360 app in Azure, by migrating the
app VMs to Azure VMs.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that illustrate
setting up a migration infrastructure, assessing on-premises resources for migration, and running different types
of migrations. Scenarios grow in complexity. We'll add additional articles over time.
ARTICLE DETAILS STATUS
Article 3: Assess on-premises resources for migration to Azure | Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. | Available
Article 5: Rehost an app on Azure VMs | Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. | This article
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group | Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. | Available
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL | Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. | Available
Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL | Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. | Available
Article 11: Refactor TFS on Azure DevOps Services | Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. | Available
Article 12: Rearchitect an app on Azure containers and Azure SQL Database | Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. | Available
Article 13: Rebuild an app in Azure | Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. | Available
Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available
In this article, Contoso will migrate the two-tier Windows .NET SmartHotel360 app running on VMware VMs
to Azure. If you want to use this app, it's provided as open source and you can download it from GitHub.
Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there is pressure on their on-premises
systems and infrastructure.
Limit risk: The SmartHotel360 app is critical for the Contoso business. It wants to move the app to Azure
with zero risk.
Extend: Contoso doesn't want to modify the app, but does want to ensure that it's stable.
Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals are used to determine the best
migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in VMware.
The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Contoso doesn't want to change the ops model for this app. Contoso does want to interact with it in the cloud
in the same way that it does now.
Contoso doesn't want to change any app functionality. Only the app location will change.
Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Since the app is a production workload, the app VMs in Azure will reside in the production resource group
ContosoRG.
The app VMs will be migrated to the primary Azure region (East US 2) and placed in the production network
(VNET-PROD-EUS2).
The web frontend VM will reside in the frontend subnet (PROD-FE-EUS2) in the production network.
The database VM will reside in the database subnet (PROD-DB-EUS2) in the production network.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Server. The following considerations helped them to decide to go with SQL Server running on an Azure
IaaS VM:
Using an Azure VM running SQL Server seems to be an optimal solution if Contoso needs to customize the
operating system or the database server, or if it might want to colocate and run third-party apps on the same
VM.
With Software Assurance, Contoso can in the future exchange existing licenses for discounted rates on a SQL
Database Managed Instance by using the Azure Hybrid Benefit for SQL Server. This can save up to 30% on
Managed Instance costs.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.
CONSIDERATION DETAILS
Pros Both the app VMs will be moved to Azure without changes,
making the migration simple.
Cons WEBVM and SQLVM are running Windows Server 2008 R2.
The operating system is supported by Azure for specific roles
(July 2018). Learn more.
The web and data tiers of the app will remain single points of failure.
Migration process
Contoso will migrate the app frontend and database VMs to Azure VMs with Site Recovery:
As a first step, Contoso prepares and sets up Azure components for Site Recovery, and prepares the on-
premises VMware infrastructure.
They already have the Azure infrastructure in place, so Contoso just needs to add a couple of Azure
components specifically for Site Recovery.
With everything prepared, Contoso can start replicating the VMs.
After replication is enabled and working, Contoso will migrate the VM by failing it over to Azure.
Azure services
SERVICE DESCRIPTION COST
Azure Site Recovery | The service orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers. | During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing.
Prerequisites
Here's what Contoso needs to run this scenario.
REQUIREMENTS DETAILS
Scenario steps
Here's how Contoso admins will run the migration:
Step 1: Prepare Azure for Site Recovery: They create an Azure storage account to hold replicated data,
and a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: They prepare accounts for VM discovery and
agent installation, and prepare to connect to Azure VMs after failover.
Step 3: Replicate VMs: They set up replication, and start replicating VMs to Azure storage.
Step 4: Migrate the VMs with Site Recovery: They run a test failover to make sure everything's working,
and then run a full failover to migrate the VMs to Azure.
3. Create a vault: With the network and storage account in place, Contoso now creates a Recovery Services
vault (ContosoMigrationVault), and places it in the ContosoFailoverRG resource group in the primary
East US 2 region.
11. They download and install MySQL Server and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the server in Azure.
13. They specify the account that they created for automatic discovery, and the credentials that are used to
automatically install the Mobility Service. For Windows machines, the account needs local administrator
privileges on the VMs.
14. After registration finishes, in the Azure portal, they double check that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers using the specified settings, and discovers VMs.
Set up the target
Now Contoso admins specify the target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target location.
Create a replication policy
Now Contoso admins can create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention: Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency: Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.
3. They select the target settings, including the resource group and Azure network, and the storage account.
4. They select WebVM for replication, check the replication policy, and enable replication.
At this stage, they select only WEBVM, because the VNet and subnet must be selected for each VM, and the app VMs will be placed in different subnets.
Site Recovery automatically installs the Mobility service on the VM when replication is enabled.
5. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
6. In Essentials in the Azure portal, they can see the status of the VMs replicating to Azure.
Enable replication for SQLVM
Now Contoso admins can start replicating the SQLVM machine, using the same process as above.
1. They select source settings.
2. They then specify the target settings.
2. After creating the plan, they customize it (Recovery Plans > SmartHotelMigrationPlan >
Customize).
3. They remove WEBVM from Group 1: Start. This ensures that the first start action affects SQLVM only.
4. In +Group > Add protected items, they add WEBVM to Group 2: Start. The VMs need to be in two
different groups.
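The reason for splitting the VMs into two start groups is ordering: recovery plan groups start in sequence, so putting SQLVM in Group 1 and WEBVM in Group 2 guarantees the database is up before the web tier boots. Plain lists stand in for Site Recovery's group objects in this sketch:

```python
# Recovery plan groups as ordered lists: Group 1 starts before Group 2.
recovery_plan = [
    ["SQLVM"],   # Group 1: Start - database tier first
    ["WEBVM"],   # Group 2: Start - web tier after the database is up
]

def boot_order(plan):
    """Flatten the groups into the sequence in which VMs are started."""
    return [vm for group in plan for vm in group]

order = boot_order(recovery_plan)
assert order.index("SQLVM") < order.index("WEBVM")
```

Had WEBVM stayed in Group 1, both VMs would start together and the web app could come up before its database is reachable.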
Migrate the VMs
Now Contoso admins run a full failover to complete the migration.
1. They select the recovery plan > Failover.
2. They select to fail over to the latest recovery point, and that Site Recovery should try to shut down the on-
premises VM before triggering the failover. They can follow the failover progress on the Jobs page.
3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
4. After verification, they complete the migration for each VM. This stops replication for the VM, and stops
Site Recovery billing for it.
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.
BCDR
For business continuity and disaster recovery (BCDR), Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the VMs using the Azure Backup service. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
1. Contoso has existing licensing for its VMs, and will leverage the Azure Hybrid Benefit. Contoso will convert
the existing Azure VMs to take advantage of this pricing.
2. Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.
Conclusion
In this article, Contoso rehosted the SmartHotel360 app in Azure by migrating the app VMs to Azure VMs using
the Site Recovery service.
Next steps
In the next article in the series, we'll show you how Contoso rehosts the SmartHotel360 app frontend VM on an
Azure VM, and migrates the database to a SQL Server AlwaysOn Availability Group in Azure.
Contoso migration: Rehost an on-premises app on
Azure VMs and SQL Server AlwaysOn Availability
Group
3/15/2019 • 30 minutes to read
This article demonstrates how Contoso rehosts the SmartHotel360 app in Azure. Contoso migrates the app
frontend VM to an Azure VM, and the app database to an Azure SQL Server VM, running in a Windows Server
failover cluster with SQL Server AlwaysOn Availability Groups.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that illustrate
setting up a migration infrastructure, assessing on-premises resources for migration, and running different types
of migrations. Scenarios grow in complexity. We'll add additional articles over time.
ARTICLE DETAILS STATUS
Article 3: Assess on-premises resources for migration to Azure | Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant. | Available
Article 5: Rehost an app on Azure VMs | Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service. | Available
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group | Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group. | This article
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL Server | Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench. | Available
Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL | Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance. | Available
Article 11: Refactor TFS on Azure DevOps Services | Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure. | Available
Article 12: Rearchitect an app on Azure containers and Azure SQL Database | Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database. | Available
Article 13: Rebuild an app in Azure | Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB. | Available
Article 14: Scale a migration to Azure | After trying out migration combinations, Contoso prepares to scale to a full migration to Azure. | Available
In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there is pressure on on-premises systems
and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster on
customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster to changes in the marketplace, to enable success in a global economy. IT mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.
Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in VMware.
The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
The on-premises database for the app has had availability issues. Contoso would like to deploy it in Azure as
a high-availability cluster, with failover capabilities.
Contoso wants to upgrade from their current SQL Server 2008 R2 platform, to SQL Server 2017.
Contoso doesn't want to use an Azure SQL Database for this app, and is looking for alternatives.
Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that it will use for the migration.
Current architecture
The app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
In this scenario:
Contoso will migrate the app frontend WEBVM to an Azure IaaS VM.
The frontend VM in Azure will be deployed in the ContosoRG resource group (used for production
resources).
It will be located in the Azure production network (VNET-PROD-EUS2) in the primary East US 2
region.
The app database will be migrated to an Azure SQL Server VM.
It will be located in Contoso's Azure database network (PROD-DB-EUS2) in the primary East US 2
region.
It will be placed in a Windows Server failover cluster with two nodes, that uses SQL Server Always On
Availability Groups.
In Azure the two SQL Server VM nodes in the cluster will be deployed in the ContosoRG resource
group.
The VM nodes will be located in the Azure production network (VNET-PROD-EUS2) in the primary
East US 2 region.
VMs will run Windows Server 2016 with SQL Server 2017 Enterprise Edition. Contoso doesn't have
licenses for this operating system, so it will use an image in the Azure Marketplace that provides the
license as a charge to their Azure EA commitment.
Apart from unique names, both VMs use the same settings.
Contoso will deploy an internal load balancer which listens for traffic on the cluster, and directs it to the
appropriate cluster node.
The internal load balancer will be deployed in the ContosoNetworkingRG (used for networking
resources).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
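The internal load balancer's role in the design above is to give clients one stable listener address while directing traffic to whichever cluster node currently holds the primary Always On replica. The sketch below is a local model of that behavior, not Azure's actual health-probe mechanics; the node names match those used later for the SQL Server VMs (SQLAOG1, SQLAOG2).

```python
# Model of the listener: route to whichever node is the primary replica.
nodes = {"SQLAOG1": "primary", "SQLAOG2": "secondary"}

def route_to_primary(cluster):
    """Return the node holding the primary replica, as the ILB would."""
    for node, role in cluster.items():
        if role == "primary":
            return node
    raise RuntimeError("no primary replica online")

assert route_to_primary(nodes) == "SQLAOG1"

# After a failover, the roles swap, and the same listener address now
# reaches the new primary - clients don't need to change anything.
nodes = {"SQLAOG1": "secondary", "SQLAOG2": "primary"}
assert route_to_primary(nodes) == "SQLAOG2"
```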
Database considerations
As part of the solution design process, Contoso did a feature comparison between Azure SQL Database and
SQL Server. The following considerations helped them to decide to go with an Azure IaaS VM running SQL
Server:
Using an Azure VM running SQL Server seems to be an optimal solution if Contoso needs to customize the
operating system or the database server, or if it might want to colocate and run third-party apps on the same
VM.
Using the Data Migration Assistant, Contoso can easily assess and migrate to an Azure SQL Database.
Solution review
Contoso evaluates their proposed design by putting together a pros and cons list.
CONSIDERATION DETAILS
The SQL Server tier will run on SQL Server 2017 and
Windows Server 2016. This retires their current Windows
Server 2008 R2 operating system, and running SQL Server
2017 supports Contoso's technical requirements and goals.
It provides 100% compatibility while moving away from SQL Server 2008 R2.
The web tier of the app will remain a single point of failure.
Azure services
SERVICE DESCRIPTION COST
Data Migration Assistant | DMA runs locally from the on-premises SQL Server machine, and migrates the database across a site-to-site VPN to Azure. | DMA is a free, downloadable tool.
Azure Site Recovery | Site Recovery orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers. | During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing.
Migration process
Contoso admins will migrate the app VMs to Azure.
They'll migrate the frontend VM to Azure VM using Site Recovery:
As a first step, they'll prepare and set up Azure components, and prepare the on-premises VMware
infrastructure.
With everything prepared, they can start replicating the VM.
After replication is enabled and working, they migrate the VM by failing it over to Azure.
They'll migrate the database to a SQL Server cluster in Azure, using the Data Migration Assistant (DMA).
As a first step they'll need to provision SQL Server VMs in Azure, set up the cluster and an internal
load balancer, and configure AlwaysOn availability groups.
With this in place, they can migrate the database
After the migration, they'll enable AlwaysOn protection for the database.
Prerequisites
Here's what Contoso needs to do for this scenario.
REQUIREMENTS DETAILS
Site Recovery (on-premises) | The on-premises vCenter Server instance should be running version 5.5, 6.0, or 6.5.
Scenario steps
Here's how Contoso will run the migration:
Step 1: Prepare a cluster: Create a cluster for deploying two SQL Server VM nodes in Azure.
Step 2: Deploy and set up the cluster: Prepare an Azure SQL Server cluster. Databases are migrated into
this pre-created cluster.
Step 3: Deploy the load balancer: Deploy a load balancer to balance traffic to the SQL Server nodes.
Step 4: Prepare Azure for Site Recovery: Create an Azure storage account to hold replicated data, and a
Recovery Services vault.
Step 5: Prepare on-premises VMware for Site Recovery: Prepare accounts for VM discovery and agent
installation. Prepare on-premises VMs so that users can connect to Azure VMs after migration.
Step 6: Replicate VMs: Enable VM replication to Azure.
Step 7: Install DMA: Download and install the Database Migration Assistant.
Step 8: Migrate the database with DMA: Migrate the database to Azure.
Step 9: Protect the database: Create an Always On Availability Group for the cluster.
Step 10: Migrate the web app VM: Run a test failover to make sure everything's working as expected.
Then run a full failover to Azure.
5. In SQL Server settings, they limit SQL connectivity to the virtual network (private), on default port
1433. For authentication they use the same credentials as they use onsite (contosoadmin).
5. When they create the storage account, primary and secondary access keys are generated for it. They need
the primary access key to create the cloud witness. The key appears under the storage account name >
Access Keys.
Add SQL Server VMs to Contoso domain
1. Contoso adds SQLAOG1 and SQLAOG2 to contoso.com domain.
2. Then, on each VM they install the Windows Failover Cluster Feature and Tools.
Set up the cluster
Before setting up the cluster, Contoso admins take a snapshot of the OS disk on each machine.
1. Then, they run a script they've put together to create the Windows Failover Cluster.
2. After they've created the cluster, they verify that the VMs appear as cluster nodes.
11. They then download and install MySQL Server, and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
13. They specify the account that they created for automatic discovery, and the credentials that are used to
automatically install the Mobility Service. For Windows machines, the account needs local administrator
privileges on the VMs.
14. After registration finishes, in the Azure portal, they double check that the configuration server and
VMware server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers using the specified settings, and discovers VMs.
Set up the target
Now Contoso admins specify target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
Now, Contoso admins can create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate,
they create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.
3. Now, they specify the target settings, including the resource group and VNet, and the storage account in
which replicated data will be stored.
4. They select the WEBVM for replication, check the replication policy, and enable replication. Site
Recovery installs the Mobility Service on the VM when replication is enabled.
5. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
6. In Essentials in the Azure portal, they can see the structure for the VMs replicating to Azure.
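The replication policy defaults in step 2 translate into a concrete recovery-point budget. A small Python sketch of those numbers (only the three default values come from the walkthrough; the helper names are illustrative):

```python
# Replication policy defaults from the walkthrough.
RPO_THRESHOLD_MINUTES = 60    # alert if continuous replication lags past this
RETENTION_HOURS = 24          # recovery window per replicated VM
APP_SNAPSHOT_FREQ_HOURS = 1   # app-consistent snapshot frequency

def rpo_alert(lag_minutes: float) -> bool:
    """True when replication lag breaches the RPO threshold and raises an alert."""
    return lag_minutes > RPO_THRESHOLD_MINUTES

# App-consistent snapshots available inside the retention window.
app_consistent_points = RETENTION_HOURS // APP_SNAPSHOT_FREQ_HOURS
print(app_consistent_points)  # 24
print(rpo_alert(45))          # False: still inside the 60-minute threshold
```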
3. In the migration details, they add SQLVM as the source server, and SQLAOG1 as the target. They specify
credentials for each machine.
4. They create a local share for the database and configuration information. It must be accessible with write
access by the SQL Service account on SQLVM and SQLAOG1.
5. Contoso selects the logins that should be migrated, and starts the migration. After it finishes, DMA shows
the migration as successful.
6. They verify that the database is running on SQLAOG1.
DMS connects to the on-premises SQL Server VM across a site-to-site VPN connection between the Contoso
datacenter and Azure, and then migrates the database.
4. They configure a listener for the group (SHAOG) and port. The IP address of the internal load balancer is
added as a static IP address (10.245.40.100).
5. In Select Data Synchronization, they enable automatic seeding. With this option, SQL Server
automatically creates the secondary replicas for every database in the group, so Contoso doesn't have to
manually back up and restore these. After validation, the availability group is created.
6. Contoso ran into an issue when creating the group. They aren't using Active Directory Windows
Integrated security, and thus need to grant permissions to the SQL login to create the Windows Failover
Cluster roles.
7. After the group is created, Contoso can see it in SQL Management Studio.
Configure a listener on the cluster
As a last step in setting up the SQL deployment, Contoso admins configure the internal load balancer as the
listener on the cluster, and bring the listener online. They use a script to do this.
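The listener address chosen earlier must be a free static IP inside the load balancer's subnet. A quick sanity check with Python's ipaddress module; the 10.245.40.0/24 prefix is an assumption for illustration, only the 10.245.40.100 address comes from the walkthrough:

```python
import ipaddress

LISTENER_IP = ipaddress.ip_address("10.245.40.100")  # static IP from the walkthrough
DB_SUBNET = ipaddress.ip_network("10.245.40.0/24")   # assumed subnet prefix

# The listener IP must fall inside the subnet that backs the internal load balancer.
print(LISTENER_IP in DB_SUBNET)  # True
```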
3. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
4. After verifying the VM in Azure, they complete the migration to finish the migration process, stop
replication for the VM, and stop Site Recovery billing for the VM.
2. After updating the file and saving it, they restart IIS on WEBVM. They do this by running IISRESET
/RESTART from a command prompt.
3. After IIS has been restarted, the application is now using the database running on the SQL MI.
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.
BCDR
For business continuity and disaster recovery (BCDR), Contoso takes the following actions:
Keep data safe: Contoso backs up the data on the WEBVM, SQLAOG1 and SQLAOG2 VMs using the Azure
Backup service. Learn more (https://docs.microsoft.com/azure/backup/backup-introduction-to-azure-backup?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
Contoso will also learn about how to use Azure Storage to back up SQL Server directly to blob
storage. Learn more.
Keep apps up and running: Contoso replicates the app VMs in Azure to a secondary region using Site
Recovery. Learn more.
Licensing and cost optimization
1. Contoso has existing licensing for their WEBVM and will leverage the Azure Hybrid Benefit. Contoso will
convert the existing Azure VMs to take advantage of this pricing.
2. Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multi-cloud
cost management solution that helps you to utilize and manage Azure and other cloud resources. Learn more
about Azure Cost Management.
Conclusion
In this article, Contoso rehosted the SmartHotel360 app in Azure by migrating the app frontend VM to Azure
using the Site Recovery service. Contoso migrated the app database to a SQL Server cluster provisioned in
Azure, and protected it in a SQL Server AlwaysOn availability group.
Next steps
In the next article in the series, we'll show how Contoso rehosts its service desk osTicket app, running on Linux
and deployed with a MySQL database.
Contoso migration: Rehost an on-premises Linux
app to Azure VMs
4/4/2019 • 20 minutes to read
This article shows how Contoso is rehosting an on-premises Linux-based service desk app (osTicket), to Azure
IaaS VMs.
This document is one in a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and a set of
scenarios that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios
grow in complexity. We'll add additional articles over time.
Article 3: Assess on-premises resources for migration to Azure (Available). Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.
Article 5: Rehost an app on Azure VMs (Available). Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (Available). Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Article 7: Rehost a Linux app on Azure VMs (This article). Contoso completes a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Azure Site Recovery.
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL (Available). Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench.
Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL (Available). Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance.
Article 11: Refactor TFS on Azure DevOps Services (Available). Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Article 12: Rearchitect an app on Azure containers and Azure SQL Database (Available). Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database.
Article 13: Rebuild an app in Azure (Available). Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Article 14: Scale a migration to Azure (Available). After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
In this article, Contoso will migrate the two-tier osTicket app, running on a Linux Apache MySQL PHP (LAMP)
stack, to Azure. The app VMs will be migrated using the Azure Site Recovery service. If you'd like to use this
open-source app, you can download it from GitHub.
Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and as a result there's pressure on the on-premises systems
and infrastructure.
Limit risk: The service desk app is critical for the Contoso business. Contoso wants to move it to Azure with
zero risk.
Extend: Contoso doesn't want to change the app right now. It simply wants to ensure that the app is stable.
Migration goals
The Contoso cloud team has pinned down goals for this migration, to determine the best migration method:
After migration, the app in Azure should have the same performance capabilities as it does today in their on-
premises VMware environment. The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It is important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Contoso doesn't want to change the ops model for this app. It wants to interact with the app in the cloud in
the same way that they do now.
Contoso doesn't want to change app functionality. Only the app location will change.
Having completed a couple of Windows app migrations, Contoso wants to learn how to use a Linux-based
infrastructure in Azure.
Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The osTicket app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Since the app is a production workload, the VMs in Azure will reside in the production resource group
ContosoRG.
The VMs will be migrated to the primary region (East US 2) and placed in the production network
(VNET-PROD-EUS2):
The web VM will reside in the frontend subnet (PROD-FE-EUS2).
The database VM will reside in the database subnet (PROD-DB-EUS2).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.
Pros: Both the app VMs will be moved to Azure without changes, making the migration simple.
Cons: The web and data tiers of the app will each remain a single point of failure.
Migration process
Contoso will migrate as follows:
1. As a first step, Contoso sets up the Azure and on-premises infrastructure needed to deploy Site Recovery.
2. After preparing Azure and on-premises components, Contoso sets up and enables replication for the VMs.
3. After replication is working, Contoso migrates the VMs by failing them over to Azure.
Azure services
SERVICE: Azure Site Recovery
DESCRIPTION: The service orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers.
COST: During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing.
Prerequisites
Here's what Contoso needs for this scenario.
REQUIREMENTS: On-premises VMs
DETAILS: Review Linux machines that are supported for migration with Site Recovery.
Scenario steps
Here's how Contoso will complete the migration:
Step 1: Prepare Azure for Site Recovery: Contoso creates an Azure storage account to hold replicated
data, and creates a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: Contoso prepares accounts to be used for VM
discovery and agent installation, and prepares to connect to Azure VMs after failover.
Step 3: Replicate VMs: Contoso sets up the source and target migration environment, creates a replication
policy, and starts replicating VMs to Azure storage.
Step 4: Migrate the VMs with Site Recovery: Contoso runs a test failover to make sure everything's
working, and then runs a full failover to migrate the VMs to Azure.
2. With the network and storage account in place, they create a vault (ContosoMigrationVault), and place it
in the ContosoFailoverRG resource group, in the primary East US 2 region.
2. They import the template into VMware to create the VM, and deploy the VM.
3. When they turn on the VM for the first time, it boots up into a Windows Server 2016 installation
experience. They accept the license agreement, and enter an administrator password.
4. After the installation finishes, they sign into the VM as an administrator. At first sign-in, the Azure Site
Recovery Configuration Tool runs by default.
5. In the tool, they specify a name to use for registering the configuration server in the vault.
6. The tool checks that the VM can connect to Azure. After the connection is established, they sign in to the
Azure subscription. The credentials must have access to the vault in which you want to register the
configuration server.
7. The tool performs some configuration tasks and then reboots.
8. They sign in to the machine again, and the Configuration Server Management Wizard starts
automatically.
9. In the wizard, they select the NIC to receive replication traffic. This setting can't be changed after it's
configured.
10. They select the subscription, resource group, and vault in which to register the configuration server.
11. They then download and install MySQL Server, and VMware PowerCLI.
12. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
13. They specify the account that they created for automatic discovery, and the credentials that should be
used to automatically install the Mobility Service.
14. After registration finishes, in the Azure portal, they check that the configuration server and VMware
server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
15. Site Recovery then connects to VMware servers, and discovers VMs.
Set up the target
Now Contoso admins configure the target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
After the source and target are set up, they're ready to create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate, they
create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.
3. They specify the target settings, including the resource group and VNet in which the Azure VM will be
located after failover, and the storage account in which replicated data will be stored.
4. They select the OSTICKETWEB VM for replication.
At this stage they select OSTICKETWEB only, because the VNet and subnet must both be
selected, and the VMs aren't in the same subnet.
Site Recovery automatically installs the Mobility service when replication is enabled for the VM.
5. In the VM properties, they select the account that's used by the process server to automatically install
Mobility Service on the machine.
6. In Replication settings > Configure replication settings, they check that the correct replication policy
is applied, and select Enable Replication.
7. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
Enable replication for OSTICKETMYSQL
Now Contoso admins can start replicating OSTICKETMYSQL.
1. In Replicate application > Source > +Replicate they select the source and target settings.
2. They select the OSTICKETMYSQL VM for replication, and select the account to use for Mobility service
installation.
3. They apply the same replication policy that was used for OSTICKETWEB, and enable replication.
Need more help?
You can read a full walkthrough of all these steps in Enable replication.
Step 4: Migrate the VMs
Contoso admins run a quick test failover, and then migrate the VMs.
Run a test failover
Running a test failover helps ensure that everything's working as expected before the migration.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut down
the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If the latest recovery point is selected,
a recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is the
appropriate size, that it's connected to the right network, and that it's running.
5. After verifying, they clean up the failover, and record and save any observations.
Create and customize a recovery plan
After verifying that the test failover worked as expected, Contoso admins create a recovery plan for migration.
A recovery plan specifies the order in which failover occurs, and how Azure VMs will be brought up in Azure.
Since they want to migrate a two-tier app, they'll customize the recovery plan so that the data VM
(OSTICKETMYSQL) starts before the frontend (OSTICKETWEB).
1. In Recovery Plans (Site Recovery) > +Recovery Plan, they create a plan and add the VMs to it.
2. After creating the plan, they select it for customization (Recovery Plans > OsTicketMigrationPlan >
Customize).
3. They remove OSTICKETWEB from Group 1: Start. This ensures that the first start action affects
OSTICKETMYSQL only.
4. In +Group > Add protected items, they add OSTICKETWEB to Group 2: Start. They need these in
two different groups.
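The grouping above is what enforces boot order: Site Recovery starts everything in Group 1 before anything in Group 2. A minimal sketch of the resulting order (the dictionary layout is illustrative, not a Site Recovery API):

```python
# Recovery plan groups after customization: the data tier boots first.
recovery_plan = {
    "Group 1: Start": ["OSTICKETMYSQL"],
    "Group 2: Start": ["OSTICKETWEB"],
}

# Groups fail over in order, so flattening them gives the VM boot order.
boot_order = [vm for vms in recovery_plan.values() for vm in vms]
print(boot_order)  # ['OSTICKETMYSQL', 'OSTICKETWEB']
```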
Migrate the VMs
Contoso admins are now ready to run a failover on the recovery plan, to migrate the VMs.
1. They select the plan > Failover.
2. They select to fail over to the latest recovery point, and specify that Site Recovery should try to shut down
the on-premises VM before triggering the failover. They can follow the failover progress on the Jobs
page.
3. During the failover, vCenter Server issues commands to stop the two VMs running on the ESXi host.
4. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
5. After verifying the VM in Azure, they complete the migration to finish the migration process for each VM.
This stops replication for the VM, and stops Site Recovery billing for the VM.
Connect the VM to the database
As the final step in the migration process, Contoso admins update the connection string of the application to
point to the app database running on the OSTICKETMYSQL VM.
1. They make an SSH connection to the OSTICKETWEB VM using PuTTY or another SSH client. The VM is
private so they connect using the private IP address.
2. They need to make sure that the OSTICKETWEB VM can communicate with the OSTICKETMYSQL
VM. Currently the configuration is hardcoded with the on-premises IP address 172.16.0.43.
Before the update
After the update
4. Finally, they update the DNS records for OSTICKETWEB and OSTICKETMYSQL, on one of the
Contoso domain controllers.
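The hardcoded-address fix in step 2 amounts to a find-and-replace in the app's configuration. A hedged sketch; the config snippet and the new host name "osticketmysql" are placeholders, while 172.16.0.43 is the on-premises IP from the walkthrough:

```python
def update_db_host(config_text: str, old_host: str, new_host: str) -> str:
    """Replace the hardcoded database host wherever it appears in the config."""
    return config_text.replace(old_host, new_host)

# Illustrative config line; the real osTicket setting lives in its PHP config file.
before = "define('DBHOST', '172.16.0.43');"
after = update_db_host(before, "172.16.0.43", "osticketmysql")
print(after)  # define('DBHOST', 'osticketmysql');
```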
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.
Next steps
In this article we showed how Contoso migrated an on-premises service desk app tiered on two Linux VMs to
Azure IaaS VMs, using Azure Site Recovery.
In the next article in the series, we'll show you how Contoso migrates the same service desk app to Azure. This
time Contoso uses Site Recovery to migrate the frontend VM for the app, and migrates the app database using
backup and restore to Azure Database for MySQL, using the MySQL Workbench tool. Get started.
Contoso migration: Rehost an on-premises Linux
app to Azure VMs and Azure MySQL
3/15/2019 • 20 minutes to read
This article shows how Contoso rehosts its on-premises two-tier Linux service desk app (osTicket), by migrating
it to Azure and Azure MySQL.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.
Article 3: Assess on-premises resources for migration to Azure (Available). Contoso runs an assessment of its on-premises SmartHotel360 app running on VMware. Contoso assesses app VMs using the Azure Migrate service, and the app SQL Server database using Data Migration Assistant.
Article 5: Rehost an app on Azure VMs (Available). Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (Available). Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs. It uses the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL (This article). Contoso migrates the Linux osTicket app to Azure VMs using Azure Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench.
Article 10: Refactor a Linux app on Azure Web Apps and Azure MySQL (Available). Contoso migrates its Linux osTicket app to an Azure web app on multiple Azure regions using Azure Traffic Manager, integrated with GitHub for continuous delivery. Contoso migrates the app database to an Azure Database for MySQL instance.
Article 11: Refactor TFS on Azure DevOps Services (Available). Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Article 12: Rearchitect an app on Azure containers and Azure SQL Database (Available). Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the database with Azure SQL Database.
Article 13: Rebuild an app in Azure (Available). Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Article 14: Scale a migration to Azure (Available). After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
In this article, Contoso migrates a two-tier Linux Apache MySQL PHP (LAMP ) service desk app (osTicket) to
Azure. If you'd like to use this open-source app, you can download it from GitHub.
Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve:
Address business growth: Contoso is growing, and as a result there's pressure on the on-premises systems
and infrastructure.
Limit risk: The service desk app is critical for the business. Contoso wants to move it to Azure with zero risk.
Extend: Contoso doesn't want to change the app right now. It simply wants to keep the app stable.
Migration goals
The Contoso cloud team has pinned down goals for this migration, in order to determine the best migration
method:
After migration, the app in Azure should have the same performance capabilities as it does today in their on-
premises VMware environment. The app will remain as critical in the cloud as it is on-premises.
Contoso doesn’t want to invest in this app. It's important to the business, but in its current form Contoso
simply wants to move it safely to the cloud.
Having completed a couple of Windows app migrations, Contoso wants to learn how to use a Linux-based
infrastructure in Azure.
Contoso wants to minimize database admin tasks after the application is moved to the cloud.
Proposed architecture
In this scenario:
The app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The web tier app on OSTICKETWEB will be migrated to an Azure IaaS VM.
The app database will be migrated to the Azure Database for MySQL PaaS service.
Since Contoso is migrating a production workload, the resources will reside in the production resource
group ContosoRG.
The resources will be replicated to the primary region (East US 2), and placed in the production network
(VNET-PROD-EUS2):
The web VM will reside in the frontend subnet (PROD-FE-EUS2).
The database instance will reside in the database subnet (PROD-DB-EUS2).
The app database will be migrated to Azure MySQL using MySQL tools.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Migration process
Contoso will complete the migration process as follows:
To migrate the web VM:
1. As a first step, Contoso sets up the Azure and on-premises infrastructure needed to deploy Site Recovery.
2. After preparing the Azure and on-premises components, Contoso sets up and enables replication for the web
VM.
3. After replication is up-and-running, Contoso migrates the VM by failing it over to Azure.
To migrate the database:
1. Contoso provisions a MySQL instance in Azure.
2. Contoso sets up MySQL workbench, and backs up the database locally.
3. Contoso then restores the database from the local backup to Azure.
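The same backup-and-restore flow can be driven from the MySQL command-line tools instead of Workbench. A sketch that only assembles the command strings; the server name, user, and file path are placeholders (Azure Database for MySQL logins take the user@servername form):

```python
def dump_command(database: str, user: str, outfile: str) -> str:
    """mysqldump of the on-premises database to a local self-contained file."""
    return f"mysqldump -u {user} -p --databases {database} --result-file={outfile}"

def restore_command(dump_file: str, server: str, user: str) -> str:
    """Restore the dump into the Azure Database for MySQL instance."""
    # Azure MySQL expects the login as user@<short server name>.
    return f"mysql -h {server} -u {user}@{server.split('.')[0]} -p < {dump_file}"

print(dump_command("osticket", "root", "osticket.sql"))
# mysqldump -u root -p --databases osticket --result-file=osticket.sql
```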
Azure services
SERVICE: Azure Site Recovery
DESCRIPTION: The service orchestrates and manages migration and disaster recovery for Azure VMs, and on-premises VMs and physical servers.
COST: During replication to Azure, Azure Storage charges are incurred. Azure VMs are created, and incur charges, when failover occurs. Learn more about charges and pricing.
Prerequisites
Here's what Contoso needs for this scenario.
Scenario steps
Here's how Contoso admins will complete the migration:
Step 1: Prepare Azure for Site Recovery: They create an Azure storage account to hold replicated data,
and create a Recovery Services vault.
Step 2: Prepare on-premises VMware for Site Recovery: They prepare accounts for VM discovery and
agent installation, and prepare to connect to Azure VMs after failover.
Step 3: Provision the database: In Azure, they provision an instance of Azure MySQL database.
Step 4: Replicate VMs: They configure the Site Recovery source and target environment, set up a
replication policy, and start replicating VMs to Azure storage.
Step 5: Migrate the database: They set up migration with MySQL tools.
Step 6: Migrate the VMs with Site Recovery: Lastly, they run a test failover to make sure everything's
working, and then run a full failover to migrate the VMs to Azure.
2. With the network and storage account in place, they create a vault (ContosoMigrationVault), and place it
in the ContosoFailoverRG resource group, in the primary East US 2 region.
Need more help?
Learn about setting up Azure for Site Recovery.
2. They add the name contosoosticket for the Azure database. They add the database to the production
resource group ContosoRG, and specify credentials for it.
3. The on-premises MySQL database is version 5.7, so they select this version for compatibility. They use
the default sizes, which match their database requirements.
4. For Backup Redundancy Options, they select to use Geo-Redundant. This option allows them to
restore the database in their secondary Central US region if an outage occurs. They can only configure
this option when they provision the database.
5. In the VNET-PROD-EUS2 network > Service endpoints, they add a service endpoint (a database
subnet) for the SQL service.
6. After adding the subnet, they create a virtual network rule that allows access from the database subnet in
the production network.
12. Now, they download and install MySQL Server, and VMware PowerCLI.
13. After validation, they specify the FQDN or IP address of the vCenter server or vSphere host. They leave
the default port, and specify a friendly name for the vCenter server.
14. They input the account that they created for automatic discovery, and the credentials that Site Recovery
will use to automatically install the Mobility Service.
15. After registration finishes, in the Azure portal, they check that the configuration server and VMware
server are listed on the Source page in the vault. Discovery can take 15 minutes or more.
16. With everything in place, Site Recovery connects to VMware servers, and discovers VMs.
Set up the target
Now Contoso admins input target replication settings.
1. In Prepare infrastructure > Target, they select the target settings.
2. Site Recovery checks that there's an Azure storage account and network in the specified target.
Create a replication policy
With the source and target set up, Contoso admins are ready to create a replication policy.
1. In Prepare infrastructure > Replication Settings > Replication Policy > Create and Associate,
they create a policy ContosoMigrationPolicy.
2. They use the default settings:
RPO threshold: Default of 60 minutes. This value defines how often recovery points are created.
An alert is generated if continuous replication exceeds this limit.
Recovery point retention. Default of 24 hours. This value specifies how long the retention
window is for each recovery point. Replicated VMs can be recovered to any point in a window.
App-consistent snapshot frequency. Default of one hour. This value specifies the frequency at
which application-consistent snapshots are created.
3. Now they specify the target settings. These include the resource group and network in which the Azure
VM will be located after failover, and the storage account in which replicated data will be stored.
4. They select OSTICKETWEB for replication.
5. In the VM properties, they select the account that should be used to automatically install the Mobility
Service on the VM.
6. In Replication settings > Configure replication settings, they check that the correct replication policy
is applied, and select Enable Replication. The Mobility service will be installed automatically.
7. They track replication progress in Jobs. After the Finalize Protection job runs, the machine is ready for
failover.
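The replication policy defaults above (a 60-minute RPO threshold, a 24-hour retention window, and hourly app-consistent snapshots) can be sketched as follows. This is an illustrative model of how the values relate, not a Site Recovery API; the function and constant names are hypothetical.

```python
# Illustrative sketch of the default Site Recovery replication policy values.
# These names are hypothetical; this is not the Site Recovery API.

RPO_THRESHOLD_MINUTES = 60        # alert if continuous replication lag exceeds this
RETENTION_HOURS = 24              # recovery window for each replicated VM
APP_SNAPSHOT_FREQUENCY_HOURS = 1  # app-consistent snapshot cadence


def app_consistent_points_in_window(retention_hours: int,
                                    snapshot_frequency_hours: int) -> int:
    """Number of app-consistent snapshots available inside the retention window."""
    return retention_hours // snapshot_frequency_hours


def replication_lag_alert(lag_minutes: int) -> bool:
    """An alert is generated when replication lag exceeds the RPO threshold."""
    return lag_minutes > RPO_THRESHOLD_MINUTES
```

With the defaults, a replicated VM can be recovered from any of 24 app-consistent points across the retention window, and an alert fires whenever lag passes the 60-minute threshold.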
Need more help?
You can read a full walkthrough of all these steps in Enable replication.
6. Now, they can import (restore) the database in the Azure MySQL instance, from the self-contained file. A
new schema (osticket) is created for the instance.
Step 6: Migrate the VMs with Site Recovery
Finally, Contoso admins run a quick test failover, and then migrate the VM.
Run a test failover
Running a test failover helps verify that everything's working as expected, before the migration.
1. They run a test failover to the latest available point in time (Latest processed).
2. They select Shut down machine before beginning failover, so that Site Recovery attempts to shut
down the source VM before triggering the failover. Failover continues even if shutdown fails.
3. Test failover runs:
A prerequisites check runs to make sure all of the conditions required for migration are in place.
Failover processes the data, so that an Azure VM can be created. If they select the latest recovery point, a
recovery point is created from the data.
An Azure VM is created using the data processed in the previous step.
4. After the failover finishes, the replica Azure VM appears in the Azure portal. They check that the VM is
the appropriate size, that it's connected to the right network, and that it's running.
5. After verifying, they clean up the failover, and record and save any observations.
Migrate the VM
To migrate the VM, Contoso admins create a recovery plan that includes the VM, and fail the plan over to
Azure.
1. They create a plan, and add OSTICKETWEB to it.
2. They run a failover on the plan. They select the latest recovery point, and specify that Site Recovery
should try to shut down the on-premises VM before triggering the failover. They can follow the failover
progress on the Jobs page.
3. During the failover, vCenter Server issues commands to stop the two VMs running on the ESXi host.
4. After the failover, they verify that the Azure VM appears as expected in the Azure portal.
5. After checking the VM, they complete the migration. This stops replication for the VM, and stops Site
Recovery billing for the VM.
Need more help?
Learn about running a test failover.
Learn how to create a recovery plan.
Learn about failing over to Azure.
Connect the VM to the database
As the final step in the migration process, Contoso admins update the connection string of the app to point to
the Azure Database for MySQL.
1. They make an SSH connection to the OSTICKETWEB VM using PuTTY or another SSH client. The VM is
private, so they connect using the private IP address.
2. They update settings so that the OSTICKETWEB VM can communicate with the OSTICKETMYSQL
database. Currently the configuration is hardcoded with the on-premises IP address 172.16.0.43.
Before the update
After the update
4. Finally, they update the DNS records for OSTICKETWEB, on one of the Contoso domain controllers.
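The configuration change above (replacing the hardcoded on-premises address 172.16.0.43 with the Azure database host) can be sketched as a simple text substitution. The config line shown and the Azure MySQL FQDN are illustrative assumptions, not Contoso's actual file contents.

```python
# Minimal sketch: swap the hardcoded on-premises database address for the
# Azure Database for MySQL host in a config file. The sample config line and
# the Azure host name below are illustrative assumptions.

OLD_DB_HOST = "172.16.0.43"
NEW_DB_HOST = "contosoosticket.mysql.database.azure.com"  # hypothetical FQDN


def update_db_host(config_text: str, old_host: str, new_host: str) -> str:
    """Replace every occurrence of the old database host in the config text."""
    return config_text.replace(old_host, new_host)


before = "define('DBHOST', '172.16.0.43');"
after = update_db_host(before, OLD_DB_HOST, NEW_DB_HOST)
```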
Next steps
This article covered the final rehost scenario. Contoso migrated the frontend VM of the on-premises
Linux osTicket app to an Azure VM, and migrated the app database to an Azure MySQL instance.
In the next set of tutorials in the migration series, we're going to show you how Contoso performed a more
complex set of migrations, involving app refactoring, rather than simple lift-and-shift migrations.
Contoso migration: Refactor an on-premises app to
an Azure Web App and Azure SQL database
3/15/2019 • 17 minutes to read
This article demonstrates how Contoso refactors their SmartHotel360 app in Azure. They migrate the app
frontend VM to an Azure Web App, and the app database to an Azure SQL database.
This document is one in a series of articles that show how the fictitious company Contoso migrates their on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. We'll add additional articles over time.
Article 4: Rehost an app to Azure VMs Demonstrates how Contoso runs a lift- Available
and a SQL Managed Instance and-shift migration to Azure for the
SmartHotel app. Contoso migrates the
app frontend VM using Azure Site
Recovery, and the app database to a
SQL Managed Instance, using the
Azure Database Migration Service.
Article 5: Rehost an app to Azure VMs Shows how Contoso migrates the Available
SmartHotel app VMs using Site
Recovery only.
Article 6: Rehost an app to Azure VMs Shows how Contoso migrates the Available
and SQL Server Always On Availability SmartHotel app. Contoso uses Site
Group Recovery to migrate the app VMs, and
the Database Migration service to
migrate the app database to a SQL
Server cluster protected by an
AlwaysOn availability group.
ARTICLE DETAILS STATUS
Article 7: Rehost a Linux app to Azure Shows how Contoso does a lift-and- Available
VMs shift migration of the Linux osTicket
app to Azure VMs, using Site Recovery
Article 8: Rehost a Linux app to Azure Demonstrates how Contoso migrates Available
VMs and Azure MySQL Server the Linux osTicket app to Azure VMs
using Site Recovery, and migrates the
app database to an Azure MySQL
Server instance using MySQL
Workbench.
Article 9: Refactor an app to an Azure Demonstrates how Contoso migrates This article
Web App and Azure SQL database the SmartHotel app to an Azure Web
App, and migrates the app database to
an Azure SQL database instance.
Article 10: Refactor a Linux app to Shows how Contoso migrates the Available
Azure Web Apps and Azure MySQL Linux osTicket app to Azure Web Apps
in multiple sites, integrated with
GitHub for continuous delivery. They
migrate the app database to an Azure
MySQL instance.
Article 11: Refactor TFS on Azure Shows how Contoso migrates their on- Available
DevOps Services premises Team Foundation Server (TFS)
deployment by migrating it to Azure
DevOps Services in Azure.
Article 12: Rearchitect an app on Azure Shows how Contoso migrates and Available
containers and Azure SQL Database rearchitects their SmartHotel app to
Azure. They rearchitect the app web
tier as a Windows container, and the
app database in an Azure SQL
Database.
Article 13: Rebuild an app in Azure Shows how Contoso rebuilds their Available
SmartHotel app using a range of Azure
capabilities and services, including App
Services, Azure Kubernetes, Azure
Functions, Cognitive services, and
Cosmos DB.
Article 14: Scale a migration to Azure After trying out migration Available
combinations, Contoso prepares to
scale to a full migration to Azure.
In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and there is pressure on on-premises systems and
infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster on
customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster to changes in the marketplace, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.
Costs: Contoso wants to minimize licensing costs.
Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method.
REQUIREMENTS DETAILS
The team doesn't want to invest in the app. For now, admins
will simply move the app safely to the cloud.
The team also wants to move away from SQL Server 2008
R2 to a modern PaaS Database platform, which will minimize
the need for management.
Azure Contoso wants to move the app to Azure, but doesn't want
to run it on VMs. Contoso wants to leverage Azure PaaS
services for both the web and data tiers.
Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that will be used for migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed solution
For the database tier of the app, Contoso compared Azure SQL Database with SQL Server using this article.
Contoso decided to go with Azure SQL Database for a few reasons:
Azure SQL Database is a relational-database managed service. It delivers predictable performance at
multiple service levels, with near-zero administration. Advantages include dynamic scalability with no
downtime, built-in intelligent optimization, and global scalability and availability.
Contoso can leverage the lightweight Data Migration Assistant (DMA) to assess and migrate the on-
premises database to Azure SQL.
With Software Assurance, Contoso can exchange existing licenses for discounted rates on a SQL
Database, using the Azure Hybrid Benefit for SQL Server. This could provide savings of up to 30%.
SQL Database provides a number of security features, including Always Encrypted, dynamic data
masking, and row-level security/threat detection.
For the app web tier, Contoso has decided to use Azure App Service. This PaaS service enables them to deploy
the app with just a few configuration changes. Contoso will use Visual Studio to make the changes, and deploy
two web apps: one for the website, and one for the WCF service.
To meet requirements for a DevOps pipeline, Contoso has selected Azure DevOps for source code
management (SCM) with Git repos. Automated builds and releases will be used to build the code, and deploy
it to the Azure Web Apps.
Solution review
Contoso evaluates their proposed design by putting together a pros and cons list.
CONSIDERATION DETAILS
Contoso can configure the web tier of the app with multiple
instances, so that it's no longer a single point of failure.
Cons Azure App Service only supports one app deployment for
each Web App. This means that two Web Apps must be
provisioned (one for the website and one for the WCF
service).
Proposed architecture
Migration process
1. Contoso provisions an Azure SQL instance, and migrates the SmartHotel360 database to it.
2. Contoso provisions and configures Web Apps, and deploys the SmartHotel360 app to them.
Azure services
SERVICE DESCRIPTION COST
Database Migration Assistant (DMA) Contoso will use DMA to assess and It's a downloadable tool free of charge.
detect compatibility issues that might
impact their database functionality in
Azure. DMA assesses feature parity
between SQL sources and targets, and
recommends performance and
reliability improvements.
Azure SQL Database An intelligent, fully managed relational Cost based on features, throughput,
cloud database service. and size. Learn more.
Azure App Services - Web Apps Create powerful cloud apps using a Cost based on size, location, and usage
fully managed platform duration. Learn more.
Prerequisites
Here's what Contoso needs to run this scenario:
REQUIREMENTS DETAILS
Scenario steps
Here's how Contoso will run the migration:
Step 1: Provision a SQL Database instance in Azure: Contoso provisions a SQL instance in Azure. After
the app website is migrated to Azure, the WCF service web app will point to this instance.
Step 2: Migrate the database with DMA: Contoso migrates the app database with the Database Migration
Assistant.
Step 3: Provision Web Apps: Contoso provisions the two web apps.
Step 4: Set up Azure DevOps: Contoso creates a new Azure DevOps project, and imports the Git repo.
Step 5: Configure connection strings: Contoso configures connection strings so that the web tier web app,
the WCF service web app, and the SQL instance can communicate.
Step 6: Set up build and release pipelines: As a final step, Contoso sets up build and release pipelines to
create the app, and deploys them to two separate Azure Web Apps.
2. They specify a database name to match the database running on the on-premises VM
(SmartHotel.Registration). They place the database in the ContosoRG resource group. This is the
resource group they use for production resources in Azure.
3. They set up a new SQL Server instance (sql-smarthotel-eus2) in the primary region.
4. They set the pricing tier to match their server and database needs. And they select to save money with
Azure Hybrid Benefit because they already have a SQL Server license.
5. For sizing, they use vCore-based purchasing, and set the limits for their expected requirements.
3. In the migration details, they add SQLVM as the source server, and the SmartHotel.Registration
database.
4. They receive an error that seems to be associated with authentication. However, after investigating, they
find that the issue is the period (.) in the database name. As a workaround, they decide to provision a new
SQL database named SmartHotel-Registration to resolve the issue. When they run DMA again,
they're able to select SmartHotel-Registration, and continue with the wizard.
5. In Select Objects, they select the database tables, and generate a SQL script.
10. They delete the extra SQL database SmartHotel.Registration in the Azure portal.
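The naming workaround above (swapping the period in SmartHotel.Registration for a hyphen) can be sketched as a one-line helper. This illustrates the workaround Contoso applied; it is not a DMA feature, and the function name is hypothetical.

```python
# Sketch of the naming workaround: replace periods in the on-premises database
# name before provisioning the target Azure SQL database. Illustrative only;
# this is not part of the Data Migration Assistant.

def azure_safe_db_name(name: str) -> str:
    """Replace periods with hyphens so the target name avoids the DMA issue."""
    return name.replace(".", "-")
```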
2. They provide an app name (SHWEB-EUS2), run it on Windows, and place it in the production resource
group ContosoRG. They create a new app service and plan.
3. After the web app is provisioned, they repeat the process to create a web app for the WCF service
(SHWCF-EUS2).
4. After they're done, they browse to the address of the apps to check they've been created successfully.
3. After the code is imported, they connect Visual Studio to the repo, and clone the code using Team
Explorer.
4. After the repo is cloned to the developer machine, they open the solution file for the app. The web app
and WCF service each have a separate project within the file.
4. The client section of the web.config file for the SmartHotel.Registration.Web should be changed to point
to the new location of the WCF service. This is the URL of the WCF web app hosting the service
endpoint.
5. After the changes are in the code, admins need to commit the changes. Using Team Explorer in Visual
Studio, they commit and sync.
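The web.config change described above can be sketched as rewriting the client endpoint address to the Azure-hosted WCF service. The sample config fragment and the azurewebsites.net URL below are illustrative assumptions, not Contoso's actual values.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the web.config change: point the WCF client endpoint at
# the new Azure Web App URL. The config fragment and endpoint URL below are
# hypothetical examples, not Contoso's real file.

CONFIG = """<configuration>
  <system.serviceModel>
    <client>
      <endpoint address="http://smarthotelwcf/service.svc" name="wcf" />
    </client>
  </system.serviceModel>
</configuration>"""

NEW_ADDRESS = "https://shwcf-eus2.azurewebsites.net/service.svc"  # assumed URL


def repoint_client_endpoint(config_xml: str, new_address: str) -> str:
    """Rewrite each <client> endpoint address to the Azure-hosted service."""
    root = ET.fromstring(config_xml)
    for endpoint in root.iter("endpoint"):
        endpoint.set("address", new_address)
    return ET.tostring(root, encoding="unicode")
```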
3. In Select a template, they select the ASP.NET template for their build.
4. The name ContosoSmartHotelRefactor-ASP.NET-CI is used for the build. They click Save & Queue.
5. This kicks off the first build. They click on the build number to watch the process. After it's finished they
can see the process feedback, and click Artifacts to review the build results.
10. Under the stages, they click 1 job, 1 task to configure deployment of the WCF service.
11. They verify the subscription is selected and authorized, and select the App service name.
12. On the pipeline > Artifacts, they select +Add an artifact, and select to build with the
ContosoSmarthotel360Refactor pipeline.
13. They click the lightning bolt on the artifact to enable the continuous deployment trigger.
16. In Select a file or folder, they locate the SmartHotel.Registration.Wcf.zip file that was created
during the build, and click Save.
17. They click Pipeline > Stages > +Add, to add an environment for SHWEB-EUS2. They select another
Azure App Service deployment.
18. They repeat the process to publish the web app (SmartHotel.Registration.Web.zip) file to the correct
web app.
19. After it's saved, the release pipeline will show as follows.
20. They move back to Build, and click Triggers > Enable continuous integration. This enables the
pipeline so that when changes are committed to the code, a full build and release occurs.
21. They click Save & Queue to run the full pipeline. A new build is triggered that in turn creates the first
release of the app to the Azure App Service.
22. Contoso admins can follow the build and release pipeline process from Azure DevOps. After the build
completes, the release will start.
23. After the pipeline finishes, both sites have been deployed and the app is up and running online.
At this point, the app is successfully migrated to Azure.
Conclusion
In this article, Contoso refactored the SmartHotel360 app in Azure by migrating the app frontend VM to two
Azure Web Apps. The app database was migrated to an Azure SQL database.
Contoso migration: Refactor a Contoso Linux
service desk app to multiple regions with Azure App
Service, Traffic Manager, and Azure MySQL
3/15/2019 • 14 minutes to read
This article shows how Contoso refactors their on-premises two-tier Linux service desk app (osTicket), by
migrating it to Azure App Service with GitHub integration, and Azure MySQL.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.
Article 3: Assess on-premises resources Contoso runs an assessment of its on- Available
for migration to Azure premises SmartHotel360 app running
on VMware. Contoso assesses app
VMs using the Azure Migrate service,
and the app SQL Server database
using Data Migration Assistant.
Article 5: Rehost an app on Azure VMs Contoso migrates its SmartHotel360 Available
app VMs to Azure VMs using the Site
Recovery service.
Article 6: Rehost an app on Azure VMs Contoso migrates the SmartHotel360 Available
and in a SQL Server AlwaysOn app. Contoso uses Site Recovery to
availability group migrate the app VMs. It uses the
Database Migration Service to migrate
the app database to a SQL Server
cluster that's protected by an
AlwaysOn availability group.
Article 8: Rehost a Linux app on Azure Contoso migrates the Linux osTicket Available
VMs and Azure MySQL app to Azure VMs using Azure Site
Recovery, and migrates the app
database to an Azure MySQL Server
instance using MySQL Workbench.
Article 10: Refactor a Linux app on Contoso migrates its Linux osTicket This article
Azure Web Apps and Azure MySQL app to an Azure web app on multiple
Azure regions using Azure Traffic
Manager, integrated with GitHub for
continuous delivery. Contoso migrates
the app database to an Azure
Database for MySQL instance.
Article 11: Refactor TFS on Azure Contoso migrates its on-premises Available
DevOps Services Team Foundation Server deployment
to Azure DevOps Services in Azure.
Article 12: Rearchitect an app on Azure Contoso migrates its SmartHotel app Available
containers and Azure SQL Database to Azure. Then, it rearchitects the app
web tier as a Windows container
running in Azure Service Fabric, and
the database with Azure SQL
Database.
Article 13: Rebuild an app in Azure Contoso rebuilds its SmartHotel360 Available
app by using a range of Azure
capabilities and services, including
Azure App Service, Azure Kubernetes
Service (AKS), Azure Functions, Azure
Cognitive Services, and Azure Cosmos
DB.
Article 14: Scale a migration to Azure After trying out migration Available
combinations, Contoso prepares to
scale to a full migration to Azure.
In this article, Contoso migrates a two-tier Linux Apache MySQL PHP (LAMP) service desk app (osTicket) to
Azure. If you'd like to use this open-source app, you can download it from GitHub.
Business drivers
The IT Leadership team has worked closely with business partners to understand what they want to achieve:
Address business growth: Contoso is growing and moving into new markets. It needs additional customer
service agents.
Scale: The solution should be built so that Contoso can add more customer service agents as the business
scales.
Increase resiliency: In the past, issues with the system affected internal users only. With the new business
model, external users will be affected, and Contoso needs the app up and running at all times.
Migration goals
The Contoso cloud team has pinned down goals for this migration, in order to determine the best migration
method:
The application should scale beyond current on-premises capacity and performance. Contoso is moving the
application to take advantage of Azure's on-demand scaling.
Contoso wants to move the app code base to a continuous delivery pipeline. As app changes are pushed to
GitHub, Contoso wants to deploy those changes without tasks for operations staff.
The application must be resilient, with capabilities for growth and failover. Contoso wants to deploy the app in
two different Azure regions, and set it up to scale automatically.
Contoso wants to minimize database admin tasks after the app is moved to the cloud.
Solution design
After pinning down their goals and requirements, Contoso designs and reviews a deployment solution, and
identifies the migration process, including the Azure services that will be used for the migration.
Current architecture
The app is tiered across two VMs (OSTICKETWEB and OSTICKETMYSQL).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
Proposed architecture
Here's the proposed architecture:
The web tier app on OSTICKETWEB will be migrated by building an Azure App Service in two Azure
regions. Azure App Service for Linux will be implemented using the PHP 7.0 Docker container.
The app code will be moved to GitHub, and Azure Web App will be configured for continuous delivery with
GitHub.
Azure App Service web apps will be deployed in both the primary (East US 2) and secondary (Central US) regions.
Traffic Manager will be set up in front of the two Azure Web Apps in both regions.
Traffic Manager will be configured in priority mode to force the traffic through East US 2.
If the web app in East US 2 goes offline, users can access the failed-over app in Central US.
The app database will be migrated to the Azure MySQL PaaS service using MySQL Workbench tools. The
on-premises database will be backed up locally, and restored directly to Azure MySQL.
The database will reside in the primary East US 2 region, in the database subnet (PROD-DB-EUS2) in the
production network (VNET-PROD-EUS2):
Since they're migrating a production workload, Azure resources for the app will reside in the production
resource group ContosoRG.
The Traffic Manager resource will be deployed in Contoso's infrastructure resource group ContosoInfraRG.
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
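The priority-mode routing described above (East US 2 serves all traffic unless it goes offline, then Central US takes over) can be sketched as picking the healthy endpoint with the lowest priority number. This is illustrative logic only, not the Traffic Manager implementation; the endpoint records are hypothetical.

```python
# Sketch of Traffic Manager priority routing: users are directed to the
# healthy endpoint with the best (lowest) priority. Illustrative only.

ENDPOINTS = [
    {"name": "osticket-eus2", "priority": 1, "healthy": True},  # East US 2
    {"name": "osticket-cus", "priority": 2, "healthy": True},   # Central US
]


def route(endpoints):
    """Return the name of the healthy endpoint with the best priority."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["priority"])["name"]
```

With both regions healthy, all traffic flows to osticket-eus2; marking it unhealthy routes users to osticket-cus.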
Migration process
Contoso will complete the migration process as follows:
1. As a first step, Contoso admins set up the Azure infrastructure, including provisioning Azure App Services,
setting up Traffic Manager, and provisioning an Azure MySQL instance.
2. After preparing the Azure infrastructure, they migrate the database using MySQL Workbench.
3. After the database is running in Azure, they set up a private GitHub repo for the Azure App Service with
continuous delivery, and load it with the osTicket app.
4. In the Azure portal, they load the app from GitHub to the Docker container running Azure App Service.
5. They tweak DNS settings, and configure autoscaling for the app.
Azure services
SERVICE DESCRIPTION COST
Azure App Service The service runs and scales Pricing is based on the size of the
applications using the Azure PaaS instances, and the features required.
service for websites. Learn more.
Traffic Manager A load balancer that uses DNS to Pricing is based on the number of DNS
direct users to Azure, or external queries received, and the number of
websites and services. monitored endpoints. Learn more.
Azure Database for MySQL The database is based on the open- Pricing based on compute, storage,
source MySQL Server engine. It and backup requirements. Learn more.
provides a fully managed, enterprise-
ready community MySQL database, as
a service for app development and
deployment.
Prerequisites
Here's what Contoso needs to run this scenario.
REQUIREMENTS DETAILS
Scenario steps
Here's how Contoso will complete the migration:
Step 1: Provision Azure App Services: Contoso admins will provision Web Apps in the primary and
secondary regions.
Step 2: Set up Traffic Manager: They set up Traffic Manager in front of the Web Apps, for routing and load
balancing traffic.
Step 3: Provision MySQL: In Azure, they provision an instance of Azure MySQL database.
Step 4: Migrate the database: They migrate the database using MySQL Workbench.
Step 5: Set up GitHub: They set up a local GitHub repository for the app web sites/code.
Step 6: Deploy the web apps: They deploy the web apps from GitHub.
3. They create a new App Service plan in the primary region (APP-SVP-EUS2), using the standard size.
4. They select a Linux OS with the PHP 7.0 runtime stack, which is a Docker container.
5. They create a second web app (osticket-cus), and an App Service plan for the Central US region.
Need more help?
Learn about Azure App Service Web apps.
Learn about Azure App Service on Linux.
2. They add the name contosoosticket for the Azure database. They add the database to the production
resource group ContosoRG, and specify credentials for it.
3. The on-premises MySQL database is version 5.7, so they select this version for compatibility. They use
the default sizes, which match their database requirements.
4. For Backup Redundancy Options, they select to use Geo-Redundant. This option allows them to
restore the database in their secondary Central US region if an outage occurs. They can only configure
this option when they provision the database.
5. They set up connection security. In the database > Connection Security, they set up firewall rules to
allow Azure services to access the database.
6. They add the local workstation client IP address to the start and end IP addresses. This allows the Web
apps to access the MySQL database, along with the database client that's performing the migration.
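The firewall rule described above is just a start/end IP pair that defines which clients may reach the Azure MySQL server. A minimal sketch of that range check, with hypothetical addresses (not Contoso's actual rule values):

```python
import ipaddress

# Sketch of an Azure MySQL firewall rule: a start/end IP pair defines which
# client addresses may connect. The addresses used here are hypothetical
# documentation-range examples.

def ip_allowed(client_ip: str, start_ip: str, end_ip: str) -> bool:
    """True if the client IP falls inside the rule's start-end range."""
    client = ipaddress.ip_address(client_ip)
    return (ipaddress.ip_address(start_ip) <= client
            <= ipaddress.ip_address(end_ip))
```

Adding the workstation's own IP as both start and end, as Contoso does, admits exactly that one client.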
Step 4: Migrate the database
Contoso admins migrate the database using backup and restore, with MySQL tools. They install MySQL
Workbench, back up the database from OSTICKETMYSQL, and then restore it to Azure Database for MySQL
Server.
Install MySQL Workbench
1. They check the prerequisites and download MySQL Workbench.
2. They install MySQL Workbench for Windows in accordance with the installation instructions. The
machine on which they install must be accessible to the OSTICKETMYSQL VM, and Azure via the
internet.
3. In MySQL Workbench, they create a MySQL connection to OSTICKETMYSQL.
6. Now, they can import (restore) the database in the Azure MySQL instance, from the self-contained file. A
new schema (osticket) is created for the instance.
7. After data is restored, it can be queried using Workbench, and appears in the Azure portal.
8. Finally, they need to update the database information on the web apps. On the MySQL instance, they
open Connection Strings.
9. In the strings list, they locate the Web App settings, and click to copy them.
10. They open a Notepad window and paste the string into a new file, and update it to match the osticket
database, MySQL instance, and credentials settings.
11. They can verify the server name and login from Overview in the MySQL instance in the Azure portal.
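Steps 8 through 10 above amount to filling in the Web App connection string template from the portal with the osticket database, server, and credential values. A sketch of that string assembly follows; the server and credential values are placeholders, not Contoso's, and the exact template text shown in the portal may differ.

```python
# Sketch of filling in the Azure MySQL connection string for the web apps.
# The string layout approximates the portal's Web App template; values below
# are placeholders. Note the user@server login format Azure MySQL uses.

def mysql_connection_string(server: str, database: str,
                            user: str, password: str) -> str:
    """Build an Azure Database for MySQL connection string for a web app."""
    return (f"Database={database}; "
            f"Data Source={server}.mysql.database.azure.com; "
            f"User Id={user}@{server}; Password={password}")
```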
2. After forking, they navigate to the include folder, and find the ost-config.php file.
3. The file opens in the browser and they edit it.
4. In the editor, they update the database details, specifically DBHOST and DBUSER.
6. For each web app (osticket-eus2 and osticket-cus), they modify the Application settings in the Azure
portal.
7. They enter the connection string with the name osticket, and copy the string from notepad into the
value area. They select MySQL in the dropdown list next to the string, and save the settings.
4. After the configuration is updated and the osTicket web app is loaded from GitHub to the Docker
container running the Azure App Service, the site shows as Active.
5. They repeat the above steps for the secondary web app ( osticket-cus).
6. After the site is configured, it's accessible via the Traffic Manager profile. The DNS name is the new
location of the osTicket app. Learn more.
7. Contoso wants a DNS name that's easy to remember. They create an alias record (CNAME),
osticket.contoso.com, that points to the Traffic Manager name, in the DNS on their domain
controllers.
8. They configure both the osticket-eus2 and osticket-cus web apps to allow the custom hostnames.
Set up autoscaling
Finally, they set up automatic scaling for the app. This ensures that as agents use the app, the app instances
increase and decrease according to business needs.
1. In App Service APP-SRV-EUS2, they open Scale Unit.
2. They configure a new autoscale setting with a single rule that increases the instance count by one when
the CPU percentage for the current instance is above 70% for 10 minutes.
3. They configure the same setting on APP-SRV-CUS to ensure that the same behavior applies if the app
fails over to the secondary region. The only difference is that they set the instance limit to 1, since this is
for failovers only.
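The autoscale rule above (add one instance when average CPU stays above 70% for 10 minutes, up to an instance cap) can be sketched as a simple decision function. This is illustrative logic only, not the Azure Monitor autoscale engine; the names are hypothetical.

```python
# Sketch of the scale-out rule: +1 instance when average CPU exceeds 70% for
# a sustained 10-minute window, capped at the configured instance limit.
# Illustrative only; not the Azure autoscale implementation.

CPU_THRESHOLD = 70   # percent
WINDOW_MINUTES = 10  # sustained duration before scaling out


def next_instance_count(cpu_samples, current_count, max_count):
    """cpu_samples holds one average-CPU reading per minute; decide the
    instance count after evaluating the last WINDOW_MINUTES of readings."""
    sustained = (len(cpu_samples) >= WINDOW_MINUTES and
                 all(s > CPU_THRESHOLD for s in cpu_samples[-WINDOW_MINUTES:]))
    if sustained and current_count < max_count:
        return current_count + 1
    return current_count
```

Setting max_count to 1, as Contoso does for the secondary region, keeps that deployment at a single instance until a failover makes it primary.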
This article shows how Contoso is refactoring its on-premises Team Foundation Server (TFS) deployment
by migrating it to Azure DevOps Services in Azure. Contoso's development team has used TFS for team
collaboration and source control for the past five years. Now, they want to move to a cloud-based solution for
dev and test work, and for source control. Azure DevOps Services will play a role as they move to an Azure
DevOps model, and develop new cloud-native apps.
This document is one in a series of articles that show how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate how to set up a migration infrastructure, and run different types of migrations. Scenarios grow in
complexity. We'll add additional articles over time.
Article 4: Rehost to Azure VMs and a Demonstrates how Contoso migrates Available
SQL Managed Instance the SmartHotel app to Azure. They
migrate the app web VM using Azure
Site Recovery, and the app database
using the Azure Database Migration
service, to migrate to a SQL Managed
Instance.
Article 6: Rehost to Azure VMs and Shows how Contoso migrates the Available
SQL Server Availability Groups SmartHotel app. They use Site
Recovery to migrate the app VMs, and
the Database Migration service to
migrate the app database to a SQL
Server Availability Group.
ARTICLE DETAILS STATUS
Article 7: Rehost a Linux app to Azure Shows how Contoso migrates their
VMs osTicket Linux app to Azure IaaS VMs
using Azure Site Recovery.
Article 8: Rehost a Linux app to Azure Demonstrates how Contoso migrates Available
VMs and Azure MySQL Server the osTicket Linux app. They use Site
Recovery for VM migration, and
MySQL Workbench to migrate to an
Azure MySQL Server instance.
Article 10: Refactor a Linux app to Shows how Contoso migrates the Available
Azure App Service and Azure MySQL osTicket Linux app to Azure App
Server Service using PHP 7.0 Docker
container. The code base for the
deployment is migrated to GitHub. The
app database is migrated to Azure
MySQL.
Article 11: Refactor a TFS deployment Migrate the dev app TFS to Azure This article
in Azure DevOps Services DevOps Services in Azure
Article 12: Rearchitect an app on Azure Shows how Contoso migrates and Available
containers and Azure SQL Database rearchitects their SmartHotel app to
Azure. They rearchitect the app web
tier as a Windows container, and the
app database in an Azure SQL
Database.
Article 13: Rebuild an app in Azure Shows how Contoso rebuilds its Available
SmartHotel app using a range of Azure
capabilities and services, including App
Services, Azure Kubernetes, Azure
Functions, Cognitive services, and
Cosmos DB.
Article 14: Scale a migration to Azure After trying out migration Available
combinations, Contoso prepares to
scale to a full migration to Azure.
Business drivers
The IT Leadership team has worked closely with business partners to identify future goals. Partners aren't overly
concerned with dev tools and technologies, but they have captured these points:
Software: Regardless of the core business, all companies are now software companies, including Contoso.
Business leadership is interested in how IT can help lead the company with new working practices for users,
and experiences for their customers.
Efficiency: Contoso needs to streamline processes and remove unnecessary procedures for developers and
users. This will allow the company to deliver on customer requirements more efficiently. The business needs
IT to be fast, without wasting time or money.
Agility: Contoso IT needs to respond to business needs, and react more quickly than the marketplace to
enable success in a global economy. IT mustn't be a blocker for the business.
Migration goals
The Contoso cloud team has pinned down goals for the migration to Azure DevOps Services:
The team needs a tool to migrate the data to the cloud. Few manual processes should be needed.
Work item data and history for the last year must be migrated.
They don't want to set up new user names and passwords. All current system assignments must be
maintained.
They want to move away from Team Foundation Version Control (TFVC ) to Git for source control.
The cutover to Git will be a "tip migration" that imports only the latest version of the source code. It will
happen during a downtime when all work will be halted as the codebase shifts. They understand that only
the current master branch history will be available after the move.
They're concerned about the change and want to test it before doing a full move. They want to retain access
to TFS even after the move to Azure DevOps Services.
They have multiple collections, and want to start with one that has only a few projects to better understand
the process.
They understand that TFS collections have a one-to-one relationship with Azure DevOps Services
organizations, so they'll have multiple URLs. However, this matches their current model of separation for
code bases and projects.
Proposed architecture
Contoso will move their TFS projects to the cloud, and no longer host their projects or source control on-
premises.
TFS will be migrated to Azure DevOps Services.
Currently Contoso has one TFS collection named ContosoDev, which will be migrated to an Azure DevOps
Services organization called contosodevmigration.visualstudio.com.
The projects, work items, bugs and iterations from the last year will be migrated to Azure DevOps Services.
Contoso will leverage their Azure Active Directory, which they set up when they deployed their Azure
infrastructure at the beginning of their migration planning.
Migration process
Contoso will complete the migration process as follows:
1. There's a lot of preparation involved. As a first step, Contoso needs to upgrade their TFS implementation to
a supported level. Contoso is currently running TFS 2017 Update 3, but to use database migration it needs
to run a supported 2018 version with the latest updates.
2. After upgrading, Contoso will run the TFS migration tool, and validate their collection.
3. Contoso will build a set of preparation files, and perform a migration dry run for testing.
4. Contoso will then run another migration, this time a full migration that includes work items, bugs, sprints,
and code.
5. After the migration, Contoso will move their code from TFVC to Git.
Scenario steps
Here's how Contoso will complete the migration:
Step 1: Create an Azure storage account: This storage account will be used during the migration process.
Step 2: Upgrade TFS: Contoso will upgrade their deployment to TFS 2018 Update 2.
Step 3: Validate collection: Contoso will validate the TFS collection in preparation for migration.
Step 4: Build preparation file: Contoso will create the migration files using the TFS Migration Tool.
5. They verify the TFS installation by reviewing projects, work items, and code.
NOTE
Some TFS upgrades need to run the Configure Features Wizard after the upgrade completes. Learn more.
2. They run the tool to perform the validation, by specifying the URL of the project collection:
TfsMigrator validate /collection:http://contosotfs:8080/tfs/ContosoDev
3. The tool shows an error.
4. They find the log files in the Logs folder, next to the tool location. A log file is
generated for each major validation. TfsMigration.log holds the main information.
6. They run TfsMigrator validate /help at the command line, and see that the
/tenantDomainName parameter is required to validate identities.
7. They run the validation command again, and include this value, along with their Azure AD name:
TfsMigrator validate /collection:http://contosotfs:8080/tfs/ContosoDev
/tenantDomainName:contosomigration.onmicrosoft.com.
8. An Azure AD Sign In screen appears, and they enter the credentials of a Global Admin user.
9. The validation passes, and is confirmed by the tool.
3. Prepare completes, and the tool reports that the import files have been generated successfully.
4. They can now see that both the IdentityMapLog.csv and the import.json file have been created in a new
folder.
5. The import.json file provides import settings. It includes information such as the desired organization
name, and storage account information. Most of the fields are populated automatically. Some fields
require user input. Contoso opens the file, and adds the Azure DevOps Services organization name to
be created: contosodevmigration. With this name, their Azure DevOps Services URL will be
contosodevmigration.visualstudio.com.
NOTE
The organization must be created before the migration. It can be changed after the migration is done.
6. They review the identity log map file that shows the accounts that will be brought into Azure DevOps
Services during the import.
Active identities refer to identities that will become users in Azure DevOps Services after the
import.
On Azure DevOps Services, these identities will be licensed, and show up as a user in the
organization after migration.
These identities are marked as Active in the Expected Import Status column in the file.
Step 5: Migrate to Azure DevOps Services
With preparation in place, Contoso admins can now focus on the migration. After running the migration, they'll
switch from using TFVC to Git for version control.
Before they start, the admins schedule downtime with the dev team, to take the collection offline for migration.
These are the steps for the migration process:
1. Detach the collection: Identity data for the collection resides in the TFS server configuration database
while the collection is attached and online. When a collection is detached from the TFS server, it takes a copy
of that identity data, and packages it with the collection for transport. Without this data, the identity portion
of the import cannot be executed. It's recommended that the collection stays detached until the import has
been completed, as there's no way to import changes that occur during the import.
2. Generate a backup: The next step of the migration process is to generate a backup that can be imported
into Azure DevOps Services. A data-tier application package (DACPAC) is a SQL Server feature
that allows database changes to be packaged into a single file and deployed to other instances of SQL Server.
The package can also be restored directly to Azure DevOps Services, and is therefore used as the packaging
method for getting collection data into the cloud. Contoso will use the SqlPackage.exe tool, included in
SQL Server Data Tools, to generate the DACPAC.
3. Upload to storage: After the DACPAC is created, they upload it to Azure Storage. After it's uploaded, they
get a shared access signature (SAS) to allow the TFS Migration Tool access to the storage.
4. Fill out the import: Contoso can then fill out missing fields in the import file, including the DACPAC setting.
To start with they'll specify that they want to do a DryRun import, to check that everything's working
properly before the full migration.
5. Do a dry run: Dry run imports help test collection migration. Dry runs have limited life, and are deleted
before a production migration runs. They're deleted automatically after a set period of time. A note about
when the dry run will be deleted is included in the success email received after the import finishes. Take note
and plan accordingly.
6. Complete the production migration: With the Dry Run migration completed, Contoso admins do the
final migration by updating the import.json, and running import again.
Detach the collection
Before detaching, Contoso admins take a local SQL Server backup, and a VMware snapshot of the TFS
server.
1. In the TFS Admin console, they select the collection they want to detach (ContosoDev).
4. In Detach Progress, they monitor progress and click Next when the process finishes.
2. They connect to their subscription and locate the storage account they created for the migration
(contosodevmigration). They create a new blob container, azuredevopsmigration.
5. They accept the defaults and click Create. This enables access for 24 hours.
6. They copy the Shared Access Signature URL, so that it can be used by the TFS Migration Tool.
NOTE
The migration must happen within the allowed time window, or permissions will expire. Don't generate an SAS key
from the Azure portal. Keys generated like this are account-scoped, and won't work with the import.
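The time window in the note is carried in the SAS token itself, in the `se` (signed expiry) query field of the SAS URL. A short sketch of inspecting that field (illustrative Python; the helper name and the example URL in the test are hypothetical, and the storage service, not this code, is what actually enforces expiry):

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs, urlparse

def sas_expired(sas_url, now):
    """Check the `se` (signed expiry) query field of a SAS URL.

    `se` is the standard signed-expiry parameter in a SAS token.
    Illustrative only -- the storage service enforces expiry itself.
    """
    query = parse_qs(urlparse(sas_url).query)
    expiry = datetime.fromisoformat(query["se"][0].replace("Z", "+00:00"))
    return now > expiry
```

Checking `se` before starting a long import is a cheap way to avoid the permissions expiring mid-migration.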
4. They use Azure Storage Explorer to create a new SAS key with expiry set to seven days.
5. They update the import.json file and run the validation again. This time it completes successfully.
TfsMigrator import /importFile:C:\TFSMigrator\import.json /validateonly
6. They start the dry run:
TfsMigrator import /importFile:C:\TFSMigrator\import.json
7. A message is issued to confirm the migration. Note the length of time for which the staged data will be
maintained after the dry run.
8. The Azure AD sign-in screen appears, and they complete it with Contoso admin credentials.
9. A message shows information about the import.
10. After 15 minutes or so, they browse to the URL, and see the following information:
11. After the migration finishes, a Contoso dev lead signs into Azure DevOps Services to check that the dry
run worked properly. After authentication, Azure DevOps Services needs a few details to confirm the
organization.
12. In Azure DevOps Services, the Dev Lead can see that the projects have been migrated to Azure DevOps
Services. There's a notice that the organization will be deleted in 15 days.
13. The Dev Lead opens one of the projects and opens Work Items > Assigned to me. This shows that
work item data has been migrated, along with identity.
14. The lead also checks other projects and code, to confirm that the source code and history has been
migrated.
7. After around 15 minutes, they browse to the URL, and see the following information:
8. After the migration finishes, a Contoso dev lead signs into Azure DevOps Services to check that the
migration worked properly. After signing in, the lead can see that projects have been migrated.
9. The Dev Lead opens one of the projects and opens Work Items > Assigned to me. This shows that
work item data has been migrated, along with identity.
10. The lead checks other work item data to confirm.
11. The lead also checks other projects and code, to confirm that the source code and history has been
migrated.
Move source control from TFVC to Git
With migration complete, Contoso wants to move from TFVC to Git for source code management. They need to
import the source code currently in their Azure DevOps Services organization as Git repos in the same
organization.
1. In the Azure DevOps Services portal, they open one of the TFVC repos ( $/PolicyConnect) and review it.
NOTE
Due to differences in how TFVC and Git store version control information, we recommend that Contoso not
migrate history. This is the approach that Microsoft took when it migrated Windows and other products from
centralized version control to Git.
6. After reviewing the source, the Dev Leads agree that the migration to Azure DevOps Services is done.
Azure DevOps Services now becomes the source for all development within teams involved in the
migration.
Need more help?
Learn more about importing from TFVC.
Next steps
Contoso will need to provide Azure DevOps Services and Git training for relevant team members.
Contoso migration: Rearchitect an on-premises app
to an Azure container and Azure SQL Database
3/15/2019 • 23 minutes to read
This article demonstrates how Contoso migrates and rearchitects its SmartHotel360 app in Azure. Contoso
migrates the app frontend VM to an Azure Windows container, and the app database to an Azure SQL
database.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-premises
resources to the Microsoft Azure cloud. The series includes background information, and scenarios that
illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. Additional articles will be added over time.
Article 5: Rehost an app on Azure VMs Contoso migrates its SmartHotel360 Available
app VMs to Azure VMs using the Site
Recovery service.
Article 6: Rehost an app on Azure VMs Contoso migrates the SmartHotel360 Available
and in a SQL Server AlwaysOn app. Contoso uses Site Recovery to
availability group migrate the app VMs. It uses the
Database Migration Service to migrate
the app database to a SQL Server
cluster that's protected by an
AlwaysOn availability group.
ARTICLE DETAILS STATUS
Article 8: Rehost a Linux app on Azure Contoso migrates the Linux osTicket Available
VMs and Azure MySQL app to Azure VMs using Azure Site
Recovery, and migrates the app
database to an Azure MySQL Server
instance using MySQL Workbench.
Article 10: Refactor a Linux app on Contoso migrates its Linux osTicket Available
Azure Web Apps and Azure MySQL app to an Azure web app on multiple
Azure regions using Azure Traffic
Manager, integrated with GitHub for
continuous delivery. Contoso migrates
the app database to an Azure
Database for MySQL instance.
Article 11: Refactor TFS on Azure Contoso migrates its on-premises Available
DevOps Services Team Foundation Server deployment
to Azure DevOps Services in Azure.
Article 12: Rearchitect an app on Azure Contoso migrates its SmartHotel app This article
Containers and Azure SQL Database to Azure. Then, it rearchitects the app
web tier as a Windows container
running in Azure Service Fabric, and
the database with Azure SQL
Database.
Article 13: Rebuild an app in Azure Contoso rebuilds its SmartHotel app Available
by using a range of Azure capabilities
and services, including Azure App
Service, Azure Kubernetes Service
(AKS), Azure Functions, Azure
Cognitive Services, and Azure Cosmos
DB.
Article 14: Scale a migration to Azure After trying out migration Available
combinations, Contoso prepares to
scale to a full migration to Azure.
In this article, Contoso migrates the two-tier Windows WPF, XAML forms SmartHotel360 app running on
VMware VMs to Azure. If you'd like to use this app, it's provided as open source and you can download it from
GitHub.
Business drivers
The Contoso IT leadership team has worked closely with business partners to understand what they want to
achieve with this migration:
Address business growth: Contoso is growing, and as a result there is pressure on its on-premises systems
and infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster
on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react faster than the changes in the marketplace, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, Contoso IT must provide systems that are able to grow at the
same pace.
Costs: Contoso wants to minimize licensing costs.
Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the best
migration method.
GOALS DETAILS
Azure reqs Contoso wants to move the app to Azure, and run it in a
container to extend app life. It doesn't want to start
completely from scratch to implement the app in Azure.
Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that Contoso will use for the migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM ).
The VMs are located on VMware ESXi host contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com ), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed architecture
For the database tier of the app, Contoso compared Azure SQL Database with SQL Server using this
article. It decided to go with Azure SQL Database for a few reasons:
Azure SQL Database is a relational-database managed service. It delivers predictable performance at
multiple service levels, with near-zero administration. Advantages include dynamic scalability with no
downtime, built-in intelligent optimization, and global scalability and availability.
Contoso leverages the lightweight Data Migration Assistant (DMA) to assess and migrate the on-
premises database to Azure SQL.
With Software Assurance, Contoso can exchange its existing licenses for discounted rates on a SQL
Database, using the Azure Hybrid Benefit for SQL Server. This could provide savings of up to 30%.
SQL Database provides a number of security features, including Always Encrypted, dynamic data
masking, and row-level security/threat detection.
For the app web tier, Contoso has decided to convert it to a Windows container using Azure DevOps
Services.
Contoso will deploy the app using Azure Service Fabric, and pull the Windows container image from
the Azure Container Registry (ACR).
A prototype for extending the app to include sentiment analysis will be implemented as another
service in Service Fabric, connected to Cosmos DB. This service will read information from tweets, and
display it in the app.
To implement a DevOps pipeline, Contoso will use Azure DevOps for source code management (SCM),
with Git repos. Automated builds and releases will be used to build code, and deploy it to the Azure
Container Registry and Azure Service Fabric.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.
CONSIDERATION DETAILS
CONSIDERATION DETAILS
Contoso can configure the web tier of the app with multiple
instances, so that it's no longer a single point of failure.
Migration process
1. Contoso provisions the Azure Service Fabric cluster for Windows.
2. It provisions an Azure SQL instance, and migrates the SmartHotel360 database to it.
3. Contoso converts the web tier VM to a Docker container using the Service Fabric SDK tools.
4. It connects the Service Fabric cluster and the ACR, and deploys the app using Azure Service Fabric.
Azure services
SERVICE DESCRIPTION COST
Database Migration Assistant (DMA) Assesses and detects compatibility It's a downloadable tool free of charge.
issues that might impact database
functionality in Azure. DMA assesses
feature parity between SQL sources
and targets, and recommends
performance and reliability
improvements.
Azure SQL Database Provides an intelligent, fully managed Cost based on features, throughput
relational cloud database service. and size. Learn more.
Azure Container Registry Stores images for all types of container Cost based on features, storage, and
deployments. usage duration. Learn more.
Azure Service Fabric Builds and operates always-on, scalable, Cost based on size, location, and
and distributed apps duration of the compute nodes. Learn
more.
Prerequisites
Here's what Contoso needs to run this scenario:
REQUIREMENTS DETAILS
REQUIREMENTS DETAILS
- Git
Scenario steps
Here's how Contoso runs the migration:
Step 1: Provision a SQL Database instance in Azure: Contoso provisions a SQL instance in Azure. After
the frontend web VM is migrated to an Azure container, the container instance with the app web frontend
will point to this database.
Step 2: Create an Azure Container Registry (ACR): Contoso provisions an enterprise container registry
for the Docker container images.
Step 3: Provision Azure Service Fabric: It provisions a Service Fabric Cluster.
Step 4: Manage service fabric certificates: Contoso sets up certificates for Azure DevOps Services access
to the cluster.
Step 5: Migrate the database with DMA: It migrates the app database with the Database Migration
Assistant.
Step 6: Set up Azure DevOps Services: Contoso sets up a new project in Azure DevOps Services, and
imports the code into the Git Repo.
Step 7: Convert the app: Contoso converts the app to a container using Azure DevOps and SDK tools.
Step 8: Set up build and release: Contoso sets up the build and release pipelines to create and publish the
app to the ACR and Service Fabric Cluster.
Step 9: Extend the app: After the app is public, Contoso extends it to take advantage of Azure capabilities,
and republishes it to Azure using the pipeline.
3. They set up a new SQL Server instance (sql-smarthotel-eus2) in the primary region.
4. They set the pricing tier to match server and database needs, and they select the option to save money
with Azure Hybrid Benefit, because they already have a SQL Server license.
5. For sizing, they use vCore-based purchasing, and set the limits for the expected requirements.
2. They provide a name for the registry ( contosoacreus2), and place it in the primary region, in the
resource group they use for their infrastructure resources. They enable access for admin users, and set it
as a premium SKU so that they can leverage geo-replication.
Step 3: Provision Azure Service Fabric
The SmartHotel360 container will run in the Azure Service Fabric Cluster. Contoso admins create the Service
Fabric Cluster as follows:
1. Create a Service Fabric resource from the Azure Marketplace
2. In Basics, they provide a unique DNS name for the cluster, and credentials for accessing the on-premises
VM. They place the resource in the production resource group (ContosoRG) in the primary East US 2
region.
3. In Node type configuration, they input a node type name, durability settings, VM size, and app
endpoints.
4. In Create key vault, they create a new key vault in their infrastructure resource group, to house the
certificate.
5. In Access Policies they enable access to virtual machines to deploy the key vault.
10. After the cluster is provisioned, they connect to the Service Fabric Cluster Explorer.
11. They need to select the correct certificate.
12. The Service Fabric Explorer loads, and the Contoso Admin can manage the cluster.
5. They enter the name of the certificate, and provide an X.509 distinguished name in Subject.
7. Now, they go back to the certificates list in the KeyVault, and copy the thumbprint of the client certificate
that's just been created. They save it in the text file.
8. For Azure DevOps Services deployment, they need to determine the Base64 value of the certificate. They
do this on the local developer workstation using PowerShell. They paste the output into a text file for
later use.
[System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("C:\path\to\certificate.pfx"))
9. Finally, they add the new certificate to the Service Fabric cluster. To do this, in the portal they open the
cluster, and click Security.
10. They click Add > Admin Client, and paste in the thumbprint of the new client certificate. Then they click
Add. This can take up to 15 minutes.
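The Base64 conversion in step 8 can also be done without PowerShell. A minimal equivalent sketch in Python (the function name is hypothetical, and the certificate path is a placeholder just as in the PowerShell one-liner):

```python
import base64

def cert_to_base64(pfx_path):
    """Return the Base64 string for a certificate file, equivalent to
    [System.Convert]::ToBase64String + ReadAllBytes in PowerShell.
    The path argument is a placeholder."""
    with open(pfx_path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```

The output string is what gets pasted into the Azure DevOps Services deployment configuration.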
3. In the migration details, they add SQLVM as the source server, and the SmartHotel.Registration
database.
4. They receive an error, which seems to be associated with authentication. However, after investigating,
they find the issue is the period (.) in the database name. As a workaround, they decide to provision a new
SQL database using the name SmartHotel-Registration, to resolve the issue. When they run DMA again,
they're able to select SmartHotel-Registration, and continue with the wizard.
5. In Select Objects, they select the database tables, and generate a SQL script.
10. They delete the extra SQL database SmartHotel.Registration in the Azure portal.
2. They import the Git Repo that currently holds their app code. It's in a public repo and you can download
it.
3. After the code is imported, they connect Visual Studio to the repo, and clone the code using Team
Explorer.
4. After the repo is cloned to the developer machine, they open the solution file for the app. The web app
and WCF service each have a separate project within the file.
Step 7: Convert the app to a container
The on-premises app is a traditional three tier app:
It contains WebForms and a WCF Service connecting to SQL Server.
It uses Entity Framework to integrate with the data in the SQL database, exposing it through a WCF service.
The WebForms application interacts with the WCF service.
Contoso admins will convert the app to a container using Visual Studio and the SDK Tools, as follows:
1. Using Visual Studio, they review the open solution file (SmartHotel.Registration.sln) in the
SmartHotel360-internal-booking-apps\src\Registration directory of the local repo. Two apps are
shown. The web frontend SmartHotel.Registration.Web and the WCF service app
SmartHotel.Registration.WCF.
2. They right-click the web app > Add > Container Orchestrator Support.
3. In Add Container Orchestrator Support, they select Service Fabric.
7. A manifest file (ServiceManifest.xml) is created and opened by Visual Studio. This file tells Service
Fabric how to configure the container when it's deployed to Azure.
8. Another manifest file (ApplicationManifest.xml) contains the application configuration for the
containers.
9. They open the ApplicationParameters/Cloud.xml file, and update the connection string to connect the
app to the Azure SQL database. The connection string can be located in the database in the Azure portal.
10. They commit the updated code and push to Azure DevOps Services.
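The connection string copied from the portal in step 9 follows the standard ADO.NET format for Azure SQL Database. A minimal sketch of that format (the helper name and the server/user/password values in the test are illustrative placeholders, not Contoso's real credentials):

```python
def azure_sql_connection_string(server, database, user, password):
    """Build a connection string in the standard ADO.NET format that
    the Azure portal shows for a SQL database. All arguments are
    placeholders supplied by the caller."""
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Initial Catalog={database};"
        f"User ID={user};Password={password};"
        "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
    )
```

Keeping Encrypt=True is what ensures the container's traffic to the Azure SQL database stays encrypted in transit.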
2. They select Azure DevOps Services Git and the relevant repo.
3. In Select a template, they select the Service Fabric with Docker support template.
4. They change the Action Tag images to Build an image, and configure the task to use the provisioned
ACR.
5. In the Push images task, they configure the image to be pushed to the ACR, and select to include the
latest tag.
6. In Triggers, they enable continuous integration, and add the master branch.
9. They select the Azure Service Fabric deployment template, and name the Stage (SmartHotelSF).
10. They provide a pipeline name (ContosoSmartHotel360Rearchitect). For the stage, they click 1 job, 1
task to configure the Service Fabric deployment.
15. They select the project and build pipeline, using the latest version.
16. Note that the lightning bolt on the artifact is checked.
20. To connect to the app, they direct traffic to the public IP address of the Azure load balancer in front of the
Service Fabric nodes.
3. In Getting Started, they select Data Explorer, and add a new collection.
4. In Add Collection they provide IDs and set storage capacity and throughput.
5. In the portal, they open the new database > Collection > Documents and click New Document.
6. They paste the following JSON code into the document window. This is sample data in the form of a
single tweet.
{
"id": "2ed5e734-8034-bf3a-ac85-705b7713d911",
"tweetId": 927750234331580911,
"tweetUrl": "https://twitter.com/status/927750237331580911",
"userName": "CoreySandersWA",
"userAlias": "@CoreySandersWA",
"userPictureUrl": "",
"text": "This is a tweet about #SmartHotel360",
"language": "en",
"sentiment": 0.5,
"retweet_count": 1,
"followers": 500,
"hashtags": [
""
]
}
7. They locate the Cosmos DB endpoint, and the authentication key. These are used in the app to connect to
the collection. In the database, they click Keys, and copy the URI and primary key to Notepad.
3. They can now click through the services to see that the SentimentIntegration app is up and running.
Clean up after migration
After migration, Contoso needs to complete these cleanup steps:
Remove the on-premises VMs from the vCenter inventory.
Remove the VMs from local backup jobs.
Update internal documentation to show the new locations for the SmartHotel360 app. Show the database as
running in Azure SQL database, and the front end as running in Service Fabric.
Review any resources that interact with the decommissioned VMs, and update any relevant settings or
documentation to reflect the new configuration.
Conclusion
In this article, Contoso rearchitected the SmartHotel360 app in Azure by migrating the app frontend VM to a
Windows container running in Azure Service Fabric. The app database was migrated to an Azure SQL database.
Contoso migration: Rebuild an on-premises app to
Azure
4/4/2019 • 22 minutes to read • Edit Online
This article demonstrates how Contoso migrates and rebuilds the SmartHotel360 app in Azure. Contoso
migrates the app's front end VM to Azure App Service web apps. The app back end is built using
microservices deployed to containers managed by Azure Kubernetes Service (AKS). The site interacts with
Azure Functions to provide pet photo functionality.
This document is one in a series of articles that show how the fictitious company Contoso migrates on-
premises resources to the Microsoft Azure cloud. The series includes background information, and scenarios
that illustrate setting up a migration infrastructure, assessing on-premises resources for migration, and running
different types of migrations. Scenarios grow in complexity. We'll add additional articles over time.
Article 4: Rehost an app on Azure VMs and a SQL Managed Instance (Available). Demonstrates how Contoso runs a lift-and-shift migration to Azure for the SmartHotel360 app. Contoso migrates the app frontend VM using Azure Site Recovery, and the app database to a SQL Managed Instance, using the Azure Database Migration Service.
Article 5: Rehost an app on Azure VMs (Available). Shows how Contoso migrates the SmartHotel360 app VMs using Site Recovery only.
Article 6: Rehost an app to Azure VMs and SQL Server Always On Availability Group (Available). Shows how Contoso migrates the SmartHotel360 app. Contoso uses Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster protected by an AlwaysOn availability group.
Article 7: Rehost a Linux app on Azure VMs (Available). Shows how Contoso does a lift-and-shift migration of the Linux osTicket app to Azure VMs, using Site Recovery.
Article 8: Rehost a Linux app on Azure VMs and Azure MySQL Server (Available). Demonstrates how Contoso migrates the Linux osTicket app to Azure VMs using Site Recovery, and migrates the app database to an Azure MySQL Server instance using MySQL Workbench.
Article 10: Refactor a Linux app to Azure Web Apps and Azure MySQL (Available). Shows how Contoso migrates the Linux osTicket app to Azure Web Apps in multiple sites, integrated with GitHub for continuous delivery. They migrate the app database to an Azure MySQL instance.
Article 11: Refactor TFS on Azure DevOps Services (Available). Shows how Contoso migrates the on-premises Team Foundation Server (TFS) deployment by migrating it to Azure DevOps Services in Azure.
Article 12: Rearchitect an app to Azure containers and SQL Database (Available). Shows how Contoso migrates and rearchitects their SmartHotel app to Azure. They rearchitect the app web tier as a Windows container, and the app database in an Azure SQL Database.
Article 13: Rebuild an app to Azure (This article). Shows how Contoso rebuilds their SmartHotel app using a range of Azure capabilities and services, including App Services, Azure Kubernetes, Azure Functions, Cognitive Services, and Cosmos DB.
Article 14: Scale a migration to Azure (Available). After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
In this article, Contoso migrates the two-tier Windows .NET SmartHotel360 app running on VMware VMs to
Azure. If you'd like to use this app, it's provided as open source and you can download it from GitHub.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which
will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install
Azure PowerShell.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve with
this migration:
Address business growth: Contoso is growing, and wants to provide differentiated experiences for
customers on Contoso websites.
Agility: Contoso must be able to react to changes in the marketplace faster, to enable success in a
global economy.
Scale: As the business grows successfully, the Contoso IT team must provide systems that are able to grow
at the same pace.
Costs: Contoso wants to minimize licensing costs.
Migration goals
The Contoso cloud team has pinned down app requirements for this migration. These requirements were used
to determine the best migration method:
The app in Azure will remain as critical as it is today. It should perform well and scale easily.
The app shouldn't use IaaS components. Everything should be built to use PaaS or serverless services.
The app builds should run in cloud services, and containers should reside in a private, enterprise-wide
container registry in the cloud.
The API service used for pet photos should be accurate and reliable in the real world, since decisions made
by the app must be honored in their hotels. Any pet granted access is allowed to stay at the hotels.
To meet requirements for a DevOps pipeline, Contoso will use Azure DevOps for Source Code Management
(SCM), with Git repos. Automated builds and releases will be used to build code, and deploy it to the Azure
Web Apps, Azure Functions and AKS.
Different CI/CD pipelines are needed for microservices on the backend, and for the web site on the frontend.
The backend services have a different release cycle from the frontend web app. To meet this requirement,
they will deploy two different DevOps pipelines.
Contoso needs management approval for all front end website deployment, and the CI/CD pipeline must
provide this.
Solution design
After pinning down goals and requirements, Contoso designs and reviews a deployment solution, and identifies
the migration process, including the Azure services that will be used for the migration.
Current app
The SmartHotel360 on-premises app is tiered across two VMs (WEBVM and SQLVM).
The VMs are located on a VMware ESXi host, contosohost1.contoso.com (version 6.5).
The VMware environment is managed by vCenter Server 6.5 (vcenter.contoso.com), running on a VM.
Contoso has an on-premises datacenter (contoso-datacenter), with an on-premises domain controller
(contosodc1).
The on-premises VMs in the Contoso datacenter will be decommissioned after the migration is done.
Proposed architecture
The frontend of the app is deployed as an Azure App Services Web app, in the primary Azure region.
An Azure function provides uploads of pet photos, and the site interacts with this functionality.
The pet photo function leverages the Cognitive Services Vision API, and Cosmos DB.
The back end of the site is built using microservices. These will be deployed to containers managed on
the Azure Kubernetes Service (AKS).
Containers will be built using Azure DevOps, and pushed to the Azure Container Registry (ACR).
For now, Contoso will manually deploy the web app and function code using Visual Studio.
Microservices will be deployed using a PowerShell script that calls Kubernetes command-line tools.
Solution review
Contoso evaluates the proposed design by putting together a pros and cons list.
Migration process
1. Contoso provisions the ACR, AKS, and Cosmos DB.
2. They provision the infrastructure for the deployment, including the Azure Web App, storage account,
function, and API.
3. After the infrastructure is in place, they'll build their microservices container images using Azure
DevOps, which pushes them to the ACR.
4. Contoso will deploy these microservices to AKS using a PowerShell script.
5. Finally, they'll deploy the Azure function and Web App.
Azure services
AKS: Simplifies Kubernetes management, deployment, and operations. Provides a fully managed Kubernetes container orchestration service. Cost: AKS is a free service. Pay only for the virtual machines, and associated storage and networking resources consumed. Learn more.
Azure Functions: Accelerates development with an event-driven, serverless compute experience. Scale on demand. Cost: Pay only for consumed resources. The plan is billed based on per-second resource consumption and executions. Learn more.
Azure Container Registry: Stores images for all types of container deployments. Cost: Based on features, storage, and usage duration. Learn more.
Azure App Service: Quickly build, deploy, and scale enterprise-grade web, mobile, and API apps running on any platform. Cost: App Service plans are billed on a per-second basis. Learn more.
Prerequisites
Here's what Contoso needs for this scenario:
REQUIREMENTS DETAILS
Git
Azure PowerShell
Azure CLI
Scenario steps
Here's how Contoso will run the migration:
Step 1: Provision AKS and ACR: Contoso provisions the managed AKS cluster and Azure container
registry using PowerShell.
Step 2: Build Docker containers: They set up CI for Docker containers using Azure DevOps, and push
them to the ACR.
Step 3: Deploy back-end microservices: They deploy the rest of the infrastructure that will be leveraged
by back-end microservices.
Step 4: Deploy front-end infrastructure: They deploy the front-end infrastructure, including blob storage
for the pet photos, the Cosmos DB, and the Vision API.
Step 5: Migrate the back end: They deploy microservices and run on AKS, to migrate the back end.
Step 6: Publish the front end: They publish the SmartHotel360 app to the Azure App service, and the
Function App that will be called by the pet service.
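The walkthrough provisions AKS and ACR with a PowerShell script; an equivalent Azure CLI sketch is shown below. Resource names come from the article, but the ACR SKU and node count are illustrative assumptions, not values from the source:

```shell
# Names from the article; Basic SKU and 2 nodes are assumptions for this sketch.
RESOURCE_GROUP=ContosoRG
LOCATION=eastus2
AKS_CLUSTER=smarthotel-aks-eus2
ACR_NAME=smarthotelacreus2

az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

# Private container registry for the microservice images
az acr create --resource-group "$RESOURCE_GROUP" --name "$ACR_NAME" --sku Basic

# Managed AKS cluster; Azure also creates a second resource group for the node VMs
az aks create --resource-group "$RESOURCE_GROUP" --name "$AKS_CLUSTER" \
    --node-count 2 --generate-ssh-keys
```

Either route ends with the same resources in place for the container build pipeline.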
3. With the file open, they update the $location parameter to eastus2, and save the file.
4. They click View > Integrated Terminal to open the integrated terminal in Code.
5. In the PowerShell Integrated terminal, they sign into Azure using the Connect-AzAccount command.
Learn more about getting started with PowerShell.
6. They authenticate Azure CLI by running the az login command, and following the instructions to
authenticate using their web browser. Learn more about logging in with Azure CLI.
7. They run the following command, passing the resource group name of ContosoRG, the name of the AKS
cluster smarthotel-aks-eus2, and the new registry name.
8. Azure creates another resource group, containing the resources for the AKS cluster.
9. After the deployment is finished, they install the kubectl command-line tool. The tool is already installed
on the Azure CloudShell.
az aks install-cli
10. They verify the connection to the cluster by running the kubectl get nodes command. The node is the
same name as the VM in the automatically created resource group.
11. They run the following command to start the Kubernetes Dashboard:
az aks browse --resource-group ContosoRG --name smarthotelakseus2
12. A browser tab opens to the Dashboard. This is a tunneled connection using the Azure CLI.
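Before kubectl get nodes can reach the cluster, the cluster credentials have to be merged into the local kubeconfig; a sketch of steps 9 and 10 from the command line, using the cluster name given in step 7:

```shell
# Install kubectl (already installed in Azure Cloud Shell)
az aks install-cli

# Merge the cluster's credentials into ~/.kube/config
az aks get-credentials --resource-group ContosoRG --name smarthotel-aks-eus2

# The node names match the VMs in the automatically created resource group
kubectl get nodes
```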
Step 2: Configure the back-end pipeline
Create an Azure DevOps project and build
Contoso creates an Azure DevOps project, and configures a CI Build to create the container and then pushes it
to the ACR. The instructions in this section use the SmartHotel360-Azure-Backend repository.
1. From visualstudio.com, they create a new organization (contosodevops360.visualstudio.com ), and
configure it to use Git.
2. They create a new project (SmartHotelBackend) using Git for version control, and Agile for the
workflow.
7. In Phase 1, they add a Docker Compose task. This task builds the Docker compose.
8. They repeat and add another Docker Compose task. This one pushes the containers to ACR.
9. They select the first task (to build), and configure the build with the Azure subscription, authorization, and
the ACR.
10. They specify the path of the docker-compose.yaml file, in the src folder of the repo. They select to build
service images and include the latest tag. When the action changes to Build service images, the name
of the Azure DevOps task changes to Build services automatically.
11. Now, they configure the second Docker task (to push). They select the subscription and the
smarthotelacreus2 ACR.
12. Again, they enter the file to the docker-compose.yaml file, and select Push service images and include
the latest tag. When the action changes to Push service images, the name of the Azure DevOps task
changes to Push services automatically.
13. With the Azure DevOps tasks configured, Contoso saves the build pipeline, and starts the build process.
2. They open Azure DevOps, and in the SmartHotel360 project, in Releases, they click +New Pipeline.
3. They click Empty Job to start the pipeline without a template.
4. They provide the stage and pipeline names.
6. They select Git as the source type, and specify the project, source, and master branch for the
SmartHotel360 app.
7. They click the task link.
8. They add a new Azure PowerShell task so that they can run a PowerShell script in an Azure environment.
9. They select the Azure subscription for the task, and select the deploy.ps1 script from the Git repo.
10. They add arguments to the script. The script will delete all cluster content (except the ingress and ingress
controller), and deploy the microservices.
11. They set the preferred Azure PowerShell version to the latest, and save the pipeline.
12. They move back to the Release page, and manually create a new release.
13. They click the release after creating it, and in Actions, they click Deploy.
14. When the deployment is complete, they run the following command to check the status of services, using
the Azure Cloud Shell: kubectl get services.
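The deploy.ps1 script itself isn't shown in the article; a bash sketch of what step 10 describes (clear existing cluster content, then redeploy the microservices) could look like the following. The manifest folder name is an assumption, and the sketch assumes the ingress controller runs in its own namespace so it survives the cleanup:

```shell
# Sketch only, not the actual deploy.ps1 from the repo.
# Remove existing workloads from the default namespace; an ingress controller
# deployed to its own namespace is left untouched.
kubectl delete deployments,services --all --namespace default

# Redeploy the microservices from their Kubernetes manifests (folder name assumed)
kubectl apply -f ./k8s-manifests/

# Same check the admins run from Azure Cloud Shell after the release
kubectl get services
```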
4. They capture the access details for the storage account in a text file, for future reference.
3. They add a new collection to the database, with default capacity and throughput.
4. They note the connection information for the database, for future reference.
3. They save the connection settings for the API to a text file for later reference.
Provision the Azure Web App
Contoso admins provision the web app using the Azure portal.
1. They select Web App in the portal.
2. They provide an app name (smarthotelcontoso), run it on Windows, and place it in the production
resource group ContosoRG. They create a new Application Insights instance for app monitoring.
3. After they're done, they browse to the address of the app to check it's been created successfully.
4. Now, in the Azure portal they create a staging slot for the code. The pipeline will deploy to this slot. This
ensures that code isn't put into production until admins perform a release.
Provision the Azure function app
In the Azure portal, Contoso admins provision the Function App.
1. They select Function App.
2. They provide an app name (smarthotelpetchecker). They place the app in the production resource
group ContosoRG. They set the hosting plan to Consumption Plan, and place the app in the East US 2
region. A new storage account is created, along with an Application Insights instance for monitoring.
3. After the app is deployed, they browse to the app address to check it's been created successfully.
5. After the file is updated, they rename it smarthotelsettingsurl, and upload it to the blob storage they
created earlier.
6. They click the file to get the URL. The URL is used by the app when it pulls down the configuration files.
7. In the appsettings.Production.json file, they update the SettingsURL to the URL of the new file.
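Step 7 can be scripted. A minimal sketch follows, assuming the key is named SettingsUrl (the exact key name and casing in appsettings.Production.json may differ) and using sed for brevity; jq would be more robust for real JSON editing. The storage account and container in the URL are placeholders:

```shell
# Stand-in appsettings.Production.json for illustration; key name is assumed.
cat > appsettings.Production.json <<'EOF'
{
  "SettingsUrl": "https://old.example.com/settings.json"
}
EOF

# URL of the renamed smarthotelsettingsurl blob (placeholder account/container)
NEW_URL="https://smarthotelstorage.blob.core.windows.net/config/smarthotelsettingsurl"

# Point SettingsUrl at the new configuration file
sed -i.bak "s|\"SettingsUrl\": \"[^\"]*\"|\"SettingsUrl\": \"${NEW_URL}\"|" appsettings.Production.json

grep SettingsUrl appsettings.Production.json
```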
Deploy the website to the Azure App Service
Contoso admins can now publish the website.
1. They open Azure DevOps, and in the SmartHotelFrontend project, in Builds and Releases, they click
+New Pipeline.
2. They select Azure DevOps Git as a source.
3. They select the ASP.NET Core template.
4. They review the pipeline, and check that Publish Web Projects and Zip Published Projects are
selected.
5. In Triggers, they enable continuous integration, and add the master branch. This ensures that each time
the solution has new code committed to the master branch, the build pipeline starts.
6. They click Save & queue to start a build.
7. After the build completes, they configure a release pipeline using the Azure App Service Deployment.
8. They provide a Stage name Staging.
9. They add an artifact and select the build they just configured.
10. They click the lightning bolt icon on the artifact, and enable continuous deployment.
11. In Environment, they click 1 job, 1 task under Staging.
12. After selecting the subscription, and app name, they open the Deploy Azure App Service task. The
deployment is configured to use the staging deployment slot. This automatically builds code for review
and approval in this slot.
13. In the Pipeline, they add a new stage.
14. They select Azure App Service deployment with slot, and name the environment Prod.
15. They click on 1 job, 2 tasks, and select the subscription, app service name, and the staging slot.
16. They remove the Deploy Azure App Service to Slot from the pipeline. It was placed there by the
previous steps.
17. They save the pipeline. On the pipeline, they click on Post-deployment conditions.
18. They enable Post-deployment approvals, and add a dev lead as the approver.
19. In the Build pipeline, they manually kick off a build. This triggers the new release pipeline, which deploys
the site to the staging slot. For Contoso, the URL for the slot is https://smarthotelcontoso-
staging.azurewebsites.net/.
20. After the build finishes, and the release deploys to the slot, Azure DevOps emails the dev lead for
approval.
21. The dev lead clicks View approval, and can approve or reject the request in the Azure DevOps portal.
22. The lead makes a comment and approves. This starts the swap of the staging and prod slots, and moves
the build into production.
23. The pipeline completes the swap.
24. The team checks the prod slot to verify that the web app is in production at
https://smarthotelcontoso.azurewebsites.net/.
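The swap that approval triggers in steps 22 and 23 is a standard App Service slot swap. Performed by hand instead of by the release pipeline, it would look something like this sketch:

```shell
# Swap the staging slot into production for the smarthotelcontoso web app.
# The release pipeline performs this automatically after the dev lead approves.
az webapp deployment slot swap \
    --resource-group ContosoRG \
    --name smarthotelcontoso \
    --slot staging \
    --target-slot production
```

Because a swap exchanges the slots rather than redeploying, it can be reversed the same way if a problem surfaces in production.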
Deploy the PetChecker Function App
Contoso admins deploy the app as follows.
1. They clone the repo locally to the dev machine by connecting to the Azure DevOps project.
2. In Visual Studio, they open the folder to show all the files in the repo.
3. They open the src/PetCheckerFunction/local.settings.json file, and add the app settings for storage,
the Cosmos database, and the Computer Vision API.
4. They commit the code, and sync it back to Azure DevOps, pushing their changes.
5. They add a new Build pipeline, and select Azure DevOps Git for the source.
6. They select the ASP.NET Core (.NET Framework) template.
7. They accept the defaults for the template.
8. In Triggers, they select Enable continuous integration, and click Save & Queue to start a build.
9. After the build succeeds, they build a Release pipeline, adding the Azure App Service deployment
with slot.
10. They name the environment Prod, and select the subscription. They set the App type to Function App,
and the app service name as smarthotelpetchecker.
11. They add an artifact Build.
12. They enable Continuous deployment trigger, and click Save.
13. They click Queue new build to run the full CI/CD pipeline.
14. After the function is deployed, it appears in the Azure portal, with the Running status.
15. They browse to the app to test that the Pet Checker app is working as expected, at
http://smarthotel360public.azurewebsites.net/Pets.
16. They click on the avatar to upload a picture.
Conclusion
In this article, Contoso rebuilds the SmartHotel360 app in Azure. The on-premises app front-end VM is rebuilt
to Azure App Services Web Apps. The app back end is built using microservices deployed to containers
managed by Azure Kubernetes Service (AKS). Contoso enhanced app functionality with a pet photo app.
Contoso - Scale a migration to Azure
3/14/2019 • 23 minutes to read • Edit Online
In this article, Contoso performs a migration at scale to Azure. They consider how to plan and perform a
migration of more than 3000 workloads, 8000 databases, and over 10,000 VMs.
This article is one in a series of articles that document how the fictitious company Contoso migrates its on-
premises resources to the Microsoft Azure cloud. The series includes background and planning information,
and deployment scenarios that illustrate how to set up a migration infrastructure, assess the suitability of on-
premises resources for migration, and run different types of migrations. Scenarios grow in complexity. We'll
add articles to the series over time.
Article 5: Rehost an app on Azure VMs (Available). Contoso migrates its SmartHotel360 app VMs to Azure VMs using the Site Recovery service.
Article 6: Rehost an app on Azure VMs and in a SQL Server AlwaysOn availability group (Available). Contoso migrates the app, using Site Recovery to migrate the app VMs, and the Database Migration Service to migrate the app database to a SQL Server cluster that's protected by an AlwaysOn availability group.
Article 8: Rehost a Linux app on Azure VMs and Azure Database for MySQL (Available). Contoso migrates its Linux osTicket app to Azure VMs by using Site Recovery. It migrates the app database to Azure Database for MySQL by using MySQL Workbench.
Article 10: Refactor a Linux app in an Azure web app and Azure Database for MySQL (Available). Contoso migrates its Linux osTicket app to an Azure web app on multiple sites. The web app is integrated with GitHub for continuous delivery. It migrates the app database to an Azure Database for MySQL instance.
Article 11: Refactor Team Foundation Server on Azure DevOps Services (Available). Contoso migrates its on-premises Team Foundation Server deployment to Azure DevOps Services in Azure.
Article 12: Rearchitect an app in Azure containers and Azure SQL Database (Available). Contoso migrates its SmartHotel app to Azure. Then, it rearchitects the app web tier as a Windows container running in Azure Service Fabric, and the app database with Azure SQL Database.
Article 13: Rebuild an app in Azure (Available). Contoso rebuilds its SmartHotel app by using a range of Azure capabilities and services, including Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services, and Azure Cosmos DB.
Article 14: Scale a migration to Azure (This article). After trying out migration combinations, Contoso prepares to scale to a full migration to Azure.
Business drivers
The IT leadership team has worked closely with business partners to understand what they want to achieve
with this migration:
Address business growth: Contoso is growing, causing pressure on on-premises systems and
infrastructure.
Increase efficiency: Contoso needs to remove unnecessary procedures, and streamline processes for
developers and users. The business needs IT to be fast and not waste time or money, thus delivering faster
on customer requirements.
Increase agility: Contoso IT needs to be more responsive to the needs of the business. It must be able to
react to changes in the marketplace faster, to enable success in a global economy. It mustn't get in
the way, or become a business blocker.
Scale: As the business grows successfully, the Contoso IT team must provide systems that are able to grow
at the same pace.
Improve cost models: Contoso wants to lessen capital requirements in the IT budget. Contoso wants to
use cloud abilities to scale and reduce the need for expensive hardware.
Lower licensing costs: Contoso wants to minimize cloud costs.
Migration goals
The Contoso cloud team has pinned down goals for this migration. These goals were used to determine the
best migration method.
Move to Azure quickly: Contoso wants to start moving apps and VMs to Azure as quickly as possible.
Compile a full inventory: Contoso wants a complete inventory of all apps, databases, and VMs in the organization.
Assess and classify apps: Contoso wants to fully leverage the cloud. As a default, Contoso assumes that all services will run as PaaS. IaaS will be used where PaaS isn't appropriate.
Train and move to DevOps: Contoso wants to move to a DevOps model. Contoso will provide Azure and DevOps training, and reorganize teams as necessary.
After pinning down goals and requirements, Contoso reviews the IT footprint, and identifies the migration
process.
Current deployment
After planning and setting up an Azure infrastructure and trying out different proof-of-concept (POC )
migration combinations as detailed in the table above, Contoso is ready to embark on a full migration to Azure
at scale. Here's what Contoso wants to migrate.
Migration process
Now that Contoso has pinned down business drivers and migration goals, it determines a four-pronged
approach for the migration process:
Phase 1-Assess: Discover the current assets, and figure out whether they're suitable for migration to Azure.
Phase 2-Migrate: Move the assets to Azure. How they move apps and objects to Azure will depend upon
the app, and what they want to achieve.
Phase 3-Optimize: After moving resources to Azure, Contoso needs to improve and streamline them for
maximum performance and efficiency.
Phase 4-Secure & Manage: With everything in place, Contoso now uses Azure security and management
resources and services to govern, secure, and monitor its cloud apps in Azure.
These phases aren't serial across the organization. Each piece of Contoso's migration project will be at a
different stage of the assessment and migration process. Optimization, security, and management will be
ongoing over time.
Phase 1: Assess
Contoso kicks off the process by discovering and assessing on-premises apps, data, and infrastructure. Here's
what Contoso will do:
Contoso needs to discover apps, map dependencies across apps, and decide on migration order and
priority.
As Contoso assesses, it will build out a comprehensive inventory of apps and resources. Along with the new
inventory, Contoso will use and update the existing Configuration Management Database (CMDB ) and
Service Catalog.
The CMDB holds technical configurations for Contoso apps.
The Service Catalog documents the operational details of apps, including associated business
partners, and Service Level Agreements (SLAs)
Discover apps
Contoso runs thousands of apps across a range of servers. In addition to the CMDB and Service Catalog,
Contoso needs discovery and assessment tools.
The tools must provide a mechanism that can feed assessment data into the migration process.
Assessment tools must provide data that helps build up an intelligent inventory of Contoso's physical and
virtual resources. Data should include profile information, and performance metrics.
When discovery is complete, Contoso should have a complete inventory of assets, and metadata associated
with them. This inventory will be used to define the migration plan.
Identify classifications
Contoso identifies some common categories to classify assets in the inventory. These classifications are critical
to Contoso’s decision making for migration. The classification list helps to establish migration priorities, and
identify complex issues.
Business group: List of business group names. Which group is responsible for the inventory item?
Migration risk: 1-5. What's the risk level for migrating the app? This value should be agreed upon by Contoso DevOps and relevant partners.
Contoso needs to use Azure Migrate correctly, given the scale of this migration.
Contoso will do an app-by-app assessment with Azure Migrate. This ensures that Azure Migrate returns
timely data to the Azure portal.
Contoso admins read about deploying Azure Migrate at scale.
Contoso notes the Azure Migrate limits.
Rehost: Often referred to as "lift and shift" migration, this is a no-code option for migrating existing apps to Azure quickly. Contoso can rehost less-strategic apps, requiring no code changes.
Refactor: Also referred to as "repackaging", this strategy requires minimal app code or configuration changes to connect the app to Azure PaaS, and take better advantage of cloud capabilities. Contoso can refactor strategic apps to retain the same basic functionality, but move them to run on an Azure platform such as Azure App Service. This requires minimal code changes.
Rebuild: This strategy rebuilds an app from scratch using cloud-native technologies. Contoso can rewrite critical apps from the ground up, to take advantage of cloud technologies such as serverless compute, or microservices.
Azure platform as a service (PaaS) provides a complete development and deployment environment in the cloud. It eliminates some of the expense and complexity of software licenses, and removes the need for an underlying app infrastructure, middleware, and other resources. Contoso will manage the apps and services it develops, and Azure manages everything else.
Data must also be considered, especially with the volume of databases that Contoso has. Contoso's default
approach is to use PaaS services such as Azure SQL Database to take full advantage of cloud features. By
moving to a PaaS service for databases, Contoso will only have to maintain data, leaving the underlying
platform to Microsoft.
Evaluate migration tools
Contoso is primarily using a couple of Azure services and tools for the migration:
Azure Site Recovery: Orchestrates disaster recovery, and migrates on-premises VMs to Azure.
Azure Database Migration Service: Migrates on-premises databases such as SQL Server, MySQL, and
Oracle to Azure.
Azure Site Recovery
Azure Site Recovery is the primary Azure service for orchestrating disaster recovery and migration from within
Azure, and from on-premises sites to Azure.
1. Site Recovery enables and orchestrates replication from your on-premises sites to Azure.
2. When replication is set up and running, on-premises machines can be failed over to Azure, completing the
migration.
Contoso already completed a POC to see how Site Recovery can help them to migrate to the cloud.
Using Site Recovery at scale
Contoso plans on running multiple lift-and-shift migrations. To ensure this works, Site Recovery will be
replicating batches of around 100 VMs at a time. To figure out how this will work, Contoso needs to perform
capacity planning for the proposed Site Recovery migration.
Contoso needs to gather information about their traffic volumes. In particular:
Contoso needs to determine the rate of change for VMs it wants to replicate.
Contoso also needs to take network connectivity from the on-premises site to Azure into account.
In response to capacity and volume requirements, Contoso will need to allocate sufficient bandwidth based
on the daily data change rate for the required VMs, to meet its recovery point objective (RPO ).
Lastly, they need to figure out how many servers are needed to run the Site Recovery components that are
needed for the deployment.
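The bandwidth side of that capacity planning is simple arithmetic; the sketch below assumes the daily churn is spread evenly over 24 hours (a real plan would size for peak churn, and the 500 GB figure is purely illustrative):

```shell
# Illustrative figure: 500 GB of combined daily data change across one VM batch
daily_churn_gb=500
seconds_per_day=86400

# GB/day -> megabits/second: GB * 8 bits * 1024 MB, divided by seconds in a day
mbps=$(awk -v gb="$daily_churn_gb" -v s="$seconds_per_day" \
    'BEGIN { printf "%.1f", gb * 8 * 1024 / s }')

echo "Sustained replication bandwidth needed: ${mbps} Mbps"
# prints "Sustained replication bandwidth needed: 47.4 Mbps"
```

Repeating this per batch of VMs shows whether the on-premises link to Azure can keep replication within the RPO.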
Gather on-premises information
Contoso can use the Site Recovery Deployment Planner tool to complete these steps:
Contoso can use the tool to remotely profile VMs without an impact on the production environment. This
helps pinpoint bandwidth and storage requirements for replication and failover.
Contoso can run the tool without installing any Site Recovery components on-premises.
The tool gathers information about compatible and incompatible VMs, disks per VM, and data churn per
disk. It also identifies network bandwidth requirements, and the Azure infrastructure needed for successful
replication and failover.
Contoso needs to ensure that they run the planner tool on a Windows Server machine that matches the
minimum requirements for the Site Recovery configuration server. The configuration server is a Site
Recovery machine that's needed in order to replicate on-premises VMware VMs.
Identify Site Recovery requirements
In addition to the VMs being replicated, Site Recovery requires a number of components for VMware
migration.
COMPONENT DETAILS
Maximum daily change rate A single process server can handle a daily change rate of up
to 2 TB. Since a VM can only use one process server, the
maximum daily data change rate that’s supported for a
replicated VM is 2 TB.
Memory: 32 GB
Cache disk: 1 TB
Memory: 32 GB
Cache disk: 1 TB
Azure storage For migration, Contoso must identify the right type and
number of target Azure storage accounts. Site Recovery
replicates VM data to Azure storage.
Contoso needs to figure out how to deploy these components, based on capacity considerations.
Contoso has decided to use managed disks for all VMs that are deployed to Azure. The IOPS
required will determine whether the disks will be Standard HDD, Standard SSD, or Premium SSD.
Another scaling tactic for Contoso is to temporarily scale up the Azure SQL or MySQL Database target
instance to the Premium tier SKU during the data migration. This minimizes database throttling that
could impact data transfer activities when using lower-level SKUs.
Using other tools
In addition to DMS, Contoso can use other tools and services to identify VM information.
They have scripts to help with manual migrations. These are available in the GitHub repo.
A number of partner tools can also be used for migration.
Phase 3: Optimize
After Contoso moves resources to Azure, they need to streamline them to improve performance, and maximize
ROI with cost management tools. Given that Azure is a pay-for-use service, it's critical for Contoso to
understand how systems are performing, and to ensure they're sized properly.
Azure cost management
To make the most of their cloud investment, Contoso will leverage the free Azure Cost Management tool.
This licensed solution built by Cloudyn, a Microsoft subsidiary, lets Contoso manage cloud spending with
transparency and accuracy. It provides tools to monitor, allocate, and trim cloud costs.
Azure Cost Management provides simple dashboard reports to help with cost allocation, showbacks and
chargebacks.
Cost Management can optimize cloud spending by identifying underutilized resources that Contoso can
then manage and adjust.
Learn more about Azure Cost Management.
Native Tools
Contoso will also use scripts to locate unused resources.
During large migrations, there are often leftover pieces of data such as virtual hard drives (VHDs), which
incur a charge, but provide no value to the company. Scripts are available in the GitHub repo.
Contoso will leverage work done by Microsoft’s IT department, and consider implementing the Azure
Resource Optimization (ARO) Toolkit.
Contoso can deploy an Azure Automation account with preconfigured runbooks and schedules to its
subscription, and start saving money. Azure resource optimization happens automatically on a subscription
after a schedule is enabled or created, including optimization on new resources.
This provides decentralized automation capabilities to reduce costs. Features include:
Auto-snooze Azure VMs based on low CPU.
Schedule Azure VMs to snooze and unsnooze.
Schedule Azure VMs to snooze or unsnooze in ascending and descending order using Azure tags.
Bulk deletion of resource groups on-demand.
Get started with the ARO toolkit in this GitHub repo.
Partner Tools
Partner tools such as Hanu and Scalr can be leveraged.
Conclusion
In this article, Contoso planned for an Azure migration at scale. They divided the migration process into four
stages, from assessment and migration through to optimization, security, and management after migration
was complete. Most importantly, a migration project should be planned as a whole process, while systems
within an organization are migrated by breaking sets down into classifications and numbers that make sense for the
business. By assessing data and applying classifications, a project can be broken down into a series of smaller
migrations, which can run safely and rapidly. The sum of these smaller migrations quickly turns into a large,
successful migration to Azure.
Discover and assess a large VMware environment
4/10/2019 • 14 minutes to read
Azure Migrate has a limit of 1,500 machines per project. This article describes how to assess large numbers of on-
premises virtual machines (VMs) by using Azure Migrate.
NOTE
We have a preview release available that allows discovery of up to 10,000 VMware VMs in a single project using a single
appliance. If you are interested in trying it out, please sign up here.
Prerequisites
VMware: The VMs that you plan to migrate must be managed by vCenter Server version 5.5, 6.0, 6.5 or 6.7.
Additionally, you need one ESXi host running version 5.5 or later to deploy the collector VM.
vCenter account: You need a read-only account to access vCenter Server. Azure Migrate uses this account to
discover the on-premises VMs.
Permissions: In vCenter Server, you need permissions to create a VM by importing a file in OVA format.
Statistics settings: This requirement applies only to the one-time discovery model, which is now
deprecated. For the one-time discovery model, the statistics settings for vCenter Server should be set to level 3 before
you start deployment. The statistics level must be set to 3 for each of the day, week, and month collection
intervals. If the level is lower than 3 for any of the three collection intervals, the assessment will work, but the
performance data for storage and network won't be collected. The size recommendations will then be based on
performance data for CPU and memory, and on configuration data for disk and network adapters.
NOTE
The one-time discovery appliance is now deprecated as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters which resulted in under-sizing of VMs for
migration to Azure.
Set up permissions
Azure Migrate needs access to VMware servers to automatically discover VMs for assessment. The VMware
account needs the following permissions:
User type: At least a read-only user
Permissions: Data Center object –> Propagate to Child Object, role=Read-only
Details: User assigned at datacenter level, and has access to all the objects in the datacenter.
To restrict access, assign the No access role with the Propagate to child object, to the child objects (vSphere
hosts, datastores, VMs, and networks).
If you are deploying in a multi-tenant environment and would like to scope by folder of VMs for a single tenant,
you cannot directly select the VM folder when scoping collection in Azure Migrate. Following are instructions on
how to scope discovery by folder of VMs:
1. Create a user per tenant and assign read-only permissions to all the VMs belonging to a particular tenant.
2. Grant this user read-only access to all the parent objects where the VMs are hosted. All parent objects - host,
folder of hosts, cluster, folder of clusters - in the hierarchy up to the data center are to be included. You do not
need to propagate the permissions to all child objects.
3. Use the credentials for discovery, selecting datacenter as the Collection Scope. The RBAC setup ensures that the
corresponding vCenter user has access to only tenant-specific VMs.
NOTE
The one-time discovery appliance is now deprecated, as this method relied on vCenter Server's statistics settings for
performance data point availability and collected average performance counters, which resulted in under-sizing of VMs for
migration to Azure. It is recommended to move to the continuous discovery appliance.
ENTITY MACHINE LIMIT
Project 1,500
Discovery 1,500
Assessment 1,500
Instant gratification: With the continuous discovery appliance, once the discovery is complete (it takes a
couple of hours, depending on the number of VMs), you can immediately create assessments. Because
performance data collection starts when you kick off discovery, if you are looking for instant gratification,
you should set the sizing criterion in the assessment to as on-premises. For performance-based
assessments, it is advised to wait for at least a day after kicking off discovery to get reliable size
recommendations.
Note that the appliance only collects performance data continuously; it does not detect configuration
changes in the on-premises environment (for example, VM addition, deletion, or disk addition). If there is a
configuration change in the on-premises environment, you can do the following to reflect the changes in
the portal:
Addition of items (VMs, disks, cores, etc.): To reflect these changes in the Azure portal, stop
the discovery from the appliance and then start it again. This ensures that the changes are
updated in the Azure Migrate project.
Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if
you stop and start the discovery. Data from subsequent discoveries is appended to
older discoveries, not overridden. In this case, you can simply ignore the VM in the portal by
removing it from your group and recalculating the assessment.
3. In Copy project credentials, copy the ID and key for the project. You need these when you configure the
collector.
Verify the collector appliance
Check that the OVA file is secure before you deploy it:
1. On the machine to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the OVA:
C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]
3. Make sure that the generated hash matches the following settings.
Continuous discovery
For OVA version 1.0.10.4
ALGORITHM HASH VALUE
MD5 2ca5b1b93ee0675ca794dd3fd216e13d
SHA1 8c46a52b18d36e91daeae62f412f5cb2a8198ee5
SHA256 3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be
MD5 e9ef16b0c837638c506b5fc0ef75ebfa
SHA1 37b4b1e92b3c6ac2782ff5258450df6686c89864
SHA256 8a86fc17f69b69968eb20a5c4c288c194cdcffb4ee6568d85ae5ba96835559ba
MD5 6d8446c0eeba3de3ecc9bc3713f9c8bd
SHA1 e9f5bdfdd1a746c11910ed917511b5d91b9f939f
SHA256 7f7636d0959379502dfbda19b8e3f47f3a4744ee9453fc9ce548e6682a66f13c
MD5 d0363e5d1b377a8eb08843cf034ac28a
SHA1 df4a0ada64bfa59c37acf521d15dcabe7f3f716b
SHA256 f677b6c255e3d4d529315a31b5947edfe46f45e4eb4dbc8019d68d1d1b337c2e
MD5 b5d9f0caf15ca357ac0563468c2e6251
SHA1 d6179b5bfe84e123fabd37f8a1e4930839eeb0e5
SHA256 09c68b168719cb93bd439ea6a5fe21a3b01beec0e15b84204857061ca5b116ff
MD5 d5b6a03701203ff556fa78694d6d7c35
SHA1 f039feaa10dccd811c3d22d9a59fb83d0b01151e
SHA256 e5e997c003e29036f62bf3fdce96acd4a271799211a84b34b35dfd290e9bea9c
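On Windows, CertUtil performs the check shown in the steps above. A minimal cross-platform sketch of the same comparison follows; the file name, stand-in content, and expected hash are placeholders, not a real download:

```shell
# Compare a file's SHA256 hash against a published value.
# FILE and EXPECTED are placeholders for the real OVA path and table value.
FILE="sample.ova"
EXPECTED="3b3dec0f995b3dd3c6ba218d436be003a687710abab9fcd17d4bdc90a11276be"

printf 'demo' > "$FILE"   # stand-in content so the sketch is self-contained
ACTUAL=$(sha256sum "$FILE" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "Hash matches - OVA is intact"
else
  echo "Hash mismatch - re-download the OVA"
fi
```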
2. In the Deploy OVF Template Wizard > Source, specify the location of the OVA file.
3. In Name and Location, specify a friendly name for the collector VM, and the inventory object in which the
VM will be hosted.
4. In Host/Cluster, specify the host or cluster on which the collector VM will run.
5. In storage, specify the storage destination for the collector VM.
6. In Disk Format, specify the disk type and size.
7. In Network Mapping, specify the network to which the collector VM will connect. The network needs
internet connectivity to send metadata to Azure.
8. Review and confirm the settings, and then select Finish.
Identify the ID and key for each project
If you have multiple projects, be sure to identify the ID and key for each one. You need the key when you run the
collector to discover the VMs.
1. In the project, select Getting Started > Discover & Assess > Discover Machines.
2. In Copy project credentials, copy the ID and key for the project.
Next steps
Learn how to create a group for assessment.
Learn more about how assessments are calculated.
Group machines for assessment
12/11/2018 • 2 minutes to read
This article describes how to create a group of machines for assessment by Azure Migrate. Azure Migrate assesses
machines in the group to check whether they're suitable for migration to Azure, and provides sizing and cost
estimations for running the machines in Azure. If you know which machines need to be migrated together, you can
manually create the group in Azure Migrate using the following method. If you aren't sure which machines
need to be grouped together, you can use the dependency visualization functionality in Azure
Migrate to create groups. Learn more.
NOTE
The dependency visualization functionality is not available in Azure Government.
Create a group
1. In the Overview of the Azure Migrate project, under Manage, click Groups > + Group, and specify a group
name.
2. Add one or more machines to the group, and click Create.
3. You can optionally select to run a new assessment for the group.
3. You can optionally select to run a new assessment for the group.
After the group is created, you can modify it by selecting the group on the Groups page, and then adding or
removing machines.
Next steps
Learn how to use machine dependency mapping to create high confidence groups.
Learn more about how assessments are calculated.
Group machines using machine dependency
mapping
4/9/2019 • 7 minutes to read
This article describes how to create a group of machines for Azure Migrate assessment by visualizing
dependencies of machines. You typically use this method when you want to assess groups of VMs with higher
levels of confidence by cross-checking machine dependencies, before you run an assessment. Dependency
visualization can help you effectively plan your migration to Azure. It helps you ensure that nothing is left behind
and surprise outages do not occur when you are migrating to Azure. You can discover all interdependent systems
that need to migrate together and identify whether a running system is still serving users or is a candidate for
decommissioning instead of migration.
NOTE
The dependency visualization functionality is not available in Azure Government.
While associating a workspace, you will get the option to create a new workspace or attach an existing one:
When you create a new workspace, you need to specify a name for the workspace. The workspace is
then created in a region in the same Azure geography as the migration project.
When you attach an existing workspace, you can pick from all the available workspaces in the same
subscription as the migration project. Note that only those workspaces are listed which were created in
a region where Service Map is supported. To be able to attach a workspace, ensure that you have
'Reader' access to the workspace.
NOTE
You cannot change the workspace associated to a migration project.
NOTE
To automate the installation of agents you can use any deployment tool like System Center Configuration Manager or use
our partner tool, Intigua, that has an agent deployment solution for Azure Migrate.
Learn more about the list of Linux operating systems supported by the MMA.
Install the agent on a machine monitored by SCOM
For machines monitored by System Center Operations Manager 2012 R2 or later, there is no need to install the
MMA agent. Service Map integrates with SCOM and leverages the SCOM MMA to gather the necessary
dependency data. You can enable the integration using the guidance here. Note, however, that the Dependency
agent still needs to be installed on these machines.
Install the Dependency agent
1. To install the Dependency agent on a Windows machine, double-click the setup file and follow the wizard.
2. To install the Dependency agent on a Linux machine, install as root using the following command:
sh InstallDependencyAgent-Linux64.bin
Learn more about the Dependency agent support for the Windows and Linux operating systems.
Learn more about how you can use scripts to install the Dependency agent.
Create a group
1. After you install the agents, go to the portal and click Manage > Machines.
2. Search for the machine where you installed the agents.
3. The Dependencies column for the machine should now show as View Dependencies. Click the column
to view the dependencies of the machine.
4. The dependency map for the machine shows the following details:
Inbound (Clients) and outbound (Servers) TCP connections to and from the machine
Dependent machines that do not have the MMA and Dependency agent installed, grouped by
port numbers
Dependent machines that have the MMA and the Dependency agent installed, shown as
separate boxes
Processes running inside the machine; you can expand each machine box to view the processes
Properties such as fully qualified domain name, operating system, and MAC address of each
machine; you can click each machine box to view these details
5. You can look at dependencies for different time durations by clicking on the time duration in the time range
label. By default the range is an hour. You can modify the time range, or specify start and end dates, and
duration.
NOTE
Currently, the dependency visualization UI does not support selection of a time range longer than an hour. Use
Azure Monitor logs to query the dependency data over a longer duration.
6. After you've identified dependent machines that you want to group together, use Ctrl+Click to select
multiple machines on the map, and click Group machines.
7. Specify a group name. Verify that the dependent machines are discovered by Azure Migrate.
NOTE
If a dependent machine is not discovered by Azure Migrate, you cannot add it to the group. To add such machines
to the group, you need to run the discovery process again with the right scope in vCenter Server and ensure that
the machine is discovered by Azure Migrate.
8. If you want to create an assessment for this group, select the checkbox to create a new assessment for the
group.
9. Click OK to save the group.
Once the group is created, it is recommended to install agents on all the machines of the group and refine the
group by visualizing the dependency of the entire group.
Next steps
Learn more about the FAQs on dependency visualization.
Learn how to refine the group by visualizing group dependencies.
Learn more about how assessments are calculated.
Refine a group using group dependency mapping
4/9/2019 • 7 minutes to read
This article describes how to refine a group by visualizing dependencies of all machines in the group. You typically
use this method when you want to refine membership for an existing group, by cross-checking group
dependencies, before you run an assessment. Refining a group using dependency visualization can help you
effectively plan your migration to Azure. You can discover all interdependent systems that need to migrate
together. It helps you ensure that nothing is left behind and surprise outages do not occur when you are migrating
to Azure.
NOTE
Groups for which you want to visualize dependencies shouldn't contain more than 10 machines. If you have more than 10
machines in the group, we recommend that you split it into smaller groups to leverage the dependency visualization
functionality.
NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a Log
Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the terminology to
better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.
NOTE
The dependency visualization functionality is not available in Azure Government.
NOTE
You cannot change the workspace associated to a migration project.
Learn more about the Dependency agent support for the Windows and Linux operating systems.
Learn more about how you can use scripts to install the Dependency agent.
4. To view more granular dependencies, click the time range to modify it. By default, the range is an hour. You
can modify the time range, or specify start and end dates, and duration.
NOTE
Currently, the dependency visualization UI does not support selection of a time range longer than an hour. Use Azure
Monitor logs to query the dependency data over a longer duration.
5. Verify the dependent machines and the processes running inside each machine, and identify the machines that
should be added to or removed from the group.
6. Use Ctrl+Click to select machines on the map to add or remove them from the group.
You can only add machines that have been discovered.
Adding and removing machines from a group invalidates past assessments for it.
You can optionally create a new assessment when you modify the group.
7. Click OK to save the group.
If you want to check the dependencies of a specific machine that appears in the group dependency map, set up
machine dependency mapping.
The following Azure Monitor logs query summarizes inbound connections between a set of monitored machines:
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort
Summarize volume of data sent and received on inbound connections between a set of machines
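The query for the heading above is not shown; the following is a hedged sketch of what it might look like, assuming the standard VMConnection columns BytesSent and BytesReceived and reusing the same ips set as the earlier query:

```kusto
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2019-03-25T00:00:00Z);
let EndDateTime = datetime(2019-03-30T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(BytesSent), sum(BytesReceived) by Computer, Direction, SourceIp, DestinationIp, DestinationPort
```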
Next steps
Learn more about the FAQs on dependency visualization.
Learn more about how assessments are calculated.
Customize an assessment
1/10/2019 • 5 minutes to read
Azure Migrate creates assessments with default properties. After creating an assessment, you can modify the
default properties using the instructions in this article.
Target location: The Azure location to which you want to migrate. West US 2 is the default location.
Storage type: You can use this property to specify the type of disks you want to move to in Azure. For as
on-premises sizing, you can specify the target disk type as either Premium-managed disks or
Standard-managed disks. For performance-based sizing, you can specify the target disk type as
Automatic, Premium-managed disks, or Standard-managed disks. When you specify the storage type as
Automatic, the disk recommendation is based on the performance data of the disks (IOPS and
throughput). For example, if you want to achieve a single-instance VM SLA of 99.9%, you may want to
specify the storage type as Premium-managed disks. This ensures that all disks in the assessment are
recommended as Premium-managed disks. Note that Azure Migrate only supports managed disks for
migration assessment. The default value is Premium-managed disks (with sizing criterion as as
on-premises sizing).
Reserved Instances: You can specify whether you have reserved instances in Azure, and Azure Migrate
will estimate the cost accordingly. Reserved instances are currently only supported for the Pay-As-You-Go
offer in Azure Migrate. The default value for this property is 3-year reserved instances.
VM series: You can specify the VM series that you would like to consider for right-sizing. For example, if
you have a production environment that you do not plan to migrate to A-series VMs in Azure, you can
exclude A-series from the list of series, and right-sizing is done only within the selected series. By default,
all VM series are selected.
Offer: The Azure offer that you are enrolled in. Pay-as-you-go is the default.
VM uptime: If your VMs are not going to be running 24x7 in Azure, you can specify the duration
(number of days per month and number of hours per day) for which they would be running, and the cost
estimations are done accordingly. The default value is 31 days per month and 24 hours per day.
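The effect of the VM uptime property on a cost estimate can be illustrated with hypothetical numbers; the hourly rate below is invented for the sketch, not an Azure price:

```shell
# Scale a monthly compute estimate by the specified uptime.
# All rates and hours are illustrative placeholders.
FULL_DAYS=31; FULL_HOURS=24          # default uptime: 31 days x 24 hours
UPTIME_DAYS=20; UPTIME_HOURS=12      # example: 20 days/month, 12 hours/day
RATE_CENTS=10                        # assumed cost per VM-hour, in cents

FULL_COST=$((FULL_DAYS * FULL_HOURS * RATE_CENTS))
ADJUSTED_COST=$((UPTIME_DAYS * UPTIME_HOURS * RATE_CENTS))
echo "Default estimate: $FULL_COST cents/month"
echo "Uptime-adjusted estimate: $ADJUSTED_COST cents/month"
```

Reducing uptime from the 744-hour default to 240 hours scales the compute portion of the estimate proportionally.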
Next steps
Learn more about how assessments are calculated.
Migrate machines after assessment
12/11/2018 • 2 minutes to read
Azure Migrate assesses on-premises machines to check whether they're suitable for migration to Azure, and
provides sizing and cost estimations for running the machine in Azure. Currently, Azure Migrate only assesses
machines for migration. The migration itself is currently performed using other Azure services.
This article describes how to get suggestions for a migration tool after you've run a migration assessment.
NOTE
The migration tool suggestion is not available in Azure Government.
3. In Suggested Tool, review the suggestions for tools you can use for migration.
Next steps
Learn more about how assessments are calculated.
Scale migration of VMs using Azure Site Recovery
4/1/2019 • 3 minutes to read
This article helps you understand the process of using scripts to migrate a large number of VMs using Azure Site
Recovery. These scripts are available for download in the Azure PowerShell Samples repo on GitHub. The scripts
can be used to migrate VMware, AWS, and GCP VMs, and physical servers, to Azure, and support migration to
managed disks. You can also use these scripts to migrate Hyper-V VMs if you migrate the VMs as physical servers.
The scripts leverage the Azure Site Recovery PowerShell documented here.
Current limitations:
The scripts support specifying a static IP address only for the primary NIC of the target VM.
The scripts do not take Azure Hybrid Benefit-related inputs; you need to manually update the properties of the
replicated VM in the portal.
Deletion of VMs: Due to the way the appliance is designed, deletion of VMs is not reflected even if you stop
and start the discovery. This is because data from subsequent discoveries are appended to older discoveries
and not overridden. In this case, you can simply ignore the VM in the portal, by removing it from your group
and recalculating the assessment.
Deletion of Azure Migrate projects and associated Log Analytics workspace
When you delete an Azure Migrate project, it deletes the migration project along with all the groups and
assessments. However, if you have attached a Log Analytics workspace to the project, it does not automatically
delete the Log Analytics workspace. This is because the same Log Analytics workspace might be used for multiple
use cases. If you would like to delete the Log Analytics workspace as well, you need to do it manually.
1. Browse to the Log Analytics workspace attached to the project.
a. If you have not deleted the migration project yet, you can find the link to the workspace from the project overview page in the Essentials section.
b. If you already deleted the migration project, click Resource Groups in the left pane in the Azure portal, go to the resource group in which the workspace was created, and then browse to it.
2. Follow the instructions in this article to delete the workspace.
Migration project creation failed with error Requests must contain user identity headers
This issue can happen for users who do not have access to the Azure Active Directory (Azure AD) tenant of the
organization. When a user is added to an Azure AD tenant for the first time, they receive an email invitation to join
the tenant. Users need to accept the invitation in that email to be successfully added to the tenant. If you are
unable to see the email, reach out to a user who already has access to the tenant and ask them to resend the
invitation to you using the steps specified here.
Once the invitation email is received, open the email and click the link in it to accept the
invitation. After this is done, sign out of the Azure portal and sign in again; refreshing the browser will not
work. You can then try creating the migration project.
I am unable to export the assessment report
If you are unable to export the assessment report from the portal, try using the REST API below to get a download
URL for the assessment report.
1. Install armclient on your computer (if you don’t have it already installed):
a. In an administrator Command Prompt window, run the following command:
@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object
System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET
"PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
b. In an administrator Windows PowerShell window, run the following command: choco install armclient
2. Get the download URL for the assessment report using Azure Migrate REST API
a. In an administrator Windows PowerShell window, run the following command: armclient login
This opens the Azure login pop-up where you need to sign in to Azure.
b. In the same PowerShell window, run the following command to get the download URL for the
assessment report (replace the URI parameters with the appropriate values; a sample API request is shown below):
armclient POST https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Migrate/projects/{projectName}/groups/{groupName}/assessments/{assessmentName}/downloadUrl?api-version=2018-02-02
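The same request can be issued with any REST client once a bearer token is available. The sketch below only assembles and prints the request URI; the values in braces are the same placeholders as in the armclient example and must be substituted by the caller:

```shell
# Build the Azure Migrate downloadUrl request URI. The {placeholders} mirror
# the armclient example above and must be replaced with real values.
API_VERSION="2018-02-02"
BASE="https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Migrate/projects/{projectName}"
URI="$BASE/groups/{groupName}/assessments/{assessmentName}/downloadUrl?api-version=$API_VERSION"
echo "armclient POST $URI"
```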
3. Copy the URL from the response and open it in a browser to download the assessment report.
4. Once the report is downloaded, browse to the downloaded folder and open the file in Excel to view it.
Performance data for CPU, memory and disks is showing up as zeroes
Azure Migrate continuously profiles the on-premises environment to collect performance data of the on-premises
VMs. If you have just started the discovery of your environment, you need to wait for at least a day for the
performance data collection to be done. If an assessment is created without waiting for one day, the performance
metrics will show up as zeroes. After waiting for a day, you can either create a new assessment or update the
existing assessment by using the 'Recalculate' option in the assessment report.
I specified an Azure geography while creating a migration project. How do I find out the exact Azure region
where the discovered metadata is stored?
You can go to the Essentials section in the Overview page of the project to identify the exact location where the
metadata is stored. The location is selected randomly within the geography by Azure Migrate and you cannot
modify it. If you want to create a project in a specific region only, you can use the REST APIs to create the
migration project and pass the desired region.
Collector issues
Deployment of Azure Migrate Collector failed with the error: The provided manifest file is invalid: Invalid OVF
manifest entry.
1. Verify if Azure Migrate Collector OVA file is downloaded correctly by checking its hash value. Refer to the article
to verify the hash value. If the hash value is not matching, download the OVA file again and retry the
deployment.
2. If it still fails and you are using the VMware vSphere Client to deploy the OVF, try deploying it through the vSphere
Web Client. If it still fails, try using a different web browser.
3. If you are using the vSphere Web Client and trying to deploy the OVA on vCenter Server 6.5 or 6.7, try to deploy the OVA
directly on the ESXi host by following these steps:
Connect to the ESXi host directly (instead of vCenter Server) using the web client (https://<host IP
Address>/ui)
Go to Home > Inventory
Click File > Deploy OVF template > Browse to the OVA and complete the deployment
4. If the deployment still fails, contact Azure Migrate support.
Unable to select the Azure cloud in the appliance, fails with error "Azure cloud selection failed"
This is a known issue, and a fix is available. Please download the latest upgrade bits for the appliance
and update the appliance to apply the fix.
Collector is not able to connect to the internet
This can happen when the machine you are using is behind a proxy. Make sure you provide the authorization
credentials if the proxy needs them. If you are using any URL-based firewall proxy to control outbound connectivity,
be sure to whitelist these required URLs:
URL PURPOSE
The collector can't connect to the internet because of a certificate validation failure
This can happen if you are using an intercepting proxy to connect to the Internet, and if you have not imported the
proxy certificate on to the collector VM. You can import the proxy certificate using the steps detailed here.
The collector can't connect to the project using the project ID and key I copied from the portal.
Make sure you've copied and pasted the right information. To troubleshoot, install the Microsoft Monitoring Agent
(MMA) and verify if the MMA can connect to the project as follows:
1. On the collector VM, download the MMA.
2. To start the installation, double-click the downloaded file.
3. In setup, on the Welcome page, click Next. On the License Terms page, click I Agree to accept the license.
4. In Destination Folder, keep or modify the default installation folder > Next.
5. In Agent Setup Options, select Azure Log Analytics > Next.
6. Click Add to add a new Log Analytics workspace. Paste in project ID and key that you copied. Then click Next.
7. Verify that the agent can connect to the project. If it can't, verify the settings. If the agent can connect but the
collector can't, contact Support.
Error 802: Date and time synchronization error
The server clock might be out-of-synchronization with the current time by more than five minutes. Change the
clock time on the collector VM to match the current time, as follows:
1. Open an admin command prompt on the VM.
2. To check the time zone, run w32tm /tz.
3. To synchronize the time, run w32tm /resync.
VMware PowerCLI installation failed
Azure Migrate collector downloads PowerCLI and installs it on the appliance. Failure in PowerCLI installation could
be due to unreachable endpoints for the PowerCLI repository. To troubleshoot, try manually installing PowerCLI in
the collector VM using the following step:
1. Open Windows PowerShell in administrator mode.
2. Go to the directory C:\Program Files\ProfilerService\VMWare\Scripts\.
3. Run the script InstallPowerCLI.ps1.
Error UnhandledException Internal error occurred: System.IO.FileNotFoundException
This issue can occur due to a problem with the VMware PowerCLI installation. Follow the steps below to resolve the
issue:
1. If you are not on the latest version of the collector appliance, upgrade the collector to the latest version
and check if the issue is resolved.
2. If you already have the latest collector version, follow the steps below to do a clean installation of PowerCLI:
a. Close the web browser in the appliance.
b. Stop the 'Azure Migrate Collector' service by going to Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Stop.
c. Delete all folders starting with 'VMware' from the following locations:
C:\Program Files\WindowsPowerShell\Modules
C:\Program Files (x86)\WindowsPowerShell\Modules
d. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
e. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector
application should automatically download and install the required version of PowerCLI.
3. If the above does not resolve the issue, manually install PowerCLI in the
appliance using the following steps:
a. Clean up all incomplete PowerCLI installation files by following steps a to c in step 2 above.
b. Go to Start > Run > Open Windows PowerShell(x86) in administrator mode
c. Run the command: Install-Module "VMWare.VimAutomation.Core" -RequiredVersion "6.5.2.6234650"
(type 'A' when it asks for confirmation)
d. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
e. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector
application should automatically download and install the required version of PowerCLI.
4. If you are unable to download the module in the appliance due to firewall issues, download and install the
module on a machine that has internet access, using the following steps:
a. Clean up all incomplete PowerCLI installation files by following steps #a to #c in step #2 above.
b. Go to Start > Run > Open Windows PowerShell(x86) in administrator mode
c. Run the command: Install-Module "VMWare.VimAutomation.Core" -RequiredVersion "6.5.2.6234650"
(type 'A' when it asks for confirmation)
d. Copy all modules starting with "VMware" from C:\Program Files (x86)\WindowsPowerShell\Modules
to the same location on the collector VM.
e. Restart the 'Azure Migrate Collector' service in Windows Service Manager (Open 'Run' and type
services.msc to open Windows Service Manager). Right click on Azure Migrate Collector Service and click
Start.
f. Double-click the desktop shortcut 'Run collector' to start the collector application. The collector application
should automatically download and install the required version of PowerCLI.
Error UnableToConnectToServer
Unable to connect to vCenter Server "Servername.com:9443" due to error: There was no endpoint listening at
https://Servername.com:9443/sdk that could accept the message.
Check whether you are running the latest version of the collector appliance; if not, upgrade the appliance to the
latest version.
If the issue still occurs on the latest version, it might be because the collector machine is unable to resolve the
vCenter Server name specified, or because the specified port is wrong. By default, if no port is specified, the
collector tries to connect to port 443.
1. Try to ping Servername.com from the collector machine.
2. If step 1 fails, try to connect to the vCenter Server using its IP address.
3. Identify the correct port number to connect to vCenter Server.
4. Finally, check that the vCenter Server is up and running.
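The name-resolution and port checks above can also be scripted. A minimal Python sketch using only the standard library (the host name and port are placeholders for your vCenter Server details):

```python
import socket

def check_endpoint(host, port, timeout=5):
    """Return (resolved_ip, reachable) for a host:port pair."""
    try:
        ip = socket.gethostbyname(host)       # name resolution (like step 1)
    except socket.gaierror:
        return None, False                    # DNS failure: fix name or use IP
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return ip, True                   # something is listening on the port
    except OSError:
        return ip, False                      # resolved, but port unreachable

# "Servername.com" and 9443 from the error are placeholders; substitute your
# vCenter Server name and port (443 is the default when none is specified).
print(check_endpoint("localhost", 443))
```

A result of (None, False) points to a name-resolution problem; (ip, False) points to a wrong port, a firewall, or a vCenter Server that is down.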
Antivirus exclusions
To harden the Azure Migrate appliance, exclude the following folders in the appliance from antivirus scanning:
Folder that has the binaries for the Azure Migrate service. Exclude all sub-folders. %ProgramFiles%\ProfilerService
Azure Migrate web application. Exclude all sub-folders. %SystemDrive%\inetpub\wwwroot
Local cache for database and log files. The Azure Migrate service needs read/write access to this folder.
%SystemDrive%\Profiler
Unsupported boot type: Azure does not support VMs with EFI boot type. It is recommended to convert the boot
type to BIOS before you run a migration.
Conditionally supported Windows OS: The OS has passed its end-of-support date and needs a Custom Support
Agreement (CSA) for support in Azure; consider upgrading the OS before migrating to Azure.
Conditionally endorsed Linux OS: Azure endorses only selected Linux OS versions; consider upgrading the OS of
the machine before migrating to Azure.
Unknown operating system: The operating system of the VM was specified as 'Other' in vCenter Server, so Azure
Migrate cannot identify the Azure readiness of the VM. Ensure that the OS running inside the machine is
supported by Azure before you migrate the machine.
Requires Visual Studio subscription: The machine has a Windows client OS, which is supported only with a Visual
Studio subscription.
VM not found for the required storage performance: The storage performance (IOPS/throughput) required for
the machine exceeds Azure VM support. Reduce storage requirements for the machine before migration.
VM not found for the required network performance: The network performance (in/out) required for the machine
exceeds Azure VM support. Reduce the networking requirements for the machine.
VM not found in the specified location: Use a different target location before migration.
One or more unsuitable disks: One or more disks attached to the VM do not meet the Azure requirements. For
each disk attached to the VM, ensure that the size of the disk is less than 4 TB; if not, shrink the disk before
migrating to Azure. Ensure that the performance (IOPS/throughput) needed by each disk is supported by Azure
managed virtual machine disks.
One or more unsuitable network adapters: Remove unused network adapters from the machine before
migration.
ISSUE FIX
Disk count exceeds limit: Remove unused disks from the machine before migration.
Disk size exceeds limit: Azure supports disks of up to 4 TB. Shrink disks to less than 4 TB before migration.
Disk unavailable in the specified location: Make sure the disk is in your target location before you migrate.
Disk unavailable for the specified redundancy: The disk should use the redundancy storage type defined in the
assessment settings (LRS by default).
Could not determine disk suitability due to an internal error: Try creating a new assessment for the group.
VM with required cores and memory not found: Azure couldn't find a suitable VM type. Reduce the memory and
number of cores of the on-premises machine before you migrate.
Could not determine VM suitability due to an internal error: Try creating a new assessment for the group.
Could not determine suitability for one or more disks due to an internal error: Try creating a new assessment for
the group.
Could not determine suitability for one or more network adapters due to an internal error: Try creating a new
assessment for the group.
Collect logs
How do I collect logs on the collector VM?
Logging is enabled by default. Logs are located as follows:
C:\Profiler\ProfilerEngineDB.sqlite
C:\Profiler\Service.log
C:\Profiler\WebApp.log
To collect Event Tracing for Windows, do the following:
1. On the collector VM, open a PowerShell command window.
2. Run Get-EventLog -LogName Application | export-csv eventlog.csv.
How do I collect portal network traffic logs?
1. Open the browser, navigate to the portal, and sign in.
2. Press F12 to start the Developer Tools. If needed, clear the setting Clear entries on navigation.
3. Click the Network tab, and start capturing network traffic:
In Chrome, select Preserve log. The recording should start automatically. A red circle indicates that
traffic is being captured. If it doesn't appear, click the black circle to start.
In Microsoft Edge/IE, recording should start automatically. If it doesn't, click the green play button.
4. Try to reproduce the error.
5. After you've encountered the error while recording, stop recording, and save a copy of the recorded activity:
In Chrome, right-click and click Save as HAR with content. This zips and exports the logs as a .har file.
In Microsoft Edge/IE, click the Export captured traffic icon. This zips and exports the log.
6. Navigate to the Console tab to check for any warnings or errors. To save the console log:
In Chrome, right-click anywhere in the console log. Select Save as, to export and zip the log.
In Microsoft Edge/IE, right-click on the errors and select Copy all.
7. Close Developer Tools.
751 UnableToConnectToServer
Message: Unable to connect to vCenter Server '%Name;' due to error: %ErrorMessage;
Possible cause: Check the error message for more details.
Recommendation: Resolve the issue and try again.

752 InvalidvCenterEndpoint
Message: The server '%Name;' is not a vCenter Server.
Possible cause: Provide vCenter Server details.
Recommendation: Retry the operation with correct vCenter Server details.

753 InvalidLoginCredentials
Message: Unable to connect to the vCenter Server '%Name;' due to error: %ErrorMessage;
Possible cause: Connection to the vCenter Server failed due to invalid login credentials.
Recommendation: Ensure that the login credentials provided are correct.

754 NoPerfDataAvailable
Message: Performance data not available.
Possible cause: Check the Statistics Level in vCenter Server. It should be set to 3 for performance data to be
available.
Recommendation: Change the Statistics Level to 3 (for the 5-minute, 30-minute, and 2-hour durations) and try
after waiting at least a day.

756 NullInstanceUUID
Message: Encountered a machine with a null InstanceUUID.
Possible cause: vCenter Server may have an inappropriate object.
Recommendation: Resolve the issue and try again.

757 VMNotFound
Message: Virtual machine is not found.
Possible cause: The virtual machine may be deleted: %VMID;
Recommendation: Ensure that the virtual machines selected while scoping the vCenter inventory exist during the
discovery.

802 TimeSyncError
Message: Time is not in sync with the internet time server.
Possible cause: Time is not in sync with the internet time server.
Recommendation: Ensure that the time on the machine is accurately set for the machine's time zone, and retry
the operation.

702 OMSInvalidProjectKey
Message: Invalid project key specified.
Possible cause: Invalid project key specified.
Recommendation: Retry the operation with the correct project key.

703 OMSHttpRequestException
Message: Error while sending request. Message: %Message;
Possible cause: Check the project ID and key, and ensure that the endpoint is reachable.
Recommendation: Retry the operation. If the issue persists, contact Microsoft Support.

704 OMSHttpRequestTimeoutException
Message: HTTP request timed out. Message: %Message;
Possible cause: Check the project ID and key, and ensure that the endpoint is reachable.
Recommendation: Retry the operation. If the issue persists, contact Microsoft Support.
Azure Migrate - Frequently Asked Questions (FAQ)
4/15/2019
This article includes frequently asked questions about Azure Migrate. If you have any further queries after reading
this article, post them on the Azure Migrate forum.
General
Does Azure Migrate support assessment of only VMware workloads?
Yes, Azure Migrate currently supports assessment of VMware workloads only. Support for Hyper-V is in preview;
please sign up here to get access to the preview. Support for physical servers will be enabled in the future.
Does Azure Migrate need vCenter Server to discover a VMware environment?
Yes, Azure Migrate requires vCenter Server to discover a VMware environment. It does not support discovery of
ESXi hosts that are not managed by a vCenter Server.
How is Azure Migrate different from Azure Site Recovery?
Azure Migrate is an assessment service that helps you discover your on-premises workloads and plan your
migration to Azure. Azure Site Recovery, along with being a disaster recovery solution, helps you migrate on-
premises workloads to IaaS VMs in Azure.
What's the difference between using Azure Migrate for assessments and the MAP Toolkit?
Azure Migrate provides migration assessment specifically to assist with migration readiness and evaluation of on-
premises workloads for migration to Azure. The Microsoft Assessment and Planning (MAP) Toolkit has other
functionality, such as migration planning for newer versions of Windows client and server operating systems and
software usage tracking. For those scenarios, continue to use the MAP Toolkit.
How is Azure Migrate different from Azure Site Recovery Deployment Planner?
Azure Migrate is a migration planning tool, and Azure Site Recovery Deployment Planner is a disaster recovery
(DR) planning tool.
Migration from VMware to Azure: If you intend to migrate your on-premises workloads to Azure, use Azure
Migrate for migration planning. Azure Migrate assesses on-premises workloads and provides guidance, insights,
and mechanisms to assist you in migrating to Azure. Once you are ready with your migration plan, you can use
services such as Azure Site Recovery and Azure Database Migration Service to migrate the machines to Azure.
Migration from Hyper-V to Azure: The Generally Available version of Azure Migrate currently supports
assessment of VMware virtual machines for migration to Azure. Support for Hyper-V is currently in preview with
production support. If you are interested in trying out the preview, please sign up here.
Disaster recovery from VMware/Hyper-V to Azure: If you intend to do disaster recovery (DR) on Azure using
Azure Site Recovery (Site Recovery), use the Site Recovery Deployment Planner for DR planning. The Site
Recovery Deployment Planner does a deep, ASR-specific assessment of your on-premises environment. It
provides the recommendations required by Site Recovery for successful DR operations, such as replication and
failover of your virtual machines.
Which Azure geographies are supported by Azure Migrate?
Azure Migrate currently supports Europe, United States, and Azure Government as the project geographies. Even
though you can only create migration projects in these geographies, you can still assess your machines for
multiple target locations. The project geography is only used to store the discovered metadata.
GEOGRAPHY METADATA STORAGE LOCATION
Discovery
What data is collected by Azure Migrate?
Azure Migrate supports two kinds of discovery, appliance-based discovery and agent-based discovery. The
appliance-based discovery collects metadata about the on-premises VMs, the complete list of metadata collected
by the appliance is listed below:
Configuration data of the VM
VM display name (on vCenter)
VM inventory path (host/cluster/folder in vCenter)
IP address
MAC address
Operating system
Number of cores, disks, NICs
Memory size, Disk sizes
Performance data of the VM
CPU usage
Memory usage
For each disk attached to the VM:
Disk read throughput
Disk write throughput
Disk read operations per second
Disk write operations per second
For each network adapter attached to the VM:
Network in
Network out
The agent-based discovery is an option available on top of the appliance-based discovery and helps customers
visualize dependencies of the on-premises VMs. The dependency agents collect details such as FQDN, OS, IP
address, MAC address, processes running inside the VM, and the incoming/outgoing TCP connections from the
VM. The agent-based discovery is optional; you can choose not to install the agents if you do not want to visualize
the dependencies of the VMs.
Would there be any performance impact on the analyzed ESXi host environment?
With continuous profiling of performance data, there is no need to change the vCenter Server statistics level to run
a performance-based assessment. The collector appliance will profile the on-premises machines to measure the
performance data of the virtual machines. This would have almost zero performance impact on the ESXi hosts as
well as on the vCenter Server.
Where is the collected data stored and for how long?
The data collected by the collector appliance is stored in the Azure location that you specify while creating the
migration project. The data is securely stored in a Microsoft subscription and is deleted when the user deletes the
Azure Migrate project.
For dependency visualization, if you install agents on the VMs, the data collected by the dependency agents is
stored in the US, in a Log Analytics workspace created in the user's subscription. This data is deleted when you
delete the Log Analytics workspace in your subscription. Learn more.
What is the volume of data uploaded by Azure Migrate in the case of continuous profiling?
The volume of data sent to Azure Migrate varies based on several parameters. To give an indicative number, a
project with ten machines (each with one disk and one NIC) would send around 50 MB per day. This is an
approximate value and changes based on the number of data points for the NICs and disks (the data sent grows
non-linearly as the number of machines, NICs, or disks increases).
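As a rough illustration only, the per-machine figure implied by the example above (about 5 MB per machine per day for machines with one disk and one NIC) can be used for a first-order estimate. This is not an official formula, and as noted the real volume does not scale linearly:

```python
def estimated_upload_mb_per_day(machines, per_machine_mb=5.0):
    """First-order estimate of daily upload volume, based on the documented
    example of ~50 MB/day for 10 machines (one disk and one NIC each).
    Real volumes grow non-linearly with machines, NICs, and disks."""
    return machines * per_machine_mb

print(estimated_upload_mb_per_day(10))  # prints 50.0, matching the documented example
```

Treat the result as a ballpark for bandwidth planning, not a guarantee.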
Is the data encrypted at rest and in transit?
Yes, the collected data is encrypted both at rest and in transit. The metadata collected by the appliance is securely
sent to the Azure Migrate service over the internet via HTTPS. The collected metadata is stored in Cosmos DB
and in Azure Blob storage in a Microsoft subscription, and is encrypted at rest.
The data collected by the dependency agents is also encrypted in transit (secure https channel) and is stored in a
Log Analytics workspace in the user’s subscription. It is also encrypted at rest.
How does the collector communicate with the vCenter Server and the Azure Migrate service?
The collector appliance connects to the vCenter Server (port 443) using the credentials provided by the user in the
appliance. It queries the vCenter Server using VMware PowerCLI to collect metadata about the VMs managed by
vCenter Server. It collects both configuration data about the VMs (cores, memory, disks, NICs, and so on) and the
performance history of each VM for the last month from vCenter Server. The collected metadata is then sent to
the Azure Migrate service (over the internet via HTTPS) for assessment. Learn more.
Can I connect the same collector appliance to multiple vCenter servers?
Yes, a single collector appliance can be used to discover multiple vCenter Servers, but not concurrently. You need
to run the discovery one after another.
Is the OVA template used by Site Recovery integrated with the OVA used by Azure Migrate?
Currently there is no integration. The .OVA template in Site Recovery is used to set up a Site Recovery
configuration server for VMware VM/physical server replication. The .OVA used by Azure Migrate is used to
discover VMware VMs managed by a vCenter server, for the purposes of migration assessment.
I changed my machine size. Can I rerun the assessment?
If you change the settings on a VM you want to assess, trigger discovery again using the collector appliance. In
the appliance, use the Start collection again option to do this. After the collection is done, select the Recalculate
option for the assessment in the portal to get updated assessment results.
How can I discover a multi-tenant environment in Azure Migrate?
If you have an environment that is shared across tenants and you do not want to discover the VMs of one tenant
in another tenant's subscription, use the Scope field in the collector appliance to scope the discovery. If the
tenants share hosts, create a credential that has read-only access to only the VMs belonging to the specific
tenant, use this credential in the collector appliance, and specify the Scope as the host to do the discovery.
Alternatively, you can create folders in vCenter Server (say, folder1 for tenant1 and folder2 for tenant2) under the
shared host, move the VMs for tenant1 into folder1 and those for tenant2 into folder2, and then scope the
discoveries in the collector accordingly by specifying the appropriate folder.
How many virtual machines can be discovered in a single migration project?
You can discover up to 1,500 virtual machines in a single migration project. If you have more machines in your
on-premises environment, learn more about how to discover a large environment in Azure Migrate.
Assessment
Does Azure Migrate support Enterprise Agreement (EA) based cost estimation?
Azure Migrate currently does not support cost estimation for the Enterprise Agreement offer. The workaround is
to specify Pay-As-You-Go as the offer and manually specify the discount percentage (applicable to the
subscription) in the 'Discount' field of the assessment properties.
What is the difference between as-on-premises sizing and performance-based sizing?
When you specify the sizing criterion to be as-on-premises sizing, Azure Migrate does not consider the
performance data of the VMs and sizes the VMs based on the on-premises configuration. If the sizing criterion is
performance-based, the sizing is done based on utilization data. For example, consider an on-premises VM with 4
cores and 8 GB of memory, running at 50% CPU utilization and 50% memory utilization. If the sizing criterion is
as-on-premises sizing, an Azure VM SKU with 4 cores and 8 GB of memory is recommended; if the sizing
criterion is performance-based, a VM SKU of 2 cores and 4 GB is recommended, because the utilization
percentage is considered when recommending the size. Similarly, disk sizing depends on two assessment
properties: sizing criterion and storage type. If the sizing criterion is performance-based and the storage type is
automatic, the IOPS and throughput values of the disk are considered to identify the target disk type (Standard or
Premium). If the sizing criterion is performance-based and the storage type is Premium, a premium disk is
recommended; the premium disk SKU in Azure is selected based on the size of the on-premises disk. The same
logic is used for disk sizing when the sizing criterion is as-on-premises sizing and the storage type is Standard or
Premium.
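The worked example above can be sketched as follows. This is a simplified illustration of the two sizing criteria, not the actual Azure Migrate sizing algorithm:

```python
import math

def recommended_size(cores, memory_gb, cpu_util=None, mem_util=None,
                     criterion="as-on-premises"):
    """Return the (cores, memory_gb) to look for in an Azure VM SKU.
    Simplified model: as-on-premises keeps the configured size;
    performance-based scales by utilization and rounds up."""
    if criterion == "as-on-premises":
        return cores, memory_gb
    return math.ceil(cores * cpu_util), math.ceil(memory_gb * mem_util)

# The 4-core, 8 GB VM at 50% CPU and 50% memory utilization from the example:
print(recommended_size(4, 8))                                           # (4, 8)
print(recommended_size(4, 8, 0.5, 0.5, criterion="performance-based"))  # (2, 4)
```

The real service then matches these requirements against available Azure VM SKUs rather than returning raw numbers.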
What impact does performance history and percentile utilization have on the size recommendations?
These properties are only applicable for performance-based sizing. Azure Migrate collects performance history of
on-premises machines and uses it to recommend the VM size and disk type in Azure. The collector appliance
continuously profiles the on-premises environment to gather real-time utilization data every 20 seconds. The
appliance rolls up the 20-second samples, and creates a single data point for every 15 minutes. To create the single
data point, the appliance selects the peak value from all the 20-second samples and sends it to Azure. When you
create an assessment in Azure, based on the performance duration and performance history percentile value,
Azure Migrate calculates the effective utilization value and uses it for sizing. For example, if you set the
performance duration to 1 day and the percentile value to the 95th percentile, Azure Migrate uses the 15-minute
sample points sent by the collector for the last day, sorts them in ascending order, and picks the 95th percentile
value as the effective utilization. The 95th percentile ensures that you ignore any outliers that might be included if
you picked the 99th percentile. If you want to pick the peak usage for the period and do not want to miss any
outliers, select the 99th percentile.
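The rollup and percentile selection described above can be modeled with a short sketch. This is a simplified nearest-rank illustration, not the service's actual implementation:

```python
import math

def rollup_15min(samples_20s):
    """Collapse a window of 20-second samples into one 15-minute
    data point by taking the peak value, as the appliance does."""
    return max(samples_20s)

def effective_utilization(points, percentile=95):
    """Pick the given percentile from the 15-minute data points
    (nearest-rank method, a simplification)."""
    ordered = sorted(points)
    rank = max(0, math.ceil(percentile / 100 * len(ordered)) - 1)
    return ordered[rank]

print(rollup_15min([10, 35, 20]))  # 35 -- peak of the 20-second samples

# One day of synthetic 15-minute CPU points: steady load plus one spike
points = list(range(41, 60)) + [99]
print(effective_utilization(points, 95))  # 59 -- the spike is treated as an outlier
print(effective_utilization(points, 99))  # 99 -- the peak is kept
```

This mirrors the trade-off in the text: the 95th percentile filters out rare spikes, while the 99th percentile captures peak usage.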
Dependency visualization
NOTE
The dependency visualization functionality is not available in Azure Government.
Next steps
Read the Azure Migrate overview
Learn how you can discover and assess a VMware environment