
Virtual Machine Backup

VMware supports hot backups and provides two technologies for performing them:

Consolidated Backup
Data Recovery Appliance

There are new features in version 4.1


The VMware Data Recovery appliance is a VM that facilitates backup and comes with a vSphere client plug-in. Improvements in VMware Consolidated Backup include new support for thinly provisioned disks and IDE disks, improved support for busy SAN and iSCSI links, vApps, and Windows Server 2008.

It is possible to back up a running VM; this is achieved either by VMware or a 3rd party vendor leveraging VMware's snapshot feature. As discussed in my virtual machine section, when a snapshot is applied to a VM, the files that make up the VM become unlocked in the file system, and once unlocked they can be backed up. The only thing to worry about is validating that the snapshots were successful prior to creating the backup, and that they are deleted after the backup has finished or crashed; remember these snapshot files can grow to a very large size. VMware introduced a file system sync driver, installed into the VM during the installation of VMware Tools. The job of this sync driver is to flush the file system cache, forcing a write-to-disk prior to the backup process, thus you get the most complete backup you can have with a live running VM. The newer version of the sync driver now has hooks into Microsoft's Volume Shadow Copy Service.

If you have a small environment or a laptop with VMs running, you could simply stop all the VMs and clone them as templates in the compressed format to a different storage location. In a normal world you would back up the VM from the SAN itself, without the need for backup agents installed into the guest O/S; however many companies still install a backup agent (for example NetBackup) in the guest O/S. VMware has developed its own backup APIs which conventional 3rd party vendors can hook into. VMware Consolidated Backup (VCB) isn't actually a backup solution, but rather a collection of command-line tools, scripts and drivers that allow an existing backup vendor to use its backup solution for Windows to access VMFS volumes and files within the VM.

New to vSphere is the VMware Data Recovery (vDR) appliance, a downloadable virtual appliance that assists in the backup process. vDR comes with a point-and-click interface, it works with ESX classic and ESXi, and it includes features such as compression of deduplicated data; there is no network hit when used with shared storage. After the first backup, all subsequent backups are merely the delta changes within the virtual disk, using special changed-block-tracking functionality. However vDR only works in vSphere 4, you can back up a maximum of 100 VMs, and each VM can only be backed up once in a 24-hour period.
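To see the snapshot mechanism described above from the command line, here is a minimal sketch using the Tech Support Mode vim-cmd utility; the VM id (16) and snapshot name are hypothetical, so substitute your own values from the getallvms output:

1. List the VM ids known to this host
   # vim-cmd vmsvc/getallvms
2. Create a quiesced snapshot (without memory) so the VM's base disk files are unlocked
   # vim-cmd vmsvc/snapshot.create 16 pre-backup "backup window" 0 1
3. Copy the VM's files to your backup location, then remove the snapshot
   # vim-cmd vmsvc/snapshot.removeall 16

Remember to verify that the removeall completes, otherwise the snapshot delta files will keep growing.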

vDR

I suspect that these limitations will be removed as the technology progresses.

VCB

VCB works with both vSphere 4 and VI3; you require no additional purchase or education investment, and you can use the same tools to back up physical and virtual machines. It does not have deduplication and is not capable of backing up a full VM and then recording the delta changes after the first backup. The backup load is removed from the network and the ESXi server by means of a dedicated physical Windows server with a connection to your SAN, iSCSI or NAS system. This solution is popular with large environments, especially if you have older versions of VMware.

Using vDR

At this time I am unable to show you vDR until I get a copy of the software.

Using VCB

At this time I am unable to show you VCB until I get a copy of the software.

VMware CheatSheet

This is a quick and dirty cheatsheet for VMware ESXi server version 4.1; many of the commands can be used on older versions.

Processes

vmkeventd - a utility for capturing VMkernel events
vmklogger - a utility for logging VMkernel events
vpxa - this process is responsible for vCenter Server communications; commands received are passed to the hostd process for processing
sfcbd - the Common Information Model (CIM) system, which monitors hardware and health status. The CIM is a standard set of APIs that remote applications can use to query the health and status of the ESXi host. Related processes: sfcb-vmware_int, sfcb-vmware_aux, sfcb-vmware_raw, sfcb-vmware_bas, sfcb-qlgc, sfcb-HTTP-Daemon, sfcb-ProviderManager
vmwareusbarbitrator - VMware USB Arbitration Service; allows USB devices plugged into the host to be usable by the guest
vobd - (one of the processes I still need to investigate)
vprobed - a utility for running the vProbes daemon. VProbes is a facility for transparently instrumenting a powered-on guest operating system, its currently running processes, and VMware's virtualization software, both dynamically and statically
openwsmand - Openwsman is a system management platform that implements the Web Services Management protocol (WS-Management); it is installed and running by default
hostd - the ESXi server host agent; this allows the vSphere client or vCenter access to the host. It consists of two processes: hostd-poll and hostd-worker
vix-high-p, vix-poll - the VIX API (Virtual Infrastructure eXtension) is an API that provides guest management operations inside of a virtual machine that may be running on VMware vSphere, Fusion, Workstation or Player. These operations are executed on behalf of the vmware-tools service, which must be running within the virtual machine, and guest credentials are required prior to execution
dropbearmulti - Dropbear includes a client, server, key generator, and scp in a single compilation called dropbearmulti; this is basically the ssh functionality
net-cdp - CDP is used to share information about other directly connected Cisco networking equipment, such as upstream physical switches. CDP allows ESX and ESXi administrators to determine which Cisco switch port is connected to a given vSwitch. When CDP is enabled for a particular vSwitch, properties of the Cisco switch, such as device ID, software version, and timeout, may be viewed from the vSphere Client. This information is useful when troubleshooting network connectivity issues related to VLAN tagging methods on virtual and physical port settings
net-lbt - a debugging utility for the new Load-Based Teaming feature
net-dvs - a debugging utility for the Distributed vSwitch
busybox (ash) - BusyBox is a software application that provides many standard Unix tools, much like the larger (but more capable) GNU Core Utilities. BusyBox is designed to be a small executable for use with the Linux kernel, which makes it ideal for use with embedded devices; it has been self-dubbed "The Swiss Army Knife of Embedded Linux". BusyBox utilities include ash, the Almquist shell (also known as A Shell, ash and sh)
helper??-? - (one of the processes I still need to investigate)
dcui - the Direct Console User Interface (DCUI) process provides a local management console for the ESXi host
vmkiscsid - VMware iSCSI process, the VMware Open-iSCSI initiator daemon; the files used are /etc/vmware/vmkiscsid/iscsid.conf and /etc/vmware/vmkiscsid/initiatorname.iscsi, plus iscsi_trans_vmklink and iscsivmk-log
vmotionServer - VMware vMotion
VMware High Availability agent - installed and started when an ESXi server is joined to a HA cluster

Ports

22 - allows access to ssh
53 - used for DNS
80 - provides access to a static welcome page; all other traffic is redirected to port 443
443 - acts as a reverse proxy to a number of services to allow for Secure Sockets Layer (SSL); the vSphere API uses this port for communications
902 - remote console communication between the vSphere client and the ESXi host for authentication, migration and provisioning
903 - the VM console uses this port
5989 - allows communication with the CIM broker to obtain hardware health data for the ESXi host
8000 - vMotion requests

Commands

test default gateway - vmkping -D
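As a hedged example of the command above (the IP addresses are made up for illustration), vmkping can also be pointed at any address reachable from a VMkernel interface, which is useful for proving a vMotion or iSCSI network:

# vmkping -D
# vmkping 192.168.1.1
# vmkping -s 9000 192.168.1.50

The first tests the VMkernel default gateway, the second pings an address via the VMkernel TCP/IP stack, and the third sends a 9000-byte packet to check a jumbo-frame path.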

Performance

esxtop - an interactive, top-like utility for monitoring ESXi CPU, memory, network and disk performance in real time
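esxtop can also be run in batch mode, which is handy for capturing performance data over time; a minimal sketch (the delay, iteration count and output file are example values only):

# esxtop -b -d 5 -n 120 > /tmp/esxtop-capture.csv

This samples every 5 seconds for 120 iterations and writes CSV output that can be loaded into a spreadsheet. In interactive mode the keys c, m, n and d switch between the CPU, memory, network and disk adapter views.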

Multipathing

esxcfg-mpath -l - list all paths with detailed information
esxcfg-mpath -L - list all paths with abbreviated information
esxcfg-mpath -m - list all paths with adapter and device mappings
esxcfg-mpath -b - list all devices with their corresponding paths
esxcfg-mpath -G - list all multipathing plugins loaded into the system
esxcfg-mpath --state <active|off> - set the state for a specific LUN path; requires the path UID or path runtime name in --path
esxcfg-mpath -P - used to specify a specific path for operations
esxcfg-mpath -d - used to filter the list commands to display only a specific device
esxcfg-mpath -r - restore path settings to configured values on system start
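Putting a few of these together, here is a hedged example of examining and disabling a single path; the device and path names are invented for illustration, so take the real ones from the -b or -l output first:

# esxcfg-mpath -b -d naa.60060160555d1d00cff95e65664ee011
# esxcfg-mpath --state off --path vmhba33:C0:T1:L0
# esxcfg-mpath --state active --path vmhba33:C0:T1:L0

The first command shows the paths for one device, the second takes a path offline (for example before switch maintenance), and the third brings it back.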

Distributed Power Management (DPM)

More companies are looking into a greener environment; the fewer servers running, the friendlier and more cost-saving the company is. VMware addresses both of these with Distributed Power Management (DPM): its job is to monitor the cluster's usage and move the VMs during non-peak times onto a smaller number of ESXi servers; the unneeded ESXi servers are then put into standby mode, so they consume less power and generate less heat. The following features are new to version 4.1:

Full support for DPM (no longer experimental)
Replacement of the ACPI/magic-packet/WoL support with a much more reliable method integrated with ILO/RAC/IMM/DRAC boards

I am still of the old school of "when a server is powered on, leave it on"; however, over the years hardware has become very reliable, and the cost of servers has decreased, especially for simple servers like web servers, SMTP servers, etc. So buying an additional server and using it in the cluster as a standby server or during peak times can have its advantages; however this means that you don't get bang for your buck on your hardware, as your server could be on standby most of the time. DPM maintains a lookup table of power consumption, and three main conditions are used in the DPM calculation:

Guest CPU and memory usage
ESXi server CPU and memory usage
ESXi server power consumption

DPM takes into account a load history over a 20-minute period for power-off events; for power-on events it checks every 5 minutes. DPM runs a number of "what-if" simulations to determine the best VMs to move, to allow the power-off of the best ESXi server. DPM also interacts with HA, so it does not break any rules for the number of ESXi servers you tolerate in a failed state, for example +1, +2, +4 redundancy.

Configuring DPM

You enable DPM in the properties of the DRS cluster, where you have three choices: off, manual and automatic. In a new environment I generally select manual, see how the cluster performs and get a feel for what recommendations come through; just before going LIVE I then switch to fully automatic. You can also adjust how aggressive VMware will be when powering off ESXi servers; again, use the testing period to try different levels and get a feel for when your servers will be powered off.

There are two methods that can be used to enable DPM's soft power-on and power-off functionality

The Wake-on-LAN feature used by most NICs (you may need to check that this feature is enabled in the BIOS)
Using the power management support delivered by most ILO/RAC/IMM/DRAC cards

You can also confirm the WoL feature in the "Network Adapter" screen; in the below screen you can see that all my NICs support this feature.

Testing DPM

To test the ESXi server standby mode, select the server and in the commands panel you should see "Enter Standby Mode"; you will then see the below screen, where you can move any running VMs to another ESXi server.

I then received another warning message

You can watch the powering down in the "recent tasks" window.

Eventually the ESXi server powers down; in the vSphere client you can see the state as "standby", which is also shown in the summary -> general panel. To power the server on, select the "power on" button; in my case a WoL packet will be sent and the server should power on.

You can monitor the progress in the "recent tasks" panel, eventually the server will come back online

Once tested, you can set up DPM in the cluster settings window; you can have a different option for each ESXi server within the cluster.

After changing the DPM threshold to be more aggressive, I selected the cluster, then the DRS tab, then selected the "Run DRS" tag in the top right-hand corner, and VMware came back with a recommendation that I should power off vmware2 to conserve power.

Configuring DPM for ILO/RAC/IMM/DRAC cards

I will leave you to investigate how to configure your ILO/RAC/IMM/DRAC card to support the power on/off feature, but to set it up in VMware you need to select the ESXi server -> configuration tab -> power management tag in the Software panel -> finally select properties in the top right-hand corner; eventually you get to the edit IPMI/ILO settings window, where you supply the details and click OK.

Distributed Resource Scheduler (DRS)

DRS is an automated vMotion: when DRS recognizes an imbalance in the resources used on one ESXi server in a cluster, it rebalances the VMs among those servers. DRS also handles where a VM should be powered on for the first time; it uses VMware HA software to perform this. HA is in charge of detecting any crashes and making sure that the VM is started on another node within the cluster; when the ESXi server is repaired, DRS will then rebalance the load again across the whole cluster. There are a number of new features in DRS:

Improved error reporting to help troubleshoot errors
Enhanced vMotion compatibility, forcing your new servers to have backward compatibility with old servers
Ability to relocate large swap files from shared and expensive storage to cheaper alternatives

When I first started using DRS I noticed that there was not an even number of VMs on each ESXi server within the cluster; this is not the intention of DRS. Different VMs create different amounts of resource demand, and DRS's primary goal is to keep a balanced load across each ESXi server, thus you may end up having more VMs on one ESXi server if their loads are not very heavy. DRS also won't keep moving VMs in order to keep the cluster perfectly balanced; only if the cluster becomes very unbalanced will it weigh up whether the penalty of a vMotion is worth the performance gain. DRS is clever enough to try and separate the large VMs (more CPU and memory) onto different ESXi servers and to move the smaller VMs if the balance is not right.

DRS has a threshold of up to 60 vMotion events per hour, and it checks for imbalances in the cluster once every five minutes. VMware prevents "DRS storms": for example, if an ESXi server crashes, DRS starts the VMs on other ESXi servers and then sees an imbalance, which could cause yet more vMotion events trying to rebalance the cluster; this is prevented because DRS will wait at least five minutes before checking the cluster, and it will only offer recommendations based on your migration threshold. This allows the administrator to control how aggressively DRS tries to rebalance the cluster. You can choose from three different levels of automation and also set a migration threshold; you can also have resource pools.

Manual - the administrator is offered recommendations of where to place a VM and whether to vMotion a VM
Partially automated - DRS decides where a VM will execute; the administrator is offered recommendations of whether to vMotion a VM
Fully automated (default) - DRS decides where a VM will execute and whether to vMotion it, based on a threshold parameter, obeying any rules or exclusions created by the administrator

Setting DRS to manual or partial does not break VMware HA: if an ESXi server fails, the VMs get powered on without asking where to power them on, and you will be asked later to rebalance the cluster. It all depends on what you require within your environment and the SLA agreements that you have in place; some administrators prefer to select manual and have total control, which is OK if your environment is small, while larger companies may go fully automated as they generally have large cluster environments and lots of spare capacity. You can exclude certain VMs from DRS if you so wish; there are many options and permutations available. You can also set a migration threshold for DRS, which allows you to set how aggressive DRS should be in balancing the cluster. There are five threshold levels:

Level 1 - Conservative - triggers a vMotion if the VM has a level five priority rating
Level 2 - Moderately Conservative - triggers a vMotion if the VM has a level four or higher priority rating
Level 3 - Default - triggers a vMotion if the VM has a level three or higher priority rating
Level 4 - Moderately Aggressive - triggers a vMotion if the VM has a level two or higher priority rating
Level 5 - Aggressive - triggers a vMotion if the VM has a level one or higher priority rating

DRS automation levels allow you to specify a global rule for the cluster, and you can also set the level on a per-VM basis; you can completely exclude a VM from DRS (you might want to exclude cluster servers), and you can also impose affinity and anti-affinity rules, for example making sure that two VMs are not on the same ESXi server or vSwitch.

Configuring DRS

Firstly you must make sure vMotion has been set up and tested; you really should check every VM and make sure it has no problems migrating to any ESXi server within the cluster. I will cover the manual mode only, as partial and automatic are hybrids of manual anyway; I will also use the aggressive mode so that we can at least see some action. Follow below to set up DRS in manual mode. First we have to create a cluster by selecting the "new cluster" icon.

DRS configuration (manual mode)

Enter the cluster name and select "Turn on VMware DRS" only; we will be covering HA later.

Now I select manual as I want full control

You can use power management features like Wake-on-LAN or ILO; I select off for the time being, as we will be covering power management later.

Again, I will cover Enhanced vMotion Compatibility later in this section, so disable it for the time being.

We will keep the VMKernel swap in the same directory as the VM

Finally we get to the summary screen

Before we start to add ESXi servers to the cluster, I just want you to see a few things: if you notice, in the general panel DRS is on but we have no resources available, as we have not added any ESXi servers to the cluster; the VMware DRS panel is a bit sparse as well.

We can add an ESXi server to the cluster by simply dragging and dropping it; when you do this the following screen appears, where I choose to use the root resource pool.

You then see a summary screen.

After adding both my ESXi servers into the cluster, we can clearly see the resources available increase: there are now 2 hosts, 4 CPUs, and total memory and CPU resources available. The DRS panel has also come alive; there are no recommendations yet, but that is because we have hardly any VMs powered on.

You can "run DRS" via the DRS tab in the cluster, see the top right-hand corner, you can see when it is completed in the "recent tasks" windows at the bottom, you can also edit the DRS configuration as see below, we will be going into much more detail later

So let's get some recommendations going. If you look in the above screenshot, I already have a VM running (windows_2008) on vmware1, so when I try to power on VM linux01, DRS kicks in and the recommendation screen appears. It already knows that I have a VM running on vmware1, so it suggests that I should power this one on the other ESXi server (which has nothing running); I am going to ignore this and continue to power it on vmware1.

After I had powered on a number of VMs, "Run DRS" eventually produced the below recommendation: it sees that the cluster has become unbalanced and requests that I move VM oralinux02 from vmware1 to vmware2. By selecting the "Apply Recommendations" button at the bottom right-hand corner, DRS will automatically move the VM for me; if I had the automatic level turned on, it would have performed this task without prompting me.

You can also see cluster imbalances from the summary screen on the cluster: if you look in the VMware DRS panel you can see 1 recommendation and a warning icon, and if you click on "View resource distribution chart" you can see that vmware1 does, ever so slightly, have a red part on the graph, hence the warning and recommendation.

Each green box on the DRS resource distribution chart is a VM, and its size indicates how much resource it consumes; if you hover over one of the green boxes (I have selected the largest), you get a more detailed look at what resources the VM uses.

I then selected the memory tab; as you can see, I am using more than 50% of the total memory on vmware1, hence the recommendation to migrate some VMs onto vmware2. I try to make the most of my hardware and generally try to keep a balance between the ESXi servers within the cluster.

DRS Options

I am now going to cover the other options that are available to you. From the DRS tab select the "edit" tag, then select Rules; here I create a rule for my Oracle RAC. I do not want both my RAC nodes on the same ESXi server, so I select "Separate Virtual Machines" and add the two RAC nodes. Remember that if I only had one ESXi server it would start both nodes on that same server, but if another ESXi server is available it will recommend that I move one of the RAC nodes.

DRS cluster affinity/AntiAffinity rules

I mentioned above that you can have individual settings for each VM; to change the automation level of a VM, just select the automation level column and a drop-down list will appear, then select one of the five options.

Custom automation levels

We can change the automation level in the VMware DRS options.

Changing the DRS automation level

In an ideal world all the ESXi servers would be the same, but there are times when you want to upgrade; Enhanced vMotion Compatibility (EVC) helps address CPU incompatibility issues. New CPUs are able to mask attributes of the processor, and EVC uses this feature to make them compatible with older ESX servers; conceptually, EVC creates a common baseline of CPU attributes to engineer compatibility. To enable this feature you must power off all your VMs, because EVC generates CPU identity masks to engineer compatibility between hosts; it will also validate all your ESXi server hosts for compatibility, as seen in the screen below. VMware will be evolving this feature to allow upgrades across ESX servers using different CPU architectures.

Enabling enhanced vMotion compatibility

VMKernel swap file

You can change the VMKernel swap file location; however, take heed of the warning message when changing the location.

I just want to cover maintenance mode; you can select it on the summary page of an ESXi server. Maintenance mode is an isolation state used whenever you need to carry out critical ESX server tasks, such as upgrading firmware, memory or CPUs, or patching the server itself.

It will prevent other vCenter users from the following


creating new VMs and powering them on
creating any vMotion events, whether initiated by an administrator or by automatic DRS

Maintenance mode survives reboots of the server, which allows administrators time to confirm that their changes have been effective before VMs can be executed on the ESXi server. When you turn on maintenance mode with DRS set to fully automatic mode, all the VMs will be moved automatically; you will be asked to confirm that you do want to go into maintenance mode and that you want to move the VMs, as seen on the screen below.

DRS Faults and History

If you select the DRS tab and then the faults button, you get the screen below; from here you can see if there have been any problems regarding DRS. Problems might be incompatibilities, rules that have been violated, or insufficient capacity to move a VM. I don't have any faults, but here is a screenshot of the faults page.

Lastly we come to the history. If you have set up fully automatic DRS, you will not know where your VMs will be hosted; DRS could be moving them from one day to the next. To keep track of what is going on, there is a recorded information page that allows you to see all the DRS events: select the cluster, then the DRS tab, and finally select the history button. You can see from my history that some DRS events have occurred; remember that only 60 vMotion events can occur within one hour.

Fault Tolerance (FT)

VMware Fault Tolerance is a new feature. At the heart of FT is the record/replay feature, which was originally a programmer's debugging tool; with record/replay you can capture all the virtual interrupts that take place inside a VM. This means the recording process can be redirected to another VM on a different ESXi server in real time, so two ESXi servers can have the same events replayed and both will be in a synchronous state. This feature is known as lockstep technology and is an attribute of modern CPUs. VMware is working in conjunction with Intel and AMD to offer support for this feature, which is known to them as vLockstep. Fault Tolerance has some advantages and disadvantages.

Advantages

offers real-time protection for VMs
avoids end users being affected by downtime or hardware failure
provides seamless failover without affecting the user's client application
works for all VMs regardless of the software state (stateful or stateless)
protects systems that cannot be given fault tolerance or HA using other vendors' technologies

Disadvantages

FT requires modern CPUs that have the lockstep attribute
VMware recommends a maximum of 8 VMs (4 primaries and 4 secondaries) per ESXi server
the secondary VM consumes CPU and memory resources, but is only used when a failure occurs
there is a network overhead to maintain the FT logging network (50Kbps for each FT protection)
currently FT protects only VMs with one vCPU
CPU speeds should not vary too much (<400MHz) between ESXi servers
there are many features that you cannot use with FT: VMDirectPath I/O, VM clustering, snapshots, SVMotion, DRS

Bear in mind this is new technology, and I presume that as it matures many of the disadvantages will be addressed; you can work around some of them by using affinity rules to prevent specific multi-node systems from residing on the same ESXi server. There are a number of requirements that you need to enable FT:

Compatible CPUs - the CPUs must have the lockstep attribute
HA clustering - you must have a fully working cluster
vMotion enabled - vMotion must be working
FT logging enabled - FT logging should be enabled on a vSwitch with NICs dedicated to the logging process
Correct type of virtual disks - VMs must have virtual disks set with the "Support Clustering Features" option such as "Fault Tolerance Enabled"; they must also be in zeroedthick (default) format

Configuring FT

CPU compatibility is the most challenging aspect of getting FT working; currently there is limited support, but as new CPUs hit the market these will support the lockstep feature. Check the VMware site to see if your CPU is supported; I generally try to enable FT and see if I get any error messages. Follow below to enable and configure FT.

First you have to confirm that certificate management has been set up; this enhances security by making sure the ESXi server is not spoofed. If ESXi servers are added to vCenter with just a username and password, without this certificate check, VMware FT will not start correctly. From the home page -> administration -> "vCenter server settings" you get the screen below; make sure "vCenter requires verified host SSL certificates" is ticked.

Enabling FT

Make sure both vMotion and HA are working, then you need to enable an FT logging VMKernel port group; every ESXi server will require an additional IP address for this port group. Make sure when creating the port group that you select "Use this port group for Fault Tolerance logging".

Hopefully you should end up with something like below

Check that the VM's disk types are thick. You can do this by selecting the VM -> "Edit settings" -> then select each disk and check the Disk Provisioning type; you can see in the screenshot below that this virtual disk is of type thick. You can convert thin disks into thick to make them compatible with FT.
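If a disk does turn out to be thin, it can also be inflated from the command line; a minimal sketch, assuming the VM is powered off and with an example datastore path:

# vmkfstools --inflatedisk /vmfs/volumes/datastore1/linux01/linux01.vmdk

This converts the thin disk to an eagerly zeroed thick disk, which satisfies the FT disk requirement.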

Finally we can enable FT on a VM, right-click on the VM -> select Fault Tolerance -> select "turn on Fault Tolerance"

You will see the below warning message, regarding disk provisioning and other information

Here I get two warnings: one that this VM has two vCPUs (remember you can only have one vCPU), the other that my hardware (HP DC7800) is not compatible; however, we will continue.

I can double-check whether the hardware is compatible by selecting the ESXi server; in the general panel you should see "Host Configured for FT" and a small speech bubble at the end. Click the speech bubble and you get the "Fault Tolerance requirement error messages"; as you can see, my HP DC7800s are not compatible.

After I removed one vCPU from the VM I tried again; you can watch the progress from the "recent tasks" window.

Although my hardware does not support FT, VMware happily configures it for this VM. Once configured, if you select the VM you will notice an extra "Fault Tolerance" panel; the VM is not running, as it will not let me start it due to the hardware compatibility problem.

If you select the cluster and then the "virtual machines" tab, you will notice that there are two linux01 VMs: the primary and the secondary.

Looking at each ESXi server in the "Fault Tolerance" panel, you can see which one is the primary and which is the secondary.

Lastly you can either migrate, disable the fault tolerance or turn off the fault tolerance for this VM

If I had got this working, you could have started the VM, with the primary and the secondary running on different ESXi servers.

If I get the chance to setup a FT on compatible hardware I will revisit this section.

High Availability (HA)

VMware's high availability has a simple goal: if an ESXi server crashes, all the VMs running on that server go down as well. The other ESXi servers within the cluster will detect this and restart the VMs on the remaining ESXi servers; once the failed ESXi server has been repaired and is back online, the cluster will be rebalanced if DRS is enabled and set to automatic. The new features regarding HA in version 4.1 are:

Existing advanced options now available in the GUI
New advanced options to control memory and CPU alerts
Three different methods to indicate how to measure load on the cluster

VMware's HA is actually Legato's Automated Availability Manager (AAM) software, which has been re-engineered to work with VMs; VMware's vCenter agent interfaces with the VMware HA agent, which acts as an intermediary to the AAM software. vCenter is required to configure HA but is not required for HA to function (DRS and DPM do require vCenter). Each ESXi server constantly checks the others for availability; this is done via the Service Console vSwitch. HA and DRS work together to make sure that crashed VMs are brought back online quickly and to keep vMotion events to a minimum. HA does require shared storage for your VMs, as each ESXi server must have access to the VMs' files; you should also make sure that DNS is configured for both forward and reverse lookups. The below screenshot displays the HA directory; if you look closely you can see the aam directory.
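A quick sanity check for the forward and reverse lookups mentioned above can be run from the Tech Support Mode shell or any management workstation; the host name and IP address below are examples only:

# nslookup vmware1.mydomain.local
# nslookup 192.168.1.11

The forward lookup should return the host's IP address and the reverse lookup should return its name; if either fails, fix DNS before troubleshooting HA any further.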

Normally in a cluster you have a redundancy of +1 or more; for example, if you need five ESXi servers to support your environment, then you should have six ESXi servers within the cluster. The additional ESXi server helps during a server crash or when you need to update/repair a server, so there is no degradation of your services.

If you have previous experience of clusters you will have heard of split-brain: basically this means that one or more ESXi servers become orphaned from the cluster due to network issues; such a server is also known as the isolated host. The problem with the split is that each part thinks it is the real cluster. VMware's default behavior is that the isolated host powers off all its VMs, so the locks on the VM files become available for other ESXi servers to use. So how does an ESXi server know that it is isolated? You could configure a default gateway, so that if it cannot reach this gateway then there is a problem; you can also use an alternative IP device as a ping source. Try to make sure that you have redundancy built into your Service Console network (multiple NICs or even a second Service Console).

With version 4 you must have at least one management port enabled for HA for the HA agent to start without producing error messages; try to set this port up on the most reliable network. To configure a management port, follow below: in vCenter select the ESXi server, then click the configuration tab and select networking from the hardware panel. Select properties on vSwitch0 (or whichever one you prefer). Then select add, select the VMKernel radio button, type a friendly name and make sure that you select "use this port group for management traffic".

Management port

The next screen will ask you for an IP address; in the end you should have a management port group for each ESXi server. Here are examples from both my ESXi servers, starting with vmware1.
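For reference, the same kind of port group can be sketched out from the Tech Support Mode command line; the port group name, IP address and netmask below are assumptions, and note that the "use this port group for management traffic" tickbox itself still needs to be set from the vSphere client:

# esxcfg-vswitch -A "Management2" vSwitch0
# esxcfg-vmknic -a -i 192.168.1.21 -n 255.255.255.0 "Management2"
# esxcfg-vmknic -l

The first command adds a port group to vSwitch0, the second creates the VMkernel NIC with its IP address, and the third lists the VMkernel NICs so you can confirm the result.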

If you are still unsure how to set this up, then look at my network section for more information on port groups.

Configuring HA

Now that you have a management port group set up, you are ready to configure HA. You should already have a cluster set up (if not, see my DRS section): select the cluster, then the DRS tab, then select the edit tag. When you select "Turn on VMware HA" you should see several more options appear in the left panel; we will discuss these later.

When you click on OK, you can watch the progress in the "recent tasks" panel at the bottom

A lot of additional processes were started on both ESXi servers; notice the /opt/vmware/aam directory.

There are three option panels with which you can configure HA. The "enable host monitoring" option does not stop the HA agents; it basically stops the network monitoring between the ESXi servers, so if you need to perform maintenance on the network or default gateway you may need to untick this option while you carry out your repairs. You can control the number of ESXi server failures you tolerate before HA stops powering on VMs on the remaining servers; you have the following choices:

The number of ESXi servers you think you could lose and still deliver acceptable performance
A percentage of free resources
A specific host

HA options

You can set different startup priorities for VMs and also configure the isolation response should an ESXi server suffer from the split-brain phenomenon; use the drop-down lists to make your choices.

HA virtual machine options

You can automatically restart a VM if the VMware Tools heartbeat is not received within a set time period. Remember though, it will only power on/restart the VM; what happens with the applications within the VM is down to you.

HA VM monitoring

HA testing and Monitoring

The methods to test a cluster are:


Pull the power from an ESXi server (the best test of all)
Disconnect the network to which the management port group is connected
Disable the Service Console interfaces

I disconnected the network from vmware2 and waited a little while; eventually the cluster picked up that vmware2 was no longer available and migrated the only running VM to vmware1. You can check the progress in the "Tasks and Events" tab.

I will probably come back to this section when I get more experience with HA, perhaps detailing some errors and problems; I will also cover the various advanced HA settings that you can enter manually.

Installation

VMware is pushing hard on the embedded version of its product; because the footprint is now much smaller, it can be embedded into flash drives and ROMs. However, for the time being there is still a good old-fashioned installable version, which is freely available for use. The following features are new to ESXi 4.1:

64-bit only
support for IPv6
support for 512GB of RAM, 64 cores and any combination up to a maximum of 256 vCPUs, which includes 1-, 2-, 4-, and 8-way virtual machines
native SATA disk support
no longer licensed by a .lic file; this is now handled by license strings, either locally or via vCenter

Version 4 tries to improve on the last version and offers many advantages over previous releases:

Reduced patch burden - by removing the COS from the VMkernel, patch updating has been reduced drastically
Restricted Access - ESXi 4 has a lockdown mode which prevents direct access via the vSphere client; the host can then only be managed via the ILO port or vCenter
Rapid Provisioning - with the embedded version there is no installation, only configuration of a few items: networking and storage
Greater Reliability - because you can install ESXi on solid-state devices (ROM, flash drives), the failure rate associated with physical disks is reduced
Hardware Monitoring - this has been integrated into the system, and thus there is very little need for 3rd party products

Installation

As I have mentioned, you can have a running ESXi server from a flash drive; this is called the embedded version of the VMware hypervisor, and eventually this O/S will be in a ROM on the motherboard. All that is left to do is set up a password, the IP address and the server name, which can all be done via the Direct Console User Interface (DCUI). To create a USB memory stick for ESXi, follow the steps below (substitute the latest version, as I may not be using it here). Make sure that the USB stick is supported by VMware; here are some of the supported ones, but do check the VMware web site.

USB stick

Kingston DataTraveller II
Lexar JumpDrive
SanDisk Cruzer Micro
SanDisk SADUFD2AA-4096

For testing purposes I would try any USB stick and see if it works; however, try to make sure it is USB 2.0 compatible, is at least 1GB, and is a high-speed device. You will need to download the installable version ISO file; from this we will obtain the .dd image file, which will be used to image a USB stick. Below is how to install onto a USB stick using Linux:

1. Create a mount point for the ISO image
   # mkdir /mnt/isocd
2. Mount the ISO image using the mount command
   # mount -o loop -t iso9660 -r /vmfs/volumes/local_esx2/VMware-VMvisor-Installer-4.0.0-1450815.x86_64.iso /mnt/isocd
3. Copy the image.tgz file to /root
   # cp /mnt/isocd/image.tgz /root
4. From image.tgz, extract the .bz2 file, which contains the .dd file
   # tar -zxvf /root/image.tgz *.dd.bz2
5. Unzip the .bz2 file
   # bunzip2 /root/usr/lib/vmware/installer/VMware-VMvisor-big-NNNNNN-x86_64.dd.bz2
6. Identify the Linux device name for the USB stick using fdisk
   # fdisk -l | grep Disk
7. Use the dd command to transfer the image file to the device (the file in this case was 890MB)
   # dd if=VMware-VMvisor-big-NNNNNN-x86_64.dd of=<USB device path>
8. Confirm that the USB stick holds an exact image by using the md5sum command
   # ls -l VMware-VMvisor-big-NNNNNN-x86_64.dd
   # md5sum VMware-VMvisor-big-NNNNNN-x86_64.dd

I am not going to go over how to configure the rest of the ESXi server, as there are many documents on the web already and the DCUI is very easy to use; however, I will point out some defaults that are created:

A default virtual switch called vSwitch0 is created with two port groups, called VM Network and Management Network
The root account by default has no password set
The default domain name will be localhost.localdomain

If the local disks are blank then a local VMFS volume called datastore1 will be created

One handy thing regarding the network setup is that you can now test the network configuration from within the DCUI; a couple of DCUI screens are worth noting:

Configure Lockdown Mode - we have already mentioned this; it prevents access to the ESXi server directly from a vSphere client
Restart Management Network - restarts the management network when you have made changes, i.e. changes to the IP address, etc
Disable Management Network - if you have very high security requirements, you may want to disable the management network
Configure Keyboard - this only changes the keyboard, not the language on the screen
View Support Information - a handy screen if you need to give VMware support some information, basically serial and license numbers
View System Logs - you can view the messages, config, management agent and virtual center agent system logs

The last thing to mention is tech support command-line/Putty access. Although you can use Putty to access the ESXi server, it is not supported and may be removed in future releases. To configure it from the DCUI screen, select "troubleshooting mode options", then select "Enable remote tech support (SSH)", then restart the management network. You should now be able to ssh to the ESXi server and log in as root. However, if you want to use a command-line tool, you should use either vCLI or PowerShell; I will be covering these tools in another section. You can also access a command-line from the console, as long as "Local Tech Support" has been enabled in the troubleshooting mode options screen.

Console command-line (make sure "Local Tech Support" has been enabled in the DCUI)

1. At the splash screen hit ALT+F1
2. Login as root using root's password
3. Now you should have a command-line
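Once you are in via ssh or the local console, here are a few hedged examples of what the tech support shell gives you; the VM id 16 is hypothetical, so take the real id from the getallvms output:

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/power.getstate 16
# vim-cmd vmsvc/power.on 16

The first lists the registered VMs and their ids, the second queries a VM's power state, and the third powers it on.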

vSphere client Installation

You have two ways in which to access and configure your ESXi servers:

vSphere client - can configure and control a standalone ESXi server; you cannot, however, use features like vMotion
vCenter - the bells-and-whistles ESXi configuration and control tool, but it requires an underlying O/S to run

I am not going to go into great detail on how to install the vSphere client software, as it is a standard Windows application. Once it is installed and running you will be presented with a login screen as below; from here you can access any of your ESXi servers, just enter the IP address or server name (preferred, if using DNS) and the login details.

When you have entered the login details you will be taken to the main screen; compared to the vCenter screen it is rather sparse (see below for a picture of the vCenter main screen).

Selecting the inventory icon will take you to the ESXi server main screen; here you have a number of tabs, all of which I will be covering in much greater detail in forthcoming sections.

Getting Started - deploy a VM from a marketplace or create a new VM
Summary - general information on the VM, resources (storage and disk), commands
Virtual Machines - details of all the virtual machines hosted on this ESXi server
Resource Allocation - see what resources (CPU, memory and storage) the VMs are using
Performance - graphs that show what performance a VM is using (CPU, memory, storage and networking)
Configuration - configure the ESXi server's hardware and software (licensing, DNS, NTP, etc)
Local Users and Groups - display the local users and groups
Events - display events that have happened on the ESXi server
Permissions - define roles and permissions that will be used for security

vCenter Installation

vCenter is a management application that allows you to manage many ESXi servers in a single window; the following features are only available when using vCenter:

Distributed virtual switches
Microsoft user accounts
Templates and template management
Cold and hot migrations (vMotion and Storage vMotion)
Distributed Resource Scheduler (DRS)
High Availability (HA)
Fault Tolerance (FT)

vCenter requires a database which can be any of the following


Microsoft SQL Server 2005 and 2008
Microsoft SQL Server Express, for test and development purposes; it has a maximum 4GB size limit (this is included on the vCenter installation CD)
Oracle 10g or 11g
IBM DB2

You can run vCenter either on a physical or a virtual machine; I prefer to run mine in a virtual machine so that I can take advantage of vMotion, HA, etc. It runs perfectly happily in a VM environment, and both options are supported by VMware. You can also join together multiple vCenter servers in a vCenter group to allow them to be managed using a single vSphere client connection; this is known as linked mode. Linked mode uses Microsoft Active Directory Application Mode (ADAM) to store the vCenter linked mode configuration data. If you require further information on this subject I point you to the web, as I will not be going into detail on it here.

For my test environment I created a VM using a standard Windows 2008 64-bit trial version; I used about 40GB of storage and set it up as a standard installation. Once the Windows 2008 server was up and running I downloaded the vCenter software and installed it (the questions asked are basic, and if in doubt select the default option); the only thing to watch is that I got the vCenter installation to install a Microsoft SQL Server Express instance for me, as my environment is only small. Use this link to see the VMware installation guide and requirements; there are also plenty of videos on YouTube and documents on the web that explain this process in detail, so I am not going to reinvent the wheel by explaining it here. Make sure the below services on the Windows 2008 server are running; sometimes they are not.

Once vCenter is up and running, use the vSphere client to log into it; you should then get the main vCenter screen below, which has far more options available, and here you can add ESXi servers to manage. vCenter is a licensed product, but it allows you to manage many ESXi servers with ease, plus it fully supports vMotion, HA, etc; remember, though, that it does come with the extra baggage of managing a vCenter and Windows 2008 environment, even if it is a virtual machine.

Here is another screen below showing that I have two ESXi servers being managed by this vCenter; we will go into much more detail on configuring ESXi servers and vMotion in other sections. vCenter has a hierarchical format; the container types that are available are datacenter and folder. You can group together ESXi servers that are related to one another (Production, QA/Test, DEV); here I have grouped my two ESXi servers into a datacenter called Production.

When you first add an ESXi server to vCenter it may take some time, as two critical changes are taking place: the vCenter Management Agent is installed onto the ESXi server, and a user account called vpxuser is created (this is a service account). The agent allows vCenter to communicate with the ESXi server via the hostd process; the agent has four main tasks:

Relay ESXi server configuration changes to hostd
Relay virtual machine create and change requests to hostd
Relay resource allocations for virtual machines to hostd
Gather performance information, alarms and alerts from hostd

One important note: try not to add hosts using the IP address instead of the server name; many of the VMware products require name resolution, and without reverse lookup in DNS this can cause all sorts of problems. It is possible to access vCenter via a mobile device: VMware has developed the vCenter Mobile Access virtual appliance, a web server that communicates with vCenter; the vCenter information is rendered into plain HTML and formatted to work with mobile devices. It's not fully supported and is deemed a power toy for the time being.

Introduction

In the old days data centers were full of physical servers doing nothing, or at most running at only 10% capacity; this was a waste of money (power, cooling, support contracts, etc) and space. Companies are always looking to reduce overall costs, and that is where virtualization comes in. Virtualization has been around since the 1970s, but it was not until the late 1990s that it became a hot topic once again. In 1999 a company called VMware released VMware Workstation, which was designed to run multiple operating systems at the same time on a desktop PC. In 2001 VMware released two server versions: VMware GSX Server (requires a host O/S to run), which was later renamed VMware Server, and VMware ESX Server, which had its own VMkernel (known as the hypervisor) and ran directly on the hardware; a new filesystem was also created, called the VMware Machine File System (VMFS). Since the first release we have had ESX Server 2.0 and ESXi 3.5 (2007), which brings us to the latest version, VMware ESXi Server 4.1 (2010). VMware ESXi server has seen many changes over the years, and the latest versions include support for the following; I will be going into much more depth on the below topics.

High Availability (clustering)
vMotion
Fault Tolerance
Storage vMotion
Distributed Resource Scheduler (DRS)
Distributed Power Management (DPM)
Data Recovery

Going back to my first paragraph: virtualizing many servers reduces costs, administration and space, and as most companies are now trying to project a green image, virtualizing whole environments (including Production environments) seems to be the way things are going. So what is virtualization? What it is not is emulation or simulation.

Emulation is the process of getting a system to work in an environment it was never designed for. An example of this is the old Atari emulation software that you can download to play old Atari games; in the background, translations are carried out in order for you to play the games, which has major performance implications. Why emulate? Because it is cheaper than rewriting the entire code.

Simulation gives you the appearance of a system, as opposed to the system itself. For example, NetApp has a simulator that appears and works like actual NetApp hardware, but it is not. Another example is a flight simulator: it gives the appearance of a real plane, but obviously it is not.


Virtualization allows you to create virtual environments (Linux, Windows) and to make each appear as if it were the only environment using the physical hardware. A virtual machine will have a BIOS (Phoenix BIOS), NICs, a storage controller, etc, and the virtual machine has no idea that it is not the only one using the physical hardware. The ESXi server will intercept the virtual interrupts and redirect them to the physical hardware inside the ESX host; this is known as binary translation. The latest CPUs from Intel and AMD assist in virtualization with their own technologies; see the appropriate web site for more information:

Intel - Intel VT
AMD - AMD-V

VMware ESXi Architecture

VMware decided to remove the service console (COS) from its latest kernel (hypervisor), which means the hypervisor has no dependencies on an operating system; this improves reliability and security and reduces the need for frequent updates (patches). The result is a much more streamlined O/S (approximately 90MB), which means it can be embedded onto a host's flash drive, thus eliminating the need for a local disk drive (a greener environment). The heart of ESXi is the VMkernel; this controls access to the physical hardware and is similar to other operating systems in that processes are created and file systems are used. The VMkernel is designed for running virtual machines; it focuses on resource scheduling, device drivers, and I/O stacks. You can communicate with the VMkernel via the vSphere API, which the vSphere client or vCenter can use.

VMware ESXi server has a number of processes that are started; I managed to obtain information on most of them, but there are a few that I need to investigate further.

vmkeventd - a utility for capturing VMkernel events
vmklogger - a utility for logging VMkernel events
vpxa - this process is responsible for vCenter Server communications; commands received are passed to the hostd process for processing
sfcbd - the Common Information Model (CIM) system, which monitors hardware and health status. The CIM is a standard set of APIs that remote applications can use to query the health and status of the ESXi host. Related processes: sfcb-vmware_int, sfcb-vmware_aux, sfcb-vmware_raw, sfcb-vmware_bas, sfcb-qlgc, sfcb-HTTP-Daemon, sfcb-ProviderManager
vmwareusbarbitrator - VMware USB Arbitration Service; allows USB devices plugged into the host to be usable by the guest
vobd - (one of the processes I still need to investigate)
vprobed - a utility for running the vProbes daemon. VProbes is a facility for transparently instrumenting a powered-on guest operating system, its currently running processes, and VMware's virtualization software, both dynamically and statically
openwsmand - Openwsman is a system management platform that implements the Web Services Management protocol (WS-Management); it is installed and running by default
hostd - the ESXi server host agent; this allows the vSphere client or vCenter access to the host. It consists of two processes: hostd-poll and hostd-worker
vix-high-p, vix-poll - the VIX API (Virtual Infrastructure eXtension) is an API that provides guest management operations inside of a virtual machine that may be running on VMware vSphere, Fusion, Workstation or Player. These operations are executed on behalf of the vmware-tools service, which must be running within the virtual machine, and guest credentials are required prior to execution
dropbearmulti - Dropbear includes a client, server, key generator, and scp in a single compilation called dropbearmulti; this is basically the ssh functionality
net-cdp - CDP is used to share information about other directly connected Cisco networking equipment, such as upstream physical switches. CDP allows ESX and ESXi administrators to determine which Cisco switch port is connected to a given vSwitch. When CDP is enabled for a particular vSwitch, properties of the Cisco switch, such as device ID, software version, and timeout, may be viewed from the vSphere Client. This information is useful when troubleshooting network connectivity issues related to VLAN tagging methods on virtual and physical port settings
net-lbt - a debugging utility for the new Load-Based Teaming feature
net-dvs - a debugging utility for the Distributed vSwitch
busybox (ash) - BusyBox is a software application that provides many standard Unix tools, much like the larger (but more capable) GNU Core Utilities. BusyBox is designed to be a small executable for use with the Linux kernel, which makes it ideal for use with embedded devices; it has been self-dubbed "The Swiss Army Knife of Embedded Linux". BusyBox utilities include ash, the Almquist shell (also known as A Shell, ash and sh)
helper??-? - (one of the processes I still need to investigate)
dcui - the Direct Console User Interface (DCUI) process provides a local management console for the ESXi host
vmkiscsid - VMware iSCSI process, the VMware Open-iSCSI initiator daemon; the files used are /etc/vmware/vmkiscsid/iscsid.conf and /etc/vmware/vmkiscsid/initiatorname.iscsi, plus iscsi_trans_vmklink and iscsivmk-log
vmotionServer - VMware vMotion
VMware High Availability agent - installed and started when an ESXi server is joined to a HA cluster

There are a number of ports that are used by VMware ESXi; here is a list of some of the common ones:

22 - allows access to ssh
53 - used for DNS
80 - provides access to a static welcome page; all other traffic is redirected to port 443
443 - acts as a reverse proxy to a number of services to allow for Secure Sockets Layer (SSL); the vSphere API uses this port for communications
902 - remote console communication between the vSphere client and the ESXi host for authentication, migration and provisioning
903 - the VM console uses this port
5989 - allows communication with the CIM broker to obtain hardware health data for the ESXi host
8000 - vMotion requests

ESXi can be deployed in two formats:

Embedded - your ESXi server comes preloaded on a flash drive; you simply power on the server and boot from the flash drive. You configure the server from the DCUI, after which you can manage the server via the vSphere client or vCenter.
Installable - this requires a local host disk, or, as of the new 4.1 version, you can boot via a SAN and thus have a diskless server. You start the install either via CD or PXE boot; you can also run pre-scripted installations, which means you can configure the server in advance. Again, the vSphere client or vCenter can be used to manage the ESXi server.

ESXi now comes with two powerful command-line utilities, vCLI and PowerCLI, and also the new Tech Support Mode (TSM), which allows low-level access to the VMkernel so that you can run diagnostic commands.

vCLI - this is a replacement for the esxcfg commands found in the service console. It is available for both Linux and Windows.

PowerCLI - this extends Windows PowerShell to allow for the management of vCenter Server objects. PowerShell is designed to replace the DOS command prompt; it is a powerful scripting tool that can be used to run complex tasks across many ESXi hosts or virtual machines.

ESXi Server has the below features; although ESXi is free, some features will require a license.
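To give a flavour of vCLI, here is a hedged example run from a Linux management station; the server name and user are examples, and you will be prompted for the password if it is omitted:

# vicfg-nics --server vmware1 --username root -l
# vicfg-vswitch --server vmware1 --username root -l

The first command lists the physical NICs on the remote ESXi host and the second lists its vSwitches and port groups, all without enabling any shell access on the host itself.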

supports 64-bit only
supports up to 512GB memory
supports either 6 or 12 cores per physical processor
support for fibre channel, fibre channel over Ethernet (FCoE) and iSCSI
support for booting from a SAN (providing the network cards/HBAs support this)
supports thin provisioning, which is designed to provide a higher level of storage utilization
supports dynamic data growth using available data storage, which includes growing Virtual Machine File Systems (VMFS)

has its own patch manager to allow for easy patching
fully supports HA
supports both virtual machine and storage vMotion
supports fault tolerance (FT)
supports distributed resource scheduler (DRS)
can support a mixed setup including any of the above features
supports USB pass-through (single virtual host only)
support for serial connections over the network to virtual hosts (vSPC)
supports Load-Based Teaming (LBT)

The VMware vNetwork Distributed Switch (dvSwitch) provides centralized configuration of networking for hosts within your vCenter server data center. This means that you can make changes in vCenter that can then be applied to a number of ESXi hosts or virtual machines. Network I/O control is a new network traffic management feature for dvSwitches: it implements a software scheduler within the dvSwitch to isolate and prioritize traffic types on the links that connect your ESXi server to the physical network. It can recognize the following types of traffic:

Virtual Machine
Management
iSCSI
NFS
Fault Tolerance logging
vMotion

Network I/O control uses shares and limits to control traffic leaving the dvSwitch, which can be configured on the Resource Allocation tab. Limits are imposed before shares, and limits apply over a whole team of NICs. Shares, on the other hand, schedule and prioritize traffic for each physical NIC in a team.

shares - specify the relative importance of a traffic type being transmitted to the host's physical NICs; share settings work the same way as CPU and memory shares
limits - used to specify the maximum bandwidth that can be used by a traffic type
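As a quick worked example of how shares behave (the numbers are illustrative, not from VMware): suppose vMotion has 100 shares and NFS has 50 shares on a saturated 1Gbps uplink, with no limits set. Shares only matter under contention, so the bandwidth splits in proportion:

# vMotion: 100 / (100 + 50) x 1Gbps ~= 667Mbps
# NFS:      50 / (100 + 50) x 1Gbps ~= 333Mbps
# When the link is not saturated, each traffic type can use whatever is free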

VMware also uses Load-Based Teaming (LBT), which is used to avoid network congestion on an ESXi server; it dynamically adjusts the mapping of virtual ports to physical NICs to balance the network load leaving and entering the dvSwitch. LBT will attempt to move one or more virtual ports to a less utilized link within the dvSwitch.

The vStorage API for Array Integration (VAAI) is a new API available for storage partners to use as a means of offloading specific storage functions in order to improve performance. It supports the following (the corresponding advanced configuration setting is shown in brackets):

Full Copy (DataMover.HardwareAcceleratedMove) - this enables the array to make full copies of data within the array without requiring the ESXi host to read or write the data. Full copy can also reduce the time required to perform a Storage vMotion operation, as the copy of the virtual disk data is handled by the array on VAAI-capable hardware and does not need to pass to and from the ESXi hosts.

Block Zeroing (DataMover.HardwareAcceleratedInit) - the storage array handles zeroing out blocks during the provisioning of virtual machines. Block zeroing also improves the performance of allocating new virtual disks, as the array is able to report to the ESXi server that the process is complete immediately while in reality it is being completed as a background process; without VAAI the ESXi server would have to wait until it was completed, which could take a while on some virtual machines.

Hardware-assisted Locking (VMFS3.HardwareAcceleratedLocking) - this provides an alternative to small computer systems interface (SCSI) reservations as a means to protect VMFS metadata. It is a more granular option than SCSI reservations, using a storage array's atomic test-and-set capability to enable a fine-grain block-level locking mechanism. Any VMFS operation that allocates space, such as starting or creating a virtual machine, results in VMFS having to allocate space, which in the past has required a SCSI reservation to ensure the integrity of the VMFS metadata on datastores shared by many ESXi hosts.

Host profiles are used to standardize and simplify how you manage your vSphere host configurations. You can capture a policy that contains the configuration of networking, storage, security settings and other features from a properly configured host; you can then apply this policy to other hosts to maintain consistency.
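As an aside, you can query the current state of the three VAAI settings above from the command line; a hedged sketch using the advanced configuration paths listed above (1 = enabled, 0 = disabled):

# Query each VAAI primitive
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
# Disable one for troubleshooting with -s, for example
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove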

Although I am only covering ESXi 4.1, the table below shows the differences between older versions of VMware (X = supported, - = not available):

Capability                    | ESX 3.5              | ESX 4.0              | ESX 4.1              | ESXi 3.5          | ESXi 4.0          | ESXi 4.1
Service Console (COS)         | Present              | Present              | Present              | Removed           | Removed           | Removed
Command-line interface        | COS                  | COS + vCLI           | COS + vCLI           | RCLI              | PowerCLI + vCLI   | PowerCLI + vCLI
Advanced troubleshooting      | COS                  | COS                  | COS                  | Tech Support Mode | Tech Support Mode | Tech Support Mode
Scripted installations        | X                    | X                    | X                    | -                 | -                 | X
Boot from SAN                 | X                    | X                    | X                    | -                 | -                 | X
SNMP                          | X                    | X                    | X                    | Limited           | Limited           | Limited
Active Directory integration  | 3rd party in COS     | 3rd party in COS     | X                    | -                 | -                 | X
Hardware monitoring           | 3rd-party COS agents | 3rd-party COS agents | 3rd-party COS agents | CIM providers     | CIM providers     | CIM providers
Web Access                    | X                    | X                    | X                    | -                 | -                 | -
Host serial port connectivity | X                    | X                    | X                    | -                 | -                 | X
Jumbo frames                  | X                    | X                    | X                    | -                 | -                 | X

ESXi Boot Process

ESXi can be installed on a flash drive or on a small local or remote disk drive, and as such ESXi differs from other operating systems. The system partitions for ESXi are summarized below; these may differ slightly depending on whether ESXi is installed on a flash drive or a hard disk.

Bootloader Partition - this small 4MB partition contains SYSLinux, which is used as a bootloader to start ESXi.
Boot Bank Partition - this 250MB partition stores the files required to boot ESXi; it is also known as Hypervisor1.
Alt Boot Bank Partition - this 250MB partition is initially empty; the first time you patch ESXi, the new system image is stored here. The partition is also known as Hypervisor2.
Core Dump Partition - this 100MB partition is normally empty, but the VMkernel will store a memory dump image here if the server crashes. You can manage this partition using the vCLI command vicfg-dumppart.
Store Partition - this 285MB partition is used to store system utilities such as the ISO images for VMware Tools and floppy disk images for virtual device drivers; it is also known as Hypervisor3.

When your ESXi server first starts, SYSLinux is loaded. SYSLinux looks at the file boot.cfg, which is located both on Hypervisor1 (mounted as /bootbank) and Hypervisor2 (mounted as /altbootbank). SYSLinux uses the parameters build, updated and bootstate to determine which partition to use to boot ESXi: if you have not upgraded it will use /bootbank, if you have upgraded it will use /altbootbank, and this then reverses when you update again. If there is a problem after an upgrade you can always boot from the previous partition: at the initial loading VMware Hypervisor screen you can load a prior version by pressing Shift+R, then press Shift+Y to revert back, and you should see the message "Fallback hypervisor restored successfully". After SYSLinux determines which system image to boot, boot.cfg is read to determine the files that are used to boot the VMkernel; once these are loaded into memory the storage is not accessed again.

If you run df -h you get a listing of the filesystems that ESXi has mounted; listed first is visorfs, which is the RAM disk that ESXi has created, and the four vfat partitions are bootbank, altbootbank, scratch and store.
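For illustration, a boot.cfg looks something like the sketch below; the field values are examples only, yours will differ:

# /bootbank/boot.cfg (illustrative values)
bootstate=0
kernel=b.b00
kernelopt=
modules=k.b00 --- s.v00 --- m.v00
build=4.1.0-260247
updated=1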

The command vdf -h is new and gives details on the RAM disks; the listing shows the tardisks that ESXi has extracted to create the filesystem, and these entries correspond to the Archive and State file types in boot.cfg. Among the mounts, MAINSYS is the root folder, hoststats is used to store real-time performance data on the host, and updatestg is used as storage space for staging patches and updates.
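A quick sketch of the two commands from the Tech Support Mode shell:

# Mounted filesystems: visorfs (the RAM disk) plus the four vfat
# partitions (bootbank, altbootbank, scratch and store)
df -h
# RAM disk and tardisk usage (MAINSYS, hoststats, updatestg, etc)
vdf -h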

Networking (Standard and Distributed)

This section will cover both standard networking and distributed networking (vCenter server is required for distributed networking). First I will start with standard networking.

Standard Networking

The VMkernel can create virtual switches called vSwitches; the virtual machines' virtual NICs are plugged into vSwitches, and these are then mapped to the physical NICs on the ESXi server. This means many virtual machines can share the physical NICs on the ESXi server. The other clever thing VMware can do is that if two virtual machines communicate with each other on the same vSwitch, no physical network traffic is generated: the VMkernel moves the data in memory seamlessly from one virtual machine to another without ever involving the physical network interface. With this knowledge you can design the VMs that communicate with each other to use the same vSwitches, thus increasing performance.

vSwitches are VLAN aware and can control outbound traffic using a VMware technology called traffic shaping; you can also impose security settings. A vSwitch can have 0, 1 or many physical NICs assigned to it. A vSwitch without any NICs attached to it is called an internal vSwitch, as it only allows communication within the host. An internal vSwitch could be used as a staging area before moving a VM into Production; one thing to remember is that vSwitches do not communicate with one another. The drawback of internal vSwitches is that you cannot carry out vMotion events without first disconnecting users from the virtual machine; since they are internal to the ESXi server, we cannot guarantee that users would have a continuous connection to the virtual machine during the vMotion event.

A vSwitch with one physical NIC gives you basic connectivity to the outside world. This might be suitable for vMotion or for traffic that does not require fault tolerance (FT); ideally this would be for testing, development environments, etc. A vSwitch with two physical NICs gives you fault tolerance (FT) and load balancing, which would be ideal for vMotion and could be used in Production environments. You can have up to 20 physical NICs in an ESXi server, of any link speed; there are now 56 ports by default on a vSwitch, and this is configurable up to 4,088 ports.

Port Groups

vSwitches can be divided into smaller units called port groups; there are three types of port groups:

Virtual Machine
Service Console
VMkernel (for vMotion, VMware FT logging and IP storage)

You could create one big fat vSwitch and connect all the NICs to it, but you should really separate the different types of network traffic onto separate NICs. Try to give your port groups meaningful names so that whoever supports the service can see what each port group is used for; ideal names would be vlan25, vlan26. An important note: if you rename a port group, virtual machines become orphaned from the switch, because the name of the virtual switch to which the virtual machine is attached is held in the virtual machine's configuration file (.vmx). Bear that in mind if you rename a port group that has many virtual machines attached - that's a lot of work to resolve, and you may even need a script to do it. As an example of this, I changed my iSCSI network port group to iSCSI networks (an additional s on network); as you can see in the image on the left it had 6 virtual machines attached, and the simple name change has now orphaned all 6 virtual machines, see the image on the right (the images were taken from the summary screen of the ESXi server).
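If you do end up with orphaned VMs after a port group rename, one hedged way to find and fix them from the Tech Support Mode shell is to search the .vmx files for the old name (the paths and port group names here are examples; power the VM off and back up the .vmx first):

# Find .vmx files still referencing the old port group name
grep -l 'networkName = "iSCSI network"' /vmfs/volumes/*/*/*.vmx
# Re-point one at the renamed port group (busybox sed may lack -i,
# in which case redirect to a new file and copy it back)
sed -i 's/iSCSI network/iSCSI networks/' /vmfs/volumes/datastore1/myvm/myvm.vmx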

Here is a guide on how to create the various standard vSwitches. To create an internal vSwitch follow the notes below, remembering to name your vSwitch appropriately:

1. In vSphere Client, select the ESXi host
2. Select the configuration tab
3. In the hardware panel, select Networking
4. Click the Add Network link
5. Choose Virtual Machine and click Next
6. Make sure no network adapters are selected and then click Next
7. In the Port Groups Properties dialog box, type a descriptive and unique name such as internal0-vmware1
8. Click Finish

Internal standard vSwitch

You should end up with something like below

Teamed Standard Switch

To create a teamed vSwitch it is the same as above, apart from selecting two or more physical adapters, you should end up with something like below

ESXi supports two main methods of enabling access to VLANs:

External switch tagging (EST) - simply plug the relevant NICs into the relevant VLANs and set the virtual machine's IP settings for that network. This consumes a lot of NICs: for each VLAN you need one NIC.
Virtual switch tagging (VST) - the network interfaces are plugged into trunk ports on the physical switch; trunk ports allow packets from many VLANs to traverse them, so even with just one or two NICs ESXi can allow multiple virtual machines to access many VLANs.
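If you prefer the command line, the same VST setup can be sketched with esxcfg-vswitch (the port group name, VLAN ID and vSwitch are examples):

# Add a port group to the vSwitch
esxcfg-vswitch -A "vlan25" vSwitch1
# Tag the port group with VLAN ID 25
esxcfg-vswitch -p "vlan25" -v 25 vSwitch1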

To create a VLAN vSwitch follow below:

1. In vSphere Client, select the ESXi host
2. Select the configuration tab
3. In the hardware panel, select Networking
4. Click the Add Network link
5. Choose Virtual Machine and click Next
6. Select the desired NICs
7. Enter a friendly name and the VLAN ID
8. Click Next and then click Finish
9. To add more VLAN IDs simply select properties
10. Select the add button and repeat the process as above

Hopefully you should have something like below

Standard vSwitch with VLAN support

VMkernel standard vSwitch for vMotion - when you create a vMotion port group you will be asked for the following details:

IP Address Subnet Mask Default gateway (this is optional)

You can use vMotion across routers using the gateway, but this is not the preferred way; normally ESXi servers are attached to the same network to increase performance and avoid other problems. To create a vMotion vSwitch follow below:

1. In vSphere Client, select the ESXi host
2. Select the configuration tab
3. In the hardware panel, select Networking
4. Click the Add Network link
5. Choose VMkernel and click Next
6. Select the desired NICs
7. In the port groups dialog box type a friendly name, in my case I typed "vMotion"
8. Then select "Use this port group for vMotion"
9. Set the IP address and subnet mask, and the gateway if required

Hopefully you should get something like below; you may get a license warning come up, but it will still create the port group.
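A hedged command-line equivalent for the VMkernel port itself (the names and addresses are examples; ticking the port for vMotion still needs to be done in the vSphere client):

# Create the port group and add a VMkernel NIC to it
esxcfg-vswitch -A "vMotion" vSwitch1
esxcfg-vmknic -a -i 192.168.2.191 -n 255.255.255.0 "vMotion"
# List the VMkernel NICs to confirm
esxcfg-vmknic -l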

Configuration and Problems of vSwitches and Port Groups

To check whether a NIC has a problem connecting to the network, go to the Networking screen and check if there is a red cross against the NIC (see the below image); use standard network fault-finding to correct the problem.

There are a number of configuration settings on vSwitches and Port Groups.

Increase the number of ports on a vSwitch - you can now change the number of ports up to 4,088 with ESXi 4; if you have a large number of VMs you may have to increase this value. To increase the number of ports on a vSwitch:

1. In vSphere Client, select the ESXi host
2. Select the configuration tab
3. In the hardware panel, select Networking
4. Click the properties of the vSwitch that you wish to change
5. Make sure the vSwitch option is highlighted
6. Select the edit button
7. Then, using the down arrow, select the number of ports that you desire
8. Click OK to implement

You can change the speed and duplex of any of the physical NICs in your ESXi server; generally they will auto-negotiate. To set them manually:

1. In vSphere Client, select the ESXi host
2. Select the configuration tab
3. In the hardware panel, select Networking
4. Click the properties of the vSwitch that you wish to change
5. Select the "Network Adapters" tab at the top
6. Select the physical NIC and click the edit button
7. Choose the desired speed and duplex
8. Click OK to implement

Setting speed and duplex on physical NIC's
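The same change can be sketched from the command line with esxcfg-nics (vmnic0 is an example):

# Force 1000/full on vmnic0
esxcfg-nics -s 1000 -d full vmnic0
# Or return it to auto-negotiation
esxcfg-nics -a vmnic0
# Verify the current settings
esxcfg-nics -l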

I am going to cover these briefly; there are three additional areas that can be changed:

Promiscuous mode - allows a NIC to receive all network packets, including ones that are not intended for it
MAC address change - allows the MAC address to change; this may be a requirement for things like Microsoft clustering and load balancing
Forged transmits - allows a VM to send traffic under a MAC address that is different from that of the VM; Microsoft Network Load Balancing may break if you reject this

Security - for most systems the default options will be fine

Setting vSwitch and Port Group policies

Traffic Shaping - traffic shaping is ESXi's method of controlling the outbound traffic generated by VMs. You can control the bandwidth, but traffic shaping is not dynamic: once set, those values are set in stone. I am not sure why you would use this feature, as I like to give my servers as much bandwidth as possible, but you can make use of it.

NIC Teaming

NIC teaming (bonding in the Linux world) is a fault tolerance and load balancing feature; you can configure the load balancing policy, what failure detection triggers a failover, whether to fail back when the problem has been resolved, and which NICs are active and which are standby. This is pretty much the same as NIC teaming in other operating systems.

Service Console Network

In ESX 4 the networking architecture treats the Service Console as if it were just another VM connected to a vSwitch; this default vSwitch is called vSwitch0, and the service console ports have a special name, vswif, which stands for virtual switch interface. You should try to have a backup service console port, or protect the existing one by making sure it is fault tolerant to network failures. If you were to lose the service console you would have to go to the command line to either reinstall or repair it. Here are some commands that will help in recovering a failed service console.

Service Console troubleshooting:

# Displaying the vSwitch configurations
esxcfg-vswitch -l
# Display all the NICs
esxcfg-nics -l

Correcting your NIC selection:

# Link another NIC to the switch
esxcfg-vswitch -L vmnic1 vSwitch0
# Unlink a NIC from a switch
esxcfg-vswitch -U vmnic0 vSwitch0

Correcting IP settings:

# Display current IP address settings
esxcfg-vmknic -l
# Change the service console network
esxcfg-vmknic -i 192.168.0.190 -n 255.255.255.0 "Management Network"
# Restart the network services
service network restart

Configuring the Cisco Discovery Protocol:

# Display current CDP settings, it should return "listen"
esxcfg-vswitch -b vSwitch0
# Make the vSwitch bidirectional
esxcfg-vswitch -B both vSwitch0
# Display CDP data
esxcfg-info | more +/CDP\ Summary

Change the MTU value:

# Change the MTU value
esxcfg-vswitch -m 1500 vSwitch0

Distributed Switches

Distributed virtual networking (DVN) is completely new to version 4; it enhances the network layer of the VMkernel and uses the features below. Note that you must be using vCenter in order to create DvSwitches:

private VLANs
Network vMotion
API support for 3rd parties
VMDirectPath I/O
new NIC drivers within the guest O/S using vmxnet3.sys

At a basic level a DvSwitch is a global switch: rather than configuring vSwitches for each ESXi server, you create a DvSwitch that can be applied to every ESXi server. DvSwitches also implement private VLANs (PVLANs) within VMware. DvSwitches can do all the things a normal vSwitch can do, and they also offer more ports than a standard vSwitch. DvSwitches use distributed virtual uplink (DvUplink) ports; a DvUplink is merely a container for holding a reference to a physical NIC. When you create a DvSwitch the system attempts to set the correct number of DvUplink ports for you; it does this by looking at all the ESXi servers and searching for the ESXi server with the most uplinks. DvUplinks can be renamed, which allows you to give them more meaningful names.

I am going to combine a number of items to explain how to create and configure a DvSwitch: I am going to create a DvSwitch, add a port group and then add another port group that uses vMotion. This should give you a fairly good understanding of DvSwitches, and I will finish off looking at the advanced features (actually they are pretty much the same as a standard vSwitch). First let me explain what I had already configured: I had a vSwitch configured with two port groups, one for the private LAN and one for vMotion (don't worry too much about vMotion for the moment, I will be covering it in greater detail in a later section). This is what I had set up before - a vSwitch with two port groups; each ESXi server I have (vmware1 and vmware2) has this vSwitch setup.

Current setup and freeing up some NICs - the first thing I do is free up some NICs by removing the physical adapters from both ESXi servers; you should end up with something like below. You can keep the groups in case you want to restore back to what you had.

Creating a DvSwitch - to create a DvSwitch first go to the home page in vCenter and select Networking. You can either select the "New vNetwork Distributed Switch" icon or select "Add a vNetwork Distributed Switch" in the main screen; you then get the below screen

Select vNetwork Distributed Switch Version 4.1.0 (unless you need version 4.0) and click Next

I changed the name to "Private DvSwitch" but you can call it whatever you want. Because I have only one NIC to offer from each of my ESXi servers (vmware1 and vmware2), I changed the "Number of dvUplink ports" to one. In the real world you will probably have many NICs to add, so this can be increased; click Next when finished.

Expand the ESXi server, which should then display the available NICs (if you remember, these are the ones we freed up earlier). Select all NICs and then click Next to continue

You now get a summary screen; notice that "automatically create a default port group" is ticked - you can leave this as we will make use of this group. Click Next to create the DvSwitch. After renaming the dvSwitch to "dvswitch-NIC2" and renaming the port group to "Private" (you can do this by right-clicking each item and selecting edit settings, then changing the name at the top), you should have something like below

If you select the dvSwitch-NIC2 icon a number of tabs appear; to view the ports select Ports. Remember we selected one dvUplink port for each ESXi server, which gives us two in total; you can also see that the "Connectee" comes from each ESXi server

You can confirm the ESXi servers by then selecting the "Hosts" tab

So what you have now is a DvSwitch with two NICs attached (one from vmware1 and one from vmware2); you can make changes to the NICs and they will be implemented on both ESXi servers. This can be very handy if you have lots of ESXi servers to manage, as most configurations only need slight tuning. The private port group will be empty as we have no VMs using it; I will be discussing how a VM can make use of this in my virtual machine section.

Next we will create a vMotion port group within our DvSwitch; follow below. Right-click on the DvSwitch icon and select "New Port Group"

Create a vMotion port group - I have changed the name to "vmotion", but again you can choose whatever you want. Once the port group is created you need to add the vMotion information for each ESXi server; you will need to provide an IP address and the subnet mask for each ESXi server using vMotion.

************** YOU WILL NEED TO REPEAT THE BELOW STEPS FOR EACH ESXI SERVER **************

So from the Home page select "Hosts and Clusters", from there select the first ESXi server (in my case it is vmware1) and select the configuration tab. Select Networking from the hardware panel, then select "vNetwork Distributed Switch", and you should then have a screen similar to below. Select "Manage Virtual Adapters", then select add at the top

You should then have the screen below, select "New Virtual adapter"

then select "VMKernel" (this was the only choice for me)

Now select the vmotion port group from the first drop down list and also tick the "Use this virtual adapter for vMotion"

Now enter the IP address and subnet mask, when you repeat this step for the other ESXi server remember you use a different IP address but keep it on the same subnet

Next comes the summary screen; as you can see, the vMotion port group now has a new VMkernel port with the IP address that you chose

************** NOW REPEAT THIS STEP FOR ANY OTHER ESXI SERVERS THAT YOU WANT TO USE VMOTION **************

Once you have configured all the ESXi servers, if you select from the home page -> Networking, then select the DvSwitch group icon, then select the configuration tab, you will see the screen below. I have two physical NICs from the ESXi servers and two IP addresses for vMotion (one for each ESXi server). If you click on one of the IP addresses you can see which NIC it goes to (follow the orange line); as you can see, IP address 192.168.2.192 goes to vmnic2 on vmware1 in this case. Also from this screen, if you look in the top right-hand corner, you can manage the DvSwitch by removing it, adding hosts, managing hosts (use this option to add additional NICs), creating a new port group and even editing the existing DvSwitch.

You may have noticed that I have a VM using the private port group; I will cover this in greater detail in my virtual machine section.

Create a port group for the Service Console - this is pretty much the same: when you get to the "Add Virtual Adapter" screen, I chose the service_console port group (this was a new port group that I added) and then ticked the last option, "Use this virtual adapter for management traffic"; you will then be asked to enter an IP address and subnet for the service console. Once you have finished, the screen should look like below - see my new service console port group.

As you can see, a DvSwitch is very similar to a normal standard vSwitch; have a look at each of them side by side - NICs are represented on the right-hand side and the port groups are on the left-hand side. There is very little difference between them. Standard vSwitch | DvSwitch

I am not going to discuss how to remove a DvSwitch in detail, as it is the reverse order of creating one: first remove the VMkernel ports from each ESXi server (otherwise when you try to remove the DvSwitch it will say it's in use), then remove each port group, and lastly remove the DvSwitch itself. Most of the time you remove an item by selecting its icon, right-clicking, then selecting remove.

The last topic to cover is the advanced sections of a DvSwitch; again there is very little difference from a normal vSwitch. You can add or remove additional NICs from the DvSwitch view: from the home page select Networking, then select the DvSwitch icon, then select configuration from the tabs, and you should see a screen like below

Add/Remove NICs

Select manage hosts from the top right-hand corner, then select the hosts to which you wish to add NICs and select Next; you should see the hosts and available NICs

Select the additional NICs and select Next; if the NICs already belong to an existing port group it will ask you if you wish to migrate them. As you can see, the one I selected already belonged to an existing group, hence the warning message. Continue on and the NIC will be added to your DvSwitch.

To remove a NIC just deselect it on the "Select Physical Adapters" page and continue through the rest of the screens.

Advanced settings for a DvSwitch - you can edit the settings of a DvSwitch by right-clicking its icon and then selecting "edit settings". On the first tab, called properties, you can change the name, increase/decrease the number of dvUplinks and even add notes for other administrators.

The next tab "Network Adapters" just displays the available ESXi servers and the available dvUplinks

On the last tab, "private VLAN", you can add primary or secondary VLAN IDs

Advanced settings for the DvSwitch adapters - I am not going into much detail on this as it is pretty much the same as the standard vSwitch:

Security - covers promiscuous mode, MAC address changes and forged transmits
Traffic shaping - unlike a standard vSwitch, covers both ingress and egress traffic (inbound and outbound traffic management)
VLAN - select the VLAN type and VLAN IDs
Teaming and Failover - the same as a standard vSwitch, but the NIC references relate to dvUplink port values
Miscellaneous - allows you to enable port blocking

Advanced - allows overriding port policies (override the settings on the dvUplink group) and configuring reset at disconnect

Final Comments

DvSwitches are ideal for large corporations, where it can be a headache to manage lots of standard vSwitches; but bear in mind that you do require vCenter to use DvSwitches, which comes with a price tag and can be off-putting to smaller companies.

P2V of a Physical Machine with vCenter Converter

P2V is a large subject and I cannot cover every scenario, so I am going to briefly cover a P2V conversion of a physical Ubuntu server I have kicking around. Many companies are trying to reduce their footprint and convert many physical machines into virtual ones; web servers and SMTP servers are ideal candidates. The conversion is a two-step process: the actual conversion, and the cleanup process after you have created the VM. There are a number of 3rd-party P2V converters, including free ones, but I will be using VMware's own vCenter Converter to perform this task. VMware vCenter Converter is a Windows-based application that installs onto the vCenter management server; it fully supports both Windows and Linux conversions, but you may want to check VMware's own website for all the supported OS's. There are two versions of the converter available:

Starter - is agent based and is free
Enterprise - can either be agent based or can boot from a CD

VMware vCenter Converter is not a silver bullet: you may have to work with the converted VM to get it working properly, and if the server originally had problems those problems may be replicated in the VM environment. The more complex the physical server (it could be a domain controller, cluster server, etc), the more problems you may have converting it and getting it to work; this is where you need to look at the VMware forums or search the internet for answers.

Installing vCenter Converter

Installing vCenter Converter is like a normal application installation: on the vCenter server, start the installation by putting in the vCenter 4.1 CD and then selecting vCenter Converter

During the installation process, you will be asked about the vCenter management server details

Once the installation has finished select the plugins -> manage-plugins from the top bar of the vCenter window, then select download and install plugin

That's it, you are now ready to convert physical servers.

P2V

Once you have the P2V plugin installed we can import a physical server. I have an old HP D510 desktop PC with a 32-bit Ubuntu OS running on it (yep, old PC and old OS); let's convert this physical machine into a virtual machine. First select the ESXi server that you wish to import the new virtual machine into - in my case I have selected vmware01 - then right-click and select "Import Machine"

Now enter the details of the running physical server, I selected "view source details", just to make sure the connection is working and we have access to the physical server

All looks good; it sees my physical server as an Ubuntu 8.04 32-bit server

The next screen gives us details on where the VM will be placed; I changed the datastore to a shared storage area (filer2_ds1), and you can change the name of the VM if you wish

The screen allows us to configure the VM's hardware

I noticed that the network was incorrect, so I selected edit and changed the network from private to public; you can also add additional NICs if you wish

The "data to copy" screen allows us to select what volumes/filesystems we wish to copy across, there may be a case that you do not some volumes/filesystems copied and thus speed up the importing process.

With very little effort we are at the summary screen ready to import our physical server

It took about 5 minutes to import this server; the time it takes all depends on how much data you need to import. You can monitor the progress from the "recent tasks" window in the vCenter main screen

And here we have it, a fully imported physical server. I double-checked the hardware settings and it did a pretty good job, as I did not have to change anything

When I started the VM, all I had to do was edit the /etc/network/interfaces file, change the NIC to eth3 and bring the interface up, and away we go, as you can see below
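For reference, the edit was along these lines - a sketch, as eth3 was simply the name udev assigned to the new virtual NIC, and the addresses here are examples:

# /etc/network/interfaces
auto eth3
iface eth3 inet static
    address 192.168.1.75
    netmask 255.255.255.0
    gateway 192.168.1.1
# then bring the interface up with: ifup eth3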

OK, not all P2V conversions will go as smoothly as this one did, but you get the idea of what is involved and what you need to prepare beforehand when performing a P2V conversion. There is also a Cold-Clone boot CD which allows you to reboot the physical server from the CD and clone it while it is offline.

Patch Management

This section covers patch management using VMware Update Manager (VUM). There is also a vSphere host update utility that is used to patch ESXi servers and upgrade from ESXi 3 to ESXi 4; as it is free, it is ideal for anyone who has downloaded the free ESXi server and wants to keep costs to a minimum. VUM is the bells-and-whistles update manager; it comes in two flavors, stand-alone and as a plug-in to vCenter. It patches and upgrades ESXi servers and can optionally patch Windows-based VMs, using the popular website http://shavlik.com as the source for Windows patches; you are also able to add additional source websites that could patch Red Hat, Oracle, Ubuntu, etc. The new features in version 4 are:

Staging patches to ESX servers, which allows a pre-download of the patches locally to each ESX server prior to remediation; this should speed up the patching process and thus reduce the time spent in maintenance mode
Baseline Groups, which allow the administrator to group many different baselines together under a single name and apply the group to a vCenter object

Patching is a major operation: lots of preparation work goes in beforehand to make sure that everything goes smoothly. In an ideal world you generally patch Development first, followed by QA, and by the time you get to patching Production all the little wrinkles should be ironed out; however, in the VMware world small companies may have all these environments in a single cluster. VMware updates come regularly, with about 4 major updates a year, which means that a lot of time can be spent on patching - this is where the update managers can help.

Patching can be a dangerous game. Some administrators believe that if the environment is working then why patch, especially if the environment is contained and has no access to the internet; patching could cause problems where there were none before. I am a strong believer in patching, especially if you follow the pattern of applying to Dev, QA and finally Production in a controlled manner - you would never skip servicing your car on a regular basis, otherwise you just know that one day it will stop working. Besides security, patching also helps performance, with the latest drivers and kernel releases to enhance the O/S and fix any potential bugs that you may not have experienced yet. Like I said, the fear with patching is that you break something that was already working, and that never goes down well with management.

VMware Update Manager (VUM)

The update manager's main job is to patch ESX servers and VMs. I will be covering the installation of VUM into vCenter, then using baseline groups for patching, and then scanning the datacenters, clusters or VMs to see if the baseline is met. You can apply patches manually or on a schedule, and it can be done with the VM online or offline; you can also set it up to roll back any changes should the patching go wrong or cause any problems at a later date (yes, we will be using snapshots). Updates are performed in three stages:

Baseline - create a list of patches that form a "company standard" for all ESX servers and VMs
Scan - check against the baseline for compliance
Remediation - patch the ESX servers and VMs that fail to meet the baseline

First we have to install and configure VUM; there are three ways to run VUM:

run the VUM service on the same Windows instance as vCenter; this must have access to the internet
install VUM on a separate Windows server instance that is connected to the internet (this method is more secure)
use a USB stick to download the patches so that they can be imported into the VUM service (this is the most secure method but does have its problems)

Which method you choose all comes down to what you will allow to connect to the internet, and cost. Most companies choose the second option, however this is not the most cost effective as you have to have a second Windows server running. For this tutorial I will be showing the first option: I will install VUM onto the Windows 2008 server which has vCenter installed. You also have a choice of where the VUM database is stored - you can create a separate one or configure it to use the same one as vCenter; I will be creating a separate stand-alone one, as I did for vCenter. If you decide to install VUM on a separate Windows server then you will need to create an account that can access vCenter; if you are installing onto the same server as vCenter then follow below.

I am installing VUM on the same Windows 2008 server as my vCenter. I am not going to show you every screenshot, as some are simple answers to simple questions. First start the install process by inserting the CD into the drive; you should then get the screen below. Select "vCenter Update Manager", and you will be asked to put in the details of your vCenter server

Install VUM

The next screen that you come to will ask you to choose either to install a separate SQL database, select another DB, or use the existing vCenter database; I have chosen to create a new separate database, as my environment is only small

Next comes the port settings dialog box; change any port numbers if they clash with other software - I selected the defaults

You can optionally change the folder destinations if you wish, I accepted the defaults

The installation will finish and the VUM service will be started; this will start to download the latest signatures from the internet, checking both the VMware and Shavlik websites. You can see in the tasks and events window the patches that are being downloaded

Once the patches have finished downloading you can install the VUM client plug-in: select Plug-ins from the top menu bar, then select "Manage Plug-ins"; you should then see the below screen, where you select the "download and install" link. This is where you may get errors if you have not entered the details of your vCenter correctly.

Once this has completed there will be many new windows and icons that appear, first in the Home page you will see a new category called "Solutions and Applications" and a new icon called "Update Manager"

Selecting this icon takes you into the update manager, with many tabs and links, I will be covering these in more detail below

Baselines and Baseline Groups

First we need to define a baseline, but before that I will show you the two built-in baselines for use with ESXi servers: select the "update manager" icon from the home page, then select the "baselines and groups" tab. There are two types of built-in baseline, critical and non-critical.

Baselines are basically lists of possible vulnerabilities and are used in the scanning process to see if an ESXi server or VM meets your requirements; it may be that just these two built-in baselines are good enough for your needs. You can create baselines for different datacenters, clusters, ESXi servers and VMs - the combinations are endless. Your baselines can be either dynamic (maintained by the update system) or fixed (manually controlled by the administrator). Let's create a baseline.

Create a baseline - to create a new baseline select the "create" link and the below screen should appear; type in a friendly name and then select the VM patch radio button. You can choose to patch ESXi servers, VMs or virtual appliances

I have chosen dynamic, thus this baseline will be maintained by the system

There are many patches; I have limited mine to just Internet Explorer 8 and selected critical only

Here you can exclude specific patches if you wish

If you double click on the patch you can get more detailed information

the next screen allows you to add additional patches, again the combinations are endless

Finally you get to the summary screen

Once completed you should see your baseline; as you can see, my baseline has only 12 patches to apply and is a dynamic baseline - the default non-critical baseline has 5268 patches, WOW!

You can edit the baseline by choosing the edit link. New in version 4 is the ability to create baseline groups; creating a group is much like the process we used above.

Create a baseline group - to create a baseline group, use the left panel in the baselines and groups tab and select create; the below screen should appear. Type in a friendly name and choose the baseline group type

optionally select an upgrade baseline

Here I have added my newly created baseline "My Custom Baseline"

Finally the summary screen

My newly created baseline group appears in the baseline groups window; you can update this group with new baselines using the edit link, and you can expand the baseline group to display all the baselines attached to it

Scanning and Patching a VM

Now that we have a baseline group that contains a baseline, we can start to scan datacenters, clusters and VMs to see if any VMs require patching.

Scan a datacenter, cluster or VM - there are several ways to scan; the first is by selecting home page -> VMs and Templates -> then right-clicking on the VM and selecting "Scan for Updates"

You can choose to select the whole datacenter

Optionally you can use the two icons on the toolbar: the one with the spy glass is the scan button, the other is the remediate button, which we will discuss later

You can also attach a baseline group to a datacenter, cluster or VM: select from the home page -> templates and hosts -> select the datacenter, cluster or VM -> then select the "update manager" tab. You should see the window below; then select the attach link and tick "My First Baseline Group" in the baseline group box

The result should appear like below

From here you can select scan, you can choose what patches you want to scan

Eventually the scan report comes back; here I had problems which needed to be investigated

I tried the same baseline group on my Windows 2003 which reported back as Compliant

Now we can patch a VM - this is called remediate in the VMware world. I have added a few more patch requirements to my baseline and will apply this to my Windows 2003 server.

Remediate (patch) a VM - again, you can patch a whole datacenter, cluster or a single VM. Here I am going to patch just a Windows 2003 VM. First, here is the report I got back after a patch scan (I added a few patches to my baseline); you can see that this server requires 85 patches and is non-compliant, and also notice that the VM is powered down

To start the patching select the remediate button in the bottom right-hand corner, and the below screen will appear, you can see that all 85 patches will be applied

A list of the patches is displayed, you still have the chance to exclude any patches if you wish

Here is a nice feature: you can actually schedule the patching to be done at any time of day; maybe you have quiet periods where you can perform the patching

You can take a snapshot of the VM before the patch, thus you can roll back if the patch causes any problem

Finally we get to the summary screen

You can watch the progress in the "recent tasks" panel; the VM will be powered on and a snapshot taken first

You can see the snapshot from the VM snapshot manager window; this is our rollback option if it all goes pear-shaped

You can watch the progress in more detail from the "tasks and events" window, if you look at the task you can clearly see the remediate entries

After the patches are downloaded they are then applied to the server; you can see the Windows patches being applied, and the screen refreshes automatically

As the process finishes, the VM gets powered down and the update manager gets updated

Depending on how much patching you are doing, the process eventually completes and updates the manager; as you can see, my Windows 2003 is now compliant - it took just over 1.5 hours to complete

Patching ESXi servers and Clusters

In version 4 a new feature called staging has been implemented. This allows you to download the patches for an ESXi server without applying them, so the patches are held locally on each ESXi server prior to triggering the patch installation process; the intention is to speed up the patching process and reduce the amount of time that an ESXi server is in maintenance mode, especially if you have a slow internet connection. First we will stage the patching process: select "hosts and clusters" from the home page -> select the "update manager" tab -> attach a baseline group. I have selected the "non-critical Host patches" baseline group, and as you can see my cluster requires patching as it is not compliant

Patch an ESXi server or cluster

Selecting the stage button in the bottom right-hand corner brings up this screen; you can select/deselect any additional baseline groups that you need - make sure all ESXi servers within the cluster have been ticked

Here you can exclude any patches that you don't want to install. IMPORTANT: look at the impact column - this will indicate if the ESXi server requires rebooting or if it will be put into maintenance mode

Finally a summary screen

The patches will be downloaded to each ESXi server but will not be applied

Once the patches have been downloaded (staged), you can remediate them; this is very similar to the patching of a VM - make sure that both ESXi servers in the cluster are selected

again you can exclude/include any patches if you wish

we can schedule a specific time to patch if we wish, I selected the defaults

This screen is different from a VM patch: here we can disable advanced features of VMware that may cause problems when we patch each ESXi server. Notice the "Generate Report" button - make sure to click this to see if there are any potential errors that could cause the patching to fail, and fix any problems before continuing; you can select Disable DPM and Disable HA to solve two of the common problems.

Finally we get to the summary screen

The first ESXi server, vmware1, is put into maintenance mode (notice I had to turn on EVC mode - see my DRS section for more information on EVC)

Once the ESXi server is in maintenance mode the HA agent is disabled, and as you can see the state is connected (maintenance mode); the patch is now applied to the ESXi server, which is then rebooted

Once the server returns, the HA agent is enabled and the ESXi server exits maintenance mode

We then move onto the next ESXi server, vmware2; however, a VM has to be migrated first (windows_2008). Once the VM has migrated, the whole process repeats for this ESXi server

Finally both ESXi servers (vmware1 and vmware2) have been patched and rebooted, and the update manager reports that both ESXi servers are now compliant

Update Service Settings

Lastly, you have a number of options that you can set within VUM: select from the home page -> update manager -> then select the configuration tab. On the first screen you can change the port settings

On the next screen you can add additional source websites; you can see the VMware and Shavlik websites have already been added, and you could possibly add Red Hat, Oracle, Ubuntu, etc via the "add patch source" link

Next you can select a predefined time when you want to download the patches, select the "edit patch downloads" link to change the time

You can also change the notification check schedule, possibly sending emails out

You can set a default for the snapshot of a VM before patching

Lastly you can change maintenance mode settings and cluster settings

Rapid VM Deployment

You have the ability to duplicate an existing VM, a process that uses clones or templates (also known as master images or golden copies - they all mean the same thing: you take an existing VM and copy it). In this section I will discuss a number of methods for duplicating a VM. There are two new features in the VM duplicating world:

Hot cloning a VM Open Virtual Machine Format (OVF)

There are a number of ways to duplicate a VM:

Clone to Template - this method copies the VM and converts it to a template format; during the creation of the template you have the ability to compact the files, which will reduce the size of the virtual disk.

Convert to Template - this simply marks the VM as a template; it's much quicker than using Clone to Template as no copy process is generated - it literally takes seconds to mark it as a template and seconds to convert it back to being a VM. You first build the VM, then convert it to a template. When you need to update this template, convert it back to a VM, make the updates and then convert it back to a template again. You use this VM as a template only, and it only gets powered on when you need to update it, so basically it is a source for creating new VMs.

Clone Virtual Machine - this merely copies a VM; you cannot compress the size of the VM (unlike Clone to Template) and you cannot quickly update the base VM (unlike Convert to Template)

Before you create a Template you need to consider the below


How big the Guest OS boot partition should be, as there is no easy way to adjust the size on the fly How software should you include, for example do you want to including a service pack, latest YUM updates, backup client software, antivirus software, etc

Creating a template doesn't just duplicate the VM's virtual disks: additionally, the VM's .vmx configuration file is duplicated and renamed with the .vmtx extension, which means that all the settings behind your VM are duplicated as well. This saves time by reducing the number of wizards and dialog boxes you need to complete. Note that if a VM is currently in snapshot mode you will not be able to clone it. I generally create a folder called "templates": from the "home page" in vCenter select "VMs and Templates", select your datacenter (in my case it is called production), right-click, then select new folder and name your folder.
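As a side note, you can spot templates on your datastores from the Tech Support Mode shell by looking for the .vmtx files; a hedged one-liner:

# List all template configuration files across the mounted datastores
find /vmfs/volumes -name '*.vmtx'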

Cloning to a template

Before you clone a VM it is sometimes better to disconnect the CD-ROM (you will see an error about this below) and choose a staging network that it can connect to; these can then be changed before you put the newly cloned VM into service, and thus will not affect anything else. I am going to use the Linux VM we created earlier as a template; normally I would update it with the latest patch set and install any software that all Linux VMs would require - backup agent, BB monitoring (Big Brother), etc. Select the VM you have powered off as your source for the template, right-click the VM and select "Template", then select "Clone to Template"

Enter a friendly name for your template - I have called mine oracle_linux_b01_u01 (b = build, u = update) - and select a folder; I have selected the template folder I created

Select the Host/Cluster on which you want to store the template. At this point you may get some errors like the below if the VM that you are cloning has any problems; in this case I had an ISO image attached to my CD/DVD drive - after I removed the attached ISO image it all worked

Select the physical location for storing the template files, I have put this on some shared iSCSI storage called "filer2_ds1"

Choose the disk format. I have chosen "Thin provisioned format", thus cutting down the disk space required; if you need more information on disk provisioning then see my virtual machine section

Lastly we get to the summary screen

If you go to the templates view and take a look at the newly created template, you can see that you cannot power on this VM; you can only deploy it to a VM or convert it to a virtual machine. If you need to update this template you need to convert it to a virtual machine, power it on, update it, power it off, then convert it back to a template

Creating a new VM from a cloned template - now that we have our template, how do we use it to create other VMs? Select the template and then select "Deploy a virtual machine from this template", then enter the name of the VM and select Production

Select a ESXi server to host the VM

Select the datastore to store the VM

Select thick format, so that the virtual disk is fully created, if you need more information on disk provisioning then see my virtual machine section

Here my vCenter does not offer customization, we will discuss this later

Finally the summary screen

Now when you look at your VMs you should see the newly created linux02 VM. At this point it is just like a normal VM - you may need to change the network, CD, etc. Also, when you power on the VM you may need to change the IP address, etc, for your environment

To convert a VM into a template and back into a VM, follow below. Make sure that the VM is powered off, right-click and select "Template", then choose "Convert to Template"; the whole process should be very quick, a few seconds

Convert a VM to a template and then back into a VM - you will see that the VM disappears from the list and appears in the "VMs and Templates" list; you can drag and drop the linux01 template to your template area

That's all there is to it. To convert back into a VM just right-click on the template, select "Template" and then select "Convert to Virtual Machine"; you will be asked a couple of easy questions (such as which ESXi server you want it deployed to). To clone a VM just follow below - again, cloning a VM is easy, just select the VM, right-click and select "Clone..."

Clone a VM

The same questions that were asked above will be asked here again, so no surprises.

To export a VM to OVF (Open Virtual Machine Format)

VMware's new portable format, OVF, allows vendors to create a virtual machine and upload it to VMware so that others can download and import it into their environment. There are now many virtual appliances in the VMware marketplace.

To export a VM into OVF, select the VM, then in the main vCenter window select file -> export -> "export to OVF template"

export a VM to OVF

Then comes a screen like below; fill in the details and select next

The export will take place; it took about 5 minutes for a standard 15GB Linux VM

Looking into the directory of the exported VM, you see the below

Resource Monitoring and Management

In this section I am going to cover resources (CPU, memory, disk and network), both monitoring them and managing them. From the monitoring point of view we will be using tools from both the vSphere client and vCenter, and trying to work out whether problems relate to the virtualization layer or the guest OS. In the management part I will be discussing resource parameters and resource pools, and how you can cap (limit) and also guarantee a minimum (reservation) of either CPU or memory to a VM or a resource pool.

Resource Monitoring

It is very tricky to identify bottlenecks in a virtualization environment as there can be many VMs. There are a number of 3rd-party tools that you can use to monitor your environment, and I use these tools to monitor the guest OS (I personally use MRTG); for the ESXi servers I use VMware's own performance monitoring tools in the vSphere client and vCenter. There are a number of new features with the latest release:

VMware performance counters added to Performance Monitor
Massive increase in the number of alarms
Better condition statements for alarms
A frequency option to determine how you receive emails, once or repeatedly
An acknowledgement feature to confirm that an alert has been dealt with
vmkusage-style performance charts
Coming soon, VMware vCenter AppSpeed, a virtual appliance that assists in monitoring performance

When looking for bottlenecks you must keep an open eye on both the virtual machine and the guest OS (this includes the running applications inside the guest OS), make sure that you still use the tools inside the guest OS, for example in windows use the task manager and in Linux/Unix use the tools like top, vmstat, iostat, but bear in mind that this tools were designed for running on physical server, all I am saying here is use all the tools available to you before you start point the finger at the virtual machine. vCPU I am now going to cover how the VMKernel allocates resources to the VM's, I will be covering CPU, memory, disk and networking, I have already touched on disk performance with multipathing and networking traffic shaping in other sections. Some issues involved in CPU resource allocation are VMKernel load balancing and scheduling, the number of virtual CPU's in VM's, hyperthreading and the use of virtual SMP. VM's execute their instructions on a physical CPU within the ESXi server, the VMKernel will monitor the load on the CPU's looking for a CPU that is doing VMKernel Load Balancing less work. If a CPU is heavy burdened it will reschedule that VM's threads to and Scheduling execute on another less busy CPU. This monitoring is configured at intervals of every 20 milliseconds, this can however be increased to make it less frequent

you feel your CPU load across an ESXi server is relatively uniform. The scheduler is designed to distribute CPU requests intelligently within the ESXi server and reduce contention (two VM's fighting over CPU resource) as much as possible.

Single vCPU's v Multiple vCPU's

A single-vCPU VM executes its threads on a single physical socket (or core), whereas a dual- or quad-vCPU VM executes its instructions on more than one physical socket or core. VMware FT currently only supports VM's with just one vCPU; also, the more vCPU's you give a VM, the harder it is for DRS to find an opportunity to perform a VMotion migration to move the VM to a better ESXi server. Having plenty of sockets or cores to run on is sometimes referred to as slots; the fewer slots an ESXi server or DRS cluster has, the fewer VMotion events take place. If you are using hyperthreading, the VMKernel treats each logical CPU as if it were a physical CPU, so a two-socket processor with hyperthreading enabled would appear as if it had four processors. When it comes to dual-vCPU and quad-vCPU VM's, the VMKernel scheduler always makes sure it runs the VM on logical processors in two different physical processors. The new Intel processors have been overhauled with a new version of hyperthreading (version 3) which has improvements of up to 30%. Under the Advanced settings on a VM, you can choose from three mode options

Any - more than one VM can execute on the logical CPU (default)
None - the VM receives all of the CPU and it is not shared with other VM's
Internal - a VM with two vCPUs gets exclusive access to a CPU and its logical CPUs

Hyperthreading

You can also select which processors to use, using scheduling affinity (see below image)

Virtual SMP

To get the benefit of virtual SMP you need to have plenty of sockets and cores. Simply put, the more sockets and cores you have, the easier it is for the VMKernel scheduler to find a CPU that is not in use or not heavily used. Note however that

the application still has to take advantage of SMP, otherwise you may not see any performance improvement. If you have P2V'd a physical machine, the guest OS may need a kernel update to either downgrade or upgrade its processor requirements; for example, in the world of Windows a kernel update means changing the ACPI function from ACPI Multiprocessor to ACPI Uniprocessor within the device manager. Newer versions of Windows will do this for you but older versions require a manual process, so keep an eye on newly P2V'd servers and double-check whether you need to update the hardware abstraction layer (HAL).

Memory

The balloon driver and the VMkernel VM swap file affect the allocation of memory to your VM. If you run more than one copy of Windows or Linux, the VMkernel can identify that very similar information is likely to be duplicated in memory (for example, explorer could be running in multiple Windows VM's). The VMKernel spots these duplicates and produces a single read-only copy; the read-only attribute prevents the possibility of one VM modifying the memory contents of another VM. The sharing of memory pages remains invisible to the guest OSs inside the VM's. VMware's own research has found that around 30% of guest OS memory is duplicated between VM's. This memory sharing is known as Transparent Page Sharing (TPS). You can clearly see TPS in action on the Resource Allocation tab of a VM: if you look at the memory panel and the shared parameter, you can see my Windows 2008 server is sharing 225MB of memory with other VM's.

You can also use the esxtop command to see what you are sharing; when you first start esxtop it will display CPU usage, type m to display memory usage.
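For example, from the local console or via resxtop (shipped with the vMA and the Linux vCLI), a quick look at page sharing goes roughly like this; the field layout is from memory of the 4.x versions, so treat it as indicative:

## start esxtop locally (or resxtop --server=vmware1 remotely) and press m for the memory screen
esxtop
## look for the PSHARE/MB line, which breaks the TPS figures down into shared, common and saving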

When you install VMware Tools into a VM, a memory driver is also installed. Its file name is vmmemctl, but it is also known as the balloon driver, because the analogy of a balloon is used to explain how it works. This driver is only engaged when memory is scarce (when contention is occurring); it inflates by demanding pages of memory from other VM's (these may have low priority). The guest OS obeys its internal memory management techniques, freeing up RAM by flushing old data to its virtual memory (paging file or swap partition) to give the vmmemctl driver ranges of memory. Rather than hanging on to this newly allocated memory, the vmmemctl driver hands it over to the VMKernel, which in turn hands it over to the VM's that require it. When demand has returned to normal, the balloon driver deflates and gracefully hands back the memory it claimed to the guest OS. You can see the ballooned memory on a VM via the Resource Allocation tab in the memory panel; as I have a quiet system there is no ballooned memory.

The VMkernel VM swap file is only used when a VM has used all of its allocated memory (as a last resort). You can see the swapped memory in the image above in the memory section. I will discuss the VMKernel swap file in much more detail in my resource management section below. Below is a table on what to look for when you are looking for performance problems using the performance charts in the vSphere client or vCenter

CPU

Use the ready value to get a more accurate performance picture of a VM. The ready value means the VM is ready to execute processes and is waiting for the CPU to allocate a slice of CPU time. Make sure that the ready value is low (<5%); if not, it means the VM is ready to run but the CPU is not ready to supply the CPU time it demands.

Memory

As mentioned above, look for any swapping or ballooning, which will indicate a memory shortage.

Network

There is a close relationship between network activity and physical CPU usage; this is because the load-balancing mechanism of the IP hash in itself causes the VMKernel to use the physical CPU. Lots of small TCP transactions inside a VM can cause the physical CPU to be busy, as the VMKernel needs to move packets from the physical NIC to the virtual NIC via the vSwitch. Make sure that VMware Tools has been installed (it installs a virtual networking driver) and make sure that if traffic shaping has been set up it is correct (it could be throttling bandwidth). The internal tools of the guest OS can help here as well.

Disk

VMFS is so lightweight that it can be disregarded as the source of a bottleneck. Look at multipathing and make sure you use the best method (round-robin?). Also look at memory; if it is exhausted then the VM may be paging or swapping, producing disk performance problems. When looking at the charts, look for "Kernel disk command latency", which indicates the average time the VMKernel spent on each SCSI command (<2-3ms). Secondly, look at "Physical device command latency", which measures the average time a physical device took to complete the SCSI command (<15-20ms).

Performance Charts

Both the vSphere client and vCenter have graphical charts; vCenter has an extra feature which allows you to go back in time. For the rest of this section I will be using vCenter. The first place you can look is the "virtual machine" tab on one of your folders; in my case I selected the "Production" folder. Here you can have a quick glance at all the VM's that are running, and you can see the CPU and memory resources being used
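If you prefer the command line for the CPU ready check, resxtop from the vCLI (or esxtop locally) shows the same counter; a sketch, where the host name is from my own environment:

## connect remotely and watch the default CPU screen; the %RDY column is the ready value
resxtop --server=vmware1 --username=root
## sustained %RDY above roughly 5% for a VM suggests CPU contention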

But if you really want very detailed performance information then we have to go to the performance tab (see image above). I am only going to gloss over these, just to give you an idea of what is available and what you have access to regarding the performance charts. The image below shows you the overall performance of a particular ESXi server with a number of VM's. The information displayed is of a Windows 2008 Server but you can display all the VM's; you can also adjust the time range (you cannot do this in the vSphere client), I have selected one month. You have CPU, memory, disk and networking information all on one screen.

The home screen view gives you information on the ESXi server itself; it is an ideal screen to see if your DRS is working correctly, as hopefully all ESXi servers should be utilized evenly.

Lastly you can drill right down to each component; each ESXi server and each virtual machine has a chart like this, just select the ESXi server or virtual machine and then select the performance tab.

You can even customize the chart and there are many options: line or stacked graphs, turning different counters on/off, and different time periods including custom ones.

Alarms

Personally I try to use a 3rd party software tool to manage monitoring (HP OpenView), as opening up each area's software would be a real pain; however for completeness I will show you briefly what VMware has to offer. In the latest version you can now have alarms on datacenters and clusters, and most of the important alarms are turned on by default. The built-in alarms and alerts at the top of the inventory in Hosts and Clusters are inherited down the inventory and applied to all ESXi servers and VM's. The below image identifies a small number of them.

You can modify an alarm by right-clicking it and selecting "edit settings"; here you have four tabs which are self-explanatory, and you can also enable or disable the alarm (see bottom left-hand corner).

To create an alarm follow below.

Create an alarm

This alarm will filter down to all hosts as it will be at the top of the hierarchy; from the top folder (win_2008 in my case), right-click and select alarm -> add alarm

Fill in the alarm name and description with something meaningful, make sure the alarm is enabled at the bottom

In the triggers tab, select add; the line then has drop-down lists from which you can choose the options you want. I am setting up an alarm to email me when a snapshot file is greater than 1GB

I then select the actions tab so that the alarm will email me; again it is the same as above, the line has drop-down lists that you can change

Finally the alarm is created and ready to go

To remove it, just right-click and select "remove". If you have not already configured mail within vCenter, then follow the steps below, first open the "vCenter server settings" from the home page

Then select the mail option and fill in the SMTP server details and sender account details; you can then test the alert above by increasing a snapshot to over 1GB

You can acknowledge alarms by selecting the triggered alarms tab and right-clicking on the alarm; you can clear the alarm by selecting "reset alarm to green"

Events and Tasks

Next I want to cover viewing events and tasks before I move on to managing resources. The tasks and events tab is a record of all the tasks and events that have taken place: powering VM's on/off, cloning, etc. This is a great source of information to see what has been happening in your environment.

If you select the show option at the bottom you can get a more detailed view of what went on regarding each task or event.

You also have the ability to schedule regular tasks: from the home page select management -> scheduled tasks, then by right-clicking on the page you can choose from a number of tasks to be carried out, such as cloning a VM, migrating a VM, etc

Resource Management

You can adjust resource parameters for CPU, memory and disk on a per-VM basis, or you can drop groups of VM's into resource pools to manage their CPU or memory. Resource pools allow you to treat VM's as groups instead of individuals, and to quickly apply settings to them. You can cap (limit) and also guarantee a minimum (reservation) of either CPU or memory to a VM or a resource pool. VMware's proportional share system provides more dynamic control over VM resource usage, as it responds to changes in resource demand relative to each VM and ESXi server. You can impose limits on a VM or resource pool for CPU resources (in megahertz) and for memory (in megabytes). When you create a VM you already set a maximum amount of memory; even if memory is available the VM will not allocate any more

than this maximum, and if all its memory has been depleted it is possible for the VM to use its VMKernel swap file. In contrast the CPU has no default limit; if the VM requires CPU time and it is available then it gets it. However you can restrain CPU-intensive VM's by placing a cap or limit which then controls the VM's CPU usage. VMware also uses reservations; you could even regard them as a way of meeting service level agreements (SLAs). For example, if the memory reservation on a VM is 1GB of RAM and that amount of physical RAM is not available, you will be unable to power on the VM. This is known as admission control: basically, if you exceed the physical limits of the ESXi server you will not be able to power on the VM. There are relationships between memory limits, reservations and the VMKernel swap file

Difference between limit and reservation - you have a VM with a 512MB limit and a 256MB reservation; powering on the VM would create a 256MB VMKernel swap file (512 - 256) and guarantee that the VM would receive 256MB of RAM. The VM would not power on if there was insufficient space for the swap file.

No difference between limit and reservation - if you set the limit to 512MB and the reservation also to 512MB and powered on the VM, ESXi would not create a VMKernel swap file at all; it would run the VM entirely in a memory reservation of 512MB.

Big difference between limit and reservation - if the VM was given a 16GB limit and the default of 0MB was used for the reservation, a 16GB VMKernel swap file would be created. This VMKernel swap file is created in the same location as the .vmx file, which could be on extremely expensive shared storage such as a SAN; you can relocate this swap file to a different location.
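In other words, the swap file size is simply the gap between the two settings; a quick worked summary of the three cases above:

.vswp size = memory limit - memory reservation
512MB limit - 256MB reservation = 256MB swap file
512MB limit - 512MB reservation = no swap file
16384MB limit - 0MB reservation = 16GB swap file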

There are two approaches below, depending on whether the administrator is optimistic or pessimistic about the VM's consuming all of their memory during a period of time, or on whether you want to get the most bang for your buck from your ESXi servers. Obviously if you are in a large corporation and cost does not come into it, you can fully load your servers with the maximum amount of memory and go the pessimistic route. Because I work for a small company I am in the optimistic group; I try to leverage the VMKernel swap file to overcommit on memory, especially if the VM's are web servers, exim (smtp) servers, etc.

Optimistic

If you have an ESXi server with 2GB of memory you could run up to eight VM's (each using 512MB (256MB physical + 256MB VMKernel swap file)) before running out of physical memory (2048MB / 256MB). If all the VM's used up all their memory you would get VMKernel swap activity, as each VM has a 256MB VMKernel swap file.

Pessimistic

If you wish to have a cast-iron guarantee that your VM's will always run in memory, you would choose the second option and set the VM limit the same as the reservation, so no swap file is created. Again using an ESXi server with 2GB of memory, you could run up to four VM's (each using 512MB (512MB physical)) before running out of memory (2048MB / 512MB); remember there will be no additional VMKernel swap file.

I increased the allocated memory on my Ubuntu server to 5GB; because this exceeds the memory available, a VMKernel swap file is created with the .vswp extension. This is how VMware overcommits on memory. Remember that the VMKernel swap file is no substitute for physical memory, but it may be useful for odd periods when the ESXi server is busy. Also remember that this VMKernel swap file consumes disk space, so be aware that if your VM uses large amounts of memory you may fill up the VMFS volume; you will get an error message stating that there is no free space to create the swap file.
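You can see the swap file sitting alongside the VM's other files; a sketch from the ESXi shell, where the datastore and VM names are from my own environment and purely illustrative:

## list the VM's home directory on the datastore; the .vswp appears while the VM is powered on
ls -lh /vmfs/volumes/filer2_ds1/ubuntu01/
## expect to see files such as ubuntu01.vmx, ubuntu01.vmdk and ubuntu01.vswp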

This is what happens when you do not have enough resources even to power on the VM

The proportional share system allows you to indicate that when a resource is scarce, one VM or resource pool is more important than another. Share values can be applied on a per-VM basis or on resource pools; shares act dynamically on resource demands. The share value can be specified as a number or by user-friendly labels like High, Normal and Low. Remember though that shares only come into play when resources are scarce and contention is occurring. Many customers do not use the share system; it does require a bit of thinking to set up and maintain, and with cheap memory and CPU costs most companies will simply purchase additional hardware, but when times are lean the share system can help. You can set share values using friendly labels; here are those labels and the resource settings. VMware has placed more attention on memory: a VM with lots of memory gets lots more shares, so memory-intensive VM's should get more priority.

High - 2000 shares per vCPU; 20 shares for every 1MB allocated to the VM
Normal - 1000 shares per vCPU; 10 shares for every 1MB allocated to the VM
Low - 500 shares per vCPU; 5 shares for every 1MB allocated to the VM
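To make the ratios concrete, here is a quick worked example using the table above (the VM sizes are mine, purely illustrative):

1 vCPU, 1024MB VM at Normal: 1 x 1000 = 1000 CPU shares; 1024 x 10 = 10240 memory shares
2 vCPU, 2048MB VM at High:   2 x 2000 = 4000 CPU shares; 2048 x 20 = 40960 memory shares

Under contention the second VM would therefore receive roughly four times the CPU time of the first.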

It is possible to peg a particular VM to a particular CPU; this is known as processor affinity. Normally the ESXi server dynamically moves the VM to work on the best CPU. Personally I would only use this feature as a last resort and thus have not used it yet. If you select the VM then "edit settings" and select the resources tab, you will find it under "advanced CPU"; just select the physical CPU ID that you wish to peg the VM to, remembering they start from 0.

Resource Pools

Resource pools are optional; you can have stand-alone ESXi servers and DRS clusters without resource pools, basically accepting the default, which in most cases is adequate. You can only control CPU and memory resources within a resource pool, and they can be created in both the vSphere client and vCenter. Resource pools are ideally suited when you are using clusters, where you have lots of resources that can be divided amongst the ESXi servers. To create a resource pool follow below. Right-click on the ESXi server (or cluster) on which you wish to create the resource pool

Creating a resource pool

Type a friendly name, then you can adjust the resource allocation; I changed both resources to high

The resource pool will appear as a folder under your ESXi server (or cluster)

Simply drag and drop your VM's into the folder; once in the folder they will be under resource pool control. You can see that I have two VM's in my Production resource pool, and because the VM linux01 uses more memory it has a greater number of shares, 2000 shares instead of 1000 shares for the other VM; this means that this VM will get more resources when the ESXi server has any contention.

To remove the resource pool, simply right-click on it then select remove

Security

This will be a very brief section on VMware security; I am not going to cover every aspect of security within VMware but will lightly touch on the commonly used areas. Security on a standalone ESXi server and in vCenter is the same, the only difference being where the users and groups come from: in a stand-alone ESXi configuration they are held locally on the ESXi server, however if you use vCenter you can take advantage of Active Directory (AD) in Windows, and accounts can also be stored locally. Note that users and groups come from the underlying O/S: if using a stand-alone ESXi server the users/groups are held locally on the server (/etc/passwd and /etc/group); if using vCenter running on Windows 2008, these will be local Windows accounts or, if part of a domain, domain accounts. You cannot administer user accounts or groups via VMware, only roles. The VMware model of security involves three components: roles, groups and users; this type of setup is very similar to databases. First you create roles of responsibility, then you add users/groups to allow them to perform tasks. As vCenter has an organization of system folders, datacenter objects and subfolders, a system of inheritance exists: if you set a role on a folder it will pass your privileges down the folder hierarchy, and vCenter does a good job of hiding objects that a user has no privilege to see. There are 11 predefined roles; I have noted which are available on a standalone ESXi server and which in vCenter.

Roles

No Access (ESXi and vCenter) - this role has no privileges and can be used to deny access to an object. You can use the No Access role to deny a user access to an object when that user has been granted permissions to a parent object.
Read-Only (ESXi and vCenter) - gives you the ability to only view objects; no changes can be made.
Administrator (ESXi and vCenter) - has the highest level of privileges; you can access all objects, which includes managing them.
Virtual Machine User (vCenter only) - assigns privileges only to VM's; you can power on/off/reset a VM and open a remote console.
Virtual Machine Power User (vCenter only) - extends the role above to include editing some of the VM settings and creating and reverting snapshots.
Resource Pool Administrator (vCenter only) - has the ability to create resource pools of CPU and RAM and allocate groups of VMs to the pool.
Datacenter Administrator (vCenter only) - allows you to create datacenter objects, but you have very limited access to VM's.
Virtual Machine Administrator (vCenter only) - allows full control over VM's, including deleting them.
VMware Consolidated Backup User (vCenter only) - allows just enough privileges for Consolidated Backup to function.
Datastore Consumer (vCenter only) - has only one privilege, the ability to allocate space on a datastore.
Network Consumer (vCenter only) - has only one privilege, "assign network" (to a virtual machine, host service console, VMKernel virtual NIC or physical NIC), under the network privilege.

So let's create a role and assign some privileges. To create a role, from the home page in vCenter select "roles" from the administration section

Roles

select "add role", and type in a meaningful name, then start selecting the privileges for this role, I am not going to cover all the privileges here, I have just selected "virtual machine", this in itself greatly expands into many privileges.

As you can see the new role appears at the bottom

To remove a role just right-click that role and select remove; you also have the ability to clone roles and then change them a little. One note about removing roles: you may see the below warning message, which is just asking what you want to do about the existing users within this role, either leave them with no access or assign them to another role.

Using roles and privileges to control access to a certain VM can be a bit of a task, as you need to create the privileges from the top down: first start with the datacenter folder, then go down to the ESXi hosts and finally the VM itself. First select the area that you want to add a privilege to; this could be a folder, a host or a VM. In the example below I have selected the production folder. The panel on the left is what appears first; you can change the assigned role to your newly created role, then you need to add users or groups, which is done by selecting the "add" button, which produces the right panel. Here you can choose from your user/group list; remember these will be either local accounts or domain accounts depending on what you have set up

assign a privilege

When you have chosen your users/groups you should see something like below; here I have allowed a user called vallep to have read-only access, thus when I log in via the vSphere client or vCenter using the vallep account I will only have read-only access and will not be able to change anything.

Once you have added some users or groups you can go back to the roles home page and see the changes you have made; you can clearly see the hierarchy format used.

Personally I just stick to the already created roles and generally do not need to create additional ones; most of the time I can get away with using the administrator, virtual machine user and read-only roles. Lastly I just want to touch on the default ESXi accounts that are created; there are 4 of them.

root - this account has full access; by default the password is blank when you have just installed ESXi. It is also used when you add your ESXi host to vCenter.
dcui - this account's primary role is to configure hosts for lockdown mode from the direct console; the user is used as the agent for the DCUI and should not be modified or used to log in to your ESXi server.
nfsnobody - this account is used to access NFS datastores.
vpxuser - this account is used by vCenter to issue commands to your ESXi server regardless of the end user that is connected to the vCenter server; this user is granted the administrator role.

Datastore Access

If you want to improve security regarding the datastores within vCenter, you should create folders and then drag and drop the datastores into the folders, after which you can assign permissions on each folder, restricting access to the datastores. Here is what I have done with my test environment: I created four folders and placed the datastores into them, so I could then place different permissions on each of the folders.

I will come back to this section at a later date to update it with advanced security features, but that's all I am going to cover for now.

Storage (Local, iSCSI and SAN)

In this section I will be covering different types of storage (see below); I will also be discussing the Virtual Machine File System (VMFS) and how to manage it.

Local storage
iSCSI
NAS (which I will not be covering)
SAN (fibre)

There are a number of new features with version 4.1


Improved user interface for multipathing
Improved storage views in vCenter
A built-in increase option to expand a VMFS volume size to take up available free space
Pluggable Storage Architecture (PSA) support, which allows storage vendors (EMC, IBM, NetApp) to add plug-ins to vCenter
Ability to organize datastores in folders and set permissions to filter virtual machine access to storage
Improvements in the VMFS resignature process; access to writeable VMFS volume snapshots without requiring resignaturing

HBA Controllers and Local Storage

There are a number of ways to attach storage to an ESXi server; I will first cover local storage and HBA controllers. In my environment I purchased two local disk drives (500GB) to use as local storage. Many companies have no requirement for an expensive SAN solution, so a server with a large amount of storage is purchased instead. This storage can be used to create virtual machines; please note it does not allow you to use features such as vMotion, as you require shared storage for that, however it is an ideal solution for a small company wishing to reduce the number of servers within their environment. When you first install ESXi you have the choice of where the O/S will be installed, after which you can configure additional storage to be used for virtual machines, but first I want to talk about controllers (HBA's). When you open the vSphere client and select the ESXi server -> configuration and then storage adapters, you will be presented with a list of the storage controllers attached to your system. On my HP DC7800 it has found 4 controllers, to 2 of which I have attached a 146GB SATA disk (used for the O/S) and a 500GB SATA disk (used for the VM's). The below image is the 146GB disk I use for the ESXi O/S; it is attached to the vmhba0 controller and is of block SCSI type. I try to keep this purely for the O/S and keep all virtual machines off this disk.

The second disk is a 500GB Seagate disk and is attached to vmhba1; again it is a block SCSI type disk, and this is where I keep some of my virtual machines. If you look closely enough you can see the disk type and model number in case you have to replace the disk like for like; in my case it is a ST3500418AS Seagate disk.

By selecting the paths tab you can expand to see how the disk is connected; if this device were multipath'ed you would have all the paths listed here, and as you can see we have one active path. We will set up a multipath device in the iSCSI section later and look at the various options that come with multipathing.

Here is an example of a multipath'ed server I have at work; here you can see in the targets line that I have 17 devices but 34 paths. In this example I have a Brocade 825 HBA controller connected to an IBM XIV SAN. Also notice the World Wide Name (WWN), which is associated with fibre disks. The VMware ESXi server also supports its own LUN masking, if your SAN does not support this already.

If you have a server in which you can hot-swap hard disks, you can rescan a particular controller to pick up new or replaced drives; just right-click on the controller and select rescan, as seen in the image below. You would also use this option to rescan a fibre HBA controller to pick up any new LUN's.

You can also rescan all controllers by selecting the "Rescan All..." icon in the top right-hand corner of the storage adapters tab. You get the options below; I generally tick both and then select OK, and any new devices will be added and can be used.

The last piece of information regards the naming convention of the controllers. The ESXi server uses a special syntax so that the VMKernel can find the storage it is using; it is used by local storage, SAN and iSCSI systems. The syntax of a device is vmhbaN:N:N:N:N, a sequence of numbers that tells the VMKernel how to navigate to the specific storage location. The syntax of the path is below; please note this is different to older versions of ESX.

vmhbaN:C:T:L:V
N - HBA number
C - channel (used in iSCSI environments)
T - target
L - LUN
V - volume (this may or may not appear, depending on whether you use volumes)

So for example vmhba1:C0:T0:L10 would mean vmhba controller 1, channel 0, target 0, LUN 10.

Using SAN's

I have already covered some of the SAN disk usage in the above section. When you install a fibre HBA controller you should be able to see it in the storage adapters section of the ESXi server; the image below is an ESXi server with a Brocade HBA installed. You can see the WWN of the HBA controller and the attached LUN's; clicking on the paths tab displays all the LUN's attached to this controller. As I mentioned above, you can add additional LUN's by simply rescanning the HBA controller (providing LUN masking is set up correctly on the SAN). An ESXi server can handle up to 256 LUNs per controller, from LUN0 to LUN255. On some SAN's LUN0 is a management LUN and should not be used, unless you want to run management software within your virtual machine.

Here is a multipathing screen shot of one of the disks, I will be covering multipathing in the next section and showing you how to set it up and configure it with my test environment.

You can also rescan from the command line:

## rescan a specific adapter
esxcfg-rescan <adapter name>
esxcfg-rescan vmhba1

One problem with older versions of ESX was that if you created a new LUN for 32 ESX servers you had to go into each one and rescan; in version 4.1 you can use vCenter

to rescan many hosts with a single click: just right-click the vCenter object (in my case it was called Production) and select "Rescan for Datastores". You may get a warning message as it may take some time to complete.
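If you want the same rescan remotely rather than at the console, the vCLI ships a vicfg-rescan command; a sketch, where the host name and credentials are from my environment (the vCLI itself is covered in the advanced tools section later):

## remote equivalent of esxcfg-rescan, authenticated against the host
vicfg-rescan.pl --server=vmware1 --username=root --password=xxxx vmhba1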

Using iSCSI

iSCSI is the cheaper alternative to fibre; it still offers a LUN as a SAN does, what makes it different is the transport used to carry the SCSI commands (it uses port 3260, so you may need firewall ports opening). iSCSI uses the normal network cabling infrastructure to carry the SCSI commands, which means you do not need any additional equipment (however good-quality network cards should be purchased); this is known as the VMware software initiator. You can also buy a NIC with iSCSI support; these are known as hardware initiators. VMware's iSCSI software initiator is actually based on the Cisco iSCSI initiator, and the entire iSCSI stack resides inside the VMKernel. The initiator begins the communications (the client) to the iSCSI target (disk array); a good NIC will have what is called a TCP Offload Engine (TOE), which improves performance by removing load from the main CPU. There are a number of limitations with iSCSI that you should be aware of

You can install ESX to an iSCSI LUN for iSCSI booting purposes, but you must use a supported hardware initiator
Only the hardware initiator supports static discovery of LUN's from the iSCSI system; dynamic discovery is much easier to set up and works with both software and hardware initiators
There is no support for running clustering software within a virtual machine using iSCSI storage

iSCSI does support authentication using the Challenge Handshake Authentication Protocol (CHAP) for additional security, and iSCSI traffic should be on its own LAN for performance reasons. iSCSI has its own naming convention: it uses the iSCSI Qualified Name (IQN), which is a bit like a DNS name or reverse DNS. The below image is my current iSCSI setup; as mentioned in my test environment section I am using openfiler as an iSCSI server. The IQN name in my case is iqn.1998-01.com.vmware:localhost-7439ca68; it is

configured the first time you configure the software adapter and can be changed if you want a different name.

My IQN iqn.1998-01.com.vmware:localhost-7439ca68 breaks down as

iqn - is always the first part
1998-01.com.vmware - domain registration date
localhost-7439ca68 - the name itself (this is the part you should change if you want a more friendly name)

Basically the IQN is used to ensure uniqueness. Configuring the ESXi server to use iSCSI with the software initiator requires setting up a VMKernel port group for IP storage and connecting the ESXi server to the iSCSI adapter. To set up a VMKernel storage port group follow below (see the command-line sketch after the screenshots).

1. In the vSphere client, select the ESXi server
2. Click the configuration tab, and in the hardware pane select networking
3. Click the add networking link
4. Choose VMkernel and click next
5. Click next in the dialog box
6. In the port group properties dialog box type a friendly name for this connection (I chose viSCSI)
7. Then type an IP address and subnet mask for the VMKernel storage port group
8. You can optionally select a default gateway

IQN

iSCSI VMkernel port group

If you followed the above you should have something looking like below
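The same port group can be created from the command line with the vCLI; a minimal sketch, assuming my IP details and that vmnic1 is the NIC set aside for iSCSI (the vicfg commands also take the usual --server/--username/--password connection options, omitted here for brevity):

## create a vSwitch, attach the iSCSI NIC, add the port group and its VMkernel NIC
vicfg-vswitch.pl -a vSwitch1
vicfg-vswitch.pl -L vmnic1 vSwitch1
vicfg-vswitch.pl -A viSCSI vSwitch1
vicfg-vmknic.pl -a -i 192.168.1.90 -n 255.255.255.0 viSCSI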

Next we have to connect the ESXi server to the iSCSI software adapter. First you need to select the iSCSI storage adapter

connecting ESXi to an iSCSI software adapter

select properties to open the below screen, then select configure. This is the iSCSI IQN name; you can change this if you want, I left mine as the default

select the "Dynamic Discovery" tab and select the add button, enter the IP address of your iSCSI system, leave the default port 3260 unless this is different. You may have noticed that I have two connections this is to the same iSCSI server, this is related to multipathing and we will be looking at this later. Once you have entered the details and clicked OK you will be prompted for a rescan

After a period of time, in the "Static Discovery" tab you should see your iSCSI LUN's, in my case I have 3 LUN's (the LUN's are multipath'ed hence why you see 6 of them).

You can get more details about your LUN's in the storage adapter panel as below, here you can see 3 attached LUN's two 390GB LUN's and one 341GB LUN, also if you notice I have 3 devices but 6 paths which means I am using multipathing.

Lastly I want to talk about multipathing. I have set up my openfiler server to use two interfaces; to multipath, all you do is carry out the above task "connecting ESXi to an

iSCSI software adapter" for both IP addresses, after a rescan the "Static Discovery" tab you should display all the LUN's attached, in my case I have 3 LUN's of which each one is multipath'ed (hence why you see 6 targets).

You can confirm that each device has two connections by selecting the disk, right-clicking and selecting "manage paths"

You then get the below screen, I have two active paths, one using IP address 192.168.1.75 and the other 192.168.1.76 (the preferred one) both are using port 3260.

By default the ESXi server will use a fixed path for the I/O, but you can change this; other options include "round robin", which makes use of both network adapters. Just select the down arrow on the "path selection" list, then click on the change button to update. As you can see in the below image I have changed this disk to use "Round Robin"; both paths are now active and there is no preferred path.
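The path selection policy can also be flipped from the command line. A sketch using the vSphere 4.x esxcli nmp namespace; the device identifier here is made up for illustration, yours comes from the device list:

## list devices to find the identifier, then switch its policy to round robin
esxcli --server=vmware1 nmp device list
esxcli --server=vmware1 nmp device setpolicy --device naa.6090a038f0cd4e5b --psp VMW_PSP_RR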

So what happens when you lose a path? We get a dead connection; the ESXi server continues to function but now has less resilience

All of this can also be viewed from the events tab; you can see me disconnecting and then reconnecting the network cable. The events can be captured by 3rd party software like BMC Patrol and then monitored with the rest of your environment.

There are a number of commands that you can use to display and configure multipathing:

esxcfg-mpath -l - detailed information
esxcfg-mpath -L - list all paths with abbreviated information
esxcfg-mpath -m - list all paths with adapter and device mappings
esxcfg-mpath -b - list all devices with their corresponding paths
esxcfg-mpath -G - list all multipathing plugins loaded into the system
esxcfg-mpath --state <active|off> - set the state for a specific LUN path; requires the path UID or path runtime name in --path
esxcfg-mpath -P - used to specify a specific path for operations
esxcfg-mpath -d - used to filter the list commands to display only a specific device
esxcfg-mpath -r - restore path settings to configured values on system start
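For example, combining the flags above to take one path offline; the runtime path name here is illustrative and would come from the "esxcfg-mpath -b" listing:

## list devices and their paths, then disable a single path
esxcfg-mpath -b
esxcfg-mpath --state=off --path=vmhba33:C0:T1:L0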

iSCSI is very easy to set up on VMware; the main problems normally reside in the iSCSI server/SAN and granting the permissions for the ESXi server to see the LUN's (this is known as LUN masking). I will point out again that you should really use a dedicated network for your iSCSI traffic, as this will improve performance greatly.

Virtual Machine File System (VMFS)

VMFS is VMware's own proprietary filesystem; it supports directories and a maximum of 30,720 files in a VMFS volume, with up to 256 VM's in a volume. A VMFS volume can be used by virtual machines, templates and ISO files. VMFS has been designed to work with very large files such as virtual disks, and it fully supports multiple access, which means more than one ESXi server can access the same LUN formatted with VMFS without fear of corruption; SCSI reservations are used to perform the file and LUN locking. VMware made improvements in version 3 to significantly reduce the number and frequency of these reservations; remember these locks are dynamic, not fixed or static. Whenever an ESXi server powers on a virtual machine, a file-level lock is placed on its files, and the ESXi server will periodically confirm that the VM is still functioning - still running the virtual machine and locking the files - which is known as updating its heartbeat information. If an ESXi server fails to update these dynamic locks or its heartbeat region of the VMFS file system, the virtual machine files are forcibly unlocked. This is pretty critical to features like VMware HA: if locking were not dynamic, the locks would remain in place when an ESXi server failed and HA would not work. In the latest version you can have up to 64 ESXi servers in the same DRS/HA cluster. VMFS uses a distributed journal recording all the changes to the VMFS; if a crash does occur it uses the journal to replay the changes and thus does not carry out a full fsck check, which can prove much quicker, especially on large volumes. You should always format new volumes via the vSphere client, as this keeps the system from crossing track boundaries, known as disk alignment. The vSphere client automatically ensures that disk alignment takes place when you format a VMFS volume in the GUI. Note however that if you are going to be using RDM files to access raw or native LUN's, disk alignment should be done within the guest operating system. One big word of warning: you have a huge potential to screw things up pretty badly in VMware, so take note of any warnings and make sure that your data is secured

(backed up) before making important changes; there is no rollback feature in VMware. When you have a new disk and you go to format it, you will be asked if you want to set the block size (1MB, 2MB, 4MB or 8MB); this controls the maximum file size that can be held in a VMFS volume (256GB, 512GB, 1024GB or 2048GB respectively). Block sizes do not greatly affect performance but they do affect maximum virtual disk (VMDK) file sizes; I will discuss virtual machine files in my section on virtual machines. My suggestion would be to set the block size to 8MB, as this allows you to have very large VMDK files. You can use the entire disk or make the VMFS volume smaller than the LUN size if you wish, but it is not easy to get that free space back, and that's if anyone remembers where it was. It is possible to create multiple VMFS volumes on a single LUN, but this can affect performance as it can impose a LUN-wide SCSI reservation which temporarily blocks access to multiple VMFS volumes when only one VMFS volume might need locking, so try to keep to one VMFS volume per LUN. You will be required to set a datastore name during the format process. VMFS volumes are known by four values: volume label, datastore label, vmhba syntax and UUID. Volume labels need to be unique to ESXi servers whereas datastore labels need to be unique to vCenter, so try to create a standard naming convention for your company and give meaningful names. Enough talking, let's format a new iSCSI LUN. In the vSphere client or vCenter select the ESXi server -> configuration -> storage, then select the "Add Storage" link. You should get the screen below; select Disk/LUN (iSCSI, fibre and local disk), then click next

Format a new VMFS volume

Select the disk/LUN that you want to format, I have only one free disk to add

Here I have no options but to continue

Now give the disk/LUN a label. I have called mine "filer2_bk1", which basically means that this disk is from iSCSI server filer2 and it is my first backup disk. We will be using this later when cloning and snapshotting VM's, etc.

Here is where you set the block size, I have chosen 8MB and decided to use the entire disk

Now we get a summary screen; most importantly, make sure you are happy, as there is no going back

The disk/LUN is formatted and then appears in the disk storage screen, as seen below
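For reference, the same format can be performed from the command line with vmkfstools; a sketch using my datastore label, where the device path is illustrative (take the real one from the storage adapters view):

## create a VMFS3 volume with an 8MB block size and the label filer2_bk1
vmkfstools -C vmfs3 -b 8m -S filer2_bk1 /vmfs/devices/disks/naa.6090a038f0cd4e5bd9ccd44ac9d36a96:1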

There are a number of properties that you can change on an already created volume: increase its size, rename it, manage the paths (as seen in the iSCSI multipathing section). Just right-click on the volume and select properties and you should see the screen below; it also details various information on the volume, such as the maximum file size and the file system version number. In older versions of ESX you had to increase VMFS volumes using extents; it is now recommended that you use the increase button on the properties window (see below image). You can double-check whether you have any free space by looking at the extent device panel in the right-hand window: the device capacity

should be greater than the primary partition size. Also make a note of the disk name in the left panel, as this is the list you will be presented with when increasing the volume size.

Viewing Disks

There are a number of places where you can get details on the storage within your ESXi server, and this is where vCenter has advantages over the vSphere client. One of the nice features of vCenter is the map diagrams that you can view; there are many options you can change, as you can see in the below image, so that you can display just the information you need, however the diagram can become messy if you have lots of ESXi servers, storage, etc.

Another nice view in vCenter is the folder (in my case the Production folder) -> storage views screen; here you can look at either the reports or the maps. Again the map has many options and it looks nice in your documentation.

Report

Map

The rest of the diagrams I have used in this section come from the vSphere client software; the vCenter software gives you additional views and the added bonus of seeing all the ESXi server views in one place, otherwise you would have to log in to each ESXi server.

Test Environment

If possible it is better to get yourself on a VMware course, but as times are hard companies are reluctant to provide training. I have set up a small test rig that allows me to play around with many of the VMware features; apart from Fault Tolerance (unless you want to spend lots of money) my rig is able to handle the following

vMotion
Storage vMotion
Distributed Resource Scheduler (DRS)
Distributed Power Management (DPM)

I have tried to keep my setup as cheap as possible; the only restriction is that the hardware must support 64-bit and virtualization. The other requirement is the SAN; in this case I have used an old PC as an iSCSI SAN using the openfiler software, which allows all ESXi servers to access the shared storage. Here is what I have purchased; I paid about 700 for the lot, which is still not cheap but cheaper than a VMware course. What I am building is not supported by VMware but it does run well and enables me to learn the VMware ESXi server and many of the features that come with it. I have purchased two HP DC 7800 desktop PC's with the following configuration

ESXi Servers

DC 7800 with E6750 Intel processor (dual core - 2.66GHZ); it came with an internal 120GB disk which I will use for the ESXi O/S
2 x 6GB RAM (each server will have 6GB installed)
2 x internal 500GB disks which I will be using for virtual machines as well
4 x Intel Pro/1000 CT network adapters (one for iSCSI and the other for the private network (vMotion))

I will be naming my servers vmware1 and vmware2 I have used an old PC that I built myself many years ago which has the following configuration, it's not powerful but actually runs very well.

iSCSI SAN (openfiler)

A8N-SLI deluxe motherboard
AMD Athlon 3200+ 64-bit CPU (2GHZ)
4GB RAM
250GB internal hard disk for the openfiler O/S
2 x 1TB internal hard disks for virtual machines and backups of virtual machines
2 x Intel Pro/1000 CT network adapters (will be used for the iSCSI network)

I will be naming my openfiler server filer1.

Networking

I have two netgear GS608 8-port gigabit switch/router/hubs. One will be

used for the public network; the other will be used for both the iSCSI network and the private network. In an ideal world you would split the iSCSI traffic and the private network onto two different switches, but I have tried to keep the costs down.

KVM and Monitor

I have used an old 14" LCD monitor attached to a 4-port KVM switch, which is connected to both the ESXi servers and the openfiler server.

We will be setting up a vCenter server; this will be created as a VM on one of the ESXi hosts, which is now the preferred method from VMware. I will be using a trial Windows 2008 Server O/S. There are not many options when it comes to vCenter, so check out the VMware documentation on what is supported.

Networking

I have set up the following networking; I am not going to create a diagram as it is too small a network, I only have two routers. You might have noticed that I have two network connections for the openfiler server; my plan is to perform some testing of the I/O failover in VMware. It is a crude method but it works to explain how VMware handles network failures.

Public LAN - Main PC 192.168.0.60 (attached to port 1), openfiler 192.168.0.75 (port 2), vmware1 192.168.0.190 (port 3), vmware2 192.168.0.191 (port 4); for the public LAN I will use one of the netgear GS608 routers
iSCSI LAN - Main PC n/a, openfiler 192.168.1.75/192.168.1.76 (attached to ports 1 & 2), vmware1 192.168.1.90 (port 3), vmware2 192.168.1.91 (port 4); for the iSCSI network I will use the first 4 ports on the second GS608 router
Private LAN - Main PC n/a, openfiler n/a, vmware1 192.168.2.90 (port 7), vmware2 192.168.2.90 (port 8); for the private network I will use the last 2 ports on the second GS608 router

Openfiler Storage

The openfiler server has two 1TB disks that I will use for VMware storage; I am not going to explain how to create these volumes but will point you in the direction of the openfiler web site.

Shared storage across both ESXi servers: filer1_vm1_ds1 and filer1_vm2_ds1. Both these storage areas will be accessible to both ESXi servers, which will allow us to use vMotion. I have used two disks just in case you decide to set up a cluster, for example a Veritas cluster; each ESXi server can then have its VM's on different disks, getting better performance.

Non-shared storage used to back up specific ESXi servers: filer1_vm1_bk1 and filer1_vm2_bk1. I will create two areas that will allow us to back up VM's; these areas will not be shared.

Here are some snapshots of my openfiler configuration; hopefully you will get an idea of what I have set up, but it is up to you what your requirements are. You don't need a very powerful system to run openfiler; as you can see I have a very basic server and it is fine for testing purposes, and it works well with the ESXi server setup that I have.

The below image is my network setup; as you can see I have three network connections, one for the public network and two for the iSCSI network. I have also added the two ESXi servers to the access configuration, which will be used during the iSCSI setup; all ports have been configured at 1000Mb/s.

As I mentioned, I have two 1TB disks to play around with; I have created two volume groups, data01 and data02, and each volume group has one disk.

On disk one I have created two volumes: filer2ds1, which will hold VM's, and filer2bk1, which will be used for backing up VM's

On disk two I have created two volumes: filer2ds2, which will hold VM's, and filer2bk2, which will be used for backing up VM's

Once the volumes have been created you need to set up the iSCSI part. I have given my volumes target names like filer2.ds1 and filer2.ds2, but it is up to you what naming convention you wish to choose. The image below shows one of the volumes mapped to its target; in this case volume filer2ds1 is mapped to target filer2.ds1

The last part is to allow access from the ESXi servers to your target, in this case I am allowing access from both ESXi servers to the filer2.ds1 target, which if you remember is the filer2ds1 volume.

That's pretty much it, openfiler is a very easy to use piece of software and I can highly recommend it for testing purposes (not for Production).

ESXi Setup

I have set up both ESXi servers the same, using the internal 120GB disk as the O/S disk. I will be using the other internal 500GB disk as storage as well; sometimes you may just set up an ESXi server with internal storage only (no SAN), so this gives us the opportunity to see what differences there are between internal disks and SAN disks. Each server will have 6GB of RAM, which will allow me to run multiple VM's on one ESXi server. I am not going to show you how to install the ESXi server as this has been done to death on the web; all I can say is that it is very easy. I will however explain in detail how to set up iSCSI on an ESXi server in my virtual storage section. Here are some snapshots of my ESXi server setup; I will go into greater detail on how to set up networking, storage, etc, but for the time being here is a quick summary of what I have. As you can see the server is dual core using an E6750 processor, and the hardware is a HP DC 7800 desktop PC. It is not very powerful but it is able to run a number of VM's simultaneously, which will be handy when we start testing DRS.

The more memory you have, the more VM's you can run simultaneously; I have 6GB in each ESXi server, which is enough for a test environment

Don't worry too much about the storage at the moment, as I will go into greater detail on how to set it up in my storage section. As you can see I have the two iSCSI volumes (filer2_bk1 and filer2_ds1); I also have two internal disks in the HP DC 7800, vmware_ds1, which is what the O/S resides on (I don't use this for any VM's), and vmware_vs1, which is a 500GB disk that I use for cluster VM's like

Sun cluster, Veritas cluster, etc.

Lastly, the NIC's: I have three network interfaces, one for the public network, one for the iSCSI network and one for the private network, which will be used for vMotion, etc. Again I will go into greater detail in my network section.

In future sections I will show you how to set up all of the above and other features of VMware.

Final Comments

In the end you should have two ESXi servers set up with access to a public, private and iSCSI LAN; each server should have iSCSI storage (400GB and 350GB) and internal storage (500GB). You can change anything that I have mentioned above, just make sure that the servers/desktop PC's you buy for the ESXi servers support 64-bit and virtualization.

Advanced Configuration Tools

In my last section in the VMware series I am going to cover advanced configuration tools: host profiles, the vCLI and PowerCLI. I am from a Unix background and if I can I try to script everything; this eliminates the human error factor, because once you know a script works you can run it hundreds of times knowing that the same outcome will occur, and you can script many common tasks in VMware. There are a few new features in version 4

Host Profiles, which almost eliminate the need for complex scripted installations
The next generation of command-line tools, which allow everyone to script configurations of not just ESXi servers but also the vCenter environment

Below is a list of the current tools that are available with VMware:

Local CLI at the ESXi host - you require root-level access and an SSH connection (PuTTY); this is ideal for those who like the command line and using switches, a bit like the Unix world
vSphere CLI (vCLI) - allows you to run commands from your Windows/Linux server remotely without an SSH session; not all commands available at the local CLI are available in the vCLI
vSphere PowerCLI (PowerShell toolkit for Windows) - plugs directly into vCenter remotely and carries out many functions that aren't even exposed in the GUI; you should have a good knowledge of object-oriented programming (objects, properties, attributes, etc)
vSphere Perl toolkit - exposes the same functionality as the PowerShell toolkit
vSphere Management Assistant - aggregates much of the functionality seen in the vCLI and the Perl toolkit; it allows for an interactive CLI to ESX classic and ESXi, and it also automates much of the authentication, meaning you have no need to disclose the root account
Host Profiles - not a scripting engine or CLI tool, but they carry out many of the post-configuration tasks of an ESXi server normally undertaken with scripting; you can achieve the same results with host profiles as with scripting, so if you don't like scripting this is the way to go

Host Profiles

Host profiles allow you to capture the configuration of an ESXi server and apply it to another ESXi server; essentially a host profile acts like a policy object that can be applied to either an ESXi server or a cluster in the vCenter inventory, so you can cut down on the scripting element. If you want maximum control over every setting that makes up your ESX server then a scripted installation is the way to go, but if you are using ESXi

host profiles may be a better route because they are relatively easy to use and require no scripting knowledge whatsoever. Host profiles have five main functions

Capture the configuration profile of an existing host
Apply the configuration profile to a new ESXi server
Confirm that an ESXi server is correctly configured
Prompt the administrator for per-ESXi-server settings such as the VMKernel network configuration
Apply the profile as an ESXi host is added into a vCenter cluster - a mere drag-and-drop event configures the ESXi server

You cannot install additional software into the ESXi server and there are some issues with the HA agent starting, but host profiles are ideal for mass rollouts of ESXi servers. Host profiles are associated with the vCenter you logged in to when you created them; they are not available across multiple vCenters even in linked mode, and the vCenter must manage the ESXi server. Firstly you may want to prebuild a clean ESXi server and apply a modest vSwitch, NTP and firewall configuration; try to build as much as you can so that you don't have to tweak too much after applying the profile to a new ESXi server. Host profiles have a great many settings; I am not going to show you all of them, but do have a look and play around.

Create and edit a host profile

First right-click the source ESXi server and choose host profile, then select "Create Profile from Host"

Type in a friendly name and a description

Next we get the summary screen

Now go to the home page and select "Host Profiles"

This is the main host profiles screen, we will discuss some of this later, you can edit the production_cluster profile we created by selecting the "edit profile" link

if you select the "edit profile", you will see the edit edit screen, this has vast amounts of information, there is lots to configure here, hence why I stated earlier that when you build the first source ESXi server try to configure as much possible

Once you have created and configured your host profile you might want to test it against some existing ESXi servers to see if they are compliant with your build, next we attach an ESXi server to the host profile and apply it

Attaching and applying host profiles
To attach ESXi servers to your host profile, right-click the host profile and select "Attach Host/Cluster"

Select the cluster or the ESXi server/s you want to attach

In the "Hosts and Clusters" tab you can now see two ESXi servers attached to the "production_cluster" host profile, and if you notice in the top righthand corner the links are now live, you can apply this profile to a host or check its compliance, which is what we are going to do now

Select each ESXi server in turn and click the "Check Compliance Now" link, vmware1 should be compliant as this is the ESXi server we created the host profile with in the first place, however vmware2 is not compliant as the TSM service is different

Before you apply a host profile to an ESXi server it must be in maintenance mode, then just click the "apply profile" link, below is the screen you get if your ESXi server is not in maintenance mode

You may have to tweak the ESXi server after applying the profile (IP address, etc) but 99% of the configuration should be complete. You also have the ability to export the host profile, which can then be imported on another vCenter server, the host profile is saved in the VMware Profile Format (.vpf) as seen below
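If you prefer scripting even your host profiles, recent PowerCLI releases expose host profile cmdlets as well; the below is only a sketch assuming PowerCLI 4.0 or later and my lab names (vmware1, vmware2, production_cluster), so check the exact parameter set against the PowerCLI documentation

## create a profile from a reference host
new-vmhostprofile -name production_cluster -referencehost (get-vmhost vmware1)
## attach the profile to another host without applying it yet
apply-vmhostprofile -entity (get-vmhost vmware2) -profile (get-vmhostprofile -name production_cluster) -associateonly
## check compliance, then apply for real (the host must be in maintenance mode)
test-vmhostprofilecompliance -vmhost (get-vmhost vmware2)
set-vmhost -vmhost (get-vmhost vmware2) -state maintenance
apply-vmhostprofile -entity (get-vmhost vmware2) -profile (get-vmhostprofile -name production_cluster)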

vCLI
vCLI does not cover all the commands that the service console does, for instance you cannot add a second service console port for the VMware iSCSI software initiator and VMware HA, and you cannot open firewall ports for iSCSI. vCLI comes in three flavors

Windows installer Linux installer Downloadable VM (called VMware management appliance)

They all use the Perl environment, so if you are using Windows you need ActivePerl installed, then download the latest version of vCLI from VMware and install it, hopefully you should end up with a vCLI icon like below

vCLI can perform common tasks such as the following, I am not going to cover every single command so I will point you to the VMware vCLI documentation

Create vSwitches
Set up DNS
Configure NTP
Enable the iSCSI initiator
Configure NAS

vCLI can be frustrating sometimes due to the authentication process, it may take some time for the prompt to come back; all vCLI commands require a host (ESXi server or vCenter), username and password (HUP) to authenticate against the system prior to the command being executed. There are three ways to authenticate

Use a session file
## a session file uses a cookie that expires after 30 minutes of nonuse; without one, vCLI commands can be long, especially when you add the authentication details, for example
vicfg-vswitch.pl --server=vcenter1 --username=administrator --password=password --vihost=vmware1 -l
## to create a session file you can use the following command
save_session.pl --savesession=c:\vmware_session\vc1 --server=vcenter1 --username=administrator --password=password
## You should receive a message stating "Session information saved" and a file called vc1 should have been created, set an environment variable to point to this file
set VI_SAVESESSIONFILE=c:\vmware_session\vc1
## The contents of the session file are below, but yours will be different
#LWP-Cookies-1.0
Set-Cookie3: vmware_soap_session="\"096482A3-3638-4674-A83C42BD007486F2\""; path="/"; domain=win-2008.local; path_spec; discard; version=0
Now we can use this session file as seen in the image below where I list all the vSwitches

Create a configuration file
## A configuration file holds the details of the server, username and password, you have to make the file secure as it is a plain text file, an example is below, I called this vc2.txt
VI_SERVER=vcenter1
VI_USERNAME=administrator
VI_PASSWORD=password
## once you have created the file set the variable below
set VI_CONFIG=c:\vmware_session\vc2.txt
## now test the configuration file

Pass-through
Pass-through uses the current logon via the Microsoft Security Support Provider Interface (SSPI), to use this method all you need to do is add the below parameters to the command

--passthroughauth --passthroughauthpackage="kerberos"

I am now going to list some of the more common commands that you may use, there are many more so take a peek at the VMware documentation, this is a sort of get-a-feel-for-vCLI list

Create an internal vSwitch
vicfg-vswitch.pl --vihost=vmware1 -a=vSwitch1
## you can also create a port group on a vSwitch by using the -A option
vicfg-vswitch.pl --vihost=vmware1 -A=vmware1-internal0 vSwitch1
## you can list the vSwitches
vicfg-vswitch.pl --vihost=vmware1 -l

Create a vSwitch with VLAN tagging
## First create the port groups
vicfg-vswitch.pl --vihost=vmware1 -A=vlan10 vSwitch1
vicfg-vswitch.pl --vihost=vmware1 -A=vlan11 vSwitch1
vicfg-vswitch.pl --vihost=vmware1 -A=vlan12 vSwitch1
## then set the VLAN value on the properties of the correct port group
vicfg-vswitch --vihost=vmware1 -v=10 -p vlan10 vSwitch1
vicfg-vswitch --vihost=vmware1 -v=11 -p vlan11 vSwitch1
vicfg-vswitch --vihost=vmware1 -v=12 -p vlan12 vSwitch1
## Finally link the relevant NIC's to the vSwitch
vicfg-vswitch.pl --vihost=vmware1 -L=vmnic1 vSwitch1
vicfg-vswitch.pl --vihost=vmware1 -L=vmnic2 vSwitch1

Create a VMkernel port for vMotion
## create a port group called vmotion
vicfg-vswitch.pl --vihost=vmware1 -A=vmotion vSwitch3
## link the relevant NIC's
vicfg-vswitch.pl --vihost=vmware1 -L=vmnic3 vSwitch3
## configure the IP address and subnet mask (note this uses vicfg-vmknic, the VMkernel NIC command)
vicfg-vmknic.pl --vihost=vmware1 -a -i 192.168.2.190 -n 255.255.255.0 vmotion

Create a VMkernel port for IP storage
vicfg-vswitch.pl --vihost=vmware1 -a=vSwitch4
vicfg-vswitch.pl --vihost=vmware1 -A=ipstorage vSwitch4
vicfg-vswitch.pl --vihost=vmware1 -L=vmnic4 vSwitch4
vicfg-vswitch.pl --vihost=vmware1 -L=vmnic5 vSwitch4
## configure the IP address and subnet mask
vicfg-vmknic.pl --vihost=vmware1 -a -i 192.168.2.193 -n 255.255.255.0 ipstorage

Enable iSCSI initiator

## change the MTU to support jumbo frames
vicfg-vswitch --vihost=vmware1 -m=9000 vSwitch4
## Now enable the VMware iSCSI software initiator
vicfg-iscsi.pl --vihost=vmware1 -E -e
## check to see what virtual HBA device is used
vicfg-iscsi.pl --vihost=vmware1 -l -H
## now that you have the device, you can find out your IQN details
vicfg-iscsi.pl --vihost=vmware1 -E -l -P vmhba34
## now set the IQN for this interface
vicfg-iscsi.pl --vihost=vmware1 -I -n=iqn.200811.uk.co.datadisk:vmware1 -K=vmware1 vmhba34
## add the iSCSI target
vicfg-iscsi.pl --vihost=vmware1 -a -D -i=<openfiler IP address> vmhba34
## rescan and then list the available LUNs
vicfg-rescan.pl --vihost=vmware1 vmhba34
vicfg-iscsi.pl --vihost=vmware1 -E -l -L vmhba34
## here is a simple session to display the currently configured LUNs I have

Set up your NTP

vicfg-ntp.pl --vihost=vmware1 -a=0.uk.pool.ntp.org
vicfg-ntp.pl --vihost=vmware1 -a=1.uk.pool.ntp.org
vicfg-ntp.pl --vihost=vmware1 -a=2.uk.pool.ntp.org

## to stop and restart ntp use the below
vicfg-ntp.pl --vihost=vmware1 -s
vicfg-ntp.pl --vihost=vmware1 -r
## List the NTP servers (note this one is a PowerCLI cmdlet)
get-vmhostntpserver vmware1

Managing files
## list the datastores available
vifs.pl --server=vmware1 --username=root --password=password -S
## upload a file, you must supply the "[datastore name] /directory"
vifs.pl --server=vmware1 --username=root --password=password -p=c:\w2k3.iso "[iso] /ms/w2k3.iso"

Snapshots
## list the VM's on the server
vmware-cmd.pl -H vmware1 -U root -P password -l
## take a snapshot, using the information obtained above
vmware-cmd.pl -H vmware1 -U root -P password /vmfs/volumes/4d7f4f35-c02e5bd8-594600237d16ab10/linux01/linux01.vmx createsnapshot "Before Export" "Taken before converting VM to 2gbsparse format" 1 1

Backup existing ESXi server configuration
## make sure you don't have any registered VM's on the ESXi server, a binary file will be created
vicfg-cfgbackup.pl --server=vmware1 --username=root --password=password -s c:\vmware1.bak
## Now factory reset your ESXi server, make sure no VM's are running on this server
vicfg-cfgbackup.pl --server=vmware1 --username=root --password=password -r -q
## Now restore the configuration
vicfg-cfgbackup.pl --server=vmware1 --username=root --password=password -l c:\vmware1.bak -f -q

PowerCLI
Lastly we come to PowerCLI; by default PowerShell is already installed on Windows 7 (see the screen shot below of my Windows 7 desktop PC) and Windows Server 2008 R2, otherwise you can download it from the Microsoft website.

Once you have this installed you next need to download the vSphere PowerCLI from the VMware website, then install it, hopefully you should end up with an icon like below

Because I installed PowerCLI on my desktop PC, when I open a window I am not connected to a vCenter (notice the error messages), to connect to a vCenter I use the connect-viserver command

Once connected I can then run the PowerCLI commands, here I get a list of the current ESXi servers and VM's

Now I am not going to explain all the commands, so again go to the VMware documentation for a complete list, there are seven categories of commands with which you can carry out tasks

Add - adding objects such as ESXi servers, vSwitches
Get - listing objects
Move - moving objects such as moving a VM from one ESXi server to another
New - create new objects such as port groups, vSwitches
Remove - remove objects
Set - set a VM resource allocation
Stop/Start - stop a VM or start an ESXi server service

You can also feed one command into another with the use of pipes (|), if you know the Unix world then this will be familiar. I have listed some common commands below to get you started and to see what tasks you can perform using PowerCLI.

List all the get commands
get-command | where-object { $_.name -like "get*" }

List ESXi servers and VM's
get-vmhost
get-vm

Disconnecting CD/Floppies
get-vm | get-floppydrive | set-floppydrive -connected:$false
get-vm | get-cddrive | set-cddrive -connected:$false

Port groups
## List network adapters and sort them
get-vm | get-networkadapter | sort-object -property "NetworkName"
get-vm | get-networkadapter | sort-object -property "NetworkName" | where {'Production' -contains $_.NetworkName}
## rename a port group
get-vm | get-networkadapter | sort-object -property "NetworkName" | where {'Production' -contains $_.NetworkName} | set-networkadapter -NetworkName 'production'

Maintenance mode
get-vmhost -name vmware1 | set-vmhost -state maintenance

List datastores
get-datastore
## get the datastores on a particular ESXi server
get-vmhost -name vmware1 | get-datastore

Create a datacenter with folders
## Create the DataCenter first
new-datacenter -location (get-folder -Name 'UK DataCenters') -name 'Milton Keynes DataCenter'
## Now create the folders inside the DataCenter
new-folder -location (get-datacenter -Name 'Milton Keynes DataCenter') -name 'AMD Hosts'
new-folder -location (get-datacenter -Name 'Milton Keynes DataCenter') -name 'Intel Hosts'

Create a cluster
new-cluster -location (get-datacenter -name 'Milton Keynes DataCenter' | get-folder -name 'AMD Hosts') -name 'AMD Cluster' -HAEnabled -HAAdmissionControlEnabled -HAFailoverLevel 2 -DRSEnabled -DRSMode PartiallyAutomated

Adding hosts to a datacenter or cluster
add-vmhost vmware1 -location (get-datacenter 'Milton Keynes DataCenter') -user root -password password

Triggering vMotion
move-vm (get-vm -name 'linux01') -destination (get-vmhost vmware1)

Now that you have a feel for the commands, you can create scripts to run multiple commands, save the script with the extension .ps1 then just run it, if it complains that you are not authorized then run the command "set-executionpolicy unrestricted", remember this is dangerous as you can then run any script, but it will at least get you going on your test setup.
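To give you a feel for what a script looks like, here is the kind of thing you could drop into a .ps1 file; this is only a sketch using my lab vCenter name (vcenter1) and a made-up report path, so adjust both for your environment

## connect to vCenter (you will be prompted for credentials if needed)
connect-viserver -server vcenter1
## report the name, power state and host of every VM to a CSV file
get-vm | select-object Name, PowerState, VMHost | export-csv c:\vmware_session\vm-report.csv -notypeinformation
## disconnect the CD/DVD drive from all powered-on VMs (handy before a vMotion)
get-vm | where-object { $_.PowerState -eq "PoweredOn" } | get-cddrive | set-cddrive -connected:$false -confirm:$false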

Virtual Machines (VM)
Now that you have an ESXi server with networking and storage configured it is time to build some virtual machines, firstly I will explain what a virtual machine is and what it contains, then we will build a Linux VM using local disks, we will also install VMware Tools into the guest O/S and the reasons why you should. I will also cover snapshotting a VM and finally I will show you how to create a virtual machine from a physical machine using VMware's vCenter Converter. There are a number of new features with version 4.1

Support for VM's with 8 vCPUs and up to 255GB of RAM
New power-on defaults (a soft power off rather than a hard power off)
Hot add memory and CPUs to guest O/Ss that support it
Thinly-provisioned virtual disks, so virtual disks only take up the blocks they consume and do not waste free space - ideal for virtual desktop environments
Broader range of guest O/Ss including windows 3 and windows 95/98, including IDE disk support
Virtual Machine Communication Interface (VMCI), an experimental method of communication between two VMs on the same ESXi server that does away with using network protocols

A Virtual Machine (VM) is defined by a collection of files, when you create a VM you are creating a text file with the .vmx extension which defines properties such as

VM's name
Storage location
Guest OS type
Number of vCPUs
Number of virtual NIC's
Type of virtual SCSI adapter
Size and location of the virtual disk

As I mentioned above a VM consists of many files as described below

File Extension    Description
.vmx              Configuration file in text format, which means you can open it and read it
.nvram            VM's virtual BIOS file
.vmdk             VM's metadata/descriptor virtual disk file
-flat.vmdk        VM's data virtual disk file (OS/applications, data)
.vswp             VM's swap file
-00001.vmdk       Snapshot (delta) file
.vmsn             Snapshot memory file
.vmsd             Snapshot manager file
.log              Log file
.vmxf             Internal metadata file
.rdm              RDM file with virtual compatibility
.rdmp             RDM file with physical compatibility

here is a listing from one of my Linux VM's

The VM itself presents the appearance of real hardware, even though we know it is actually software, the VM really does believe it is on a physical server and on its own. An ESXi VM actually uses an Intel 440BX-based virtual motherboard with a NS338 chip. This was selected because it had very good compatibility and reliability with many Operating Systems including very old O/S. The virtual motherboard can support the following

One or two virtual floppy drives
One or two virtual CD/DVD drives
PS/2 interfaces for keyboard and mouse
Many PCI slots for SCSI controllers, NIC's, video
One to eight vCPU's
Up to 255GB of RAM
One parallel port
One or two serial ports
One USB controller

Although there is a USB controller, it is not possible to add a USB device and present it to a VM; the controller is there to address compatibility issues that have occurred with some USB redirection services commonly used in virtual desktops. Parallel and serial devices are not fully virtualized and their functionality is provided by the service console not the VMkernel. Also vMotion cannot be used with parallel or serial ports as you would need to move the physical hardware attached to the parallel or serial port. However in all cases of USB, parallel and serial you can purchase an IP-based hub and redirect the device via the network instead. Physical CD/DVD and floppies are now rarely used, generally you convert them into ISO or FLP files, these can then be uploaded to a storage area within the ESXi server,

I generally do this for my Linux ISO files, you can attach an ISO file to a VM and boot from this file. Although you have PS/2 interfaces for a keyboard or mouse you interact with the VM via a remote console session in either vSphere client or vCenter, it's a bit like accessing a server via the ILO/IMM card. You can watch a VM boot up just like a normal physical server and you can even send mouse or keyboard commands to the VM. When you first create a VM you are given a basic SVGA video driver, after the guest O/S has been installed it would be wise to install the latest VMware Tools which adds a virtual video adapter driver and a VMware virtual mouse driver, this will significantly improve graphics and mouse operations.

What many companies do when changing over to a virtual solution is match what they had in the physical world in the virtual world, even if the server was only using 10% of the resources, for example the same number of CPUs and the same size of RAM, which defeats the object of virtualization (reduce costs, etc). A better practice is to define the VM with the minimum that you feel your applications or services need to run, it's very easy to increase CPU's or memory but much harder to take away resources, people do believe that when you take away resources things start to go slow, even if the server was doing nothing. Make sure that you fully utilize your ESX hardware and keep resources to a minimum, only increasing resources when there is a requirement to do so.

Before we create our first VM I have to discuss disk types within a VM, there are three of them

thick/zeroedthick - when this type of virtual disk is created it takes up all the space you allocate, ESX virtual disks do not grow as more data is created inside them, and this format offers the best performance. As a flat file, the virtual disk will be created in contiguous blocks within the VMFS volume. The file is created very quickly and any existing data on the physical disk is overwritten as data is written to the virtual disk as needed.

thin - with a thin virtual disk you may ask for a 30GB disk but it only consumes what it holds, growing in size as you add more data, this is a good space-saving format for test VM's or VDI environments where you are trying to reduce storage costs. Note that this format is incompatible with the Fault Tolerance (FT) feature.

eagerzeroedthick - if you use Fault Tolerance (FT) the disk will be created as this type, basically it is the same as a thick disk in that if you allocate 30GB you get 30GB, however when an eagerzeroedthick disk is created it zeros all the blocks within the disk, this means it takes longer to create but is faster to read and write to because the free blocks are guaranteed to be blank and available for use. This is a major performance improvement which is needed for FT to function correctly. It is also required for 3rd party software such as Microsoft Cluster Software (MSCS).
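If you want to experiment with the formats from the command line, newer PowerCLI builds let you pick the format when adding a disk; a minimal sketch, assuming your PowerCLI version exposes the -StorageFormat parameter and using my lab VM linux01

## add a 10GB disk in each of the three formats to an existing VM
new-harddisk -vm (get-vm linux01) -capacityKB 10485760 -storageformat Thin
new-harddisk -vm (get-vm linux01) -capacityKB 10485760 -storageformat Thick
new-harddisk -vm (get-vm linux01) -capacityKB 10485760 -storageformat EagerZeroedThick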

First VM
Lets create our first VM, I will be installing Oracle Enterprise Linux 5.5 as the guest operating system, I will also be using an ISO file, stored on one of the virtual storage areas, to boot from and perform the installation. I will give the VM 15GB of disk space and give it two network interfaces (one on the public network and one on the iSCSI network), if you have set up a test environment like mine then follow below, if not then use your own environment areas (storage, NIC's). I will first cover creating the VM and discuss how to manage the VM and what options you can change, I will then discuss how to get your ISO onto the ESXi server and how to install the guest O/S. To create a VM you need to open either vSphere client or vCenter, I have opened a vSphere client session and selected my ESXi server, from here if you click on the "getting started" tab you can see in the window a "create a new virtual machine" link, or if you look in the top left-hand corner the "New virtual machine" icon can be used

Creating the first VM

The next screen is the configuration screen, here selecting typical will select most of the defaults for you

Selecting custom you can customize a number of options (compare the left panel with the one above), this is the one I will be choosing so that we can cover all the options.

Next we can enter a meaningful name, I generally keep them the same name as the hostname

Next we can select one of our datastores, I am choosing vmware1_vs1 (Local 500GB disk) which means that this server could not be used in vMotion, we will be looking at vMotion in another section.

The next screen asks you what virtual machine version you want to use, as you can see in the image I have chosen the latest version, you can also see in the image what each version supports

As I mentioned earlier I am going to put Oracle Enterprise Linux 64-bit as the guest O/S on this server. There are a vast number of guest O/S's you can choose from here.

Now select the number of CPU's, remember if you are replacing an existing physical server with a virtual one, then try and reduce the amount of resources that you need, you can easily increase them.

I have selected the minimum amount of memory for a Linux server, 1GB would be fine, again give the server as little as you can and then start increasing if needs must.

Here I add a couple of virtual NIC's to my VM, one public NIC and one NIC which is attached to my iSCSI network.

There are now four types of SCSI controllers you can use, it all depends on what guest OS you will be installing, the installation will basically pick the best one suitable for your guest OS, so I generally select the default

Buslogic Parallel - used for older O/S's such as windows 2000
LSI Logic Parallel - ideal for Linux O/S's and windows 2003
LSI Logic SAS - faster and used with SAS drives, use on the latest O/S's
VMware Paravirtual - new virtual storage adapter which lowers CPU load and increases disk I/O throughput

This is the file that will contain your guest OS, you can use an already existing disk or leave it and configure it later, I selected to create the disk now

I give it a size, 15GB should be more than enough to install Linux (this will include the /, /boot and swap areas), I have already discussed disk types above, here I am sticking to the default thick/zeroedthick but as it is only a test environment I could have chosen thin provisioning

Because the VM has the appearance of real hardware it conforms to the SCSI convention: adapter 0 and ID 0 are used for the boot disk (SCSI 0:0), with the SCSI adapter itself as ID 7. You can have up to 15 virtual disks attached to each adapter. You also have IDE support to allow you to use older operating systems (windows 95, windows 98), again the ESXi server will generally pick the best solution for the guest OS you chose.

Finally you get to the summary screen, click finish and the virtual machine will be created
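For completeness, the whole wizard can also be driven from PowerCLI with a single cmdlet; a hedged one-liner mirroring the choices above - the guest ID string and port group name are assumptions from my lab, so check the VMware documentation for the exact identifiers

## create the same VM in one go - 2 vCPU, 1GB RAM, 15GB disk on the local datastore
new-vm -name linux01 -vmhost (get-vmhost vmware1) -datastore (get-datastore vmware1_vs1) -numcpu 2 -memorymb 1024 -diskmb 15360 -guestid oracleLinux64Guest -networkname 'VM Network'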

In the main window on the vSphere client you should see it being created, I was not quick enough as it was created before I took the screenshot.

In the Virtual Machines tab of your ESXi server you should be able to see it

if you select the VM and click on the "summary" tab you can see what the VM has installed, you can see the 2 vCPU's, 1GB of memory, the two NIC's and the disk allocated to it and where its virtual disks are located (vmware1_vs1)

Now that we have built the virtual machine it is like a physical server with no operating system installed, you can power it on but it will not boot anything. Before we install the guest OS lets see what we can do with this VM, I am not going to explain all the features but highlight a few that I commonly change, some of the advanced features will actually be covered in other sections of my web site, and I will be updating this section from time to time when I get a chance to play around with the related features.

Virtual Machine Power on/off
There are a number of places where you can power on/off/restart your VM, by selecting the ESXi server and then clicking on the virtual machines tab you have two ways

If you select the virtual machine and select the "getting started" tab, you have another way

There is also one way in the "summary" tab, this will change to power off and restart when the VM is running

ESX has the ability to gracefully power off and on your VMs if you choose to shut down or reboot an ESXi server, this guest shutdown feature requires you to install the VMware Tools on each VM. If you use features such as vMotion, DRS and HA do not use this feature as it can cause all sorts of problems

1. In the vSphere client select the ESX host
2. Then select the configuration tab
3. In the software panel select Virtual Machine Startup/Shutdown
4. In the top right corner select properties
5. In the dialog box (as seen in the image below), tick the "allow virtual machines to start and stop automatically with the system" option
6. Select the VM and use the "Move up" button to move it up the list, keep going until it goes into the "Automatic Startup" section, then the "Edit" button will be available and the VM will be enabled

Automatically start and stop a VM

Virtual Machine Hardware
By selecting "edit settings" in the VM "summary" tab, we can add or change hardware, such as

Add/Remove and Display the VM's hardware

increase/decrease the number of vCPU's
increase/decrease the amount of RAM
add additional disks (see below for adding raw disks)
add or remove NIC's

By adding a hardware component you will be taken to a screen similar to the one you used to create the virtual machine, now you can see why I wanted to show you the custom setup.
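The common hardware changes can also be scripted; a minimal PowerCLI sketch against my lab VM (the VM normally needs to be powered off unless hot add is enabled, see the hot plug note later)

## change the CPU count and memory
set-vm -vm linux01 -numcpu 2 -memorymb 2048 -confirm:$false
## add a 5GB disk and another NIC ('VM Network' is an assumed port group name)
new-harddisk -vm linux01 -capacityKB 5242880
new-networkadapter -vm linux01 -networkname 'VM Network' -startconnected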

There are times when you want your VM to have direct access to a SAN or iSCSI LUN, this is achieved by a special mapping file called an RDM (Raw Device Mapping), this metadata text file tells the VM which LUN to access, the VMkernel intercedes (binary translation) on its behalf using the VMkernel drivers to access the SAN or iSCSI via the ESXi host's physical HBA. There are several reasons to use RDM files in preference to virtual disks

Giving a VM direct access to a SAN or iSCSI

You may have existing data held on NTFS, ext3 or another proprietary file system to which you merely wish the VM to have access
RDM files are required for some clustering scenarios, such as running a clustering service between two VMs on separate ESXi servers
You may wish to leverage your guest OS's native disk and file system tools to carry out certain tasks, for example the Microsoft DiskPart tool allows you to stretch an NTFS partition to fill free space
Some 3rd party software, like EMC's replication manager, may require RDM style access to the array to function correctly

The maximum size a RDM file can be is 2TB and there are two compatibility modes, note that you can convert a RDM file into a virtual disk file, this can be done with either Storage vMotion (SVMotion) or via cold migration, we will cover this in my vMotion section.

physical - allows the VM to treat the raw LUN as if it were a physical machine, ideal for clustering and quorum devices (MSCS, Sun Cluster, etc)
virtual - allows the VM to treat the raw LUN as if it were a virtual disk, this allows for advanced features such as disk modes and VMware snapshot files

RDM files have a .vmdk extension just like virtual disks, they can be stored alongside the VM's other files or in a different datastore. To create a RDM file for a VM follow below: right-click on the VM (you can use the newly created one called linux01 above) and select "edit settings", then click the "add" button, then choose "hard disk" and the below screen should appear, select "Raw Device Mapping"

I created a 1GB iSCSI volume on my openfiler iSCSI server to use for this demonstration

I prefer to store it with the VM as it keeps everything tidy, but you can select a different datastore location if you wish

Now we come to the compatibility mode, again if you are going to use this with a cluster or as a quorum device you should select physical, otherwise select virtual

You can specify the SCSI LUN location, but generally the default is OK

Finally we get to a summary page, make sure you are happy as there is no going back

You should now be able to see it in the hardware tab of the VM

You can also see it in the VM volume directory, it's called "linux01_1-rdmp.vmdk", when we build the guest OS it will be available for the VM to use.
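PowerCLI can create the same mapping file if you know the device path of the LUN; a sketch only, the device name below is a placeholder you would replace with the canonical path of your own LUN

## attach a raw LUN in physical compatibility mode (use RawVirtual for virtual mode)
new-harddisk -vm (get-vm linux01) -disktype RawPhysical -devicename "/vmfs/devices/disks/<your LUN's canonical name>"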

Convert a thin disk to a thick disk

In older versions of ESX you had to use the vmkfstools command to change a disk type, now the VM must be powered off, then you locate the virtual disk that you wish to convert, right-click on the virtual disk and select "inflate". I have located the linux01 virtual disk in the vmware1_vs1 datastore, using the datastore browser and selecting the linux01 folder I have access to the virtual disk.

Virtual Machines Options
There are a number of options you can change

virtual machine name
OS type
boot options (ideal if you need a longer boot delay so you can select the boot device or enter the BIOS)
VMware Tools options, including the power on options
General menu, where we can turn on debugging

Most of the time I leave them on the default settings

VM's options

Hot pluggable CPU and Memory
A number of O/S's support hot adding, thus when you add an additional CPU or memory it will be available to the OS immediately, you can enable this feature from the options tab by selecting "Memory/CPU hotplug"

Memory Management Unit (MMU) and Paravirtualization

VMware has three types of execution

Direct Execution - the VM speaks natively to the underlying physical CPU to make direct calls to the instruction set of the CPU, this offers the best performance
Binary Translation - occurs whenever there is an interrupt request generated inside the VM by the guest OS, interrupts are intercepted or translated down through the VMkernel (the hypervisor) to a real, physical device using a device driver, however it is slower than direct execution
Paravirtualization - the guest OS is given some awareness of a layer just above the hypervisor into which the VM hooks, this is known as VMI Paravirtualization, however it does use a PCI slot inside the VM, it can also cause problems with vMotion if not all ESXi servers support it, and it is also incompatible with fault tolerance (FT)

The MMU (Memory Management Unit) feature is called Hardware Page Table Virtualization, it may or may not improve performance, it all depends on whether your CPU supports the feature and whether your workloads are memory-intensive. Within the VMkernel there are two page tables, one for VM memory and the other for physical memory, and the VMkernel maps the hexadecimal memory ranges from one to the other. As the VMkernel is fundamentally the arbiter of access to hardware, this allows it to avoid the duplication of memory contents within and between VM's, commonly referred to as Transparent Page Sharing (TPS), it also allows VMware to deallocate memory from one VM to another using a memory driver installed into the guest OS along with VMware Tools. This does incur additional CPU cost; both Intel and AMD offer in their new CPU's a hardware-based page table system, commonly referred to as Nested Page Tables (NPT). The VMkernel will interrogate the CPU and allow the VM to use the MMU feature if it is available, otherwise it simply ignores it.

Virtual Machine Resources The last screen is the resource allocation, which I will be covering in great detail in the resource monitoring section

VM resource allocation

Guest OS installation and VMware Tools
If you have followed the above on how to create a VM, you should be in a position to install the guest OS, I am going to install Oracle Enterprise Linux 5.5 and then install VMware Tools. Firstly we must get the ISO image onto a datastore: using either the vSphere client or vCenter go to the datastore screen, right-click the datastore and select "browse datastore", I have created a folder in datastore filer2_ds1 called

iso, this is where we will put the iso image

I have downloaded Oracle Enterprise Linux 5.5 64-bit from Oracle's website on to my PC.

Now select the upload button and select the downloaded ISO image

You should now see a screen indicating how long it will take to upload the ISO image to the datastore

Finally you should see a screen similar to below, which means we can now use this image to boot

and install Linux onto our VM
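If you have a lot of ISO images to move about, later PowerCLI builds include Copy-DatastoreItem, which uses the vmstore: drive created when you connect; a sketch assuming that cmdlet is available in your version, and note the path element after vmstore:\ is your datacenter name, so both names below are assumptions from my lab

## upload the ISO from the local PC to the iso folder on the datastore
copy-datastoreitem -item 'c:\downloads\Enterprise-R5-U5-Server-x86_64-dvd.iso' -destination 'vmstore:\UK DataCenters\filer2_ds1\iso\'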

Now that we have our ISO image on a datastore lets install the guest OS The booting of an iso image is the same for all OS's, first select the VM and right-click and select "edit settings", then select the CD/DVD drive 1, then click the "DataStore ISO file" radio button then select the "browse.." button, find your ISO image

Installing the OS (Linux)
Once selected you should have a screen similar to the one below, make sure the "connect at power on" box is ticked.

Now power on the VM and open a console and install the OS as normal
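The CD/DVD attach and power-on can be scripted too; a small sketch, where the ISO path is an assumption based on where I uploaded the image

## point the VM's CD/DVD drive at the ISO and make sure it connects at power on
get-vm linux01 | get-cddrive | set-cddrive -isopath '[filer2_ds1] iso/Enterprise-R5-U5-Server-x86_64-dvd.iso' -startconnected:$true -confirm:$false
start-vm -vm linux01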

After you have installed the guest OS you should really install VMware Tools, which is a software package that contains three components

Drivers - many drivers are installed: VMware SVGA II, VMware mouse, VMware SCSI, AMD enhanced NIC driver, a file system synchronization driver (used with VMware Consolidated Backup and the new VMware Data Recovery (vDR)) and a memory control driver (vmmemctl). These devices and drivers greatly improve performance.

Heartbeat service or daemon - the heartbeat or daemon service is used to alert the administrator that the guest OS inside the VM has malfunctioned, if a VM hangs (Windows BSOD or Linux kernel panic) you will see the green icon in vCenter change to a red exclamation mark

Configuration applet or script - these scripts are used to configure VMware Tools after the installation

To install VMware Tools follow below

Windows

1. Right-click on your windows VM
2. Select guest
3. Select Upgrade/Install VMware Tools, then click OK
4. Next choose the type of installation you want to run (typical is sufficient)

In the background Windows will connect to an ISO file called "windows.iso" held on the ESXi server, hopefully Windows will automatically run this CD and execute the VMware Tools.msi file.

Unix

In Unix there are a number of different packages depending on the guest OS installed, I will describe the Linux installation process

1. Right-click on your Linux VM 2. Select guest 3. Select Upgrade/Install VMware Tools, then click OK

4. This will mount a filesystem called "/media/VMware Tools" (yes there is a space in there), in this directory you will see a file called "VMwareTools-?????.tar.gz", the name depending on the VMware Tools version

5. Copy this file locally, then unzip and untar it, you will have a directory called "vmware-tools-distrib", cd into this directory and run the "vmware-install.pl" script
6. You will be asked a number of very easy questions (many create-directory questions), I selected the default for all options
7. The package will now be installed and various daemons started, the vmware filesystem will be automatically unmounted for you, there is no need to reboot your server, though if you use a GUI interface it is advised to log out and log back in to enable certain X-system features. You should be able to see the VMware daemons running on your system
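PowerCLI can kick off the same process remotely; a sketch, noting that Update-Tools only automates the whole upgrade on Windows guests, and "win2008" is just an assumed name for a Windows VM

## mount and unmount the VMware Tools installer CD in the guest (you still run the installer inside the guest)
mount-tools -vm linux01
dismount-tools -vm linux01
## on a Windows guest the upgrade can run unattended
update-tools -vm win2008 -noreboot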

That's all there is to talk about VMware Tools, but I do strongly advise you to install the latest version. Using Snapshots Snapshots have much the same functionality as redo files, plus some extra features, they allow you to capture the state of a VM at a single point in time (which includes

both disk and memory states), and they can be created and deleted even when the VM is powered on. When you create a snapshot, all the new changes to the disk and memory actually go to a differences or delta file, it's like a bookmark that you can use to return to a set point. You would use snapshots when you want to make fundamental changes to your VM and you are unsure what might happen (you could completely trash the system). When a VM snapshot is applied, all the read and write events that would normally go to the virtual disk actually get sent to a delta file; normally the virtual disk is locked by the file system and cannot be manipulated, however when a snapshot is applied the virtual disk is unlocked and can be copied to another location for backup purposes. Bear in mind that if you revert to an older snapshot any data entered into the system after the snapshot will be lost. There have been reports of problems committing snapshots above 2GB and that the management of the snapshot files is poor, thus the VMware communities suggest that snapshots should be used sparingly and restricted to test and development environments. The other problem is that snapshots grow incrementally in blocks of 16MB, if you forget that you are taking a snapshot it could become very large, though VMware does now have alarms to monitor snapshot usage. Lets create, revert and delete some snapshots. I am going to use the Linux VM I created earlier (linux01) which is powered on and running, to create a snapshot right-click the VM and choose "snapshot" and then select "take snapshot", a dialog box will appear, you can supply a meaningful name, a brief description and you can quiesce the file system if you wish.

Create a snapshot

If you list the volume on the ESXi server you can see the snapshot delta files

I sometimes leave a note on the VM configuration tab to let me know I am snapshotting

This then displays in the main vSphere or vCenter virtual machine screen

Reverting to a snapshot
There are two ways to revert to a snapshot, reverting to the last snapshot (the quick way) or using snapshot manager. For the quick way you right-click the VM, select snapshot, then select "revert to current snapshot"; if you want a particular snapshot, right-click on the VM, select snapshot, then select snapshot manager, as you can see below you can then select which snapshot you want

Although you have reverted you still have the snapshot feature engaged and thus the delta files will continue to grow.

Delete a snapshot
The delete option is also in the snapshot manager screen, just select the snapshot that you wish to delete, or you can delete all of them. When you delete a snapshot VMware takes the contents of the snapshot and copies the data in the delta file into your virtual disk, once this merge has taken place the delta file is deleted.
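The full snapshot lifecycle is scriptable as well; a minimal PowerCLI sketch against linux01, where the snapshot name is just an example

## take a snapshot including memory state, quiescing the file system via VMware Tools
new-snapshot -vm linux01 -name 'Before Upgrade' -description 'Taken before patching' -memory -quiesce
## list the snapshots for the VM
get-snapshot -vm linux01
## revert to the snapshot
set-vm -vm linux01 -snapshot (get-snapshot -vm linux01 -name 'Before Upgrade') -confirm:$false
## delete it, which merges the delta file back into the virtual disk
remove-snapshot -snapshot (get-snapshot -vm linux01 -name 'Before Upgrade') -confirm:$false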

vMotion, Storage vMotion and Cold Migration
Finally, if you have been following the series we come to the most enjoyable part, vMotion, Storage vMotion and Cold Migration. Before we start I just want to recap on my test environment: I have two ESXi servers connected to an openfiler iSCSI server which supplies the shared storage, I will also be using the VM I created in my virtual machine section to show you how to migrate storage using Storage vMotion. To say we are moving a VM from one ESXi server to another with vMotion is a bit of a lie, we don't actually move the data at all, this stays on the shared storage, it's only the VM's memory contents that are moved from one ESXi server to another. The VM on the first ESXi server is duplicated on the second ESXi server and then the original is deleted; during vMotion the first ESXi server creates an initial pre-copy of memory from the running VM onto the second ESXi server, and during the copy process a log file is generated to track all changes during the initial copy phase (it is referred to as a memory bitmap). Once the VM's are practically in the same state, this memory bitmap is transferred to the second ESXi server; before the transfer of the bitmap the VM on the first ESXi server is put into a quiesced state. This state reduces the amount of activity occurring inside the VM that is being migrated, it allows the bitmap to become so small that it can be transferred very quickly, and it also allows for rollback if a network failure occurs, which means the migration will either succeed completely or not happen at all. When the bitmap has been transferred the user is switched to the new ESXi server and the original VM is removed from the first ESXi server. You need the following to perform a vMotion, all requirements apply to both ESXi servers involved

Shared storage visibility between the source and destination ESXi servers, this also includes any RDM-mapped LUN's
A VMkernel port group on a vSwitch configured with 1Gbps on the vMotion network, it will require a separate IP address
Access to the same network, preferably not going across switches/routers, etc
Consistently labeled vSwitch port groups
Compatible CPUs

If one of the above has not been met, then vMotion will fail. I just want to touch on network quality, I highly recommend that you use a dedicated 1GB network for vMotion, although it is possible to perform a vMotion over a 100MB network it can fail regularly, and VMware will not support it. CPU compatibility is also a show stopper, try to make sure that all the ESXi servers that you plan to use with vMotion are the same, it is possible to vMotion across different CPU types but it can be a real pain, and it is an expensive mistake to find out that it does not work, so buy for compatibility. There are also a few requirements for the VM's, the following will cause problems

Inconsistently named port groups - vMotion expects the port group name for both the vMotion vSwitch and the VM port group to be spelled the same and in the same case, for example vMotion and VMotion are different
Active connections to an internal switch - by active I mean that the VM is configured and connected to the same internal switch
Active connections to a CD/DVD that is not shared - vMotion cannot guarantee that a CD/DVD will be connected after the switch
CPU affinities - vMotion cannot guarantee that the VM will be able to continue to run on a specific CPU number on the destination server
VMs in a cluster relationship regarding the use of RDM files used in VM clustering - MSCS used to have problems with this
No visibility to the LUN - where RDM files are not visible to both ESXi servers
Inconsistent security settings on a vSwitch/port group - if you have mismatched settings vMotion will error
Snapshots are fully supported - you may be warned when reverting if a VM has been moved, but as long as the data is on the shared storage there should be no problems

vMotion requires a VMkernel port group with a valid IP address and subnet mask for the vMotion network. A default gateway is not required (you should really have all ESXi servers that you are going to use with vMotion on the same subnet) but VMware does support vMotion across routers or WAN's. You can create a vMotion port group on an existing vSwitch. So lets create the vMotion port group: I have two ESXi servers (vmware1 and vmware2) connected to a netgear GS608 switch, both at 1GB, I have already created a private port group on vSwitch2, this is a dedicated network for my vMotion traffic, if you need a refresher on port groups and vSwitches check out my networking section. Remember we will be performing this on both ESXi servers, I will be using vSwitch2 to create my vMotion port group, this uses a dedicated network

Select the properties tab and then select the add button and the below screen should appear, select VMkernel (if you notice you should be able to see VMware vMotion in the list)

select a meaningful name, make sure that it is spelt the same on both ESXi servers, it is case sensitive. Then tick the "use this port group for vMotion".

Enter an IP address, the two hosts' addresses must of course be different, the gateway is optional unless you are going over WAN's.

Lastly we get the summary page

Here are both ESXi server vMotion port groups, spelt the same and different IP addresses ESXi server (vmware1) ESXi server (vmware2)
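Creating the VMkernel port is also a one-liner per host in PowerCLI; a sketch where the IP addresses are just examples for a vMotion subnet, so substitute your own

## create the vmotion port group and VMkernel NIC on each host
new-vmhostnetworkadapter -vmhost (get-vmhost vmware1) -virtualswitch vSwitch2 -portgroup vmotion -ip 192.168.3.191 -subnetmask 255.255.255.0 -vmotionenabled:$true
new-vmhostnetworkadapter -vmhost (get-vmhost vmware2) -virtualswitch vSwitch2 -portgroup vmotion -ip 192.168.3.192 -subnetmask 255.255.255.0 -vmotionenabled:$true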

Would you believe me that this is all there is to it, well it is, you can now hot migrate any VM that is configured on the shared storage and meets the above requirements, so lets migrate a VM. I am going to use an existing VM I have configured on the shared storage, notice that the VM is powered on and running, I right-click on the VM and select migrate

VM migratio n using vMotion

the below screen will appear, as we are only changing the ESXi host we

select "change host"

Next we select the destination host, in this case vmware2, if for any reason the VM cannot be migrated the errors will be shown in the bottom of the screen, otherwise you will see "Validation succeeded"

Here is an example of a VM failing vMotion validation; this is because it used local storage, which we will correct in my storage vMotion section below.

Next comes a priority screen; the resources available and the time of day will decide the priority level you choose (you may want standard priority during business hours).

Lastly a summary screen appears

You can see that the VM was running on vmware1 and used the filer2_ds2 datastore, and in the Recent Tasks panel at the bottom you can see it being migrated from this ESXi server.

Finally the VM is successfully migrated; you can clearly see that it is now running on vmware2 and is still using the same datastore (filer2_ds2). The whole process only took 16 seconds, though it may of course take longer depending on the VM's memory size and usage.
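The same hot migration can be scripted too; a minimal PowerCLI sketch, where myvm is a stand-in for your own VM name

    # Hot-migrate the running VM to vmware2 using vMotion; the VM must
    # be on shared storage and meet the requirements listed above
    Get-VM -Name myvm | Move-VM -Destination (Get-VMHost -Name vmware2)

Because only the host changes, the VM keeps using the same datastore, exactly as in the GUI example above.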

Another option is to drag and drop the VM from one server to another; this will perform the same task. If you receive errors like the one below, then check your vMotion network: make sure both ESXi servers have different IP addresses and are able to contact each other (use putty to connect and ping each other). To produce this error I gave both ESXi servers the same vMotion IP address and then tried to migrate a VM; it paused a while and then chucked out this error message.
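To test the vMotion network specifically, you can also ping across the VMkernel interface from the ESXi Tech Support Mode shell; a quick sketch, assuming 10.0.0.2 is the other host's vMotion address

    # From vmware1, ping vmware2's vMotion VMkernel address;
    # vmkping sends the request through the VMkernel network stack
    vmkping 10.0.0.2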

You might also see this error when trying to migrate a VM; it basically means that VMware Tools has not established a heartbeat yet, generally because the VM has only just been started or just migrated. Give the VM a little while to settle down and then retry the migration.
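You can check the state of VMware Tools before retrying; a small PowerCLI sketch, with myvm again a stand-in name

    # Inspect the guest's VMware Tools status via the vSphere API;
    # "guestToolsRunning" means the heartbeat should soon be available
    (Get-VM -Name myvm | Get-View).Guest.ToolsRunningStatus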

Storage vMotion It is possible to move the storage of a VM to another storage area without even powering it off. This is ideal when you want to migrate onto a new SAN or iSCSI server, so upgrading your storage is a breeze. There are a number of other reasons to use Storage vMotion

Decommission an old array whose lease or warranty has expired
Switch from one storage type to another, say from Fibre Channel to iSCSI
Move VMs or virtual disks out of a LUN that is running out of space
Ease future upgrades from one version of VMFS to another
Convert RDM files into virtual disks

Like vMotion, there are a number of requirements


Spare memory and CPU cycles: when you move storage, additional memory and CPU cycles are needed to carry out the process
Free space: storage vMotion is a copy-then-delete process, so make sure there is enough spare space available for the copy and any snapshots
Time: it will take longer to copy the files across; the larger the files, the longer it will take
Concurrency: try not to burden the ESXi server too much by running multiple storage vMotions at once; run a maximum of 4 at any one time

If you followed me in creating a VM in my virtual machine section, you will have noticed that I configured it on local storage. I am now going to move it to a shared storage area, which means that this VM can then use vMotion. The VM is using a local datastore (vmware1_vs1) and has 17GB of storage allocated to it; the VM is powered down, but you can have the VM running if you wish.

Right-click the VM and select migrate, which should bring up the below screen; select "change datastore".

Select the datastore you wish to migrate to. Also notice the access column, which shows what type of access each datastore has; in this case I am selecting a datastore that has multiple-host access.

I have already spoken about disk provisioning in my virtual machine section; here I select to keep the same format.

Lastly the summary screen appears

You can watch the progress in the "recent tasks" panel at the bottom

When the migration is complete, the datastore will change and a completed note will appear in the "recent tasks" panel. If you look carefully, it only took 1 minute and 22 seconds to migrate a 17GB VM with a few clicks of a button, HOW COOL IS THAT!!!!!
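You can also drive this from the commandline with the svmotion.pl tool that ships with the vSphere CLI. Below is a minimal sketch rather than a definitive recipe; the vCenter name vcenter1, the datacenter name Datacenter1 and the username are assumptions from my lab, and myvm stands in for your own VM

    # Non-interactive storage vMotion using the vSphere CLI's svmotion.pl:
    # relocate myvm's configuration file and disks from the local
    # vmware1_vs1 datastore to the shared filer2_ds2 datastore
    svmotion.pl --url=https://vcenter1/sdk --username=administrator \
        --datacenter=Datacenter1 \
        --vm="[vmware1_vs1] myvm/myvm.vmx:filer2_ds2"

You can also run svmotion.pl with the --interactive option and it will prompt you for each value.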

As shown in the sketch above, you can use the commandline if you like, which is ideal if you want to script several storage vMotions in one go, something that is handy when you are migrating from one storage array to another. The command is "svmotion.pl"; it has many options, so check them out using its man page.

Cold Migration

If you cannot meet the hot migration requirements you can always revert to cold migration, where the VM is powered off; as this is not as stringent, there are fewer requirements that need to be met. As long as both ESXi servers have visibility to the same storage, cold migration can be very quick and the VM downtime kept to a minimum. You can also perform an ESXi server migration and a Storage vMotion at the same time, but it will take a little longer. I personally only hot migrate servers that are not complex: no databases and nothing clustered; web servers, SMTP servers and middleware servers are ideal candidates, but that's up to you. I am not going to cover cold migration separately as I performed one in the above example in the storage vMotion section; it is the same as a hot migration but with the VM powered off. I personally perform cold migrations on complex VM setups like MSCS and other cluster setups (Veritas, Sun) that use quorum disks, and also on database VMs, as these generally have large amounts of memory. Don't get me wrong, it is possible to perform a hot migration on these types of setups; it's just my preference to perform a cold migration.
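For completeness, a cold migration can be scripted as well; the below PowerCLI sketch (myvm once again a stand-in) shuts the VM down, then moves host and datastore in a single step, which is only possible while the VM is powered off

    # Shut the guest down cleanly (this requires VMware Tools), then
    # wait for the VM to reach the powered-off state before continuing
    Get-VM -Name myvm | Shutdown-VMGuest -Confirm:$false

    # Cold-migrate to another host and datastore in one operation
    Get-VM -Name myvm | Move-VM -Destination (Get-VMHost -Name vmware2) `
        -Datastore (Get-Datastore -Name filer2_ds2)
    Get-VM -Name myvm | Start-VM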
