Kenny Garreau
@kennega
10/28/2012
Table of Contents
Introduction
    Intended Audience
    Special Thanks
Vendor Products
    VMware vSphere 5
    Cisco UCS B-series Servers
    EMC VNXe Unified Storage
Implementation
    Architecture
    Challenges
    Best Practices
Cisco UCS Configuration
    Create Pools and Policies
    Create vNIC Templates
    Create Service Profile Template
    Deploy and Modify Service Profiles
EMC VNXe Configuration
    Add Generic iSCSI Storage
vSphere Installation
    ESXi Network Configuration
    ESXi iSCSI Configuration
EMC VNXe Configuration Redux
    Add VMware Storage
Conclusion
References
Introduction
Being in the technical solutions and integration space will always present challenges, and the more technologies involved in an integration, the greater the challenge. From a technological perspective, the protocols and products in this whitepaper aren't the most advanced on the market, but I ran into some unique issues during the work. Many of the troubleshooting steps outlined on the Cisco, EMC and VMware forums didn't apply, or turned out to be red herrings that masked the true problem. I spent a lot of time sifting through reference documentation from each vendor, and unfortunately the trusty Google search for "EMC VNXe UCS" yielded less fruitful information than I had hoped to find. That is why I decided to pen this whitepaper.
Intended Audience
This document is meant to provide specific technical guidance on deploying VMware vSphere 5 in an environment backed by Cisco UCS B-series blade servers and EMC's VNXe storage offering. It is a collection of best practices from Cisco, VMware and EMC, as well as my professional recommendations and experiences from several of the deployments I have been charged with.
The knowledge level assumed is the ability to deploy all three technologies in at least an SMB environment. Standard setup tasks such as rack-and-stack, array/chassis cabling, and basic setup and initialization procedures are outside the scope of this document.
Special Thanks
Before I get started: putting this together would not have been possible without the help of a couple of good friends.
Jason Goldrick (@J_Goldrick), for all your help with the VNXe minutiae.
Anderson Nichols (@badrockjones), the best technical writer I know!
Vendor Products
Covered in this whitepaper is the integration of the following technologies:
- VMware vSphere 5
- Cisco UCS B-series Servers
- EMC VNXe Unified Storage
VMware vSphere 5
The deployment here utilized VMware vSphere 5.0, with ESXi 5.0 U1 hosts deployed from Cisco's vendor-specific ISO. This is available from VMware's Downloads site, on the vSphere 5.0 download page under Drivers and Tools.
vCenter 5.0 U1b was installed and configured as the management platform for the underlying ESXi
hypervisor hosts.
Implementation
Architecture
Now that we've covered the key components involved, let's discuss how all of this is going to fit together. Keep in mind that most SMB customers who have purchased this type of solution aren't going to have an existing 10GbE switching infrastructure. This will be their first exposure to 10Gb, and it may even be their first time using a dedicated storage or virtualization platform! There are lots of moving parts, so let's cover the basics.
We have four 10GbE SFP+ Twinax cables going from each 2208 Fabric Extender (FEX) to its corresponding 6248UP Fabric Interconnect (FI). The customer does NOT have an existing 10GbE switching infrastructure, so in this scenario we have opted to use UCS Appliance Ports to connect the VNXe's 10GbE iSCSI modules. Cisco UCS bundles will typically include four SFP-10GB-SRs, the 10GbE optical SFPs you'll need for connectivity between those 10GbE modules in the VNXe and the 6248 FIs [1]. Two appliance ports will be configured per FI, so you should cable module ports A-0 and B-1 to FI-A, then module ports A-1 and B-0 to FI-B.
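To keep the cross-cabling straight, here is that same mapping laid out (SP A and SP B are the VNXe's two storage processors):

    VNXe module port      connects to
    SP A, port 0 (A-0)    FI-A
    SP B, port 1 (B-1)    FI-A
    SP A, port 1 (A-1)    FI-B
    SP B, port 0 (B-0)    FI-B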
This customer had a single-switch core, and many SMBs will have a similar setup, or perhaps a stack of Cisco Catalyst 3750 switches. Connectivity choices upstream can vary wildly, but a quick Google search for "Cisco UCS Networking Best Practices" will yield plenty of results [2]. For this install, the customer has a two-port 1GbE LACP bundle from each FI into a single switch, so we acquired four GLC-T GBICs, placed two into each FI, and configured two LACP port-channels within UCS Manager.
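The UCS side of those port-channels is built in UCS Manager; for reference, the upstream side of one bundle might look roughly like this on a Catalyst core (the interface range and channel-group number are placeholders, not values from this deployment):

    interface range GigabitEthernet1/0/1 - 2
     description Uplink to UCS FI-A
     switchport trunk encapsulation dot1q
     switchport mode trunk
     ! mode active = LACP; applying this auto-creates interface Port-channel10
     channel-group 10 mode active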
In order to enable Service Profile mobility, each blade server has a VIC1280 mezzanine card installed
with no local drives. We will boot each Service Profile (ESXi host) from an iSCSI LUN presented via the
VNXe, and shared iSCSI datastores for VMware virtual machine storage will be presented to each host
and formatted with VMFS-5.
The following diagram shows the Visio layout of how we've connected the components:
[1] You may be able to unplug the 10GbE SFPs in the VNXe's modules and use 10GbE SFP+ Twinax cabling, but I have not tested this. EMC VNX requires active Twinax cabling, which may also be a requirement for the VNXe. Hit me up on Twitter (@kennega) if you've tested this!
[2] See also the References section of this document.
Challenges
Throughout the course of testing and validating this solution, I ran into quite a few issues. In hindsight they seem quite simple and obvious, but during the integration and deployment, things aren't always so clear.
The biggest challenge with this V+C+E integration is the lack of Storage Groups on the VNXe. In a VNX, LUNs are assigned to a storage group, and that storage group is in turn masked to a host. Therefore, masking a boot LUN ID of 0 to each host is uniform throughout the environment.

The VNXe, however, has no concept of Storage Groups. LUNs are assigned at the iSCSI Server level within the system, at which point you assign Virtual Disk or Datastore access to a Host. It's a slightly different way of thinking than with a mid-range or enterprise storage array. While this impacts how you end up deploying and organizing your storage, it also has an effect on your UCS Service Profiles and Service Profile Templates.
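Side by side, the two provisioning models look roughly like this:

    VNX:   Storage Pool -> LUN -> Storage Group (masking) -> Host
    VNXe:  Storage Pool -> iSCSI Server -> Virtual Disk / Datastore -> per-host access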
Best Practices
The following best practices are followed as part of this implementation (a few of the vSphere-side settings are sketched as shell commands after this list):
- Dedicated storage VLANs for iSCSI fabrics; one for the A side, one for the B side
- Differing subnets in each storage VLAN; in this deployment we utilize 172.16.10.0/24 and 172.16.11.0/24
- LUN masking at the array for boot LUNs and vSphere datastores
- Do not use link aggregation for any iSCSI interfaces
- iSCSI vmkernel port binding within vSphere
- iSCSI vmkernel port groups are contained within a dedicated vSwitch
- Balanced storage presentation from iSCSI Server A and iSCSI Server B
- All boot LUNs presented from one iSCSI Server
- Boot LUNs should always utilize the lowest tier of storage available
- iSCSI Overlay vNICs do not utilize a MAC address (inherited from the parent vNIC)
- Do not enable Fabric Failover for iSCSI-bound vNICs
- All iSCSI connections are set as access ports on their respective iSCSI VLANs
- Jumbo frames are enabled end-to-end
- Utilize the Round Robin PSP within vSphere for storage multipathing
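Here is a minimal sketch of a few of the vSphere-side settings from the ESXi 5.x shell. The vSwitch, vmkernel port, device ID, and peer address below (vSwitch1, vmk1, naa.xxxx, 172.16.10.50) are illustrative assumptions, not values captured from this deployment:

    # Jumbo frames on the dedicated iSCSI vSwitch and one of its vmkernel ports
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # Verify jumbo frames end-to-end (8972 bytes = 9000 minus IP/ICMP overhead)
    vmkping -d -s 8972 172.16.10.50

    # Set the Round Robin PSP on a VNXe LUN (device ID is hypothetical)
    esxcli storage nmp device set --device=naa.60060160xxxxxxxx --psp=VMW_PSP_RR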
Create your Appliance Interfaces, and be sure to set them as Access ports on the appropriate iSCSI VLAN. For ease, I've created a VLAN label iSCSI and assigned VLAN 500 on FI-A and VLAN 501 on FI-B. Remember, you'll need to create your iSCSI VLANs in two places: under Appliances > Fabric A/B > VLANs, and under LAN > LAN Cloud > Fabric A/B > VLANs. While creating the Appliance Interfaces, be sure to assign the Network Control Policy you just created:

Under the LAN tab, under Policies > root > Network Control Policies, right-click and select Create Network Control Policy. Wait... didn't we do this above? Sort of. We created a Network Control Policy earlier for our Appliance Ports; the policy we're creating now will be applied to the vNIC Templates we're about to create. Set the same options as before: enable CDP and leave all others at their defaults. Give it a descriptive name and click OK.
- One vNIC per Fabric for Service Console and vMotion traffic
- One vNIC per Fabric for Virtual Machine traffic
- One vNIC per Fabric for iSCSI traffic (do not enable Fabric Failover for these vNIC Templates!)
Create your six vNIC Templates, assign the appropriate VLANs, and be sure you set MTU 9000 for your iSCSI vNIC Templates. Unfortunately I didn't get a screen grab of my vNIC Templates as an example, but I did get a screen cap of the vNICs within the Service Profile Templates as they were built from the vNIC Connectivity Templates:

You'll see above that this iSCSI vNIC within the Service Profile Template is bound to my iSCSI_A LAN template, is owned by Fabric A WITHOUT failover, has an MTU of 9000, an Adapter Policy set to VMware, and a Network Control Policy set to the CDP-enabled NCP I created earlier.
Select Create Local Disk Configuration Policy and give it a name; I've called mine NoLocalStorage. Select No Local Storage from the drop-down, uncheck Protect Configuration, and click OK. Choose the policy you just created from the drop-down.

Under "How would you like to configure SAN connectivity?" select No vHBAs. Click Next.
On the Networking screen, skip any configuration of Dynamic vNIC Connection Policies. Select Expert for configuring LAN connectivity, and now create all of the vNICs for the Service Profile Template.

Select Add, name the vNIC to be attached to the Service Profile Template, check Use LAN Connectivity Template, select the vNIC Template you created earlier, and set the Adapter Policy to VMware. Create a vNIC for each vNIC Template you created earlier. Your screen should look similar to this:
Give the iSCSI vNIC a name, select the vNIC you just created to overlay iSCSI traffic, and select the appropriate VLAN. Click Create iSCSI Adapter Policy and create a policy with a Connection Timeout of 30 seconds and a Busy Retry Count of 30:
On the Server Boot Order screen, create a Boot Policy to use. Boot from the CD-ROM first, and then add an iSCSI Boot option. Be sure you use the same iSCSI Overlay vNIC name you created earlier. Click OK.
Paste the IQN of the VNXe iSCSI Server into the Name field, and set the IPv4 address to the iSCSI Server A-side IP address. The default LUN ID of 0 is acceptable; this will change for the Service Profiles anyway when we un-bind them from the Template after deploying.
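Filled in, the boot parameters look something like this (the target IQN comes from your VNXe, and the IP shown is just an illustrative address on this deployment's A-side subnet):

    Name (target IQN):  iqn....          (copied from the VNXe iSCSI Server properties)
    IPv4 Address:       172.16.10.50     (hypothetical A-side address)
    LUN ID:             0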
Note: I only deploy one iSCSI vNIC at this time. The reason for this is that UCS currently does not have the ability to deploy multiple IP pools for iSCSI vNICs. It would be great if this feature were enabled, as we'd be able to simply create two iSCSI vNICs, bind distinct IPs to each initiator on different subnets, and VMware would be crafty enough to recognize both of them and assign vmkernel ports to them. Our goal is to automate what we can, though, so that is why I do not create a second iSCSI vNIC.
Click OK, and your screen should look like this:
Select a Server Assignment Pool to use, your desired power state at association, and any Pool Qualifications or Firmware Management Policies you'd like to apply, if any. Click Next.
After you modify these parameters and click OK and Save Settings, you'll be prompted with a User Acknowledgment to reboot the blade and apply the new parameters. Click Yes, and under Pending Activities, acknowledge the reboot. You can do these one at a time for each Service Profile, or edit and reboot them all in bulk. Don't close UCS Manager quite yet; we'll need some information for the VNXe.
[3] I'd recommend something a little more descriptive, maybe ESXnBootLUN, where n is the Service Profile number. I did a bad job here of clearly delineating between the Host ID and the Storage ID.
Click Next. Choose a Pool from which to provision the LUN, preferably the pool with the lowest performance. Set your LUN size to 5GB, and since you can have two iSCSI Servers, make sure you're provisioning from iSCSI Server A:
Click Next. Under the Operating System dropdown, select VMware ESX:
Click Next. Grab the IQN off the iSCSI Boot Parameters property sheet as well, and input that on this
screen.
Select Virtual Disk from the Access drop-down. Click Next. Verify the summary information, and click
Finish.
vSphere Installation
Log back into the UCS Manager interface and select your first Service Profile. Reboot the Service Profile, since we've added our LUN masking configuration on the VNXe for our boot LUNs. During the boot-up process, if you have disabled the Quiet Boot option in the Service Profile's BIOS Policy [4], you should see the boot LUN appear in the Cisco VIC iSCSI Boot Driver as follows:
If you do NOT see the above, do not proceed with mounting the Cisco ISO to the Virtual Media drive. Double-check the configuration items preceding this step. The NX-OS command line within the FIs is extremely useful here. Verify that the MAC address for the VNXe's iSCSI Server IP is assigned to the proper VLAN, and that you are seeing the proper iSCSI vNIC MAC addresses on that VLAN as well:
[4] I'd highly recommend disabling Quiet Boot in the BIOS policies for your Service Profiles for the initial part of the implementation. It's extremely useful to be able to watch the POST process and be sure your VIC is initializing and seeing the boot LUN properly.
This is actually a bad example, as only one port is currently active (Eth1/24), and it doesn't have an IP assigned to it within the VNXe yet. But a show mac address-table vlan 500 will show the MAC membership for your iSCSI VLAN on FI-A. If you are seeing MAC addresses on the opposite FI from the one you're expecting, you may need to edit your iSCSI Server configuration within the VNXe, or do some cable swaps between FIs.
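For reference, getting to that nxos prompt and running the checks looks like this from the UCS Manager CLI (VLAN 500 and Eth1/24 are this deployment's values; your port numbers will differ):

    UCS-A# connect nxos a
    UCS-A(nxos)# show mac address-table vlan 500
    UCS-A(nxos)# show interface ethernet 1/24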
If all is well, mount your Cisco custom ISO in the Virtual Media tab:
Run through the install prompts, and when the installer scans for disks, you should see the following:
After the install finishes, the virtual media ISO will be ejected and the Service Profile will reboot.
You'll need to perform this procedure for each Service Profile. If you don't see your boot LUN at first, don't panic. Sometimes it's as simple as a Service Profile Reset, and on the next boot it will discover the target and log in successfully [5].
[5] CLI tools are invaluable for troubleshooting. From the adapter prompt, the iscsi_ping and iscsi_get_config commands display a wealth of information. However, they are only available while the server is booting or sitting at the BIOS screen.
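Getting to that adapter prompt also goes through the FI CLI; the path looks roughly like this (the chassis/blade/adapter numbers are placeholders, and the exact subcommands can vary by VIC firmware):

    UCS-A# connect adapter 1/1/1
    adapter 1/1/1 # connect
    adapter 1/1/1 (top):1# attach-mcp
    adapter 1/1/1 (mcp):1# iscsi_get_config vnic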
A prerequisite for iSCSI vmkernel port binding is to bind each vmkernel port in the iSCSI vSwitch to its own vmnic by overriding the vSwitch failover order.
For the iScsiBootPG port group, edit the NIC Teaming properties and select the option to override the switch failover order:
Commit this change and repeat the same process for the other port group, except you'll reverse the adapter usage:
[6] I'm not quite sure why these vmnics show a speed of 20000. This usually happens when Fabric Failover is enabled, as the NIC technically sees two 10Gb connections. I can assure you FF is not enabled for the vNICs backing these vmnics, however.
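The same override can be done from the ESXi shell; a sketch, assuming vmnic2 and vmnic3 back the iSCSI vSwitch and the second port group is named iScsiPG-B (that name is my placeholder; iScsiBootPG is created for you by the iSCSI boot setup):

    # Make vmnic2 the only active uplink for the boot port group (unlisted uplinks become unused)
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iScsiBootPG --active-uplinks=vmnic2
    # Reverse the adapter usage for the second iSCSI port group
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iScsiPG-B --active-uplinks=vmnic3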
Click Add, and select a vmkernel port to use. Both of the vmkernel ports you created before should show as compliant; if they do not, go back and double-check your bindings for the port groups.
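The same binding can be done with esxcli; a sketch, assuming the software iSCSI adapter enumerated as vmhba33 (check with esxcli iscsi adapter list) and your two iSCSI vmkernel ports are vmk1 and vmk2:

    # Bind each iSCSI vmkernel port to the iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    # Confirm both bindings show up
    esxcli iscsi networkportal list --adapter=vmhba33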
You can choose to add discovery targets now, but the VNXe's vSphere integration will actually do all of this for you.
Click Next, and enter the IP address of the Service Profile's Management Network vmkernel port. Then click Find.
At this point the VNXe should find and recognize the host as an ESXi server.
[7] If the password is ever changed (or the account deleted, if you use a non-root account), you'll need to be sure to update the credentials for the affected host(s).
The host has now been re-added to the VNXe, but we'll need to make sure we re-assign its boot LUN to the host. Navigate to Storage > Generic iSCSI Storage and select the boot LUN for the host you just re-added. Click Details, and then select Host Access.
At this point, you'll be able to add shared datastores to your ESXi hosts by selecting Create Storage for VMware from the Dashboard. This is a well-covered topic in EMC's documentation, as well as on several blogs and YouTube, so for the sake of brevity I'll leave those instructions to their docs [9].
Conclusion
I hope this document is of assistance to anyone looking to virtualize with VMware vSphere on Cisco Unified Computing System servers backed by EMC's VNXe on iSCSI. If you have any feedback or questions, or just want to give a shout-out that this was helpful, I can be found on Twitter (@kennega) or via my website:
http://www.dudewheresmycloud.com/
[8] This is for reference only. The host will never try to use the Management Network to log in via iSCSI.
[9] Depending on the VNXe code revision, the VNXe vSphere engine may create datastores as VMFS-3 instead of VMFS-5. The newest code rev from EMC is supposed to address this.
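If you want to confirm which VMFS version a datastore ended up with, vmkfstools will tell you (the datastore name here is a placeholder):

    vmkfstools -Ph /vmfs/volumes/Datastore01

The first line of output reports the filesystem as VMFS-3.xx or VMFS-5.xx.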
References
EMC VNXe High Availability
https://community.emc.com/docs/DOC-12551
Multipathing Configuration for Software iSCSI Using Port Binding
http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf
Cisco UCS Manager GUI Configuration Guide
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_011101.html#concept_D7BF302366F24CF5A602B0E0BD18787C
Cisco UCS Networking Best Practices
http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/