
LeftHand Networks VI3 field guide for SAN/iQ 8 SANs

Overview
Administrators implementing VMware® Infrastructure 3 (VI3) on a LeftHand Networks® SAN
should read this document in its entirety. Important configuration notes, best practices, and
frequently asked questions are outlined to accelerate a successful deployment.

Contents
Overview
Initial iSCSI Setup of VI3 server
  Licensing
  Networking for the Software initiator
  Enabling the iSCSI software adapter
  HBA connectivity and networking
Connecting and using iSCSI volumes
  Creating the first iSCSI volume on the SAN
  Enabling VIPLB for performance
  Discovery of the first iSCSI volume
  Discovering additional volumes
  Disconnecting iSCSI volumes from ESX or ESXi hosts
  Troubleshooting Volume Connectivity
  Creating a new datastore on the iSCSI volume
  Expanding a SAN/iQ volume and extending a datastore on the iSCSI volume
Snapshots, Remote IP Copy, and SmartClone volumes
  Resignaturing
  SAN/iQ snapshots of VI3 raw devices
  SAN/iQ snapshots of VI3 VMFS datastores
  SAN/iQ Remote IP Copy Volumes and SRM
  SAN/iQ SmartClone Volumes
VMotion, Clustering, HA, DRS, and VCB
Choosing Datastores and Volumes for virtual machines
Best Practices
FAQ

Initial iSCSI Setup of VI3 server


Licensing
In order to use the software iSCSI functionality, ESX servers must be licensed for iSCSI SAN use.
Licensing information is viewable for each ESX or ESXi server within the VirtualCenter client.

Networking for the Software initiator


SAN connectivity via the iSCSI software adapter requires specific network configuration before
it is enabled. To enable it correctly, a VMkernel network must be created with sufficient network
access to the iSCSI SAN. As a best practice, use at least two Gigabit network adapters teamed
together for performance and failover. To allow for iSCSI authentication, ESX servers also
require a Service Console that can route to the iSCSI SAN. If an ESX server's current Service
Console cannot route to the iSCSI SAN, an additional Service Console should be added on the
same network as the VMkernel used for iSCSI. ESXi servers do not require this because they do
not have a Service Console. As a best practice, the VMkernel network for iSCSI should be
separate from the management network and the virtual networks used by virtual machines. If
enough networks are available, VMotion should use a separate network and VMkernel as well.

The ideal networking configuration for iSCSI depends on the number of Gigabit network
connections available to a VI3 server. The most common configurations (2, 4, and 6 ports) are
outlined here for reference.

VI3 servers with only 2 Gigabit network ports are not ideal for iSCSI SANs connected by the
software initiator, because there is little network bandwidth to spare for iSCSI traffic. If peak
performance is not a concern, however, VI3 servers can still function well with only 2 Gigabit
ports. VI3 servers with only 2 Gigabit network ports should be configured with:

o A single virtual switch comprised of both Gigabit ports teamed together, containing:
     A network for virtual machines
     A service console network (ESX) or management network (ESXi)
     A VMkernel network with VMotion enabled

o As a best practice the VMkernel network's failover order should be reversed from the rest of
  the port groups on the switch. This will make best use of bandwidth by favoring the second
  adapter for iSCSI and VMotion and favoring the first adapter for VM network traffic and
  management access. A command-line sketch of this layout follows.
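
The same layout can be sketched from the ESX service console. This is a hedged example:
vmnic0/vmnic1, the port group names, and the 10.0.1.0/24 iSCSI addresses are placeholders for
your environment, and the reversed failover order itself is set per port group in the VI client
rather than from these commands.

    # Attach both Gigabit uplinks to one virtual switch (vSwitch0 usually exists)
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0

    # Add port groups for virtual machines and for iSCSI/VMotion
    esxcfg-vswitch -A "VM Network" vSwitch0
    esxcfg-vswitch -A "VMkernel iSCSI" vSwitch0

    # Give the VMkernel interface an address on the iSCSI network
    esxcfg-vmknic -a -i 10.0.1.101 -n 255.255.255.0 "VMkernel iSCSI"

    # ESX only: a Service Console interface that can route to the SAN
    esxcfg-vswitch -A "Service Console 2" vSwitch0
    esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.0.1.102 -n 255.255.255.0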

VI3 servers with 4 Gigabit network ports are capable of performing better by separating
management and virtual machine traffic away from iSCSI and VMotion traffic. VI3 servers with
4 Gigabit network ports should be configured with:

o Two virtual switches, each comprised of two Gigabit ports teamed together. If possible, each
  team should use one port from each of two separate Gigabit adapters. For example, if using
  two onboard Gigabit adapters and a dual-port Ethernet card, team together port 0 from the
  onboard adapters and port 0 from the Ethernet card; then team together port 1 from the
  onboard adapters and port 1 from the Ethernet card. This provides protection from some bus
  or card failures.

 The first virtual switch should have:
     A service console (ESX) or management network (ESXi)
     A virtual machine network
 The second virtual switch should have:
     A VMkernel network with VMotion enabled
     For ESX, a service console (required for iSCSI authentication, not required for ESXi)

VI3 servers with 6 Gigabit network ports are ideal for delivering performance with the software
iSCSI initiator. The improvement over 4 ports is achieved by separating VMotion traffic and
iSCSI traffic so they don’t have to share bandwidth. Both iSCSI and VMotion will perform
better in this environment. VI3 servers with 6 Gigabit network ports should be configured with:

o Three virtual switches, each comprised of two Gigabit ports teamed together. If possible, one
  port from separate Gigabit adapters should be used in each team to prevent some bus or card
  failures from affecting an entire virtual switch.

 The first virtual switch should have:
     A service console (ESX) or management network (ESXi)
     A virtual machine network
 The second virtual switch should have:
     A VMkernel network with VMotion disabled
     For ESX, a service console (required for iSCSI authentication, not required for ESXi)
 The third virtual switch should have:
     A VMkernel network with VMotion enabled, on a separate subnet from iSCSI

More than 6 ports: If more than 6 network adapters are available, the extra adapters can be added
to the iSCSI virtual switch to increase available bandwidth, or used for any other network
services desired.

Enabling the iSCSI software adapter


The VI3 iSCSI software adapter must be enabled before it can be used. The iSCSI software
adapter is managed from the Storage Adapters list of each ESX or ESXi server. When the iSCSI
adapter is enabled, copy or write down the “iSCSI Name”. This is the iSCSI qualified name
(IQN) that identifies this VI3 server and will be needed for authentication on the SAN. The VI3
server must be rebooted after enabling the iSCSI software adapter.
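
On ESX the adapter can also be enabled from the service console. A minimal sketch, assuming
the software adapter enumerates as vmhba32 (typical on ESX 3.5; the name may differ on your
host):

    # Enable the software iSCSI adapter, then reboot the host
    esxcfg-swiscsi -e

    # After the reboot, list the adapter's iSCSI name (IQN) for SAN authentication
    vmkiscsi-tool -I -l vmhba32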

HBA connectivity and networking


SAN connectivity via iSCSI HBAs enables offloading iSCSI processing from the VI3 server and
booting ESX itself from the iSCSI SAN. HBAs do not require licensing or special networking
within VI3 servers. The physical network for HBAs should be a dedicated Gigabit network to the
SAN, just as for software initiators. As a best practice, use two HBA initiators (a dual port or two
single ports), each configured with a path to all iSCSI targets for failover. Configuring multiple
HBA initiators to connect to the same target requires configuring authentication for each
initiator's IQN on the SAN. Typically this is configured as two SAN/iQ “Servers” (one for each
HBA initiator), each with permissions to the same volumes on the SAN.

Connecting and using iSCSI volumes


Creating the first iSCSI volume on the SAN
Before a VI3 server can mount a new iSCSI volume, it must be created on the SAN and
authentication must be configured to allow the server to access the volume. Use the LeftHand
Networks Centralized Management Console (CMC) or CLI to create a new volume. Create a
“Server” representing the ESX or ESXi server, using the IQN(s) copied from the VI3 server's
initiator(s) or CHAP as required. SAN volumes are then simply assigned to “Servers” within the
CMC or CLI. A CLI sketch follows.
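
As a sketch of the CLI path (the volume, server, and cluster names, the management IP,
credentials, and the IQN below are all placeholders; confirm the exact parameter names against
the SAN/iQ 8 CLIQ documentation):

    # Create a volume, create a "Server" for the ESX host's IQN,
    # and assign the volume to that server
    cliq createVolume volumeName=esx-vol1 clusterName=Cluster1 size=500GB login=10.0.1.5 userName=admin passWord=secret
    cliq createServer serverName=esx01 initiator=iqn.1998-01.com.vmware:esx01 login=10.0.1.5 userName=admin passWord=secret
    cliq assignVolumeToServer volumeName=esx-vol1 serverName=esx01 login=10.0.1.5 userName=admin passWord=secret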

Enabling VIPLB for performance


Virtual IP load balancing (VIPLB), or simply load balancing, is a setting on each SAN/iQ
“Server” that allows iSCSI sessions to be distributed across the storage nodes of a SAN/iQ
cluster to maximize performance and bandwidth utilization. Most initiators, including the VI3
software initiator and HBAs, support this feature, so it is on by default in SAN/iQ 8 SANs. All
VI3 initiators should have this feature enabled in a SAN/iQ SAN.

Discovery of the first iSCSI volume


To discover volumes, the software initiator of each ESX or ESXi host must have the virtual IP
address of the SAN/iQ cluster containing the volume added to its dynamic discovery list. New
targets can then be discovered by simply rescanning the iSCSI software adapter on the ESX or
ESXi host. The iSCSI session status of an ESX server can be viewed in the LeftHand Networks
CMC by selecting the “Server” in question. Volumes that are connected show an IP address in
the “Gateway Connection” field. If an IP address is not listed, something might be configured
incorrectly on the ESX host or on the “Server” object in the CMC.
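
A service console sketch of the same steps (the virtual IP 10.0.1.5 and adapter name vmhba32
are placeholders):

    # Add the cluster virtual IP to the dynamic discovery list,
    # then rescan to log in to newly assigned volumes
    vmkiscsi-tool -D -a 10.0.1.5 vmhba32
    esxcfg-rescan vmhba32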

Discovering additional volumes
A reboot of a VI3 server completely refreshes iSCSI connections and volumes, including
removing ones that are no longer available. Without rebooting, additional volumes can be logged
into and discovered by simply performing a rescan of the iSCSI software adapter or of all
adapters. HBAs can also add targets manually by configuring static targets.

Disconnecting iSCSI volumes from ESX or ESXi hosts


The VI3 iSCSI software adapter does not have a session disconnect feature. To disconnect an
iSCSI volume, first unassign it from the “Server” within the LeftHand Networks CMC.
Unassigned volumes are not forcefully disconnected; instead, the server simply will not be
allowed to log in again. Rebooting an ESX or ESXi server will clear all unassigned volumes.
Individual iSCSI sessions can also be reset (resetsession) from the SAN/iQ 8 CLI to remove
them from ESX or ESXi hosts forcefully without rebooting. Before forcefully resetting an iSCSI
session from the CLI, all VMs accessing the volume through that session should be powered off
or VMotioned to another server that will continue to access the volume.

Troubleshooting Volume Connectivity


If new volumes are not showing up as expected, try these troubleshooting steps (a command
sketch follows the list):

o Ping the virtual IP address from the iSCSI initiator to ensure basic network connectivity
 For the software initiator this can be done by logging into the VI3 service console and
   executing vmkping x.x.x.x and ping x.x.x.x. Both of these commands must succeed, or
   networking is not correct for iSCSI. The vmkping ensures that a VMkernel network can
   reach the SAN and the ping ensures the service console can reach the SAN. Both must be
   able to reach the SAN to log into new volumes.
 HBAs typically have their own ping utilities inside the BIOS of the HBA.
 ESXi has a network troubleshooting utility that can be used from the KVM console to
   attempt a ping to the SAN.
o Double-check all IQNs and CHAP entries. For iSCSI authentication to work correctly these
  must be exact. Simplifying the IQN to something shorter than the default can help with
  troubleshooting.
o Make sure all “Servers” on the SAN have load balancing enabled. If there is a mix, some
  with it enabled and some without, then those without might not connect to their volumes.
o Enable resignaturing. Volumes that have been copied, snapshot, or restored from backup can
  look like a snapshot LUN to VI3. If resignaturing is not enabled, VI3 will hide those
  volumes.
o Verify that the iSCSI protocol is allowed in the firewall rules of ESX. Version 3.5 of ESX
  does not allow iSCSI traffic through the firewall by default in most installations.
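
A consolidated service console sketch of these checks (10.0.1.5 stands in for the SAN virtual
IP):

    # Both pings must succeed: vmkping tests the VMkernel path,
    # ping tests the Service Console path
    vmkping 10.0.1.5
    ping 10.0.1.5

    # ESX 3.5: query the firewall and allow the software iSCSI client if blocked
    esxcfg-firewall -q swISCSIClient
    esxcfg-firewall -e swISCSIClient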

Creating a new datastore on the iSCSI volume
Now that the VI3 server has an iSCSI SAN volume connected, it can be formatted as a new
VMFS datastore or mounted as a raw device mapping (RDM) directly to virtual machines. New
datastores are formatted from within the VMware VirtualCenter client.
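
The equivalent operation from the ESX service console looks roughly like this (the partition
vmhba32:0:1:1 and the datastore name are placeholders; identify the correct LUN carefully
before formatting):

    # Format the new LUN's partition as VMFS3 with a 1 MB block size
    vmkfstools -C vmfs3 -b 1m -S iSCSI-DS1 vmhba32:0:1:1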

Expanding a SAN/iQ volume and extending a datastore on the iSCSI volume
Both SAN/iQ volumes and VI3 datastores can be expanded or extended dynamically. If space is
running low on a datastore, first make the volume it resides on larger. To increase the size of the
SAN/iQ volume, simply edit it in the CMC and give it a larger size. SAN/iQ software will
immediately change the LUN size and lay out data across the cluster accordingly without
affecting the volume's availability. Once the SAN/iQ volume has been expanded, the datastore
on that volume can have a VMFS extent added to it to use the newly available space. This
expansion can only be done four times within the same volume, since each extent is an
additional primary partition created by VI3 on the SAN/iQ volume. If extending more than four
times is necessary, an additional SAN/iQ volume has to be used for the fifth extent. For more
information on extents please refer to the VMware documentation:
http://www.vmware.com/support/pubs/vi_pubs.html
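
A hedged sketch of both halves (placeholder names, credentials, and partitions; the extent is
normally added in the VI client, and the vmkfstools form takes the new extent partition first and
the datastore's head partition second):

    # Grow the SAN/iQ volume; the SAN re-lays-out data without downtime
    cliq modifyVolume volumeName=esx-vol1 size=750GB login=10.0.1.5 userName=admin passWord=secret

    # Then span the datastore onto the new extent partition
    vmkfstools -Z vmhba32:0:1:2 vmhba32:0:1:1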

Snapshots, Remote IP Copy, and SmartClone volumes

Resignaturing
In order for VI3 servers to utilize SAN-based snapshots, resignaturing must be enabled on the
VI3 server. If resignaturing is not enabled, the VI3 server will report that the volume is blank
and needs to be formatted even though it contains a valid datastore. Resignaturing is one of the
advanced settings of an ESX or ESXi server and can be edited in the VirtualCenter client. Be
aware that some SANs cannot support resignaturing. If SAN storage other than LeftHand
Networks SANs is also attached to the same VI3 server, refer to VMware's SAN configuration
guide to verify that resignaturing is an option. For more information on resignaturing please
refer to the VMware documentation:
http://www.vmware.com/support/pubs/vi_pubs.html
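
From the ESX service console the setting can be flipped as follows (the same option appears
under Advanced Settings, LVM, in the VirtualCenter client):

    # Enable resignaturing, then verify the value
    esxcfg-advcfg -s 1 /LVM/EnableResignature
    esxcfg-advcfg -g /LVM/EnableResignature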

SAN/iQ snapshots of VI3 raw devices


SAN/iQ snapshots of VI3 raw devices function and are supported in exactly the same way as for
physical servers, whether booting from the SAN or accessing raw disks on the SAN. Detailed
information about SAN/iQ snapshots and how they work for specific applications is available at
http://www.lefthandnetworks.com/resource_library.aspx

SAN/iQ snapshots of VI3 VMFS datastores


SAN/iQ snapshots are very useful in a VI3 environment. All virtual machines stored on a single
volume can be snapshot at once and rolled back to at any time. Additionally, SAN/iQ snapshots
can be mounted to any VI3 server without interrupting access to the volume of which they are a
point-in-time copy.
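
A minimal sketch from the SAN/iQ CLI (names and credentials are placeholders; the same
snapshot can be taken in the CMC):

    # Snapshot the volume backing a VMFS datastore; every VM on the
    # datastore is captured in the same point-in-time image
    cliq createSnapshot volumeName=esx-vol1 snapshotName=esx-vol1-ss1 login=10.0.1.5 userName=admin passWord=secret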

SAN/iQ Remote IP Copy Volumes and SRM


Remote IP Copy™ software allows SAN/iQ snapshots to be copied over WAN links to remote
sites for disaster recovery or backup. VI3 environments can be protected by Remote IP Copy
volumes on a scheduled basis, automated by VMware Site Recovery Manager for the simplest
and most complete disaster recovery solution. LeftHand Networks provides a storage replication
adapter (SRA) for VMware Site Recovery Manager (SRM) to seamlessly integrate Remote IP
Copy volumes with a VI3 environment. For more information on Remote IP Copy volumes go
to http://www.lefthandnetworks.com/resource_library.aspx

For more information, or to download the LeftHand Networks SRA for VMware Site Recovery
Manager, go to http://resources.lefthandnetworks.com/forms/VMware-LeftHand-SRA-Download

SAN/iQ SmartClone Volumes
SmartClone™ volumes are very useful in a VI3 environment. All virtual machines stored on a
single volume can be cloned instantly and without replicating data. SmartClone volumes only
consume space for data changed after the time the clone was taken. This is the best way to
deploy large quantities of cloned virtual machines or virtual desktops. SmartClone volumes can
be used with any other SAN/iQ features, such as snapshots or Remote IP Copy, without
limitation. SmartClone volumes are also very useful for performing tests on virtual machines by
quickly reproducing them without taking up space on the SAN to actually copy them.

VMotion, Clustering, HA, DRS, and VCB

The advanced VI3 features of VMotion, Clustering, HA, DRS, and VCB all require multiple VI3
servers to have access to volumes simultaneously. To enable this on a SAN/iQ volume, multiple
“Servers” (one for each VI3 server initiator) can be assigned to the same volume. To use
multiple “Servers”, simply create one for each VI3 host that will connect to the SAN, in the
same manner outlined in the Creating the first iSCSI volume on the SAN section. When multiple
“Servers” are assigned to the same volume, a warning will indicate that this is intended for
clustered servers or clustered file systems such as ESX and ESXi. CHAP authentication can also
be used for increased security, or to represent many ESX or ESXi hosts as one “Server” in the
LeftHand Networks CMC. Each ESX or ESXi server must be configured to use the correct
CHAP credentials. New volumes will have to be discovered on each VI3 server as described in
Discovery of the first iSCSI volume.

VCB proxy servers should have their own “Server” configured on the SAN with read-only
access to the volumes that ESX or ESXi servers are accessing. VCB does not require write
access, and read-only access prevents the Windows-based VCB proxy server from inadvertently
writing new signatures to VMFS volumes that are in use by ESX or ESXi. A CLI sketch follows.
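
As a hedged CLI sketch of these assignments (server and volume names are placeholders, and
the exact spelling of the read-only access parameter should be confirmed in the CLIQ
documentation):

    # Give each clustered VI3 host full access to the shared volume
    cliq assignVolumeToServer volumeName=esx-vol1 serverName=esx01 login=10.0.1.5 userName=admin passWord=secret
    cliq assignVolumeToServer volumeName=esx-vol1 serverName=esx02 login=10.0.1.5 userName=admin passWord=secret

    # Give the VCB proxy read-only access to the same volume
    cliq assignVolumeToServer volumeName=esx-vol1 serverName=vcb-proxy access=r login=10.0.1.5 userName=admin passWord=secret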

Choosing Datastores and Volumes for virtual machines
More than one virtual machine can, and generally should, be stored on each datastore or volume.
Choosing where to put virtual machines should be driven by backup policies, virtual machines'
relationships to each other, and performance. In general, virtual machines that do not have a
relationship to one another should not be mixed on the same SAN/iQ volume. SAN/iQ features
like snapshots, Remote IP Copy, and SmartClone volumes are very useful with virtual machines,
but they always affect all virtual machines on the same volume simultaneously. If virtual
machines that have no relationship are mixed on a single volume, those virtual machines will
have to be snapshot, rolled back, remotely copied, and cloned together.

Performance of virtual machines could also be affected if too many virtual machines are located
on a single volume. The more virtual machines on a volume, the more IO and SCSI reservation
contention there is for that volume. Up to sixteen virtual machines on a single volume will
function, but they might experience degraded performance, depending on the hardware
configuration, if all VMs are booted at the same time. Four to eight virtual machines per volume
is less likely to affect performance.

Best Practices
Use at least two Gigabit network adapters teamed together for performance and failover of
the iSCSI connection.

Teaming network adapters provides redundancy for networking components such as adapters,
cables, and switches. An added benefit of teaming is an increase in available IO bandwidth.
Network teams on SAN/iQ storage nodes are easily configured in the CMC by selecting 2 active
links and enabling a “bond”. Balance ALB is the most common teaming method used on
SAN/iQ storage nodes. Network teams on VI3 servers are configured at the virtual switch level.
Testing has shown that a VI3 server's NIC team for iSCSI handles network failures more
smoothly if “rolling failover” is enabled.

The VMkernel network for iSCSI should be separate from the management and virtual
networks used by virtual machines. If enough networks are available, VMotion should use a
separate network as well.

Separating networks by functionality (iSCSI, VMotion, virtual machines) provides higher
reliability and performance of those functions.

Enable load balancing for iSCSI for improved performance.

The load balancing feature of a SAN/iQ “Server” allows iSCSI connections to be redirected to
the least busy storage node in the cluster. This keeps the load on storage nodes throughout the
cluster as even as possible and improves overall performance of the SAN. This setting is enabled
by default in SAN/iQ 8 management groups but had to be enabled explicitly in previous
versions.

Virtual machines that can be backed up and restored together can share the same volume.

Since SAN/iQ snapshots, Remote IP Copy volumes, and SmartClone volumes work on a per-
volume basis, it is best to group virtual machines on volumes based on their backup and restore
relationships. For example, a test environment made up of a domain controller and a few
application servers would be a good candidate to put on the same volume. Those could be
snapshot, cloned, and restored as one unit.

FAQ
I added the SAN/iQ cluster virtual IP address to a VI3 server's dynamic discovery but don't
see a new target?

Most likely you just need to select “rescan” under the “configuration” tab and “storage
adapters”, or you have not configured authentication on the SAN correctly. Also confirm all
network configurations. Please refer to the Discovery of the first iSCSI volume section.

I can’t see any new volumes after the first one?

SAN/iQ software version 6.5 with patch 10004 or higher is necessary to mount multiple volumes
on VI3 ESX servers. Contact support@lefthandnetworks.com to receive the patch for 6.5.
Upgrading to 6.6.00.4101 or higher is preferred.

Is virtual IP load balancing supported for VI3?

The VI3 software adapter and hardware adapters are supported by SAN/iQ Virtual IP Load
Balancing. As a best practice this should be enabled on the authentication groups of all VI3
initiators. Load Balancing is enabled by default in SAN/iQ 8 management groups.

I rolled back / mounted a snapshot and VI3 server says it needs to be formatted?

You need to enable resignaturing on your VI3 servers to mount or roll back SAN based
snapshots. Please refer to the Resignaturing section.

What initiator should I use for a virtual machine boot volume/LUN?

There are many ways to connect and present an iSCSI volume to a virtual machine on a VI3
server. These include using the VI3 software adapter, a hardware adapter (HBA), and, for some
guest operating systems, the guest's own software iSCSI adapter. For VMFS datastores
containing virtual machine definitions and virtual disk files, the VI3 server's hardware or
software adapter must be used. Which one, HBA or software, is debatable. Each gives you full
VI3 functionality and is supported equally. The HBA's advantages are in supporting boot from
SAN and offloading iSCSI processing from the VI3 server. If boot from SAN is not necessary,
then the software initiator is a good choice; the impact of iSCSI processing on modern
processors is minimal. With either one, performance is more a function of the quality of the
physical network and the disk quantity and disk rotation speed of the SAN being attached to.

What Initiator Should I use for additional Raw Device Mappings or Virtual Disks?

Aside from the boot LUN/volume, additional volumes should be used for storing application
data. Specifically, many applications require separate volumes for databases and logs as a best
practice. These should be presented either as raw devices (RDMs) through your chosen VI3
server initiator or connected as iSCSI disks directly through the virtual machine's guest
operating system software initiator. Using RDMs or direct iSCSI allows these application
volumes to be transported seamlessly between physical and virtual servers, since they are
formatted in the native file system of the operating system (NTFS, EXT3, etc.). In order to use
the guest operating system initiator successfully, ensure these guidelines are followed:

o The guest operating system initiator is supported and listed in the LeftHand Networks
  compatibility matrix.
o For good performance and failover, the guest network the initiator is going to use is at least
  dual Gigabit and separate from other virtual networks (VMkernel, VMotion, Service
  Console, virtual machine public networks, etc.).
o The guest operating system is using the vmxnet NIC driver from VMware Tools.
o The virtual machine will not be used in conjunction with VMware Site Recovery Manager
  (SRM). SRM does not work with volumes connected by guest initiators.

How many virtual machines should be on a volume or datastore?

Please refer to the Choosing Datastores and Volumes for virtual machines section.

Are jumbo frames supported?

Jumbo frames (IP packets configured larger than the typical 1,500 bytes, up to 9,000) are
supported by all SAN/iQ SANs. In order for jumbo frames to be effective they must be enabled
end to end, including on network adapters and switches. ESX version 3.5 allows configuration
of jumbo frames but does not support them for use with the VI3 software iSCSI initiator. HBAs
are capable of utilizing jumbo frames on any version of ESX.
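
For reference, on ESX 3.5 a vSwitch MTU is raised from the service console as shown below
(vSwitch2 is a placeholder; remember this does not make the software iSCSI initiator
jumbo-capable):

    # Set a 9000-byte MTU on the virtual switch used by jumbo-capable traffic
    esxcfg-vswitch -m 9000 vSwitch2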

Is VCB (VMware Consolidated Backup) supported on iSCSI?

VMware Consolidated Backup 1.0.3 and higher is fully supported on iSCSI SANs.

Why does my 2TB or higher iSCSI volume show up as 0MB to the VI3 server?

The maximum volume size supported by VI3 is 2047GB.

What version of SAN/iQ software supports VMware VI3?

SAN/iQ software 6.5 + patch 10004 supports all features except for VMotion with more than
one virtual machine on a volume.

SAN/iQ software 6.6 + patch 10005 supports all features VMware enables for iSCSI. The
SAN/iQ software version of storage nodes with the patch applied should be 6.6.00.4101 or
higher.

All subsequent releases (6.6 SP1, 7, 7 SP1, and SAN/iQ 8 software) support VI3.

