
HP Virtual Connect with iSCSI Cookbook

February 2012, 3rd Edition


Technical white paper


Table of contents
Purpose .............................................................................................................................................. 3
Introduction ......................................................................................................................................... 3
System requirements ............................................................................................................................. 4
Virtual Connect with iSCSI Summary support ....................................................................................... 5
Firmware and Software Support......................................................................................................... 6
iSCSI Target support ......................................................................................................................... 7
Unsupported configuration ................................................................................................................ 7
Networking recommendations ............................................................................................................... 8
Network considerations .................................................................................................................... 8
Keep it simple and short ................................................................................................................... 8
Storage vendors' recommendations ................................................................................... 8
Flow Control ................................................................................................................................ 8
Jumbo Frames (optional recommendation) ..................................................................................... 10
iSCSI multipathing solutions ............................................................................................................. 16
Virtual Connect network scenarios ....................................................................................................... 17
Scenario 1: iSCSI network physically separated ................................................................................ 17
Defining two iSCSI vNets ................................................................................ 20
Scenario 2: iSCSI network logically separated .................................................................................. 22
Defining a first Shared Uplink Set (VLAN-trunk-1) ............................................................................ 23
Defining a second Shared Uplink Set (VLAN-trunk-2) ....................................................................... 24
Scenario 3: Direct-attached iSCSI Storage System .............................................................................. 26
Direct-attached limitations ............................................................................................................ 26
Scenario 3-A: Direct-attached iSCSI device with out-of-band management ......................................... 27
Scenario 3-B: Direct-attached iSCSI device with in-band management .............................................. 29
Virtual Connect network configuration........................................................................................... 30
Connecting the direct-attached iSCSI SAN to the VC Domain .......................................................... 31
Preparing the network settings of the storage system ....................................................................... 32
Configuring the Storage Management Server ................................................................................ 33
Configuring the network interface bonds on the Storage System....................................................... 35
Configuring the iSCSI Host .......................................................................................................... 37
Accelerated iSCSI .............................................................................................................................. 38
Enabling Accelerated iSCSI on the server using Virtual Connect (also known as iSCSI Offload) .............. 40
Accelerated iSCSI with Microsoft Windows Server ............................................................................. 44
Installing Emulex OneCommand Manager ..................................................................................... 44
Configuring the IP addresses of the iSCSI ports .............................................................................. 48
Installing the Microsoft iSCSI Initiator ............................................................................................ 49
Installing Microsoft MPIO ............................................................................................................ 50
Installing the Device Specific Module (DSM) .................................................................................. 50
Connecting volumes with MPIO .................................................................................................... 54
Using the Microsoft iSCSI Software Initiator in conjunction with Accelerated iSCSI support .................. 61
Accelerated iSCSI with VMware ESX 4.1 .......................................................................................... 66
Installing the NC551 / NC553 iSCSI drivers ................................................................................ 66
Installing Emulex OneCommand Manager ..................................................................................... 67
Configuring the IP addresses of the iSCSI ports .............................................................................. 69
Configuring the iSCSI Volumes ..................................................................................................... 71
Using the VMware iSCSI Software Initiator in conjunction with Accelerated iSCSI support................... 75
For more information ...................................................................................................................... 83
Boot from Accelerated iSCSI ............................................................................................................... 84


Creating an Accelerated iSCSI Boot Virtual Connect profile ................................................................ 85
iSCSI Boot Image Creation Step-by-Step Guides ................................................................................ 94
Microsoft Windows 2008 R2 ...................................................................................................... 94
VMware vSphere 5 .................................................................................................................... 99
VMware vSphere 4.1 ............................................................................................................... 103
Red Hat Enterprise Linux 5 Update 4 .......................................................................................... 113
Suse Linux Enterprise Server 11 .................................................................................................. 117
Troubleshooting ............................................................................................................................... 124
Emulex iSCSI Initiator BIOS Utility .................................................................................................. 124
Configuration checking ............................................................................................................. 126
Emulex OneCommand Manager (OCM) ......................................................................................... 130
Problems found with OneCommand Manager ................................................................................. 133
Problems found during iSCSI Boot .................................................................................................. 134
PXE booting problems ................................................................................................................... 139
iSCSI boot install problems with Windows 2003 ............................................................................. 140
VCEM issues with Accelerated iSCSI Boot ....................................................................................... 141
iSCSI issues with HP P4000 products ............................................................................................. 141
Appendix 1 - iSCSI Boot Parameters ................................................................................... 144
Mandatory iSCSI Boot Parameters entries ....................................................................................... 144
iSCSI Initiator (iSCSI Boot Configuration) ..................................................................................... 144
iSCSI Target (iSCSI Boot Configuration) ...................................................................................... 145
Initiator Network Configuration .................................................................................................. 147
Optional iSCSI Boot Parameters entries .......................................................................................... 149
Secondary iSCSI Target Address ................................................................................................ 149
Security enhancement using an authentication ............................................................................. 149
Appendix 2 - Dynamic configuration of the iSCSI Boot Parameters ........................................................ 152
Windows 2008 DHCP server configuration .................................................................................... 153
Linux DHCP server configuration .................................................................................................... 162
Format of DHCP option 43 for NC551/NC553 CNA ...................................................................... 163
Examples .................................................................................................................................... 164
Appendix 3 - How to monitor an iSCSI Network? ................................................................................. 166
Monitoring Disk Throughput on the iSCSI Storage System ................................................................. 166
Monitoring Network and Disk Throughput on the iSCSI Host ............................................................. 167
VMware vSphere ..................................................................................................................... 167
Microsoft Windows Resource Monitor ......................................................................................... 169
Analyzing Network information from the Virtual Connect interface ..................................................... 170
Analyzing Virtual Connect Network performance............................................................................. 175
Wireshark ................................................................................................................................... 176
Software iSCSI analysis ............................................................................................................. 176
iSCSI analysis for Accelerated iSCSI adapters ............................................................................. 178
For more information ........................................................................................................................ 181




Purpose
This Virtual Connect iSCSI Cookbook provides users of Virtual Connect with a better understanding of the
concepts and steps required when using iSCSI with Virtual Connect Flex-10 or FlexFabric components. This
document will help users answer some of the typical questions on iSCSI: What are the network
considerations to properly build an iSCSI network? What are the components supported by HP? How can I
troubleshoot my iSCSI environment?
In addition, this document describes some typical iSCSI scenarios to provide the reader with some valid
examples of how Virtual Connect Flex-10 or FlexFabric with iSCSI could be deployed within their
environments.
Tips and some troubleshooting information for iSCSI boot and install are also provided.
Detailed information regarding Emulex requirements is subject to change, and readers should always refer
to the documentation from the providers.

Introduction
The iSCSI standard implements the SCSI protocol over a TCP/IP network. While iSCSI can be implemented
over any TCP/IP network, the most common implementation is over 1- and 10-Gigabit Ethernet (GbE). The
iSCSI protocol transports block-level storage requests over TCP connections. Using the iSCSI protocol,
systems can connect to remote storage and use it as a physical disk, although the remote storage provider
or target may actually be providing virtual physical disks. iSCSI serves the same purpose as Fibre Channel
in building SANs, but iSCSI avoids the cost, complexity, and compatibility issues associated with Fibre
Channel SANs.
Because iSCSI is a TCP/IP implementation, it is ideal for new field deployments where no FC SAN
infrastructure exists. An iSCSI SAN is typically comprised of software or hardware initiators on the host
connected to an isolated Ethernet network and some number of storage resources (targets). While the target
is usually a hard drive enclosure or another computer, it can also be any other storage device supporting
the iSCSI protocol, such as a tape drive. The iSCSI stack at both ends of the path is used to encapsulate
SCSI block commands into Ethernet packets for transmission over IP networks.
iSCSI boot allows the c-class Blade to boot from a remote operating system image located on an Ethernet
based storage network.
In addition, accelerated iSCSI enables the Converged Network Adapter on a ProLiant Blade server to run
accelerated iSCSI; it offloads the iSCSI function to the CNA rather than taxing the CPU of the server.



System requirements
With Virtual Connect technology, only the following components support iSCSI Boot and Accelerated iSCSI:

Integrated NC553i Dual Port FlexFabric 10Gb Adapter (Intel-based BladeSystem G7 servers)
Integrated NC551i Dual Port FlexFabric 10Gb Adapter (AMD-based BladeSystem G7 servers)
HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
HP NC553m Dual Port FlexFabric 10Gb Converged Network Adapter
HP Virtual Connect FlexFabric 10Gb/24-port Module
HP Virtual Connect Flex-10 10Gb Ethernet Module



NOTE:
iSCSI Boot is also available on Virtual Connect with the QLogic QMH4062
1GbE iSCSI Adapter, but with some restrictions. The QMH4062 iSCSI settings
cannot be managed by a Virtual Connect profile, but they can be set manually
through the QLogic BIOS (CTRL+Q during Power-On Self-Test). The constraint to
remember is that during a Virtual Connect profile move, the iSCSI boot settings
will not be saved and reconfigured on the target server.

NOTE:
10Gb KR-based Ethernet switches (like the ProCurve 6120XG or Cisco 3120G) can
be used as well for Accelerated iSCSI boot, but this option is not covered in this
document.


Virtual Connect with iSCSI Summary support
Only the following combinations of devices support iSCSI Boot and Accelerated iSCSI when using Virtual
Connect.

BladeSystem G7

BladeSystem BLxx G7 with NC551i / NC553i integrated CNA + Virtual Connect FlexFabric

BladeSystem BLxx G7 with NC551i / NC553i integrated CNA + Virtual Connect Flex-10 (minimum VC 3.10 and above)

BladeSystem G6

BladeSystem BLxx G6 (latest System BIOS) + NC551m / NC553m mezzanine CNA + Virtual Connect FlexFabric

BladeSystem BLxx G6 (latest System BIOS) + NC551m / NC553m mezzanine CNA + Virtual Connect Flex-10 (minimum VC 3.10 and above)

NOTE:
At the time of writing, HP BladeSystem c-Class Integrity Server Blades do not
support Accelerated iSCSI and iSCSI Boot.

Firmware and Software Support
Setting up your iSCSI solution with an HP Virtual Connect Flex-10 or FlexFabric module and FlexFabric
Adapters (integrated and mezzanine cards) requires using the HP BladeSystem Firmware Release Set
2010.10 or later, with updates for specific components of the solution.

For best results, use the pre-deployment planning steps in the following documents:
HP Virtual Connect for c-Class BladeSystem Setup and Installation Guide
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=3
552696&taskId=135&prodTypeId=3709945&prodSeriesId=3552695&lang=en&cc=us
HP BladeSystem ProLiant Firmware Management Best Practices Implementer Guide
http://h18004.www1.hp.com/products/servers/management/literature.html

Requirements for Accelerated iSCSI and iSCSI Boot:
VCM 3.10 (or above)
OneCommand OS tool
be2iSCSI driver [FlexFabric Adapters driver (integrated and mezzanine cards)]
be2iSCSI Driver Update Disk for iSCSI boot installs
iSCSI target
DHCP server (optional)
NOTE:
For the firmware, driver, and software versions supported by HP, see
the HP Virtual Connect FlexFabric Solution Recipe at
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.
jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64180&t
askId=101&prodTypeId=3709945&prodSeriesId=4144084 and see the
FlexFabric Adapter Firmware and FlexFabric Adapter Operating System Drivers
tables.
NOTE:
If the VC firmware is downgraded to a version older than 3.10, the iSCSI boot
parameter configuration is not supported and all iSCSI boot parameters are
cleared.


iSCSI Target support
Any target supporting the iSCSI protocol is supported, for example, the HP P4300 LeftHand SAN Solution.



Unsupported configuration
All other Virtual Connect modules, including the following, do not support Accelerated iSCSI and iSCSI Boot:


1/10Gb Virtual Connect Ethernet module

1/10Gb-F Virtual Connect Ethernet module

The following adapters do not support iSCSI Boot and Accelerated iSCSI with VC Flex-10 or VC FlexFabric:

HP NC532i/m Dual Port Flex-10 10GbE (Broadcom)
HP NC522m Dual Port Flex-10 10GbE (NetXen)
HP NC542m Dual Port Flex-10 10GbE (Mellanox)
iSCSI Boot and Accelerated iSCSI are also not supported on 1Gb NICs.


Networking recommendations
Network considerations
When constructing an iSCSI SAN with Virtual Connect, some network considerations must be taken into
account.
Do not think of an iSCSI network as just another LAN flavor; IP storage needs the same sort of design
thinking that is applied to FC infrastructure, particularly when critical infrastructure servers boot from a
remote iSCSI data source.
Network performance is one of the major factors contributing to the performance of the entire iSCSI
environment. If the network environment is properly configured, the iSCSI components provide adequate
throughput and low enough latency for iSCSI initiators and targets. But if the network is congested, and links,
switches, or routers are saturated, iSCSI performance suffers and might not be adequate for some
environments.
Following are some important tips and tricks to consider:


Keep it simple and short
With iSCSI, you can route packets between different networks and subnetworks, but keep in mind that every
route and hop a packet must traverse adds network latency, which can severely affect the performance
between the iSCSI initiator and the iSCSI target.
A network switch also adds latency to the delivery time of an iSCSI packet, so HP recommends
keeping the distance short and avoiding any routers or network switches in the connection path. Put
simply, every extra hop costs performance, reduces IOPS, and increases the chance of storage traffic
competing with other data traffic on congested inter-switch links.
To avoid bottlenecks, inter-switch links should be sized properly; the use of 10GbE uplinks and link
aggregation is highly recommended.
Networking considerations are as follows:
Minimizing switch hops
Maximizing the bandwidth on the inter-switch links, if present
Using 10-GbE uplinks


Storage vendors' recommendations
Storage vendors usually have iSCSI SAN design recommendations. Following is a list of some of the most
important ones.


Flow Control
Ethernet Flow Control is a mechanism used to manage the traffic flow between two directly connected
devices; it uses pause frames to notify the link partner to stop sending traffic when congestion occurs. It
helps efficiently resolve any imbalance in network traffic between sending and receiving devices.
Enabling Flow Control is highly recommended by iSCSI storage vendors. It must be enabled globally across
the switches, the server adapter ports, and the NIC ports on the storage node.



Enabling Flow Control on iSCSI SAN Systems
Flow Control can usually be enabled on all iSCSI Storage Systems. For more specific information about
enabling Flow Control, see the Storage System's manufacturer documentation.

On an HP P4000, Flow Control can be set from the TCP/IP settings page within the CMC
console.


Enabling Flow Control on the network switches
Flow Control should be enabled on each switch interface connected to the Storage device.
See the switch manufacturer's documentation for more information about Flow Control.
NOTE:
On ProCurve switches, if the port mode is set to auto and flow control is enabled
on the P4000 port, the switch port will auto-negotiate flow control with the Storage
device NIC.
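For illustration, on a ProCurve switch, Flow Control is typically enabled per interface from the
configuration context. The following is a minimal sketch, assuming ports 1 through 4 connect to the storage
device; verify the exact syntax against your switch documentation:

ProCurve(config)# interface 1-4 flow-control
ProCurve(config)# show interfaces brief

The show interfaces brief output includes a Flow Ctrl column that confirms the setting.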

Enabling Flow Control on Virtual Connect
Flow Control is enabled by default on all downlink ports. To enable Flow Control on all VC ports, including
uplink ports, enter the following command:
-> set advanced-networking FlowControl=on
BE CAREFUL:
This command can result in data traffic disruption!
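To verify the resulting configuration, the corresponding show command of the VC CLI can be used
(assuming VC firmware 3.x); it displays the current advanced networking settings, including FlowControl:

-> show advanced-networking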



Enabling Flow Control on the iSCSI hosts
By default, Flow Control is enabled on all network interfaces when Accelerated iSCSI is enabled.
For Software iSCSI, it might be necessary to enable Flow Control on the NIC/iSCSI initiator.
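For example, on a Linux host running the software initiator, pause-frame settings can usually be checked
and enabled with ethtool. A sketch, assuming the iSCSI interface is eth1 (a hypothetical name; substitute
your actual interface):

ethtool -a eth1 (shows the current pause parameters)
ethtool -A eth1 rx on tx on (enables receive and transmit flow control)

Whether the setting persists across reboots depends on the distribution; consult your OS documentation.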



Jumbo Frames (optional recommendation)
Jumbo Frames (MTU>=9000 bytes) are also frequently recommended by iSCSI storage vendors, as they can
significantly increase the iSCSI performance.
Jumbo Frames have many benefits, particularly for iSCSI traffic: they reduce fragmentation overhead,
immediately lowering CPU utilization, and they allow more aggressive TCP dynamics, leading to greater
throughput and better response to certain types of loss.
Jumbo Frames must be correctly configured end-to-end on the network, from the storage to the Ethernet
switches and up to the server ports.
NOTE:
Using Jumbo Frames in some environments can cause more problems than the
performance gain is worth. This is frequently due to misconfigured MTU sizes,
but also because some devices support different maximum MTU sizes. If you are
unsure whether your routers and other devices support larger frame sizes,
keep the frame size at the default setting.

Enabling Jumbo Frames on iSCSI Storage Systems
Jumbo Frames can generally be used with all iSCSI Storage Systems and are usually enabled by setting the
MTU size on an interface. The frame size on the storage system should correspond to the frame size on
iSCSI Hosts (Windows and Linux application servers).
For more specific information about how to enable Jumbo Frames, see the Storage System manufacturer
documentation.


On an HP P4000, Jumbo Frames can be set from the TCP/IP settings page within the CMC
console.


NOTE:
The maximum frame size supported by the HP FlexFabric CNAs (NC551 and NC553) is
8342 bytes.

NOTE:
On the Storage System, set an MTU size of at least 8342 bytes.
Any Storage Systems configured with a frame size below 8342 bytes will result in
an MTU negotiation failure with the CNA, causing the traffic to run at the default
Ethernet standard frame size (that is, 1518 bytes).
If 9000 bytes are needed for any specific reason, then Software iSCSI must be
configured instead of Accelerated iSCSI.

Enabling Jumbo Frames on Network switches
Jumbo Frames must be enabled across all ports of the iSCSI-dedicated VLAN or hardware infrastructure
(always end-to-end). For more specific information, see the switch's manufacturer documentation.
NOTE:
Not all switches support both Jumbo Frames and Flow Control. If you must choose
between the two, choose Flow Control.


Enabling Jumbo Frames on Virtual Connect
Jumbo Frames are enabled by default on Virtual Connect; no configuration is required.



Enabling Jumbo Frames on the iSCSI hosts
There are two procedures for enabling Jumbo Frames on servers: one for Accelerated iSCSI (uses a
dedicated HBA port for iSCSI traffic) and one for Software iSCSI (uses a port of an existing NIC for iSCSI
traffic):

Enabling Jumbo Frames on the iSCSI host with Accelerated iSCSI
Jumbo Frames are enabled by default on the FlexFabric 10Gb NC551 and NC553 CNA with
Accelerated iSCSI; no configuration is required. The MTU size is auto-negotiated during the TCP
connection with the iSCSI target.
The maximum MTU size supported by Accelerated iSCSI mode is 8342 bytes and
cannot be modified.

a. Checking MTU size under Windows:
To see the MTU size that has been auto-negotiated under Windows, go to the Emulex
OneCommand Manager (OCM), select the iSCSI target, and click on Sessions.


The TCPMSS value used for this connection is displayed in the Connection Negotiated Login
Properties section.


When TCPMSS displays 1436, the negotiated MTU size is 1514.
When TCPMSS displays 8260, the negotiated MTU size is 8342.

b. Checking MTU size under VMware ESX:
As under MS Windows, the MTU is automatically configured and the user has no control over
this setting. Under VMware, there is no way for the user to view the configured MTU/MSS.


Enabling Jumbo Frames on the iSCSI host with Software iSCSI
With Software iSCSI, Jumbo frames must be enabled under the Operating System on each
adapter/vSwitch running iSCSI.
a. Under MS Windows Server 2008:
1. Right-click Network in Start Menu and click Properties.
2. Select the network adapter used for iSCSI and click Properties.
3. Click Configure.
4. Click the Advanced tab.

5. Select Packet Size and change the MTU value.

6. Click OK to apply the changes.
To see the MTU value configured under Windows, go to OCM, then select the adapter
used for iSCSI. The MTU is displayed under the Current MTU field on the Port
information tab.

For more information, refer to the Microsoft documentation.
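The effective MTU can also be verified from the command line. On Windows Server 2008, for example:

netsh interface ipv4 show subinterfaces

The output lists the MTU of each interface, which should match the Packet Size configured above.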


b. Under VMware ESX:
Enter the following command to set MTU for the vswitch:
esxcfg-vswitch -m 9000 vSwitch<#>
Enter the following command to check the MTU configuration:
esxcfg-vswitch -l vSwitch<#>

Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch2 128 3 128 9000 vmnic4,vmnic5

For more information, see iSCSI and Jumbo Frames configuration on ESX 3.x and ESX 4.x
(1007654).
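Per VMware KB 1007654, the MTU must also be set on the VMkernel interface used for iSCSI; on ESX 4.x
this requires deleting and re-creating the vmknic with the -m option. A sketch, assuming a port group
named iSCSI-PG and example addresses (hypothetical values; substitute your own):

esxcfg-vmknic -d iSCSI-PG
esxcfg-vmknic -a -i 192.168.5.14 -n 255.255.255.0 -m 9000 iSCSI-PG
esxcfg-vmknic -l

The esxcfg-vmknic -l listing should now show an MTU of 9000 for the iSCSI vmknic.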


Testing Jumbo Frames
The Jumbo Frames configuration can be tested by using the PING command usually available on the iSCSI
Storage System.
1. Test ping from the Storage System to the iSCSI host's interface using a packet size of 8300 bytes.

2. The ping result should appear similar to the following:

PING 192.168.5.14 (192.168.5.14) from 192.168.5.20 : 8300(8328) bytes of data.
8308 bytes from 192.168.5.14: icmp_seq=5 ttl=64 time=47.7 ms


iSCSI multipathing solutions
Using multipathing solutions is highly recommended for load balancing and failover to improve iSCSI
performance and availability.
Multipathing solutions use redundant physical path components (adapters, cables, and switches) to create
logical "paths" between the server and the storage device. If one or more of these components fails,
causing the path to fail, multipathing logic uses an alternate path for I/O so applications can still access
their data.
For the operating system, multipathing means using an intelligent path manager called Multipath I/O
(MPIO) to log in to multiple sessions and to fail over, if needed, among multiple iSCSI Host Bus Adapters
(HBAs).
MPIO is a key component to building a highly available, fault-tolerant iSCSI SAN solution. MPIO
technologies provide for the following:
I/O path redundancy for fault tolerance
I/O path failover for high availability
I/O load balancing for optimal performance
For the Microsoft Windows OS, storage vendors usually provide a vendor-specific Device Specific Module
(DSM) to optimize multipathing using the Microsoft MPIO framework. This vendor-specific module (DSM)
for MPIO must be installed under the operating system. Consult your storage provider's website for more
information.
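On Windows Server 2008 R2, for example, the built-in MPIO feature can also be driven from the command
line with the mpclaim tool. A sketch (the exact device handling depends on your storage vendor and DSM):

Claim all MPIO-capable devices and reboot the server:
mpclaim -r -i -a ""
List the disks currently managed by MPIO:
mpclaim -s -d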


Virtual Connect network scenarios
For security and performance purposes, HP recommends separating the iSCSI network either logically
(using different VLANs) or physically (using different physical switches) from the ordinary data network.
Isolating the iSCSI traffic helps to improve response times and reliability and prevents bottlenecks and
congestion. It also helps to address the TCP/IP overhead and flow control issues inherent in an Ethernet
network.
Another recommendation to maximize availability and performance is to use a redundant iSCSI
network path from Virtual Connect (and therefore from the server) to the storage system. This enables a
failover mechanism in case of path failure among multiple iSCSI HBAs. Multipath I/O software
running under the OS (Windows, Linux, and VMware) is required to provide an automatic means of
persisting I/O without disconnection.
For a step-by-step typical scenario configuration, see the Virtual Connect Ethernet Cookbook
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01990371/c01990371.pdf


Scenario 1: iSCSI network physically separated
The iSCSI network is physically separated from the ordinary data network using a different switch
infrastructure.

Pros
This scenario offers the best performance and latency and is the recommended approach. It maximizes
bandwidth availability: the iSCSI traffic does not have to compete for bandwidth, as there is a dedicated
infrastructure for the storage traffic.

Cons
This scenario uses more switches and more VC uplinks, and therefore more cabling. The solution cost is
increased.


Figure 1 - Physical view

[Diagram: rear view of a c7000 enclosure with VC Flex-10 modules. The modules uplink to the production network through LAN Switch 1 and LAN Switch 2 (802.3ad LAG, 802.1Q trunk) via Prod-vNet-1 and Prod-vNet-2 (both active), and to a physically separate IP storage network through vNet-iSCSI-1 and vNet-iSCSI-2 (both active) connecting to ports 1 and 2 of the iSCSI storage device, which presents a virtual IP.]


Figure 2 - Logical View of an iSCSI VMware host

[Diagram: a BL460c G6 hypervisor host with two FlexFabric LOMs, each carved into FlexNICs (1A-1D, 2A-2D) with 1Gb/1.5Gb/3.5Gb/4Gb allocations. FlexNICs 1B and 2B are FlexiSCSI ports (iSCSI2/iSCSI3) on the untagged iSCSI network, connected through vNet-iSCSI-1 (Enc0:Bay1:X5) and vNet-iSCSI-2 (Enc0:Bay2:X5) to the IP storage network switches. The vSwitch carries the Console, VMotion, and VM Guest VLANs (101-104) over Prod-vNet-1 (Enc0:Bay1:X1,X2) and Prod-vNet-2 (Enc0:Bay2:X1,X2).]

Figure 3 - Logical View of an iSCSI Windows host

[Diagram: a BL460c G6 Windows host with two FlexFabric LOMs carved into FlexNICs (1A-1D, 2A-2D). FlexNICs 1B and 2B are FlexiSCSI ports (iSCSI1/iSCSI2) on the untagged iSCSI network, connected through vNet-iSCSI-1 (Enc0:Bay1:X5) and vNet-iSCSI-2 (Enc0:Bay2:X5) to ports 1 and 2 of the iSCSI device. The Management and application VLANs (102-104) are teamed over Prod-vNet-1 (Enc0:Bay1:X1,X2) and Prod-vNet-2 (Enc0:Bay2:X1,X2).]

Defining two iSCSI vNets
Create a vNet named vNet-iSCSI-1:
1. On the Virtual Connect Manager screen, click Define, Ethernet Network to create a vNet.
2. Enter the Network Name vNet-iSCSI-1.
3. Select Smart Link, but do not select any of the other options (such as Private Networks, and so forth).
4. Select Add Port, then add one port from Bay 1.
5. For Connection Mode, use Failover.
6. Select Apply.


Create a vNet named vNet-iSCSI-2:
1. On the Virtual Connect Manager screen, click Define, Ethernet Network to create a vNet.
2. Enter the Network Name vNet-iSCSI-2.
3. Select Smart Link, but do not select any of the other options (such as Private Networks, and so forth).
4. Select Add Port, then add one port from Bay 2.
5. For Connection Mode, use Failover.
6. Select Apply.
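The same two vNets can also be created from the Virtual Connect CLI. A minimal sketch, assuming uplink port X5 on each module (Enc0:Bay1:X5 and Enc0:Bay2:X5, as shown in Figures 2 and 3); verify the property names against your VC CLI user guide:

-> add network vNet-iSCSI-1
-> set network vNet-iSCSI-1 SmartLink=Enabled ConnectionMode=Failover
-> add uplinkport enc0:1:X5 Network=vNet-iSCSI-1
-> add network vNet-iSCSI-2
-> set network vNet-iSCSI-2 SmartLink=Enabled ConnectionMode=Failover
-> add uplinkport enc0:2:X5 Network=vNet-iSCSI-2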


NOTE:
By creating two vNets, we have provided a redundant path to the network. As
each uplink originates from a different VC module and vNet, both uplinks will be
active. This configuration allows the loss of an uplink cable, a network
switch, or, depending on how the iSCSI ports are configured at the server (iSCSI
Software Initiator supporting failover), even a VC module.
NOTE:
Smart Link: Should be enabled. It is used to turn off downlink ports within Virtual
Connect if all available uplinks to a vNet or SUS are down. In this scenario, if an
upstream switch or all cables to a specific vNet were to fail, VC would
turn off the downlink ports connected to that vNet, forcing the iSCSI Software
Initiator to fail over to the alternate NIC.
Connection Mode Failover: Should be enabled here, as only a single external
uplink port is used for this network. With multiple uplink ports, the connection
mode Auto can be used to enable the uplinks to attempt to form aggregation
groups using the IEEE 802.3ad Link Aggregation Control Protocol. Aggregation groups
require multiple ports from a single VC-Enet module to be connected to a single
external switch supporting automatic formation of LACP aggregation groups, or
to multiple external switches utilizing distributed link aggregation.


Scenario 2: iSCSI network logically separated
In this second scenario, we use the same switch infrastructure, but the iSCSI network is logically separated
from the ordinary data network through the use of 802.1Q VLAN trunking.
Each Virtual Connect module is connected with more than one cable to the LAN switches to increase the
network bandwidth and to provide better redundancy.

Pros
This scenario uses fewer VC uplinks and therefore less cabling. The solution cost is reduced.

Cons
In this scenario, the performance of iSCSI relies on the datacenter network performance.
If the datacenter network is congested and saturated, iSCSI performance suffers and might not be adequate
for some environments.

Figure 4 - Physical view

[Diagram: rear view of a c7000 enclosure with VC Flex-10 modules. Both modules uplink through UplinkSet_1 and UplinkSet_2 (both active, 802.3ad LAG, 802.1Q trunk) to LAN Switch 1 and LAN Switch 2, which carry the combined iSCSI and production networks; the switches connect onward to the production network and to ports 1 and 2 of the iSCSI storage device (virtual IP).]


Figure 5 - Logical view of an iSCSI VMware host

[Diagram: a BL460c G6 hypervisor host with two FlexFabric LOMs carved into FlexNICs (1A-1D, 2A-2D). FlexNICs 1B and 2B are FlexiSCSI ports (iSCSI2/iSCSI3) mapped to the untagged iSCSI networks iSCSI_1 and iSCSI_2 (VLAN 105). The Console, VMotion, and VM Guest VLANs (101-104) and the iSCSI VLAN are carried on UplinkSet_1 (Enc0:Bay1:X1,X2) and UplinkSet_2 (Enc0:Bay2:X1,X2) as Prod 802.1Q trunks (VLANs 101 through 105); the two VC FlexFabric modules are joined by an internal stacking link.]

Defining a first Shared Uplink Set (VLAN-trunk-1)
Create an SUS named UplinkSet_1:
1. On the Virtual Connect Home page, select Define, Shared Uplink Set.
2. Insert Uplink Set Name as UplinkSet_1.
3. Select Add Port, then add two ports from Bay 1.
4. Add Networks as follows: (to add a network, right-click on the grey bar under the "Associate
Networks (VLAN)" header, then select ADD.)
VLAN_101-1 = VLAN ID = 101 = CONSOLE
VLAN_102-1 = VLAN ID = 102 = VMOTION
VLAN_103-1 = VLAN ID = 103 = First VM Guest VLAN
VLAN_104-1 = VLAN ID = 104 = Second VM Guest VLAN
(More VM Guest VLANs can be defined here)
iSCSI_1 = VLAN ID = 105
5. Enable SmartLink on all networks.
6. Leave Connection Mode as Auto (this will create an LACP port channel if the upstream switch is
properly configured).
7. Optionally, if one of the VLANs is configured as Default/untagged, on that VLAN only, set Native to
Enabled.
8. Click Apply.
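Equivalently, the shared uplink set can be built from the VC CLI. A sketch for UplinkSet_1 (UplinkSet_2 is analogous, with Bay 2 ports and the -2 network names); verify the property names against your VC CLI user guide:

-> add uplinkset UplinkSet_1
-> add uplinkport enc0:1:X1 UplinkSet=UplinkSet_1
-> add uplinkport enc0:1:X2 UplinkSet=UplinkSet_1
-> add network VLAN_101-1 UplinkSet=UplinkSet_1 VLanID=101
-> add network iSCSI_1 UplinkSet=UplinkSet_1 VLanID=105
-> set network VLAN_101-1 SmartLink=Enabled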



Defining a second Shared Uplink Set (VLAN-trunk-2)
Create an SUS named UplinkSet_2:
1. On the Virtual Connect home page, select Define, Shared Uplink Set.
2. Insert Uplink Set Name as UplinkSet_2.
3. Select Add Port, then add two ports from Bay 2.
4. Add Networks as follows: (to add a network, right-click on the grey bar under the "Associate
Networks (VLAN)" header, then select ADD.)
VLAN_101-2 = VLAN ID = 101 = CONSOLE
VLAN_102-2 = VLAN ID = 102 = VMOTION
VLAN_103-2 = VLAN ID = 103 = First VM Guest VLAN
VLAN_104-2 = VLAN ID = 104 = Second VM Guest VLAN
(More VM Guest VLANs can be defined here)
iSCSI_2 = VLAN ID = 105
5. Enable SmartLink on all networks.
6. Leave Connection Mode as Auto (this will create an LACP port channel if the upstream switch is
properly configured).
7. Optionally, if one of the VLANs is configured as Default/untagged, on that VLAN only, set Native to
Enabled.
8. Click Apply.

Scenario 3: Direct-attached iSCSI Storage System
In this third scenario, an iSCSI device is directly connected to the Virtual Connect Domain without any switch
infrastructure. This scenario uses more VC uplinks than the second scenario, but unlike scenario 1, no
additional or dedicated switches are required just to connect an iSCSI disk storage enclosure. This reduces
the entire solution cost and complexity.

Pros
Cost is greatly reduced, as no additional switch is required.

Cons
There are several limitations.


Direct-attached limitations
The following are important limitations that the administrator must be aware of:
When an iSCSI storage device is directly connected to a VC Domain, this iSCSI device is
only accessible to the servers belonging to this Virtual Connect Domain.
iSCSI Storage Systems sharing the same ports for both iSCSI host connectivity traffic and
LAN management (also known as in-band management) can only be managed from the
Virtual Connect Domain.
The only network interface bond supported on the iSCSI Storage system is Active-Passive.
VC Active/Standby iSCSI vNet configuration is not supported.
iSCSI Storage systems can be divided into two categories: out-of-band management (using separate ports
for management and host traffic) and in-band management (using the same ports for management and host
traffic). The direct-attached scenario is divided into two sub-scenarios, according to the type of
management in use:
Scenario 3-A: Direct-attached iSCSI device with out-of-band management
Scenario 3-B: Direct-attached iSCSI device with in-band management

Scenario 3-A: Direct-attached iSCSI device with out-of-band management (using separate ports for
management and host traffic)

Figure 6 - Physical view

[Diagram: iSCSI host blades (BL460c G7) in a c7000 enclosure with VC Flex-10 modules. The modules uplink to the production network through LAN Switch 1 and LAN Switch 2 (802.3ad LAG, 802.1Q trunk). The iSCSI storage device is directly attached to the VC modules through vNet-iSCSI-1 and vNet-iSCSI-2 (both active): port 1 is active and port 2 passive behind a virtual IP. A separate management port (Mgt Port) on the storage device connects to the management network, where the Storage Management Console resides.]



In Scenario A, the iSCSI target is directly connected to the VC Domain using two Active/Active vNets (blue
and red in the previous diagram) to provide iSCSI LAN access to the servers located inside the enclosure.
The iSCSI storage device is using an out-of-band management network, which means dedicated ports
separated from the iSCSI host traffic are available for the management/configuration. Therefore, the iSCSI
device can be managed from anywhere on the network.
Both software and hardware iSCSI can be implemented on the iSCSI hosts without limitation.











NOTE:
The direct-attached scenario supports only one iSCSI device per VC Active/Active
network. To support more direct-attached iSCSI devices, you must create an
Active/Active VC network for each iSCSI device.

[Diagram: two iSCSI targets directly attached to the same c7000 VC domain, each through its own Active/Active vNet pair (vNet1-iSCSI-1/vNet1-iSCSI-2 for Target 1, vNet2-iSCSI-1/vNet2-iSCSI-2 for Target 2).]
The limitation of one device per VC Active/Active network implies that you cannot
directly attach multiple LHN P4000 series nodes to the same VC domain, because
in a multinode environment, LeftHand nodes need to talk to each other. This
requirement cannot be met, as VC does not switch traffic between different VC
networks.

[Diagram: two HP StorageWorks P4300 G2 nodes (HP LeftHand P4000 series, Node 1 and Node 2) directly attached to a c7000 VC domain; this configuration is not possible because the nodes cannot communicate with each other through VC.]


Scenario 3-B: Direct-attached iSCSI device with in-band management (using the same ports for
management and host traffic)


Figure 7 - Physical view

[Diagram: iSCSI host blades (BL460c G7) in a c7000 enclosure with VC Flex-10 modules uplinked to the production network through LAN Switch 1 and LAN Switch 2 (802.3ad LAG, 802.1Q trunk). The iSCSI storage device is directly attached through vNet-iSCSI-1 and vNet-iSCSI-2 (both active): port 1 active, port 2 passive behind a virtual IP. The same ports carry both iSCSI host traffic and management traffic, so the Storage Management Console resides inside the VC domain.]

In Scenario B, the iSCSI device is again connected directly to the VC Domain using two Active/Active vNets
(blue and red in the previous diagram). But here, the iSCSI device does not use a dedicated interface for
management, which means the same ports are used for both the management and the iSCSI host traffic
(known as an in-band management device).
This implies that, due to the Virtual Connect technology, you can only manage and configure the iSCSI
storage device from the Virtual Connect Domain (that is, only from a server located inside the enclosure and
connected to the iSCSI vNet).
This means you need a dedicated server for the storage system management, and all SNMP trap
notifications for hardware diagnostics, events, and alarms will only be sent to this management console.
Both software and hardware iSCSI can be implemented here without limitation on the iSCSI hosts.
For more information about Accelerated iSCSI, see "Accelerated iSCSI".



Virtual Connect network configuration
The Virtual Connect network configuration is the same for Scenario 3-A and Scenario 3-B.
You must define two Active/Active iSCSI Virtual Connect networks (vNets), vNet-iSCSI-1 and vNet-iSCSI-2,
as in Scenario 1. These two vNets will be used exclusively for the direct attachment of the iSCSI storage
device.


NOTE:
The direct-attached scenario is not supported with VC Active/Standby iSCSI vNet
configuration. A standby VC uplink is always seen as active by an upstream
device, so having no way to detect the standby state of the VC uplink, a storage
system would incorrectly send traffic to the standby port, causing communication
issues.



Connecting the direct-attached iSCSI SAN to the VC Domain
Connect the VC uplink ports of the two vNets to the network interfaces of the iSCSI device.

Figure 8: Out-of-band management configuration with an HP P4000 (Scenario 3-A)

[Diagram: four BL460c iSCSI host blades connected through two HP VC FlexFabric modules. vNet-iSCSI-1 and vNet-iSCSI-2 (both active) attach directly to ports 1 and 2 of an HP LeftHand P4300 G2 (via an NC550sp 10GbE adapter); the P4300's dedicated Mgmt port connects to the management network. Production traffic uses Prod-vNet-1 and Prod-vNet-2.]



Figure 9: In-band management configuration with an HP P4000 (Scenario 3-B)

[Diagram: the same four BL460c iSCSI host blades and two HP VC FlexFabric modules, with vNet-iSCSI-1 and vNet-iSCSI-2 (both active) attached directly to the HP LeftHand P4300 G2; no separate management connection exists, so management traffic shares the iSCSI ports.]

Preparing the network settings of the storage system
The following describes a basic network configuration of an HP P4300:
1. Open a remote console on the P4300 using the P4300 iLO, or connect a keyboard and monitor to the
P4300.
2. Type start, and press Enter at the login prompt.
3. Press Enter to log in, or type the username and password if already configured.
4. When the session is connected to the storage system, the Configuration Interface window displays.
5. On the Configuration Interface main menu, tab to Network TCP/IP Settings, and press Enter.
6. Tab to select the first network interface and press Enter.
7. Enter the host name, and tab to the next section to configure the network settings.
8. Enter a Private IP address like 192.168.5.20 / 255.255.255.0 with 0.0.0.0 for the Gateway address.


9. Tab to OK, and press Enter to complete the network configuration.
10. Press Enter on the confirmation window.
A window opens listing the assigned IP address that will be used later on to configure and manage
the P4300.

Configuring the Storage Management Server
Scenario 3-A
An iSCSI device with out-of-band management does not require a specific configuration, as the
management console can reside anywhere on the network.

Scenario 3-B
An iSCSI device with in-band management requires a little more attention, particularly when network
interface bonding is enabled on the iSCSI Storage System.
The following configuration can be used for an in-band management iSCSI Storage System:
1. Create a Virtual Connect server profile for the Management Storage server.
2. Assign NIC1 and NIC2 to the management network.
3. Assign NIC3 and NIC4 to the iSCSI direct-attached VC networks vNet-iSCSI-1 and vNet-iSCSI-2.


Figure 10: Example of a VC profile used by the Storage Management Server


4. Start the server and install Windows Server 2003 or 2008.
5. At the end of the installation, assign a private IP address to NIC3 (such as 192.168.5.150) and to
NIC4 (such as 192.168.5.151); a command-line sketch follows.
Ensure you use the same IP subnet as the one set on the Storage System.
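On Windows Server 2008, for instance, the two addresses can be assigned from the command line. A
sketch, assuming the network connections are named NIC3 and NIC4 (hypothetical names; match your
actual connection names):

netsh interface ipv4 set address name="NIC3" static 192.168.5.150 255.255.255.0
netsh interface ipv4 set address name="NIC4" static 192.168.5.151 255.255.255.0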

NOTE:
Two IP addresses are used in this configuration to ensure the management server
stays connected to the iSCSI device regardless of the iSCSI device bonding
configuration (no bonding or Active-Passive bond enabled).
Despite the Active/Active iSCSI vNet configuration, NIC teaming cannot be
used on the management server; otherwise, the network connection to the iSCSI
device will fail. This is due to the direct-attached design and the use of NIC bonding
on the iSCSI device. There is no such limitation with Scenario 3-A.

6. Install the Storage Management software (for example, HP P4000 Centralized Management Console;
this is the utility to configure and manage the HP P4000 SAN Solution).
7. Open the Centralized Management Console (CMC) and locate the storage system using the Find
function. Enter the address assigned previously on the first interface of the P4300 (which is
192.168.5.20).


8. Select the Auto Discover by Broadcast check box.
9. When you have found the storage system, it appears in the Available systems pool in the navigation
window.

Configuring the network interface bonds on the Storage System
When configuring an iSCSI SAN directly attached to VC, one point is worth considering: the network
interface bonding.
Network interface bonding is generally used on iSCSI SAN devices to provide high availability, fault
tolerance, load balancing, and/or bandwidth aggregation. Depending on your storage system hardware,
you can generally bond NICs using one of the following methods:
Active - Passive
Link Aggregation (802.3ad)
Adaptive Load Balancing
In the direct-attached scenario, when a bond is configured, multiple interfaces are used. But due to
the lack of Ethernet switches between the VC domain and the iSCSI SAN device, the support for NIC
bonding is very limited; only the Active/Passive bonding method is supported.
NOTE:
Link Aggregation (802.3ad) is not supported because you cannot create an
aggregation group across different VC modules. Adaptive Load Balancing is not
supported either, as it requires the two interfaces to be connected to the same
network, which is not the case here, as we have two vNets for iSCSI (Active/Active
scenario).


To configure Active/Passive bonding on an HP P4300:
1. From the CMC, select the storage system and open the tree below it.
2. Select TCP/IP Network category, and click the TCP/IP tab.

3. Select the two interfaces from the list for which you want to configure the bonding. Right-click and
select New Bond.
4. Select the Active Passive bond type.

5. Click OK.
6. After a warning message, click OK to rediscover the Storage system; the IP address is normally
preserved.
7. After a few seconds, the new bond setting shows the Active-Passive configuration:

8. The Storage System is now ready to be used with fault tolerance enabled.



Configuring the iSCSI Host
For the iSCSI Hosts configuration, the Hardware and/or Software iSCSI connection can be used.

Multipathing and path checking
With the direct-attached scenario and its Active/Passive iSCSI SAN NIC bonding, HP recommends
checking the second path to validate the entire configuration. After the iSCSI volume has been discovered
from the first path, log in to the iSCSI Storage System and trigger a NIC bonding failover; this will activate
the second interface. You can then validate the second iSCSI path.



Accelerated iSCSI
Traditional software-based iSCSI initiators place significant processing overhead on the server CPU. An
accelerated iSCSI-capable card, also known as Hardware iSCSI, offloads the TCP/IP operations from the
server processor, freeing up CPU cycles for the main applications.
The main benefits are as follows:
Processing work offloaded to the NIC to free CPU cores for data-intensive workloads.
Increased server and IP storage application performance.
Increased iSCSI performance.
Software-based iSCSI initiators are supported by all Virtual Connect models (1/10Gb, 1/10Gb-F, Flex-10,
and FlexFabric), but 10Gb accelerated iSCSI (Hardware-based iSCSI initiator) is only provided by
Converged Network Adapters (that is, NC551i/m or NC553i/m) with only Virtual Connect Flex-10 and
Virtual Connect FlexFabric.
The QLogic QMH4062 1GbE iSCSI Adapter is another adapter supporting accelerated iSCSI, but it is not
supported by VC Flex-10 and VC FlexFabric.
NOTE:
Under the OS, Accelerated iSCSI differs from Software iSCSI in that it provides an
HBA type of interface and not a network interface card (NIC). Consequently,
additional drivers, software, and settings are sometimes required.

Accelerated iSCSI 39

NOTE:
The selection between Software iSCSI and Accelerated iSCSI is done under the
Virtual Connect profile.
Figure 11 Example of a server profile with Software iSCSI enabled

Figure 12 Example of a server profile with Accelerated iSCSI enabled

Accelerated iSCSI 40

The following steps are required to connect a server to an iSCSI target using accelerated iSCSI:
1. Enable Accelerated iSCSI on the server using Virtual Connect.
2. Install and configure a hardware iSCSI initiator under the OS.
3. Use MPIO software to manage the redundant iSCSI connection.
This document provides the steps to enable Accelerated iSCSI on a BladeSystem server under Microsoft
Windows Server 2003/2008 and VMware vSphere Server.



Enabling Accelerated iSCSI on the server using Virtual Connect
(also known as iSCSI Offload)
The first step is the same for both MS Windows and VMware vSphere servers: create a Virtual Connect
server profile enabling Accelerated iSCSI.
1. To create a new VC profile, open Virtual Connect manager.
2. From the Define menu, select Server Profile.

VCM assigns FCoE connections by default.

NOTE:
A server with one FlexFabric adapter can be configured with a unique personality,
either all Ethernet, or Ethernet/iSCSI, or Ethernet/FCoE. Therefore, 'it is not
possible to enable at the same time both FCoE and iSCSI connections.
A server with multiple FlexFabric Adapters can be configured with both iSCSI and
FCoE connections.
Accelerated iSCSI 41

3. If you have a unique FlexFabric Adapter, delete the two FCOE connections; otherwise, go to the next
step.

4. In the iSCSI HBA Connections section, click Add.

5. In the Network Name column, click Select a network.

6. Select your iSCSI-dedicated VC network, thenclick OK.

7. In the Port Speed column, you can adjust the speed setting to Auto, Preferred, or Custom.
8. In the Boot Setting column, leave as DISABLED.

Accelerated iSCSI 42

NOTE:
Boot setting disabled means Accelerated iSCSI is enabled, but iSCSI Boot is
unavailable.
The disable mode offloads the iSCSI protocol processing from the OS to the NIC.
In addition to offloading TCP/IP protocol processing, it also offloads iSCSI
protocol processing.
The multiple network feature (that is, when using 802.1Q VLAN tagging) is not
supported for iSCSI connections.
9. Optionally, create a second iSCSI Connection for multipathing configuration.

NOTE:
Allowing more than one iSCSI application server to connect to a volume
concurrently without cluster-aware applications or without an iSCSI initiator with
MPIO software could result in data corruption.
10. Configure the additional VC Ethernet Network connections that may be needed on the other FlexNIC.

Accelerated iSCSI 43

11. When done, you can assign the profile to a server with an Ethernet adapter supporting Accelerated
iSCSI.

12. Click Apply to save the profile.
13. The server can now be powered on (using either the OA, the iLO, or the Power button).




Accelerated iSCSI 44

Accelerated iSCSI with Microsoft Windows Server
After creating a VC profile with Accelerated iSCSI enabled, proceed with the following steps under
Microsoft Windows Server:
1. Installing the Emulex hardware iSCSI initiator (that is, OneCommand Manager).
2. Configuring the IP addresses of the iSCSI ports.
3. Installing the Microsoft iSCSI Initiator.
4. Installing the Microsoft MPIO.
5. Installing the Device Specific Module (DSM):
a. Using the Microsoft DSM that comes with Microsoft MPIO.
b. Using the Storage vendor DSM for MPIO provided by the Storage System vendor for better
performance and latency.
6. Connecting to the iSCSI target using the iSCSI initiator.


Installing Emulex OneCommand Manager
OneCommand Manager is the Emulex utility managing the NC551and NC553 10Gb FlexFabric Converged
Network Adapters. Among other things, it provides comprehensive control of the iSCSI network including
discovery, reporting, and settings.
For accelerated iSCSI under MS Windows server, the Emulex OneCommand Manager is mandatory, as it
provides the only way to configure the IP settings of the iSCSI HBA ports required to connect the iSCSI
volume.
For more information about OneCommand Manager, see the User Manual of the Emulex OneCommand
Manager http://bizsupport1.austin.hp.com/bc/docs/support/supportmanual/c02018556/c02018556.pdf

NOTE:
Do not confuse hardware and software iSCSI initiators. Accelerated iSCSI always
uses a specific port (such as host bus adapters) and requires a utility from the HBA
vendor (such as Emulex OneConnect Manager or Emulex Drivers be2iscsi).
Storage iSCSI
Adapter Ports
Network Interface Ports

Just the opposite, software initiators use the standard server NIC port and are
usually included in the operating system (such as Microsoft iSCSI initiator).
Accelerated iSCSI 45

OneCommand Manager can be installed by HP System Update Manager (HP SUM) but it can as well be
downloaded from the web:
1. From the Support & Drivers hp.com webpage, enter a Blade model, for example BL460 G7.

2. Select the operating system.

Accelerated iSCSI 46

3. Click Utility FC HBA.


4. Click Download to download the OneCommand Manager Application Kit

NOTE:
The HP OneCommand Manager Application Kit contains a GUI and a CLI.
HbaCmd.exe (the CLI) is located by default in C:\Program
Files\Emulex\Util\OCManager.
5. Install the application.

Accelerated iSCSI 47

6. Launch the OneCommand utility from Windows Start menu All Programs Emulex
OCManager.

7. OneCommand Manager shows the iSCSI ports detected on your server.




Accelerated iSCSI 48

Configuring the IP addresses of the iSCSI ports
To log into the iSCSI target, an IP address must be assigned to each iSCSI HBA ports, as follows:
1. From the OneCommand Manager interface, select the first iSCSI port, and click Modify.

2. Enter a static IP address or check DHCP Enabled.

This IP address must be in the same subnet as the one configured in the iSCSI Storage System.

3. Click OK.
4. Select the second iSCSI port, and click Modify.
Accelerated iSCSI 49

5. Enter a second static IP address or check DHCP Enabled.

6. Click OK.


Installing the Microsoft iSCSI Initiator
The Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage
array using Ethernet NICs.
For more information about the Microsoft initiator, see http://technet.microsoft.com/en-
us/library/ee338476%28WS.10%29.aspx.
NOTE:
The Microsoft iSCSI Initiator utility is a software initiator (using Ethernet NICs) but it can
also be used to manage the iSCSI accelerated connectivity (using the iSCSI HBAs).
For Windows 2003:
Download and install the latest version of the Microsoft iSCSI Initiator software.
You must select the Microsoft MPIO Multipathing Support for iSCSI option when you install the Microsoft
iSCSI Initiator.

For Windows 2008:
The Microsoft iSCSI Initiator comes installed with both the Windows Server 2008 and the Server Core
installation.


Accelerated iSCSI 50

Installing Microsoft MPIO
MPIO solutions are needed to logically manage the iSCSI redundant connections and ensure the iSCSI
connection is available at all times. MPIO provides fault tolerance against single point of failure in hardware
components, but can also provide load balancing of I/O traffic, thereby improving system and application
performance.

For Windows 2003:
MPIO is installed during the Microsoft iSCSI initiator installation. You must select the Microsoft MPIO
Multipathing Support for iSCSI option when installing the Microsoft iSCSI Initiator.

For Windows 2008:
MPIO is an optional component in all versions of Windows 2008 Server so it must be installed.
Go to Server Manager, then select Features Add Features Multipath I/O to install it.


Installing the Device Specific Module (DSM)
MPIO requires the installation of device-specific modules (DSM) to provide support for using multiple data
paths to a storage device. These modules are a server-side plug-in to the Microsoft MPIO framework.
A native Microsoft DSM provided by Microsoft, comes by default with Microsoft MPIO software, but storage
providers can develop their own DSM containing the hardware-specific information needed to optimize the
connectivity with their storage arrays.
Vendor-specific DSM usually provides better write performance, lower latency, and so forth, while the
Microsoft DSM offers more basic functions.

Installing the Microsoft DSM
The Microsoft DSM comes with the Microsoft iSCSI Initiator for Windows 2003 Server and is an optional
component for Windows 2008 Server that is not installed by default.

For Windows 2003:
The Microsoft DSM is installed along with the iSCSI Initiator.

For Windows 2008:
To use the Microsoft DSM with all versions of Windows 2008 Server, you must install MPIO and enable it.
Additional details for some steps are in the white paper available from http://www.microsoft.com/mpio
1. Connect to a volume using the iSCSI initiator.
2. In the MPIO Control Panel applet, click the Discover Multi-Paths tab, select the Add support for iSCSI
devices check box, then click Add.
Accelerated iSCSI 51


The check box does not become active unless you are logged in to an iSCSI session.

3. The system asks to reboot for the policy to take effect.

Accelerated iSCSI 52

4. After the reboot, the MPIO Devices tab shows the addition of MSFT2005iSCSIBusType_0x9.

5. Connect to a volume using the iSCI Initiator and select the Enable multi-path check box.

Accelerated iSCSI 53

6. When you connect to the volume, set your MPIO load balancing.


Installing the Storage vendor DSM
1. Download the specific DSM for your iSCSI storage array from your storage 'vendor's website.
NOTE:
The HP P4000 DSM for MPIO software is not supported with Accelerated iSCSI
HBA under Windows Server 2003 and Windows Server 2008.
2. Install the DSM for MPIO. A reboot may be required.
3. Once the DSM for MPIO is installed, there are no additional settings. All iSCSI volume connections
made to an iSCSI Storage System will attempt to connect with the storage 'vendor's DSM for MPIO.
Accelerated iSCSI 54




Connecting volumes with MPIO
1. Open the Microsoft iSCSI Initiator from the Control Panel.
For Windows Server 2008, enter iSCSI in the Search tab.

Accelerated iSCSI 55

2. On the Discovery tab, select Discover Portal.

3. Enter the IP address of the iSCSI Storage System, then click OK.
NOTE:
iSCSI SAN vendors usually recommend using a Virtual IP address. See your iSCSI
SAN documentation for additional information.
4. On the Targets tab, you should discover a new iSCSI target if a LUN has been correctly presented to
the server.


Accelerated iSCSI 56

5. Select the volume to log on to, then click Connect.

6. Select the Enable multi-path check box.

NOTE:
Do not select the Enable Multi-path checkbox if your iSCSI Storage System is not
supporting load-balanced iSCSI access.
7. Click OK to finish.










Accelerated iSCSI 57

Multipath Verification

1. You can verify the DSM for MPIO operations. From the Targets tab, select the volume (now
connected!), then click Properties.


2. On the Sessions tab, you should see multiple sessions of this target, as the DSM for MPIO
automatically builds a data path to each storage node in the storage cluster.


If after a few minutes only one session is available (you can click on refresh several times), it may be
necessary to configure the multipathing manually.




Accelerated iSCSI 58

Manual Multipath configuration

1. Click Add session.

2. Select the Enable multi-path check box.

3. Click Advanced to open the Advanced Settings window.

4. Configure the Advanced Settings as follows:
For Local adapter, select the first Emulex OneConnect OCe11100, iSCSI Storport.

For Initiator IP, select the IP address of the iSCSI HBA to connect to the volume.
Accelerated iSCSI 59

For Target portal IP, select the IP of the iSCSI Storage System containing the volume.

5. Click OK to close the Advanced Settings dialog.

6. Click OK to finish logging on.

7. Repeat step 1 to step 6, using this time the second Emulex adapter.

For Source IP, select the IP address of the iSCSI HBA to connect to the volume.

For Target portal, select the IP of the iSCSI Storage System containing the volume.

8. Click OK to close the Advanced Settings dialog.


Accelerated iSCSI 60

9. Click OK again to finish logging on. You will see the second session using the second path.



NOTE:
HP recommends testing the failover to validate the multipath configuration.





Accelerated iSCSI 61

Using the Microsoft iSCSI Software Initiator in conjunction with Accelerated iSCSI support

Besides the Accelerated iSCSI attached disk, it is possible to use the iSCSI Software to connect additional
volumes using one of the NIC adapters. This section describes the different configuration steps required.

Virtual Connect Profile Configuration
From the Windows 'server's VC Profile, ensure at least one server NIC is connected to an iSCSI
network or to a network where an iSCSI device relies.


Enabling jumbo frames on the NIC adapters
1. From the HP Network configuration Utility, select the port connected to the iSCSI-1 VC Network.
2. Click Properties.
3. Select the Advanced Settings tab, then click Jumbo Packet.

Software iSCSI
Accelerated iSCSI
Accelerated iSCSI 62

4. Select the MTU size supported by the network infrastructure and the iSCSI Storage System.

5. Click OK to close.
6. Repeat step 1 to step 5 on the second adapter connected to the iSCSI-2 VC Network.

Installing the iSCSI Storage DSM for MPIO
A vendor-specific DSM for the iSCSI storage array can be installed for better performance. If not available,
the Microsoft DSM can be used.

Microsoft iSCSI Software configuration with multiple NICs
1. Open the Microsoft iSCSI initiator.
2. Select the Discovery tab.
3. Click Discover Portal.


Accelerated iSCSI 63

4. Enter the IP Address of the iSCSI target system connected to one of the server NIC networks (for
example, iSCSI-1 and iSCSI-2).


5. Click OK.
6. Select the Target tab.
7. A new discovered iSCSI target should appear in the list if a LUN has been correctly presented to the
server.

8. Select the new target (the Status will be Inactive), then click Properties.
9. Click Add session.

Accelerated iSCSI 64

10. Select Multipath, then click Advanced.

11. Configure the Advanced Settings as follows:
For Local adapter, select Microsoft iSCSI Initiator.
For Initiator IP, select the IP address of the first NIC connected to iSCSI-1 VC Network.

For Target portal IP, select the IP of the iSCSI Storage System containing the volume.

Accelerated iSCSI 65

12. Click OK to close the Advanced Settings dialog.
13. Click OK to finish logging on.
14. Repeat step 11 to step 15, this time using the second NIC adapter connected to iSCSI-2 VC Network
and using the corresponding IP address.

15. On the Sessions tab, you should see several sessions using the first and second paths.

16. Click OK to close the Properties window.

Two iSCSI targets are now available through the Microsoft iSCSI initiator; one is using the iSCSI
Accelerated ports (iSCSI HBAs) and the other is using the NIC adapters with software iSCSI.




Accelerated iSCSI 66

Accelerated iSCSI with VMware ESX 4.1
After creating a VC profile with Accelerated iSCSI enabled, proceed with the following steps under
VMware ESX 4.1:
1- Installing the NC551 / NC553 iSCSI drivers.
2- Installing the Emulex OneCommand Manager. (This step is optional.)
3- Configuring the iSCSI HBA ports.
4- Connecting to the iSCSI volumes.

Installing the NC551 / NC553 iSCSI drivers
Ensure the latest Emulex iSCSI drivers for VMware ESX have been installed. Visit the following hp.com links
to download to latest be2iscsi drivers:

HP NC551m Dual Port FlexFabric
10Gb Converged Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/Driv
erDownload.jsp?lang=en&cc=us&prodNameId=4145269&ta
skId=135&prodTypeId=3709945&prodSeriesId=4145106&l
ang=en&cc=us
HP NC551i Dual Port FlexFabric
10Gb Converged Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/Driv
erDownload.jsp?prodNameId=4132827&lang=en&cc=us&pr
odTypeId=3709945&prodSeriesId=4132949&taskId=135
HP NC553i 10Gb 2-port FlexFabric
Server Adapter

http://h20000.www2.hp.com/bizsupport/TechSupport/Driv
erDownload.jsp?prodNameId=4194638&lang=en&cc=us&pr
odTypeId=3709945&prodSeriesId=4194735&taskId=135

NOTE:
Hardware and software iSCSI can be managed by vSphere without any additional
utility. Accelerated iSCSI uses a specific port (that is, vmhba) and requires the
installation of HBA drivers included in the be2iscsi package.
If the CNA is properly configured, you can view the vmhba on the list of initiators available for
configuration:
1. Log in to the vSphere Client, and select your host from the inventory panel.
2. Click the Configuration tab, then click Storage Adapters in the Hardware panel. The Emulex
OneConnect vmhba appear in the list of storage adapters.

Accelerated iSCSI 67

Installing Emulex OneCommand Manager
OneCommand Manager (OCM) is the Emulex utility to manage the NC551and NC553 10Gb FlexFabric
Converged Network Adapters.

OCM is not required to connect a VMware server to an iSCSI volume but it delivers lots of advanced
management features, configuration, status monitoring and online maintenance.
OCM for VMware can be divided into the following installations:
OneCommand Manager for VMware vCenter
OCM application agent in VMware ESX Server

OneCommand Manager for VMware vCenter
For more information, see the OneCommand Manager for VMware vCenter User Manual
http://www-dl.emulex.com/support/vmware/vcenter/100/vcenter_user_manual.pdf.
See the Installing and Uninstalling the OCM for VMware vCenter Components page 6.
The OneCommand Manager for VMware vCenter can be downloaded from the following:
For vSphere 4.1: http://www.emulex.com/downloads/emulex/vmware/vsphere-
41/management.html
For vSphere 5.0: http://www.emulex.com/downloads/emulex/vmware/vsphere-
50/management.html



Accelerated iSCSI 68

OCM application agent in VMware ESX Server
For more information, see the OneCommand Manager User Manual
http://bizsupport1.austin.hp.com/bc/docs/support/supportmanual/c02018556/c02018556.pdf
To install the OneCommand Manager Application agent in VMware ESX Server, log into the ESX Server
COS.
1. Download the Application Kit (OCM core application) from
http://www.emulex.com/downloads/emulex/vmware/vsphere-41/management.html or
http://www.emulex.com/downloads/emulex/vmware/vsphere-50/management.html.
2. Copy the elxocmcore-esx<NN>-<version>.<arch>.rpm file to a directory on the install machine.
3. CD to the directory to which you copied the rpm file.
4. Install the rpm. Type:
rpm -Uvh elxocmcore-esx<NN>-<version>.<arch>.rpm

The rpm contents are installed in /usr/sbin/ocmanager. The OneCommand Manager application
Command Line Interface is also located in this directory.

OneCommand Manager uses the standard CIM interfaces to manage the adapters. But it is necessary
with the NC551/NC553 adapters to import manually the CIM provider provided by Emulex as described
below:
1. Download the CIM provider from http://www.emulex.com/downloads/emulex/vmware/vsphere-
41/management.html or http://www.emulex.com/downloads/emulex/vmware/vsphere-50/management.html.
2. Copy the vmw-esx-x.x.x-emulex-x.x.x.x-x-offline_bundle-x.zip file in /var/log/vmware on the ESX5i
server.
3. Install the CIM Provider, type
Under ESX5i:
esxcli software vib install -d vmw-esx-x.x.x-emulex-x.x.x.x-x-offline_bundle-x.zip
Under ESX41:
esxupdate --nosig --maintenance update --bundle elx-esx-4.1.0-emulex-cim-provider-x.x.xx.x-
offline_bundle-x.zip


At the end of the installation, you will get the following information from the Emulex OneCommand tab.
Accelerated iSCSI 69



NOTE:
Only OneCommand Manager for VMware vCenter 1.1.0 or later can discover
and view iSCSI port information.
Configuring the IP addresses of the iSCSI ports
To log into the iSCSI target, an IP address must be assigned to each vmhba iSCSI ports:
1. From the vSphere client, select the host and click the Configuration tab, then Storage Adapters.
2. Select the first vmhba, then click Properties.

3. Click Configure.
4. Enter a static IP address.
Accelerated iSCSI 70



NOTE:
This IP address must be in the same subnet as the one configured in the iSCSI
Storage System.
5. Click OK to save the changes.
6. Repeat the same configuration for vmhba2

7. Click OK.


Accelerated iSCSI 71

Configuring the iSCSI Volumes

Configuring iSCSI Software target discovery address
1. Select vmhba1, then click Properties.
2. Select the Dynamic Discovery tab.

3. Click Add
4. Enter the IP address of the iSCSI Storage System, then click OK.


NOTE:
iSCSI SAN vendors usually recommend using a Virtual IP address. See your iSCSI
Storage System documentation for additional information.
Accelerated iSCSI 72

5. Click Close, then accept the rescan message.

6. Newly discovered iSCSI targets should appear in the list if a LUN has been correctly presented to the
server.
7. Repeat step 1 to step 6 for vmhba2.

8. At the end, the iSCSI target should be discovered on both ports.



Accelerated iSCSI 73

Adding iSCSI Software Datastore
1. From the Hardware panel, click Storage.
2. Click Add Storage.
3. Select the Disk/LUN storage type, then click Next
4. Select the iSCSI device freshly discovered to use for your datastore, then click Next.
5. Click Next.
6. Enter a name for the Accelerated iSCSI datastore.
7. Adjust the file system values, if needed, then click Next.
8. Click Finish.

Multipath configuration checking
1. To check the multipathing configuration, select Storage from the Hardware panel.
2. From the Datastores view, select the iSCSI datastore.
Ensure the number of paths displayed is more than one:

Accelerated iSCSI 74

3. Click Properties Manage Paths.

Details about the multipath settings displays.



Accelerated iSCSI 75

Using the VMware iSCSI Software Initiator in conjunction with Accelerated iSCSI support
Using an iSCSI software initiator with the Emulex iSCSI hardware initiator is possible under VMware with
Virtual Connect. This section describes the different configuration steps required.

Virtual Connect Profile Configuration
From the ESX server's VC Profile, ensure at least one server NIC is connected to an iSCSI network or to
a network where an iSCSI device is available.



Software iSCSI
Accelerated iSCSI
Accelerated iSCSI 76

vSphere iSCSI Software configuration with multiple NICs

1. Open the vSphere Client, go to the Configuration tab, then click Networking.
2. Click Add Networking.
3. Select VMkernel then click Next.
4. Select Create a virtual switch.
5. Select the two vmnic connected to the iSCSI network (for example, iSCSI-1 and iSCSI-2).
6. Click Next.
7. Enter a network label (for example, iSCSI-1), then click Next.
8. Specify the IP Settings (Static or DHCP), then click Next.
9. Click Finish.

10. Select the vSwitch1 just created, then click Properties.
11. Under the Ports tab, click Add.
12. Select VMkernel, then click Next.
13. Enter a network label (for example, iSCSI-2), then click Next.
14. Specify the IP Settings (Static or DHCP), then click Next.
15. Click Finish.

Mapping each iSCSI port to just one active NIC

1. Select iSCSI-1 on the Ports tab, then click Edit.
2. Click the NIC Teaming tab and select Override vSwitch failover order.
Accelerated iSCSI 77

3. Move the second vmnic under Unused Adapters.

4. Repeat step 1 to step 3 for the second iSCSI port (iSCSI-2), but this time moving the first vmnic under
Unused Adapters.

5. Review the configuration, then click Close.

The iSCSI-1 VMkernel interface network is linked to the vmk1 port name and iSCSI-2 is linked to
vmk2.



Accelerated iSCSI 78

Binding iSCSI ports to iSCSI adapters
From the vSphere CLI command, enter the following:
#esxcli swiscsi nic add -n vmk1 -d vmhba32
#esxcli swiscsi nic add -n vmk2 -d vmhba32
#esxcli swiscsi nic list -d vmhba32

Enable Jumbo Frames on vSwitch
For an MTU=9000, enter the following on the vSphere CLI:
# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vswitch -l
Accelerated iSCSI 79


Enable Jumbo frames on the VMkernel interface
For an MTU=9000, enter the following on the vSphere CLI:
#esxcfg-vmknic -m 9000 iSCSI-1
#esxcfg-vmknic -m 9000 iSCSI-2
#esxcli swiscsi nic list -d vmhba32


Accelerated iSCSI 80

Configuring iSCSI Software target discovery address
1. Log in to the vSphere Client and select your server from the inventory panel.
2. Click the Configuration tab and click Storage Adapters in the Hardware panel.
3. Select the iSCSI Software Adapter, then click Properties.

4. Click Configure, then check the Enabled option.
5. Enter the same iSCSI name as the one already configured on the Accelerated iSCSI HBAs.


NOTE:
The host assigns a default iSCSI name to the initiator that must be changed in
order to follow the iSCSI RFC-3720, where it is stated to use the same IQN for all
connections for a given host and not associate different ones to different adapters.
6. Click OK.
7. Go to the Dynamic Discovery tab and click Add.
8. Enter the iSCSI Storage System IP address, click OK, then click Close.
9. Accept the rescan of the freshly configured iSCSI Software HBA.



Accelerated iSCSI 81

Newly discovered iSCSI targets should appear in the list if a LUN has been correctly presented to the
server.

10. Click the Paths tab and ensure two paths are discovered.


Adding iSCSI Software Datastore
1. From the Hardware panel, click Storage.
2. Click Add Storage.

3. Select the Disk/LUN storage type, then click Next.
4. Select the iSCSI device freshly discovered to use for your datastore, then click Next.
5. Click Next.
6. Enter a name for the software iSCSI datastore.
7. Adjust the file system values, if needed, then click Next.
Accelerated iSCSI 82

8. Click Finish.

Two datastores are now available; one is using iSCSI Accelerated ports (iSCSI HBA) and the other is
using iSCSI software traffic flowing through the NIC adapters.








Accelerated iSCSI 83

For more information
The following links may be useful for further information regarding iSCSI.

Microsoft Windows 2008 R2
See HP LeftHand SAN iSCSI Initiator for Microsoft Windows Server 2008
http://h10032.www1.hp.com/ctg/Manual/c01750839.pdf

VMware ESX Server 4
See chapter 2 of the VMware iSCSI SAN Configuration Guide
ESX 4.0: http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf
ESX 4.1: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf

Citrix XENServer
See http://docs.vmd.citrix.com and
http://forum.synology.com/wiki/index.php/How_to_use_iSCSI_Targets_on_Citrix_XenServer

Linux Redhat Enterprise
See Red Hat Storage Administration Guide
http://docs.redhat.com/docs/en-
US/Red_Hat_Enterprise_Linux/6/pdf/Storage_Administration_Guide/Red_Hat_Enterprise_Linux-6-
Storage_Administration_Guide-en-US.pdf

Linux SUSE Linux Enterprise Server 11
Find detailed documentation on http://www.novell.com/documentation/sles11/

Boot from Accelerated iSCSI 84

Boot from Accelerated iSCSI
The iSCSI Boot feature allows a server to boot from a remote disk (known as the iSCSI target) on the
network without directly attaching a boot disk.
The main benefits are as follows:
Centralized boot process
Cheaper servers (diskless)
Less server power consumption Servers can be denser and run cooler without internal storage.
Boot from SAN-like benefits at attractive costs
Easier server replacement You can replace servers in minutes, the new server points to the old boot
location.
Easier backup processes The system boot images in the SAN can be backed up as part of the
overall SAN backup procedures.
NOTE:
Under Virtual Connect management, Accelerated iSCSI takes place automatically when
iSCSI boot is enabled.

Boot from Accelerated iSCSI 85

Creating an Accelerated iSCSI Boot Virtual Connect profile
The iSCSI configuration setup feature enables a VC user to configure a server to boot from a remote iSCSI
target as part of the VC server profile.
The following steps provide an overview of the procedure to enable iSCSI boot:
1. Open Virtual Connect Manager.
2. From the Define menu, select Server Profile to create a new VC profile.

Be careful and note that VCM assigns FCoE connections by default.

NOTE:
A server with one FlexFabric adapter can be configured with a unique personality,
either all Ethernet, or Ethernet/iSCSI, or Ethernet/FCoE. Therefore, it is' not
possible in this case to enable at the same time both FCoE and iSCSI connections.
So the two existing FCoE connection must be deleted. On the other hand, a server
with multiple FlexFabric Adapters can be configured with both iSCSI and FCoE
connections mapped to different adapters.
3. If you have a unique FlexFabric Adapter, delete the two FCoE connections by clicking Delete,
otherwise go to the next step.

Boot from Accelerated iSCSI 86

4. In the iSCSI HBA Connections section, click Add.

5. In the Network Name column, click Select a network.

6. Select your iSCSI dedicated VC network then click OK.


NOTE:
The multiple network feature (that is, when using 802.1Q VLAN tagging) is not
supported for iSCSI connections.
7. In the Port Speed column, you can adjust the speed settings (to Auto, Preferred, or Custom)

Boot from Accelerated iSCSI 87

8. In the Boot Setting column, select Primary.


NOTE:
Disabled: Only Accelerated iSCSI is available. Boot is unavailable.
Primary: Enables you to set up a fault-tolerant boot path and displays the screen
for Flex-10 iSCSI connections. Accelerated iSCSI is enabled.
USE-BIOS: Indicates if boot will be enabled or disabled using the server iSCSI
BIOS utility.

When Primary is selected, a new window menu pops up.




Boot from Accelerated iSCSI 88

The following two options are offered:
Enter all iSCSI boot parameters manually for the primary connection.
See Appendix 1 for more information.
Use the iSCSI Boot Assistant (if previously configured, see the following for more details).

Boot Assistant prerequisites:
a. Update your LeftHand SAN solution with the latest software version.
b. Create servers and volumes on the LeftHand storage.
c. Assign LeftHand volumes to servers.
d. Configure the LeftHand user and password credentials in VCM:
Click on the Storage Mgmt Credentials link on the left hand side menu:

Click Add and enter the required information:

NOTE:
Boot Assistant supports only HP LeftHand SAN Solution. It is the recommended
method, as it simplifies the procedure and avoids typing errors.



Boot from Accelerated iSCSI 89

Steps with Boot Assistant:
a. Click Use Boot Assistant.
b. Then select the iSCSI target from Management Targets drop down list, then click Retrieve.

c. Then select the Boot Volume.



9. After all entries have been correctly entered either manually or by the Boot Assistant, save the iSCSI
Boot parameters, by clicking Apply.




Boot from Accelerated iSCSI 90

NOTE:
Boot Assistant will populate information for iSCSI Boot Configuration' and
Authentication'. The user must fill in the Initiator Network Configuration', either by
checking Use DHCP to retrieve network configuration' or by filling in the IP address
and Netmask fields.
10. Now, the second iSCSI connection can be defined. In the iSCSI HBA Connections section, click Add.


NOTE:
Allowing more than one iSCSI application server to connect to a volume
concurrently without cluster-aware applications or without an iSCSI initiator with
MPIO software could result in data corruption.
11. In the Network Name column, select your second iSCSI dedicated VC network.

12. In the Port Speed column, you can adjust the speed settings (to Auto, Preferred, or Custom).

13. In the Boot Setting column, select Secondary.

A new window menu pops up automatically when Secondary is selected.

14. Enter all iSCSI boot parameters for the secondary connection. The settings are the same as the
primary configuration (see Appendix 1 for additional information) or you can use the iSCSI Boot
Assistant (if previously configured).
Boot from Accelerated iSCSI 91


15. Save by clicking Apply.

16. Additional VC Ethernet Network connections must be added to the VC profile to give the server other
network access, like Service Console, VM Guest VLAN, VMotion, and so forth. This will obviously
depend on your server application.

Boot from Accelerated iSCSI 92

17. When completed, you can assign your profile to a server bay.

18. Click Apply to save the profile.

19. The server can now be powered on (using either the OA, the iLO, or the Power button).


NOTE:
For servers with recent System BIOS, you can press any key to view the Option
ROM boot details.
Boot from Accelerated iSCSI 93

20. While the server starts up, a screen similar to the following should be displayed.


21. Ensure the iSCSI disk is shown during the Emulex iSCSI scan.

Validation of the iSCSI configuration:







If everything is correct, you can start with the OS deployment; otherwise, see the Troubleshooting
section for more information.
iSCSI Disk Volume
Ensure you see the drive
information.

Initiator name
iSCSI port 1
Ensure you get an IP
address if DHCP is
enabled.
iSCSI port 2
Ensure you get an IP
address if DHCP is
enabled.

Boot from Accelerated iSCSI 94

iSCSI Boot Image Creation Step-by-Step Guides
Microsoft Windows 2008 R2

Manual Windows installation method

An iSCSI boot Windows Server installation does not require any particular instructions, except that the iSCSi
drivers must be provided at the beginning of the Windows installation in order to discover the iSCSI drive
presented to the server.
1. Launch the Windows installation.


2. On the "Where do you want to install Windows?" screen, the Windows installation does not detect
any iSCSI drives.

Boot from Accelerated iSCSI 95

3. Click Load Driver to provide the iSCSI drivers for the NC551/NC553 10Gb FlexFabric CNA.

To obtain these drivers, go to the following web pages:

HP NC551m Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDo
wnload.jsp?lang=en&cc=us&prodNameId=4145269&taskId=135
&prodTypeId=3709945&prodSeriesId=4145106&lang=en&cc=us
HP NC551i Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDo
wnload.jsp?prodNameId=4132827&lang=en&cc=us&prodTypeId
=3709945&prodSeriesId=4132949&taskId=135
HP NC553i 10Gb 2-port
FlexFabric Server Adapter

http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDo
wnload.jsp?prodNameId=4194638&lang=en&cc=us&prodTypeId
=3709945&prodSeriesId=4194735&taskId=135

4. Using the iLO Virtual Drives menu, mount a virtual folder and select the iSCSI folder located where
your HP drivers have been unpacked.


Boot from Accelerated iSCSI 96

5. Browse the new iLO folder.


6. Windows should detect the be2iscsi.inf file contained in the folder.



Boot from Accelerated iSCSI 97

7. Click Next.


8. If the iSCSI volume has been properly presented and the correct iSCSI driver has been loaded, the
Where do you want to install Windows? screen should indicate the iSCSI Boot volume presented to
the server.




Boot from Accelerated iSCSI 98

9. Select the iSCSI Volume and click Next, then you can complete the Windows installation.

Sometimes a same drive is detected several times by the Windows installation due to multipathing.
This is because the server profile under VCM is configured with two iSCSI ports.

In this case, the Windows installation automatically turns offline the duplicated drives and only leaves
one online drive to be selected.






Boot from Accelerated iSCSI 99

VMware vSphere 5
Some tips before the installation:
Ensure the LUN is presented to the ESX system as LUN 0. The host can also boot from LUN 255.
Ensure no other system has access to the configured LUN.

Manual ESXi installation method

Installing an iSCSI Boot server with ESXi 5.0 is quite straight forward and does not require any specific
steps:
1. Download the installation ISO from the HP website
https://h20392.www2.hp.com/portal/swdepot/displayProductsList.do?category=SVIRTUALor the VMware
website https://www.vmware.com/tryvmware/?p=esxi.
2. Burn the installation ISO to a CD, or move the ISO image to a location accessible using the virtual
media capabilities of iLO.
3. Boot the server and launch the HP VMware ESXi installation.



Boot from Accelerated iSCSI 100

4. Click Enter.

5. Click F11 to accept the agreement.



Boot from Accelerated iSCSI 101

6. The iSCSI volume to install ESXi should be detected without difficulty by the installer.

7. Once the disk is selected, go to the next step.


Boot from Accelerated iSCSI 102

8. Reboot the server to start ESXi 5.0.

9. Once rebooted, install the latest Emulex drivers and firmware. See
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03005737 for more
information.



Boot from Accelerated iSCSI 103

VMware vSphere 4.1
Some tips before the installation:
Ensure the LUN is presented to the ESX system as LUN 0. The host can also boot from LUN 255.
Ensure no other system has access to the configured LUN.

Manual ESX installation method

1. Obtain the iSCSI drivers.

The VMware ESX 4.1 CD does not include the drivers for the FlexFabric 10Gb Converged Network
Adapters (NC551 and NC553 drivers), so during the install, it is' necessary to provide the iSCSI
drivers (be2iscsi) and the NIC drivers (be2net).

Visit the following links to download the latest iSCSI drivers:

HP NC551m Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDown
load.jsp?lang=en&cc=us&prodNameId=4145269&taskId=135&pro
dTypeId=3709945&prodSeriesId=4145106&lang=en&cc=us
HP NC551i Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDown
load.jsp?prodNameId=4132827&lang=en&cc=us&prodTypeId=370
9945&prodSeriesId=4132949&taskId=135
HP NC553i 10Gb 2-port
FlexFabric Server Adapter

http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDown
load.jsp?prodNameId=4194638&lang=en&cc=us&prodTypeId=370
9945&prodSeriesId=4194735&taskId=135

2. Start the VMware installation by inserting the ESX 4.1 DVD or ISO image through ILO Virtual Media.
3. On the Custom Drivers page, click Yes and Add to install the Emulex NIC drivers (be2net).


Boot from Accelerated iSCSI 104

4. Insert the recently downloaded Emulex ESX4.1 NIC driver CD and select the only module to import.

5. The NIC driver should be listed in the Custom Drivers window.




Boot from Accelerated iSCSI 105

6. Click Add again to provide the Emulex iSCSI drivers (be2iscsi), insert the Emulex ESX 4.1 iSCSI driver
CD, and select the module to import.

7. Two drivers should be listed. Click Next.

8. Click Yes at the Load the system drivers warning.

Boot from Accelerated iSCSI 106

9. Ensure you have the Emulex Network Adapter properly detected, then configure the network settings.

10. At the Setup Type window, select Advanced setup.


Boot from Accelerated iSCSI 107

11. If the iSCSI volume has been properly presented and the correct iSCSI driver has been loaded, the
Select a location to install ESX' window should indicate the iSCSI Boot volume presented to the server.

12. Select the iSCSI volume, click Next, then follow the traditional installation procedures. Check
VMware's documentation for more details.





Boot from Accelerated iSCSI 108

View iSCSI Storage Adapter Information

Procedure:

1. Using a vSphere client, select the host and click the Configuration tab; then in Hardware, select
Storage Adapters.
2. To view details for a specific iSCSI adapter, select the adapter from the Storage Adapters list.

3. To list all storage devices the adapter can access, click Devices.

4. To list all paths the adapter uses, click Paths.




Boot from Accelerated iSCSI 109

How to configure/change/check the iSCSI Multipathing on ESX4.1

Generally, there is nothing to change under ESX, the default multipathing settings used by VMware is
correct. However, if changes are required, it is' possible to modify the path selection policy and preferred
path.

Procedure:

1. Select the host and click the Configuration tab, then in Hardware, select Storage.
2. From the Datastores view, select the datastore, then click Properties.

Boot from Accelerated iSCSI 110

3. Click Manage Paths.


Boot from Accelerated iSCSI 111

4. Set multipathing policy.
Fixed (VMware)
This is the default policy for LUNs presented from an Active/Active storage array.
Uses the designated preferred path flag, if it has been configured. Otherwise, it uses the first
working path discovered at system boot time. If the ESX host cannot use the preferred path or
it becomes unavailable, ESX selects an alternative available path. The ESX host automatically
returns to the previously defined preferred path as soon as it becomes available again.
VMW_PSP_FIXED_AP
Extends the Fixed functionality to active-passive and ALUA mode arrays.
Most Recently Used (VMware)
This is the default policy for LUNs presented from an Active/Passive storage array.
Selects the first working path, discovered at system boot time. If this path becomes
unavailable, the ESX host switches to an alternative path and continues to use the new path
while it is available. ESX does not return to the previous path when if, or when, it returns; it
remains on the working path until it, for any reason, fails.
Round Robin (VMware)
Uses an automatic path selection rotating through all available paths, enabling the distribution
of load across the configured paths.
For Active/Passive storage arrays, only the paths to the active controller will be used
in the Round Robin policy.
For Active/Active storage arrays, all paths will be used in the Round Robin policy.
NOTE:
This policy is not currently supported for Logical Units that are part of a Microsoft
Cluster Service (MSCS) virtual machine.

5. For the fixed multipathing policy, right-click the path to designate it as preferred and select Preferred.
Boot from Accelerated iSCSI 112


6. Click Ok to save and exit the dialog box.

7. Reboot your host for the change to impact the storage devices currently managed by your host.




Boot from Accelerated iSCSI 113

Red Hat Enterprise Linux 5 Update 4

Manual RHEL installation method

1. Obtain the iSCSI drivers.

Red Hat Enterprise Linux DVD does not currently include the drivers for the FlexFabric 10Gb
Converged Network Adapters (NC551 and NC553 drivers), so during the install, 'provide the iSCSI
driver disk (be2iscsi).

To obtain the be2iscsi driver disk, access the websites in the following table:

HP NC551m Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/Driver
Download.jsp?lang=en&cc=us&prodNameId=4145269&taskId
=135&prodTypeId=3709945&prodSeriesId=4145106&lang=e
n&cc=us
HP NC551i Dual Port FlexFabric
10Gb Converged Network
Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/Driver
Download.jsp?prodNameId=4132827&lang=en&cc=us&prodT
ypeId=3709945&prodSeriesId=4132949&taskId=135
HP NC553i 10Gb 2-port
FlexFabric Server Adapter

http://h20000.www2.hp.com/bizsupport/TechSupport/Driver
Download.jsp?prodNameId=4194638&lang=en&cc=us&prodT
ypeId=3709945&prodSeriesId=4194735&taskId=135

Also download the HP NC-Series ServerEngines Driver Update Disk for Red Hat Enterprise Linux 5
Update 4.


Boot from Accelerated iSCSI 114

2. Start the RHEL5.4 installation by inserting the DVD or ISO image through iLO Virtual Media.
At the boot prompt:

If there is a single I/O path to the iSCSI device you want to use for the RHEL5.4 installation,
type: linux dd

This command prompts to load a driver disk during the setup for the FlexFabric CNA.

With multiple I/O paths to the iSCSI device, enter: linux dd mpath

This command enables multipath and prompts to provide a driver disk for the FlexFabric
CNA.


Boot from Accelerated iSCSI 115

3. When prompted, "Do you have a driver disk?" click Yes.

4. Insert the floppy disk image using the iLO virtual Media, then click OK.


Boot from Accelerated iSCSI 116

Notice the floppy is being read, and pressing <Alt> <F3> will show be2iscsi driver being loaded.

5. Press <Alt> <F1> and return to the main install screen.

6. When prompted again with "Do you wish to load any more driver disks?", click No, then follow the
traditional installation procedures. Check Red Hat's documentation for more details.

7. At the "Select the drive to use for this installation" window, ensure only one iSCSI drive is proposed
(with multipathing detected for a multipath iSCSI target installation).




Boot from Accelerated iSCSI 117

Suse Linux Enterprise Server 11

Manual Suse installation method

1. Obtain the iSCSI drivers.

SUSE LINUX Enterprise Server 11 CD does not currently include the drivers for the FlexFabric 10Gb
Converged Network Adapters (NC551 and NC553 drivers), so during the install, 'it is necessary to
provide the iSCSI driver disk (be2iscsi).

To obtain the be2iscsi driver disk, access the websites in the following table:

HP NC551m Dual Port
FlexFabric 10Gb Converged
Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverD
ownload.jsp?lang=en&cc=us&prodNameId=4145269&taskId=1
35&prodTypeId=3709945&prodSeriesId=4145106&lang=en&c
c=us
HP NC551i Dual Port FlexFabric
10Gb Converged Network
Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverD
ownload.jsp?prodNameId=4132827&lang=en&cc=us&prodType
Id=3709945&prodSeriesId=4132949&taskId=135
HP NC553i 10Gb 2-port
FlexFabric Server Adapter

http://h20000.www2.hp.com/bizsupport/TechSupport/DriverD
ownload.jsp?prodNameId=4194638&lang=en&cc=us&prodType
Id=3709945&prodSeriesId=4194735&taskId=135

Also download the HP ServerEngines Driver Update Disk for SUSE Linux Enterprise Server 11.

2. Start the SUSE LINUX Enterprise Server 11 CD1 installation by inserting the DVD or ISO image
through ILO Virtual Media.
Boot from Accelerated iSCSI 118

3. A menu will be displayed prompting for input.

4. Press F6 and select Yes to get prompted during the installation for the driver update disk

Boot from Accelerated iSCSI 119

5. Click Installation.

6. Insert the floppy disk image using the iLO virtual Media, then click OK.

Boot from Accelerated iSCSI 120

7. After a while, you should get the following windows with the new mounted sda: drive listed. Select
sda, then click OK.

8. Notice the floppy is being read. Pressing <Alt> <F3> will show be2iscsi driver being loaded.


Boot from Accelerated iSCSI 121

9. Press <Alt> <F1> and return to the main install screen where you should now see the iSCSI drives.

10. Click Back and follow the traditional installation procedures as prompted. Check SUSE's
documentation for more details.

11. If there are multiple I/O paths to the iSCSI device you want to use for the SUSE installation, you must
enable multipath support. Click Partitioning to open the Preparing Hard Disk page.


Boot from Accelerated iSCSI 122


12. Click Custom Partitioning (for experts), then click Next.

13. Select the Hard Disks main icon, click the Configure button, select Configure Multipath, then click Yes
when prompted to activate multipath.


Boot from Accelerated iSCSI 123

14. This rescans the disks and shows available multipath devices (such as /dev/mapper/
3600a0b80000f4593000012ae4ab0ae65). Select one of the hard disks starting with /dev/mapper/
that should be detected twice if multipathing is used.

15. Click Accept to continue with the traditional installation procedures.

Troubleshooting 124

Troubleshooting
Emulex iSCSI Initiator BIOS Utility
The iSCSI Initiator BIOS Utility is an INT 13H option ROM resident utility you can use to configure, manage,
and troubleshoot your iSCSI configuration.
NOTE:
To get into the iSCSI utility, iSCSI must be enabled in the Virtual Connect profile.
With the Emulex iSCSISelect Utility you can do the following:
Configure an iSCSI initiator on the network
Ping targets to determine connectivity with the iSCSI initiator
Discover and display iSCSI targets and corresponding LUNs
Initiate boot target discovery through Dynamic Host Configuration Protocol (DHCP)
Manually configure bootable iSCSI targets
View initiator properties
View connected target properties

NOTE:
HP Virtual Connect profile takes precedence over the settings in this menu.
If iSCSI SAN boot settings are made outside of Virtual Connect (using iSCSISelect
or other configuration tools), Virtual Connect will restore the settings defined by the
server profile after the server blade completes the next boot cycle.
Only the USE-BIOS option in a VC Profile preserves the boot settings options set in
the iSCSI Utility or through other configuration utilities.
To run the Emulex iSCSISelect Utility, press <Ctrl> + <S> when prompted.


Troubleshooting 125

NOTE:
To reach the iSCSISelect Utility screen, press any key during POST.

The iSCSI Initiator Configuration menu appears after the BIOS initializes.

The iSCSI Initiator name we get here is related to the control of Virtual Connect.
A server with a VC profile without iSCSI Boot parameters shows a factory default initiator name like
"iqn.1990-07.com.Emulex.xx-xx-xx-xx-xx", with the information in red being the default Factory
MAC address of the iSCSI adapter.
A server with a VC profile with iSCSI Boot parameters shows the initiator name defined in the VC
Profile.


Troubleshooting 126

Configuration checking
If you are facing some connection problems with your iSCSI target, use the iSCSISelect Ping utility to
troubleshoot your iSCSI configuration.
If you do not see any iSCSI Boot drive detected during POST (BIOS not installed is displayed during POST),
you might need to check the iSCSI configuration written by VC.
1. From the iSCSI Initiator Configuration menu, tab to Controller Configuration menu, and press Enter.
2. Select the first Controller and press Enter.
3. Move to the iSCSI Target Configuration and press Enter.

4. Verify you have one iSCSI target. This target must correspond to your Virtual Connect profile
configuration.
5. Select the target, then press Enter.
Troubleshooting 127

6. You can verify the target name, the IP Address, and the authentication method.

A screen without target information, as seen below, means there is a problem somewhere.

The following are reasons there could be a problem:
Outdated Emulex firmware
Authentication information problem
Virtual Connect profile not assigned (ensure there is no issue reported by the VC Domain)
Network connectivity issue between the server and the iSCSI target
iSCSI target problem: misconfiguration, wrong LUN masking, and so forth


Troubleshooting 128

For further debugging, you can manually enter the target configuration to run a PING test.
1. Select Add New iSCSI target.
2. Enter all the target information.

3. Select Ping, then press Enter.


Troubleshooting 129

4. You must get the following successful result.

If ping is successful, then 'it is very likely 'it is the incorrect authentication information.
If you still cannot connect or ping your iSCSI target, see the following paragraphs.



Troubleshooting 130

Emulex OneCommand Manager (OCM)
The Emulex OCM Application is a comprehensive management utility for Emulex and CNAs and
HBAs, providing a powerful, centralized adapter management suite, including discovery and
reporting. 'It is an excellent tool for troubleshooting. OCM contains a GUI and a CLI.
OneCommand Manager is available under the following:
Windows Server 2003/2008/2008 R2
Linux
VMware ESX Server
See http://www.emulex.com/downloads/emulex.html for more information.

Some of OCMs interesting features available for iSCSI are as follows:
Run diagnostic tests on adapters
Reset/disable adapters
Manage an adapter's CEE settings
Discover iSCSI targets
Login to iSCSI Targets from CNAs
View iSCSI Target session information
Logout from iSCSI targets

Not all OCM Application features are supported across all operating systems. Consult the Emulex
OneCommand documentation for more information.



3 NIC
1 iSCSI
0 FCoE
Target LUN 0 from Port 2
Target LUN 0 from Port 1
MAC of the NICs
iSCSI Target
iSCSI Initiator
Troubleshooting 131

LUN information

If you select the target, then you can click Sessions.



Troubleshooting 132

Get lots of diagnostics of the target session.

Diagnostics



Troubleshooting 133

Problems found with OneCommand Manager
Problem:
You have configured a VC server profile with two iSCSI ports, but OneCommand Manager or the iSCSI
Initiator show only one iSCSI port connected to the iSCSI LUN.




Solution:
To properly configure iSCSI load balancing, sometimes like with HP LeftHand P4000 SAN Solutions,
you must use the Primary Target Address in the iSCSI boot parameters, the Virtual IP Address (VIP) of
the cluster, and not the IP address of the LeftHand node.


Also, verify with your Storage Administrator that all appropriate clusters have VIPs configured.
Ensure you use the VIP of
your storage cluster when
using two iSCSI ports boot.
connections
Troubleshooting 134

Problems found during iSCSI Boot
Problem:
You are using a DHCP server to get the iSCSI Boot parameters, but during server Power-On Self-Test
(POST), the iSCSI disk is not being displayed when the Emulex iSCSI Initiator is executed, although the
menu shows the correct initiator IP addresses.

Solution 1:
You are not able to get the DHCP option 43 to work with the FlexFabric 10Gb Converged Network
Adapters. For more information, see Appendix 2.
Solution 2:
Ensure the storage presentation (also known as LUN masking) is correctly configured on the storage
array.

Troubleshooting 135

Problem:
You get a warning message with IP address 0.0.0.0.

Solution:
The following are different reasons for this issue:
o Your second iSCSI boot connection is not configured in the VC profile, in which case 'it is not a
problem.
o You have a network connection issue. Check the VC profile and the vnet corresponding to the
iSCSI connection. Verify the status of the vnet and ensure it is properly configured with all ports
in green.



Problem:
BIOS Not Installed appears without any drive information.

Troubleshooting 136


Solution:
- Ensure the Storage presentation (also known as LUN masking) is correct.
- Ensure the Emulex CNA has the latest Firmware.
- This may also be network, or authentication information problem, so do the following:

1. Check iSCSISelect on the Controller Properties page to ensure Boot Support is Enabled.

2. Check the network in iSCSISelect on the Network Configuration page, and ensure the Link
Status field is Link Up' and the DHCP field match what is configured in VC.

Troubleshooting 137

3. Click Ping.

4. Enter the target IP' to ensure the network connection is OK.

5. If ping failed and DHCP is enabled, check the DHCP server setup.


Troubleshooting 138

6. If ping failed and DHCP is disabled:
a. Select Configure Static IP Address.

b. Ensure the initiator IP, network mask, and gateway are set up correctly.

7. If Ping is successful, ensure authentication configuration in VC matches what is configured on
storage side.


Problem:
You cannot boot from iSCSI and your boot iSCSI LUN ID is above 8
Solution:
Change the iSCSI boot LUN ID to a number lower than 9.

Troubleshooting 139

PXE booting problems
Problem:
Media test failure when PXE booting a G6 server with a CNA mezzanine and only FlexFabric modules
in bay 3 and 4:
Solution:
Ensure all the LOM ports (that is, 1 and 2) are PXE disabled in the VC profile, otherwise you will see
the PXE error media test failure because the default VC settings are Use-Bios.


Troubleshooting 140

iSCSI boot install problems with Windows 2003
Problem:
The installation loaded the iSCSI driver and proceeded, seeing the iSCSI target LUN. Later, during
the 'copy file' phase, it could not find the be2iscsi.sys/.cat/.inf driver files, even though the same files
were used to access the target. Skipping the files causes the Windows installation to fail.
Solution:
Get the Windows installation DVD that provides a choice of 'Custom' and 'Express' installs. You must
choose the 'Custom' installation. The 'Express' installation requires the boot driver to be a Windows-
signed driver (WinLogo). If it is not, the Windows 2003 installation gives a very misleading generic
message (file not found) and fails the installation.

Problem:
Drivers loaded successfully, target LUN seen, installation proceeded to target LUN, no complaint of
drivers not found, but a blue screen occurred.
Solution:
Monitor the network to the iSCSI storage using one of the many network monitoring tools available
on the market. Network problems have also been one of the causes of iSCSI boot install issues with
Windows 2008. For more information, see Appendix 3: How to monitor an iSCSI Network.

Problem:
'Load driver' seems to hang; it should take only a few seconds.
Solution:
Check that the firmware version matches the driver version; a mismatch can cause the hang.

Problem:
During the 'Expanding files' phase, the installation message stated that it could not locate some files
and terminated.
Solution:
Monitor the network access to the iSCSI storage. The network can be the cause.


Problem:
iSCSI boot parameters are set up on the VC side, but the iSCSISelect utility does not show the
configuration in the 'iSCSI Target Configuration' for the specified controller.
Solution:
Ensure the controller 'Network Configuration' is correct. Check the link status and ensure it is 'Link Up',
and check the 'Static IP Address' setup: correct IP, mask, and gateway (if routing is required from the
initiator to the target).



VCEM issues with Accelerated iSCSI Boot
Problem:
During the creation of an iSCSI boot Virtual Connect profile under VCEM 6.2, you get an unclear
format error message.
Solution:
VCEM 6.2 allows only lowercase characters for the initiator and target information (this restriction
does not apply to VCM).



iSCSI issues with HP P4000 products
Problem:
You are connected to an HP P4000 solution and, during heavy network utilization, the iSCSI initiator
driver reports a device failure.
Solution:
Something is significantly slowing down the response time on the network, such that the iSCSI
initiator session recovery cannot occur within the default driver settings. In this case, it might be
useful to increase the iSCSI initiator driver timeout parameter called Extended TimeOut (ETO), which
is configurable through OneCommand Manager.
By default, ETO is 90 seconds on all Windows operating systems. It can be set between 20 and
3600 seconds. You can also set it to 0 seconds, but the minimum value assumed by the driver is
20.

To set the value:

1. Launch OneCommand Manager.

2. Click the first iSCSI port and select the iSCSI target.




3. In the ETO entry, enter the new timeout value, then click Apply.

4. Repeat steps 1 through 3 for the second iSCSI port.














Appendix 1- iSCSI Boot Parameters
Mandatory iSCSI Boot Parameters entries
Some entries must be filled in correctly to successfully boot from iSCSI.


iSCSI Initiator (iSCSI Boot Configuration)
(Also known as the iSCSI client or iSCSI host)



This is the name used for the iSCSI initiator on the booting system. The initiator name can be a maximum
of 223 characters.



NOTE:
Ensure the Initiator Name you set is unique. This Initiator Name must be given to
the Storage Administrator for the Storage presentation (also known as LUN
masking).
NOTE:
Use the same iSCSI qualified name (IQN) initiator name for all connections for a
given host, and do not associate different names with different adapters (see iSCSI
RFC 3720).
Each iSCSI host is identified by a unique IQN. This name is similar to the WorldWide Name (WWN)
associated with Fibre Channel devices and is used as a way to universally identify the device.
iSCSI qualified names take the form iqn.yyyy-mm.naming-authority:unique-name, where:
- yyyy-mm is the year and month when the naming authority was established.
- naming-authority is usually the reverse syntax of the Internet domain name of the naming authority.
For example, the iscsi.vmware.com naming authority could have the iSCSI qualified name form
of iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was
registered in January 1998, and iscsi is a subdomain, maintained by vmware.com.
- unique-name is any name you want to use, for example, the name of your host.
Examples:
iqn.1991-05.com.microsoft:win-g19b6w8hsum
iqn.1990-07.com.Emulex.00:17:A4:77:04:02
iqn.1998-01.com.vmware.iscsi:name1
iqn.1998-01.com.vmware.iscsi:name2
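To illustrate the convention, here is a minimal Python sketch (illustrative only, not part of any HP or
Emulex tool) that checks whether a name looks like a well-formed IQN within the 223-character limit:

import re

# Loose IQN pattern per RFC 3720: iqn.yyyy-mm.reversed-domain[:unique-name]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$", re.IGNORECASE)

def is_valid_iqn(name):
    # True if 'name' looks like a well-formed IQN of 223 characters or less.
    return len(name) <= 223 and IQN_RE.match(name) is not None

assert is_valid_iqn("iqn.1998-01.com.vmware.iscsi:name1")
assert not is_valid_iqn("my-server-01")  # not an IQN at all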

iSCSI Target (iSCSI Boot Configuration)
The iSCSI target parameters can be set either statically (by disabling DHCP) or dynamically (by enabling
DHCP).
For simplification and to prevent human error, it may be easier to use a DHCP server for the iSCSI boot
parameters.

Statically configure the target information
1. Target Name


The Target Name is your iSCSI target from which the server boots. The target name can be a maximum
of 223 characters.
This target name is provided by the Storage Administrator.

2. Boot LUN


The LUN of the Target identifies the volume to be accessed. Valid values for standard LUNs are 0-255
decimal. Valid values for extended LUNs are 13 to 16 character hexadecimal values.
This Boot LUN is provided by the Storage Administrator.

3. Primary Target Address


The Primary Target Address is the primary IP address used by the iSCSI target.
The TCP port associated with the primary target IP address is 3260 by default.
This Primary Target Address and TCP Port are also provided by the Storage Administrator.
Depending on your storage solution, if you plan to configure a second iSCSI boot connection
from the second iSCSI HBA, you may need to enter here the Virtual IP (VIP) address of your
storage cluster and not the IP address of one of the nodes (this is the case with HP LeftHand
P4000 SAN Solutions).
NOTE:
A VIP address is a highly available IP address, ensuring that if a storage node in a
cluster becomes unavailable, servers can still access a volume through the other
storage nodes in the cluster.
The benefit of using a VIP here is that, in case of a storage node failure, the iSCSI traffic does
not fail over to the second iSCSI port, thus reducing risk, latency, and so forth.


Dynamically configure the target information

To use DHCP when configuring the iSCSI boot configuration, select the Use DHCP checkbox.


NOTE:
In a dynamic configuration, the initiator name, the target name, the LUN number,
and the IP address can be provided by the DHCP server. Selecting this option
requires setting up a DHCP server properly with iSCSI extensions to provide boot
parameters to servers. See Appendix 2 for more information.


Initiator Network Configuration

1. VLAN ID
This is the VLAN number the iSCSI initiator will use for all sent and received packets. Accepted
values are from 1 to 4094. This field is used only when the iSCSI vNet is in VLAN tunneling mode.

However, the most common scenario is using an iSCSI vNet in VLAN mapping mode (that is, not
using the VLAN tunneling mode). Under this mode, the VLAN ID field should be left blank, as an
iSCSI connection cannot be assigned to multiple networks.

2. IP address/Netmask/Gateway
This is the network configuration of the iSCSI initiator. Either a fixed IP address or DHCP can be set.



When the Use DHCP checkbox is selected, it allows the iSCSI option ROM to retrieve the TCP/IP
parameters from a DHCP server.



Optional iSCSI Boot Parameters entries
Some fields are optional and can be configured for enhancements.


Secondary iSCSI Target Address
The secondary iSCSI target address is used if the primary target IP network is down. If the server fails
to boot from the primary target, the secondary target is used instead.

The TCP port associated with the secondary target IP address is 3260 by default.
This Secondary Target Address and TCP Port are provided by the Storage Administrator.
A secondary target is usually not required here for a server with a second CNA iSCSI port.


Security enhancement using an authentication
For added network security, the Challenge Handshake Authentication Protocol (CHAP) can be enabled to
authenticate initiators and targets. Using a challenge/response security mechanism, CHAP periodically
verifies the identity of the initiator. This authentication method depends on a secret known only to the
initiator and the target. Although the authentication can be one-way, you can also negotiate CHAP in both
directions (2-way CHAP) with the help of a secret set for Mutual authentication. You must ensure, however,
that what you have configured on the target side matches the initiator side. Both one-way (CHAP Mode)
and Mutual authentication (CHAPM) are supported.
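
For illustration, the challenge/response computation itself is simple: per RFC 1994 (CHAP with MD5,
which iSCSI negotiates as CHAP_A=5), the response is an MD5 digest over the identifier, the secret, and
the challenge. A minimal Python sketch (illustrative only):

import hashlib
import os

def chap_response(identifier, secret, challenge):
    # CHAP with MD5 (RFC 1994): response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target sends a random challenge; the initiator proves knowledge of
# the shared secret without ever sending the secret itself on the wire.
challenge = os.urandom(16)
response = chap_response(1, b"secret1234567", challenge)

In mutual (2-way) CHAP, the initiator then challenges the target the same way using the initiator secret.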

To enable one-way CHAP authentication:
Select CHAP, then enter the Target CHAP Username and Target Secret.


NOTE:
The Target/Initiator CHAP Name and Target/Initiator Secret can be any name or
sequence of characters longer than 12 and shorter than 16 characters. However,
the username and secret on the Target side must match those on the Initiator side.

CHAP authentication must also be enabled on the storage system side by the Storage
Administrator.


To enable Mutual CHAP authentication:
Select CHAPM, then enter the Target CHAP Username, Target Secret, Mutual Username (also known as
the Initiator CHAP Name), and Mutual Secret (also known as the Initiator Secret).



NOTE:
The Target/Initiator CHAP Name and Target/Initiator Secret can be any name or
sequence of characters longer than 12 and shorter than 16 characters. However,
the name and secret on the Target side must match those on the Initiator side.
NOTE:
If you enable CHAP/CHAPM through DHCP Vendor Option 43, the username
and secret must still be entered through the VC profile configuration.
CHAPM authentication must also be enabled on the storage system side by the Storage
Administrator.



Appendix 2 - Dynamic configuration of the iSCSI Boot
Parameters
The dynamic configuration of the iSCSI target parameters through a DHCP server is one of the options
available in the iSCSI Boot configuration; it automatically provides all of the following required
parameters: the initiator name, the target name, its IP address and TCP port, and the boot LUN number.
During this dynamic configuration, DHCP clients (that is, iSCSI initiators configured with DHCP enabled for
the iSCSI Boot configuration) are identified by the DHCP server by their vendor type. This mechanism
uses DHCP Vendor Option 43, which contains the vendor-specific information. If DHCP option 43
is not defined as specified by the vendor, the DHCP server cannot identify the client and therefore
will not provide the iSCSI target information.
When this occurs, the iSCSI disk is not displayed when the Emulex iSCSI Initiator is executed.



It is necessary to properly configure the DHCP server with the mandatory fields specified by the vendor
(Emulex) for the NC551 and NC553 Dual Port FlexFabric 10Gb Converged Network Adapters.
We will describe the configuration steps for both Windows and Linux DHCP servers.





Windows 2008 DHCP server configuration
The following are setup instructions for a Windows 2008 DHCP server to enable DHCP option 43 as
specified by Emulex for the NC551 and NC553 Dual Port FlexFabric 10Gb Converged Network Adapters.
1. Start DHCP Manager.
2. In the console tree, click the applicable DHCP server branch.
3. Right-click IPV4, then click Define Vendor Classes to create a new vendor class.

4. Click Add.




5. In the New Class dialog box, add a descriptive identifying name for the new option in the
Display name box. You may also add additional information to the Description box.

6. Under ASCII, type in the data to be used by the DHCP Server service (32 characters maximum),
for example, Emulex.


To enter the ID, click the right side of the text box under ASCII.

This ID is the value to be entered in the VC iSCSI Boot Configuration 'DHCP Vendor ID' field.

7. Click OK, then click Close.



8. Right-click IPV4 again and then click Set Predefined Options to set the vendor-specific options.

9. In Option class, choose the previously created vendor.



10. Click Add.

11. Provide the following:
a. An option name, for example Boot target.
b. Data type = String or Encapsulated
c. Code = 43
d. Add any description.




12. Click OK.

13. Click OK again.
14. In DHCP Manager, double-click the appropriate DHCP scope, right-click Scope Options, then click
Configure Options.

15. Click the Advanced tab.



16. Choose the vendor class previously entered (for example, Emulex).

17. Select the check box for 43 (created earlier) and enter the correct string, for example:
iscsi:192.168.1.19:::iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1:"iqn.1991-05.com.vmware:esx5-1":::D

For more information about the format of the DHCP Option 43 string, see the following sections.

18. Click OK.

NOTE:
It is not necessary to restart the DHCP service after making a modification.


19. After the configuration is complete on the DHCP server, enable DHCP and use the same vendor ID
within the VC server profile in VCM.

NOTE:
The DHCP Vendor ID is case sensitive.
NOTE:
If the Initiator Name is not set up in DHCP Vendor option 43, it is necessary
to enter the initiator name at the same time as the Vendor ID.

If the Initiator Name is set up in DHCP Vendor option 43, it is not necessary
to enter the initiator name.

If you enter an initiator name anyway, the DHCP option 43 setup overrides the
value entered from VC. The precedence for the initiator name is: DHCP option 43,
then the VC configuration, then the default initiator name.
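
Expressed as a minimal Python sketch (illustrative only), the precedence rule is:

def effective_initiator_name(dhcp_opt43_name, vc_name, default_name):
    # Precedence: DHCP option 43, then the VC configuration, then the default.
    return dhcp_opt43_name or vc_name or default_name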





a. The Initiator Network Configuration is not configured through DHCP Vendor Option 43, so
you must fill in the IP address and Netmask fields.

b. Alternatively, check Use DHCP to retrieve the network configuration.

c. The Authentication method must match what is configured in DHCP vendor option 43 (that
is, in the AuthenticationType field). DHCP vendor option 43 does not configure the username
and secret, so provide the Username/Secret here if authentication is enabled in DHCP:





Linux DHCP server configuration
The following are setup instructions for a Linux DHCP server to enable DHCP option 43 as specified by
Emulex for the NC551 and NC553 Dual Port FlexFabric 10Gb Converged Network Adapters.
For a Linux DHCP server, the vendor option 43 is defined in the /etc/dhcpd.conf file.
The following example illustrates how to define the vendor option 43 for the NC551/NC553 in
dhcpd.conf. The vendor-class-identifier ("Emulex") is the vendor identifier that must also be set in the
iSCSI VC profile, and the option 43 string format must be defined correctly as shown (for more
information about the format of the DHCP Option 43 string, see the next section).

option vendor-class-identifier "Emulex";
option space HP;
option HP.root-path code 201 = text;

class "vendor-classes" {
    match option vendor-class-identifier;
}

subclass "vendor-classes" "Emulex" {
    vendor-option-space HP;
    option HP.root-path "iscsi:\"10.10.10.203\":\"3260\"::\"iqn.1986-03.com.hp:storage.p2000g3.1020108c44\":::::";
}

After the dhcpd.conf file has been modified and saved, restart the DHCP service as follows:
[root@DHCP_server]# service dhcpd restart

The DHCP server configuration is now complete.

Format of DHCP option 43 for NC551/NC553 CNA
iscsi:"<TargetIP>":"<TargetTCPPort>":"<LUN>":"<TargetName>":"<InitiatorName>":"<HeaderDigest>":"<DataDigest>":"<AuthenticationType>"
Strings shown in quotes are part of the syntax and are mandatory.
Fields enclosed in angular brackets (including the angular brackets) should be replaced with their
corresponding values. Some of these fields are optional and may be skipped.
When specified, the value of each parameter should be enclosed in double quotes. See the
following examples.
All options are case insensitive.

Description of Parameters
TargetIP
Replace this parameter with a valid IPv4 address in dotted decimal notation. This is a mandatory
field.
TargetTCPPort
Replace this parameter with a decimal number ranging from 1 to 65535 (inclusive). It is an
optional field. The default TCP port 3260 is assumed, if not specified.
LUN
This parameter is a hexadecimal representation of Logical Unit number of the boot device. It is an
optional field. If not provided, LUN 0 is assumed to be the boot LUN. It is an eight-byte number
which should be specified as a hexadecimal number consisting of 16 digits, with an appropriate
number of 0's padded to the left, if required.
TargetName
Replace this parameter with a valid iSCSI target 'iqn' name of up to 223 characters. This is a
mandatory field.
InitiatorName
Replace this parameter with a valid iSCSI 'iqn' name of up to 223 characters. This is an optional
field. If not provided, the initiator name configured through VC will be used; if it is not configured
in VC, then the default initiator name will be used.
HeaderDigest
This is an optional field. Replace this parameter with either E or D.
a- E denotes header digest is enabled.
b- D denotes that it is disabled.
If skipped, it is assumed that Header Digest is disabled.
DataDigest
This is an optional field. Replace this parameter with either E or D.
a- E denotes data digest is enabled.
b- D denotes that it is disabled.
If not provided, it is assumed that Data Digest is disabled by default.


AuthenticationType
This is an optional field. If applicable, replace this parameter with D, E, or M.
a- D denotes authentication is disabled.
b- E denotes one-way CHAP is enabled. The username and secret to be used for
one-way CHAP must be specified by non-DHCP means.
c- M denotes Mutual CHAP is enabled. The username and secret required for mutual
CHAP authentication must be specified by non-DHCP means.
If not specified, this field defaults to authentication-disabled.

Examples

NOTE:
Emulex requires all attributes to be within double quotes ("") and any optional
parameter not defined must include a colon (:) in the string. If the string is not
properly formed, the option ROM ignores the DHCP server's offer.
iscsi:"<TargetIP>":"<TargetTCPPort>":"<LUN>":"<TargetName>":"<InitiatorName>":"<HeaderDigest>":"<DataDigest>":"<AuthenticationType>"

Typical example with target and initiator name:
iscsi:192.168.1.19:::iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1:"iqn.1991-05.com.vmware:esx5-1":::D
Target IP address: 192.168.1.19
Target TCP port: Not specified; use default from RFC 3720 (3260)
Target boot LUN: Not specified; LUN 0 assumed
Target iqn name: iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1
Initiator Name: iqn.1991-05.com.vmware:esx5-1
Header Digest: Not specified; assume disabled
Data Digest: Not specified; assume disabled
Authentication Type: Disabled

Default Initiator name and Data Digest settings:
iscsi:192.168.0.2:3261:000000000000000E:iqn.2009-4.com:1234567890::E::E
Target IP address: 192.168.0.2
Target TCP port: 3261
Target boot LUN: 0x0E
Target iqn name: iqn.2009-4.com:1234567890
Initiator name: Not specified; use the initiator name already configured, or the default name if
none was configured
Header Digest: Enabled
Data Digest: Not specified; assume disabled
Authentication Type: 1-way CHAP





Default TCP Port and Mutual CHAP:
iscsi:192.168.0.2::000000000000000E:iqn.2009-4.com:1234567890::E:D:M
Target IP address: 192.168.0.2
Target TCP port: Not specified; use default from RFC 3720 (3260)
Target boot LUN: 0x0E
Target iqn name: iqn.2009-4.com:1234567890
Initiator name: Not specified; use the initiator name already configured, or the default name if
none was configured
Header Digest: Enabled
Data Digest: Disabled
Authentication Type: Mutual CHAP
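
As a convenience, the option 43 string can also be assembled programmatically. The following minimal
Python sketch (illustrative only, not an HP or Emulex tool) builds a string in the format described above
and reproduces the first example:

def build_option43(target_ip, target_name, tcp_port="", lun="",
                   initiator_name="", header_digest="", data_digest="",
                   auth_type=""):
    # Assemble the NC551/NC553 DHCP option 43 string.
    # target_ip and target_name are mandatory; every other field may be
    # left empty, in which case its position is kept as an empty slot
    # between colons, as the option ROM requires.
    fields = [target_ip, tcp_port, lun, target_name, initiator_name,
              header_digest, data_digest, auth_type]
    return "iscsi:" + ":".join(fields)

# An extended LUN is an eight-byte number written as 16 hexadecimal digits,
# zero-padded on the left (for example, LUN 14 -> 000000000000000E).
lun_14 = format(14, "016X")

s = build_option43("192.168.1.19",
                   "iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1",
                   initiator_name='"iqn.1991-05.com.vmware:esx5-1"',
                   auth_type="D")
assert s == ('iscsi:192.168.1.19:::iqn.2003-10.com.lefthandnetworks:'
             'p4300:13392:esx5-1:"iqn.1991-05.com.vmware:esx5-1":::D')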



Appendix 3- How to monitor an iSCSI Network?
Methods for identifying the root cause of performance problems, bottlenecks, and various network
issues are important to cover when working with iSCSI traffic.
Let's look briefly at how to identify network problems and recommend actions.

Monitoring Disk Throughput on the iSCSI Storage System
iSCSI devices usually provide performance information from their management software.

The performance monitoring on the iSCSI system can help you understand the current load on the SAN and
provide additional data to support decisions on issues such as:
- Configuration options (Would network bonding help me?)
- Capacity expansion (Should I add more storage systems?)
- Data placement (Should this volume be on my SATA or SAS cluster?)
The performance data does not directly provide answers, but it lets you analyze what is happening and
supports these types of decisions.






Monitoring Network and Disk Throughput on the iSCSI Host
VMware vSphere
With vSphere, network performance and disk throughput are displayed from the Performance tab. Click
Advanced for both vSphere Host and Virtual Machines.

To check the different loads, you can then switch among the Network, Disk, Datastore, and Storage
Adapter views.

The Disk monitoring counters are:
- Disk Read rate
- Disk Write rate
- Disk Usage



For more information about data analysis and recommended actions to correct the problems you identify,
see sections "Storage Analysis" and "Network Analysis" of the following VMware technical note:
http://www.vmware.com/files/pdf/perf_analysis_methods_tn.pdf




Microsoft Windows Resource Monitor
For a Windows system, Microsoft provides the Windows Resource Monitor tool, which monitors
resource usage in real time. Among other things, it can help you analyze disk and network throughput.


























Microsoft also provides other performance utilities, such as SQLIO, which is designed to measure the
I/O capacity of a given SQL configuration and can help verify that your I/O subsystem is functioning
correctly under heavy loads.
For SQL deployments and tests in an iSCSI SAN, see
http://technet.microsoft.com/en-us/library/bb649502%28SQL.90%29.aspx for more information.
For MS Exchange, Microsoft provides specific tools and counters to measure performance. See
http://technet.microsoft.com/en-us/library/dd351197.aspx for more information.







Analyzing Network information from the Virtual Connect interface
The Virtual Connect Manager interface provides many counters, port statistics, and other information
that can help troubleshoot a network issue. Both the GUI and CLI provide telemetry statistics.
From the Interconnect Bays link on the left navigation menu, select one of the modules.

Scroll down to the Uplink Port Information section.


This screen provides details on Port Information (uplinks and downlinks), Port Status, Port Statistics, and
Remote Device Information.
The Connected to column shows the upstream device information (LLDP must be enabled on the switch).

Click Detailed statistics/information for the uplink port used for iSCSI:

A statistics screen opens with a lot of information. Let's look briefly at the port statistics section.

The callouts on the port statistics screens highlight the following points:
- Several error counters must show no errors at all.
- Frames discarded due to excessive transit delay.
- Frames discarded due to excessive MTU size (can indicate a Jumbo frames configuration issue).
- A high number of discards may show that VC is running out of resources.
- FCS errors can indicate a collision issue; use a smaller network segment or avoid hubs.
- Good Jumbo frames statistics, versus Jumbo frames statistics with bad FCS (which may indicate
collisions).
- Counters that should show the lowest possible number of errors.
- Frames received that exceed the maximum permitted frame size.
- Excessive Pause frames are likely due to network congestion.

See the Virtual Connect User Guide for more details about the different port statistics counters.





At the end of the screen, the Remote Device Information is also useful for upstream switch information,
which can help when you need to troubleshoot the remote network devices. The remote switch IP address,
MAC address, remote port, and switch description are provided:

VCM also provides statistics for the server ports (that is, VC downlinks). Go back to the VC Module Bay 1
screen and click Detailed statistics for the server port you are using.

NOTE:
VC does not at this time provide statistics on individual FlexNICs; only the statistics
of the Virtual Connect physical port are provided.

Analyzing Virtual Connect Network performance
To verify the throughput statistics of a VC port, enter the following command under the VC CLI:
-> show statistics-throughput <EnclosureID>:<BayNumber>:<PortLabel>

This command shows the historical throughput (bandwidth and packets) of a port.
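For example, assuming the default enclosure ID enc0, the module in interconnect bay 1, and uplink port
X1 (adjust these identifiers to your environment):
-> show statistics-throughput enc0:1:X1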
VC also provides real-time SNMP traffic statistics for all ports: server ports and uplink ports.
These performance statistics can be collected from VC by any SNMP tool (for example, HP iMC, CACTI,
NAGIOS, PRTG, and so on).

CACTI Management Interface




Wireshark
The best noncommercial tool for debugging network problems is the open source protocol analysis tool
Wireshark. Wireshark contains a protocol dissector for iSCSI, which can be useful for
debugging network and performance problems.

Wireshark is available under Windows, Linux, HP-UX, and so forth.
For more information, see http://www.wireshark.org/.

Software iSCSI analysis
For software iSCSI traffic (that is, iSCSI traffic using a network interface card), you must select in
Wireshark the correct interface carrying the iSCSI traffic for the network capture.
If you are unsure, you can open the VC server profile to get the MAC addresses used by iSCSI.

Then under Wireshark, select one of the two interfaces used by MPIO and click Start. You can use the
Details button to see the corresponding interface MAC address.


For a better display of the iSCSI traffic, enter iscsi in the Filter field. You can also restrict the capture
itself with the capture filter tcp port 3260 (the default iSCSI port).

Following is a sample of a network capture filtering the iSCSI protocol:





iSCSI analysis for Accelerated iSCSI adapters
For Accelerated iSCSI connections (that is, iSCSI traffic using a Host Bus Adapter), Wireshark cannot be
used on the server because it captures packet data only from a network interface. It is therefore necessary
to use the Virtual Connect Port monitoring feature to monitor the Accelerated iSCSI traffic.

VC profile using Accelerated iSCSI adapters


The Virtual Connect Port monitoring enables network traffic on a set of server ports to be duplicated to an
unused uplink port so network traffic on those server ports can be monitored by Wireshark and debugged.
To configure Port Monitoring under VC:
1- Open VC Manager.
2- From the left navigation menu, select Ethernet, then Port Monitoring.
3- Select an unused uplink port for the Network Analyzer Port.
4- For the ports to monitor, select the correct downlinks used by your server from the All Ports
drop-down menu.



5- From the new list, select the two downlinks connected to VC Bay 1 and Bay 2.

6- Click OK.
7- Enable Port Monitoring and click Apply.

8- You should see a new icon appearing in the interface, meaning port monitoring is running.


9- Now you are ready to connect a laptop running Wireshark to the VC uplink port configured as the
network analyzer port.

10- Start the capture and filter using iscsi.




For more information
HP Virtual Connect: www.hp.com/go/virtualconnect

HP Virtual Connect Manuals:
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&c
c=us&docIndexId=64180&taskId=101&prodTypeId=3709945&prodSeriesId=4144084

HP Virtual Connect Cookbook: Single & Multi-Domain Series:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01990371/c01990371.pdf

iSCSI Technologies in HP ProLiant servers using advanced network adapters, Technology Brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01600624/c01600624.pdf

HP NC-Series ServerEngines iSCSISelect User Guide:
http://h10032.www1.hp.com/ctg/Manual/c02255542.pdf
P4000 Best Practices guide:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-5615ENW.pdf



Report comments or feedback to iscsi-vc-cb-feedback@hp.com.





















© Copyright 2011, 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject
to change without notice. The only warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
c02533991-003, February 2012
