
Advanced Virtual I/O Server

Configurations
César Diniz Maciel
Consulting IT Specialist – IBM US
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
Progression of Virtual Storage Devices on VIOS

 The ability to share virtual SCSI disks backed by a Physical Volume (PV) or a Logical Volume (LV) has been available from the beginning.

 VIO server 1.2 gave the ability to share the CDROM drive with client LPARs through Virtual Optical devices.

 With VIO server 1.5, the ability to create "file-backed" virtual devices was added, in addition to virtual SCSI devices backed by a PV or LV.

 Using the cpvdi command, a virtual device image can now be copied from one virtual target device (VTD) to a different VTD. This feature was added under VIO 1.5.2.1-FP11.1.
Power Systems Virtual Optical Device
 padmin user commands in VIO server

$ lsdev -type optical
name    status      description
cd0     Available   SATA DVD-ROM Drive

$ mkvdev -vdev cd0 -vadapter vhost0
vtopt0 Available

$ lsmap -vadapter vhost0

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9111.520.10C1C1C-V1-C13                     0x00000000

VTD                   vtopt0
LUN                   0x8100000000000000
Backing device        cd0
Physloc               U787A.001.DNZ00ZE-P4-D3
Power Systems Virtual Optical Device

[Diagram: the virtual optical device on the VIOS is backed by a disk, LV, file, or physical optical media.]
Client connection - Option One

 On old client
# lsdev -Cl cd0 -F parent
vscsi2
# rmdev -R vscsi2

 On new client
# cfgmgr
Client Side - Virtual Optical Device
 First AIX client LPAR to activate will show new vscsi adapter
and cd0 available
# lsdev -Cs vscsi
cd0 Available Virtual SCSI Optical Served by VIO Server
hdisk0 Available Virtual SCSI Disk Drive

# lsdev -Cs vscsi -F "name physloc"


cd0 U9111.520.10C1C1C-V3-C2-T1-L810000000000
hdisk0 U9111.520.10C1C1C-V3-C31-T1-L810000000000

# lsdev -Cc adapter


ent0 Available Virtual I/O Ethernet Adapter (l-lan)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
vscsi2 Available Virtual SCSI Client Adapter
Client Side - Virtual Optical Device
 Subsequent AIX client LPARs activate, but only show vscsi adapter
Defined, and no optical device

# lsdev -Cc adapter


ent0 Available Virtual I/O Ethernet Adapter (l-lan)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
vscsi2 Defined Virtual SCSI Client Adapter

 This client’s adapter will NOT configure while another client is connected
to the server adapter

# cfgmgr -vl vscsi2



Method error (/usr/lib/methods/cfg_vclient -l vscsi2 ):
0514-040 Error initializing a device into the kernel.
Client Side - Virtual Optical Device

 To release the optical device from owning LPAR

# lsdev -Cl cd0 -F parent
vscsi2

# rmdev -R vscsi2
cd0 Defined

 Now, cfgmgr in the receiving LPAR
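
For example, on the receiving LPAR (output illustrative, matching the device names used above):

# cfgmgr
# lsdev -Cc cdrom
cd0 Available Virtual SCSI Optical Served by VIO Server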


Client connection - Option Two

 Move from the VIO server


$ rmdev -dev vtopt0
$ mkvdev -vdev cd0 -vadapter vhost#
  (where vhost# is the VSCSI adapter for the client partition)
Virtual Optical Media

 File-backed device that works like an optical device (think of it as an ISO image).

 With read-only virtual media the same virtual optical device can be presented to
multiple client partitions simultaneously

 You can easily boot from and install partitions remotely without the need to swap physical CD/DVDs or set up a Network Installation Manager (NIM) server. It is also easier to boot a partition into maintenance mode to repair problems.

 Makes it easier to maintain a complete library of all the software needed for the managed system: the various software packages as well as all the software levels needed to support each partition.

 Client partitions could use blank file-backed virtual optical media for backup purposes
(read/write devices)

 These file-backed optical devices can then be backed up from the VIO server to other types of media (tape, physical CD/DVD, TSM server, etc.)
Virtual Optical Media

Create an ISO file from CDROM


$ mkvopt -name dvd.AIX_6.1.iso -dev cd0 -ro
 You choose the name for this file, so make it meaningful
 Creates an ISO image from the media in /dev/cd0

After the .iso file is in your /var/vio/VMLibrary directory, run:


$ mkvdev -fbo -vadapter vhost#
vtopt0 Available
 Replace vhost# with your Virtual SCSI server adapter name.
 This mkvdev command creates your virtual optical target device.

$ loadopt -vtd vtopt0 -disk dvd.AIX_6.1.iso


 The loadopt command loads vtopt0 with your ISO image
 Replace “dvd.AIX_6.1.iso” with your meaningful filename
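
 To check which image is loaded, or to unload it before loading a different one, lsvopt and unloadopt can be used. A short sketch (vtopt0 and the first image name follow the example above; the replacement image name is hypothetical):

$ lsvopt
$ unloadopt -vtd vtopt0
$ loadopt -vtd vtopt0 -disk another_image.iso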
Converting between backing devices - cpvdi

 New command added at VIO 1.5.2.1-FP-11.1


$ cpvdi -src input_disk_image -dst output_disk_image
[-isp input_storage_pool] [-osp output_storage_pool]
[-overwrite] [-unconfigure] [-f] [-progress]

The cpvdi command copies a block device image, which can be either
a logical or physical volume, a file-backed device, or a file on
another existing disk.

This command is NOT used to move data between non-virtualized


disks and virtualized disks.
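
 A hedged illustration of the syntax (the source LV and file-backed pool names reuse examples from the following slides; the destination name is a placeholder): copying an LV-backed client image into a file-backed device might look like:

$ cpvdi -src client2lv -dst client2_copy -osp file-storage-pl -progress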
Starting from scratch on the VIO server

$ mksp lv-storage-pool hdisk4


lv-storage-pool
0516-1254 mkvg: Changing the PVID in the ODM.
$ lspv
NAME PVID VG STATUS
hdisk0 00c23c9f9e9e1909 rootvg active
hdisk1 00c23c9fa415621f clientvg active
hdisk2 00c23c9f2fbda0b4 clientvg active
hdisk3 00c23c9ffbf3c991 None
hdisk4 00c23c9f20c41ad6 lv-storage-pool active
$ mksp -fb file-storage-pl -sp lv-storage-pool -size 1G
file-storage-pl
File system created successfully.
1040148 kilobytes total disk space.
New File System size is 2097152
$ lssp
Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type
rootvg 69888 49408 128 0 LVPOOL
clientvg 139776 102912 64 3 LVPOOL
lv-storage-pool 69888 67840 64 1 LVPOOL
file-storage-pl 1016 1015 64 0 FBPOOL
$ df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 524288 455864 14% 2293 5% /
/dev/hd2 5242880 936112 83% 51854 32% /usr
/dev/hd9var 1310720 1181296 10% 474 1% /var
/dev/hd3 4718592 4558112 4% 384 1% /tmp
/dev/hd1 20971520 9927544 53% 1374 1% /home
/proc - - - - - /proc
/dev/hd10opt 3407872 2538704 26% 10562 4% /opt
/dev/file-storage-pl 2097152 2079792 1% 4 1% /var/vio/storagepools/file-storage-pl
Creating a new Virtual Media Repository

$ mkrep -sp lv-storage-pool -size 500M


Virtual Media Repository Created
Repository created within "VMLibrary_LV" logical volume
$ lssp
Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type
rootvg 69888 49408 128 0 LVPOOL
clientvg 139776 102912 64 3 LVPOOL
lv-storage-pool 69888 67328 64 1 LVPOOL
file-storage-pl 1016 1005 64 1 FBPOOL
VMLibrary_LV 508 507 64 0 FBPOOL
$ lsvopt
VTD Media Size(mb)
vtopt0 No Media n/a
$ lsmap -vadapter vhost0
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U9117.MMA.1023C9F-V2-C11 0x00000004

VTD vt_ec04
Status Available
LUN 0x8100000000000000
Backing device client2lv
Physloc

VTD vtopt0
Status Available
LUN 0x8200000000000000
Backing device
Physloc
$ mkvopt -name vio-1-5-expansion.iso -file /var/vio/storagepools/file-storage-pl/vio-1-5-expansion.iso -ro
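
 The new image can then be loaded into the virtual optical device (vtopt0 from the earlier lsvopt output):

$ loadopt -vtd vtopt0 -disk vio-1-5-expansion.iso
$ lsvopt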
Seeing the image on the client LPAR

$ lsmap -all | more


SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U9117.MMA.1023C9F-V2-C11 0x00000004

VTD vt_ec04
Status Available
LUN 0x8100000000000000
Backing device client2lv
Physloc

VTD vtopt0
Status Available
LUN 0x8200000000000000
Backing device /var/vio/VMLibrary/vio-1-5-expansion.iso
Physloc
From the client LPAR:
root@ec04 / # cfgmgr
root@ec04 / # lsdev -Cc cdrom
cd0 Available Virtual SCSI Optical Served by VIO Server
root@ec04 / # mount -v cdrfs -o ro /dev/cd0 /cdrom
root@ec04 / # df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 98304 49488 50% 1982 9% /
/dev/hd2 2490368 88936 97% 23302 8% /usr
/dev/hd9var 65536 31584 52% 494 7% /var
/dev/hd3 229376 126928 45% 64 1% /tmp
/dev/hd1 65536 63368 4% 20 1% /home
/proc - - - - - /proc
/dev/hd10opt 229376 35272 85% 2937 11% /opt
/dev/cd0 19724 0 100% 4931 100% /cdrom
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
Shared Ethernet Adapter

 Physical access shared by multiple networks

 Physical access can be a single adapter or an aggregate of adapters (EtherChannel/Link Aggregation)

 Shared Ethernet operates at layer 2


 Virtual Ethernet MAC visible to outside systems
 Broadcast/Multicast support
Create the Shared Ethernet Adapter (SEA)
[Diagram: VIOS 1 bridges a link aggregation ent3 (over physical ent0 and ent1) and the virtual adapter ent2 (PVID 1, VID 100) through the SEA ent4 with interface en4. Client 1 and Client 2 each have two virtual adapters with interfaces en0/en1, one on PVID 1 and one on PVID 100. Untagged traffic travels on PVID 1; tagged traffic uses VID 100.]
 Create the Shared Ethernet Adapter (SEA)
$ mkvdev -sea ent3 -vadapter ent2 -default ent2 -defaultid 1
ent4 Available
en4
et4
Shared Ethernet Adapter Failover

[Diagram: a client virtual Ethernet adapter served by two VIOS, each with its own SEA over a physical Ethernet adapter; VIOS 1 is the primary and VIOS 2 the backup.]

 VIOS feature (independent of the client partition)

 Provides a backup adapter for the SEA, with active monitoring.

 Virtual Ethernet Control Channel between the two VIOS.

 No load balancing; only the primary SEA is active. Traffic flows through the secondary SEA only when the primary SEA fails.

 No configuration required on the client partition; everything is done on the two VIOS.

 Can be used with EtherChannel/802.3ad devices.

 Configured with the mkvdev command on both VIOS (see the sketch below).
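
 A sketch of the SEA failover configuration (adapter names are illustrative: ent0 is the physical adapter, ent2 the virtual trunk adapter, ent3 the control-channel virtual adapter; run the equivalent command on each VIOS with its own device names):

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
ent4 Available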


Shared Ethernet Adapter Failover, Dual VIOS

[Diagram: POWER5 server; a client virtual Ethernet adapter is served by two VIOS (primary and backup), each with a SEA over a physical Ethernet adapter and a control channel between them.]

 Complexity
 Specialized setup confined to the VIOS
 Resilience
 Protection against single VIOS / switch port / switch / Ethernet adapter failure
 Throughput / Scalability
 Cannot do load-sharing between primary and backup SEA (backup SEA is idle until needed)
 SEA failover initiated by:
 Backup SEA detects the active SEA has failed
 Active SEA detects a loss of the physical link
 Manual failover by putting the SEA in standby mode
 Active SEA cannot ping a given IP address
 Notes
 Requires VIOS V1.2 and SF235 platform firmware
 Can be used with any type of client (AIX, Linux, IBM i on POWER6)
 Outside traffic may be tagged
Tips and considerations when using SEA and SEA Failover
 SEA
 Make sure there is no IP configured on either the physical Ethernet
interface or the virtual interface that will be part of the SEA prior to
performing the SEA configuration.
 You can optionally configure an IP address on the new SEA interface after
the configuration is done.
 SEA failover
 If you have multiple SEAs configured on each of the VIO servers, then for
each SEA pair, you need to configure a separate control channel with a
unique PVID on the system.

 Make sure you configure the SEA failover adapter (on the second VIOS) at
the same time you configure the primary adapter.
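
 To see which VIOS currently holds the primary role, or to force a manual failover, something like the following can be used (ent4 is the SEA device name from the sketch above; field and attribute details may vary by VIOS level):

$ entstat -all ent4 | grep -i state
$ chdev -dev ent4 -attr ha_mode=standby
$ chdev -dev ent4 -attr ha_mode=auto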
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
Virtualization: HEA Logical Port Concept

[Diagram: several partitions, each with a logical port, connected through a logical L2 switch to the physical port of the HEA.]

 To an LPAR, an HEA logical port appears as a generic Ethernet interface
 With its own resources and MAC address
 Sharing bandwidth with other logical ports defined on the same physical port

 Logical ports are allocated to partitions
 Each logical port can be owned by a separate LPAR
 A partition can own multiple logical ports
 Only one logical port per physical port per partition
Virtualization: SEA and HEA

[Diagram: with Virtual Ethernet and SEA, an I/O hosting partition runs a packet forwarder that bridges the Linux, IBM i, and AIX virtual Ethernet drivers (connected through the PHYP virtual Ethernet switch) to a physical Ethernet adapter on the network. With HEA, the Linux, IBM i, and AIX Ethernet drivers connect through the PHYP directly to the HEA.]

Considerations
 HEA removes the SW forwarder bottleneck and its forwarding overhead, providing adapter sharing with near-native performance
 10 Gbps links are likely to be shared by multiple partitions
 SEA supports LPAR mobility (HEA/IVE does not; see the LPM restrictions later in this presentation)
When might you use SEA over IVE
 When the number of Ethernet adapters needed on a single partition
is more than the number of physical ports available on the HEA(s)

 If the number of LPARs sharing a physical port exceeds the number of LHEA ports available
 Depends on the type of daughter card and the MCS value

 If you anticipate a future need for more adapters than you have
LHEA ports (LP-HEA) available

 Very small amount of memory on LPAR


 Each LP-HEA needs around 102 MB system memory

 In some situations you might consider using a combination of SEA, IVE, and/or dedicated Ethernet adapters
Considerations when using HEA on VIO
 When the VIO server uses the HEA as a SEA you must set the VIO server
as a promiscuous LPAR for that LHEA

 When in promiscuous mode there is only one LP-HEA per physical port

 The promiscuous LPAR receives all unicast, multicast, and broadcast


network traffic from the physical network.

 Always use flow control and the large_send parameter for all Gigabit and 10 Gbit Ethernet adapters, and the large_receive parameter when using a 10 Gbit Ethernet adapter (VIOS 1.5.1.1), to increase performance (see the sketch below)

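
 A hedged example of setting these attributes on a physical Ethernet adapter from the VIOS (ent0 is illustrative; attribute names can differ by adapter type, so check lsdev -dev ent0 -attr first; -perm records the change in the device database only, taking effect when the adapter is reconfigured or the VIOS is rebooted):

$ chdev -dev ent0 -attr flow_ctrl=yes large_send=yes large_receive=yes -perm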
Promiscuous Mode

Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
Live Partition Mobility concepts

 Mover service partitions (MSP)
 The mover service partition (MSP) is an attribute of the Virtual I/O Server partition. It enables the specified Virtual I/O Server partition to provide the function that asynchronously extracts, transports, and installs partition state. Two mover service partitions are involved in an active partition migration: one on the source system, the other on the destination system. Mover service partitions are not used for inactive migrations.
 Virtual asynchronous services interface (VASI)
 The source and destination mover service partitions use this virtual
device to communicate with the POWER hypervisor to gain access to
partition state. The VASI device is included on the Virtual I/O Server, but
is only used when the server is declared as a mover service partition.
Partition Migration moves Active and Inactive LPARs

Active Partition Migration
 Active Partition Migration is the actual movement of a running LPAR from one physical machine to another without disrupting* the operation of the OS and applications running in that LPAR.
 Supported by all POWER6-based servers
 Applicability
 Workload consolidation (e.g. many to one)
 Workload balancing (e.g. move to larger system)
 Workload migration to newer systems
 Planned CEC outages for maintenance/upgrades
 Impending CEC outages (e.g. hardware warning received)

Inactive Partition Migration
 Inactive Partition Migration transfers a partition that is logically ‘powered off’ (not running) from one system to another.
 Subject to fewer compatibility restrictions than active partition migration because the OS goes through the boot process on the destination.
 Provides some ease of migration from systems prior to those enabled for active migration.
Requisites

 The mobile partition’s network and disk access must be virtualized using one or more Virtual I/O Servers.
 The Virtual I/O Servers on both systems must have a Shared Ethernet Adapter configured to bridge to the same Ethernet network used by the mobile partition.
 The Virtual I/O Servers on both systems must be capable of providing virtual access to all disk resources the mobile partition is using.
 The disks used by the mobile partition must be accessed through virtual SCSI and/or virtual Fibre Channel-based mapping.
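
 From the HMC command line, a migration can be validated and then started with migrlpar; a sketch (the managed-system and partition names are placeholders):

$ migrlpar -o v -m source_system -t target_system -p mobile_lpar
$ migrlpar -o m -m source_system -t target_system -p mobile_lpar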
Live Partition Mobility flow (diagram sequence)

[Each diagram shows the source partition, the source and destination VIOS (VSCSI, VLAN, MSP), the hypervisor (PHYP) on both systems, dedicated and virtual SCSI adapters, and the shared LAN and SAN disks.]

1. Normal running pre-migration (disk I/O shown)
2. Create target partition
3. Transfer partition state
4. Transfer virtual I/O to target
5. Re-attach dedicated adapters (via DLPAR)
6. Clean up unused resources
Devices not supported for LPM

 IVE
 Virtual Optical Device
 Virtual Optical Media
 OS installed on internal disks
 OS installed on logical volumes or file-backed devices
 Virtual Tape
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
VIOS Virtual Tape Support

 Enables client partitions to directly access selected SAS tape devices, sharing resources and simplifying backup & restore operations
 SAS adapter is owned by VIOS partition
 Included with PowerVM Express, Standard, or Enterprise Edition
 Supports AIX 5.3 & 6.1 partitions and IBM i 6.1 partitions
 POWER6 processor-based systems

[Diagram: the SAS adapter is owned by the VIOS and shared with client partitions through virtual SCSI adapters over the Power Hypervisor.]

Tape drives supported
• DAT72: Feature Code 5907
• DAT160: Feature Code 5619
• HH LTO4: Feature Code 5746
VIOS Virtual Tape Support

 Virtual tape device created and managed the same way as a virtual disk
 mkvdev -vdev TargetDevice -vadapter VirtualSCSIServerAdapter
 lsdev -virtual returns results similar to the following:
name status description
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtscsi0 Available Virtual Target Device - Logical Volume
vttape0 Available Virtual Target Device - Tape
 On the client partition, simply run cfgmgr to configure the virtual tape
 Device can be used as a regular tape, for data and OS backup and restore, including
booting from media.
 Automated tape libraries are not supported.
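
 On the AIX client, a minimal sketch of using the virtual tape (the rmt0 device name is illustrative):

# cfgmgr
# lsdev -Cc tape
# mksysb -i /dev/rmt0    (example: OS backup written directly to the virtual tape)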
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
NPIV
 N_Port ID Virtualization (NPIV) provides direct Fibre Channel connections from client partitions to SAN resources, simplifying SAN management
 Fibre Channel Host Bus Adapter is owned by the VIOS partition
 Supported with PowerVM Express, Standard, and Enterprise Edition
 Supports AIX 5.3 and AIX 6.1 partitions
 Power 520, 550, 560, and 570, with an 8 Gb PCIe Fibre Channel Adapter

[Diagram: the physical FC adapter is owned by the VIOS and shared with client partitions through virtual FC adapters over the Power Hypervisor.]

 Enables use of existing storage management tools
 Simplifies storage provisioning (i.e. zoning, LUN masking)
 Enables access to SAN devices including tape libraries

 Statement of Direction
 IBM intends to support N_Port ID Virtualization (NPIV) on the POWER6 processor-based Power 595, BladeCenter JS12, and BladeCenter JS22 in 2009.
 IBM intends to support NPIV with IBM i and Linux environments in 2009.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
NPIV details

 VIOS V2.1 (PowerVM Express, Standard, and Enterprise)

 Client OS support: AIX (5.3, 6.1); later in 2009, Linux and IBM i
 POWER6 only; Blade and High-End support in 2009
 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
 Compatible with Live Partition Mobility (LPM)
 VIO servers can support NPIV and vSCSI simultaneously
 Clients can support NPIV, vSCSI and dedicated Fibre Channel simultaneously
 HMC-managed and IVM-managed servers
 Unique Worldwide Port Name (WWPN) generation (allocated in pairs)
NPIV Simplifies SAN Management

[Diagram: with the current virtual SCSI model (POWER5 or POWER6), the AIX client sees generic SCSI disks that the VIOS virtualizes from its shared FC adapters on the SAN (DS8000, EMC). With N_Port ID Virtualization (POWER6), the AIX client sees the actual SAN disks (DS8000, EMC) over virtual FC adapters, with the VIOS passing traffic through its FC adapters.]
Partition SAN access through NPIV
SAN Switch requirements

 Only the first SAN switch attached to the Fibre Channel adapter needs to be NPIV capable
 Other switches in the environment do not need to be NPIV capable
 Not all ports on the switch need to be configured for NPIV, just the one which the adapter will use
 Check with your storage vendor to make sure the switch is NPIV capable
 Order and install the latest available firmware for your SAN switch
Create a Virtual Fibre Channel Adapter

 Client/server relationship similar to Virtual SCSI


 VSCSI Server on the VIOS, client on the client partition
 VFC Server on the VIOS, VFC client on the client partition
Mapping the adapter

 vfcmap: binds the VFC server adapter to the physical Fibre Channel port

 vfcmap -help
 Usage: vfcmap -vadapter VFCServerAdapter -fcp FCPName
 Maps the Virtual Fibre Channel Adapter to the physical Fibre Channel Port
 -vadapter Specifies the virtual server adapter
 -fcp Specifies the physical Fibre Channel Port
 Example: vfcmap -vadapter vfchost0 -fcp FCPName

 After the mapping is done, run cfgmgr on the client partitions to configure the SAN devices
 Before this step, zoning (if used) must be done on the switches. The virtual adapter WWPN can be obtained on the HMC.
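
 A hedged end-to-end sketch on the VIOS (vfchost0 and fcs0 are illustrative; lsnports lists the NPIV-capable physical ports):

$ lsnports
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -npiv -vadapter vfchost0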
WWPN for the Virtual Adapter
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
Dynamic Heterogeneous Multi-Path I/O
 Delivers flexibility for Live Partition Mobility environments
 Provides efficient path redundancy to SAN resources

 Supported between virtual NPIV and physical Fibre Channel adapters
 AIX 5.3 and 6.1 partitions
 POWER6 processor-based servers

[Diagram: two systems, each with a VIOS that owns physical FC adapters and serves a virtual FC (NPIV) adapter through the Power Hypervisor. 1) The partition uses a real adapter. 2) A virtual adapter is added to prepare for mobility. 3) The partition moves via the virtual adapter. 4) A real adapter is used again on the target system.]
Agenda

 Virtual Optical Devices, Virtual Optical Media, File-backed Devices


 Shared Ethernet Adapter (SEA) Failover
 SEA over Host Ethernet Adapter (HEA)
 Live Partition Mobility configuration
 Virtual Tape
 N-Port ID Virtualization (NPIV)
 Heterogeneous Multipathing
 Active Memory Sharing
PowerVM Active Memory™ Sharing
 Active Memory Sharing will intelligently flow memory from one partition to
another for increased utilization and flexibility of memory usage.

 Memory virtualization enhancement for Power Systems


 Memory dynamically allocated based on partition’s workload demands
 Contents of memory written to a paging device
 Improves memory utilization

 Extends Power Systems Virtualization Leadership


 Capabilities not provided by Sun and HP virtualization offerings

 Designed for partitions with variable memory requirements


 Low average memory requirements
 Active/inactive environments
 Workloads that peak at different times across the partitions

 Available with PowerVM Enterprise Edition


 AIX 6.1, Linux and i 6.1 partitions that use VIOS and shared processors
 POWER6 processor-based systems

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Active Memory Sharing Enables Higher Memory Utilization

[Charts: memory allocation vs. memory requirements over time for three partitions, comparing dedicated memory with shared memory.]

 Partitions with dedicated memory
 Memory is allocated to partitions
 As workload demands change, memory remains dedicated
 Memory allocation is not optimized to workload

 Partitions with shared memory
 Memory is allocated to shared pool
 Memory is used by partition that needs it enabling more throughput
 Higher memory utilization
Active Memory Sharing Examples

[Charts: memory usage (GB) over time for each scenario.]

 Around the World
 Partitions support workloads with memory demands that peak at different times (Asia, Americas, Europe)

 Day and Night
 Partitions support day time web applications and night time batch

 Infrequent use
 Large number of partitions with sporadic use
When not to use AMS

 High Performance Computing (HPC) applications that have high and constant memory usage
 Crash analysis, CFD, etc.
 Databases that have fixed buffer cache allocation
 Generally use all the available memory on the partition, and buffer cache
paging is undesirable
 Realtime, fixed response-time type of applications
 Predictability is key, so resources should not be shared
References

 Using File-Backed Virtual SCSI Devices, by Janel Barfield


 http://www.ibmsystemsmag.com/aix/februarymarch09/tipstechniques/24273p1.aspx
 Configuring Shared Ethernet Adapter Failover
 http://techsupport.services.ibm.com/server/vios/documentation/SEA_final.pdf
 Integrated Virtual Ethernet Adapter Technical Overview and Introduction
 http://www.redbooks.ibm.com/abstracts/redp4340.html
 IBM PowerVM Live Partition Mobility
 http://www.redbooks.ibm.com/abstracts/sg247460.html

 Power Systems Virtual I/O Server


 http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphb1/iphb1.pdf

Gracias

César Diniz Maciel


cmaciel@us.ibm.com
forotecnicoargentina.com/facebook
