Exceptional price-performance
per watt for WebSphere with
AIX or Linux OS
Stephen Hochstetler
Benjamin Ebrahimi
Tom Junkin
Bernhard Zeller
ibm.com/redbooks
International Technical Support Organization
October 2006
SG24-7273-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page ix.
This edition applies to the IBM BladeCenter JS21, Linux, and IBM AIX 5L Version 5.3, product
number 5765-G03.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
7.5.8 Logical partition configuration changes . . . . . . . . . . . . . . . . . . . . . . 235
7.6 Network management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.7 Advanced storage management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.7.1 Virtual storage assignment to a partition . . . . . . . . . . . . . . . . . . . . . 244
7.7.2 Virtual disk extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
7.7.3 VIOS system disk mirroring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.8 Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.8.1 VIOS maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.8.2 Logical partition maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.8.3 Command logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Java, Power Management, Solaris, and all Java-based trademarks are trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
i386, Intel, MMX, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This IBM Redbook takes an in-depth look at the IBM BladeCenter JS21. This is a
2-core or 4-core blade server for applications requiring 64-bit computing. It is
ideal for compute-intensive applications and transactional Internet servers. This
book helps you to install, tailor, and configure the IBM BladeCenter JS21 running
either IBM AIX® 5L™ or the Linux® operating system (OS).
This document expands the current set of IBM BladeCenter JS21 documentation
by providing a desktop reference that offers a detailed technical description of
the JS21 system. This publication does not replace the latest IBM BladeCenter
JS21 marketing materials and tools. It is intended as an additional source of
information that you can use, together with existing sources, to enhance your
knowledge of IBM blade solutions.
Bernhard Zeller is an I/T Specialist for Technical Sales System p in the IBM
Systems & Technology Group (STG), IMT Germany, in Mannheim. He has
17 years of experience in the AIX 5L and IBM System p field. He holds a degree
in electronics engineering from Fachhochschule Aalen. He has worked in IBM for
17 years. His areas of expertise include networking, operating systems, server
management, and speech synthesis technology.
Betsy Thaggard
ITSO, Austin Center
Donn Bullock
IBM, Raleigh
Kerry C. Anders
Walter Butler
Catherine Kostetsky
Frank Petsche
IBM, Austin
Torsten Bloth
IBM Germany
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
IBM BladeCenter JS21: The POWER of Blade Innovation
Learn more about IBM BladeCenter and its components in The Cutting Edge:
IBM eServer BladeCenter, REDP-3581.
Designed with the demands of enterprise and scientific computing in mind, the
BladeCenter JS21 is a highly differentiated solution for high-performance Linux
clusters, seismic analysis for oil and gas, UNIX® applications for retail and
Operating frequency: 50 Hz or 60 Hz
Number of cores: 2 or 4
Hard disk drives (HDD): Integrated SAS or Redundant Array of Independent Disks (RAID) controller
Weight
Figure 2-1 shows the JS21 with the cover off. You can see the two processors,
the fully populated memory slots, and the two optional SAS disks installed.
Figure 2-1 View of the BladeCenter JS21 with the cover off
Family JS21 Family of 64-bit PowerPC technology based blade servers, running
64-bit AIX 5L or Linux operating systems
Figure 2-2 shows the memory, SAS disk, and expansion option connectors,
which are the same for the 31X and 51X models.
The following list provides some of the optional features available on the
BladeCenter JS21. For a complete list of the supported modules and adapters,
see the following IBM ServerProven® Web site:
http://www.ibm.com/servers/eserver/serverproven/compat/us/eserver.html
Up to 16 GB of system memory
Up to 2 internal hard disk drives for up to 146 GB of internal storage
Support for small-form-factor (SFF, 2.5-inch) 36 GB/73 GB SAS 10,000 RPM HDDs
Standard or SFF Gigabit Ethernet expansion cards
AltiVec
AltiVec is an extension to the IBM PowerPC Architecture™. It defines
additional registers and instructions to support single-instruction multiple-data
(SIMD) operations that accelerate data-intensive tasks.
Figure 2-3 illustrates the difference between scalar and vector operations.
In a scalar add, one pair of operands is processed per instruction (for
example, 7 + 5 = 12). In a vector add, four pairs of elements are summed in a
single operation.
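The scalar-versus-vector contrast can be sketched with a plain shell loop. The two four-element vectors below are example values; this is only an illustration of element-wise addition, not actual AltiVec code:

```shell
# Element-wise add of two 4-element "vectors". A scalar CPU performs the
# four additions one at a time; an AltiVec vector add performs all four
# in a single instruction.
a="1 6 4 3"
b="6 11 7 5"
set -- $b
sum=""
for x in $a; do
  sum="$sum$((x + $1)) "
  shift
done
echo "element-wise sums: $sum"
```

A real implementation would use AltiVec intrinsics or compiler vectorization; the loop only mimics the arithmetic.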
The concept of vector processing has existed since the 1950s. Early
implementations of vector processing (known as array processing) were installed
in the 1960s. They used special purpose peripherals attached to general
purpose computers. An example is the IBM 2938 Array Processor, which could
be attached to some models of the IBM System/360™. This was followed by the
IBM 3838 Array Processor in later years.
The VXU operates on vectors that are a total of 128 bits long. These can be
interpreted by the VXU as either:
A vector of sixteen 8-bit bytes
A vector of eight 16-bit half words
A vector of four 32-bit words
The vector register file (VRF) contains 32 registers, VR0 through VR31, each
128 bits wide (bits 0 through 127).
The AltiVec extensions to PowerPC Architecture define new instructions that use
the VXU to manipulate vectors stored in the VRF. These instructions fall into the
following categories:
Vector integer arithmetic instructions (on 8-bit, 16-bit, or 32-bit integers)
Vector floating-point arithmetic instructions (32-bit only)
Vector load and store instructions
Vector permutation and formatting instructions
Processor control instructions used to read from and write to the AltiVec
status and control register
Memory control instructions used to manage caches
For technical details about AltiVec, visit one of the following Web sites:
Unrolling AltiVec, Part 1: Introducing the PowerPC SIMD unit
http://www.ibm.com/developerworks/power/library/pa-unrollav1/
High-Performance Processors: Altivec Technology
http://www.freescale.com/files/32bit/doc/fact_sheet/ALTIVECFACT.pdf
AltiVec - Wikipedia
http://en.wikipedia.org/wiki/AltiVec
The 2-core BladeCenter JS21 ships with 1 GB main memory (two PC2-3200 512
MB DIMMs). The 4-core BladeCenter JS21 ships with 2 GB main memory (two
PC2-3200 1 GB DIMMs).
You do not have to install a hard disk drive if you have installed the QLogic 4Gb
SFF Fibre Channel expansion card and configured the BladeCenter JS21 to boot
from storage area network (SAN).
If the existing power modules are replaced with the 2000 watt power modules,
you must upgrade the management module firmware. If two management
modules are installed in the BladeCenter chassis, upgrade both management
modules to the same level of firmware.
BladeCenter H (8852)
This BladeCenter H is a 9U rack mountable chassis that contains bays for up to
14 blade servers, four power modules, four switch modules, four high-speed
switch modules, four high-speed bridge modules, and two management
modules. The BladeCenter H requires 2900 watt hot-swap redundant power
supply modules. Figure 2-8 shows the front view of this chassis.
The standard redundant power supplies are installed in power bays one and
three of the BladeCenter H. They provide power to the first seven blade server
bays. To install blade servers in the remaining bays, 8 through 14, you must
install an additional pair of redundant power supply modules in power bays two
and four.
If your BladeCenter chassis was shipped before June 2003, an update to the
interface card on the media tray might be required for proper CD-ROM operation
with the BladeCenter JS21. To determine the part number of your existing media
tray, from the management module Web interface, under the heading Monitors
in the left column, select Hardware vital product data (VPD), and then look at
the module name media tray. If the field-replaceable units (FRU) number of the
media tray is 59P6629, call your hardware support center and request a free
replacement media tray.
Both the AIX 5L and Linux operating systems use a 64-bit kernel and run both
64-bit and 32-bit programs. Some of the included programs are compiled as
64-bit where the code fully exploits the 64-bit address space.
3.1.1 AIX 5L
AIX has evolved from its beginnings on the IBM RT to become the operating
system of choice for the largest UNIX servers of IBM. AIX is an enterprise
operating system that scales from workstations all the way up to massively
parallel supercomputers.
With AIX 5L installed, the JS21 can run thousands of software titles that were
written for the AIX 5L platform, taking full advantage of the rich capabilities of
AIX 5L. At the time of writing of this book, AIX 5L 5.2M and AIX 5L 5.3E support
the BladeCenter JS21.
By early 2003, Red Hat stopped developing its non-enterprise distribution and
focused its efforts on Red Hat Enterprise Linux. Red Hat elected to turn the
development of its free software over to the community. This free
community-supported distribution became known as Fedora. Red Hat Enterprise
Linux AS 4 - Update 3 and later is supported on the JS21.
SUSE Linux Desktop is free to download, but SLES is available only with the
purchase of a maintenance contract. The desktop product is also available only
for x86_64 and IA32, but SLES supports IA32, IA64, ppc/ppc64, s390, s390x, and
x86_64.
SLES Version 9 Update 2 and Update 3 are supported on the BladeCenter JS21.
SLES9, the latest version at the time of writing, ships with the 2.6 Linux kernel.
This brings significant gains to performance and scalability. SLES is AltiVec
savvy and has versions of GNU C Compiler (gcc) that allow manual exploitation
of the vector engine.
Restriction: The BladeCenter JS21 requires version 1.2.1 or later of the VIOS
for logical partitions (LPARs).
IVM provides a simple management model for a single system such as a JS21.
Although it does not provide all of the HMC capabilities, it enables the
exploitation of the IBM Virtualization Engine™ technology.
The VIOS is automatically configured to own all of the I/O resources and it can
be configured to provide service to other LPARs through its virtualization
capabilities. Therefore, no other LPAR can own physical adapters; the other
LPARs must access disk, network, and optical devices only through the VIOS as
virtual devices. In all other respects, the LPARs operate as before with regard
to processor and memory resources.
Figure 3-1 shows a sample configuration using IVM. The VIOS owns all the
physical adapters, and the other two partitions are configured to use only virtual
devices. The administrator can use a browser to connect to IVM to set up the
system configuration.
Because IVM runs on an LPAR, it has limited service-based functions, and the
BladeCenter management module must be used. For example, a system power
on must be performed by physically pushing the system power on button or
remotely accessing the BladeCenter management module, because IVM does
not run while the power is off. The BladeCenter management module and IVM
together provide a simple but effective solution for a single partitioned server.
LPAR management using IVM is through a common Web interface developed for
basic administration tasks. Being integrated within the VIOS code, IVM also
handles all virtualization tasks that normally require VIOS commands to be run.
Because IVM relies on the Virtual Management Channel (VMC) to set up logical partitioning, it can manage only the
system on which it is installed. For each IVM managed system, the administrator
must open an independent Web browser session.
Figure 3-2 provides the schema of the IVM architecture. The primary user
interface is a Web browser that connects to port 80 of the VIOS. The Web server
provides a simple graphical user interface (GUI) and runs commands using the
same command-line interface (CLI) that can be used to log on to the VIOS. One
set of commands provides LPAR management through the VMC, and a second
set controls VIOS virtualization capabilities.
One of the main building blocks in this strategy is the use of simple yet powerful
software to oversee the management of many different systems. This section
investigates some of the possible options that are available for managing such
systems. It reviews the BladeCenter Web interfaces, IBM Director, and Cluster
Systems Management (CSM).
From the IBM Director console, system administrators can monitor resource
utilization. You can use this key feature for performance and capacity planning,
and to alert support staff of critical errors such as hardware failure.
IBM Director also allows the remote management of software. You can remotely
reconcile and install software from the console interface. This can be useful in
large environments for resource-intensive tasks such as patch management. By
using the Software Distribution Premium Edition, you can remotely install
patches to several servers at the same time with the click of a button.
You can learn more about IBM Director in 10.2, “IBM Director” on page 346.
CSM is a collection of components that have been integrated to provide the basis
to construct and manage a cluster. Each of these components provides specific
capabilities related to the management of the cluster. This component-based
architecture provides flexibility for future expansion of the capabilities provided
by CSM. Each of the CSM components can be easily personalized to help meet
specific needs. For example, a cluster administrator can set up monitoring of
The BladeCenter JS21 has native virtualization features that are enabled by
ordering and installing the no-charge VIOS. Without the VIOS installed, the
BladeCenter JS21 is a 2-core or 4-core symmetric multiprocessor (SMP) server.
For more information about virtualization, see the following Web site:
http://www.ibm.com/servers/eserver/about/virtualization/
POWER Hypervisor
Combined with features designed into the PowerPC 970MP, the IBM POWER
Hypervisor™ delivers functions that enable other system technologies, including
Micro-Partitioning, virtualized processors, an Institute of Electrical and
Electronics Engineers (IEEE) virtual local area network (VLAN)-compatible
virtual switch, virtual Small Computer System Interface (SCSI) adapters, and
virtual consoles.
The POWER Hypervisor is a component of system firmware that is always
active, regardless of the system configuration. Therefore, it requires no separate
license apart from the VIOS for setup and usage.
Virtual Ethernet
The POWER Hypervisor provides a virtual Ethernet switch function that gives
partitions on the same server a means of fast and secure communication.
Virtual Ethernet, which is based on LAN technology, allows a transmission
speed in the range of 1 GBps to 3 GBps, depending on the maximum transmission
unit (MTU)
size. Virtual Ethernet requires a BladeCenter JS21 running either AIX 5L
Version 5.3 or the level of Linux supporting virtual Ethernet devices. Virtual
Ethernet is part of the base system configuration.
Note: The POWER Hypervisor is active when the server is running in partition
and non-partition mode. Consider the Hypervisor memory requirements when
planning the amount of system memory required. In AIX 5L V5.3, use the
lshwres command to view the memory usage.
lshwres -r mem --level sys -F sys_firmware_mem
You can also determine this using the console of the IVM: View/Modify
Partitions → System Overview → Reserved Firmware Memory.
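As a planning sketch, the reserved firmware memory that lshwres reports can be subtracted from the installed memory to estimate what remains for partitions. Both figures below are made-up example values, not output from a real system:

```shell
# Hypothetical values: 4096 MB installed; 544 MB reported by
# lshwres -r mem --level sys -F sys_firmware_mem
total_mb=4096
firmware_mb=544
avail_mb=$((total_mb - firmware_mb))
echo "Memory available to partitions: ${avail_mb} MB"
# prints: Memory available to partitions: 3552 MB
```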
The virtualization feature includes an installation image for the VIOS
software, which supports the following features:
Ethernet adapter sharing
Virtual SCSI Server
VIO software ships on a DVD
Software support of:
– AIX 5L V5.3
– SUSE Linux Enterprise Server 9 (SLES9) for POWER (Service Pack 3 or
later)
– Red Hat Enterprise Linux AS 4 for POWER
Partition management using IVM (VIOS V1.2.1 or later)
The maximum values stated in Table 4-1 are supported by the hardware.
However, the practical limits based on production workload demands and
application utilization might be significantly lower.
Because the VIOS is an AIX 5L V5.3 operating system-based appliance, you can
provide redundancy for physical devices attached to the VIOS by using
capabilities such as Multipath I/O and IEEE 802.3ad Link Aggregation. Install the
VIOS partition from a special bootable DVD that is provided when you order the
VIOS. This dedicated software is only for VIOS operations; therefore, the
VIOS software is supported only in VIOS partitions.
Two major functions are provided with the VIOS: a shared Ethernet adapter and
Virtual SCSI. The following sections discuss these functions.
All current storage device types, such as storage area network (SAN), SCSI, and
RAID are supported. iSCSI and Serial Storage Architecture (SSA) are not
supported. For more information about the specific storage devices that are
supported, visit the following Web site:
http://techsupport.services.ibm.com/server/vios/home.html
The IVM provides a simple management model for a single system such as a
BladeCenter JS21. Although it does not provide the full flexibility of an HMC, it
enables the exploitation of the IBM Virtualization Engine technology. The
BladeCenter JS21 is ideally suited for management using the IVM; the HMC is not supported on the BladeCenter JS21.
You can reconfigure resources on client LPARs without recycling the whole
server. You can use IVM to reconfigure resources across client LPARs if the
LPARs are stopped. This dynamic movement of resources does not affect other
LPARs that are running.
For clients: The BladeCenter JS21 ships with a documentation CD that provides
multiple installation, user, and setup guides that are integral to your efficient
implementation of the BladeCenter. Current updates to these documents are
available online for your review and planning considerations. Visit the following
Web site:
http://www.ibm.com/pc/support/site.wss
Figure 5-1 depicts the logical network view: the SoL subnet, the operating
system management subnet, and the hardware management subnet, which connects
the management modules, the I/O modules, and each blade server through its
eth0 and eth1 interfaces.
You can use this subnet to access the management module Web interface and
command-line interface (CLI). You can also use this subnet to access the Web
interface and CLI of I/O modules. System management applications such as the
IBM Director or CSM also use this subnet to communicate with the hardware
management functions of the BladeCenter infrastructure.
Note: Although the logical network view (illustrated in Figure 5-1 on page 48)
shows the I/O management module interfaces connecting directly to the
hardware management subnet, they are physically connected using the
management module, which acts as gateway to these interfaces. The
management module performs a proxy Address Resolution Protocol (ARP)
function to make it appear as though the I/O module management interfaces
are attached to the hardware management subnet.
If you use the 4-Port Gigabit Ethernet switch module or the Nortel Networks
Layer 2-7 Gigabit Ethernet switch module, the VLAN uses VLAN ID 4095. If you
use the Cisco Systems Intelligent Gigabit Ethernet switch module, you must
configure the VLAN ID to be 4095.
You must assign a unique range of IP addresses to this subnet for use by the
SoL remote text console function. You need to specify only the starting IP
address within the range of IP addresses that you assign to the management
module. The management module then automatically assigns consecutive IP
addresses from the starting address that you provide to each blade server that
you have installed.
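This assignment scheme can be sketched as follows. The starting address 10.1.1.80 and the count of four blades are example values, and the sketch assumes the range stays within a single /24 subnet:

```shell
# The management module assigns consecutive SoL addresses, one per
# installed blade, starting from the address you configure.
start="10.1.1.80"
blades=4
prefix=${start%.*}     # network portion: 10.1.1
octet=${start##*.}     # starting host octet: 80
addrs=""
i=0
while [ "$i" -lt "$blades" ]; do
  addrs="$addrs ${prefix}.$((octet + i))"
  i=$((i + 1))
done
echo "SoL addresses:$addrs"
# prints: SoL addresses: 10.1.1.80 10.1.1.81 10.1.1.82 10.1.1.83
```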
You can learn how to configure the SoL subnet and VLAN in 6.7.1, “Configuring
Serial over LAN” on page 124.
The operating system management subnet is used to support both the initial
installation and subsequent management of the operating systems installed on
BladeCenter JS21s. This subnet is implemented using a VLAN provided by the
Ethernet Switch I/O modules installed in I/O module bay 2 of each BladeCenter
chassis.
You might want to install both AIX and Linux operating systems on different
BladeCenter JS21s in the same environment. In this case, you might have to set
up multiple operating system management subnets and underlying VLANs, one
for blade servers running AIX and the other for blade servers running Linux if you
are performing network installations that use broadcast packets.
To install a Myrinet network, you must install a Myrinet expansion card on each
BladeCenter JS21 that requires connectivity to the high-performance,
low-latency interconnection network. You must also install an optical pass-thru
I/O module in I/O module bay 4 of each BladeCenter chassis that contains blade
servers equipped with Myrinet expansion cards. Then connect the optical
pass-thru I/O module to external Myrinet switches to complete the Myrinet
network infrastructure.
The figure shows break-out cables connecting the optical pass-thru I/O module
to an external Myrinet switch.
You can define IP addresses for each Myrinet network interface. You can also
use the Myrinet network infrastructure to support any application communication
based on IP protocols. For example, you can use this capability to support a
clustered file system such as IBM General Parallel File System (GPFS).
You can define a dedicated IP subnet for use with the Myrinet network
infrastructure that is distinct from the other IP subnets that are identified in 5.2.1,
“Minimal network requirements” on page 47.
AIX 5L
The following AIX 5L technology levels support installation on the JS21.
AIX 5L for POWER (Performance Optimization With Enhanced RISC) V5.2
with the 5200-08 technology level (APAR IY77270), plus
APAR IY80493 or a later technology level
AIX 5L for POWER V5.3 with the 5300-04 technology level (APAR IY77273),
plus APAR IY80499 or a later technology level
Advanced POWER Virtualization for AIX 5L V5.3 and Linux environments
requires Virtual I/O Server (VIOS) V1.2.1 (5765-G34)
Linux
The following Linux distributions support installation on the JS21.
SUSE Linux Enterprise Server 9 for POWER Service Pack 3 (SP3), or later
Red Hat Enterprise Linux AS 4 for POWER update 3, or later
For AltiVec-optimized application development
There are several methods to install the JS21 in a BladeCenter. The first is an
attended method, using CD-ROM media. The second is a remote method
through a network installation. You can accomplish network installation using
NIM, Linux installation server, IBM Director, or CSM depending on the operating
systems involved. NIM and Linux installation servers are discussed in Chapter 8,
“Installing AIX” on page 259 and Chapter 9, “Installing Linux” on page 281.
With network installation, you can perform several installations at the same time.
The method is designed to reduce the installation time required when a large
number of blades require operating system installation. We chose this method
as the focus for the following sections.
The remainder of this section focuses on the planning considerations for the first
approach. You can find planning considerations for the second approach in
Chapter 7 of Cluster Systems Management for AIX and Linux V1.5 Planning and
Installation Guide, SA23-1344. You can find this guide at:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=
/com.ibm.cluster.csm.doc/clusterbooks.html
You can use one physical server to provide all three of these network services. If
you want to install AIX on BladeCenter JS21s, this server must run AIX. If you
want to install Linux on the BladeCenter JS21s, we recommend that you use the
same Linux distribution in the network installation server that you are planning to
install on the BladeCenter JS21s.
In some situations, you might want to install AIX on some BladeCenter JS21s
and Linux on other BladeCenter JS21s. Although it is possible to use a single
AIX server to do this, we recommend that you establish two network installation
servers, one for installing AIX and the other for installing your chosen Linux
distribution.
One approach is to place the BladeCenter JS21s running AIX and the AIX
network installation server on a different VLAN from the BladeCenter JS21s
running Linux and the Linux installation server. If you use this approach, you
must also disable the relay of BOOTP or DHCP requests in your network
routers.
To learn how to set up AIX network installation servers, see 8.3.3, “Configuring
the NIM master” on page 263. For installation instructions for specific Linux
distributions, see 9.2.1, “Installing Linux using the network: General remarks” on
page 284. For instructions about setting up fundamental network installation
parameters for Linux, see 9.2, “Basic preparations for a Linux network
installation” on page 283.
The BOOTP protocol has been around since the mid-1980s. In many
environments, BOOTP has now been replaced by the newer DHCP protocol.
DHCP was designed to interoperate with BOOTP, and most DHCP servers can
serve BOOTP clients.
When you perform network installations of AIX, use the AIX BOOTP server.
When you perform network installations of Linux, use a DHCP server that is
configured to support BOOTP clients.
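As a sketch, an ISC DHCP server can be told to answer BOOTP clients with a configuration fragment like the one below. All addresses, the dynamic-bootp range, and the boot file name are placeholder values that you would replace with the details of your own installation network:

```shell
# Write an example dhcpd.conf fragment that enables BOOTP support
# ("allow bootp" and a dynamic-bootp address range are the key lines).
cat > /tmp/dhcpd.conf.example <<'EOF'
allow bootp;
subnet 192.168.1.0 netmask 255.255.255.0 {
    range dynamic-bootp 192.168.1.100 192.168.1.120;
    next-server 192.168.1.10;               # network installation server
    filename "install/images/netboot.img";  # boot image for the JS21
}
EOF
grep -c "bootp" /tmp/dhcpd.conf.example
# prints: 2
```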
Restriction: The BladeCenter JS21 does not support the keyboard, video,
and mouse (KVM) console supported by other blade servers at this time.
Instead, the BladeCenter JS21 supports SoL remote text consoles through the
BladeCenter management module.
The SoL remote text console function uses the first Gigabit Ethernet interface on
each BladeCenter JS21 for communication between the management module
and the service processor found in each BladeCenter JS21. This interface is
known as eth0 under Linux and en0 under AIX. Under normal conditions, the
SoL remote text console function's use of the Gigabit Ethernet interface is
entirely transparent. It does not impact use of the Ethernet interface by the
operating system running on the BladeCenter JS21.
To use the second interface, you must have a LAN switch I/O module, or
pass-thru I/O module connected to an external LAN switch, installed in I/O
module bay 2. You can learn more about why you must use the second Gigabit
Ethernet interface for network installation in the IBM RETAIN® tip H181655 on
the Web at:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-55282
IBM Director enables you to remotely manage many IBM and non-IBM servers
including the BladeCenter JS21. The IBM Director console allows system
administrators to manage multiple BladeCenter chassis in a heterogeneous
environment or in environments where a Director infrastructure already exists.
IBM Director V5.1 supports the following functions on the BladeCenter JS21:
Events
Resource monitoring
Inventory (limited)
Remote session
Software distribution
This section reviews the major planning considerations associated with the
installation of IBM Director. For a comprehensive treatment of planning for IBM
Director, refer to Implementing IBM Director 5.10, SG24-6188.
For more planning information, refer to Cluster Systems Management for AIX
and Linux V1.5 Planning and Installation Guide, SA23-1344.
CSM requires specific support for each operating system version that runs
on the servers that it manages. At the time of publication, CSM
Version 1.5.1 supports the following operating systems on the BladeCenter
JS21:
AIX 5L Version 5.2 with APAR IY77440
AIX 5L Version 5.3 with APAR IY7740
The default base versions of SUSE Linux Enterprise Server 9 (SLES9) for
POWER systems
The default base versions of Red Hat Enterprise Linux (AS) 4 for POWER
Certain CSM for Linux on POWER functions require non-IBM software. The
following non-IBM software is required. You can obtain it from the listed sources.
AutoUpdate V4.3.4, or later levels
You must use this tool if you want to perform software maintenance
(installation and upgrade) of non-CSM RPMs on your Linux managed nodes
from the management server. Download the software from:
http://freshmeat.net/projects/autoupdate
sg3_utils-1.06-1.ppc64
http://people.redhat.com/pknirsch
Planning elements for CSM for AIX and Linux are covered in detail in Chapter 3
of Cluster Systems Management for AIX and Linux V1.5 Planning and
Installation Guide, SA23-1344. To check for updates to this document,
refer to:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=
/com.ibm.cluster.csm.doc/clusterbooks.html
You can use a CSM management node to both install and manage BladeCenter
JS21s, or you can use a management node to manage BladeCenter JS21s that
are installed using other mechanisms. Most environments use CSM to both
install and manage BladeCenter JS21s; therefore, we focus on this scenario.
If you use a management node to both install and manage BladeCenter JS21s
running Linux, you must use the same type of Linux distribution on both the
management node and the BladeCenter JS21s. However, the management
node does not require the same processor architecture that is used in the
BladeCenter JS21. For example, if you plan to install SLES on BladeCenter
JS21s, you can use the following types of management node and operating
system combinations to both install and manage the BladeCenter JS21s:
A supported IBM eServer xSeries® server running SLES
A supported IBM eServer pSeries® server running SLES
An IBM HS20 Blade Server running SLES
An IBM BladeCenter JS21 running SLES
All BladeCenter chassis run with one or two management modules (MM) or
advanced management modules (AMM). BladeCenter-H supports the AMM only.
The first management module must be located in management module bay 1
and the second in management module bay 2. The Ethernet switch modules
must be located in I/O module bays 1 and 2. Additional modules can be added in
the remaining I/O module bays, depending on the type of the chassis. Because
the interfaces are always paired and the pair associated with I/O module bays 1
and 2 is always Ethernet, there must be Ethernet switches in these bays.
However, the type of Ethernet switches might be different in bay 1 compared to
bay 2.
Note: This schematic does not exactly match the real electrical design. It is a
simplified abstraction to explain network paths for planning and configuring the
BladeCenter’s network setup. You can find a more detailed schematic in
BladeCenter chassis management:
http://www.research.ibm.com/journal/rd/496/brey.pdf
Figure 6-3 Simplified schematic of the BladeCenter network paths. The figure shows
the management module's management interface (HTTP/HTTPS, Telnet/SSH, SNMP,
LDAP); two Ethernet switch modules, each with management ports MGT 1 and MGT 2,
internal ports INT 1 through INT 14, and external uplink ports EXT 1 through EXT 4;
the BladeCenter JS21 Ethernet ports (Port 1/eth0, Port 2/eth1) and the BSMP built-in
Ethernet controllers; the production VLANs; the SoL VLAN (default ID = 4095); the
untagged management VLAN; the network (LAN); an optional network; and the
external management connection.
Note: BladeCenter HS20-8678 blade servers map the switch modules in the
opposite way: The switch module in module bay 1 accesses the blade server’s
eth1 interface, and the switch module in module bay 2 accesses the blade
server's eth0 interface.
To use HS20-8678 blade servers in the same chassis with other HS20 blade
servers, you must upgrade the HS20-8678 blade server basic input/output
system (BIOS) to level 1.05 or later. Then modify the BIOS settings for each
HS20-8678 blade server. When you turn on the server, press F1 to enter
BIOS setup. In the BIOS setup screen, select BIOS setup → Advanced
Setup → Core Chipset Control → Swap the numbering of onboard NICs
[Yes].
In Figure 6-3 on page 67, we show two Ethernet switch modules (ESM) in I/O
module bays 1 and 2. The ESMs have two internal management ports going to
MM1 and MM2 (MM2 is optional and not shown), 14 internal ports going to the
blades and four to six external (uplink) ports to connect to the external
production network. To connect the management module’s external interface
(eth0) to the external management network, you must use the management
module’s remote management and console RJ45 Ethernet connector.
Note: The management module provides the only access to the internal
management network. This means that you cannot access the internal
management network through the Ethernet switch modules from the
production network, from outside the BladeCenter (EXTx ports), or from a
blade server itself (INTx ports).
Note: Due to the proxy ARP function of the management module, all of the IP
addresses of the internal management network appear in the outside network
associated with the hardware Medium Access Control (MAC) address of the
management module's eth0 interface. These addresses are the management
module's internal port eth1 (the advanced management module has no internal
port eth1) and the management ports of the ESMs (when they are accessed
through the management module rather than through the external uplink ports).
In the I/O module bays there are connections to both networks, the internal
management network and the internal production network.
Note: Although the internal management network and the internal production
network are connected through the ESM, there are methods implemented to
separate the management and production traffic. BladeCenter ESMs have
certain hard-coded filters that prevent any traffic that enters any of the
upstream ports from exiting out of the management module facing ports and
vice versa. This also prevents any unexpected spanning-tree loops.
The only way to control the management module is through the external RJ45
Ethernet connector of the management module. It is also not possible to pass
substantive data between switch modules across the midplane using the
MGTx ports. The ESM does not forward data between the MGTx ports and
any of the internal (INTx) or external (EXTx) uplink ports. If you want to pass
data from one switch module to another, then the modules must either be
cabled directly to each other or connected by way of an external switch or
router.
This is especially important if you have enabled “External management over all
ports” (see 6.4.2, “Setting external management over all ports” on page 87) for
the ESM (represented by a closed “External management over all ports” switch
in Figure 6-3 on page 67). In this case, you can reach the ESM's management
interface (HTTP, Telnet, Secure Shell (SSH), Simple Network Management
Protocol (SNMP), and so on) through two paths: from the external management
network through the management module, which passes traffic through to the
internal management port of the ESM, and from the external production network
through the ESM's external ports. Even in this scenario, the ESM's management
interface does not pass data through from one path to the other.
If you do not have a separate subnet available for management, you can also
attach the ESM external ports to this subnet. To keep duplicate IP addresses
from being reported, the default setting for "External management over all
ports" is off (see 6.4.2, "Setting external management over all ports" on
page 87). If you want to turn on the "External management over all ports"
setting, keep the management module Ethernet interface in a separate network,
because this is the proper configuration.
You must carefully consider whether to connect the external management
network and the external production network (see the dotted Ethernet link in
Figure 6-3 on page 67) or to leave them separated. If you connect them, use
different VLANs to separate the networks.
Another method of separating the network paths is used for Serial over LAN
(SoL). SoL traffic goes from the management module’s management interface
through the ESM in I/O module bay 1 to the blade server's blade system
management processor (BSMP).
To achieve this separation, SoL uses the VLAN ID (VID) 4095. The default VID
for the production network is 1. You can define additional VLANs with IDs below
4095 in the ESM for the production network, but VID 4095 is blocked for this
purpose. Therefore, the SoL traffic does not interact with the internal production
network, although it partly uses the same physical network path.
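The VID reservation described above can be expressed as a small sanity check. This is an illustrative sketch only (the function name is ours, not from any BladeCenter tool): a candidate production VID must be at least 1 and strictly below the reserved SoL VID of 4095.

```shell
# Sketch: check a candidate production VLAN ID against the SoL
# reservation. VID 4095 is reserved for SoL; the default production
# VID is 1, and additional production VLANs must use IDs below 4095.
SOL_VID=4095

valid_production_vid() {
    [ "$1" -ge 1 ] && [ "$1" -lt "$SOL_VID" ]
}

valid_production_vid 100  && echo "VID 100: usable for production"
valid_production_vid 4095 || echo "VID 4095: reserved for SoL"
```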
For information about accessing the management module’s CLI, see the
Management Module Command Line Interface Reference Guide - IBM
BladeCenter and BladeCenter T. Refer to the procedure described in
“BladeCenter product documentation” on page 64.
Note: The advanced management module can use either a standard Ethernet
cable or an Ethernet crossover cable to make this connection.
You can reset the IP addresses of a management module that was previously
configured back to the factory defaults by using the IP reset button (shown in
Figure 6-4 on page 72) on the management module. You can find the procedure
for doing this in 6.3.7, “Resetting the management module” on page 84.
You can now connect to the management module Web and command line
interfaces using the IP address that you assigned to the management module
external network interface.
Now consider performing other management module setup tasks, such as:
Changing the factory-set default user ID and password for security reasons
Setting the management module date and time so that log entries have useful
time stamps
Defining user IDs and passwords for the system administrators and operators
who manage the BladeCenter, and disabling the factory-set default user ID.
Alternatively, you can configure the management module to use a Lightweight
Directory Access Protocol (LDAP) directory for this purpose.
Configuring the management module to send alerts to management systems
using SNMP, or system administrators using e-mail using Simple Mail
Transfer Protocol (SMTP)
Enabling the use of Secure Sockets Layer (SSL) to securely access the
management module Web interface
Enabling the use of SSH to securely access the management module CLI
For additional information about how to perform these tasks, refer to the
Advanced Management Module and Management Module User's Guide - IBM
BladeCenter, BladeCenter T, BladeCenter H and the Management Module
Command Line Interface Reference Guide - IBM BladeCenter and BladeCenter
T. You can find these publications using the procedure described in “BladeCenter
product documentation” on page 64.
Tip: You can see the BladeCenter JS21 MAC addresses in the hardware vital
product data (VPD).
You can configure the failover behavior using the uplink command described in
the Management Module Command Line Interface Reference Guide - IBM
BladeCenter and BladeCenter T. To locate this document, use the procedure
described in “BladeCenter product documentation” on page 64.
Note: During failover, the ARP table is flushed and automatically rebuilt over a
period of a few minutes. When it is rebuilt, you can re-establish a network
connection to the management module. This ARP table rebuilding does not
affect the network attached to the Ethernet switch modules.
Sometimes the firmware transfer does not work on back-level firmware (see
6.6, "Firmware" on page 94 for more information). In this case, you must
update the two management modules separately. To do this, activate the
backup management module by deactivating the primary management module.
Note: To switch the active management module, from the Web interface,
select MM control → Restart MM → Switch Over.
The ports in Table 6-1 are user configurable. The default port numbers used are
indicated in column 2.
You can learn about the selection of IP addresses for the I/O module
management interfaces in “Hardware management subnet” on page 49.
When you first install a new I/O module, the management module assigns a
default IP address to the management interface of the I/O module. The default IP
address is chosen based on the I/O module bay where the I/O module is
installed. The I/O module installed in I/O module bay 1 is assigned the IP
address 192.168.70.127, the I/O module installed in I/O module bay 2 is
assigned 192.168.70.128, the I/O module installed in I/O module bay 3 is
assigned 192.168.70.129, and the I/O module installed in I/O module bay 4 is
assigned 192.168.70.130.
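The bay-to-address rule stated above is simply 192.168.70.(126 + bay number). As a small illustrative helper (ours, not an IBM tool), it can be sketched as:

```shell
# Sketch: compute the default management IP address that the management
# module assigns to an I/O module, based on its bay number (1-4).
# Bay N receives 192.168.70.(126 + N).
default_iomodule_ip() {
    echo "192.168.70.$((126 + $1))"
}

for bay in 1 2 3 4; do
    echo "I/O module bay $bay: $(default_iomodule_ip $bay)"
done
```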
Note: Depending on the firmware level of the management module, this task
can be listed under Configuration instead of Admin/Power/Restart.
Attention: You have to select the I/O module that you want using the Select a
module menu, and not by using the check boxes in the table shown in
Figure 6-14.
At this point, the I/O module management interface has an IP address. Also, the
external ports on the I/O module are enabled so that they can be used to
communicate with blade servers.
The SoL remote text console function of the management module depends on a
VLAN provided by an Ethernet switch I/O module installed in I/O module bay 1.
This VLAN is automatically provided by the 4-port Gigabit Ethernet switch
module and the Nortel Networks Layer 2-7 Gigabit Ethernet switch module using
VLAN 4095.
For further information, see BladeCenter Serial over LAN Setup Guide or the
Cisco Systems Intelligent Gigabit Ethernet Switch Module for the IBM eServer
BladeCenter, REDP-3869. The Cisco guide describes how to manually set up
the VLAN that is necessary to support the SoL remote text console function. You
can access these guides on the Internet by following the instructions in
“BladeCenter product documentation” on page 64.
The easiest way to perform these tasks is through the management module Web
interface.
The correct boot sequence depends on the method that you plan to use to install
an operating system on the blade server.
Note: You can also change the boot sequence using the System
Management Services (SMS) (see “Configuring boot device order” on
page 141) or the Open Firmware interface (see boot-device variable in 6.9.1,
“Activating the Open Firmware interface” on page 152). Independent of which
of the three methods you use to change the boot sequence, it will be stored in
the same location in the nonvolatile random access memory (NVRAM).
This means that when you use one method to change the boot sequence, the
changes are also reflected in the other interfaces with at least one exception:
If you select network - BOOTP in the management module’s management
interface (as shown in Figure 6-16), it does not give you an option to specify
the individual Ethernet ports. Using the network - BOOTP option results in two
entries in the boot device order where the secondary adapter (location code
-T8) shows up first as recommended for use with SoL (6.7, “Providing a
console for the BladeCenter JS21” on page 121). It also does not allow you to
specify IP addresses for remote IPL, which is possible in SMS and Open
Firmware.
The Open Firmware alias definitions for network and network1 are shown in
Example 6-2, which shows that the secondary Ethernet port comes first.
The boot sequence changes are also reflected in the SMS interface shown in
Figure 6-17.
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
---------------------------------------------------------------------------
Current Boot Sequence
1. Ethernet
( loc=U788D.001.23A1137-P1-T8 )
2. Ethernet
( loc=U788D.001.23A1137-P1-T7 )
3. USB CD-ROM
( loc=U788D.001.23A1137-P1-T1-L1-L3 )
4. SCSI 36401 MB Harddisk, part=1 ()
( loc=U788D.001.23A1137-P1-T10-L1-L0 )
5. SCSI 36401 MB Harddisk, part=1 ()
( loc=U788D.001.23A1137-P1-T11-L1-L0 )
---------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
---------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-17 SMS Configure Boot Device Order: Displaying current setting
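For context, on a running AIX partition the same NVRAM boot list can also be read and changed with the bootlist command. The sketch below is hedged: DRY_RUN=echo makes it runnable anywhere (it only prints the commands); remove it on the blade itself, and note that the device names ent0, ent1, and hdisk0 are illustrative assumptions, not values from this chapter.

```shell
# Sketch: read and set the normal-mode NVRAM boot list from AIX.
# DRY_RUN=echo prints the commands instead of running them, so this
# is safe to try off the hardware. Device names are illustrative.
DRY_RUN=${DRY_RUN:-echo}

$DRY_RUN bootlist -m normal -o                 # display the current normal-mode boot list
$DRY_RUN bootlist -m normal ent1 ent0 hdisk0   # secondary Ethernet first (as for SoL), then disk
```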
Note: KVM, Remote Disk, and Remote Console are currently not supported
for the blade server JS21.
6.6 Firmware
This section deals with the firmware management of the BladeCenter JS21 and
the various hardware components used in combination with it.
General support
To stay informed about the BladeCenter JS21 and to get the latest firmware for
BladeCenter JS21 components:
1. Visit the following IBM Web site:
http://www.ibm.com/pc/support
On this service Web page, in the Browse by product section, enter the
product or model number of the hardware. In the case of the BladeCenter
JS21, enter the type number 8844 and click the GO button.
2. In the new window that opens, you can optionally choose a model number, an
operating system, or choose both. Click the Continue button.
3. The BladeCenter JS21 service page opens. On this page, use the Downloads
and drivers link to check the firmware that is available.
You can choose BladeCenter JS21, or choose the link for BladeCenter or
BladeCenter H to get the latest firmware, especially for BladeCenter switches.
Tip: For hardware acquired from another vendor, it might be helpful to also
consult the Web page of the manufacturer.
You can use the BladeCenter management module to identify many firmware
levels of installed hardware. See Table 6-3 on page 97, which lists the available
update methods and the possible impact on different features. In the
management module, you can access the information by using the menu on the
left. Navigate to Monitors → Firmware VPD. On the command-line interface of
the management module, type the info command to get this information.
Table 6-3 lists the hardware components and the firmware levels used during the
creation of this book.
QLogic 4Gb Fibre Channel small-form-factor (SFF) expansion card — update
methods: Diag-CD, Driver (IBM): 1.14, OS; firmware version: 4.00.22;
impact: No; notes: general Fibre Channel issues, especially important for SAN
boot
# /usr/lpp/diagnostics/bin/update_flash -f mb-240.470.013_anyos_ppc64.img
The image is valid and would update the temporary image to MB240_470_013.
The new firmware level for the permanent image would be MB240_470_012.
Note: The firmware stored on the temporary side of the flash memory is
the default firmware used during a BladeCenter JS21 startup. It is possible
to boot the firmware stored on the permanent side of the flash memory by
using the SMS interface. See 6.8, “System Management Services
interface” on page 133.
Alternatively, if the new firmware works fine, you can commit the firmware
update and copy the image from the temporary side to the permanent side.
To do this, use the following command:
/usr/lpp/diagnostics/bin/update_flash -c
You can obtain the latest version of the update_flash script by installing the
diagnostic utilities for Linux on POWER (Performance Optimization With
Enhanced RISC) that IBM distributes on the Web at:
https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
Subsequently, if you have a problem with the new firmware and want to revert to
the previous firmware level, then use the following command:
/usr/sbin/update_flash -r
If you are satisfied with the new firmware, commit the firmware upgrade before
you install any future firmware by using the following command:
/usr/sbin/update_flash -c
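The commands above form one update cycle: write the new level to the temporary side, test it, then commit or reject. As a hedged sketch of that cycle (DRY_RUN=echo keeps it runnable without a JS21 by only printing the commands; the image file name is the example used in this chapter, and the script path differs between the AIX diagnostics and the Linux RPM):

```shell
# Sketch of the temporary-side firmware update cycle. Remove DRY_RUN
# on the blade itself. On AIX the script lives in
# /usr/lpp/diagnostics/bin; on Linux the RPM installs /usr/sbin.
DRY_RUN=${DRY_RUN:-echo}
UPDATE_FLASH=/usr/sbin/update_flash

$DRY_RUN $UPDATE_FLASH -f mb-240.470.013_anyos_ppc64.img  # write new level to the temporary side
# ... reboot and verify the new firmware level ...
$DRY_RUN $UPDATE_FLASH -c  # satisfied: commit (copy temporary side to permanent side)
# or, if the new level misbehaves:
$DRY_RUN $UPDATE_FLASH -r  # reject: revert to the permanent-side image
```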
After you install the RPM, you can obtain information about the installed firmware
level by issuing the following command, where ethX is the Ethernet interface to
query (for example, eth3, as shown in Figure 6-20):
/usr/sbin/bcmflashdiag ethX
/usr/sbin/bcmflashdiag eth3
Firmware versions:
Type Version
--------- -------
BootCode 5780s-v3.18
ASF CFG 40
ASF CPUB ASFIPMI v6.08
ASF CPUA ASFIPMI v6.08
ASF Init ASFIPMI v6.08
Figure 6-20 Firmware information of onboard Ethernet adapter eth3
Here FIRMWARE.BIN is a placeholder for the actual name of the binary firmware
file. See the command line output produced by this command in Figure 6-21. The
flash process does not interrupt the data transfer on the interface, but you have to
restart the BladeCenter JS21 to activate the new firmware.
Attention: The binary firmware files for BladeCenter JS21 onboard Ethernet
and Ethernet expansion cards are not compatible.
Firmware versions:
Type Version
--------- -------
BootCode 5780s-v3.17
ASF CFG 40
ASF CPUB ASFIPMI v6.08
ASF CPUA ASFIPMI v6.08
ASF Init ASFIPMI v6.08
Note: The management module requires two PKT files. For the advanced
management module, we found the package to contain only one PKT file.
Note for X Server users: Make sure that the directory where the PKT file
is stored is accessible from the computer system that provides the Web
browser.
You can also use the update command on the BladeCenter management module
command-line interface in combination with a running TFTP server. See 9.2.3,
“Configuring a Trivial File Transfer Protocol service” on page 287.
For information about upgrading the firmware of other BladeCenter I/O modules,
use the search function on the following Web site:
http://www.redbooks.ibm.com
To update the firmware of a 4-port Gigabit Ethernet switch module from IBM
using the switch module Web interface, follow this procedure:
1. Download the firmware update package from the IBM support Web site. See
6.6.1, “Getting the latest firmware, tools, and support” on page 95. This is
usually a compressed file that contains the firmware file. In this case it is
called ibmrun.096.
2. Extract the firmware update package into a directory on the workstation
where you run the Web browser that you use to connect to the
management module Web interface.
3. Before you can update the firmware of the switch module, configure an IP
address for the switch module management interface. This enables you to
connect directly to the switch module Web interface. For details
about how to configure the IP address of the switch module management
interface, see 6.4, "I/O module configuration" on page 85.
6. A window opens requesting confirmation. After you confirm the update, the
main window displays the status of the update until it completes and reboots
the switch module.
7. After the restart of the Ethernet switch module, verify that the IP address of
the switch module is still set correctly through the management module Web
interface.
*****************************************************
* *
* Command Line Interface SHell (CLISH) *
* *
*****************************************************
firmware_2.0.1.09
As an example, we describe the update process for the Qlogic 4Gb Fibre
Channel expansion card.
1. Assign the CD/DVD drive to the BladeCenter JS21 where the expansion card
for the update is installed.
2. Set the CD/DVD drive as first boot device using the management module.
3. Start the BladeCenter JS21 and wait until the boot process is finished.
--------------------------------------------------------------------
Welcome to AIX.
boot image timestamp: 15:58 12/19
The current time and date: 15:08:30 06/06/2006
number of processors: 4 size of memory: 3968MB
boot device:
/pci@8000000f8000000/pci@7/usb@0/hub@1/hub@1/cdrom@3:\ppc\chrp\bootfile.exe
kernel size: 13421062; 64 bit kernel
--------------------------------------------------------------------
FUNCTION SELECTION
1 Diagnostic Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will not be used.
2 Advanced Diagnostics Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will be used.
3 Task Selection (Diagnostics, Advanced Diagnostics, Service Aids,
etc.)
This selection will list the tasks supported by these procedures.
Once a task is selected, a resource menu may be presented showing
all resources supported by the task.
4 Resource Selection
This selection will list the resources in the system that are
supported
by these procedures. Once a resource is selected, a task menu will
be presented showing all tasks that can be run on the resource(s).
99 Exit Diagnostics
DEFINE TERMINAL
[MORE...18]
Display or Change Bootlist
Format Media
Hot Plug Task
Identify and Attention Indicators
Local Area Network Analyzer
Microcode Tasks
Process Supplemental Media
RAID Array Manager
SSA Service Aids
This selection provides tools for diagnosing and resolving
problems on SSA attached devices.
Update and Manage System Flash
[BOTTOM]
All Resources
This selection will select all the resources currently
displayed.
U788D.001.23A1137-
sisioa0 P1 PCI-XDDR Dual Channel SAS
RAID Adapter
+ fcs0 P1-C5-T1 FC Adapter
fcs1 P1-C5-T2 FC Adapter
ent0 P1-T7 Gigabit Ethernet-SX PCI-X
Adapter
(14101403)
ent1 P1-T8 Gigabit Ethernet-SX PCI-X
Adapter
(14101403)
[TOP]
*** NOTICE *** NOTICE *** NOTICE ***

fcs0 represents a port on a dual-port
fibre channel adapter. To update microcode on
this adapter, microcode needs to be installed
on both fcs0 and fcs1.
Selecting either fcs0 or fcs1 installs
microcode on both ports.
[MORE...11]

F3=Cancel F10=Exit Enter
Figure 6-30 Notice that both ports will be updated
M 0117040022
Note: You can use SoL to connect to running systems as well. However, we
recommend that you use standard network access methods over the
production network path when the network connection is working well,
because they provide much better performance and there is less risk of
dropped connections. SoL is intended mostly for use during installation or
when a normal network connection fails for some reason.
Note: The SoL session ends at the BSMP, which emulates the console (ASCII
terminal), and not at the BladeCenter server's IBM PowerPC® processor.
Therefore, the SoL session can be active even when the BladeCenter server is
powered off.
In addition, the ESM in switch bay 1 is used to create the path from the
management module to the BSMP. However, its function is transparent if it is set
up correctly.
Note: For SoL to work, it is essential to have a supported and properly set up
ESM in I/O module bay 1. SoL does not work over a copper or optical pass-thru
module in bay 1 or over an ESM in bay 2.
Figure: SoL data path. SoL traffic flows from the management workstation on
the network (LAN) through the management module and the ESM in I/O module
bay 1 (ports MGT 1 and MGT 2, INT 1 through INT 14, EXT 1 through EXT 4) to
the BladeCenter JS21, carried on the SoL VLAN (default ID = 4095).
Note: Depending on the operating system where you run the Telnet or
SSH client, there might be differences in the mapping of special keys such
as function keys, cursor keys, Backspace, Delete, and Insert keys.
Especially when you log on to AIX or Linux, you might encounter difficulties
in System Management Interface Tool (SMIT) or Yet Another Setup Tool
(YaST) menus. For most purposes, we found PuTTY to be a good choice
for the client.
To configure the SoL remote text console function, use the following procedure:
1. From the management module Web interface, select Blade Tasks → Serial
Over LAN, as shown in Figure 6-34. In the right pane, scroll down to the
Serial Over LAN Configuration section. Complete the following tasks:
a. From the Serial over LAN list, select the Enabled option.
b. Leave the value for SoL VLAN ID at the default (4095) if you have either a
4-port Gigabit Ethernet switch module or a Nortel Networks Layer 2-7
Gigabit Ethernet switch module installed in I/O module bay 1.
If you have a Cisco Systems Intelligent Gigabit Ethernet switch module
installed in I/O module bay 1, set the VLAN ID to the same value that you
used when you configured the Cisco Systems Intelligent Gigabit Ethernet
switch module as described in 6.4.3, “Enabling the external I/O module
ports” on page 88.
c. In the BSMP IP address range field, enter the start of the IP address range
that will be used by the management module to communicate with the
BSMP on each blade server.
d. Leave the values for Accumulate timeout, Send threshold, Retry count,
and Retry interval at their defaults (5, 250, 3, and 250).
e. In the User Defined Keystroke Sequences section, leave the values at
their defaults.
f. Click the Save button.
2. Restart the management module so that the configuration changes take
effect.
3. After you restart the management module and reconnect to the management
module Web interface from your Web browser, enable the SoL remote text
console for each blade server by doing the following steps.
a. Select Blade Tasks → Serial Over LAN from the management module
Web interface. In the right pane, scroll down until you see the Serial Over
LAN Status section (see Figure 6-35 on page 127).
Note: If the blade server does not show the Ready state, use Blade
Tasks → Power/Restart → Restart Blade System Mgmt Processor or
Blade Tasks → Power/Restart → Restart Blade.
The configuration of the SoL remote text console function is now complete.
Note: To use SSH, you must first configure the management module using
MM Control → Security as described in Advanced Management Module
and Management Module User's Guide - IBM BladeCenter, BladeCenter T,
BladeCenter H.
Figure 6-36 shows an active Telnet session that displays the help command.
There are different commands that you can use to access the SoL remote text
console function. Table 6-4 lists the most useful commands.
console Opens an SoL remote text console for the blade server. This command fails if another
SoL remote text console is already open for the blade server.
console -o Terminates any existing SoL remote text console for the blade server and opens an
SoL remote text console for the blade server.
boot -c Resets the blade server, and then opens an SoL remote text console for the blade
server.
env -T One of the built-in commands. Sets the target (management module, BladeCenter,
blade server, or switch) for the current session.
reset -c This is functionally equivalent to the boot -c command when used in a blade server
context.
power -on -c Powers on the blade server and then opens an SoL remote text console for the blade
server.
power -cycle -c Power-cycles the blade server and then opens an SoL remote text console for it. If
the blade server is already powered on, it is powered off first and then powered back on.
Note: To terminate an active SoL remote text console, press the Esc key
followed by an open parenthesis "(" (Shift+9 on U.S. keyboards).
You can see active SoL sessions in the Web interface as shown for the blade
server in bay 3 in Figure 6-37. Using the CLI, the following command shows you
the status for only one blade server at a time:
sol -T system:blade[3]
When the SoL remote text console ends, you return to the management module
CLI prompt.
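Because the CLI reports the SoL status for only one blade at a time, it can be convenient to generate the status command for every bay in the chassis. The following sketch only builds the command strings; how you deliver them to the management module (for example over Telnet or SSH) is up to you, and the bay count of 14 assumes a standard BladeCenter chassis.

```shell
# Build the per-blade SoL status command for each bay of a
# 14-bay BladeCenter chassis.
for bay in $(seq 1 14) ; do
    echo "sol -T system:blade[${bay}]"
done
```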
Note: The BladeCenter JS21 supports the SMS and Open Firmware
interface, but the BladeCenter JS20 supports only the Open Firmware
interface.
...
CA00D003
CA00D004
CA00E139
CA00E1FB
CA00E100
CA00D008
CA00E1DC
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
6. Firmware Boot Side Options
7. Progress Indicator History
--------------------------------------------------------------------
Navigation Keys:
X = eXit System Management Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-40 PowerPC Firmware: SMS menu
Network interface
Host name
IP address _______.________.________.________
Domain name
Gateway _______.________.________.________
Depending on the operating system you want to install, refer to 8.3, “Preparing
AIX network installation using NIM” on page 261; 9.3, “Installing SLES using the
network” on page 290; or 9.4, “Installing Red Hat Enterprise Linux AS 4 Update 3
using the network” on page 321.
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
---------------------------------------------------------------------------
NIC Adapters
Device Location Code Hardware
Address
1. Port 1-IBM 2 PORT 1000 Base-SX U788D.001.23A1137-P1-T7 001125c90ba6
2. Port 2-IBM 2 PORT 1000 Base-SX U788D.001.23A1137-P1-T8 001125c90ba7
---------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
---------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-41 SMS remote IPL: NIC adapters
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
-------------------------------------------------------------------
Network Parameters
Port 2-IBM 2 PORT 1000 Base-SX PCI-X Adapter:
U788D.001.23A1137-P1-T8
1. IP Parameters
2. Adapter Configuration
3. Ping Test
4. Advanced Setup: BOOTP
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-42 SMS remote IPL: Network parameters
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
IP Parameters
Port 2-IBM 2 PORT 1000 Base-SX PCI-X Adapter:
U788D.001.23A1137-P1-T8
1. Client IP Address [9.3.5.231]
2. Server IP Address [9.3.5.228]
3. Gateway IP Address [0.0.0.0]
4. Subnet Mask [255.255.255.000]
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-43 SMS remote IPL: IP parameters
You can also check other menu items, such as Adapter Configuration in
Figure 6-42 on page 138. One of these settings is Spanning Tree Enabled,
which we recommend that you disable. Although this option was enabled in
earlier firmware versions, it is disabled by default in the current firmware.
You do not have to change the speed and duplex settings for blade servers.
Note: You can always get back to the main menu by pressing M (without
pressing Enter).
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-44 SMS Select Boot Options: Multiboot
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Configure Boot Device Order
1. Select 1st Boot Device
2. Select 2nd Boot Device
3. Select 3rd Boot Device
4. Select 4th Boot Device
5. Select 5th Boot Device
6. Display Current Setting
7. Restore Default Setting
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-45 SMS Select Boot Options: Configuring boot device order
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. None
8. List All Devices
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-46 SMS Select Boot Options: Selecting device type
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Media Adapter
1. U788D.001.23A1137-P1-T10
/pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@0
2. U788D.001.23A1137-P1-T11
/pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@1
3. U788D.001.23A1137-P1-T12-T1
/pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@ff
4. None
5. List all devices
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-47 SMS Select Boot Options: Selecting media adapter
check /pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@0/sd@1,0
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. - SCSI 36401 MB Harddisk, part=1 ()
( loc=U788D.001.23A1137-P1-T10-L1-L0 )
2. None
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-48 SMS Select Boot Options: Selecting device hard drive SCSI
After you select the necessary disk, you can see the current boot device order
and go back to the main menu.
Attention: If the firmware does not detect the disk as bootable (that is, having
an operating system installed), it will not display the disks here. In particular,
new drives or drives that have been reformatted with the RAID functions as
described in 6.12, “SAS hardware RAID configuration” on page 170 cannot be
seen here. They first show up in the installation dialog of the corresponding
operating system or diagnostics.
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-49 SMS Select Install/Boot Device: Selecting device type
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. - Virtual Ethernet
( loc=U8844.51X.23A0248-V1-C3-T1 )
2. - Virtual Ethernet
( loc=U8844.51X.23A0248-V1-C4-T1 )
3. - Virtual Ethernet
( loc=U8844.51X.23A0248-V1-C5-T1 )
4. - Virtual Ethernet
( loc=U8844.51X.23A0248-V1-C6-T1 )
5. - Ethernet
( loc=U788D.001.23A0248-P1-T7 )
6. - Ethernet
( loc=U788D.001.23A0248-P1-T8 )
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu N = Next page of list
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-50 SMS Select Install/Boot Device: Listing all devices
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device
7. - USB CD-ROM
( loc=U788D.001.23A0248-P1-T1-L1-L3 )
8. 1 SCSI 36401 MB Harddisk, part=2 (AIX 5.3.0)
( loc=U788D.001.23A0248-P1-T10-L1-L0 )
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu P = Previous page of list
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-51 SMS Select Install/Boot Device: USB CD-ROM
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Task
USB CD-ROM
( loc=U788D.001.23A0248-P1-T1-L1-L3 )
1. Information
2. Normal Mode Boot
3. Service Mode Boot
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:2
Figure 6-52 SMS Select Install/Boot Device: CD-ROM Select Task
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. - Ethernet
( loc=U788D.001.23A1137-P1-T7 )
2. - Ethernet
( loc=U788D.001.23A1137-P1-T8 )
3. None
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-53 SMS Select Install/Boot Device: Network
PowerPC Firmware
Version MB240_470_009
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Select Task
Ethernet
( loc=U788D.001.23A1137-P1-T8 )
1. Information
2. Normal Mode Boot
3. Service Mode Boot
--------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-54 SMS Select Install/Boot Device: Selecting the task
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
--------------------------------------------------------------------
Are you sure you want to exit System Management Services?
1. Yes
2. No
--------------------------------------------------------------------
Navigation Keys:
X = eXit System Management Services
--------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Figure 6-55 SMS Select Boot Options: Exiting SMS
The system tries to boot from the selected media. In the case of a network boot,
you see a screen similar to Figure 6-56. However, this requires that the network
boot is prepared as described in Chapter 8, “Installing AIX” on page 259 or
Chapter 9, “Installing Linux” on page 281.
STARTING SOFTWARE
PLEASE WAIT...
There is no guarantee that everything you find on this site exactly matches
the JS21’s Open Firmware implementation.
...
CA00D003
CA00D004
CA00E139
CA00E1FB
CA00E100
CA00D008
CA00E1DC
7. Type 8 and wait until the Open Firmware prompt is shown. If you do not type
any key when the POST menu and indicators are displayed, the blade server
proceeds to boot using the default boot sequence. To learn how to configure
the boot sequence, see 6.5, “Blade server configuration” on page 90.
In the following sections, we provide some examples of how to use the Open
Firmware interface.
0 > printenv
---------- Partition: of-config -------- Signature: 0x50 ----------
ibm,fw-dc-select 100 0
ibm,fw-default-mac-address? false false
ibm,fw-forced-boot
ibm,fw-n-bc 255.255.255.255 255.255.255.255
ibm,fw-n-bretry 00 00
ibm,fw-n-tretry 00 00
ibm,fw-n-dbfp 00000000 00000000
ibm,fw-n-dafp 00000000 00000000
ibm,fw-n-rc A A
ibm,fw-n-ru Y Y
---------- Partition: common -------- Signature: 0x70 --------------
little-endian? false false
real-mode? true true
auto-boot? true true
diag-switch? false false
fcode-debug? false false
oem-banner? false false
oem-logo? false false
use-nvramrc? true false
ibm,fw-tty-language 1 1
ibm,fw-new-mem-def false false
ibm,fw-prev-boot-vpd 00000000: 08 07 07 0c 09 ff 07 07 07 07 07
07 07 07 07 07 |................|
ibm,fw-keyboard 1 1
real-base c00000 c00000
virt-base ffffffff ffffffff
real-size 1000000 1000000
virt-size ffffffff ffffffff
load-base 4000 4000
screen-#columns 64 64
screen-#rows 28 28
selftest-#megs 0 0
Figure 6-58 Open Firmware printenv output: Page 1 of 2
After you set the parameters for remote IPL and install AIX, you can see
additional entries as shown in Figure 6-60.
Note: When you have already configured a JS21 blade, you might find
additional entries in the output of the printenv command as shown in
Figure 6-60.
Devalias command
In Open Firmware you can define aliases to specify devices in a more convenient
way instead of using full path names. Use the following command to set aliases
for the current session:
devalias alias device-path
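For example, assuming the device paths shown in Figure 6-61 on page 157, you could point the net alias at the second Ethernet port for the current session (a sketch; substitute the path that applies to your blade):

```
0 > devalias net /pci@8000000f8000000/pci@2/ethernet@4,1
```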
Attention: Be aware that the aliases on a completely new JS21 might not be
set up correctly yet. The aliases net and cdrom might not exist, and the
aliases for hdd* and network* might not point to the correct Peripheral
Component Interconnect (PCI) adapter. Correct this when you define the
boot sequence as described in 6.5.2, “Setting the boot sequence” on
page 91.
0 > devalias
ibm,sp /vdevice/IBM,sp@4000
hdd1 /pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@1/sd@1,0
hdd0 /pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@0/sd@1,0
disk /pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@0/sd@1,0
network /pci@8000000f8000000/pci@2/ethernet@4
net /pci@8000000f8000000/pci@2/ethernet@4
network1 /pci@8000000f8000000/pci@2/ethernet@4,1
scsi /pci@8000000f8000000/pci@1/pci1014,028C@1/scsi@0
cdrom /pci@8000000f8000000/pci@7/usb@0/hub@1/hub@1/cdrom@3
nvram /vdevice/nvram@4002
rtc /vdevice/rtc@4001
screen /vdevice/vty@30000000
ok
0 >
Figure 6-61 Open Firmware: Devalias configured output
Ls command
Use the ls command to display the names of the current node’s children as
shown in Figure 6-62 on page 158 and Figure 6-63 on page 159. The output
might be useful if you do not have an alias already and want to specify a device
or define an alias. It is also useful to check whether the aliases that are already
defined (see “Devalias command” on page 156) are correct. Be aware that the
output is actually a tree. Therefore, the full path of the second Ethernet adapter is:
/pci@8000000f8000000/pci@2/ethernet@4,1
Here server_ip, client_ip and gateway_ip are the IP addresses of the network
installation server, the client, and an optional gateway (note the double comma).
If there is no gateway, use 0.0.0.0 for the gateway_ip address.
Because SoL uses ent0, use ent1 as explained in 6.7, “Providing a console for
the BladeCenter JS21” on page 121. First you have to check whether an alias net
is defined using the devalias command, as shown in “Devalias command” on
page 156. Figure 6-61 on page 157 shows that the alias net points to the first
Ethernet port which is ent0. Therefore, you can use the ls command (as shown
in “Ls command” on page 157) to determine the full path name of the second
Ethernet port and run a command similar to:
boot
/pci@8000000f8000000/pci@2/ethernet@4,1:bootp,9.3.4.9,,9.3.5.3,9.3.5.1
Alternatively, you can use the alias network1 (as shown in Figure 6-61 on
page 157). This alias refers to /pci@8000000f8000000/pci@2/ethernet@4,1:
boot network1:bootp,9.3.4.9,,9.3.5.3,9.3.5.1
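The argument string is easy to get wrong, so it can help to assemble it mechanically. The following sketch uses the addresses from the example above (9.3.4.9 is the installation server, 9.3.5.3 the client, and 9.3.5.1 the gateway):

```shell
# Assemble the Open Firmware network-boot argument string.
# Format: bootp,server_ip,,client_ip,gateway_ip (note the double comma).
server_ip=9.3.4.9
client_ip=9.3.5.3
gateway_ip=9.3.5.1            # use 0.0.0.0 if there is no gateway
args="bootp,${server_ip},,${client_ip},${gateway_ip}"
echo "boot network1:${args}"
```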
0 > ls
000000c887d0: /ibm,serial
000000c89658: /chosen
000000c89898: /packages
...
000000d4b148: /tpm@100f4003000
000000d4d5f0: /pci@8000000f8000000
000000d55088: /pci@1
000000d6e6d0: /pci1014,028C@1
000000d785e0: /scsi@0
000000d7ac40: /sd
000000d7c660: /st
000000d7e1a8: /scsi@1
000000d80808: /sd
000000d82228: /st
000000d83d70: /scsi@ff
000000d863d0: /sd
000000d87df0: /st
000000d5bd30: /pci@2
000000d89a10: /ethernet@4
000000d990b0: /ethernet@4,1
000000d629b8: /pci@7
Figure 6-64 ls command output
After you run the boot command, the network installation begins and you see an
output similar to Figure 6-56 on page 151.
On AIX, this happens if you have already installed the Ethernet expansion card
before installing AIX. However, when you install the Ethernet expansion card
after installing AIX, the names ent0 and ent1 remain on the integrated Ethernet
interfaces because AIX has saved this information in the Object Data Manager
(ODM).
If this is an issue, you must first install the operating system to allow the onboard
ports to be recognized and configured before the ports on the expansion card. If
you install the Ethernet expansion card before you install the operating system,
be aware that the expansion card ports are assigned before the onboard ports.
#!/bin/ksh
Abort () { echo "Error: $*" 1>&2 ; exit 1; }

if [ $# -ne 2 ] ; then
    echo "Usage: $0 entX entY ; where entX & entY are old & new adapter names"
    exit 1
fi

case "$1" in
    ent[0-9]*) entX="$1" ;;
    *) Abort "ERROR: entX format is incorrect" ;;
esac
case "$2" in
    ent[0-9]*) entY="$2" ;;
    *) Abort "ERROR: entY format is incorrect" ;;
esac

# Add the prepared ODM stanza for the new adapter and define the device.
odmadd /tmp/$entY
if [ $? -ne 0 ] ; then
    Abort "\"odmadd /tmp/$entY\" FAILED"
fi
mkdev -l $entY
if [ $? -ne 0 ] ; then
    Abort "\"mkdev -l $entY\" FAILED"
fi
echo "Done"
Use the following command to manually create an Ethernet interface with the
name that you want:
/usr/lib/methods/define_rspc -c adapter -s pci -t e414a816 -p pci1 -w 8
-d -L '01-08' -l ent5
The main difficulty is obtaining the necessary parameters, which depend on the
hardware of the blade server and the type of expansion card. Use the following
command to extract the information from the ODM while the interface is still
configured:
odmget -q name=ent0 CuDv
CuDv:
name = "ent0"
status = 1
chgstatus = 0
ddins = "pci/bentdd"
location = "01-08"
parent = "pci1"
connwhere = "8"
PdDvLn = "adapter/pci/e414a816"
Figure 6-65 Ethernet controller ODM data
The script shown in Example 6-6 performs the following functions:
- Extracts the ODM data
- Removes the enX, etX, and entX interfaces
- Creates the new entY interface with the name that you want
- Runs cfgmgr to change entY from the Defined to the Available state and to
  create the corresponding enY and etY interfaces
#!/bin/ksh
Abort () { echo "Error: $*" 1>&2 ; exit 1; }

if [ $# -ne 2 ] ; then
    echo "Usage: $0 entX entY ; where entX & entY are old & new adapter names"
    exit 1
fi

case "$1" in
    ent[0-9]*) entX="$1" ;;
    *) Abort "ERROR: entX format is incorrect" ;;
esac
case "$2" in
    ent[0-9]*) entY="$2" ;;
    *) Abort "ERROR: entY format is incorrect" ;;
esac
idX=${entX##ent}
idY=${entY##ent}

# Extract the adapter data from the ODM, remove the old devices, and
# re-create the adapter under the new name.
odmget -q name=$entX CuDv | awk -v IdX=$idX -v IdY=$idY '
{ gsub("\"","") }
$1 == "PdDvLn"    { split($3,Pd,"/") }
$1 == "location"  { Loc=$3 }
$1 == "connwhere" { Conn=$3 }
$1 == "parent"    { Parent=$3 }
END { args="-c " Pd[1] " -s " Pd[2] " -t " Pd[3] " -p " Parent
      args=args " -w " Conn " -d -L " Loc " -l ent" IdY
      system ("rmdev -dl en" IdX )
      system ("rmdev -dl et" IdX )
      system ("rmdev -dl ent" IdX )
      system ("/usr/lib/methods/define_rspc " args)
      print ""
      system ("cfgmgr")
}
'
You can verify which Ethernet controller is routed to which I/O module bay by
using the following test:
1. Install only one Ethernet switch module or pass-thru module in I/O module
bay 1.
2. Ensure that the ports on the switch module or pass-thru module are enabled
(click I/O Module Tasks → Admin/Power/Restart in the management
module Web interface).
3. Enable only one of the Ethernet controllers on the blade server. Note the
designation that the blade server operating system has for the controller.
4. Ping the blade server from an external computer on the network that is
connected to the switch module or pass-thru module. If the ping succeeds,
the Ethernet controller that you enabled is associated with the switch
module or pass-thru module in I/O-module bay 1. The other Ethernet
controller in the blade server is associated with the switch module or
pass-thru module in I/O-module bay 2.
If you have installed an I/O expansion card in the blade server, communications
from the expansion card are routed to I/O module bay 3 and module bay 4, if
these bays are supported by your BladeCenter unit. You can verify which
controller on the card is routed to which I/O module bay by performing the same
test, and using a controller on the expansion card and a compatible switch
module or pass-thru module in I/O module bay 3 or module bay 4.
Because RAID is considered an option, the disks are shipped blank, ready for
use as individual disk drives. If you want to configure RAID for any onboard
drives, run the RAID configuration tools first to prepare the disks for use in a
RAID array.
Before you can create a RAID array, you must reformat the hard disk drives so
that the sector size of the drives changes from 512 bytes to 522 bytes. Later if
you decide to remove the hard disk drives, delete the RAID array before you
remove the drives. If you decide to delete the RAID array and reuse the hard disk
drives, you must reformat the drives so that the sector size of the drives changes
from 522 bytes to 512 bytes.
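The capacity cost of the reformat can be estimated with simple arithmetic: each 522-byte sector still carries only 512 bytes of user data, with the extra 10 bytes holding check data used by the RAID controller. The following is a rough sketch only; the sector count below is hypothetical, and the exact geometry depends on the drive.

```shell
# Hypothetical raw sector count, for illustration only.
raw_sectors=71000000

# Capacity with plain 512-byte sectors.
usable_512=$(( raw_sectors * 512 ))

# After the reformat, the same raw bytes are divided into 522-byte
# sectors, but only 512 bytes of each remain available to the OS.
sectors_522=$(( raw_sectors * 512 / 522 ))
usable_522=$(( sectors_522 * 512 ))

echo "512-byte format: ${usable_512} bytes usable"
echo "522-byte format: ${usable_522} bytes usable (about 2% less)"
```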
FUNCTION SELECTION
1 Diagnostic Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will not be used.
2 Advanced Diagnostics Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will be used.
3 Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
This selection will list the tasks supported by these procedures.
Once a task is selected, a resource menu may be presented showing
all resources supported by the task.
4 Resource Selection
This selection will list the resources in the system that are supported
by these procedures. Once a resource is selected, a task menu will
be presented showing all tasks that can be run on the resource(s).
99 Exit Diagnostics
[MORE...17]
Display USB Devices
Display or Change Bootlist
Format Media
Hot Plug Task
Identify and Attention Indicators
Local Area Network Analyzer
Microcode Tasks
RAID Array Manager
SSA Service Aids
This selection provides tools for diagnosing and resolving
problems on SSA attached devices.
Update and Manage System Flash
[BOTTOM]
COMMAND STATUS
----------------------------------------------------------------------------
Name Location State Description Size
----------------------------------------------------------------------------
sisioa0 01-08 Available PCI-XDDR Dual Channel SAS RAID Adapter
The reformat to 522-byte sectors starts as shown in Figure 6-73. This takes
some time to complete.
Format in progress
|#################################################-| | 98%
hdisk1 deleted
|##################################################| / 99%
hdisk0 deleted
|##################################################| - 100%
Formats complete.
COMMAND STATUS
--------------------------------------------------------------------
Name Location State Description Size
---------------------------------------------------------------------
sisioa0 01-08 Available PCI-XDDR Dual Channel SAS RAID Adapter
Restriction: Figure 6-75 shows that the Disk Array Manager supports
RAID levels 0, 5, and 10. The JS21 has only two physical drives. RAID
level 5 requires a minimum of three drives, and RAID level 10 normally
requires a minimum of four drives. RAID level 0 is striping, which you can
do with two disks. When you select RAID level 10 with two disks, we
suspect that the system is performing RAID 1, which is mirroring with
two disks.
[Entry Fields]
Controller sisioa0
RAID Level 0
Stripe Size in KB 64
Selected Disks pdisk0 pdisk1
COMMAND STATUS
hdisk0 Available
COMMAND STATUS
------------------------------------------------------------------------
Name Location State Description Size
------------------------------------------------------------------------
sisioa0 01-08 Available PCI-XDDR Dual Channel SAS RAID Adapter
Example 6-7 shows how the RAID array disk is displayed by different disk-related
commands in AIX 5L.
# lscfg -vp
hdisk0 U788D.001.23A1137-P1-T12-T1-L0-L0 SCSI RAID 0 Disk Array
Figure 6-81 shows how the RAID array disk shows up in the SMS.
PowerPC Firmware
Version MB240_470_013
SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
-----------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. 1 SCSI 69793 MB Harddisk, part=2 (AIX 5.3.0)
( loc=U788D.001.23A1137-P1-T12-T1-L0-L0 )
Figure 6-81 RAID array in SMS
Attention: The drivers for the Small Computer System Interface (SCSI)
adapter are on the SLES SP3 CD-ROM. You must boot from CD 1 of the SP
CD-ROMs or a network location of SLES where SP3 is installed. If you boot
from the SLES9 CD 1, you will get a message that no hard drives are found.
7. Wait for the screen shown in Figure 6-83. Type 1 and press Enter to continue.
1) OK
2) Back
> 1
Figure 6-83 SLES9 SP3 booting
1) Bosnia
2) Cestina
3) Deutsch
4) English
5) Español
6) Français
7) Hellenic
8) Italiano
9) Japanese
10) Magyar
11) Nederlands
12) Polski
13) Português
14) Português Brasileiro
15) Russian
16) Slovencina
17) Simplified Chinese
18) Traditional Chinese
> 4
Figure 6-84 Choosing the installation language
Main Menu
1) Settings
2) System Information
3) Kernel Modules (Hardware Drivers)
4) Start Installation or System
5) Eject CD
6) Exit or Reboot
7) Power off
> 4
Figure 6-85 Starting the installation process
10.Go into the rescue mode, as shown in Figure 6-86. Type 3 to start the
rescue system.
> 3
Figure 6-86 Starting the rescue mode
1) CD-ROM
2) Network
3) Hard Disk
4) Floppy
> 1
Figure 6-87 Choosing the source of the SUSE installation
12.Log in as root and run the iprconfig command, as shown in Figure 6-88.
Selection: 1
Figure 6-89 iprconfig main screen
Tip: If you get a message that there are no hard drives, then check the
CD-ROM that you booted from. If you booted from the SLES9 CD-ROM 1,
you will get this message. You must boot from the SLES9 SP 3 CD-ROM 1
for this to work.
Selection: 5
Figure 6-91 iprconfig: Working with disk arrays
17.You see a list of disks that you can format, as shown in Figure 6-92. Type 1
beside each hard drive that you want to format for RAID. We put a 1 beside
both hard drives.
ATTENTION! System crash may occur if selected device is in use. Data loss will
occur on selected device. Proceed with caution.
19.It takes a few minutes to format the disks. When they are finished, you see
the disk array screen shown in Figure 6-91 on page 189. Type 2 to create a
disk array.
20.This opens the screen shown in Figure 6-94. Type 1 beside the adapter that
owns the hard drives that you just formatted.
22.When you press Enter, you see the screen to select the protection level, as
shown in Figure 6-96. If you select c to change the setting, you see an
option of either RAID 0 or RAID 10. RAID 0 is just striping. RAID 10 is
normally striping and mirroring, but because we have only two drives
internally in the JS21, it becomes just mirroring. This can help guard
against a disk crash. When you press Enter on this screen, the system
creates the array and returns you to the Work with Disk Arrays screen
shown in Figure 6-91 on page 189.
c=Change Setting
You have finished creating the disk array. If you are going to install SLES9 from
CD-ROM, you have to boot from CD-ROM 1 of the SLES9 SP3 media. After you
choose to start the installation, you are prompted for CD-ROM 1 of the SLES9
installation media. Insert this CD-ROM and follow the instructions in the screen
menus.
Attention: When using the JS21 in a BladeCenter chassis that does not have
a DVD drive in the media tray, you can install the VIOS through the network
from a Network Installation Manager (NIM) server or a Linux server.
When you install from a DVD, assign the media tray to the blade that you want
(see 6.5.3, “Assigning the media tray” on page 94) and mount the VIOS
installation media in the DVD drive of the media tray. The remaining steps are
similar to a normal AIX installation as described in 8.2, “Preparing AIX installation
from CD/DVD” on page 260.
Note: The method described in this section is different from the option where
you type smitty nim_power5 from an AIX command-line, and then select
Virtual I/O Server and Integrated Virtualization Manager Installation
Tasks (/usr/sbin/installios command). This option requires a Hardware
Management Console (HMC). Because we have no HMC in an IVM
environment, we require another method in our case.
Note: If your operating system cannot resolve links, you might have to copy
the file mksysb from the directory /usr/sys/inst.images instead. The file
mksysb is approximately 600 MB in size.
Define a Resource
                                                        [Entry Fields]
* Resource Name                               [vios_mksysb]
* Resource Type                               mksysb
* Server of Resource                          [master]                 +
* Location of Resource                        [/export/vios/mksysb]
[MORE...9]
Define a Resource
                                                        [Entry Fields]
* Resource Name                               [vios_bid]
* Resource Type                               bosinst_data
* Server of Resource                          [master]                 +
* Location of Resource                        [/export/vios/bosinst.data]
  Comments                                    []
Define a Resource
                                                        [Entry Fields]
* Resource Name                               [vios_spot]
* Resource Type                               spot
* Server of Resource                          [master]                 +
* Source of Install Images                    [vios_mksysb]            +
* Location of Resource                        [/export/vios]           /
  Expand file systems if space needed?        yes                      +
  Comments                                    []
installp Flags
COMMIT software updates? no +
SAVE replaced files? yes +
AUTOMATICALLY install requisite software? yes +
OVERWRITE same or newer versions? no +
VERIFY install and check file sizes? no +
Important: It is essential to define the SPOT from the mksysb resource that
you just created and not from the lpp_source of the NIM server, as highlighted
in Figure 7-3. This guarantees that the SPOT and the corresponding network
boot image match the operating system version of the VIOS to be installed.
+--------------------------------------------------------------------------+
|                  Available Network Install Resources                     |
|                                                                          |
|  Move cursor to desired item and press F7.                               |
|  ONE OR MORE items can be selected.                                      |
|  Press Enter AFTER making all selections.                                |
|                                                                          |
|  [MORE...10]                                                             |
|  > vios_bid                     bosinst_data                             |
|  > vios_spot                    spot                                     |
|  > vios_mksysb                  mksysb                                   |
|  [BOTTOM]                                                                |
|                                                                          |
|  F1=Help     F2=Refresh     F3=Cancel                                    |
|  F7=Select   F8=Image       F10=Exit                                     |
|  Enter=Do    /=Find         n=Find Next                                  |
+--------------------------------------------------------------------------+
Figure 7-4 NIM: Allocating VIOS resources
[Entry Fields]
Target Name js21_vios
Source for BOS Runtime Files mksysb +
installp Flags [-agX]
Fileset Names []
Remain NIM client after install? no +
Initiate Boot Operation on Client? no +
Set Boot List if Boot not Initiated on Client? no +
Force Unattended Installation Enablement? no +
ACCEPT new license agreements? [no] +
COMMAND STATUS
The rest of the installation is similar to 8.4, “Installing AIX on the client” on
page 270.
NIM resources
The following commands define NIM resources.
bosinst_data resource:
# nim -o define -t bosinst_data -a server=master -a
location=/export/vios/bosinst.data vios_bid
mksysb resource:
# nim -o define -t mksysb -a server=master -a
location=/export/vios/mksysb vios_mksysb
SPOT resource (/usr file system):
# nim -o define -t spot -a server=master -a source=vios_mksysb -a
location=/export/vios vios_spot
This allocates all the necessary resources, but waits for the VIOS to initiate the
installation itself. To finish the installation, proceed with the steps described in
8.4, “Installing AIX on the client” on page 270.
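Expressed on the NIM master's command line, the resource allocation shown in the SMIT screen can be sketched as a single nim operation. The resource and machine names (vios_mksysb, vios_spot, vios_bid, js21_vios) follow the earlier screens and are assumptions for your environment:

```shell
# Allocate the mksysb, SPOT, and bosinst_data resources to the client and
# mark it for a BOS installation; boot_client=no leaves the network boot
# to be initiated from the VIOS side, as described above.
nim -o bos_inst -a source=mksysb -a mksysb=vios_mksysb \
    -a spot=vios_spot -a bosinst_data=vios_bid \
    -a boot_client=no js21_vios
```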
This section describes how to set up the Linux server for a VIOS network
installation. The following description is based on SUSE Linux Enterprise Server
9, Service Pack 3 (SLES9 SP3).
General notes
We use the following names and values in the configuration file examples of this
section. You might have to change these based on your environment:
Subnet: 9.153.99.0
Subnet mask: 255.255.255.0
Gateway: 9.153.99.1
Directories
Create the directories for storing the VIOS installation source:
mkdir /tftpboot
mkdir -p /export/vios
If you do not want an unattended installation later, you have to edit bosinst.data
and change PROMPT=no to PROMPT=yes. Later this opens the installation
dialog and offers you the possibility to select the target hard disk.
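This edit can also be scripted with sed. The sketch below works on a stand-in copy in /tmp so you can verify the substitution before touching the real bosinst.data under your export directory:

```shell
cd /tmp
# Stand-in stanza for illustration; the real bosinst.data has many more keywords
printf 'CONSOLE=Default\nPROMPT=no\n' > bosinst.data
sed -i 's/PROMPT=no/PROMPT=yes/' bosinst.data
grep '^PROMPT' bosinst.data    # now shows PROMPT=yes
```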
2. Check if portmap is already running on SLES9 SP3. This was true with the
default installation on our system:
/etc/init.d/portmap status
If not, start portmap:
/etc/init.d/portmap start
3. Start the Network File System (NFS) server:
/etc/init.d/nfsserver start
Note: The value for the file name does not necessarily have to be identical
with the host name, but it must match with the file name that we use in
/tftpboot.
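To make the installation source reachable over NFS, an /etc/exports entry along the following lines is needed. The subnet values are those listed under "General notes"; the mount options are a common read-only sketch, not the only valid choice:

```
# /etc/exports: export the VIOS installation source read-only to the subnet
/export/vios   9.153.99.0/255.255.255.0(ro,no_root_squash,sync)
```

After editing /etc/exports, activate the change with exportfs -a or by restarting the NFS server.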
Start a Serial over LAN (SoL) session to the client and power on the JS21. Enter
the System Management Services (SMS) menu and start the network installation
as described in 6.7, “Providing a console for the BladeCenter JS21” on
page 121; 6.8.1, “Activating the System Management Services interface” on
page 133; and 8.4, “Installing AIX on the client” on page 270.
You can monitor the progress of the client installation on the server by monitoring
the syslog file that the installation creates:
tail -f /var/log/nimol.log
For example:
mkgencfg -o init -i "mac_prefix=06ABC0"
Note: If the virtual Ethernet adapters are already available after first booting
the VIOS (which is possible if a VIOS is already installed), you might have to
run:
lpcfgop -o clear
Restart the system and then run mkgencfg. The help function in the IVM’s CLI
does not display the mkgencfg and the lpcfgop commands. You can get the
description by using:
man mkgencfg
You can use the optional configuration data to define the prefix of the MAC
address of all four virtual Ethernet adapters of the VIOS and to define the
maximum number of partitions supported by the VIOS after the next restart.
$ mkgencfg -o init
Here is a method that worked in our environment. You will not get a network
connection after the reboot and therefore have to use SoL.
$ bkprofdata -o backup -f profile.bak # backup VIOS profile
$ lpcfgop -o clear # clear VIOS profile
Example 7-8 shows how to set the host name and IP address for the VIOS.
After the IVM Web server has access to the network, it is possible to use the
Web GUI with the Hypertext Transfer Protocol (HTTP) or the Hypertext Transfer
Protocol-Secure (HTTPS) protocol pointing to the IP address of the IVM server
application. Authentication requires the use of the padmin user, unless other
users have been created.
You can use either of the two interfaces to create, delete, and update the logical
partitions and perform non-dynamic operations on LPARs including the partition
of the VIOS itself.
As a result, a Welcome window that contains the login and the password
prompts opens, as shown in Figure 7-7. The default user ID is padmin, and the
password is the one you defined during the VIOS installation. The default
password is padmin until you change it.
After the authentication process, the default IVM console window opens, as
shown in Figure 7-8 on page 213. The IVM GUI consists of several elements.
The following elements are the most important:
Navigation area The navigation area displays the tasks that you can
access in the work area.
Work area The work area contains information related to the
management tasks that you perform using the IVM and
related to the objects on which you can perform
management tasks.
Task area The task area lists the tasks that you can perform for
items displayed in the work area. The tasks listed in the
task area can change depending on the page that is
displayed in the work area, or even depending on the tab
that is selected in the work area.
login: padmin
padmin's Password:
Enter help to show an overview of the available commands. Example 7-10 shows
the output of the help command.
Help relating to an individual command is available with the -h flag. Example 7-11
shows the help for the mkvt command.
You can assign both physical volumes and virtual disks to an LPAR to provide
disk space. Each of them is represented by the LPAR operating system as a
single disk. For example, if you assign a 73.4 GB physical disk and a 3 GB virtual
disk to an LPAR running AIX 5L, then the operating system creates two hdisk
devices.
At installation time of the VIOS, there is only one storage pool named rootvg,
typically containing only one physical volume. All the remaining physical volumes
are available but not assigned to any pool.
Note: Typically a storage pool is the same as a volume group in AIX. Because
the VIOS runs on an underlying AIX, it uses volume groups as storage
pools, which you can especially see with the name rootvg. The different name
is related to the possibility of implementing the VIOS on Linux platforms too.
The rootvg storage pool is used for the VIOS. We recommend that you do not
use it to provide disk space to logical partitions. Because it is the only pool
available at installation time, it is also defined as the default pool. Create another
pool and set it as the default before creating other partitions.
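On the CLI, this setup can be sketched as follows, assuming hdisk1 is a free physical volume and datapool is the name you choose for the new pool (both are assumptions; command behavior may differ between VIOS levels):

```shell
$ mksp -f datapool hdisk1      # create a new storage pool on hdisk1
$ chsp -default datapool       # make it the default storage pool
$ lssp                         # verify the pool list and the default flag
```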
Important: Create at least one additional storage pool so that the rootvg pool
is not the default storage pool.
However, if you have only one disk, which might be the case if you have
configured all the disks as a RAID array as described in 6.12, “SAS hardware
RAID configuration” on page 170, your only choice is to use rootvg. In this case,
you must be very careful not to overwrite rootvg, for example, when reinstalling
the VIOS. Be sure to back up all essential data.
From any storage pool, logical volumes can be defined and configured as virtual
disks. They can be created in several ways, depending on the IVM menu that is
in use:
During LPAR creation: A logical volume is created in the default storage pool
and assigned to the partition.
Using the Create Devices link: A logical volume is not assigned to any
partition and it is created in the default storage pool. There is an Advanced
tab that enables the storage pool selection.
Important: All the data of a physical volume (hard disk) is erased when you
add this volume to a storage pool.
Important: Create at least one additional storage pool. Because rootvg is the
default storage pool after installation, leaving it as the default causes VIOS
data and user data to be merged. Therefore, do not use the rootvg storage
pool for virtual disk storage.
Because the VIOS is installed in rootvg, when VIOS is reinstalled, the rootvg
storage pool is overwritten. Change the default storage pool to another one to
avoid creating virtual disks within the rootvg by default. This prevents the loss of
user data during a VIOS update.
3. A summary with the current and the next default storage pool opens, as
shown in Figure 7-12. Click OK to validate the change.
After the initial installation of the VIOS, there is only one VIOS partition on the
system with the following characteristics:
The ID is 1
The name is equal to the system’s serial number
The state is Running
The allocated memory is the maximum value between 512 MB and
one-eighth of the installed system memory
The number of allocated processors is equal to the number of physical
processors, and the processing units is equal to 0.1 times the number of
allocated processors
The default configuration for the partition is designed to be appropriate for most
VIOS installations. If the administrators want to change memory or processing
unit allocation of the VIOS partition, they can perform a re-configuration using
either the Web GUI or the command line, as described in 7.5.8, “Logical partition
configuration changes” on page 235.
To view the new logical partition and use it, from the Partition Management menu
in the navigation area, click the View/Modify Partitions link. A list opens in the
work area.
AIX Version 5
(C) Copyrights by IBM and by others 1982, 2005.
Console login:
4. If you move or remove an optical device from a running logical partition, you
are prompted to confirm the forced removal before the optical device is
removed. Because the optical device becomes unavailable, log on to the
logical partition and unmount the optical device before going further. Click the
Eject button. If the drawer opens, this is an indication that the device is not
mounted. Click OK.
Note: We strongly recommend that you unmount the optical device’s file
systems within the operating system before unassigning the device to
avoid endless loop conditions.
5. The new list of optical devices is shown with the changes you made. Log on
to the related logical partition and use the appropriate command to discover
the new optical device. On AIX 5L, use the cfgmgr command.
4. Log on to the related logical partition and discover the new disks. On AIX 5L,
use the cfgmgr command. Example 7-17 shows how the partition discovers
the two new virtual disks on AIX 5L.
# cfgmgr
# lsdev -Ccdisk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive
hdisk2 Available Virtual SCSI Disk Drive
These updates become pending and are taken into account only at the next reboot
of the related logical partition. A triangle with an exclamation mark inside it is
displayed in the View/Modify Partitions screen if the current and pending values
are not synchronized. This also shows that changes made by the CLI are
immediately reflected by the GUI.
Any client logical partition can be created with its own virtual adapters connected
to any of the four available virtual networks. Client logical partitions can only
have up to two virtual Ethernet adapters, each connected to one of the four
virtual networks present in the system. No bridging is provided with physical
adapters at installation time. The VIOS enables any virtual network to be bridged
with any physical adapter, provided that the same physical adapter is not used to
bridge more than one virtual network.
Note: If the physical Ethernet that you selected for bridging is already
configured with an IP address using the CLI, all connections to that address
are reset. This might drop an SoL session when configuring the interface
associated with I/O module 1 as described in 6.7, “Providing a console for the
BladeCenter JS21” on page 121.
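On the CLI, creating such a bridge can be sketched with the mkvdev command. Here ent0 is the physical adapter and ent3 the virtual adapter to bridge, matching the lsmap output shown later; the adapter names and the default PVID are assumptions for your system:

```shell
$ mkvdev -sea ent0 -vadapter ent3 -default ent3 -defaultid 1
```

The new shared Ethernet adapter (ent6 in the following listings) binds the physical and virtual devices together.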
$ lstcpip
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 0.11.25.c9.11.3d 1379465 0 44040 0 0
en0 1500 9.3.5 js21_vios.itsc.au 1379465 0 44040 0 0
lo0 16896 link#1 82 0 295 0 0
lo0 16896 127 localhost 82 0 295 0 0
lo0 16896 ::1 82 0 295 0 0
When a virtual Ethernet bridge is created, a new shared Ethernet adapter (SEA)
is defined, binding the physical device with the virtual device. If a network
interface is configured on the physical adapter, the IP address is migrated to the
new SEA.
$ lstcpip
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en6 1500 link#2 0.11.25.c9.11.3d 1380345 0 44078 0 0
en6 1500 9.3.5 js21_vios.itsc.au 1380345 0 44078 0 0
lo0 16896 link#1 82 0 295 0 0
lo0 16896 127 localhost 82 0 295 0 0
lo0 16896 ::1 82 0 295 0 0
Use the following command to display the associated SEA devices, as shown in
Example 7-21.
lsmap -net -all
SEA ent6
Backing device ent0
SVEA Physloc
------ --------------------------------------------
ent3 U8844.5CZ.23A1137-V1-C4-T1
SVEA Physloc
------ --------------------------------------------
ent4 U8844.5CZ.23A1137-V1-C5-T1
SVEA Physloc
------ --------------------------------------------
ent5 U8844.5CZ.23A1137-V1-C6-T1
Before removing a physical disk or a virtual disk from a running partition, the
operating system must remove the corresponding disk device because it
becomes unavailable. In an AIX 5L environment, do this by using the rmdev
command.
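For example, assuming the virtual disk appears as hdisk2 in the partition and belongs to volume group datavg (both names are assumptions):

```shell
# varyoffvg datavg     # deactivate the volume group that uses the disk
# rmdev -dl hdisk2     # delete the disk device definition
```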
5. From the Storage Management menu in the IVM navigation area, click
View/Modify Devices.
6. From the work area, select the virtual disk. Click Modify partition
assignment.
8. Perform the same action as in step 5, but assign the virtual disk back to the
partition.
9. On the operating system, issue the appropriate procedure to recognize the
new disk size. On AIX 5L, issue the varyonvg command on the volume group
to which the disk belongs and, as suggested by a warning message, issue the
chvg -g command on the volume group to recompute the volume group size.
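A sketch of this sequence on AIX 5L, assuming the extended virtual disk belongs to volume group datavg (the name is an assumption):

```shell
# varyonvg datavg      # reactivate the volume group; AIX warns that a disk has grown
# chvg -g datavg       # re-examine the disks and grow the volume group accordingly
```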
Disk mirroring on the VIOS is an advanced feature that, at the time of writing, is
not available on the Web GUI. You can configure it by using VIOS capabilities on
the CLI, but only system logical volumes can be mirrored. The following steps
describe how to provide a mirrored configuration for the rootvg storage pool.
Important: Mirrored logical volumes are not supported as virtual disks. This
procedure mirrors all logical volumes defined in rootvg and must not be run if
rootvg contains virtual disks.
1. Use the IVM to add a second disk to rootvg. From the Storage Management
menu in the navigation area, click Advanced View/Modify Devices.
2. Select the Physical Volumes tab. Select a disk not assigned to any storage
pool. Click Add to storage pool, as shown in Figure 7-34.
SHUTDOWN PROGRAM
Mon Aug 15 10:20:20 CDT 2005
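Assuming hdisk1 is the disk just added to rootvg, the mirroring step itself can be sketched with the VIOS mirrorios command, which mirrors all rootvg logical volumes and then restarts the VIOS, producing shutdown output like that shown above:

```shell
$ mirrorios hdisk1
```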
On the VIOS, virtual disks are created out of storage pools. They are created
using the minimum number of physical disks in the pool. If there is insufficient
space on a single disk, a virtual disk can span multiple disks. If the virtual disks are
expanded, the same allocation algorithm is applied. To guarantee mirror copy
separation, we recommend that you create two storage pools and create one
virtual disk from each of them. After virtual storage is created and made available
as an hdisk to AIX 5L, it is important to correctly map it.
On the IVM, this requires the CLI. The lsmap command provides all the
mapping between each physical and virtual device. For each partition, there is a
separate stanza, as shown in Example 7-23. Each logical or physical volume
displayed in the IVM GUI is defined as a backing device, and the command
provides the virtual storage’s assigned LUN value.
VTD vtscsi1
LUN 0x8100000000000000
Backing device aixboot1
Physloc
VTD vtscsi2
LUN 0x8200000000000000
Backing device extlv
Physloc
VTD vtscsi3
LUN 0x8300000000000000
Backing device hdisk6
Physloc U787B.001.DNW108F-P1-T14-L5-L0
VTD vtscsi4
LUN 0x8400000000000000
Backing device hdisk7
Physloc U787B.001.DNW108F-P1-T14-L8-L0
...
On AIX 5L, use the lscfg command to identify the hdisk using the same LUN
used by the IVM. Example 7-24 shows the command output with the 12-digit
hexadecimal number representing the virtual disk’s LUN number.
Example 7-24 Identification of AIX 5L virtual SCSI disk’s logical unit number
# lscfg -vpl hdisk0
hdisk0 U9113.550.105E9DE-V3-C2-T1-L810000000000 Virtual
SCSI Disk Drive
PLATFORM SPECIFIC
Name: disk
Node: disk
Device Type: block
Configure the adapter to create the array before installing the VIOS. To do this
operation, boot the system with the stand-alone diagnostic CD and enter the
adapter’s setup menu as described in 6.12, “SAS hardware RAID configuration”
on page 170. After you create the array and finish formatting, install the VIOS.
During the installation, the VIOS partition’s rootvg is created on the array. Provide
disk space for logical partitions using the logical volumes created on the rootvg
storage pool.
Perform adapter maintenance using the IVM command line with the diagmenu
command to access diagnostic routines. Figure 7-35 shows the menu related to
the SCSI RAID adapter. It enables you to modify the array configuration and to
handle events such as the replacement of a failing physical disk.
There is only one backup file on the VIOS at a time, and a new backup
file replaces an existing one. In the work area, you can select this file name and
save it on your workstation, where the browser is running. This allows you to
specify a file name and save several copies.
The backup file contains the logical partition’s configuration such as processors,
memory, and network and virtual disk assignment. The content of the virtual
disks is not included in the backup file.
To perform a restore operation, the system must not have any logical partition
configuration defined. If necessary, use the following command from the CLI
before the restore.
lpcfgop -o clear
You can also back up and restore logical partition configuration information from
the CLI. Use the bkprofdata command to back up the configuration information
and the rstprofdata command to restore it. See the VIO Server and PLM
command descriptions in the Information Center at the following Web page for
more information:
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=
/iphb1/iphb1_vios_commandslist.htm
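A sketch of a backup and restore cycle from the CLI follows; the backup file name is an assumption, and you should verify the exact restore flags against the command descriptions referenced above:

```shell
$ bkprofdata -o backup -f /home/padmin/lparconfig.bak   # save the LPAR configuration
$ lpcfgop -o clear                                      # clear the configuration before restoring
$ rstprofdata -l 1 -f /home/padmin/lparconfig.bak       # full restore (-l 1)
```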
Important: You can use AIX or Linux commands to back up and restore the
virtual disks or physical volumes assigned to the logical partitions.
To create a backup that you can use for the NIM installation described in 7.1,
“VIOS installation in a JS21” on page 194, create a directory, for example,
/home/padmin, and use the following syntax:
backupios -file /home/padmin/backup_loc
Tip: The directory used for the backup can also be a NFS mounted directory,
for example, from the NIM server.
VIOS updates
To update VIOS to the latest fix pack, use the updateios command. Before it
installs the update, the updateios command makes a preview installation and
displays the results. You are prompted to continue or exit at the end of the
preview. Do not install the updates if the preview installation fails for any reason.
Assuming that you have located the update images in /home/padmin/update, the
command syntax is:
updateios -dev /home/padmin/update
Important: Ensure that you have the right level of firmware before updating
the VIOS.
AIX offers an easy to configure network installation server called NIM, which you
can configure on one or more machines in the network. The so-called NIM
master can either be outside the BladeCenter environment or even an IBM
BladeCenter JS20 or JS21. It must be a machine running AIX 5L. If the NIM
master is the first machine running AIX 5L in the given environment, you have to
install it from CD, DVD, or tape.
Note: Before you install any operating system, we recommend that you
upgrade all firmware to the latest available level. You can apply the Flash
BIOS Update of the JS21 itself and the Broadcom Firmware Update for the
integrated Ethernet adapter only by using an already installed operating
system (AIX or Linux) or with the stand-alone diagnostics booted from CD or
network (NIM). See 6.6, “Firmware”, for further information.
Note: If a NIM master already exists, define your client installation on the
existing NIM master and proceed with 8.4, “Installing AIX on the client” on
page 270. Be aware that the NIM master’s AIX version and technology level
(maintenance level) must be at least the same level as the level of the AIX it
offers to the client for installation. You can check this using one of the
following commands:
oslevel -rf
instfix -i | grep AIX_ML
If you want to create a separate file system for the different NIM resources, refer
to 8.3.3, “Configuring the NIM master” on page 263. The NIM setup automatically
creates these file systems, if necessary. See the options Create new filesystem
for LPP_SOURCE and Create new filesystem for Shared Product Object Tree
(SPOT) in Figure 8-6 on page 267.
You can do this by using smitty install_latest and selecting the highlighted
filesets in Figure 8-1.
bos.sysmgt ALL
@ 5.3.0.40 Filesystem Quota Commands
@ 5.3.0.0 License Management
@ 5.3.0.40 Network Install Manager - Client Tools
> + 5.3.0.40 Network Install Manager - Master Tools
> + 5.3.0.30 Network Install Manager - SPOT
@ 5.3.0.40 Software Error Logging and Dump Service Aids
@ 5.3.0.40 Software Trace Service Aids
Figure 8-1 Selecting the NIM fileset
Note: You can also perform the commands that we issue using SMIT from the
command line. Press the F8 key in SMIT to display the used command.
However, most of the NIM functions are performed by a more complex script
in SMIT. In this example, it is more convenient to copy the appropriate entry
from $HOME/smit.script to another file, for example, /tmp/smit.test. The entry
might start with:
#
# [May 30 2006, 20:51:47]
#
mkres()
Insert an echo statement in the beginning of the line with the nim -o ...
command:
echo nim -o ...
Select the option Configure the NIM Environment. This opens the Configure a
Basic NIM Environment (Easy Startup) screen shown in Figure 8-3.
Define a Machine
[Entry Fields]
* Host Name of Machine [js21client]
(Primary Network Install Interface)
Figure 8-4 NIM: Selecting the client’s host name
Attention: The NIM master must be able to resolve the client’s Internet
Protocol (IP) address from the host name. In our example, the host name is
js21client. This can be from a name server, /etc/hosts, or another supported
resolver method. When you use /etc/hosts, list the short name in addition to
the fully qualified host name:
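An /etc/hosts entry of the following form satisfies this; the IP address and domain are placeholders for your environment:

```
9.3.5.231   js21client.itsc.austin.ibm.com   js21client
```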
When you press Enter, and if the NIM master cannot resolve the js21client
name, you see the screen shown in Figure 8-5 in the SMIT.
Define a Machine
[Entry Fields]
* NIM Machine Name [js21client]
* Machine Type [standalone] +
* Hardware Platform Type [chrp] +
Kernel to use for Network Boot [mp] +
Communication Protocol used by client [] +
Primary Network Install Interface
* Cable Type tp +
Network Speed Setting [] +
Network Duplex Setting [] +
* NIM Network network1
* Host Name js21nim
Network Adapter Hardware Address [001125c9113c]
Network Adapter Logical Device Name []
IPL ROM Emulation Device [] +/
CPU Id []
Machine Group [] +
Comments []
Figure 8-6 NIM: Defining a machine
[TOP]
> lpp_source_aix_534 lpp_source
> spot_aix_534 spot
Figure 8-7 NIM: Selecting the resources
Tip: For a basic installation, you have to select at least a SPOT and an
lpp_source resource.
[Entry Fields]
Target Name js21client
Source for BOS Runtime Files rte
installp Flags [-agX]
Fileset Names []
Remain NIM client after install? yes
Initiate Boot Operation on Client? no
Set Boot List if Boot not Initiated on Client? no
Force Unattended Installation Enablement? no
ACCEPT new license agreements? [no]
Figure 8-9 NIM: Performing the client installation operation
In this case, use the reset operation as shown in Figure 8-8 on page 269.
Tip: The following description shows the manual setup of the client’s
installation dialog. To avoid manual input on every client, NIM offers a
non-prompted installation method also. See the following Web site:
http://publib.boulder.ibm.com/infocenter/pseries
---------------------------------------------------------------------------
Welcome to AIX.
boot image timestamp: 03:00 05/15
The current time and date: 16:07:33 05/16/2006
number of processors: 4 size of memory: 3968MB
boot device:
/pci@8000000f8000000/pci@2/ethernet@4,1:9.3.5.228,,9.3.5.231,0.0.0.0,00,00
kernel size: 11000928; 32 bit kernel
---------------------------------------------------------------------------
Figure 8-10 AIX welcome screen
Tip: If the correct number for the selection that you want is already
displayed between the square brackets (default), press Enter without
entering the number.
Type the number of your choice and press Enter. Choice is indicated
by >>>.
88 Help ?
99 Previous Menu
Either type 0 and press Enter to install with current settings, or type the
number of the setting you want to change and press Enter.
1 System Settings:
Method of Installation.............New and Complete Overwrite
Disk Where You Want to Install.....hdisk0
+-----------------------------------------------------
88 Help ? | WARNING: Base Operating System Installation will
99 Previous Menu | destroy or impair recovery of ALL data on the
| destination disk hdisk0.
>>> Choice [0]:1
Figure 8-14 Installation and language options
Attention: If the target hard disk already contains valuable data, this data is
overwritten or modified.
2 Preservation Install
Preserves SOME of the existing data on the disk selected for
installation. Warning: This method overwrites the usr (/usr),
variable (/var), temporary (/tmp), and root (/) file systems.
Other product (applications) files and configuration data will
be destroyed.
88 Help ?
99 Previous Menu
Type one or more numbers for the disk(s) to be used for installation and press
Enter. To cancel a choice, type the corresponding number and press Enter.
At least one bootable disk must be selected. The current choice is indicated
by >>>.
Important: You have to verify the actual location code (for example,
01-08-00-1,0) of the hard disk. You have to do this because the logical name
for the hard disks (for example, hdisk0) that is displayed in this menu can be
different from the logical name for the same hard disk that is listed within the
AIX operating system (for example, from the lspv command) that runs on the
same machine. This can happen when disks are added after AIX is installed.
When you define the client, you can select the Communication Protocol used
by client. Figure 8-17 shows the corresponding SMIT screen and the selection
box that you get when you press F4 on this item. You can select shell, which
uses the insecure remote shell (rsh) protocol, or nimsh, which can use the more
secure Secure Sockets Layer (SSL) protocol.
Define a Machine
[Entry Fields]
* NIM Machine Name [js21client]
* Machine Type [standalone] +
* Hardware Platform Type [chrp] +
Kernel to use for Network Boot [mp] +
Communication Protocol used by client [] +
Primary Network Install Interface
+----------------------------------------------------------------------+
|                Communication Protocol used by client                 |
|                                                                      |
|  Move cursor to desired item and press Enter.                        |
|                                                                      |
|    shell = RSH Protocol is used by client                            |
|    nimsh = NIM Service Handler is used by client                     |
|                                                                      |
|  F1=Help    F2=Refresh    F3=Cancel                                  |
|  F8=Image   F10=Exit      Enter=Do                                   |
|  /=Find     n=Find Next                                              |
+----------------------------------------------------------------------+
Figure 8-17 NIM communication protocol used by the client
When you choose nimsh, a daemon named nimsh runs on the client. The
communication with the NIM server uses SSL. For this, you must install SSL on
the server and the client. To do this, use:
# rpm -i openssl-0.9.7g-1.aix5.1.ppc.rpm
[Entry Fields]
* Enable Cryptographic Authentication [enable]
for client communication?
[Entry Fields]
* Communication Protocol used by client [nimsh]
The line containing connect = nimsh must have the word secure at the end.
Attention: It is possible that on a JS21 the CPUID is displayed with all zeroes.
You can test this with the following command:
uname -m
In this case, you have to update the BIOS of the JS21 to at least Version
240.470.014. If you get the following error from NIM on the client, this might be
the reason.
0042-172 NIMkid: This machine's CPU ID does not match the CPU ID
stored in the NIM database.
AIX uses similar services and protocols, and the Network Installation Manager
(NIM) utility provided by any AIX installation uses these services and protocols to
enable AIX installations using a network. Hence it might be possible to use a
running AIX server to install Linux using the network. However, this book does
not cover this topic.
Important: Before installing any operating system, update all firmware to the
latest level. See 6.6, “Firmware” on page 94 for instructions about how to
complete this task.
Note regarding VIOS: All procedures described in this chapter are also true
for Linux installations on logical partitions (LPARs) on BladeCenter JS21, but
there might be some minor changes in some actions or procedures. Especially
the soft reboot function does not always work if Linux is running on an LPAR.
In all cases, use the System Management Services (SMS) menu instead of
the BladeCenter management module. For more information about VIOS and
LPARs, see Chapter 7, “Installing and managing the Virtual I/O Server” on
page 193.
You can transfer some of the information provided in the following sections to a
CD/DVD installation, but most information such as preparing an infrastructure or
configuring the boot image are only helpful for a network installation.
Also note that during the initial startup of the installation, Red Hat provides the
opportunity to run a media check on the CD media before using the CD for the
installation. If you select this option, it runs the check and then ejects the
media at the completion of the check. You have to close the CD tray again
before starting the installation.
Tip for Microsoft Windows Telnet users: The Serial over LAN Setup Guide
provides instructions about how to use the Telnet command to begin your
console sessions. You can find this guide at the following Web site:
http://www-307.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm
&lndocid=MIGR-54666&velxr-layout=print
An alternative to using the Telnet command is the putty.exe, which you can
download from multiple Internet sources. Keyboard usage is much easier to
follow and alter than with Windows Telnet, and monitoring the screen output is
easier.
Note: If a firewall is running on your installation server, ensure that you update
the settings to allow traffic for your installation protocol.
Note: The directory you use for the configuration files depends on the
distribution. The following directories are possible examples:
/etc/
/etc/sysconfig/
/etc/default/
/etc/xinetd.d/ (eXtended InterNET daemon configuration files)
The examples in this chapter use the most common directories. In general,
the name of a configuration or script file is related to the name of the installed
package. For example, if a DHCP daemon is called dhcp3-server, you can
find the configuration in /etc/dhcp3-server.conf and
/etc/sysconfig/dhcp3-server, and the start/stop script is
/etc/init.d/dhcp3-server.
deny unknown-clients;
not authoritative;
default-lease-time 600;
host js21-eth0 {
    fixed-address 192.168.1.100;
    hardware ethernet 00:11:25:C9:0B:A6;
}
host js21-eth1 {
    fixed-address 192.168.1.101;
    hardware ethernet 00:11:25:C9:0B:A7;
    filename "/tftpboot/sles9_sp3_basic";
}
}
}
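The fragment above is an excerpt and omits its enclosing declarations. One possible complete /etc/dhcpd.conf built around it is sketched below; the subnet and netmask values are assumptions for illustration, not taken from the lab setup:

```
# /etc/dhcpd.conf - minimal sketch; subnet values are example assumptions
deny unknown-clients;
not authoritative;
default-lease-time 600;

subnet 192.168.1.0 netmask 255.255.255.0 {
    host js21-eth0 {
        fixed-address 192.168.1.100;
        hardware ethernet 00:11:25:C9:0B:A6;
    }
    host js21-eth1 {
        fixed-address 192.168.1.101;
        hardware ethernet 00:11:25:C9:0B:A7;
        filename "/tftpboot/sles9_sp3_basic";
    }
}
```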
You can find the start and stop scripts of Linux services in the /etc/init.d/
directory. To start the standard DHCP daemon, use the /etc/init.d/dhcpd
start command. To restart the DHCP daemon, use the /etc/init.d/dhcpd
restart command.
Tip for Linux beginners: The following tasks help you to double-check or
troubleshoot a configuration in general.
To trace the messages of running services, type tail -f -n 10
/var/log/messages. This shows the last 10 messages and updates
automatically when new messages arrive.
Connect to a running service with a local client, a remote client, or both,
and try to retrieve the data that you expect.
Make sure that a changed configuration is activated by restarting the service
directly after editing, for example:
a. vi /etc/dhcpd.conf
b. /etc/init.d/dhcpd restart
The standard FTP daemon is called ftpd, but other FTP daemons are available.
In this book, we use vsftpd. As with the TFTP daemon, there are several ways to
start and configure an FTP service. In this case, we use the xinetd super
daemon to start the FTP daemon, so the network configuration is stored in
/etc/xinetd.d/vsftpd. See Example 9-4.
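Example 9-4 is not reproduced in this copy of the text. A typical xinetd entry for vsftpd has roughly the following shape; the server path and attribute values shown here are common defaults, not values verified against the lab configuration:

```
# /etc/xinetd.d/vsftpd - sketch of a typical entry (values are assumptions)
service ftp
{
    socket_type     = stream
    protocol        = tcp
    wait            = no
    user            = root
    server          = /usr/sbin/vsftpd
    disable         = no
}
```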
This is highlighted in bold in Example 9-5. To enable the FTP service and restart
xinetd, use these commands:
1. chkconfig vsftpd on
2. /etc/init.d/xinetd restart
2 This is the directory for anonymous login.
Tip: To test whether an FTP or TFTP service is running, use the following
command on the FTP or TFTP server and look for LISTEN connections:
netstat -a | grep ftp
If you can connect to the FTP server with an FTP client or even download
data, but later the installation stops right after booting from the boot image, in
most cases, the installation directory is not set up correctly. See 9.4.2,
“Preparing the installation source for Red Hat Enterprise Linux” on page 324.
The next step is the preparation of the installation source directory and the
corresponding service. Here the preparation depends on the distribution to be
installed. Therefore, we document this in separate sections: 9.3, “Installing SLES
using the network”, and 9.4, “Installing Red Hat Enterprise Linux AS 4 Update 3
using the network” on page 321.
Note: SLES switches the boot sequence of the boot devices automatically
from Network - BOOTP to Hard drive X after the installation. Here X can be in
the range from 0 to 3.
After this final step, the basic preparation for the network installation is complete.
There are different ways (shown as decision symbols in Figure 9-1 on page 292)
to install SLES:
Decide whether you want to define boot parameters such as the installation
server, installation source directory, or transfer protocol at the Open
Firmware prompt, also called the Open Firmware interface. To learn more about
the Open Firmware prompt in general, see 6.9, “Open Firmware interface” on
page 152. Information is also available in 9.3.5, “Unattended installation with
SLES” on page 311. In most cases, the Open Firmware method is not used to
install SLES. A more convenient method is described in the following
section.
You can predefine the installation server, installation source directory, and
transfer protocol. You can do this if the installation is attended or unattended.
To predefine the parameters by saving the necessary information in a boot
image, see 9.3.4, “Configuring the boot image file with mkzimage_cmdline”
on page 306.
This basic installation procedure is described in 9.3.3, “Basic attended SLES
network installation” on page 299. The tasks to enable a fully unattended
installation are explained in 9.3.5, “Unattended installation with SLES” on
page 311.
Figure 9-1 SLES network installation flow: set Network – BOOTP as the first
boot device via the MM or AMM, and power on the BladeCenter JS21. If the
install server, protocol, and source path are not predefined, set them in the
initial installation menu. Then choose an attended or unattended installation;
for an attended installation, configure the provided options in the main
installation menu. Finally, the data is transferred from the install server to
the BladeCenter JS21.
Repeat this step for all the CDs/DVDs, changing the file name as appropriate.
You can also increase the bs parameter, which controls the block size: the
larger the block size, the more RAM the dd process uses, but the faster it runs.
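The dd invocation itself can be sketched as follows. A scratch file stands in for the optical drive so the commands can run anywhere; on a real system the input would be the device node (for example, if=/dev/cdrom), and as noted below the device must not be mounted:

```shell
# Create a stand-in for the raw CD device (4 MB of zeros).
dd if=/dev/zero of=/tmp/fake_media bs=1M count=4 2>/dev/null

# Copy the raw media into an ISO image file. A larger bs means a larger
# RAM buffer per read, but fewer read/write cycles, so the copy runs faster.
dd if=/tmp/fake_media of=/tmp/fake.iso bs=1M 2>/dev/null

# Verify that the image is a byte-for-byte copy of the media.
cmp -s /tmp/fake_media /tmp/fake.iso && echo "image verified"
```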
Important: Ensure that the CD/DVD is not mounted before beginning the dd
process. Also ensure that the destination of the ISO has enough space to
store all the data. One CD ISO image file typically requires 650 MB and a
single layer DVD ISO image file requires up to 4.7 GB of hard drive space.
Note: We created our ISO files on a remote server and then transferred them
to our installation server within the BladeCenter to take advantage of its fast
network interfaces.
To fuse these parts, use the following command on a Linux operating system:
3 An ISO image is a full CD or DVD image of an ISO 9660 file system.
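The command itself did not survive in this copy of the text. Joining downloaded ISO parts into a single image is typically done with cat; the part file names below are stand-ins so the sketch can run anywhere:

```shell
# Stand-in part files (in practice: the downloaded ISO segments, in order).
printf 'first half'  > /tmp/part1
printf 'second half' > /tmp/part2

# Concatenate the parts, in order, into the final ISO image.
cat /tmp/part1 /tmp/part2 > /tmp/full.iso

cat /tmp/full.iso   # prints: first halfsecond half
```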
Important: To avoid problems, the SLES version that you use on the
installation server must be equal to or later than the SLES version that is
prepared as the installation source.
Copy and rename the bootable image to the public TFTP boot directory of the
TFTP service:
cp /mnt/loop/sles9_sp3/SUSE-SLES-Version-9/CD1/install
/srv/tftp/tftpboot/sles9_sp3_basic
Note: In general, use the bootable image that is shipped with the
distribution, because it supports many different hardware drivers and
software versions. You can also build a new bootable image.
The location of the boot image file /srv/tftp/tftpboot/sles9_sp3_basic
during the boot process is defined by two entries:
– The setting of the public TFTP boot directory (for example, /srv/tftp/)
defined in the TFTP server configuration file (for example,
/etc/xinetd.d/tftpd). See the bold line in Figure 9-2 on page 287 or
Figure 9-3 on page 287.
– The file name parameter in /etc/dhcpd.conf. See the bold line in
Figure 9-1 on page 285.
If there are any changes in the configuration files of a service, restart the
services to activate the changes (see 9.2, “Basic preparations for a Linux
network installation” on page 283).
Tip: You can avoid the manual setup shown in this part by using the
mkzimage_cmdline tool from SUSE (see 9.3.4, “Configuring the boot image file
with mkzimage_cmdline” on page 306).
4 This bootable image is also called zimage or bzimage.
Note: Ensure that the boot sequence is set to enable network boot using
Network - BOOTP. Additionally, do not set the BOOTP configuration of the
BladeCenter JS21 to a directed BOOTP request. Set the server IP and client
IP to 0.0.0.0, as shown in Figure 9-7.
.
.
BOOTP: chosen-network-type = ethernet,auto,none,auto
BOOTP: server IP = 0.0.0.0
BOOTP: requested filename =
BOOTP: client IP = 0.0.0.0
BOOTP: client HW addr = 0 11 25 c9 b a7
BOOTP: gateway IP = 0.0.0.0
BOOTP: device /pci@8000000f8000000/pci@2/ethernet@4,1
BOOTP: loc-code U788D.001.23A1137-P1-T8
BOOTP R = 1 BOOTP S = 2
FILE: /tftpboot/sles9_sp3_basic
.
.
>>> SUSE Linux Enterprise Server 9 installation program v1.6.36 (c)
1996-2004 SUSE Linux AG <<<
*** Could not find the SUSE Linux Enterprise Server 9 Installation CD.
1) Bosnia
2) Cestina
3) Deutsch
4) English
5) Español
6) Français
7) Hellenic
8) Italiano
9) Japanese
10) Magyar
11) Nederlands
12) Polski
13) Português
14) Português Brasileiro
15) Russian
16) Slovencina
> _
Figure 9-8 SoL after operating system booted from the original SLES9 boot image
Note: If you use hardware that is present during the installation process
but is not supported by the SLES installation kernel, you can load
additional kernel modules by typing 3 before starting the installation.
Main Menu
1) Settings
2) System Information
3) Kernel Modules (Hardware Drivers)
4) Start Installation or System
5) Eject CD
6) Exit or Reboot
7) Power off
> _
Figure 9-9 SLES9 main menu
2. The screen shown in Figure 9-10 opens. Type 1 to start the installation or
update.
> _
Figure 9-10 SLES9 installation menu
1) CD-ROM
2) Network
3) Hard Disk
> _
Figure 9-11 SLES9 source menu
4. You are prompted for the network protocol as shown in Figure 9-12. In our lab,
we set up our installation source to use FTP. In this screen, we specify option
1.
1) FTP
2) HTTP
3) NFS
4) TFTP
> _
Figure 9-12 SLES9 protocol menu
5. You are now prompted for the network options as shown in Figure 9-13.
a. The menu provides options to choose an Ethernet device, for example,
eth1. (We choose eth1 because eth0 is used by SoL.)
b. If a DHCP server is running in the subnet (see 9.2, “Basic preparations for
a Linux network installation” on page 283), you can type 1 (Yes) to configure
the network using DHCP, as we do in our case.
c. If no DHCP proxy is running, type 2 (No).
d. If the DHCP service serves data about the installation server and
directory, that data is presented as the default, but you can also choose a
different configuration.
> 2
1) Yes
2) No
> 1
Sending DHCP request...
1) Yes
2) No
> 2
1) Yes
2) No
> 2
Trying to connect to the FTP server...
You can avoid the manual setup shown in this section by using the
mkzimage_cmdline tool from SUSE (see 9.3.4, “Configuring the boot image file
with mkzimage_cmdline” on page 306).
Note for experts: You can build the mkzimage_cmdline binary for other
architectures using a C or C++ compiler and the source code file
mkzimage_cmdline.c. The binary on the SLES9 installation media runs only
on a Performance Optimization With Enhanced RISC (POWER) system
running Linux. This binary does not ship on SLES9 installation CD-ROMs for
the x86 architecture.
To add new options, use -s "STRING", where STRING is a variable that can
contain several options, as shown in Table 9-1. To activate the options saved in
the boot image file, use -a 1; to deactivate them, use -a 0.
./mkzimage_cmdline -a 1 -s "STRING" /srv/tftp/tftpboot/install
netmask=Y.Y.Y.Y
Defines the subnet mask of the Ethernet boot device as Y.Y.Y.Y.
usessh=1 sshpassword=asyoulike
Enables a Secure Shell (SSH) service on the BladeCenter JS21 to provide a
secure user interface during the installation. If an X server is running on
the client connected to the BladeCenter JS21, you can use a graphical
interface in combination with secure data communication.
a. We did not verify all the options.
Note: When you install multiple blades, avoid setting the hostip network
parameter within a bootable image, because doing so overrides the
configuration from the DHCP server and causes problems. Instead, we
recommend that you use the DHCP server to assign IP addresses.
The following examples enable attended installation without using SoL. SoL can
still be helpful when starting the installation: when you use VNC as part of the
installation, the SoL screen shows you the IP address to connect to.
Example 9-9 Configuring the boot image to enable installation using SSH through eth1
./mkzimage_cmdline -a 1 -s "install=ftp://192.168.1.254/sles9_sp3
usessh=1 sshpassword=mysshpassword netdevice=eth1"
/srv/tftp/tftpboot/install
The command shown in Example 9-10 is similar, but enables VNC connections
to the system during the installation process.
Example 9-10 Configuring the boot image to enable installation using VNC through eth1
./mkzimage_cmdline -a 1 -s "install=ftp://192.168.1.254/sles9_sp3 vnc=1
vncpassword=myvncpassword netdevice=eth1" /srv/tftp/tftpboot/install
Example 9-11 shows this command usage after preparing the boot image (as
shown in Example 9-9).
Start the BladeCenter JS21 and connect using an SSH client or VNC client to
complete the installation. After a short time, the boot process finishes, and
you can use an SSH client or VNC client to log on to the installation screen of
YaST without SoL. To confirm whether a BladeCenter JS21 is ready for login, use
ping or follow the process by tracking the log file on the installation server.
When the connection is established, follow the instructions on the screen.
Figure 9-15 shows the last lines of the SoL output of the second installation part.
An error message might be displayed during the second connection attempt, as
shown in Figure 9-16. To resolve this problem, delete the corresponding entry
in ~/.ssh/known_hosts5.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle
attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
43:9e:49:6e:84:b8:e8:cf:8c:e1:e2:8e:52:64:4f:79.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this
message.
Offending key in /root/.ssh/known_hosts:2
RSA host key for 192.168.1.101 has changed and you have requested
strict checking.
Host key verification failed.
Figure 9-16 SSH error message during the second connection attempt
5 The tilde (~) is a placeholder for the user's home directory and is resolved automatically by the shell.
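One way to delete the offending entry is with sed, because the error message names the exact line (Offending key in /root/.ssh/known_hosts:2). The demo below works on a stand-in file so it can run anywhere; on a real system the target would be ~/.ssh/known_hosts:

```shell
# Stand-in known_hosts with the stale key on line 2, as in the message.
printf 'hostA ssh-rsa AAAA1\n192.168.1.101 ssh-rsa STALE\nhostB ssh-rsa AAAA3\n' \
    > /tmp/known_hosts

# Delete the reported line; making a backup copy first is a good idea.
sed -i '2d' /tmp/known_hosts

grep 192.168.1.101 /tmp/known_hosts || echo "stale entry removed"
```

Recent OpenSSH versions also provide ssh-keygen -R hostname to remove all keys for a given host in one step.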
***
*** You can connect to 192.168.1.101, display :1 now with
vncviewer
*** Or use a Java capable browser on
http://192.168.1.101:5801/
***
(When YaST2 is finished, close your VNC viewer and return to this
window.)
Apart from general explanations, this section describes how to build the XML file
shown in Appendix A, “SUSE Linux Enterprise Server 9 AutoYaST XML file” on
page 419. There are many optional settings, but some settings are mandatory or
carry dependencies. For example, if you configure the firewall for eth0 and
eth1, you must also define the Ethernet interfaces eth0 and eth1.
1. Start the YaST application, which opens a window as shown in Figure 9-18.
Launch the Autoinstallation applet from the Misc section of YaST.
There are five menu options (File, View, Classes, Tools, and Preferences) and
three ways to create an AutoYaST file, apart from loading an existing XML file
using File → Open.
6 A predecessor to AutoYaST.
Figure 9-20 Creating AutoYaST file from the installation server configuration
After this step, continue with some of the changes using the AutoYaST tool as
described in the following section.
– Hardware
Configure Partitioning, Audio, Printing, and Graphics Card and Monitor, if
necessary. Only the Partitioning settings are critical, but you have to
change them manually later. Therefore, leave all options as they are or
use an alternative partitioning configuration as a placeholder.
– System
Set the general system information such as language configuration, time
zone, other locale-related settings, logging, and run-level information in
this option. The most important configuration is the Boot Loader
Example 9-12 A partitioning configuration not working in the newly created XML file
.
<partitioning config:type="list">
  <drive>
    <device>/dev/sda</device>
    <partitions config:type="list">
      <partition>
        <partition_id config:type="integer">65</partition_id>
        <partition_nr config:type="integer">1</partition_nr>
        <region config:type="list">
          <region_entry config:type="integer">0</region_entry>
          <region_entry config:type="integer">0</region_entry>
        </region>
        <size>-2097151</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">swap</filesystem>
        <format config:type="boolean">true</format>
        <mount>swap</mount>
        <partition_id config:type="integer">130</partition_id>
        <partition_nr config:type="integer">2</partition_nr>
        <region config:type="list">
          <region_entry config:type="integer">8</region_entry>
          <region_entry config:type="integer">512</region_entry>
        </region>
        <size>1071644673</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
Example 9-13 shows a changed configuration with two drives that works
correctly. A complete functional XML file, including partition information, is
shown in Appendix A, “SUSE Linux Enterprise Server 9 AutoYaST XML file” on
page 419. After the changes in the partitioning section, the XML file is ready
to use.
http://www.suse.com/~ug/autoyast_doc/
Keep in mind that some of the options depend on the architecture or SLES
version.
Issue the command shown in Example 9-14 on a single line (without line breaks)
to write the configuration to the bootable image sles9_sp3_basic_auto. The file
/srv/tftp/tftpboot/sles9_sp3_basic_auto must already exist to perform the operation.
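The command in Example 9-14 follows the same pattern as Examples 9-9 and 9-10. A sketch of its general shape is shown below, with hypothetical server and file values; the autoyast= option points the installer at the XML control file (enter the command on a single line):

```
./mkzimage_cmdline -a 1 -s "install=ftp://192.168.1.254/sles9_sp3
autoyast=ftp://192.168.1.254/autoyast/autoinst.xml netdevice=eth1"
/srv/tftp/tftpboot/sles9_sp3_basic_auto
```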
After the installation, which might take some time, the newly installed server
should be reachable using SSH at the IP address configured by the DHCP server.
Attention: Red Hat Enterprise Linux does not switch the boot sequence of the
boot devices after the installation. Change it manually using the BladeCenter
management module, as described in 6.5, “Blade server configuration” on
page 90.
After this final step, the basic preparation of the network installation is
complete, but there are different ways to proceed, shown as decision symbols in
Figure 9-23 on page 323.
One possibility is installation without a defined installation server,
installation source directory, or transfer protocol, which in most cases also
means an attended installation. This basic installation procedure is described
in 9.4.3, “Basic attended Red Hat Enterprise Linux network installation” on
page 326.
Defining boot parameters such as the installation server, installation source
directory, or transfer protocol at the Open Firmware prompt is mostly used in
combination with an unattended installation. To learn more about the Open
Firmware prompt in general, see 6.9, “Open Firmware interface” on page 152.
More specific information about unattended installation is provided in 9.4.4,
“Unattended installation with Red Hat Enterprise Linux” on page 336.
Figure 9-23 Red Hat Enterprise Linux network installation flow: set Network –
BOOTP as the first boot device via the MM or AMM, and power on the BladeCenter
JS21. Set the install server, protocol, and source path in the RHEL initial
installation menu, then configure the provided options in the main installation
menu. The data is then transferred from the install server to the BladeCenter
JS21.
Because it is much easier to work with ISO images, the first step is to create ISO
images. You can name the ISO files as follows:
Red Hat Enterprise Linux CD no. X: RHEL4-U3-ppc-AS-CDX.iso
Repeat the previous step for all the CDs/DVDs, changing the file name as
appropriate. You can also increase the bs parameter, which controls the block
size: the larger the block size, the more RAM the dd process uses, but the
faster it runs.
Important: Ensure that you do not mount the CD/DVD before beginning the
dd process. Also ensure that the destination of the ISO has enough space to
store all the data. One CD ISO image file typically requires 650 MB and a
single layer DVD ISO image file requires up to 4.7 GB of hard drive space.
Note: We created our ISO files on a remote server and then transferred them
to our installation server within the BladeCenter to take advantage of its fast
network interfaces.
Now copy the installation data to the FTP directory. Because we are handling
five CD-ROMs, we show how to process all five with single commands.
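A loop of this shape processes all five discs in one pass. The sketch below substitutes plain directories for the loop-mounted ISOs so that it can run anywhere; on the installation server, each source would first be mounted with mount -o loop, and the target would be the FTP tree (for example, /srv/ftp/rhel4):

```shell
# Simulate the five mounted CD-ROMs with directories (stand-ins).
for i in 1 2 3 4 5; do
    mkdir -p /tmp/cd$i
    echo "content of disc $i" > /tmp/cd$i/disc$i.txt
done

# Copy every disc's contents into the single installation tree.
mkdir -p /tmp/ftp/rhel4
for i in 1 2 3 4 5; do
    cp -a /tmp/cd$i/. /tmp/ftp/rhel4/
done

ls /tmp/ftp/rhel4 | wc -l   # prints: 5
```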
Note: In general, use the bootable image that is shipped with the
distribution to avoid trouble with different hardware drivers or software
versions.
6. The location of the boot image file /srv/tftp/tftpboot/rhel_as_4_u3.img
during the boot process is defined by two entries:
– The setting of the public TFTP boot directory (for example, /srv/tftp/)
defined in the TFTP server configuration file (for example,
/etc/xinetd.d/tftpd). See the bold text in Figure 9-2 on page 287 or
Figure 9-3 on page 287.
– Adjust the file name parameter in /etc/dhcpd.conf to:
filename "/tftpboot/rhel_as_4_u3.img";
For more information, see 9.2.2, “Configuring a BOOTP or DHCP service”
on page 285.
After this step, all basic preparations to install Red Hat Enterprise Linux using
the network are finished.
If you have fulfilled the prerequisites, the integrated TFTP client of the
BladeCenter JS21 requests a bootable image7 during the boot process. This
section describes the preparations to provide this bootable image from the
prepared TFTP server.
Attention: The Red Hat Enterprise Linux installation does not set the boot
sequence to the hard drive after a network installation. You must set this
manually with the BladeCenter management module or one of the boot menus.
Alternatively, you can configure the BladeCenter JS21 to boot from the hard
drive and, before the installation, open the boot menu and manually select
booting from the network.
7 This bootable image is also called zimage or bzimage.
.
.
BOOTP: chosen-network-type = ethernet,auto,none,auto
BOOTP: server IP = 0.0.0.0
BOOTP: requested filename =
BOOTP: client IP = 0.0.0.0
BOOTP: client HW addr = 0 11 25 c9 b a7
BOOTP: gateway IP = 0.0.0.0
BOOTP: device /pci@8000000f8000000/pci@2/ethernet@4,1
BOOTP: loc-code U788D.001.23A1137-P1-T8
BOOTP R = 1 BOOTP S = 2
FILE: /tftpboot/netboot.img
FINAL Packet Count = 12149
FINAL File Size = 6220088 bytes.
load-base=0x4000
real-base=0xc00000
<Enter> to reboot
Figure 9-32 SoL after Red Hat Enterprise Linux installation is complete
In the case of a Red Hat Enterprise Linux installation, you cannot reduce the
effort or avoid the SoL connection during a network installation, because there
is no tool comparable to mkzimage_cmdline. However, you can prepare an
unattended installation.
Note for Red Hat Enterprise Linux 3 user: Be aware of the changes from
Red Hat Enterprise Linux 3 to Red Hat Enterprise Linux 4. For example, Red
Hat Enterprise Linux 3 uses redhat-config-xxx to start most of the
administration tools; Red Hat Enterprise Linux 4 uses system-config-xxx
instead.
Kickstart is not included in the default software installation. If the Red Hat
Enterprise Linux installation source is stored in the directory rhel4 on an FTP
server with the IP address 192.168.1.254, the first step is to start the package
management by issuing the following command:
system-config-packages -t ftp://192.168.1.254/rhel4
Figure 9-33 Kickstart main window with Basic Configuration panel (RHEL4)
5. The next panel is the Authentication panel. In this case, we use the
default settings.
8. Manually adjust the Kickstart configuration file that you have created. The
basic Kickstart configuration file created with the Kickstart Configurator is
shown in Example 9-15.
#System language
lang en_US
Important: The order of the main sections in the Kickstart configuration file is
important for the functionality.
You can find the fully functional Kickstart configuration file with some additional
packages and partitioning information in Appendix B, “Red Hat Enterprise
Linux 4 Kickstart file” on page 427.
You can use the Web interface for the management of other BladeCenter
resources, such as I/O modules, and the retrieval of system health information.
You can also configure BladeCenter-specific features such as the Serial over
LAN (SoL) from the Web interface.
IBM Director automates many of the processes that are required to manage
systems proactively, including capacity planning, asset tracking, preventive
maintenance, diagnostic monitoring, troubleshooting, and more. It has a
graphical user interface (GUI) that provides easy access to both local and remote
systems.
At the time of writing this book, IBM Director V5.10.2 was the latest version
available and is the version we used for our testing for this topic. This latest
version includes:
Broader platform coverage for use in a heterogeneous environment that
includes IBM System p5, eServer p5, eServer i5, and eServer pSeries
A new streamlined interface to boost productivity
A new command-line interface in addition to the graphical interface
Lightweight agents for easy deployment
From the IBM Systems Software Information Center link, select Topic
overview → IBM Director.
The IBM Director CD is shipped with IBM BladeCenter chassis but is not shipped
with IBM blade servers. For the purposes of our review and testing, we
downloaded the necessary files and also the images as instructed on the Web
site.
IBM Director can gather some information from a blade server before the IBM
Director Agent or IBM Director Core Services is installed on the blade server.
The information is gathered from the blade server by way of the BladeCenter
management module. In the IBM Director Console, the blade server is
represented by a physical platform managed object. However, after you install
IBM Director Agent or IBM Director Core Services on the blade server, it is a
managed object, and the features and functions that you can use on the blade
server are comparable to those that you can use on any managed object.
Note: When you install IBM Director Agent or IBM Director Core Services on a
blade server, the supported tasks depend on the operating system (OS) that
you install on the blade server.
The instructions provided for downloading and installing from the ISO CD image
for the IBM Director Agent for AIX also worked as shown in the Web site
referenced previously.
Figure 10-1 reflects the selection of Chassis and Chassis Members group. The
only editing necessary for the discovered objects is to change the names for
convenience.
At this point, the IBM Director Agent (Level 2) is not installed on any of the
blades except the one JS20 functioning as the IBM Director Server. As you can
see, the management module interface provides information for the management
module, assorted blades, and Ethernet switches. Basic functions such as power
management (on or off) for the blades are available. However, restarting the
OS, remote sessions, file transfer, and other options are not available without
installing the IBM Director Agent. Refer to Table 10-1 on page 348 for the
functions that are available without the agent on the blades to be managed.
http://publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp
Tip: SUSE Linux Enterprise Server 9 (SLES9) for IBM POWER only: Disable
the Service Location Protocol daemon (SLPD) before you install IBM Director
Core Services. IBM Director Server does not discover Level 1 managed
objects that are running SLPD.
Note: After we installed all the prerequisite RPM files, we experienced the
following scenarios:
With Red Hat V4 Update 3, we experienced an error at the end of the
installation of the IBM Director Agent shell script:
Failed Dependencies compat-libstdc++-33.3.2.3-47.3
lsvpd v0.12.7
librtas 1.2
ppc64-utils 2.5
After ensuring that we had already installed these files at the correct levels,
we issued the twgstart command. The IBM Director Agent processes
started and performed as expected.
With SLES9 Service Pack 3, we experienced no errors with the installation
shell script, but we received an error when we issued the twgstart command:
Failed dependencies: lsvpd>=0.12.7 needed by
pSeriesCoreServices-level1-5.10.2.1-SLES9
We confirmed that we had already installed a more recent lsvpd level and
reissued the twgstart command. The IBM Director Agent then started with
no other problems.
While the initial labeling of the BladeCenter elements is fairly intuitive after
discovery, we re-labeled some of the blades for easier tracking as shown in
Table 10-2.
Blade management
When you install IBM Director Agent on a blade or LPAR, there are additional
functions that you can access. They include remote sessions (consoles), process
management, power management (restart), and others.
The systems that we used for Agent installation and management as shown are:
linux.site (a SLES9 Linux LPAR on a JS21 blade)
js21a1.itsc.austin.ibm.com (an AIX LPAR on a JS21 blade)
js20ibmdirector2 (the IBM Director Server on a JS20 blade)
js20:linuxinstall1 (Red Hat Linux on a JS20 blade)
BladeCenter management becomes much more efficient when you enlist the aid
of IBM Director.
The BladeCenter JS21 is now supported by CSM V1.5.1. There are specific
program temporary fixes (PTFs) for AIX V5.3 that you have to install along with
this version of CSM. Refer to the following Web site and ensure that you have
the authorized program analysis reports (APARs) or the referenced filesets
installed:
https://www14.software.ibm.com/webapp/set2/sas/f/csm/download/csmaix_1.
5.1.1down.html
With the updates applied, you can follow the steps as outlined in CSM 1.5.0 for
AIX 5L and Linux: Planning and Installation Guide, SA23-1344. The basic steps
are as follows:
1. Set up the management server.
2. Set up one or more installation servers (optional).
3. Define the nodes in the cluster.
4. Define non-node devices to the cluster (optional).
5. Install the nodes of the cluster (optional).
6. Add the nodes to the cluster. You can add AIX, Linux, or both AIX and Linux
nodes.
Note that for this exercise, we did not run steps 2 and 4.
After you assign the CD drive to the JS21 using the management module
interface, use the usual AIX commands to assign and mount /dev/cd0. Use the
smitty csm command and you see the screen shown in Figure 10-8.
Selecting Install the Management Server provides options for choosing your
installation source (in our case, the CD), the filesets that you want to
include, and the normal System Management Interface Tool (SMIT) software
installation options.
For our exercise, we used a JS21 as the management server, and we set up an AIX
LPAR and an SLES9 LPAR on one JS21, plus another JS21 running AIX, as compute
nodes.
To access the CSM man pages, add /opt/csm/man to the root user’s $MANPATH
variable on the management server:
export MANPATH=$MANPATH:/opt/csm/man
To verify that this step is completed successfully, issue the following commands:
echo $PATH
echo $MANPATH
Note: The previous examples only show how to change the $PATH and $MANPATH
variables in the current login session. To change them permanently, edit your
login environment.
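For example, appending the export lines to the login profile makes the change permanent. The demo below writes to a stand-in file so it can run anywhere; on the management server the target would typically be root's ~/.profile (the exact file depends on the login shell, and the /opt/csm/bin path for $PATH is an assumption):

```shell
# Stand-in for root's login profile.
PROFILE=/tmp/profile_demo
touch $PROFILE

# Append the PATH and MANPATH changes so every new login picks them up.
echo 'export PATH=$PATH:/opt/csm/bin' >> $PROFILE
echo 'export MANPATH=$MANPATH:/opt/csm/man' >> $PROFILE

grep -c /opt/csm $PROFILE   # prints: 2
```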
Use File Transfer Protocol (FTP) to copy the file to the management server.
Extract the file into the /tmp/csm directory with the tar -xvf command. Then
use smit install with /tmp/csm as the source directory to complete the update
installation.
Tip: This is only necessary for HMC hardware control. It was not necessary
for our purposes in a BladeCenter environment.
You can download the autoupdate software from the following Web site:
http://freshmeat.net/projects/autoupdate
To download the software, select the link under RPM package, and then download
the autoupdate noarch RPM (for example, autoupdate-5.2.5-1.noarch.rpm).
Copy the RPM to a temporary directory, for example, /tmp/csm/RPMS/ppc.
You do not have to install the RPM on the management server. The autoupdate
RPM is required only when you add Linux nodes to the cluster. You can postpone
downloading the autoupdate RPM until you are ready to follow the procedure to
add the Linux node.
If the filesets are not available, use AIX CD-ROM #1 to reinstall them.
You also have to check for the latest Reliable Scalable Cluster Technology
(RSCT) filesets. You can download the latest available from the following Web
site and install them:
https://techsupport.services.ibm.com/server/aix.fdc
RemoteShell uses rsh as the default executable that dsh runs for remote
commands. In our initial setup, we accepted the default and were not able to
communicate with our nodes. We first applied a temporary override to OpenSSH by
using the command export DSH_REMOTE_CMD=/usr/bin/ssh. This allowed
immediate communication with the nodes where we had installed SSH and
had the daemon running. To make OpenSSH the standard remote executable, we
issued the following command:
csmconfig RemoteShell=/usr/bin/ssh SetupRemoteShell=1
You can check the success of the csmconfig command by running it with no
flags, and then checking the output as shown in Figure 10-11. You can also run
the license acceptance using the smit csm command.
AddUnrecognizedNodes = 0 (no)
BMCConsoleEncryptAuth = 1 (yes)
BMCConsoleKeepAlive = 0 (no)
BMCConsolePerMsgAuth = 0 (no)
ClusterSNum =
ClusterTM = 9078-160
DeviceStatusFrequency = 12
DeviceStatusSensitivity = 8
ExpDate = Sat Jul 22 18:59:59 2006
HAMode = 0
HeartbeatFrequency = 12
HeartbeatSensitivity = 8
MaxNumNodesInDomain = -1 (unlimited)
NetworkInstallProtocol = nfs
PowerPollingInterval = 300
PowerStatusMode = 1 (Events)
Properties =
RegSyncDelay = 1
RemoteCopyCmd = /usr/bin/rcp
RemoteShell = /usr/bin/rsh
SetupKRB5 = 0
SetupNetworkInstallProtocol = 1 (yes)
SetupRemoteShell = 1 (yes)
TFTPpackage = tftp-hpa
Figure 10-11 Output from the csmconfig command
These files are primarily used when CSM system management scripts are run on
the nodes of the cluster.
Note: You can combine the -c option with the -L option mentioned in the
previous step. Therefore, for example, you can just run csmconfig -c -L
instead of running the command twice.
Use the following command to set up the management server to place Linux
CSM installation files into their appropriate directories:
copycsmpkgs -p /csminstall/Linux InstallCSMVersion=1.5.1
InstallOSName=Linux InstallDistributionName=SLES
InstallDistributionVersion=9 InstallPkgArchitecture=ppc64
Extract the file in a temporary directory using the gunzip command and then
extract the files from the resulting tar file using the tar -xvf
csm-linux-1.5.1.1.ppc64.tar command. The tar file is 106 MB in size.
Therefore, you must ensure that your temporary directory or your /csminstall file
system has sufficient space to accommodate it.
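A rough space check before extracting can avoid a failed unpack; the following is a sketch, where the 250 MB threshold is our assumption to cover the 106 MB tar file plus its extracted contents, not a figure from the book:

```shell
# Check available space in the target directory before running gunzip/tar.
dir=/tmp
need_kb=250000    # assumed headroom for the tar file plus extracted files
avail_kb=$(df -kP "$dir" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "ok: ${avail_kb} KB free in $dir"
else
    echo "warning: only ${avail_kb} KB free in $dir; extract elsewhere" >&2
fi
```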
Step 16: Store hardware control point user IDs and passwords
This step covers several configuration options including HMC managed IBM
System p devices.
BladeCenter HS20 (other than HS20-8678), HS40, JS20, and JS21 servers
use the Serial over LAN (SoL) feature to provide remote console access. For
our BladeCenter environment, we followed these specific instructions from
the CSM 1.5.0 for AIX 5L and Linux: Planning and Installation Guide,
SA23-1344.
Use the systemid -c command or the smit systemid command to complete this
step. We used the systemid -c command, as shown in Figure 10-12.
The output from the command provides a view of each of the checks performed
and a final summary statement. The output in Figure 10-13 shows the end of the
output from our exercise.
There are additional steps included in CSM 1.5.0 for AIX 5L and Linux: Planning
and Installation Guide, SA23-1344, such as installing Kerberos as an option,
which we did not use. It also provides a step-by-step instruction about how to
install CSM on a Linux Management Server. We did not implement this for our
test.
This section provides information about the attributes that are required when you
define nodes and how to determine which values to use. The information is
divided into three categories: general attributes, hardware control information,
and information about installation software.
General attributes
The general node attributes are:
Hostname
The resolvable host name or IP address of the node, as known by the
management server. It represents the network adapter host name or IP
address on the cluster virtual local area network (VLAN). Host name is always
required, and you must specify it when you define the node. In a pure Linux
cluster, this attribute can be an unresolved IP address.
ManagementServer
The host name of the CSM management server. ManagementServer is
always required. Set it to the host name of the management server as it is
known by the node. Because the management server can have multiple
interfaces, different nodes might use different interfaces to communicate with
the management server. If a route to the node exists at the time that the node
is defined, CSM attempts to set the value to the IP address of the
management server automatically. If a route to the node does not exist, CSM
The following attributes apply only to the JS20 and JS21 blade servers, and are
valid only if the SoL feature is enabled on the BladeCenter management module:
ConsoleMethod
Set this value to blade.
ConsoleServerName
Use the host name of the BladeCenter management module.
ConsolePortNum
Use the blade slot number within the BladeCenter chassis.
ConsoleSerialDevice
Leave this field blank. This field is not used for JS20 blade servers. If the SoL
feature is not enabled, set the ConsoleSerialDevice attribute for all blades to
NONE, and leave the ConsoleMethod, ConsoleServerName, and
ConsolePortNum attributes blank.
To establish one of the JS21 AIX LPARs as a node, issue the following
command:
definenode -n l2_aix_1 Hostname=l2_aix_1 CSMVersion=1.5.1
ConsoleMethod=blade ConsolePortNum=2 ConsoleServerName=mmext
HWControlNodeId=JS21_VIO HWControlPoint=mmext
InstallDistributionVersion=5.3.0 InstallOSName=AIX
ManagementServer=ibmdirector2 PowerMethod=blade
When you have an environment with multiple blades in multiple chassis, you can
use the installation server setup and appropriate InstallServer attributes to
enable you to install the operating system and updates to the CSM software as
necessary. In our test, we had a limited number of nodes and did not use an
installation server for our exercise.
The updatenode command adds the AIX nodes to the cluster. Run the
updatenode command for the AIX nodes that you have defined. The updatenode
command does the following functions for an AIX node:
If remote shell authentication is not already set up, automatically sets up
remote shell authentication for OpenSSH or rsh
Distributes configuration files if the configuration file manager (CFM) is set up
Set up CFM before you install your nodes to avoid customizing the nodes
later. For information about how to configure CFM, see IBM CSM for AIX 5L
and Linux: Administration Guide, SA23-1343.
Runs any user customization scripts
Sets up the Kerberos Version 5 options for remote commands if requested on
the csmconfig command
It is not necessary to update the cluster nodes with the latest available CSM
software updates unless the updates are specifically required. See the following
CSM support Web site for information about any required updates for the nodes:
http://techsupport.services.ibm.com/server/cluster/fixes
We updated the CSM node files csm.core and csm.client to V1.5.1 on all nodes.
For our exercise, we issued the updatenode command for each node
independently. In each case, we were prompted to supply the root password for
the node as shown in Figure 10-15.
# updatenode l2_aix_1
updatenode: 2653-206 dsh, using protocol /usr/bin/ssh, cannot
connect to nodes: l2_aix_1.itsc.austin.ibm.com.
Please enter the password for the current user (normally root) to
access the nodes l2_aix_1.itsc.austin.ibm.com:(Password entered)
Setup complete for remote shell: /usr/bin/ssh
Now running updatenode.client on the nodes.
l2_aix_1.itsc.austin.ibm.com: Setting Management Server to
9.3.5.236.
l2_aix_1.itsc.austin.ibm.com: Node Install - Successful.
l2_aix_1.itsc.austin.ibm.com: Output log is being written to
"/var/log/csm/install.log".
Now running CFM to push /cfmroot files the nodes.
There are no files in /cfmroot.
Figure 10-15 updatenode command applied to AIX node
You can see that the Mode is now Managed and the UpdatenodeFailed is 0 (for
No).
Download the CSM code for Linux and place it in the /csminstall/Linux directory
on the management server. Keep the SLES9 Service Pack 3 (SP3) Linux CDs
ready for use. They contain the required open source software.
Tip: Although you might want to put the distribution media in the CD-ROM
tray, do not mount it. The copycsmpkgs command scripts mount it for you, and
mounting it yourself before issuing the command causes conflicts.
# updatenode -n l4_sles9_1
updatenode: 2653-073 The Autoupdate RPM is missing from
/csminstall/Linux/SLES/csm/1.5.1/packages/. This means that
Autoupdate is probably not installed on the nodes. Please download
the Autoupdate RPM from http://freshmeat.net/projects/autoupdate and
place it in /csminstall/Linux/SLES/csm/1.5.1/packages/.
Now running updatenode.client on the nodes.
l4_sles9_1.itsc.austin.ibm.com: Installing
autoupdate-5.4.1-1.noarch.rpm.
l4_sles9_1.itsc.austin.ibm.com: The following OPTIONAL RPMs will not
be copied or installed (because they could not be found). This may
prevent the use of some CSM functionality or optional features.
Please consult the CSM Planning and Installation Guide for more
information:
l4_sles9_1.itsc.austin.ibm.com: perl-RPM2
l4_sles9_1.itsc.austin.ibm.com: Setting Management Server to
9.3.5.236.
l4_sles9_1.itsc.austin.ibm.com: Node Install - Successful.
l4_sles9_1.itsc.austin.ibm.com: Output log is being written to
"/var/log/csm/install.log".
Now running CFM to push /cfmroot files the nodes.
There are no files in /cfmroot.
Figure 10-18 Output from the updatenode command
You can see from Figure 10-18 that at least one fileset (perl-RPM2) was not
found. However, the command completed successfully, and the node is now
shown as a managed node.
# lsnode -p
js21nim: 1 (alive)
l2_aix_1: 1 (alive)
l4_sles9_1: 1 (alive)
Figure 10-20 Output from the lsnode command
Your cluster is now active. You can begin normal cluster implementation tasks
that are appropriate for your application environment.
There are many factors that you have to address when looking at the
performance of a server. These include both software and hardware
configurations and how they interact with the workload.
On this Web site, on the left pane, select AIX documentation → AIX PDFs. On
the right pane, under the topic Performance management and tuning, select
Performance Management Guide.
Because there is another IBM Redbook that is written for AIX 5L, this chapter
concentrates on Linux. The AIX 5L performance tuning redbook is called: AIX 5L
Practical Performance Tools and Tuning Guide, SG24-6478.
portmap Dynamic port assignment for Remote Procedure Call (RPC) services
(such as network information services (NIS) and NFS)
rhnsd Red Hat Network update service for checking for updates and security
errata
If you do not want the daemon to start the next time the machine boots, issue
either of the following commands as root; both accomplish the same result:
/sbin/chkconfig --levels 2345 sendmail off
/sbin/chkconfig sendmail off
In addition, SUSE Linux Enterprise Server (SLES) has three ways to work with
daemons:
A text-based UI: /sbin/yast runlevel
A GUI, Yet Another Setup Tool 2 (YaST2), which you can start with the
following command:
/sbin/yast2 runlevel
Alternatively, you can open Yast2 by clicking Browse: YaST/ → YaST
modules → System → Runlevel editor.
The /sbin/chkconfig command
If you do not want the daemon to start the next time the machine boots, enter
the following command as root:
/sbin/chkconfig -s sendmail off
If a GUI is required, then start and stop it as necessary rather than running it all
the time. In most cases, the server must be running at run level 3, which does not
start the GUI when the machine boots. If you want to restart the X Server, use
startx from a command prompt. Run level 3 is multi-user mode without a GUI.
The kernel parameters that control how the kernel behaves are stored in /proc
(and, in particular, /proc/sys), as shown in Table 11-2. Reading the files in the
/proc directory tree provides a simple way to view configuration parameters that
are related to the kernel, processes, memory, network, and other components.
Each process running in the system has a directory in /proc with the process ID
(PID) as its name.
/proc/meminfo Information about memory usage. The free command uses this information.
/proc/sys/abi/* Used to provide support for foreign binaries to Linux: Those compiled under other
UNIX variants such as SCO Unixware 7, SCO OpenServer, and SUN Solaris™ 2.
By default, this support is installed, although you can remove it during installation.
/proc/sys/fs/* Used to increase the number of open files that the OS allows and to handle quotas
/proc/sys/kernel/* For tuning purposes, you can enable hotplug, manipulate shared memory, and
specify the maximum number of pid files and level of debug in syslog.
/proc/sys/net/* Tuning of network in general, Internet Protocol Version 4 (IPV4) and IPV6
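The per-process directories and files such as /proc/meminfo can be read directly with ordinary tools; a quick sketch using the current shell’s own PID:

```shell
# Each PID has a directory under /proc; look at this shell's own entry.
pid=$$
grep '^Name:' /proc/$pid/status     # process name
grep '^VmRSS:' /proc/$pid/status    # resident memory, in KB
# System-wide memory figures, the same data the free command reads:
grep '^MemTotal:' /proc/meminfo
```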
The next time you reboot, the parameter file is read. You can obtain the same
result without rebooting by issuing the following command:
# sysctl -p
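The sysctl parameter names map directly onto the /proc/sys tree: dots become slashes. A small sketch reading one of the parameters from Table 11-3 both ways (the value shown is whatever your kernel reports):

```shell
# net.ipv4 and net.core parameters live under /proc/sys/net;
# reading needs no root privileges, writing (sysctl -w) does.
cat /proc/sys/net/core/rmem_max
sysctl -n net.core.rmem_max 2>/dev/null   # same value, if sysctl is installed
```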
Table 11-3 shows a list of sysctl parameters that you can configure.
net.ipv4.inet_peer_gc_maxtime Sets how often the garbage collector (GC) must pass over the inet peer
storage memory pool during low or absent memory pressure. Default
is 120, measured in jiffies:
sysctl -w net.ipv4.inet_peer_gc_maxtime=240
net.ipv4.inet_peer_gc_mintime Sets the minimum time that the GC can pass cleaning memory. If your
server is heavily loaded, you might want to increase this value. Default
is 10, measured in jiffies:
sysctl -w net.ipv4.inet_peer_gc_mintime=80
net.ipv4.inet_peer_maxttl The maximum time-to-live for the inet peer entries. New entries expire
after this period of time. Default is 600, measured in jiffies:
sysctl -w net.ipv4.inet_peer_maxttl=500
net.ipv4.inet_peer_minttl The minimum time-to-live for inet peer entries. Set to a high enough
value to cover fragment time-to-live in the reassembling side of
fragmented packets. This minimum time must be smaller than
net.ipv4.inet_peer_threshold. Default is 120, measured in jiffies:
sysctl -w net.ipv4.inet_peer_minttl=80
net.ipv4.inet_peer_threshold Sets the size of inet peer storage. When this limit is reached, peer
entries are thrown away, using the inet_peer_gc_mintime timeout.
Default is 65644:
sysctl -w net.ipv4.inet_peer_threshold=65644
net.core.rmem_max Maximum receive window. Default is 131071:
sysctl -w net.core.rmem_max=16777216
net.core.wmem_max Maximum Transmission Control Protocol (TCP) send window. Default
is 131071:
sysctl -w net.core.wmem_max=16777216
net.ipv4.tcp_rmem Memory reserved for TCP receive buffers. Default is 4096 87380
174760:
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
net.ipv4.tcp_wmem Memory reserved for TCP send buffers. Default is 4096 65536 174760:
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
vm.hugetlb_pool The hugetlb feature works the same way as bigpages, but after hugetlb
allocates memory, the physical memory can only be accessed by
hugetlb or shm allocated with SHM_HUGETLB. It is normally used with
databases such as Oracle or IBM DB2®. Default is 0:
sysctl -w vm.hugetlb_pool=4608
vm.inactive_clean_percent Designates the percent of inactive memory that must be cleaned.
Default is 5%:
sysctl -w vm.inactive_clean_percent=30
vm.pagecache Designates how much memory must be used for page cache. This is
important for databases such as Oracle and DB2. Default is 1 15 100.
The three values of this parameter are:
Minimum percent of memory used for page cache. Default is 1%
The initial amount of memory for cache. Default is 15%
Maximum percent of memory used for page cache. Default is
100%
sysctl -w vm.pagecache="1 50 100"
Besides storing and managing data on the disks, file systems are also
responsible for guaranteeing data integrity. The newer Linux distributions include
journaling file systems as part of their default installation. Journaling, or logging,
prevents data inconsistency in case of a system crash. All modifications to the
file system metadata are maintained in a separate journal, or log, and can
be applied after a system crash to bring the file system back to a consistent state. Journaling
also improves recovery time, because there is no need to perform file system
checks at system reboot.
As with other aspects of computing, you find that there is a trade-off between
performance and integrity. However, as Linux servers make their way into
corporate data centers and enterprise environments, requirements such as high
availability can be addressed. A server’s disk subsystems can be a major
component of overall system performance. Understanding the function of the
server is key to determining whether the I/O subsystem has a direct impact on
performance.
Examples of servers where disk I/O is not the most important subsystem:
An e-mail server acts as a repository and router for electronic mail and tends
to generate a heavy communication load. Networking is more important for
this type of server.
A Web server that is responsible for hosting Web pages (static, dynamic, or
both) benefits from a well-tuned network and memory subsystem.
Small Computer System Interface (SCSI): Low cost. Direct-attached storage; for
example, mid-range to high-end server with local storage (x346, x365). Although
the standard for more than 10 years, current I/O demands on high-end servers
have stretched the capabilities of SCSI. Limitations include cable lengths,
transfer speeds, maximum number of attached drives, and limits on the number
of systems that can actively access devices on one SCSI bus, affecting
clustering capabilities.
Serial ATA: Low cost. Midrange data-storage applications. Generally available
since late 2002, this new standard in hard disk drive (HDD) or system board
interface is the follow-on technology to EIDE. With its point-to-point protocol,
scalability improves as each drive has a dedicated channel. Sequential disk
access is comparable to SCSI, but random access is less efficient. Redundant
Array of Independent Disks (RAID) functionality is also available.
Serial-Attached SCSI: Low cost. Direct-attached storage; for example, mid-range
to high-end server with local storage (JS21). Generally available since late 2003,
this new standard in HDD or system board interface provides performance and
also reliability.
iSCSI: Medium cost. Mid-end storage; for example, file/Web server. Became a
Request For Comment (RFC) recently. Currently being targeted toward mid-end
storage and remote booting. Primary benefits are savings in infrastructure cost
and diskless servers. It also provides the scalability and reliability associated
with TCP/IP/Ethernet. High latency of TCP/IP limits performance. Note: Red Hat
Enterprise Linux currently does not support iSCSI.
Fibre Channel: High cost. Enterprise storage; for example, databases. Provides
low latency and high throughput capabilities and removes the limitations of SCSI
by providing cable distances of up to 10 km with fiber optic links and a 2 Gbps
transfer rate, with redundant paths to storage to improve reliability. In theory, it
can connect up to 16 million devices; in loop topologies, up to 127 storage
devices or servers can share the same Fibre Channel connection, allowing
implementation of large clusters.
The number of disk drives significantly affects performance because each drive
contributes to total system throughput. Capacity requirements are often the only
consideration that is used to determine the number of disk drives that are
configured in a server. Throughput requirements are usually not well understood
or are completely ignored. The key to a good performing disk subsystem
depends on maximizing the number of read-write heads that can service I/O
requests. With RAID technology, you can spread the I/O over multiple spindles.
There are two ways to change the journaling mode on a file system:
When issuing the mount command:
mount -o data=writeback /dev/sdb1 /mnt/mountpoint
Including it in the options section of the /etc/fstab file:
/dev/sdb1 /testfs ext3 defaults,data=writeback 0 0
If you want to modify the default data=ordered option on the root partition, make
the change to the file listed previously, then issue the mkinitrd command to scan
the changes in the /etc/fstab file and create a new image. Update /etc/lilo.conf to
point to the new image.
For more information about ext3, refer to the following Web site:
http://www.redhat.com/support/wpapers/redhat/ext3/
Tuning ReiserFS
One of the strengths of the ReiserFS is its support for a large number of small
files. Instead of using the traditional block structure of other Linux file systems,
ReiserFS uses a tree structure that has the capability to store the actual contents
of small files or the tails of those that are larger in the access tree itself. This file
system does not use fixed block sizes, therefore only the space that is required
to store a file is consumed, leading to less wasted space.
An example of mounting a ReiserFS file system with the notail option is:
/dev/sdb1 /testfs reiserfs notail 0 0
The optimal value of the load is 1, which means that each process has
immediate access to the CPU and no CPU cycles are lost. The typical load
can vary from system to system: for a uniprocessor workstation, 1 or 2 might be
acceptable, but you will probably see values of 8 to 10 on multiprocessor
servers.
You can further modify the processes using renice to give a new priority to each
process. If a process stops or occupies too much CPU, you can end the process
(kill command). The columns in the output are as follows:
PID: Process identification
USER: Name of the user who owns (and perhaps started) the process
PRI: Priority of the process
NI: Niceness level (that is, whether the process tries to be nice by adjusting
the priority by the number given; see the following section for details)
SIZE: Amount of memory (code + data + stack), in KB, that is used by the
process
RSS: Amount of physical RAM used, in KB
SHARE: Amount of memory shared with other processes, in KB
STAT: State of the process: S=sleeping, R=running, T=stopped or traced,
D=uninterruptible sleep, Z=zombie
%CPU: Share of the CPU usage (since the last screen update)
%MEM: Share of physical memory
TIME: Total CPU time used by the process (since it started)
COMMAND: Command line used to start the task (including parameters)
Linux supports nice levels from 19 (lowest priority) to -20 (highest priority). The
default value is 0. To change the nice level of a program to a negative number
(which makes it a high-priority process), it is necessary to log on or su to root.
To start the program xyz with a nice level of -5, issue the command:
nice -n -5 xyz
To change the nice level of a program already running, issue the command:
renice level pid
For example, to change the priority of the xyz program that has a PID of 2500 to
a nice level of 10, issue the following command:
renice 10 2500
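The nice and renice commands can be combined in one sequence; a small demonstration, where the sleep command stands in for a real workload:

```shell
# Start a background job at nice level 5, then lower it further to 10.
nice -n 5 sleep 30 &
pid=$!
ps -o pid,ni,comm -p "$pid"   # NI column shows 5
renice 10 "$pid"              # lowering priority needs no root; raising does
ps -o pid,ni,comm -p "$pid"   # NI column now shows 10
kill "$pid"
```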
The iostat command lets you see average CPU times since the system was
started, in a way that is similar to uptime. In addition, iostat creates a report
about the activities of the disk subsystem of the server. The report has two parts:
CPU utilization and device (disk) utilization.
Tip: Using pipes, it is possible to produce the output in one command. For
example, to generate a report in Hypertext Markup Language (HTML), run:
cat output_traffic-collector | traffic-sort -Hp | traffic-tohtml -o
output_traffic-tohtml.html
Example 11-9 shows the total amount of memory that the cupsd process is
using.
For issues relating to memory and file parameters, and also Web servers with
large numbers of network connections, look at the sysctl command. Database
applications rely heavily on efficient I/O read and write activities. If this is an area
of concern, see 11.2.6, “Tuning the file system” on page 395.
A minimal two-path SAN installation is shown in Figure 12-1. We used the IBM
two-port Fibre Channel (FC) switch module in this book.
Figure 12-1 Basic SAN layout with a two-path configuration for a BladeCenter JS21
Important: The JS21 supports only the Qlogic 4Gb Fibre Channel Small Form
Factor expansion card. We recommend that you use at least firmware Version
4.00.22, which includes IBM driver Version 1.14. Additionally, we recommend
that you update the firmware of all SAN components to the latest available or
possible firmware level.
Important: Make sure that the external ports are enabled, because this
affects the Ethernet and Fibre Channel ports.
Figure 12-2 SAN switch module configuration panel using the BladeCenter SAN utility
The BladeCenter JS21 HBA and especially the worldwide name (WWN)
does not show up in the name server of the SAN Switch without activation.
To activate the card, scan for all boot devices using the SMS menu (see
6.8, “System Management Services interface” on page 133). After this
procedure, the HBA is visible in the name server list. The scan activates
the card, but without any bootable partitions, it is not possible to assign the
Fibre Channel expansion card as a boot device. The next step is the
configuration of a zone.
d. The zone configuration is also done on a SAN switch of the fabric. One
zone is like a logical network inside the SAN. All components in one zone
are visible to each other. Typically there is a switch for global visibility for
all devices and ports which are not in a zone.
To create a functional zone for the first path of the example setup shown in
Figure 12-1 on page 410, use the Zone configuration panel to create:
i. ZoneSet, for example, zone_1
ii. Zone, for example, zone_1
iii. An alias2 for HBA Port 1 (for example, JS21_Bay3_Port1) and Storage
Unit Controller 1 (for example, Sage1_Controller 1)
e. Assign the alias to the zone.
2. An alias is a selectable name and might be used instead of WWNs.
f. Now activate the zone, which is usually done in the zone menu.
Afterward, the zone configuration must also appear in the name server.
After activation, the zone configuration is distributed to all SAN switches in
a fabric. Even if the HBA is deactivated and not listed in the name server
anymore, the zone setting itself is persistent and works after a reactivation
of the HBA as required.
For a two-path configuration, perform the same steps on the other switch.
After the SAN configuration is complete, you can install the operating system.
The necessary single-path driver for the 4 GB FC expansion card is provided by
AIX, Virtual I/O Server (VIOS), SUSE Linux Enterprise Server (SLES), and Red
Hat Enterprise Linux by default. Almost all configurations presented in this book
were also tested in a single-path SAN environment.
Example 12-1 SMS device scan with a two-path and two-LUN configuration
scan /pci@8000000f8000000/pci@1/fibre-channel@2/disk
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
check
/pci@8000000f8000000/pci@1/fibre-channel@2/disk@200400a0b80ba0ed,0001000000000000
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
check
/pci@8000000f8000000/pci@1/fibre-channel@2/disk@200400a0b80ba0ed,0002000000000000
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Firmware version 4.00.22
QLogic QMC2462S Host Adapter Driver(IBM): 1.17 03/31/06
Keep these specialties in mind and start the installation process over the network
as usual. Double-check that the host type, which means the operating system, is
specified correctly for the LUN that is configured in the storage unit. The
installation procedures for AIX and SLES are similar. After the installation, the
boot sequence must automatically switch to hard disk 2. For Red Hat, it is
important to install the system with only one activated path. After the installation,
select hard disk 2 as the boot device and reboot the system. After this step, it is
not a problem to activate the second path.
Example: A-2 Default partition table created by the AutoYaST configuration file
Disk /dev/sda: 146.7 GB, 146772852736 bytes
255 heads, 63 sectors/track, 17844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
#System language
lang en_US
#Language modules to install
Example: B-2 Default partition table created by the Kickstart configuration file
Disk /dev/sda: 146.7 GB, 146772852736 bytes
255 heads, 63 sectors/track, 17844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Example: B-3 Default logical volumes created by the Kickstart configuration file
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID 1Vzc3F-6ubR-bCrX-Iv4i-hAtZ-lXMl-AHMLwe
LV Write Access read/write
LV Status available
# open 1
LV Size 134.59 GB
Current LE 4307
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
The following steps describe the manual approach that the YaST installation
server applet performs automatically. This might be helpful for troubleshooting
and for other operating systems or distributions that do not provide this applet.
2. Mount the ISO files using a loop device. Each of the following commands is a
single line (shown wrapped here); press Enter only at the end of each command:
mount -t iso9660 -o loop,ro
/srv/data/iso-images/SLES-9-ppc-RC5-CD1.iso
/mnt/loop/sles9_sp3/SUSE-SLES-Version-9/CD1
And
for X in `seq 2 6`; do mount -t iso9660 -o loop,ro
/srv/data/iso-images/SLES-9-ppc-RC5-CD$X.iso
/mnt/loop/sles9_sp3/SUSE-CORE-Version-9/CD$((X-1)); done
And
for X in `seq 1 3`; do mount -t iso9660 -o loop,ro
/srv/data/iso-images/SLES-9-ppc-SP3-CD$X.iso
/mnt/loop/sles9_sp3/SUSE-SLES-9-Service-Pack-Version-3/CD$X; done
3. Copy the now accessible files, create one directory and some soft links by
issuing the following commands. The copy process takes some time. The
final directory structure is shown in Example C-1:
rsync -auv /mnt/loop/sles9_sp3 /srv/ftp
And
mkdir /srv/ftp/sles9_sp3/yast
And
ln -s SUSE-SLES-Version-9/CD1/boot /srv/ftp/sles9_sp3/boot
And
ln -s SUSE-SLES-Version-9/CD1/content /srv/ftp/sles9_sp3/content
All basic preparations to install SLES using the network are now finished.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 444. Note that some of the documents referenced here may be available
in softcopy only.
NIM: From A to Z in AIX 4.3, SG24-5524
Implementing IBM Director 5.10, SG24-6188
The IBM eServer BladeCenter JS20, SG24-6342
IBM TotalStorage: SAN Product, Design, and Optimization Guide,
SG24-6384
AIX 5L Practical Performance Tools and Tuning Guide, SG24-6478
Advanced POWER Virtualization on IBM System p5, SG24-7940
The Cutting Edge: IBM eServer BladeCenter, REDP-3581
IBM eServer BladeCenter Systems Management, REDP-3582
Nortel Networks L2/3 Ethernet Switch Module for IBM eServer BladeCenter,
REDP-3586
IBM eServer BladeCenter Layer 2-7 Network Switching, REDP-3755
Cisco Systems Intelligent Gigabit Ethernet Switch Module for the
IBM eServer BladeCenter, REDP-3869
Virtual I/O Server Integrated Virtualization Manager, REDP-4061
Online resources
These Web sites and URLs are also relevant as further information sources:
Power Module Upgrade Guidelines (Technical Update) - IBM BladeCenter
(Type 8677)
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53353
ServerProven Web site
http://www.ibm.com/servers/eserver/serverproven/compat/us/eserver.html
IBM Middleware on Linux
http://www.ibm.com/software/os/linux/software
Information Center VIOS commands
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm
IBM Virtualization
http://www.ibm.com/servers/eserver/about/virtualization/
Advanced POWER Virtualization
http://www.ibm.com/servers/eserver/pseries/ondemand/ve/resources.html
Virtual I/O Server
http://techsupport.services.ibm.com/server/vios/home.html
Index

Numerics
2-port FC switch module
  firmware update 108
8677 19
8844-31X 11
8844-51X 11
8852 21

A
activating NIM master 268
active console, selecting 134
advanced management module 71, 77, 98
  command-line interface 72
  connectors 73
  Ethernet port 74
  firmware update 104
  management network 77
Advanced Micro Devices (AMD) 5
advanced POWER Virtualization 5, 36, 53
  Web site 38
AIX
  performance tuning 386
AIX 5L 26
  environment 244
  installation CDs/DVDs 260
  levels supported by JS21 26
  mirroring with LVM 250
  network installation 261
  new virtual disks 239
  performance tuning 386
  RAID 182
  RAID configuration 170
  technology levels 53

B
baseboard management controller (BMC) 70
bkprofdata command 255
blade management 356
blade server 2
  assigning media tray 94
  assigning names 90
  eth0 port 123
  I/O expansion card 169
  setting the boot sequence 91
Blade System Management Processor (BSMP) 97
BladeCenter 1, 5
  advantages
    high availability 3
    lower cost 2
    SAN optimization 3
    server consolidation 2
    switch technology 3
  benefits 2
  chassis 19, 64
  device drivers 95
  documentation 46
  high-performance, low-latency interconnection network 51
  integrated switches 2
  internal management network 69
  internal network 66
  management interface 32
  management module 3, 29–30, 411
  managing multiple chassis 33, 53
  media tray 24
  memory 17
  multi-chassis interconnection 47
dmesg 401
double data rate 2 (DDR2) 4
DVD drive, VIOS installation media 194
Dynamic Host Configuration Protocol (DHCP) 284
dynamic logical partitioning (DLPAR) 29, 43

E
e-mail alert 84
Enhanced Integrated Drive Electronics (EIDE) 396
error-checking and correction (ECC) 4
Ethernet
  expansion card 68, 103, 162, 169
  firmware update process 99
  port enumeration 162
    changing in AIX 162
  switch module 68, 89, 108, 122, 169

F
Fibre Channel (FC) 410–411, 416
field-replaceable unit (FRU) 24
field-replaceable units (FRU)
  59P6629 24
file system
  access time updates 397
  ext3 journaling mode 398
  tuning 395
File Transfer Protocol (FTP)
  configuring service 288, 324
  installation server 284, 296
  installation service 293
firmware
  committing update 101–102
  copying permanent to temporary 101
  identifying levels 96
  rejecting update 101–102
  update_flash (Linux command) 101
  updating redundant MM 83
  updating with AIX 99
floating-point registers (FPRs) 16
free command 406
fully qualified domain name (FQDN) 203

G
General Parallel File System (GPFS) 52
general purpose register (GPR) 16
geninstall command 368
GNU C Compiler (gcc) 27
graphical user interface (GUI) 31, 388
gunzip command 370

H
hard disk drives (HDD) 9
hardware
  alerts 33
  vital product data (VPD) 81
hardware management 27, 29, 41, 49, 61
  subnet 47
Hardware Management Console (HMC) 27, 29–30, 41, 194
high availability 3
high-performance computing (HPC) 4
  ideal blade server solution 4
  vector processing 14
host bus adapter (HBA) 411, 413–416
  discovery period 417
HS20 60
Hypertext Markup Language (HTML) 284
Hypervisor 29

I
I/O module 57
  accessing management interface 68
  advanced configuration 85, 88
  basic setup 85
  differences in bay location 68
  Ethernet expansion card 51
  Ethernet ports 68
  expansion card options 169
  external ports 89
  firmware update 106
  IP address 77
  management 32
    by management module 77
    interface 47, 49, 69, 85–87
    IP address 85
    port 68
    port enabling 88
  setting IP address 85
  SoL 89, 241
  updating firmware 106
IBM diagnostic tools and utilities 96
IBM Director 57, 346
  Agent 352
  components 58
  Core Services 352
  disabling SLPD on SLES 352
L
license -accept command 208
Lightweight Directory Access Protocol (LDAP) 26, 78
lilo command 433
Linux
  basic preparation 283
  IBM diagnostic tools and utilities 96
  network installation 281
  nodes 379
  supported kernel 386
  tuning options on POWER 386
logical partition (LPAR) 28, 245, 253–256
  adding disks 239
  assigning
    memory 224
    optical device 234
  benefits 35
  configuration change 235, 240
  creation 218, 223
  disk space 252
  dynamic 43
  Ethernet adapters in SMS 149
  Integrated Virtualization Manager (IVM) 30, 211
  Linux installation 282
  logical volume 217, 222
  maintenance 256
  management 29
  moving optical device 234
  next reboot 240
  non-dynamic operations 211
  opening virtual terminal 231
  operating system installation 232
  optical device 229, 234
  physical adapters 28
  power on 231
  processing mode 225
  processing unit 240
  rootvg 217
  secure environment 32
  security layer 36
  SMS mode 232
  storage assignment 228
  virtual Ethernet 226
    adapters 240
  virtual LAN channel 36
  virtual SCSI adapter 42
  virtual terminal 231–232, 257
  virtualization feature 38
logical unit number (LUN) 216, 250, 416
Logical volume (LV)
  creation 222
logical volume (LV)
  advanced storage management 244
  assigning 239
  commands 215
  data mirroring 250
  default storage pool 217
  RAID support 41
  viewing 223
  VIOS concepts 216
  virtual disk extension 245
lpcfgop command 209
lsdev command 209
lsdev -virtual command 31
lshwres command 38
lspv command 275
lsrefcode command 231

M
man dhcpd command 285
management module 24
  10/100BaseT Ethernet interface 49
  CLI timeout 132
  command-line interface 72, 123, 128, 130
  configuration 71
  connection drawing 73
  controlling SoL using commands 130
  default gateway 75
  default IP address 74–75
  default user ID and password 133
  earlier firmware 124
  external interface 49, 68, 74, 78
  firmware 105
  firmware update 104
  hardware management subnet 49
  hardware VPD 81
  initial configuration 75
  installation guide 71
  interface 49
  internal network interface 49
  IP address 49, 74
    (no DHCP) 75
  login session limits 124
  management interface 71, 73
  network requirements 47
  redundancy 82
  IP parameters 160
  ls command 161
  network installation 322
  setenv 321
operating system (OS)
  boot phase 417
  installation source 307
  supported by JS21 26
optical device sharing 233
optical pass-thru I/O module 51
option blades 2

P
parallel network installation 33
PCI-X SCSI disk array 173, 177, 182
Performance Optimization with Enhanced RISC (POWER) 4
performance tuning tool 401
Peripheral Component Interconnect (PCI) 3
physical disk 188, 216, 244
physical volume 216–218, 245, 250
  commands 215
PKT file 104, 106
Pluggable Authentication Modules (PAM) 26
pmap command 407
POWER Hypervisor 36–37
power management 358
power modules 23
Power On Self Test (POST) 133, 135, 153
power requirements for BladeCenter 4
power/thermal management architecture 23
PowerPC
  970MP 13
  AltiVec extensions 16
  processor
    features 13
    operating frequency 19
proxy ARP 49, 69–70
PuTTY 124

R
RAID 18
  configuration tool 170
  preparing the disks 170
Red Hat
  disabling daemons 387
Red Hat Enterprise Linux
  attended installation 330
  AutoYaST with Open Firmware 321
  boot sequence 335
  configuration tool 336
  CSM support 59
  disabling daemons 387
  general information 26
  importing configuration 313
  installation 283
  iSCSI support 397
  Kickstart 342
  maintenance contract 26
  supported level 26
  sysctl 390
Red Hat Enterprise Linux (RHEL)
  network installation 322
Red Hat Package Manager (RPM) 59
Redundant Array of Independent Disks (RAID) 9
ReiserFS 399
remote console 359
remote shell (rsh) 276
Request For Comment (RFC) 397
resource monitor 360
resource monitoring 57
RETAIN tip H181655 56
RPM
  AutoUpdate 59
  bcmflashdiag-js20 102
  conserver 60
  fping-2.4b2-5 60
  IBMJava2-JRE 60
  openssl 277
  sg3_utils-1.06-1.ppc64 59
  tftp-HPA 60
rstprofdata command 255
Run-Time Abstraction Services (RTAS) 101

S
Samba Web Administration Tool (SWAT) 388
Secure Shell (SSH) 78, 359
  during installation 308
Secure Sockets Layer (SSL) 78
selecting active console 134
Serial Attached SCSI (SAS) 3–4, 18, 396
Serial over LAN (SoL) 32
  attaching to open console 130
  boot sequence 92
  configuring management module 125
  description 121
Trivial File Transfer Protocol (TFTP) 54, 56
  configuration 287, 299
  network installation 284
  Red Hat bootable image 325
  testing service 290
tuning
  file system 395
  memory 394
  ReiserFS 399
  Transmission Control Protocol (TCP) 399
  User Datagram Protocol (UDP) 399
twgstart command 354

U
Universal Manageability Initiative 32
Universal Serial Bus (USB) 24
uplink command 82
uptime command 401
USB CD-ROM 145
usessh 307

V
varyoff command 246
varyonvg command 248
vector length 13
vector processing 14
vector processor (VXU) 15
vector register file (VRF) 16
Vector/SIMD Multimedia eXtension 13
virtual disk 216, 227, 254
  LPAR operation 238
virtual Ethernet
  adapter 146, 226, 240
  boot device 149
  bridging 242
    with IVM 241
  design 37
  dynamic operations 237
  features 37
  Integrated Virtualization Manager (IVM) 211
  management 29
  network traffic 41
  VIOS setup 209
  virtual LAN (VLAN) 42
virtual I/O adapters 37
Virtual I/O Server (VIOS) 5, 27, 39
  backup and restore 255
  command-line interface 214
  creating logical partitions 223
  data protection 244
  default storage pool 219
  default user 212
  define NIM client 202
  definition 39
  device sharing 233
  disk mirroring 249
  installation 193–194
    from NIM server 194
    on RAID array 252
    with NIM 202
    without DVD drive 194
  Integrated Virtualization Manager (IVM) 28, 42, 211
  maintenance 253
  micro-partitioning 39–40
  moving memory 237
  network configuration 242
  opening a virtual terminal 231
  partition configuration 221
  RAID support 41
  required version 38
  rootvg mirroring 250
  storage management 216
  storage pool 250
    rootvg 217
  supported operating systems 38
  VIOS partition 221
  virtual disks 250
  virtual Ethernet 37
  virtual processor operation 237
  virtual SCSI 37
  virtualization 35–36, 209
virtual LAN (VLAN) 46
  4095 for SoL 50
  planning 46, 70
  SoL 89
  virtual Ethernet 37
  virtualization 36
Virtual Management Channel (VMC) 29–30
Virtual Network Computing (VNC) 130, 307
virtual processor 39
virtual SCSI
  adapter 42
  client adapter 41
  introduction 37, 41
  server adapter 41
virtual terminal 231
W
worldwide name (WWN) 413
X
xinetd 289
Y
Yet Another Setup Tool (YaST) 294
IBM BladeCenter JS21: The POWER of Blade Innovation