
Front cover

IBM TotalStorage Productivity Center: The Next Generation


Effectively use the IBM TotalStorage Productivity Center

Efficiently manage your storage subsystems using one interface

Easily customize reports for your environment

Mary Lovelace
Tom Conway
Werner Eggli
Marta Greselin
Hartmut Harder
Stefan Lein
Massimo Mastrorilli

ibm.com/redbooks

International Technical Support Organization

IBM TotalStorage Productivity Center: The Next Generation

September 2006

SG24-7194-00

Note: Before using this information and the product it supports, read the information in Notices on page xi.

First Edition (September 2006)

This edition applies to Version 3, Release 1, of IBM TotalStorage Productivity Center (product number 5608-VC0).

© Copyright International Business Machines Corporation 2006. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . xi
Trademarks . . . . . xii

Preface . . . . . xiii
The team that wrote this redbook . . . . . xiii
Become a published author . . . . . xvi
Comments welcome . . . . . xvi

Chapter 1. Introduction to IBM TotalStorage Productivity Center . . . . . 1
1.1 What is IBM TotalStorage Productivity Center? . . . . . 2
1.1.1 TotalStorage Productivity Center structure . . . . . 2
1.1.2 IBM TotalStorage Productivity Center components . . . . . 5
1.2 What is new in IBM TotalStorage Productivity Center V3.1 . . . . . 8
1.2.1 Changes between TPC V3.1 and V2 . . . . . 11
1.3 Licensing . . . . . 11
1.3.1 IBM TotalStorage Productivity Center Limited Edition . . . . . 11
1.3.2 IBM TotalStorage Productivity Center Standard Edition . . . . . 12
1.3.3 IBM TotalStorage Productivity Center for Replication . . . . . 12

Chapter 2. Key concepts . . . . . 13
2.1 Standards used in IBM TotalStorage Productivity Center . . . . . 14
2.1.1 ANSI standards . . . . . 14
2.1.2 Web-Based Enterprise Management . . . . . 14
2.1.3 Storage Networking Industry Association . . . . . 15
2.1.4 Simple Network Management Protocol . . . . . 17
2.1.5 Fibre Alliance MIB . . . . . 18
2.2 Service Location Protocol overview . . . . . 18
2.2.1 SLP architecture . . . . . 18
2.2.2 SLP communication . . . . . 23
2.2.3 Configuration recommendations . . . . . 25
2.3 Common Information Model . . . . . 28
2.4 Component interaction . . . . . 30
2.4.1 CIMOM discovery with SLP . . . . . 30
2.4.2 How CIM Agent works . . . . . 31
2.5 Tivoli Common Agent Services . . . . . 32
2.5.1 Tivoli Agent Manager . . . . . 34
2.5.2 Common Agent . . . . . 35
2.6 Communication in TotalStorage Productivity Center . . . . . 35
2.7 IBM TotalStorage Productivity Center basics . . . . . 36
2.7.1 Collecting data in general . . . . . 37
2.7.2 Role-based Administration . . . . . 40

Chapter 3. Installation planning and considerations . . . . . 43
3.1 Configuration . . . . . 44
3.2 Installation prerequisites . . . . . 44
3.2.1 Hardware . . . . . 44
3.2.2 Database . . . . . 45
3.3 Preinstallation check list . . . . . 45
3.3.1 TCP/IP ports used . . . . . 46


3.4 User IDs and security . . . . . 49
3.4.1 User IDs . . . . . 50
3.4.2 Increasing user security . . . . . 50
3.4.3 Certificates and key files . . . . . 51
3.4.4 Services and service accounts . . . . . 51
3.5 Starting and stopping the managers . . . . . 52
3.6 Server recommendations . . . . . 52
3.7 Supported subsystems, devices, filesystems, databases . . . . . 53
3.7.1 Storage subsystem support . . . . . 53
3.7.2 Tape library support . . . . . 53
3.7.3 File system support . . . . . 53
3.7.4 Network File System support . . . . . 54
3.7.5 Database support . . . . . 54
3.8 Security considerations . . . . . 54

Chapter 4. TotalStorage Productivity Center installation on Windows 2003 . . . . . 55
4.1 TotalStorage Productivity Center installation . . . . . 56
4.1.1 Typical installation . . . . . 56
4.1.2 Custom installation . . . . . 57
4.1.3 CD layout and components . . . . . 57
4.2 Configuration . . . . . 57
4.2.1 One-server environment . . . . . 58
4.2.2 Two-server environment . . . . . 58
4.3 Hardware prerequisites . . . . . 58
4.3.1 Hardware . . . . . 58
4.3.2 Disk space . . . . . 58
4.4 Software prerequisites . . . . . 59
4.4.1 Databases supported . . . . . 61
4.5 Preinstallation steps for Windows . . . . . 62
4.5.1 Verify primary domain name systems . . . . . 62
4.5.2 Activate NetBIOS settings . . . . . 64
4.5.3 Internet Information Services . . . . . 65
4.5.4 Create Windows user ID to install Device server and Data server . . . . . 65
4.5.5 User IDs and password to be used and defined . . . . . 66
4.6 DB2 install for Windows . . . . . 71
4.6.1 Agent Manager installation for Windows . . . . . 83
4.7 Install TotalStorage Productivity Center components . . . . . 94
4.7.1 Verifying installation . . . . . 99
4.7.2 Installing Data and Device Servers, GUI, and CLI . . . . . 100
4.8 Configuring the GUI for Web Access under Windows 2003 . . . . . 107
4.8.1 Installing Internet Information Services (IIS) . . . . . 108
4.8.2 Configuring IIS for the TotalStorage Productivity Center GUI . . . . . 110
4.8.3 Launch the TotalStorage Productivity Center GUI . . . . . 113

Chapter 5. TotalStorage Productivity Center installation on AIX . . . . . 117
5.1 TotalStorage Productivity Center installation . . . . . 118
5.1.1 Typical installation . . . . . 118
5.1.2 Custom installation . . . . . 118
5.1.3 CD layout and components . . . . . 119
5.2 Configuration . . . . . 119
5.3 Hardware Prerequisites . . . . . 120
5.4 Software Prerequisites . . . . . 120
5.4.1 Databases supported . . . . . 122

5.5 Preinstallation steps for AIX . . . . . 122
5.5.1 Verify primary domain name servers . . . . . 122
5.5.2 User IDs, passwords, and groups . . . . . 123
5.5.3 Create the TotalStorage Productivity Center user ID and group . . . . . 124
5.5.4 Creating and sizing file systems and logical volumes . . . . . 125
5.5.5 Verify port availability . . . . . 125
5.6 DB2 installation for AIX . . . . . 127
5.6.1 Accessing the installation media with CD . . . . . 127
5.6.2 Accessing the installation media with a downloaded image . . . . . 127
5.6.3 Preparing the display . . . . . 127
5.6.4 Beginning the installation . . . . . 128
5.6.5 Verifying the DB2 installation . . . . . 143
5.6.6 Removing the CD from the server . . . . . 143
5.7 Installing the DB2 fix pack . . . . . 143
5.7.1 Obtaining and installing the latest DB2 fix pack . . . . . 143
5.7.2 Add the root user to the DB2 instance group . . . . . 145
5.8 Agent Manager installation for AIX . . . . . 146
5.8.1 Accessing the installation media using CD . . . . . 146
5.8.2 Accessing the installation media using a downloaded image . . . . . 146
5.8.3 Preparing the display . . . . . 147
5.8.4 Beginning the installation . . . . . 147
5.8.5 Removing the CD from the server . . . . . 157
5.9 Installing IBM TotalStorage Productivity Center for AIX . . . . . 157
5.9.1 Order of component installation . . . . . 157
5.9.2 Accessing the installation media with a CD . . . . . 158
5.9.3 Accessing the installation media with a downloaded image . . . . . 158
5.9.4 Preparing the display . . . . . 158
5.9.5 Sourcing the environment . . . . . 159
5.9.6 Assigning file system ownership . . . . . 159
5.9.7 Installing the database schema . . . . . 159
5.9.8 Installing Data server . . . . . 165
5.9.9 Installing Device server . . . . . 171
5.9.10 Installing agents . . . . . 177
5.9.11 Installing the Java Graphical User and the Command Line Interface . . . . . 182
5.10 Installing the user interface for access with a Web browser . . . . . 186
5.10.1 Distributing the Graphical User Interface with a Web browser . . . . . 187

Chapter 6. Agent deployment . . . . . 195
6.1 Functional overview: Which agents do I need? . . . . . 196
6.1.1 Types of agents . . . . . 196
6.1.2 TotalStorage Productivity Center component use of agents . . . . . 197
6.2 Agent infrastructure overview . . . . . 200
6.3 Agent deployment options . . . . . 201
6.3.1 Local installation . . . . . 201
6.3.2 Remote installation . . . . . 202
6.4 Local installation of Data and Fabric Agents . . . . . 202
6.4.1 Interactive installation . . . . . 203
6.4.2 Unattended (silent) installation . . . . . 210
6.5 Remote installation of Data and Fabric Agents . . . . . 211
6.5.1 Preparing the remote installation . . . . . 212
6.5.2 Performing the remote installation . . . . . 214
6.6 Verifying the installation . . . . . 223
6.6.1 Logfiles . . . . . 226


6.7 Uninstalling Data and Fabric Agent . . . . . 228
6.7.1 Remote uninstallation . . . . . 228
6.7.2 Local uninstallation . . . . . 229
6.8 Upgrading the Data Agent . . . . . 235

Chapter 7. CIMOM installation and customization . . . . . 239
7.1 Introduction . . . . . 240
7.2 Planning considerations for CIMOM . . . . . 240
7.2.1 CIMOM configuration recommendations . . . . . 241
7.3 SNIA certification . . . . . 241
7.4 Installing CIM Agent for ESS 800/DS6000/DS8000 . . . . . 242
7.4.1 CIM Agent and LIC level relationship for DS8000 . . . . . 243
7.4.2 CIM Agent and LIC level relationship for DS6000 . . . . . 244
7.4.3 ESS CLI Installation . . . . . 244
7.4.4 DS CIM Agent install . . . . . 249
7.4.5 Post-installation tasks . . . . . 259
7.4.6 Configuring the DS CIM Agent for Windows . . . . . 260
7.4.7 Restart the CIMOM . . . . . 264
7.4.8 CIMOM user authentication . . . . . 265
7.5 Verifying connection to the storage subsystems . . . . . 266
7.5.1 Adding your CIMOM to the TotalStorage Productivity Center GUI . . . . . 269
7.5.2 Problem determination . . . . . 270
7.5.3 Confirming that ESS CIMOM is available . . . . . 271
7.5.4 Start the CIM Browser . . . . . 271
7.6 Installing CIM agent for IBM DS4000 family . . . . . 274
7.6.1 Registering DS4000 CIM agent if SLP-DA is in place . . . . . 281
7.6.2 Verifying and managing CIMOM availability . . . . . 281
7.7 Configuring CIMOM for SAN Volume Controller . . . . . 285
7.7.1 Adding the SVC TotalStorage Productivity Center for Disk user account . . . . . 286
7.7.2 Registering the SAN Volume Controller host in SLP . . . . . 292
7.8 Configuring CIMOM for McData switches . . . . . 292
7.8.1 Planning the installation . . . . . 293
7.8.2 Supported configurations . . . . . 294
7.8.3 Installing SMI-S Interface . . . . . 296
7.8.4 Configuring the SMI-S interface . . . . . 299
7.8.5 Verifying the connection with TPC . . . . . 303
7.9 Configuring the CIM Agent for Brocade Switches . . . . . 304
7.9.1 Planning the installation . . . . . 304
7.9.2 Installing the CIM Agent . . . . . 305
7.9.3 SLP installation . . . . . 315
7.9.4 Changing the ports used in the CIM Agent . . . . . 315
7.9.5 Connecting the CIM Agent with TPC . . . . . 315
7.10 Configuring the CIM Agent for Cisco switches . . . . . 316
7.10.1 Enabling and configuring the CIM Agent . . . . . 317
7.10.2 Connecting the CIM Agent with TPC . . . . . 317
7.11 Configuring the CIM Agent for IBM Tape Libraries . . . . . 318
7.11.1 Prerequisites . . . . . 318
7.11.2 SMI-S Agent for Tape Installation . . . . . 318
7.11.3 Configuring the SMI-S Agent . . . . . 326
7.12 Discovering endpoint devices . . . . . 327
7.13 Verifying and managing CIMOMs . . . . . 328
7.14 Interoperability namespace summary table . . . . . 330
7.15 Planning considerations for SLP . . . . . 331


7.15.1 Considerations for using SLP DA . . . . . 331
7.15.2 SLP configuration recommendation . . . . . 332
7.15.3 General performance guidelines . . . . . 332
7.16 CIMOM registration . . . . . 333
7.16.1 Manual method to add a CIMOM . . . . . 333
7.16.2 Automated method to add CIMOMs . . . . . 333
7.16.3 Configuring TotalStorage Productivity Center for SLP discovery . . . . . 335
7.16.4 Registering the CIM Agent to SLP-DA . . . . . 336
7.16.5 Creating slp.reg file . . . . . 336

Chapter 8. Getting Started with TotalStorage Productivity Center . . . . . 339
8.1 Infrastructure summary . . . . . 340
8.2 TotalStorage Productivity Center function overview . . . . . 341
8.3 First steps . . . . . 342
8.3.1 Starting the GUI . . . . . 342
8.3.2 Logging on . . . . . 343
8.3.3 GUI basics . . . . . 343
8.4 Initial configuration . . . . . 344
8.4.1 Configuring CIMOMs . . . . . 346
8.4.2 Verifying Data and Fabric Agents . . . . . 357
8.4.3 Configuring out-of-band fabric connectivity (SNMP/API) . . . . . 358
8.5 Collecting data about your infrastructure . . . . . 360
8.5.1 Creating Probes . . . . . 361
8.5.2 Creating Scans . . . . . 365
8.5.3 Creating Pings . . . . . 368
8.5.4 Creating Performance Monitors . . . . . 370
8.6 Retrieving and displaying data about your Infrastructure . . . . . 376
8.6.1 Viewing data about your storage subsystems . . . . . 376
8.6.2 Viewing data about your fabrics . . . . . 386
8.7 Viewing data about your tape libraries . . . . . 391
8.7.1 Viewing data about your computers and file systems . . . . . 393
8.8 Alerting . . . . . 398
8.9 Configuring your storage subsystems and switches . . . . . 409

Chapter 9. Topology viewer . . . . . 415
9.1 Design principles and concepts . . . . . 416
9.1.1 Progressive information disclosure . . . . . 416
9.1.2 Semantic Zooming . . . . . 417
9.1.3 Entities . . . . . 418
9.1.4 Information Overlays . . . . . 422
9.1.5 Layout . . . . . 423
9.1.6 Connections . . . . . 424
9.1.7 Status propagation . . . . . 425
9.1.8 Hovering . . . . . 429
9.1.9 Pinning . . . . . 430
9.1.10 Zone and zonesets . . . . . 431
9.1.11 Removing entities from the database . . . . . 433
9.1.12 Refreshing the Topology Viewer . . . . . 433
9.2 Getting started . . . . . 434
9.2.1 Launch the Topology Viewer . . . . . 434
9.2.2 Navigation . . . . . 436
9.2.3 Computer . . . . . 437
9.2.4 Fabric . . . . . 442


9.2.5 Storage Subsystem . . . 445
9.3 Printing . . . 450
9.4 Summary . . . 450

Chapter 10. Managing and monitoring your storage subsystem . . . 453
10.1 Case study 1: adding new servers and storage . . . 454
10.1.1 Server tasks . . . 454
10.1.2 Storage provisioning and zoning on Colorado . . . 459
10.1.3 Storage provisioning and zoning on AZOV (002-AZOV) . . . 465
10.1.4 Post activities for Colorado and AZOV . . . 470
10.1.5 Manual zone configuration with TPC . . . 473
10.2 Case study 2: Detect and alert unwanted files . . . 481
10.2.1 Prerequisite steps . . . 481
10.2.2 Create a Scan targeted to the specific filesystem . . . 483
10.2.3 Create a customized report for the scan results . . . 487
10.2.4 Set up the constraint definition . . . 490
10.2.5 Rescan the filesystem to see if there is a violation . . . 494
10.2.6 Activate Archive/Delete Operation . . . 498
10.3 Case study 3: Policy management . . . 499
10.3.1 The steps to perform . . . 500
10.3.2 Create a Scan and a Report targeted to a specific filesystem . . . 503
10.3.3 Define an alert for a filesystem threshold condition (optional) . . . 506

Chapter 11. Hints, tips and good to knows . . . 509
11.1 Selecting an SMS or DMS tablespace . . . 510
11.2 DB2 installation known issues . . . 511
11.3 How to get rid of Engenio provider 5989 port . . . 511
11.4 DB2 view for CIMOM discovered each subsystem . . . 511
11.5 Valid characters for user ID and passwords . . . 511
11.5.1 Typical installation . . . 512
11.5.2 Custom installation . . . 512
11.6 How to change timeout value . . . 513
11.7 Common Agent and Agent Manager documentation . . . 513
11.7.1 For an installed Common Agent . . . 513
11.7.2 For an installed Agent Manager . . . 513
11.7.3 Scripts to clean up TPC V3.1 component or complete install . . . 513
11.8 SLP configuration recommendation . . . 514
11.8.1 SLP registration and slptool . . . 515
11.9 Tivoli Common Agent Services . . . 515
11.9.1 Locations of configured user IDs . . . 515
11.9.2 Tivoli Agent Manager status . . . 516
11.10 Verifying if a port is in use . . . 518

Appendix A. Worksheets . . . 519
User IDs and passwords . . . 520
Server information . . . 520
User IDs and passwords for key files and installation . . . 521
Storage device information . . . 521
IBM TotalStorage Enterprise Storage Server, DS6000, DS8000 . . . 522
IBM DS4000 . . . 523
IBM SAN Volume Controller . . . 524

Related publications . . . 525
IBM Redbooks . . . 525



Other publications . . . 525
Online resources . . . 525
How to get IBM Redbooks . . . 526
Help from IBM . . . 526

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

Copyright IBM Corp. 2006. All rights reserved.

xi

Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L, AIX, Cloudscape, DB2 Universal Database, DB2, DS4000, DS6000, DS8000, Enterprise Storage Server, ESCON, eServer, FlashCopy, ibm.com, IBM, iSeries, NetView, Power PC, POWER4, POWER5, pSeries, Redbooks (logo), Redbooks, System Storage, Tivoli Enterprise Console, Tivoli Enterprise, Tivoli, TotalStorage, WebSphere, xSeries, z/OS, zSeries

The following terms are trademarks of other companies: Java, JDBC, JDK, JRE, JVM, Solaris, Sun, Sun Microsystems, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Active Directory, Internet Explorer, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Preface
IBM TotalStorage Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data.

TotalStorage Productivity Center V3.1 is a rewrite of the previous versions. This IBM Redbook shows you how to access the functions compared to where they were found in the previous releases.

This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center. It provides an overview of the product components and functions. It describes the hardware and software environment required and provides a step-by-step installation procedure. Customization and usage hints and tips are also provided.

This book is not a replacement for existing IBM Redbooks or product manuals that detail the implementation and configuration of the individual products that make up the IBM TotalStorage Productivity Center, or the products as they may have been called in previous versions. We refer to those books as appropriate throughout this book.

The team that wrote this redbook


This redbook was produced by specialists from around the world working at the International Technical Support Organization (ITSO), San Jose Center. The book was completed in two phases. The team pictures are shown on this page and on page xiv.

From left to right: Massimo, Marta, Tom, and Mary


From left to right: Hartmut, Stefan, and Werner

Mary Lovelace is a Consulting IT Specialist at the International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and systems support. She has written many redbooks on TotalStorage Productivity Center and z/OS storage products.

Tom Conway is an Infrastructure Architect in the United States of America. He has 16 years of experience in the Open Systems infrastructure field. He joined IBM in 2001 and became the Chief Engineer of the IBM Global Services SAN Interoperability Lab at the IBM National Test Center in Gaithersburg, Maryland. His areas of expertise include Open Systems server hardware, operating systems, networking, and storage hardware and software, including the IBM TotalStorage Productivity Center. He is an IBM Certified Professional Server Expert.

Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 20 years of experience in software development, project management, and consulting, concentrating on the telecommunication segment. Werner joined IBM in 2001 and works in presales as a Storage SE for Open Systems. His expertise is the design and implementation of IBM Storage solutions (ESS/FAStT/LTO/NAS/SAN/SVC). He holds a Dipl. Informatiker (FH) degree from Fachhochschule Konstanz, Germany.

Marta Greselin is an IT Specialist working for IBM Software Group in Italy. She joined IBM in 1999. Her role is Technical Sales Support in Tivoli. She has seven years of experience in selling and implementing proof of concept scenarios for storage management software solutions, both in Tivoli and TotalStorage Open Software. She holds a degree in Physics from Università Statale di Milano. Her areas of expertise include IBM Tivoli Storage Manager, IBM TotalStorage Productivity Center, IBM TotalStorage SAN Volume Controller, and IBM TotalStorage SAN File System.

Hartmut Harder is an IT Specialist based in Karlsruhe, Germany.
Before joining IBM in 1985, he completed his education as a certified engineer of electronics. He started with IBM ITS Delivery as a Hardware Specialist for large system customers. In his more than 20 years of IT experience, he has spent two thirds of this time in several areas of systems management software products such as Tivoli Framework, Tivoli Monitoring, and Software Distribution

with NetView DM/2. For the last six years he has focused on storage-related products such as Tivoli Storage Manager. His working knowledge is gained from supporting customers with planning, implementing, and supporting IBM Storage Management solutions.

Stefan Lein is a Consulting IT Specialist working for Field Technical Sales Support in the IBM Storage Sales organization in Germany. He joined IBM in 1993 and has worked in several sales and technical roles. He has five years of experience providing presales and postsales support for IBM TotalStorage solutions for open systems. His areas of special expertise include IBM Disk Systems and the IBM Storage Software solution portfolio. Stefan is an IBM Certified Specialist for TotalStorage Networking and Virtualization Architecture and for Open Systems Storage Solutions. He holds a degree in Computer Science from the University of Applied Science in Nürnberg, Germany, and a degree in economical engineering from the University of Applied Science in Würzburg/Schweinfurt, Germany.

Massimo Mastrorilli is an Advisory IT Storage Specialist in Switzerland. He joined IBM Italy in 1989 and moved to IBM Switzerland, based in Lugano, seven years ago. He has 16 years of experience in implementing, designing, and supporting storage solutions in S/390 and Open Systems environments. His areas of expertise include IBM Tivoli Storage Manager, storage area networks (SAN), and storage solutions for Open Systems. He is an IBM Certified Specialist for TSM, Storage Sales, and Open System Storage Solutions. He is a member of the Tivoli GRT Global Response Team group.
Thanks to the following people for their contributions to this project:

Robert Haimowitz
Sangam Racherla
International Technical Support Organization

Diana Duan
Doug Dunham
Paul Lee
Curtis Neal
Jeanne Ostdiek
Scott Venuti
San Jose, California, IBM USA

Russ Warren
Storage Software Project Management
Research Triangle Park, North Carolina, IBM USA

Mike Griese
Technical Support Marketing
Rochester, Minnesota, IBM USA

Derek Jackson
Advanced Technical Support
Gaithersburg, Maryland, IBM USA


Tina Dunton
Nancy Hobbs
Sudhir Koka
Bryant Lee
Arvind Surve
Bill Tuminaro
Miki Walters
TotalStorage Productivity Center Development, San Jose, California, IBM USA

Eric Butler
Andreas Dieberger
Roberto Pineiro
Ramani Routray
IBM Research, San Jose, California, IBM USA

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an email to:


redbook@us.ibm.com

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Chapter 1. Introduction to IBM TotalStorage Productivity Center


IBM TotalStorage Productivity Center is a storage management solution that can help you reduce the effort required to manage complex storage infrastructures and improve storage utilization and administration efficiency. This solution allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. In this chapter, we provide an overview of IBM TotalStorage Productivity Center V3.1.


1.1 What is IBM TotalStorage Productivity Center?


IBM TotalStorage Productivity Center is an integrated set of software components that provides end-to-end storage management, from the host and application to the target storage device, in a heterogeneous platform environment. This software offering provides disk and tape library configuration and management, performance management, SAN fabric management and configuration, and host-centered usage reporting and monitoring from the perspective of the database application or file system.

IBM TotalStorage Productivity Center:
- Simplifies the management of storage infrastructures.
- Manages, configures, and provisions SAN-attached storage.
- Monitors and tracks performance of SAN-attached devices.
- Monitors, manages, and controls (through zones) SAN fabric components.
- Manages the capacity utilization and availability of file systems and databases.

TotalStorage Productivity Center V3.1 is an integrated storage infrastructure management solution that simplifies, automates, and optimizes the management of storage devices, storage networks, and the capacity utilization of file systems and databases. It helps you manage the capacity utilization of file systems and databases and automate file system capacity provisioning, perform device configuration and management of multiple devices from a single user interface, tune and proactively manage the performance of storage devices on the SAN, and manage, monitor, and control your SAN fabric.

TotalStorage Productivity Center V3.1 provides a single management platform that allows you to centralize how you manage your storage infrastructure. By providing an integrated suite with management modules focused on various aspects of the storage infrastructure, TotalStorage Productivity Center delivers the capability to do role-based administration and single sign-on with a single management server and repository. The central console provides a centralized place to monitor, plan, configure, report, and do problem determination on the SAN fabric, storage arrays, and storage capacity.

1.1.1 TotalStorage Productivity Center structure


In this section, we look at the TotalStorage Productivity Center structure from the logical and physical view.

Logical structure
The logical structure of TotalStorage Productivity Center V3.1 has three layers, as shown in Figure 1-1 on page 3. The infrastructure layer consists of basic function such as messaging, scheduling, logging, device discovery, and a consolidated database shared by all components of TotalStorage Productivity to ensure consistent operation and performance. The application layer consists of core TotalStorage Productivity Center management functions, based on the infrastructure implementation, that provide different disciplines of storage or data management. These application components are most often associated with the product components that make up the product suite, such as fabric management, disk management, replication management and data management. The interface layer presents integration points for the products that make up the suite. The integrated graphical user interface (GUI) brings together product and component functions into a single representation that seamlessly interacts with the components to centralize the tasks for planning, monitoring, configuring, reporting, topology viewing, and problem resolving.


[Diagram: three layers. Interfaces: integrated user interface, CLI, WSDL, provisioning/workflow, automated best practices. Management Applications: fabric, disk, replication, performance, data, others. Infrastructure: device discovery and control, consolidated database, scheduling, messages, logging.]

Figure 1-1 TotalStorage Productivity Center V3.1 logical structure

Physical structure
IBM TotalStorage Productivity Center is comprised of the following elements:
- A data component, IBM TotalStorage Productivity Center for Data (formerly IBM Tivoli Storage Resource Manager)
- A fabric component, IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager)
- A disk component, IBM TotalStorage Productivity Center for Disk (formerly IBM TotalStorage Multiple Device Manager)
- A replication component (formerly IBM TotalStorage Multiple Device Manager Replication Manager)

IBM TotalStorage Productivity Center includes a centralized suite installer, and IBM TotalStorage Productivity Center for Data and IBM TotalStorage Productivity Center for Fabric share a common agent to manage the fabric as well as the capacity utilization of file systems and databases. Figure 1-2 on page 4 shows the TotalStorage Productivity Center V3.1 physical structure.


[Diagram: TPC Database]

Figure 1-2 TotalStorage Productivity Center structure

The Data server is the control point for product scheduling functions, configuration, event information, reporting, and GUI support. It coordinates communication with agents and data collection from agents that scan file systems and databases to gather storage demographics and populate the database with the results. Automated actions can be defined to perform file system extension, data deletion, Tivoli Storage Manager backup or archiving, or event reporting when defined thresholds are encountered. The Data server is the primary contact point for GUI user interface functions. It also includes functions that schedule data collection and discovery for the Device server.

The Device server component discovers, gathers information from, analyzes the performance of, and controls storage subsystems and SAN fabrics. It coordinates communication with agents and data collection from agents that scan SAN fabrics.

The single database instance serves as the repository for all TotalStorage Productivity Center components.

The Data agents and Fabric agents gather host, application, and SAN fabric information and send this information to the Data server or Device server. The GUI allows you to enter or receive information for all TotalStorage Productivity Center components. The command-line interface (CLI) allows you to issue commands for major TotalStorage Productivity Center functions.


1.1.2 IBM TotalStorage Productivity Center components


In this section we provide more details on the components that make up the TotalStorage Productivity Center.

IBM TotalStorage Productivity Center for Data


IBM TotalStorage Productivity Center for Data is designed to provide a comprehensive storage resource management (SRM) solution for heterogeneous storage environments across the enterprise. It includes enterprise-wide reporting and monitoring, policy-based management, and automated capacity provisioning for direct attached storage (DAS), network attached storage (NAS), and storage area network (SAN) environments.

TotalStorage Productivity Center for Data enables administrators to identify, manage, control, and predict storage usage. It also provides file system and database management, reporting on storage capacity and growth. TotalStorage Productivity Center for Data provides over 300 enterprise-wide reports, monitoring and alerts, policy-based action, and file system capacity automation in a heterogeneous environment. It helps improve the capacity utilization of file systems and databases and helps add intelligent data protection and retention practices.

TotalStorage Productivity Center for Data performs the following functions:
- Discover and monitor disks, partitions, shared directories, and servers.
- Monitor and report on capacity and utilization across platforms to help you identify trends and prevent problems.
- Monitor storage assets associated with enterprise-wide databases and issue notifications of potential problems.
- Provide a wide variety of standardized reports about file systems, databases, and storage infrastructure to track usage and availability.
- Provide file analysis across platforms to help you identify and reclaim space used by non-essential files.
- Provide policy-based management and automated capacity provisioning for file systems when user-defined thresholds are reached.
- Generate invoices that charge back for storage usage on a departmental, group, or user level.
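As a sketch of the chargeback function in the list above, the following shows how per-group invoices could be derived from collected capacity figures. The function name, rate, and usage numbers are our own illustration, not a TPC interface; TPC derives the underlying figures from its scanned capacity data.

```python
# Sketch of a per-group storage chargeback calculation, similar in spirit
# to the invoicing function described above. Rates and usage figures are
# invented for illustration only.

def chargeback(usage_gb_by_group, rate_per_gb):
    """Return a simple invoice mapping each group to the amount owed."""
    return {group: round(gb * rate_per_gb, 2)
            for group, gb in usage_gb_by_group.items()}

usage = {"finance": 120.0, "engineering": 340.5, "marketing": 55.25}
invoice = chargeback(usage, rate_per_gb=0.12)

for group, amount in sorted(invoice.items()):
    print(f"{group:12s} {amount:8.2f}")
```

A real deployment would source the usage figures from the repository database rather than a hard-coded dictionary.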
These functions, available with Data Manager, are designed to help lower storage costs by:
- Improving storage utilization
- Enabling intelligent capacity planning
- Supporting application availability through computer uptime reporting and application database monitoring

The architecture of IBM TotalStorage Productivity Center for Data enables system administrators to see all of the storage assets, including direct-attached storage and network-attached storage. This comprehensive view of the entire storage map allows administrators to manage much larger environments and also to get the information about utilization and usage that is typically required in large environments. The information collected by TotalStorage Productivity Center for Data can help you make intelligent decisions to optimize the utilization of your open systems environments.

The data collected by TotalStorage Productivity Center for Data helps you understand what is really going on with the data that resides on your servers. This includes views as to when files are

created, accessed, and modified, and by what group or user. This type of information enables system administrators to map the actual storage resource to the consumers of that resource. The ability to map storage consumption to storage hardware has become increasingly important as the size of open systems environments has increased.

In addition to understanding the current consumption and usage of data within the enterprise, TotalStorage Productivity Center for Data keeps track of this information over time. Not only does this historical view of storage consumption and utilization allow you to see usage trends over time, it also enables the system administrator to see a projected use of storage into the future. This allows the system administrator to plan the purchase of additional capacity in a proactive manner rather than just reacting to being out of space.

The major components of TotalStorage Productivity Center for Data are:
- Data Manager. The manager controls the discovery, reporting, and alert functions. It does the following:
  - Receives information from the agents and stores that information in the central repository
  - Issues commands to agents for jobs
  - Receives requests from clients for information and retrieves the requested information from the central data repository
- Data agents on managed systems. An agent resides on each managed system and performs the following functions:
  - Runs probes and scans
  - Collects storage-related information about the volumes or file systems that are accessible to the managed system
  - Forwards information to the manager to be stored in the database repository
- Web server. The optional Web server permits remote Web access to the server.
- Clients. Clients communicate directly with Data Manager to perform administration, monitoring, and reporting. A client can be a locally installed interface to Data Manager, or it can use the Web server to access the user interface through a Web browser.
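The file demographics described above (size, owner, and creation, access, and modification times) are the kind of records a Data agent scan collects. Below is a minimal, hypothetical sketch of such a collection pass using only the Python standard library; it is not the agent's actual implementation.

```python
import os
import tempfile

def scan_filesystem(root):
    """Walk a directory tree and collect per-file demographics of the
    sort a Data agent gathers: size, owner, and access/modification
    times. A real agent forwards these records to the Data server."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records.append({
                "path": path,
                "size": st.st_size,
                "uid": st.st_uid,        # maps usage back to a user
                "accessed": st.st_atime,
                "modified": st.st_mtime,
            })
    return records

# Small self-contained demonstration against a throwaway directory.
with tempfile.TemporaryDirectory() as root:
    sample = os.path.join(root, "report.txt")
    with open(sample, "w") as f:
        f.write("quarterly numbers")
    for rec in scan_filesystem(root):
        print(rec["path"], rec["size"], rec["uid"])
```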

IBM TotalStorage Productivity Center for Disk


IBM TotalStorage Productivity Center for Disk enables device configuration and management of SAN-attached devices from a single console. In addition, it includes performance capabilities to monitor and manage the performance of the disks. TotalStorage Productivity Center for Disk simplifies the complexity of managing multiple SAN-attached storage devices. It allows you to manage SANs and heterogeneous storage from a single console.

TotalStorage Productivity Center for Disk allows you to manage network storage components based on SMI-S, such as:
- IBM TotalStorage SAN Volume Controller (SVC)
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Disk Subsystems (DS4000, DS6000, and DS8000 series)
- Other storage subsystems that support the SMI-S standards

Device discovery is performed by the Service Location Protocol (SLP), as specified by SMI-S. Configuration of the discovered devices is possible in conjunction with CIM agents associated with those devices, using the standard mechanisms defined in SMI-S. TotalStorage Productivity Center for Disk gathers events, and can launch an element manager specific to each discovered device.
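Because discovery follows SLP as specified by SMI-S, it may help to see what an SLPv2 Service Request (RFC 2608) looks like on the wire: CIM agents register themselves under the service type service:wbem, which is what a discovery pass asks for. The sketch below hand-builds such a request purely for illustration; the helper names are ours, and a real implementation would use an SLP library rather than raw bytes.

```python
import struct

def build_srvrqst(service_type, scopes="DEFAULT", xid=1, lang="en"):
    """Build a minimal SLPv2 Service Request (RFC 2608) as raw bytes."""
    def slp_string(s):
        # SLP strings are a 2-byte big-endian length followed by the text.
        b = s.encode("ascii")
        return struct.pack("!H", len(b)) + b

    body = (slp_string("")              # previous-responder list (empty)
            + slp_string(service_type)  # e.g. "service:wbem"
            + slp_string(scopes)
            + slp_string("")            # predicate (no filtering)
            + slp_string(""))           # SLP SPI (no security)

    lang_b = lang.encode("ascii")
    # Header: version, function-id (1 = SrvRqst), 3-byte total length,
    # flags, 3-byte next-extension offset, XID, and the language tag.
    header_len = 1 + 1 + 3 + 2 + 3 + 2 + 2 + len(lang_b)
    total = header_len + len(body)
    header = (struct.pack("!BB", 2, 1)
              + total.to_bytes(3, "big")
              + struct.pack("!H", 0)        # flags
              + (0).to_bytes(3, "big")      # next-extension offset
              + struct.pack("!H", xid)
              + struct.pack("!H", len(lang_b)) + lang_b)
    return header + body

packet = build_srvrqst("service:wbem")
# A real discovery pass would send this over UDP port 427, typically
# to the SLP multicast address, and parse the Service Replies.
```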


TotalStorage Productivity Center for Disk performance functions include:
- Collect and store performance data and provide alerts
- Provide graphical performance reports
- Help optimize storage allocation
- Provide volume contention analysis

Through the use of data collection, the setting of thresholds, and the use of performance reports, performance can be monitored for the ESS, DS4000, DS6000, DS8000, SVC, and any other storage subsystem that supports the SMI-S block server performance subprofile. The performance function starts with the data collection task, which is responsible for capturing performance statistics for the devices and storing the data in the database.

Thresholds can be set for certain performance metrics depending on the type of device. Threshold checking is performed during data collection. When performance is outside the specified boundaries, alerts can be generated.

After performance data has been collected, you can configure TotalStorage Productivity Center for Disk to present graphical or text reports on the historical performance behavior of specified devices. The performance reports provide information about the performance metrics and display past or current performance in graphical form.
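The threshold-checking step described above amounts to comparing each collected sample against per-metric boundaries and raising an alert for anything outside them. The metric names and limits below are illustrative assumptions, not TPC's actual metric catalog.

```python
# Sketch of threshold checking during performance data collection.
# Metric names and limits are invented for illustration.

THRESHOLDS = {
    "read_response_ms": 20.0,   # alert when average read response exceeds this
    "port_utilization": 0.85,   # alert when a port runs hotter than 85%
}

def check_sample(device, sample, thresholds=THRESHOLDS):
    """Return an alert message for every metric outside its boundary."""
    alerts = []
    for metric, limit in thresholds.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{device}: {metric}={value} exceeds {limit}")
    return alerts

# One healthy metric, one that breaches its boundary.
for alert in check_sample("DS8000-01",
                          {"read_response_ms": 35.2,
                           "port_utilization": 0.60}):
    print(alert)
```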

IBM TotalStorage Productivity Center for Fabric


TotalStorage Productivity Center for Fabric provides automated device discovery, topology rendering, error detection and fault isolation, SAN error prediction, zone control, real-time monitoring and alerts, and event management for heterogeneous enterprise SAN environments. TotalStorage Productivity Center for Fabric simplifies the management and improves the availability of the SAN environment. The Fabric Manager can monitor and report on SAN resources and switch performance, and it provides a single location for zone control. TotalStorage Productivity Center for Fabric discovers existing zones and zone members and allows you to modify or delete them; in addition, you can create new zones. Support for aliases is also provided. Switch performance and capacity management reporting and monitoring can help determine whether more bandwidth is needed. TotalStorage Productivity Center for Fabric gives you the ability to view events happening in your SAN environment and records state changes. The events are displayed in a color-coded fashion and can be further customized to reflect organizational priorities. TotalStorage Productivity Center for Fabric forwards events signaling topology changes or updates to the IBM Tivoli Enterprise Console, to another SNMP manager, or both. TotalStorage Productivity Center for Fabric supports host bus adapters (HBAs), disk subsystems, tape systems, SAN switches, routers, and gateways. For a complete list of the supported devices, go to the following URL and navigate to the Fabric Manager support pages.
http://www.ibm.com/servers/storage/support/software/tpc

The components of TotalStorage Productivity Center for Fabric are:
- Fabric Manager

  The manager performs the following functions:
  - Discovers SAN components and devices
  - Gathers data from agents on managed hosts, such as descriptions of SANs and host information
  - Generates Simple Network Management Protocol (SNMP) events when a change is detected in the SAN fabric
  - Forwards events to the Tivoli Enterprise Console or an SNMP console
  - Monitors switch performance by port and by constraint violations
- Fabric agents on managed hosts

  Each agent performs the following functions:
  - Gathers information about the SAN by querying switches and devices for attribute and topology information
  - Gathers event information detected by host bus adapters (HBAs)

IBM TotalStorage Productivity Center for Replication


IBM TotalStorage Productivity Center for Replication simplifies copy services management for the IBM TotalStorage Enterprise Storage Server (ESS). IBM TotalStorage Productivity Center for Replication provides configuration and management of the FlashCopy and Synchronous PPRC capabilities of the ESS.

1.2 What is new in IBM TotalStorage Productivity Center V3.1


This section describes the enhancements and functions in TotalStorage Productivity Center V3.1.

Topology Viewer
Within TotalStorage Productivity Center, the Topology Viewer is designed to provide an extended graphical topology view: a graphical representation of the physical and logical resources (for example, computers, fabrics, and storage subsystems) that have been discovered in your storage environment. In addition, the Topology Viewer depicts the relationships among resources (for example, the disks comprising a particular storage subsystem). Detailed, tabular information (for example, the attributes of a disk) is also provided. With all the information that the Topology Viewer provides, you can monitor and troubleshoot your storage environment more easily and quickly. The overall goal of the Topology Viewer is to provide a central location to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation to the environment. This flexibility in the Topology Viewer UI affords a better cognitive mapping of the entities within the environment, and provides data about entities and access to additional tasks and functions associated with the current environmental view and the user's role. The Topology Viewer uses the TotalStorage Productivity Center database as the central repository for all the data that it displays. It reads the data from the database at user-definable intervals and, if necessary, updates the displayed information automatically. Figure 1-3 on page 9 shows the Topology Viewer Overview view.


Figure 1-3 Topology Viewer Overview view

Tape library support


TotalStorage Productivity Center provides support for tape library management. Tape Manager is present and available in the GUI whenever the Device Server is installed. Using Tape Manager, you can discover tape libraries, group libraries to monitor multiple libraries, view alerts generated by tape libraries, and launch tape library element managers. The tape libraries supported are:
- IBM 3584
- IBM 3494 (limited support)

Summary of changes in TotalStorage Productivity Center V3.1


TotalStorage Productivity Center V3.1 offers:
- A single, integrated package with a new Topology Viewer, providing an end-to-end view of the storage area network (from hosts to physical disks).
- A simple, easy-to-install package with management server support added for IBM AIX V5.3, and integrating IBM DB2 as the management server database. The server components of IBM TotalStorage Productivity Center can now be installed on:
  - Microsoft Windows 2003
  - AIX 5.3 (p5 supported)
  - Linux RedHat 3.0 on xSeries
- Performance management support for the IBM TotalStorage DS4000 family, and additional performance support for select IBM TotalStorage, Brocade, Cisco, and McData fabric switches and directors.
- Role-based task authentication, which assists with implementing storage management that conforms to government initiatives.
- Support for managing IBM TotalStorage 3584 and limited support for IBM TotalStorage Enterprise Automated 3494 tape libraries.
- Support for third-party disk array systems that include Storage Management Interface Specification (SMI-S) Providers certified by the SNIA Conformance Test Program (CTP) to be SMI-S 1.02 or SMI-S 1.1 compliant. This support includes storage provisioning, as well as asset and capacity reporting.
- Consolidated and enhanced device discovery and control through CIMOM. All CIMOM-related information gathered during CIMOM discovery is shared by all TotalStorage Productivity Center components.
- Consistent reporting capabilities (scheduled and ad hoc). The scheduling capabilities of TotalStorage Productivity Center for Data have been extended to all components.
- Consolidated message logging.
- Data export capabilities (HTML, CSV).
- A single set of services for consistent administration and operations:
  - Policy definitions
  - Event handling
  - Resource groups
- A new command-line interface, tpctool, for configuration, fabric and disk management, and performance reporting.
- Support in TotalStorage Productivity Center for Fabric for SMI-S-based fabrics, collecting performance statistics from IBM and third-party SAN fabrics. TotalStorage Productivity Center for Fabric is designed to provide an extended graphical topology view of your storage area network that displays the hosts, SAN fabric, and storage, showing the SAN connectivity and its availability, the fabric performance metrics, and the status of the ports on the SAN fabric.

TotalStorage Productivity Center V3.1 provides the following support for any disk subsystem (including non-IBM devices) that is SNIA SMI-S 1.0.2 or 1.1 compliant (for example, SNIA CTP provider certified). The support provided for these SMI-S compliant subsystems includes those functions enabled by support for the required profiles of the SMI-S standard. Typically, that includes:
- Discovery of CIMOMs and storage subsystems (through SLP)
- Reporting on subsystem asset and capacity data (with details on storage subsystems, disk groups, disks, storage pools, and volumes)
- Monitoring
- Provisioning (volume creation and volume mapping/masking to a host server)
- Performance metrics for storage subsystem ports, subsystem volumes, and top-level storage computer systems (including overall performance metrics for the storage device)

Note that IBM relies on the testing and certification performed through SNIA for SMI-S compliance. The anticipated list of non-IBM disk subsystems that TotalStorage Productivity Center V3.1 will support through SMI-S compliance for the functions listed here includes:
- EMC Symmetrix
- EMC Clariion
- Engenio subsystems
- HDS Thunder 9500V
- HDS Lightning 9900V
- HP XP 512, XP 1024
- HP StorageWorks Virtual Array family


For a complete list of third-party device support of SMI-S, consult the SNIA Web site:
http://www.snia.org/ctp

1.2.1 Changes between TPC V3.1 and V2


The following TotalStorage Productivity Center items from prior releases are not in TotalStorage Productivity Center V3.1:
- Workflow integration of TotalStorage Productivity Center with Tivoli Provisioning Manager is not supported.
- ED-FI, or SAN error predictor, functionality is not supported in this release.
- TotalStorage Productivity Center for Replication is not integrated with the V3.1 components (TotalStorage Productivity Center for Data, TotalStorage Productivity Center for Disk, and TotalStorage Productivity Center for Fabric).
- Prerequisite Software Installer
- Suite Installer
- ICAT installation
- Tivoli NetView installation, uninstallation, and upgrade
- IBM Director base
- IBM TotalStorage Productivity Center for Data supported its database repository on Oracle, SQL Server, and Cloudscape in Version 2.x. These databases are not supported in TotalStorage Productivity Center V3.1.
- Installing Simple Network Management Protocol (SNMP) service with a community name of public and Management Information Base (MIB) files is no longer necessary.
- iSCSI support in Fabric Manager

1.3 Licensing
All IBM TotalStorage Productivity Center licenses offer a common set of general functions, such as the TotalStorage Productivity Center Administrative Services, Disk Management (not including performance management), Fabric Management (not including performance management), and Tape Management (reporting). IBM TotalStorage Productivity Center licensing is based on the full usable terabyte capacity of the associated storage devices to be managed.

1.3.1 IBM TotalStorage Productivity Center Limited Edition


IBM TotalStorage Productivity Center Limited Edition is packaged with the IBM TotalStorage family of DS platforms, including:
- DS8000
- DS6000
- DS4000
- SAN Volume Controller (SVC)
- IBM 3584 tape library

Productivity Center Limited Edition is designed to help you:
- Discover and configure IBM and heterogeneous SMI-S supported devices.
- Perform event gathering and error logging, and launch device element managers.
- Provide basic asset and capacity reporting.
- Display an end-to-end topology view of your storage infrastructure and health console.
- Enable a simple upgrade path to IBM TotalStorage Productivity Center Standard Edition (or single-priced modules).

IBM TotalStorage Productivity Center Limited Edition is a management option offered with IBM midrange and high-end storage systems. This tool provides storage administrators with a simple way to conduct device management for multiple storage arrays and SAN fabric components from a single integrated console that is also the base of operations for the IBM TotalStorage Productivity Center suite. This offering is available with Productivity Center for Data, Fabric, and Disk.

Enabling additional function


To enable additional function when IBM TotalStorage Productivity Center Limited Edition is installed, you add the corresponding license key. Installing the IBM TotalStorage Productivity Center for Disk license enables:
- Subsystem performance monitors
- Performance triggering in subsystem alerts
- Advanced performance reports
- Performance overlay in topology

Installing the IBM TotalStorage Productivity Center for Fabric license enables:
- Fabric performance monitors
- Performance triggering in fabric alerts
- Advanced performance reports
- Performance overlay in topology

1.3.2 IBM TotalStorage Productivity Center Standard Edition


IBM TotalStorage Productivity Center Standard Edition brings together TotalStorage Productivity Center for Data, TotalStorage Productivity Center for Disk, and TotalStorage Productivity Center for Fabric with the advanced performance management capabilities in one package, and can provide an end-to-end view of the storage infrastructure. IBM TotalStorage Productivity Center Standard Edition provides all the management capabilities to better manage your heterogeneous storage infrastructure, from application to back-end storage system disk, at a single bundled price. With Productivity Center Standard Edition you can centralize the management of your storage infrastructure from a single interface using role-based administration and single sign-on. It also provides a single management application with modular integrated components that are easy to install, configure, and operate. Productivity Center Standard Edition can additionally manage performance and connectivity from the host file system to the physical disk, including in-depth performance monitoring and analysis of SAN fabric performance. TotalStorage Productivity Center Standard Edition provides a main storage management application to monitor, plan, configure, report, and perform problem determination on a heterogeneous storage infrastructure.

1.3.3 IBM TotalStorage Productivity Center for Replication


IBM TotalStorage Productivity Center for Replication simplifies the configuration of advanced copy services for the IBM TotalStorage ESS, attached to z/OS or open systems. Productivity Center for Replication will also monitor the advanced copy services and notify the storage administrator if problems are encountered.


Chapter 2. Key concepts
Industry standards and protocols are the basis of the IBM TotalStorage Productivity Center. Understanding these concepts is important for installing and customizing the IBM TotalStorage Productivity Center. This chapter describes the standards on which the IBM TotalStorage Productivity Center is built, as well as the methods of communication used to discover and manage storage devices. Communication between the various components of IBM TotalStorage Productivity Center is also discussed. Diagrams are provided to show the relationship and interaction of the various elements in the IBM TotalStorage Productivity Center environment.


2.1 Standards used in IBM TotalStorage Productivity Center


This section presents an overview of the standards that are used by the different components of IBM TotalStorage Productivity Center. SLP and CIM are described in detail because they are new concepts to many people who work with TotalStorage Productivity Center, and they are important to understand. Vendor-specific tools are available to manage devices in the SAN, but these proprietary interfaces are not used within TotalStorage Productivity Center. The only exception is the application programming interface (API) that Brocade has made available to manage its Fibre Channel switches. This API is used within TotalStorage Productivity Center.

2.1.1 ANSI standards


Several standards have been published for the in-band management of storage devices, for example, SCSI Enclosure Services (SES).

T11 committee
Since the 1970s, the objective of the ANSI T11 committee has been to define interface standards for high-performance and mass storage applications. Since that time, the committee has completed work on three projects:
- High-Performance Parallel Interface (HIPPI)
- Intelligent Peripheral Interface (IPI)
- Single-Byte Command Code Sets Connection (SBCON)

Currently the group is working on Fibre Channel (FC) and Storage Network Management (SM) standards.

Fibre Channel Generic Services


The Fibre Channel Generic Services (FC-GS-3) Directory Service and Management Service are used within IBM TotalStorage Productivity Center for SAN management. The availability and level of function depend on the implementation by the individual vendor. For more information about the T11 committee, go to:
http://www.t11.org/index.htm

2.1.2 Web-Based Enterprise Management


Web-Based Enterprise Management (WBEM) is an initiative of the Distributed Management Task Force (DMTF) with the objective of enabling the management of complex IT environments. It defines a set of management and Internet standard technologies to unify the management of complex IT environments. The three main conceptual elements of the WBEM initiative are:
- Common Information Model (CIM)

  CIM is a formal object-oriented modeling language that is used to describe the management aspects of systems. See also 2.3, Common Information Model on page 28.
- xmlCIM

  This is a grammar to describe CIM declarations and messages used by the CIM protocol.
- Hypertext Transfer Protocol (HTTP)


Hypertext Transfer Protocol over Secure Socket Layer (HTTPS)

HTTP and HTTPS are used to enable communication between a management application and a device that both use CIM. For more information go to:
http://www.dmtf.org/standards/wbem/

WBEM architecture
The WBEM architecture defines the following elements:
- CIM Client

  The CIM Client is a management application that uses CIM to manage devices. A CIM Client can reside anywhere in the network, because it uses HTTP(S) to talk to CIM Object Managers and Agents. TotalStorage Productivity Center incorporates a CIM Client.
- CIM Managed Object

  A CIM Managed Object is a hardware or software component that can be managed by a management application using CIM.
- CIM Agent

  The CIM Agent is embedded into a device, or it can be installed on a server using the CIM Provider as the translator of the device's proprietary commands to CIM calls, and it interfaces with the management application (the CIM Client). The CIM Agent is linked to one device.
- CIM Provider

  A CIM Provider is the element that translates CIM calls to the device-specific commands, which is why a CIM Provider is, in most cases, delivered by the hardware vendor. You can think of it as a device driver. A CIM Provider is always closely linked to a CIM Object Manager or CIM Agent.
- CIM Object Manager

  A CIM Object Manager (CIMOM) is a part of the CIM Server that links the CIM Client to the CIM Provider. It enables a single CIM Agent to talk to multiple devices.
- CIM Server

  A CIM Server is the software that runs the CIMOM and the CIM Provider for a set of devices. This approach is used when the devices do not have an embedded CIM Agent. This term is often not used; instead, people often say CIMOM when they really mean the CIM Server.
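As a rough illustration of the CIM-XML mechanics behind this architecture, the following sketch builds a minimal EnumerateInstances request of the kind a CIM Client sends to a CIMOM over HTTP(S). This is not TotalStorage Productivity Center code: the class name, namespace, and message ID are illustrative, and a real request also carries CIM-specific HTTP headers.

```python
# Minimal sketch (illustrative only) of a CIM-XML "EnumerateInstances"
# request body, built with the Python standard library. A CIM Client would
# POST this document to a CIMOM; the response is another CIM-XML document.
import xml.etree.ElementTree as ET

def build_enumerate_instances(classname: str, namespace: str = "root/cimv2") -> bytes:
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1001", PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    ns = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace.split("/"):          # e.g. root/cimv2 -> two parts
        ET.SubElement(ns, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=classname)
    return ET.tostring(cim, encoding="utf-8")

payload = build_enumerate_instances("CIM_ComputerSystem")
```

The same pattern applies to other intrinsic methods (GetInstance, Associators, and so on); only the IMETHODCALL name and parameters change.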

2.1.3 Storage Networking Industry Association


The Storage Networking Industry Association (SNIA) defines standards that are used within IBM TotalStorage Productivity Center. You can find more information at the following Web site:
http://www.snia.org

Fibre Channel Common HBA API


The Fibre Channel Common HBA API is used as a standard for in-band storage management. It acts as a bridge between a SAN management application, such as Fabric Manager, and the Fibre Channel Generic Services. The IBM TotalStorage Productivity Center for Fabric agent uses this standard.


Storage Management Initiative - Specification


SNIA has fully adopted and enhanced the CIM for storage management in its SMI-S. SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMI-S is to standardize the management interfaces so that management applications can use them to provide cross-device management. This means that a newly introduced device can be managed immediately if it conforms to the standard. SMI-S extends CIM and WBEM with the following features:
- A single management transport

  Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S.
- A complete, unified, and rigidly specified object model

  SMI-S defines profiles and recipes within the CIM that enable a management client to reliably use a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names

  As a storage network configuration evolves and is reconfigured, key long-lived resources, such as disk volumes, must be identified uniquely and consistently over time.
- Rigorously documented client implementation considerations

  SMI-S provides client developers with vital information for traversing CIM classes within a device or subsystem, and between devices and subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system

  SMI-S compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents using SLP (see 2.2.1, SLP architecture on page 18).
- Resource locking

  SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources through a lock manager.
The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. The SNIA also provides interoperability tests, which help vendors test whether their applications and devices conform to the standard. For information about SNIA certified devices, go to the SNIA OEM Vendor Certification site:
http://www.snia.org/ctp/conformingproviders

Managers or components that use this standard include:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Data


2.1.4 Simple Network Management Protocol


SNMP is an Internet Engineering Task Force (IETF) protocol for monitoring and managing systems and devices in a network. Functions supported by the SNMP protocol are the request and retrieval of data, the setting or writing of data, and traps that signal the occurrence of events. SNMP enables a management application to query information from a managed device. The managed device has software running that sends and receives the SNMP information. This software module is usually called the SNMP agent.

Device management
An SNMP manager can read information from an SNMP agent to monitor a device. To do so, the device needs to be polled at regular intervals. The SNMP manager can also change the configuration of a device by setting certain values for the corresponding variables. Managers or components that use this standard include IBM TotalStorage Productivity Center for Fabric.

Traps
A device can also be set up to send a notification, called a trap, to the SNMP manager to inform it asynchronously of a status change. Depending on the existing environment and organization, it is likely that your environment already has an SNMP management application in place. The managers or components that use this standard are:
- IBM TotalStorage Productivity Center for Fabric (sending and receiving of traps)
- IBM TotalStorage Productivity Center for Data (can be set up to send traps, but does not receive traps)

Management Information Base


SNMP uses a hierarchically structured Management Information Base (MIB) to define the meaning and the type of a particular value. An MIB defines managed objects that describe the behavior of the SNMP entity, which can be anything from an IP router to a storage subsystem. Each SNMP managed device has its own MIB. The information is organized in a tree structure. Note: For more information about SNMP, refer to TCP/IP Tutorial and Technical Overview, GG24-3376.
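The tree structure behind an MIB can be illustrated with a small sketch: each managed object sits at a dotted object identifier (OID) path from the root of the tree. The OIDs below are the standard mib-2 system group; the values, and the lookup functions, are invented for illustration and are not how any SNMP agent is actually implemented.

```python
# Toy model of an MIB: dotted OIDs map to (object name, value) pairs.
# A GET resolves one OID; a walk returns every object under a subtree.
MIB_TREE = {
    "1.3.6.1.2.1.1.1.0": ("sysDescr", "Example storage subsystem"),
    "1.3.6.1.2.1.1.3.0": ("sysUpTime", 123456),
    "1.3.6.1.2.1.1.5.0": ("sysName", "demo-device"),
}

def snmp_get(oid: str):
    """Simulate an SNMP GET: resolve an OID to (name, value), or None."""
    return MIB_TREE.get(oid)

def snmp_walk(prefix: str):
    """Simulate an SNMP walk: all objects under a subtree, in lexical OID order."""
    return [(oid, entry) for oid, entry in sorted(MIB_TREE.items())
            if oid.startswith(prefix)]
```

A management console performs the same kind of resolution, using the compiled MIB to translate numeric OIDs into the readable names shown here.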

IBM TotalStorage Productivity Center for Data MIB file


For users planning to use the IBM TotalStorage Productivity Center for Data SNMP trap alert notification capabilities, an SNMP MIB is included in the server installation. You can find the SNMP MIB in the file tivoli_install_directory/snmp/tivoliSRM.MIB. The MIB is provided for use by your SNMP management console software. Most SNMP management station products provide a program called an MIB compiler that can be used to import MIBs. This allows you to better view the SNMP traps generated by Productivity Center for Data from within your management console software. Refer to your management console software documentation for instructions on how to compile or import a third-party MIB.


2.1.5 Fibre Alliance MIB


The Fibre Alliance has defined an MIB for the management of storage devices and is presenting the MIB to the IETF for standardization. The intention of putting together this MIB was to have one MIB that covers most (if not all) of the attributes of storage devices from multiple vendors. The idea was to have only one MIB loaded onto an SNMP manager, rather than one MIB file for each component. However, this requires that all devices comply with that standard MIB, which is not always the case. Note: This MIB is not part of IBM TotalStorage Productivity Center. To learn more about the Fibre Alliance and the MIB, refer to the following Web sites:
http://www.fibrealliance.org http://www.fibrealliance.org/fb/mib_intro.htm

2.2 Service Location Protocol overview


The Service Location Protocol (SLP) is an IETF standard, documented in Requests for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services. SLP enables the discovery and selection of generic services, which can range in function from hardware services, such as those for printers or fax machines, to software services, such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user can specify a search for all available printers that support PostScript, based on the given service type (printers) and the given attribute (PostScript). SLP searches the user's network for any matching services and returns the discovered list to the user.
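The matching just described, finding services by type plus an optional set of attributes, can be sketched as follows. The registry entries, attribute names, and function name are all invented for illustration; a real SLP query travels over the network rather than over an in-memory list.

```python
# Illustrative sketch of SLP-style matching: select registered services
# by service type, then filter by any required attributes.
registry = [
    {"url": "service:printer:lpr://prt1", "type": "printer",
     "attrs": {"language": "PostScript"}},
    {"url": "service:printer:lpr://prt2", "type": "printer",
     "attrs": {"language": "PCL"}},
    {"url": "service:wbem:https://cimom1:5989", "type": "wbem",
     "attrs": {"vendor": "example"}},
]

def find_services(service_type, required_attrs=None):
    """Return URLs of services matching the type and all required attributes."""
    required = required_attrs or {}
    return [s["url"] for s in registry
            if s["type"] == service_type
            and all(s["attrs"].get(k) == v for k, v in required.items())]
```

For example, find_services("printer", {"language": "PostScript"}) plays the role of the "all PostScript printers" query in the text.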

2.2.1 SLP architecture


The SLP architecture includes three major components: a Service Agent (SA), a User Agent (UA), and a Directory Agent (DA). The SA and UA are required components in an SLP environment, whereas the SLP DA is optional. The SMI-S specification introduces SLP as the method for management applications (the CIM clients) to locate managed objects. In SLP, an SA is used to report to UAs that a service that has been registered with the SA is available. For more information see:
http://www.caldera.com/support/docs/volution/vm/ig/appendixb.html#AEN2455

The following sections describe the components of SLP.

18

IBM TotalStorage Productivity Center: The Next Generation

Service Agent
The SLP SA is a component of the SLP architecture that works on behalf of one or more network services to advertise the availability of those services. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available. The SA can run in the same process as the service itself or in a different one. In either case, the SA supports registration and deregistration requests for the service (as shown in the right part of Figure 2-1). The service registers itself with the SA during startup, and removes the registration for itself during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration will be active. In the left part of the diagram, you can see the interaction between a UA and the SA.

Figure 2-1 SLP SA interactions (without SLP DA)

A service is required to re-register itself periodically, before the life span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if a service becomes inactive without removing the registration for itself, that old registration is removed automatically when its life span expires. The maximum life span of a registration is 65535 seconds (about 18 hours).
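The life-span behavior can be sketched as a small registration table that purges entries whose lease has lapsed. This is an illustrative model only, not an actual SA implementation; the clock is passed in explicitly so the expiry logic is easy to follow, and the class and method names are invented.

```python
import time

# Sketch of an SLP Service Agent's registration table with life spans.
MAX_LIFETIME = 65535  # seconds; the maximum SLP registration life span

class ServiceAgent:
    def __init__(self):
        self._services = {}  # service URL -> expiry timestamp

    def register(self, url, lifetime, now=None):
        """(Re-)register a service; the lease is capped at MAX_LIFETIME."""
        now = time.time() if now is None else now
        self._services[url] = now + min(lifetime, MAX_LIFETIME)

    def deregister(self, url):
        """Explicit removal, as a service does during shutdown."""
        self._services.pop(url, None)

    def active_services(self, now=None):
        """Drop expired registrations, then list the surviving URLs."""
        now = time.time() if now is None else now
        self._services = {u: t for u, t in self._services.items() if t > now}
        return sorted(self._services)
```

A service that keeps re-registering simply pushes its expiry timestamp forward; one that goes silent disappears from active_services once its lease runs out.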

User Agent
The SLP UA is a process working on behalf of the user to establish contact with a network service. The UA retrieves (or queries for) service information from the Service Agents or Directory Agents. The UA is a component of SLP that is closely associated with a client application or a user who is searching for the location of one or more services in the network. You use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locators (URLs) and any service attributes. You can then use a service's URL to connect to that service. The SLP UA locates the registered services based on a general description of the services that the user or client application has specified. This description usually consists of a service type and any service attributes, which are matched against the service URLs registered in the SLP Service Agents. The SLP UA usually runs in the same process as the client application, although it is not required to. The SLP UA processes find requests by sending out multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA in a unicast reply message. The SLP UA follows the multicast convergence algorithm and sends repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URLs and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using each service's URL (see Figure 2-2).
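The service URLs a UA hands back generally take the form service:&lt;type&gt;:&lt;scheme&gt;://host[:port]. A rough parser for that form is sketched below; the WBEM-style example URL is invented, and real SLP service URLs can carry additional components, so this is only an approximation.

```python
# Illustrative parser for the "service:" URL form SLP returns, e.g.
# service:wbem:https://host:5989 -> type, scheme, host, port.
def parse_service_url(url):
    assert url.startswith("service:"), "not an SLP service URL"
    rest = url[len("service:"):]            # e.g. "wbem:https://host:5989"
    service_type, _, address = rest.partition(":")
    scheme, _, hostport = address.partition("://")
    host, _, port = hostport.partition(":")
    return {"type": service_type, "scheme": scheme,
            "host": host, "port": int(port) if port else None}

info = parse_service_url("service:wbem:https://cimom.example.com:5989")
```

The client application would use the host and port recovered here to open its own connection to the service, which is exactly the hand-off described in the text.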

Figure 2-2 SLP UA interactions without SLP DA

An SLP UA is not required to discover all matching services that exist in the network, but only enough of them to provide useful results. This restriction is mainly the result of the transmission size limits for UDP packets. These limits can be exceeded when there are many registered services, or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs can recognize truncated service replies and establish TCP connections to retrieve all of the information about the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which can cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs. The SLP process shown in the diagram above can be described as follows:
1. The application needs a specific service and issues a find request.
2. The UA starts a multicast service request.
3. One or more SAs answer, providing service information.
4. The application gets feedback from the UA.
5. The application contacts and uses the service on its own.

Note: TotalStorage Productivity Center acts as a User Agent in terms of SLP communication.
IBM TotalStorage Productivity Center: The Next Generation

Directory Agent
The SLP DA is an optional component of SLP that collects and caches network service broadcasts. The DA is used primarily to simplify SLP administration and to improve SLP performance. You can consider the SLP DA as an intermediate tier in the SLP architecture. It is placed between the UAs and the SAs so that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic in the network. It also protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment. Figure 2-3 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.

Figure 2-3 SLP UA and SA interactions with an SLP DA

When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request. It also specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery. It is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service URLs and attributes. Figure 2-4 on page 22 shows the interactions of UAs and SAs with DAs, during active DA discovery.


Figure 2-4 SLP Directory Agent discovery interactions

The SLP DA functions similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences where DAs provide more functionality than SAs. One, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery on DAs.

The other difference is that when a DA first initializes, it sends a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that may already be active in the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is sometimes called passive DA discovery. When an SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-5 shows the interactions of DAs with SAs and UAs during passive DA discovery.

Figure 2-5 Service Location Protocol passive DA discovery


Why use an SLP DA?


The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. Deploying one or more DAs offers several benefits:

- UAs send unicast service requests to DAs, and SAs register with DAs using unicast. The only SLP-related multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes; consequently, DAs within the UA's scopes reduce multicast traffic. By eliminating multicast for normal UA requests, delays and timeouts are also eliminated.
- DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.
- In networks without multicasting enabled, you can configure SLP to use broadcast. However, broadcast is inefficient, because it requires each host to process the message, and it does not normally propagate across routers. As a solution in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets.

When to use DAs


Use DAs in your enterprise when any of the following conditions are true:

- Multicast SLP traffic exceeds one percent (1%) of the bandwidth on your network, as measured by using a network traffic analyzer.
- UA clients experience long delays or timeouts during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

2.2.2 SLP communication


SLP uses three methods to send messages across an IP network: unicast, broadcast, and multicast. Data can be sent to a single destination (unicast) or to multiple destinations that are listening at the same time (multicast). The difference between a multicast and a broadcast is important: a broadcast addresses all stations in a network, while multicast messages are received only by those stations that have joined the corresponding multicast group.

Unicast
The most common communication method, unicast, requires that the sender of a message identify one and only one target of that message. The target IP address is encoded within the message packet and is used by the routers along the network path to route the packet to the proper destination. If a sender wants to send the same message to multiple recipients, multiple messages must be generated and placed in the network, one message per recipient. When there are many potential recipients for a particular message, this places an unnecessary strain on network resources, because the same data is duplicated many times, with the only difference being the target IP address encoded within each message.


Broadcast
In cases where the same message must be sent to many targets, broadcast is a much better choice than unicast, because it puts much less strain on the network. Broadcasting uses a special IP address, 255.255.255.255, which indicates that the message packet is intended to be sent to all nodes in a network. As a result, the sender of a message needs to generate only a single copy of that message and can still transmit it to multiple recipients, that is, to all members of the network. The routers multiplex the message packet as it is sent along all possible routes in the network to reach all possible destinations. This puts much less strain on network bandwidth, because only a single message stream enters the network, as opposed to one message stream per recipient. However, it puts much more strain on the individual nodes (and routers) in the network, because every node receives the message, even though most nodes are probably not interested in it. Members of the network that were not the intended recipients must still receive the unwanted message and discard it.

Due to this inefficiency, in most network configurations, routers are configured not to forward any broadcast traffic. This means that broadcast messages can reach only nodes on the same subnet as the sender.

Multicast
The ability of SLP to automatically discover services that are available in the network, without a lot of setup or configuration, depends in large part on the use of IP multicasting. IP multicasting is a broad subject in itself, so only a brief, simple overview is provided here.

Multicasting can be thought of as a more sophisticated form of broadcasting that aims to solve some of the inefficiencies inherent in the broadcast mechanism. With multicasting, the sender of a message again has to generate only a single copy of the message, saving network bandwidth. However, unlike broadcasting, not every member of the network receives the message; only those members who have explicitly expressed an interest in the particular multicast stream receive it.

Multicasting introduces a concept called a multicast group, where each multicast group is associated with a specific IP address. A particular network node (host) can join one or more multicast groups, which notifies the associated router or routers that there is an interest in receiving multicast streams for those groups. When a sender, who does not necessarily have to be part of the same group, sends messages to a particular multicast group, the messages are routed only to those subnets that contain members of that multicast group. This avoids flooding the entire network with the message, as is the case for broadcast traffic.

Multicast addresses
The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP addresses, has assigned the old Class D IP address range to be used for IP multicasting. Of this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses are reserved for router management and communication. Some of the 224.0.1.* addresses are reserved for particular standardized multicast applications. Each of the remaining addresses corresponds to a particular general purpose multicast group. The Service Location Protocol uses address 239.255.255.253 for all its multicast traffic. The port number for SLP is 427, for both unicast and multicast.
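As a quick sanity check, Python's standard ipaddress module can confirm that the SLP multicast address falls within the old Class D range described above:

```python
import ipaddress

SLP_MULTICAST_ADDRESS = "239.255.255.253"
SLP_PORT = 427  # used for both unicast and multicast SLP traffic

addr = ipaddress.ip_address(SLP_MULTICAST_ADDRESS)
print(addr.is_multicast)                            # True: 224.0.0.0/4 is multicast
print(addr in ipaddress.ip_network("224.0.0.0/4"))  # True: the old Class D range
```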


SLP messages
In order to be able to provide a framework for service location, SLP agents communicate with each other using eleven different types of messages. The dialog between agents is usually limited to these very simple exchanges of request and reply messages shown in Figure 2-6.

Service Request (SrvRqst): Sent by UAs to SAs and DAs to request the location of a service.
Service Reply (SrvRply): Sent by SAs and DAs in reply to a SrvRqst, containing the location of the requested service.
Service Registration (SrvReg): Sent by SAs to DAs containing information about a service that is available.
Service Deregister (SrvDeReg): Sent by SAs to inform DAs that a service is no longer available.
Service Acknowledge (SrvAck): A generic acknowledgement that is sent by DAs to SAs as a reply to SrvReg and SrvDeReg messages.
Attribute Request (AttrRqst): Sent by UAs to request the attributes of a service.
Attribute Reply (AttrRply): Sent by SAs and DAs in reply to an AttrRqst, containing the list of attributes that were requested.
Service Type Request (SrvTypeRqst): Sent by UAs to SAs and DAs requesting the types of services that are available.
Service Type Reply (SrvTypeRply): Sent by SAs and DAs in reply to a SrvTypeRqst, containing a list of the requested service types.
DA Advertisement (DAAdvert): Sent by DAs to let SAs and UAs know where they are.
SA Advertisement (SAAdvert): Sent by SAs to let UAs know where they are.

Figure 2-6 Dialog syntax used in SLP communications
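For reference, each of these message types is identified on the wire by a numeric function ID. The following mapping reflects the IDs assigned by RFC 2608 for SLPv2:

```python
# SLPv2 message function IDs as assigned by RFC 2608
SLP_FUNCTION_IDS = {
    "SrvRqst": 1, "SrvRply": 2, "SrvReg": 3, "SrvDeReg": 4,
    "SrvAck": 5, "AttrRqst": 6, "AttrRply": 7, "DAAdvert": 8,
    "SrvTypeRqst": 9, "SrvTypeRply": 10, "SAAdvert": 11,
}

# The eleven message types described in the dialog above
assert len(SLP_FUNCTION_IDS) == 11
```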

2.2.3 Configuration recommendations


Ideally, after TotalStorage Productivity Center is installed, it would discover all storage devices that it can reach physically over the IP network. However in most situations, this is not the case. This is primarily the result of the previously mentioned limitations of multicasting and the fact that the majority of routers have multicasting disabled by default. As a result, in most cases without any additional configuration, TotalStorage Productivity Center discovers only those storage devices that reside in its own subnet, but no more. The following sections provide some configuration recommendations to enable TotalStorage Productivity Center to discover a larger set of storage devices.

Router configuration
The majority of the components that allow multicasting to work are implemented in the router operating system software. As a result, it is necessary to configure the routers in the network properly to allow multicasting to work effectively. Unfortunately, there is a dizzying array of protocols and algorithms that can be used to configure particular routers to enable multicasting. The most common ones are:

- Internet Group Management Protocol (IGMP) is used to register individual hosts in particular multicast groups, and to query group membership on particular subnets.
- Distance Vector Multicast Routing Protocol (DVMRP) is a set of routing algorithms that use a technique called Reverse Path Forwarding to decide how multicast packets are to be routed in the network.


- Protocol-Independent Multicast (PIM) comes in two varieties: dense mode (PIM-DM) and sparse mode (PIM-SM). They are optimized for networks where either a large percentage of nodes require multicast traffic (dense) or a small percentage require the traffic (sparse).
- Multicast Open Shortest Path First (MOSPF) is an extension of OSPF, a link-state unicast routing protocol that attempts to find the shortest path between any two networks or subnets to provide the most optimal routing of packets.

The routers of interest are all those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. You can configure the routers in the network to enable multicasting in general, or at least to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427. This is the most generic solution and permits discovery to work the way that it was intended by the designers of SLP. To configure your routers properly for multicasting, refer to your router manufacturer's reference and configuration documentation.

Although older hardware may not support multicasting, all modern routers do. However, in most cases, multicast support is disabled by default, which means that multicast traffic is sent only among the nodes of a subnet and is not forwarded to other subnets. For SLP, this means that service discovery is limited to only those agents that reside in the same subnet.

Firewall configuration
When one or more firewalls sit between TotalStorage Productivity Center and the storage devices that are to be managed, the firewalls need to be configured to pass traffic in both directions, because SLP communication is two-way. For example, when TotalStorage Productivity Center queries an SLP DA that is behind a firewall for its registered services, the response does not use the already opened TCP/IP session; instead, another connection is established in the direction from the SLP DA to TotalStorage Productivity Center. For this reason, port 427 should be opened in both directions; otherwise, the response is not received and TotalStorage Productivity Center does not recognize services offered by this SLP DA.
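A quick way to verify that a firewall passes traffic on the SLP port in one direction is a simple TCP connection test. The sketch below is a generic reachability check; the host name is a placeholder, and note that a successful connection from this side says nothing about the reverse direction, which must be checked from the SLP DA side.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: check the SLP port on a CIMOM host behind the firewall
# port_open("cimom.example.com", 427)
```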

SLP DA configuration
If router configuration is not feasible, another technique is to use SLP DAs to circumvent the multicast limitations. Because all service requests to statically configured DAs are unicast instead of multicast by the UA, you can simply configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet, although more can be configured without harm, perhaps for reasons of fault tolerance. Each of these DAs can discover all services within its own subnet, but no services outside it. To allow TotalStorage Productivity Center to discover all of the devices, you must statically configure it with the addresses of each of these DAs; you accomplish this in the TotalStorage Productivity Center configuration panel. As described previously, TotalStorage Productivity Center unicasts service requests to each of these statically configured DAs, but also multicasts service requests on the local subnet on which it is installed. Figure 2-7 on page 27 displays a sample environment where DAs have been used to bridge the multicast gap between subnets in this manner.
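For OpenSLP-based agents, the static configuration described above typically lives in the slp.conf file. The property names below are from OpenSLP and are shown only as a sketch; the addresses are placeholders, and other SLP implementations may use different configuration mechanisms.

```
# slp.conf sketch (OpenSLP property names; verify against your implementation)

# On the host that should act as a Directory Agent instead of a Service Agent:
net.slp.isDA = true

# On the TotalStorage Productivity Center host, list the statically known DAs:
net.slp.DAAddresses = 192.0.2.10,192.0.2.20
```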


Figure 2-7 Recommended SLP configuration

You can easily configure an SLP DA by changing the configuration of the SLP SA included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA instead. The procedure to perform this configuration is explained in SLP directory agent configuration on page 332. Note that the change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function as normal, sending registration and deregistration commands to the DA directly.

SLP configuration with services outside local subnet


An SLP DA and SA can also be configured to cache CIM service information from non-local subnets. Usually, CIM Agents or CIMOMs have a local SLP SA function. When there is a need to discover CIM services outside the local subnet and the network configuration does not permit the use of an SLP DA in each of them (for example, firewall rules do not allow two-way communication on port 427), remote services can be registered on the SLP DA in the local subnet. This configuration can be done by using slptool, which is part of the SLP installation packages. The registration is not persistent across system restarts. To achieve persistent registration of services outside of the local subnet, these services need to be defined in the registration file used by the SLP DA at startup. Refer to 7.16.5, Creating slp.reg file on page 336 for information about creating the slp.reg file.
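As a sketch, a persistent registration for a remote CIMOM in an slp.reg file might look like the following. This uses the OpenSLP registration-file syntax; the service URL, lifetime, and attribute values are placeholder examples and must match your actual CIMOM.

```
# slp.reg sketch -- one entry per service (OpenSLP syntax)
# Format of the registration line: service-url,language-tag,lifetime[,service-type]
service:wbem:https://192.0.2.50:5989,en,65535
scopes=DEFAULT
description=Remote CIMOM outside the local subnet
```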


2.3 Common Information Model


The CIM Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the common information model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors. CIM uses schemas as a kind of class library to define objects and methods. The schemas can be categorized into three types:

Core schema define classes and relationships of objects. Common schema define common components of systems. Extension schema are the entry point for vendors to implement their own schema.
The CIM/WBEM architecture defines the following elements:

Agent code or CIM Agent: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. The Agent is embedded into a device, which can be hardware or software.

CIM Object Manager: The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or a device provider such as a CIM Agent.

Client application or CIM Client: A storage management program, such as TotalStorage Productivity Center, that initiates CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.

Device or CIM Managed Object: A Managed Object is a hardware or software component that can be managed by a management application by using CIM, for example, an IBM SAN Volume Controller.

Device provider: A device-specific handler that serves as a plug-in for the CIMOM. That is, the CIMOM uses the handler to interface with the device.

Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few devices come with an integrated CIM Agent. Most devices need an external CIMOM for CIM to enable management applications (CIM Clients) to talk to the device. For ease of installation, IBM provides an Integrated Configuration Agent Technology (ICAT), which is a bundle that includes the CIMOM, the device provider, and an SLP SA.

Integrating existing devices into the CIM model


Because these standards are still evolving, we cannot expect that all devices will support the native CIM interface. As a result, the SMI-S is introducing CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to device management models and protocols used by SMI-S. The agent is used for one device and an object manager for a set of devices. This type of operation is also called proxy model and is shown in Figure 2-8 on page 29.


The CIM Agent or CIMOM translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it. In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the Embedded Model in Figure 2-8. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer be required to integrate incompatible feature-poor interfaces into their products. Component developers will no longer be required to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.

Figure 2-8 CIM Agent and Object Manager overview (a CIM Client management application communicates via CIM-XML operations over HTTP with Agents and Object Managers, which talk to devices or subsystems through proprietary interfaces in the proxy model, or are embedded in the device in the embedded model)

CIM Agent implementation


When a CIM Agent implementation is available for a supported device, the device can be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM Director and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-9 on page 30 shows an overview of the CIM Agent.


Figure 2-9 CIM Agent overview

The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server that supports the device user interface.

CIM Object Manager


The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between the catcher and sender use the language and models defined by the SMI-S standard. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions.
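To illustrate the XML-based CIM interactions mentioned above, the following sketch builds and parses a minimal CIM-XML request. The element structure follows the DMTF CIM-XML specification; the class name, namespace, and message ID are example values, and a real client would send this body over HTTP with additional CIMOperation headers.

```python
import xml.etree.ElementTree as ET

# A minimal CIM-XML EnumerateInstances request body (a sketch; the class name,
# namespace, and message ID are example values, not tied to any specific device)
CIM_XML = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
  <SIMPLEREQ>
   <IMETHODCALL NAME="EnumerateInstances">
    <LOCALNAMESPACEPATH>
     <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
    </LOCALNAMESPACEPATH>
    <IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="CIM_StorageVolume"/></IPARAMVALUE>
   </IMETHODCALL>
  </SIMPLEREQ>
 </MESSAGE>
</CIM>
"""

root = ET.fromstring(CIM_XML)
method = root.find(".//IMETHODCALL").get("NAME")  # the intrinsic method being called
```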

2.4 Component interaction


This section provides an overview of the interactions between the different components by using standardized management methods and protocols.

2.4.1 CIMOM discovery with SLP


The SMI-S specification introduces SLP as the method for the management applications (the CIM clients) to locate managed objects. SLP is explained in more detail in 2.2, Service Location Protocol overview on page 18. Figure 2-10 on page 31 shows the interaction between CIMOMs and SLP components.


Figure 2-10 SMI-S extensions to WBEM/CIM (SMI-S adds SLP to the proxy and embedded models of Figure 2-8: SLP SAs in the Agents and Object Managers, a UA in the CIM Client management application, plus Directory Manager (DA) and Lock Manager components)

2.4.2 How CIM Agent works


The CIM Agent typically works as explained in the following sequence and as shown in Figure 2-11 on page 32:
1. The client application locates the CIMOM by calling an SLP directory service.
2. The CIMOM is invoked.
3. The CIMOM registers itself to the SLP and supplies its location, IP address, port number, and the type of service it provides.
4. With this information, the client application starts to communicate directly with the CIMOM.
5. The client application sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request.
6. The CIMOM directs the requests to the appropriate functional component of the CIMOM or to a device provider.
7. The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy client application requests.
8. The results are returned through the CIMOM to the client application.


Figure 2-11 CIM Agent work flow

2.5 Tivoli Common Agent Services


Tivoli Common Agent Services is a new concept with the goal of providing a set of functions for the management of agents that is common to all Tivoli products. See Figure 2-12 on page 33 for an overview of the three elements in the Tivoli Common Agent Services infrastructure. The Agent Manager is the central network element, which, together with the distributed Common Agents, builds an infrastructure that is used by other applications to deploy and manage an agent environment. Each application uses a Resource Manager that is built into the application server (Productivity Center for Data or Productivity Center for Fabric) to integrate into this environment.

Note: You can have multiple Resource Managers of the same type using a single Agent Manager. This might be necessary to scale the environment when, for example, one Data Manager cannot handle the load related to all the agents. However, each individual agent is managed by only one of the Data Managers.


Figure 2-12 Tivoli Common Agent Services

The Common Agent provides the platform for the application-specific agents. Depending on the tasks for which a subagent is used, the Common Agent is installed on application servers, desktop PCs, or notebooks.

Note: In different documentation, Readme files, and directory and file names, you also see the terms Common Endpoint, Endpoint, or simply EP. These always refer to the Common Agent, which is part of the Tivoli Common Agent Services.

The Common Agent talks to the application-specific subagent, to the Agent Manager, and to the Resource Manager, but the actual system-level functions are invoked by the subagent (see Figure 2-13 on page 34). The information that the subagent collects is sent directly to the Resource Manager by using the application's native protocol. This makes it possible to have down-level agents in the same environment as the new agents that are shipped with TotalStorage Productivity Center.

Certificates are used to validate whether a requester is allowed to establish a communication. Demo keys are supplied to set up and configure a small environment quickly. Because every installation CD uses the same certificates, this is not secure. If you want to use Tivoli Common Agent Services in a production environment, we recommend that you use your own keys, which can be created during the Tivoli Agent Manager installation.

One of the most important certificates is stored in the agentTrust.jks file. The certificate can also be created during the installation of Tivoli Agent Manager. If you do not use the demo certificates, you must have this file available during the installation of the Common Agent and the Resource Manager. This file is locked with a password (the agent registration password) to secure the access to the certificates. You can use the ikeyman utility in the java\jre subdirectory to verify your password.

Figure 2-13 Schematic diagram of Agent Manager and its services (the Agent Manager provides registration and recovery services to the Resource Managers in the Data Server and Device Server and to the Common Agent with its Data and Fabric subagents; ports 9511 registration, 9512 authentication, 9513 updates, and port 80, with 9513 as an alternate, for recovery; the IBMCDB database holds the registration of all agents and Resource Managers)
Figure 2-13 is a simplified diagram showing the two most important services that the Agent Manager provides, along with the ports used for these services. A more detailed list of all the ports and their relationships can be found in the Installation and Configuration Guide.

2.5.1 Tivoli Agent Manager


Using the Tivoli Agent Manager requires two conditions to be met before installing it:

- A database to store information in the registry. The only options for the database are a local DB2 database or a remote DB2 database.
- WebSphere Application Server. This is installed as an integral part of the IBM TotalStorage Productivity Center installation. We recommend that you do not install WebSphere Application Server manually.

Three dedicated ports are used by the Agent Manager (9511-9513). Port 9511 is the most important port, because you have to enter this port during the installation of a Resource Manager or Common Agent if you choose to change the defaults.

Attention: When the Tivoli Agent Manager is being installed, make sure that the Microsoft Internet Information Server (IIS) is not running, or, even better, that it is not installed.

Port 80 is used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate with the manager because of lost passwords or certificates. This Agent Recovery Service is located by a DNS entry with the unqualified host name of TivoliAgentRecovery.

During the installation, you also have to specify the agent registration password and the Agent Registration Context Root. The password is stored in the AgentManager.properties file on the Tivoli Agent Manager. This password is also used to lock the agentTrust.jks certificate file.

Important: A detailed description of how to change the password is available in the corresponding Resource Manager Planning and Installation Guide. Because changing the password involves redistributing the agentTrust.jks files to all Common Agents, we encourage you to use your own certificates from the beginning.

To control the access from the Resource Manager to the Common Agent, certificates are used to make sure that only an authorized Resource Manager can install and run code on a computer system. This certificate is stored in the agentTrust.jks file and locked with the agent registration password.

2.5.2 Common Agent


As mentioned earlier, the Common Agent is used as a platform for application-specific agents, which are sometimes called subagents. The subagents can be installed using two different methods:

- Using an application-specific installer
- From a central location, once the Common Agent is installed

When you install the software, the agent has to register with the Tivoli Agent Manager. During this procedure, you must specify the registration port on the manager (by default 9511) and an agent registration password. This registration is performed by the Common Agent, which is installed automatically if it is not already installed. If the subagent is deployed from a central location, port 9510 is used by default by the installer (running on the central machine) to communicate with the Common Agent to download and install the code. When this method is used, no password or certificate is required, because these were already provided during the Common Agent installation on the server. If you chose to use your own certificate during the Tivoli Agent Manager installation, you must supply it for the Common Agent installation.

2.6 Communication in TotalStorage Productivity Center


This section describes the communication methods used by the TotalStorage Productivity Center components.

IBM TotalStorage Productivity Center for Disk


IBM TotalStorage Productivity Center for Disk uses the Storage Management Initiative Specification (SMI-S) standard ( Storage Management Initiative - Specification on page 16) to collect information about subsystems. For devices that are not CIM-ready, this requires the installation of a proxy application (a CIM Agent or CIM Object Manager (CIMOM)). It does not use an agent of its own, as the Data Manager and Fabric Manager do.


IBM TotalStorage Productivity Center for Fabric


IBM TotalStorage Productivity Center for Fabric mainly uses two methods to collect information: inband and outband discovery. You can use either method, or both at the same time to obtain the most complete picture of your environment. Using just one of the methods gives you incomplete information, but topology information is available in both cases. With some vendors and their specific devices, there is even a third path for collecting data, for example the Switch API for Brocade SAN switches.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process:

1. The Agent sends commands through its Host Bus Adapters (HBAs) and the Fibre Channel network to gather information about the switches.
2. The switch returns the information through the Fibre Channel network and the HBA to the Agent.
3. The Agent queries the endpoint devices using the RNID and SCSI protocols.
4. The Agent returns the information to the Manager over the IP network.
5. The Manager responds to the new information by updating the database and redrawing the topology map if necessary.
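Outband discovery rests on SNMP queries carried over the IP network. As an illustration of what such a query looks like on the wire, the following sketch hand-encodes a minimal SNMPv1 GET request in BER (it assumes all lengths stay under 128 bytes); a real deployment would of course use an SNMP library rather than code like this.

```python
def _subids(n):
    """Base-128 encoding of one OID sub-identifier."""
    chunks = [n & 0x7F]
    n >>= 7
    while n:
        chunks.insert(0, (n & 0x7F) | 0x80)
        n >>= 7
    return bytes(chunks)

def _tlv(tag, body):
    # Short-form BER length only: fine while body stays under 128 bytes
    return bytes([tag, len(body)]) + body

def snmp_get_request(community, oid, request_id=1):
    """Encode an SNMPv1 GET request (e.g. for sysDescr.0) as raw bytes."""
    arcs = [int(a) for a in oid.split(".")]
    oid_body = bytes([40 * arcs[0] + arcs[1]]) + b"".join(
        _subids(a) for a in arcs[2:])
    varbind = _tlv(0x30, _tlv(0x06, oid_body) + b"\x05\x00")   # OID + NULL
    pdu = _tlv(0xA0,                                           # GetRequest-PDU
               _tlv(0x02, bytes([request_id])) +               # request-id
               b"\x02\x01\x00" + b"\x02\x01\x00" +             # error-status, error-index
               _tlv(0x30, varbind))                            # varbind list
    return _tlv(0x30,                                          # SNMP message
                b"\x02\x01\x00" +                              # version 0 = SNMPv1
                _tlv(0x04, community.encode()) + pdu)

# A query for sysDescr.0, the kind of attribute an outband scan reads first:
msg = snmp_get_request("public", "1.3.6.1.2.1.1.1.0")
```

Sending these bytes as a UDP datagram to port 161 of a switch and decoding the response is, in essence, what an outband discovery pass does repeatedly.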

IBM TotalStorage Productivity Center for Data


Within IBM TotalStorage Productivity Center, the Data Manager is used to collect information about logical drives, file systems, individual files, database usage, and more. Agents are installed on the application servers and perform a regular scan to report back the information. To report on a subsystem level, an SMI-S interface is also built in. This information is correlated with the data gathered from the agents to show the LUNs that a host is using (an agent must be installed on that host). This correlation is done in the TotalStorage Productivity Center database. In contrast to Productivity Center for Disk and Productivity Center for Replication, the SMI-S interface in Productivity Center for Data is used only to retrieve information, not to configure a device.

Note: The SLP User Agent integrated into the Data Manager uses SLP Directory Agents and Service Agents to find services in the local subnet. To discover CIM Agents in remote networks, they have to be registered to either a Directory Agent or a Service Agent located in the local subnet, unless routers are configured to also route multicast packets. You must manually add each CIM Agent that is not discovered to the Data Manager. Refer to Chapter 7, CIMOM installation and customization on page 239.

2.7 IBM TotalStorage Productivity Center basics


This section reviews the basics of using the TotalStorage Productivity Center.


2.7.1 Collecting data in general


IBM TotalStorage Productivity Center uses multiple techniques to collect data from the various subsystems and devices in your environment. Figure 2-14 presents the various paths along which data reaches TotalStorage Productivity Center:

- CIM Agents
- The IBM TotalStorage Productivity Center Data and Fabric Agents
- SNMP traps and events
- Storage Management Initiative - Specification (SMI-S) queries
- API calls through vendor-specific product APIs

Figure 2-14 Data paths within the TotalStorage Productivity Center environment

These data paths can be used on demand to collect data; in that case, the queries are triggered by manual interaction with the system. In general, however, you will set up your TotalStorage Productivity Center environment to perform data collection at intervals you define. Each time these jobs run, another record of information is stored in the associated database tables. This continuously gathered information is the basis for all reports, statistics, performance charts, and other output that TotalStorage Productivity Center can produce. The frequency of this data collection is a critical factor for the timeliness of the data displayed in other elements of the GUI. This leads to a crucial point: elements shown in the explorer - the elements you might want to perform a specific action on - might be missing until the next data collection runs and updates the database contents. The Topology Viewer looks up information in the database instead of collecting it from the managed systems. In principle, only the scheduled jobs, scans, or probes update the database contents.


There are only two exceptions:

- A subsystem can issue an indication to its associated CIM Agent to have it update specific information in the database (for example, the addition of a newly created LUN).
- A device managed through SNMP (for example, a switch) can send a trap indicating a change of which TotalStorage Productivity Center should be notified.

In this version of the product, the information is stored in a single database. There is a second database for the Tivoli Agent Manager, which is described later in this book.
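The append-and-read pattern described above - every scheduled run adds another record, and viewers read from the database rather than from the live device - can be sketched as follows. This is illustrative only; TPC's real repository is DB2 with its own schema, and the table and column names here are invented.

```python
import sqlite3
import time

def open_repository():
    """In-memory stand-in for the TPC repository database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE probe_samples (ts REAL, device TEXT, "
                 "metric TEXT, value REAL)")
    return conn

def record_sample(conn, device, metric, value, ts=None):
    """One scheduled probe run appends one more record; nothing is overwritten."""
    conn.execute("INSERT INTO probe_samples VALUES (?, ?, ?, ?)",
                 (ts if ts is not None else time.time(), device, metric, value))

def latest_sample(conn, device, metric):
    """What a viewer shows: the newest stored record, not a live query."""
    row = conn.execute(
        "SELECT value FROM probe_samples WHERE device=? AND metric=? "
        "ORDER BY ts DESC LIMIT 1", (device, metric)).fetchone()
    return row[0] if row else None

repo = open_repository()
record_sample(repo, "DS8000-1", "lun_count", 42, ts=1.0)
record_sample(repo, "DS8000-1", "lun_count", 43, ts=2.0)  # next scheduled run
```

Until the second run has happened, a viewer built on `latest_sample` keeps showing the older value - the same staleness effect described for the Topology Viewer.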

Data correlation within IBM TotalStorage Productivity Center


The design of the database and its tables is the basis for the information you can derive from IBM TotalStorage Productivity Center. The logical relationships - in terms of how a storage infrastructure functions - between single entries in the database make it possible to tell the condition and state of the subsystems and their counterparts to the left and to the right.

Startup of the main user interface


After installation of the product, the user is presented with the main GUI. The GUI holds several containers, which are filled with items related to the following tasks. The next paragraphs explain these tasks briefly; more details are provided later in this redbook.

Figure 2-15 The main GUI window


Discovery
Discovery is the first task to be performed after the infrastructure is completely rolled out. Discovery is the process of using the established data paths to see which devices are out there - that is, to see whether you can reach all your installed storage subsystems. The discovery should be repeated after you have installed new resources in your environment. A successful discovery involves several protocols and standards, which are explained later in this chapter. To name only a few, you must have properly set up your IP, SLP, and SNMP infrastructure so that protocols such as SLP, WBEM, and SMI-S can find your hardware devices. The following table gives an overview of the various types of discovery.

Type of discovery | Data source | Top-level entities
CIMOM | CIMOM | Fabric (switches), Storage subsystems, Tape subsystems
Out-of-band | SNMP agents on the switches | Fabric (switches). A fabric discovery gets all the fabric information.
NetWare Filer | NetWare agent | NetWare trees
Windows domain, NAS, and SAN FS | Data agent | Computers, Files, Clusters, Names or agents

Probing
Probing is the process of gathering more detailed information from within the subsystems that have been discovered. The goal is to get statistical and asset information about components such as disk controllers, hard disks, clusters, fabrics, storage subsystems, LUNs, and so forth. When you perform your probes on a regular basis, you become aware of changes in your environment, such as the addition or deletion of LUNs on a storage subsystem.

Reporting
Within IBM TotalStorage Productivity Center there are several reporting categories. Some dozens of reports are predefined in the areas of Data, Disk, and Fabric, and the reports can be exported to formats such as CSV, HTML, and PDF. The scope of these reports includes:

- Asset reporting
- Availability reporting
- Capacity reporting
- Usage reporting
- Usage violation reporting
- Backup reporting

In addition to the predefined reports, the user can define and customize reports. It is important to understand that every user-defined report is based on the standard definitions built into the predefined reports. User-defined reports inherit most of their definitions, but every aspect can be changed. Each user-defined report can be scheduled to run on a regular basis.
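A report is ultimately a set of rows that can be rendered in one of the export formats; the CSV case can be sketched like this (the column names and sample subsystems are made up for illustration):

```python
import csv
import io

def export_capacity_report(rows, stream):
    """Write capacity-report rows (dicts) to a stream as CSV."""
    writer = csv.DictWriter(stream,
                            fieldnames=["subsystem", "total_gb", "used_gb"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

report = [
    {"subsystem": "ESS-1", "total_gb": 1024, "used_gb": 700},
    {"subsystem": "SVC-1", "total_gb": 2048, "used_gb": 512},
]
buf = io.StringIO()
export_capacity_report(report, buf)
```

The same row set could just as easily be rendered to HTML or PDF by a different writer, which is why one report definition can feed several export formats.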


Another important concept in IBM TotalStorage Productivity Center in the context of reporting is grouping. The administrator can define reports, monitoring jobs, and scheduled scans so that they run against predefined or self-defined groups. A group is a collection of entities of the same type. The concept of grouping appears in several other areas of IBM TotalStorage Productivity Center.

Monitoring
Monitoring can be broken up into three main tasks, depending on the area of TPC in which you are working:

- Pings: the simplest way to test connectivity to devices
- Scans: used, for example, to collect data from file systems on a computer
- Probes: the most intensive queries, used to collect data from subsystems

Profiles
Profiles are a means to control what the contents of your scan should be. There are many predefined profiles, which you can modify, and you can also create your own.

Alerting
While most functions in IBM TotalStorage Productivity Center can be described as active, meaning they interact directly with the storage devices, computers, or SAN devices, you as the administrator can also define what kind of action should be performed to raise an alarm. The possibilities range from plain logging, SNMP traps, and e-mail to issuing a TEC event, among others.
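The fan-out from one alert condition to several destinations can be pictured as a simple dispatcher. The destination names below mirror the list above; the handler bodies are placeholders, not the product's real notification code.

```python
def make_dispatcher():
    log = []  # stands in for the TPC alert log
    handlers = {
        "log": lambda alert: log.append(alert),
        "snmp_trap": lambda alert: log.append("TRAP: " + alert),  # placeholder
        "email": lambda alert: log.append("MAIL: " + alert),      # placeholder
    }

    def raise_alert(alert, destinations):
        """Deliver one alert condition to every configured destination."""
        for dest in destinations:
            handlers[dest](alert)

    return raise_alert, log

raise_alert, log = make_dispatcher()
raise_alert("filesystem 90% full", ["log", "email"])
```

Adding a TEC-event destination would simply mean registering one more handler; the alert definition itself does not change.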

Policy Management
The widespread capabilities you can achieve with policies include:

- Quota definitions
- Definitions of which file types, sizes, and so forth are acceptable (constraint definitions)
- Automation of backup/archive operations on identified files
- File system extension on file systems exceeding the specifications

2.7.2 Role-based Administration


Role-based administration is implemented so that the different tasks within IBM TotalStorage Productivity Center can be assigned to different groups or people in your company. IBM TotalStorage Productivity Center has its own, strictly defined set of roles, which you map to groups established on the operating system side of your server. Table 2-1 lists the different roles and their levels of authorization.
Table 2-1 Role-based administration: roles versus level of authorization

Role | Authorization level
Superuser | Has full access to all IBM TotalStorage Productivity Center functions
Productivity Center Administrator | Has full access to operations in the Administration section of the GUI
Disk Administrator | Has full access to IBM TotalStorage Productivity Center disk functions
Disk Operator | Has access to reports only for IBM TotalStorage Productivity Center disk functions; this includes reports on tape devices
Fabric Administrator | Has full access to IBM TotalStorage Productivity Center for Fabric functions
Fabric Operator | Has access to reports only for IBM TotalStorage Productivity Center for Fabric functions
Data Administrator | Has full access to IBM TotalStorage Productivity Center for Data functions
Data Operator | Has access to reports only for Data functions

1. The following is an example of role mappings. In the TPC GUI, navigate to the Role-to-Group Mappings view, shown in Figure 2-16.

Figure 2-16 Role-to-Group Mappings Dialog

In this panel we see an entry for each of the roles listed in the table above. The administrator has to gather the users into specific groups within the operating system where the TPC server is running.

2. Navigate to your Windows user administration, create the groups, and populate them with the users you want in these groups. Figure 2-17 on page 42 gives an example of how entering these groups looks. The other supported platforms use their own tools to administer users and groups, but the process is the same.
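The result of these steps is essentially a lookup from TPC role to operating system group. The authorization check can be sketched as follows; the group names are invented examples, not the ones from the figures.

```python
ROLE_TO_GROUP = {
    # TPC role -> OS group (hypothetical example names)
    "Superuser": "TPCSuperusers",
    "Disk Administrator": "TPCDiskAdmins",
    "Data Operator": "TPCDataOps",
}

def user_has_role(user_groups, role):
    """A user holds a TPC role when one of their OS groups is mapped to it."""
    return ROLE_TO_GROUP.get(role) in user_groups

# A user placed in the OS group "TPCDataOps" acquires the Data Operator role;
# membership is managed entirely in the operating system, not inside TPC.
```

This is why the mapping step matters: TPC never stores users of its own, only the role-to-group table, and the operating system decides group membership.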


Figure 2-17 Windows User and Group Administration

3. The group name that you used is then entered in the Role-to-Group Mappings view, shown in Figure 2-18.

Figure 2-18 Edit the Group name for the Data Operators sample


Chapter 3.

Installation planning and


considerations
IBM TotalStorage Productivity Center is made up of several products, which can be installed individually, as a complete suite, or in any combination in between. By installing multiple products, a synergy is created, allowing the products to interact with each other to provide a more complete solution to help you meet your business storage management objectives. This chapter contains the information that you need to gather before beginning the installation. It also discusses the supported environments and preinstallation tasks.

Copyright IBM Corp. 2006. All rights reserved.


3.1 Configuration
You can install the components of IBM TotalStorage Productivity Center on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all components are installed on the same system, the only common platforms for the managers are:

- AIX 5.3
- Red Hat Enterprise Linux v3.0 (Intel only)
- Windows 2003 Server
- Windows 2003 Enterprise Server

Note: Refer to the following Web site for updated support summaries, including the specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html

3.2 Installation prerequisites


This section lists the minimum prerequisites for installing IBM TotalStorage Productivity Center.

3.2.1 Hardware
The following hardware is required.

For all servers:
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Disk (optional)
- Up to 80 GB of free space for databases

For Intel servers running Microsoft Windows 2003 Server or Microsoft Windows 2003 Enterprise Server:
- Dual Pentium 4 or Intel Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- 4 GB of available disk space

For Intel servers running Red Hat Enterprise Linux v3.0:
- Dual Pentium 4 or Intel Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- 4 GB of available disk space
- 500 MB free space in /tmp
- 1.3 GB free space in /opt

For IBM pSeries servers:
- 1 GHz or faster processor
- 4 GB of DRAM
- 500 MB free space in /tmp
- 1.3 GB free space in /opt


3.2.2 Database
This section describes the database requirements. As we write this section, we referenced the following Web site:
http://www-1.ibm.com/support/docview.wss?rs=1133&uid=ssg1S1002813

For the Agent Manager repository: IBM DB2 UDB v8.2 with FixPak 7a (or higher) is supported for local and remote installation.

For the IBM TotalStorage Productivity Center server repository: IBM DB2 UDB v8.2 with FixPak 7a (or higher) is supported.

The Data Agent can monitor these databases:
- IBM DB2 UDB v7.2, 8.1, and 8.2
- Microsoft SQL Server 7.0 and 2000
- Oracle 8i, 9i, and 10g
- Sybase

Note: We recommend using IBM DB2 UDB V8.2 with Fixpak 7a (or higher) as the single repository for the Agent Manager as well as the TotalStorage Productivity Center.

Default databases created during the installation


In TotalStorage Productivity Center V3.1, only two default databases are created during the installation process: IBMCDB (the Agent Manager repository) and TPCDB (the TotalStorage Productivity Center server repository). It is possible to use only one database, IBMCDB, for both repositories, but this configuration is not recommended.

Note: For a description of our ITSO TotalStorage Productivity Center lab environment, refer to 8.1, Infrastructure summary on page 340.

3.3 Preinstallation check list


You must complete the following tasks before installing the TotalStorage Productivity Center:

1. Print the tables in Appendix A, Worksheets on page 519, to keep track of the information you will need during the installation, such as user names, ports, IP addresses, and the locations of servers and managed devices.

2. Determine which elements of the TotalStorage Productivity Center you will install.

3. For installations in a Windows environment, either disable or uninstall Internet Information Services (if it is already installed) from the servers on which the Agent Manager and TotalStorage Productivity Center server software will be installed. In our environment, we uninstalled IIS. For detailed information, refer to 4.5.3, Internet Information Services on page 65.

4. Grant the following privileges to the Windows user account that will be used to install the TotalStorage Productivity Center:
- Act as part of the operating system
- Create a token object
- Adjust memory quotas for a process
- Replace a process-level token
- Log on as a service


5. Identify any firewalls and obtain the required authorization to pass network traffic through them. 6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.

3.3.1 TCP/IP ports used


Table 3-1 lists the TCP/IP ports used by the Agent Manager component. You can change any of the port numbers except for port 80, which is used by the agent recovery service. If there is a firewall between the Agent Manager and the agents and resource managers in your deployment, you must open the Agent Manager ports for inbound TCP traffic to the Agent Manager.
Table 3-1 TCP/IP ports used by the Agent Manager

Port | Use | Connection security
9511 | Registering agents | Secure SSL
9512 | Registering resource managers; providing configuration updates; renewing and revoking certificates; querying the registry for agent information; requesting ID resets; requesting updates to the certificate revocation list | Secure SSL with client authentication
9513 | Requesting Agent Manager information; downloading the truststore file; alternate port for the agent recovery service | This is a public port and is insecure
80 | Recovery Service (optional) | Insecure

Table 3-2 lists the ports for the different components of IBM TotalStorage Productivity Center.
Table 3-2 TCP/IP ports used by IBM TotalStorage Productivity Center

Component | Firewall port | Notes
Data server | 9549 |
Device server | 9550 |
Common Agent | 9510 |
Agent Manager | 9511, 9512, 9513 |
Common Agent | 9514, 9515 | Local to the server; no external access needed
Agent Manager Recovery Service | 80 |
Agent deployment (PUSH) to UNIX | SSH (22), RSH (514), REXEC (512), 601 |
Agent deployment (PUSH) to Windows | NetBIOS session service (139) |
Agent deployment (PUSH), all platforms | High ports 3000+, TPCD server (2078) |
SLP | 427 |
SNMP listener port | 162 |

You can find the port numbers in use on your system by running the netstat -a or netstat -ano command (the latter also shows the PID using each port), as shown in Figure 3-1 on page 48 and Figure 3-2 on page 49.
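When verifying the port assignments above across many servers, the netstat -ano output can also be post-processed. This sketch parses the Windows-style output into a port-to-PID map; the sample lines are invented for illustration.

```python
def listening_pids(netstat_output):
    """Map listening local TCP ports to owning PIDs from `netstat -ano` output."""
    ports = {}
    for line in netstat_output.splitlines():
        fields = line.split()
        # Windows format: Proto  Local-Address  Foreign-Address  State  PID
        if len(fields) == 5 and fields[0] == "TCP" and fields[3] == "LISTENING":
            port = int(fields[1].rsplit(":", 1)[1])
            ports[port] = int(fields[4])
    return ports

sample = """\
  TCP    0.0.0.0:9511    0.0.0.0:0    LISTENING    1044
  TCP    0.0.0.0:9549    0.0.0.0:0    LISTENING    2236
  TCP    127.0.0.1:3920  1.2.3.4:80   ESTABLISHED  512
"""
pids = listening_pids(sample)
```

Comparing such a map against Table 3-1 and Table 3-2 quickly shows whether, for example, the Agent Manager registration port is held by the expected process.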


Figure 3-1 netstat -a sample


Figure 3-2 netstat -ano sample

3.4 User IDs and security


This section discusses the user IDs that are used during the installation and those that are used to manage and work with TotalStorage Productivity Center. It also explains how you can increase the basic security of the different components.


3.4.1 User IDs


This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment.

Granting privileges
Grant privileges to the user ID used to install IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, and IBM TotalStorage Productivity Center for Fabric. These user rights are governed by the local security policy and are not initially set as defaults for administrators; they might not be in effect when you log on as the local administrator. If the TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can optionally set them by updating the local security policy settings. Alternatively, you can set them manually prior to performing the installation. To set these privileges manually, follow these steps:

1. Click Start -> Settings -> Control Panel.
2. Double-click Administrative Tools.
3. Double-click Local Security Policy.
4. The Local Security Settings window opens. Expand Local Policies, then double-click User Rights Assignments to see the policies in effect on your system. For each policy to be added to the user, perform the following steps:
   a. Highlight the policy to be selected.
   b. Double-click the policy and look for the user's name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are selected.
   c. If the user name does not appear in the list for the policy, add the policy to the user:
      i. In the Local Security Policy Setting window, click Add.
      ii. In the Select Users or Groups window, under the Name column, highlight the user or group.
      iii. Click Add to place the name in the lower window.
      iv. Click OK to add the policy to the user or group.
5. After you set these user rights, either by using the installation program or manually, log off the system and then log on again for the user rights to take effect.
6. Restart the installation program to continue with the TotalStorage Productivity Center installation.

3.4.2 Increasing user security


The goal of increasing security is to have multiple roles available for the various tasks that can be performed. Each role is associated with a certain group, and users are added only to those groups they need to be part of to do their work. Not all components offer a way to increase security, and some methods require a degree of knowledge about the specific components to perform the configuration successfully.


Microsoft Windows domain


If you are installing the TotalStorage Productivity Center in a Microsoft Windows environment, it does not support the following Microsoft Windows domain login formats for logging in to the server component:

- (domain name)\(username)
- (username)@(domain)

Because TotalStorage Productivity Center does not support these formats, you must set up users in a domain account that can log in to the server. Perform the following steps before you install Productivity Center for Data in your environment:

1. Create a local Admin group.
2. Create a domain global group.
3. Add the domain global group to the local Admin group.

Productivity Center for Data looks up the SID for the domain user when the login occurs. You only need to specify a user name and password.

3.4.3 Certificates and key files


A TotalStorage Productivity Center environment uses certificates to ensure a high level of security. The security certificates can be generated during the installation of the Agent Manager component. During the installation, key files can be generated as self-signed certificates, but you must enter a password for each file to lock it. Although you can leave the password for the Certificate Authority file blank, we do not recommend it: if you leave this password blank, a random password is generated during the installation process, and you will then be unable to unlock the Certificate Authority file. We therefore recommend that you enter a password for this file, which allows you to unlock the Certificate Authority file later if needed. The default file names are:

- CARootKeyRing.jks (the Certificate Authority file)
- agentTrust.jks (the Common Agent registration file)

The default directory for the key files on the Agent Manager in a Windows environment is C:\Program Files\IBM\AgentManager\certs. In a UNIX or Linux environment, the default directory is /opt/IBM/AgentManager/certs.

Agent Manager certificates


The Agent Manager comes with demonstration certificates that you can use. However, you can also create new certificates during the installation of the Agent Manager (see Figure 4-37 on page 88), which is the recommended approach. If you choose to create new files, the password that you enter as the agent registration password on the panel shown in Figure 4-38 on page 89 is used to lock the agentTrust.jks key file. The default directory for that key file on the Agent Manager is C:\Program Files\IBM\AgentManager\certs. There are more key files in that directory, but during the installation and first steps, the agentTrust.jks file is the most important one. This only applies if you allow the installer to create your keys.

3.4.4 Services and service accounts


The managers and components that belong to the TotalStorage Productivity Center are started as Windows Services. Table 3-3 on page 52 provides an overview of the most important services. To keep it simple, we did not include all the DB2 services in the table.

Table 3-3 Services and service accounts

Element | Service name | Service account | Comment
DB2 | (several DB2 services) | db2admin | The account needs to be part of Administrators and DB2ADMNS.
Agent Manager | IBM WebSphere Application Server V5 - Tivoli Agent Manager | LocalSystem | You need to set this service to start automatically after the installation.
Common Agent | IBM Tivoli Common Agent - C:\Program Files\tivoli\ep | itcauser |
Productivity Center for Data | IBM TotalStorage Productivity Center for Data server | TSRMsrv1 |
Productivity Center for Fabric | IBM WebSphere Application Server V5 - Fabric Manager | LocalSystem |
3.5 Starting and stopping the managers


To start, stop, or restart one of the managers or components, use the services applet in the Windows Control Panel. Table 3-4 lists the services.
Table 3-4 Services used for TotalStorage Productivity Center

Element | Service name | Service account
DB2 | Note that there are several services for DB2 | db2admin
Agent Manager | IBM WebSphere Application Server V5 - Tivoli Agent Manager | LocalSystem
Common Agent | IBM Tivoli Common Agent - C:\Program Files\tivoli\ca | itcauser
Productivity Center for Data | IBM TotalStorage Productivity Center Data Server | TSRMsrv1
Productivity Center for Fabric | IBM WebSphere Application Server V5 - Device Server | LocalSystem

3.6 Server recommendations


The IBM TotalStorage Productivity Center for Data server component acts as a traffic officer, directing information and handling requests from the agent and UI components installed within an environment. You need to install at least one server within your environment. We recommend that you do not manage more than 1000 agents with a single server; if you need to install more than 1000 agents, install an additional server for those agents to maintain optimal performance.


3.7 Supported subsystems, devices, filesystems, databases


This section contains the subsystems, tape libraries, file system formats, and databases that the TotalStorage Productivity Center supports.

3.7.1 Storage subsystem support


IBM TotalStorage Productivity Center V3.1 supports IBM and third-party disk systems that are Storage Management Interface Specification (SMI-S) 1.0.2 or SMI-S 1.1 compatible. This support includes storage provisioning, as well as asset and capacity reporting. TotalStorage Productivity Center V3.1 implements many of its disk, tape, and fabric management functions through exploitation of the SMI-S 1.0.2 and 1.1 levels of the standard. SMI-S 1.1 supports all of the functions of SMI-S 1.0.2 plus additional functionality (for example, performance management). These systems include, but are not limited to:

- IBM TotalStorage SAN Volume Controller (SVC)
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Disk Subsystems (DS4000, DS6000, and DS8000 Series)

3.7.2 Tape library support


IBM TotalStorage Productivity Center 3.1 supports the following tape libraries:

- IBM 3494
- IBM 3584

3.7.3 File system support


Data Manager supports the monitoring and reporting of the following file systems:

- AIX_JFS
- AIX_JFS2
- AIX_OLD
- EXT2
- EXT3
- FAKE
- FAT
- FAT32
- HP_HFS
- NF
- NTFS4
- NTFS5
- NetWare FAT
- NetWare NSS
- REISERFS
- IBM TotalStorage SAN File System (SFS)
- TMPFS
- UFS
- VERITAS File System (VxFS)
- WAFL


3.7.4 Network File System support


Data Manager currently supports the monitoring and reporting of the following Network File Systems (NFS):
- IBM TotalStorage SAN File System 1.0 (Version 1 Release 1), from AIX V5.1 (32-bit) and Windows 2000 Server/Advanced Server clients
- IBM TotalStorage SAN File System 2.1 and 2.2, from AIX V5.1 (32-bit), Windows 2000 Server/Advanced Server, Red Hat Enterprise Linux 3.0 Advanced Server, and Sun Solaris 9 clients
- IBM General Parallel File System (GPFS) v2.1 and v2.2

3.7.5 Database support


Data Manager currently supports the monitoring of the following relational database management systems (RDBMS):
- IBM DB2 UDB v7.2, 8.1, and 8.2
- Microsoft SQL Server 7.0 and 2000
- Oracle 8i, 9i, and 10g

3.8 Security considerations


This section describes the security issues that you must consider when installing Data Manager.

Role-based levels of users


There are essentially two levels of users within TotalStorage Productivity Center: an operator and an administrator for each of the specific work areas in IBM TotalStorage Productivity Center (Data, Fabric, Disk, and Tape). Besides these two roles, there is a general TotalStorage Productivity Center administrator who can work in all four areas as an administrator. On top of that is a so-called superuser, which is the standard user you work with immediately after installing the product. The level of these users determines how they can use IBM TotalStorage Productivity Center.
Operator users can:
- View the data collected by TotalStorage Productivity Center.
- Create, generate, and save reports.
Administrator users can:
- Create, modify, and schedule Pings, Probes, and Scans.
- Create, generate, and save reports.
- Perform administrative tasks and customize the TotalStorage Productivity Center environment.
- Create Groups, Profiles, Quotas, and Constraints.
- Set alerts.
Important: Security is set up by using certificates. You can use the demonstration certificates or you can generate new certificates. We recommend that you generate new certificates when you install the Agent Manager.


Chapter 4. TotalStorage Productivity Center installation on Windows 2003


In this chapter we describe the installation of the TotalStorage Productivity Center components and the remote GUI on Microsoft Windows 2003. This includes the installation of the prerequisite components, DB2 and Agent Manager. Of the available installation paths, Typical and Custom, we used the Custom installation for our environment.

Copyright IBM Corp. 2006. All rights reserved.


4.1 TotalStorage Productivity Center installation


IBM TotalStorage Productivity Center uses an install wizard that guides you through the installation of the TotalStorage Productivity Center servers and agents. Figure 4-1 describes the install of the TotalStorage Productivity Center Standard Edition. The prerequisite components are installed prior to invoking the installation wizard.

Figure 4-1 positions TPC V3.1 Standard Edition (Disk, Data, Fabric) as a single application with modular components: streamlined installation and packaging; a single user interface for file system, database, fabric, and storage management with asset, capacity, and performance reporting; a single database with correlated host, fabric, and storage information; consistent scheduled and ad hoc reporting with HTML and CSV data export; and a single set of services for consistent administration and operations (policy definitions, event handling, and resource groups). New in this release: the storage topology viewer, role-based administration, AIX support for the TPC server, fabric performance management, tape discovery with asset and capacity reporting, and performance management for the DS4000. TPC Standard Edition is available as the full suite, or its components are orderable separately.

Figure 4-1 TotalStorage Productivity Center Standard Edition installation

IBM TotalStorage Productivity Center provides an integrated storage infrastructure management solution that is designed to allow you to manage every point of your storage infrastructure, from the hosts, through the network and fabric, to the physical disks. It can help simplify and automate the management of devices, data, and storage networks. IBM TotalStorage Productivity Center V3.1 offers a simple, easy-to-install package with management server support added for IBM AIX V5.3, and integrates IBM DB2 as the management server database. The default installation directory is:
- c:\Program Files\IBM\... (for Windows)
- /opt/IBM/... (for UNIX and Linux)
You can change this path during installation setup. There are two types of installation: Typical and Custom.

4.1.1 Typical installation


The Typical installation allows you to install all the components of the TotalStorage Productivity Center on the local server by selecting the options Servers, Agents, and Clients. We recommend that you do not use the Typical installation, because the Custom installation method gives you much better control over the installation process.


4.1.2 Custom installation


The Custom installation allows you to install each component of the TotalStorage Productivity Center separately and deploy remote Fabric and/or Data agents on different computers. This is the installation method that we recommend.

4.1.3 CD layout and components


In this section we describe the contents of the product CDs at the time of writing. The actual layout of your CDs might be different. The purpose of this section is to give you an idea of the CD content. You can install the following components from the different CDs:

CD1 (OS: Windows, AIX, and Linux RH 3):
- Database Schema
- Data Server
- Device Server
- GUI
- CLI
- Local Data Agent
- Local Device Agent
- Remote installation of Device Agent

CD2 (OS: Windows, AIX, Linux RH 3, Linux Power, Linux s390 (zLinux), Solaris, HP-UX):
- Local Data Agent
- Local Device Agent
- Remote installation of Data Agent
- Remote installation of Device Agent

CD3:
- Data upgrade for all platforms

For information about the deployment of agents, refer to Chapter 6, Agent deployment on page 195.

4.2 Configuration
The TotalStorage Productivity Center components are:
- Data server
- Device server
- Database
- Agents
- GUI
- CLI
There are two supported environments: a one-server and a two-server environment. You must install the Data Server and Device Server on one computer.


4.2.1 One-server environment


In this environment, all the components are installed on one server:
- DB2
- Agent Manager
- Data Server
- Device Server
- CLI
- GUI

4.2.2 Two-server environment


In a two-server environment, you install the components as follows. Note that you can have DB2 on a remote server.
Server 1:
- DB2
- Agent Manager
Server 2:
- DB2
- Data Server
- Device Server
- GUI
- CLI

4.3 Hardware prerequisites


This section lists the minimum hardware prerequisites. In the ITSO we installed all TotalStorage Productivity Center components on a single computer. For other possible configurations, refer to 4.2, Configuration on page 57.

4.3.1 Hardware
For Windows and Linux on Intel, on IBM eServer xSeries servers or other Intel technology-compatible platforms, the hardware requirements are:
- Dual Pentium 4 or Xeon 2.4 GHz or faster processors
- 4 GB of RAM
- Network connectivity
For AIX on IBM eServer iSeries and IBM eServer pSeries:
- Minimum 1.0 GHz processor
- 4 GB of RAM
- Network connectivity

4.3.2 Disk space


For Windows, you need 4 GB of available disk space for the code and up to approximately 80 GB of hard disk space for databases. For UNIX or Linux, you need 500 MB in /tmp, 1.3 GB in /opt, and up to approximately 80 GB of hard disk space for databases.


4.4 Software prerequisites


Table 4-1 shows the platforms supported by TotalStorage Productivity Center for server components.
Table 4-1 Platform support for Data server, Device server, GUI and Agent Manager
Columns: Platform | Data server, Device server, database schema (DB2), CLI | GUI | Agent Manager v1.2 (install fix pack 2; you can optionally use the Agent Manager you installed with v2.3)

Windows 2000 Advanced Server Windows 2000 Datacenter Windows Server 2003 Standard Edition 32-bit or 64-bit Windows Server 2003 Enterprise Edition 32-bit or 64-bit Red Hat Enterprise Linux AS Version 3.0 IBM eServer xSeries Red Hat Enterprise Linux AS Version 3.0 IBM eServer pSeries on POWER5 Red Hat Enterprise Linux AS Version 3.0 IBM eServer iSeries on POWER5 United Linux 1.0 IBM eServer xSeries United Linux 1.0 IBM eServer zSeries SUSE LINUX Enterprise Server 8 IBM eServer pSeries on POWER4 zSeries

NO NO YES

NO NO YES

YES NO YES

YES

YES

YES

YES

YES

YES

NO

NO

NO

NO

NO

NO

NO NO NO

NO NO NO

NO NO NO

SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5, iSeries on POWER5, zSeries (Data agent only) IBM AIX 5.1 (32-bit) IBM AIX 5.1 (64-bit) IBM AIX 5L (32-bit) IBM AIX 5L (64-bit) IBM AIX 5.2 (32-bit) IBM AIX 5.2 (64-bit) IBM AIX 5.3 (32-bit)

NO

NO

NO

NO NO NO NO NO NO YES (with AIX 5300-01 maintenance level and APAR IY70336) NO NO NO

NO NO NO NO NO NO YES (with AIX 5300-01 maintenance level and APAR IY70336) NO NO NO

YES YES YES YES YES YES YES (32- and 64-bit)

Solaris 8 Solaris 9 HP-UX 11 and 11i

YES YES NO

Table 4-2 shows the platforms supported to install and deploy Data agents and Fabric agents. Important: Data agents and Fabric agents at the 2.x version are supported by IBM TotalStorage Productivity Center V3.1 managers.
Table 4-2 Platform support for Data agent and Fabric agent

Windows 2000 Advanced Server: YES
Windows 2000 Datacenter: YES
Windows Server 2003 Standard Edition 32-bit or 64-bit: YES
Windows Server 2003 Enterprise Edition 32-bit or 64-bit: YES
Red Hat Enterprise Linux AS Version 3.0 IBM eServer xSeries: YES
Red Hat Enterprise Linux AS Version 3.0 IBM eServer pSeries on POWER5: Only Data agent
Red Hat Enterprise Linux AS Version 3.0 IBM eServer iSeries on POWER5: Only Data agent
United Linux 1.0 IBM eServer xSeries: Only Data agent
United Linux 1.0 IBM eServer zSeries: Only Data agent
SUSE LINUX Enterprise Server 8 IBM eServer pSeries on POWER4: Only Data agent
SUSE LINUX Enterprise Server 8 IBM eServer zSeries: Only Data agent
SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5, iSeries on POWER5, zSeries (Data agent only): Data agent (ALL); Fabric agent (xSeries only)
IBM AIX 5.1 (32-bit): YES with AIX 5100-05 maintenance level
IBM AIX 5.1 (64-bit): YES in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5L (32-bit): YES in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5L (64-bit): YES in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5.2 (32-bit): YES with AIX 5200-02 maintenance level
IBM AIX 5.2 (64-bit): YES in compatibility mode with AIX 5200-02 maintenance level
IBM AIX 5.3 (32-bit): YES with AIX 5300-01 maintenance level and APAR IY70336
Solaris 8: YES
Solaris 9: YES
HP-UX 11 and 11i: Only Data agent

4.4.1 Databases supported


At the time of writing, we used the information on the following Web site as a reference:
http://www-1.ibm.com/support/docview.wss?rs=1133&uid=ssg1S1002813


The Agent Manager repository is supported only on the following database:
- DB2 Enterprise Server Edition version 8.2 with fix pack 7a or higher
Only one database instance is created for IBM TotalStorage Productivity Center on DB2. The default database name for the repository is TPCDB. The Agent Manager repository uses its own database; the default name of this database is IBMCDB.
The Data agent can monitor these databases:
- DB2 7.2 (32-bit only)
- DB2 8.1 (32-bit or 64-bit)
- DB2 8.2 (32-bit or 64-bit)
- Microsoft SQL Server 7.0
- Microsoft SQL Server 2000
- Oracle 8i
- Oracle 9i
- Oracle 10g
Sybase support is not included in TotalStorage Productivity Center V3.1.
Note: We recommend that you install a single instance of IBM DB2 UDB Enterprise Server Edition version 8.2 with fix pack 7a or higher as your repository for both the Agent Manager and IBM TotalStorage Productivity Center.

4.5 Preinstallation steps for Windows


The prerequisite components for IBM TotalStorage Productivity Center V3.1 are:
- IBM DB2 UDB Enterprise Server Edition V8.2 fix pack 7a or higher
- Agent Manager 1.2 (1.2.2 or later; Agent Manager V1.1 is not supported with TotalStorage Productivity Center V3.1)

Order of component installation


The components are installed in the following order:
1. DB2
2. Agent Manager
3. TotalStorage Productivity Center components
Tip: We recommend that you install the Database Schema first, and then install the Data Server and Device Server in a separate step. If you install all the components in one step and any part of the installation fails (for example, because of space or password problems), the installation suspends and rolls back, uninstalling all the previously installed components.

4.5.1 Verify primary domain name systems


Before you start your installation, we recommend that you verify whether a primary domain name system (DNS) suffix is set. This can require a computer restart. To verify the primary DNS name, follow these steps: 1. Right-click My Computer on your desktop.


2. Click Properties. The System Properties panel is displayed as shown in Figure 4-2. 3. On the Computer Name tab, click Change.

Figure 4-2 system properties

4. Enter the host name in the Computer name field. Click More to continue (see Figure 4-3).

Figure 4-3 computer name

5. In the next panel, verify that Primary DNS suffix field displays a domain name. Click OK (see Figure 4-4 on page 64).


Figure 4-4 DNS domain name

6. If you made any changes, you might need to restart your computer (see Figure 4-5).

Figure 4-5 you must restart
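If you prefer to script this check rather than click through the panels, the following sketch (our own illustration, not part of the product) asks the operating system for the local fully qualified host name and reports whether a DNS suffix appears to be present:

```python
import socket

def has_dns_suffix(fqdn: str) -> bool:
    """True if the host name carries a domain suffix (e.g. host.example.com)."""
    host = fqdn.rstrip(".")
    return "." in host and not host.startswith(".")

# Ask the OS for the local fully qualified host name
fqdn = socket.getfqdn()
if has_dns_suffix(fqdn):
    print(f"Primary DNS suffix looks set: {fqdn}")
else:
    print(f"No DNS suffix detected for {fqdn}; set one before installing TPC")
```

The string check is deliberately simple; it only tells you whether the resolved name includes a domain portion, which is what the installer relies on.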

4.5.2 Activate NetBIOS settings


If NetBIOS is not enabled on Microsoft Windows 2003, then a GUID is not generated. You must verify and activate the NetBIOS settings. On your TotalStorage Productivity Center server, go to Start → Settings → Network and Dial-up Connections. Select your Local Area Connection. From the Local Area Connection Properties panel, double-click Internet Protocol (TCP/IP). The next panel is Internet Protocol (TCP/IP) Properties. Click Advanced as shown in Figure 4-6.

Figure 4-6 TCP/IP properties


On the WINS tab, select Enable NetBIOS over TCP/IP and click OK (Figure 4-7).

Figure 4-7 Advanced TCP/IP properties

4.5.3 Internet Information Services


Port 80 is used by Internet Information Services (IIS). Port 80 is also used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate with the manager because of lost passwords or certificates. If any service is using port 80, the Agent Recovery Service installs, but it does not start. Before beginning the installation of TotalStorage Productivity Center, you must do one of the following:
- Change the IIS port to something other than 80, for example 8080.
- Uninstall IIS.
- Disable IIS.
In our installation, we uninstalled IIS to avoid any port conflicts. It must be reinstalled to use the Web-based GUI (see 4.8, Configuring the GUI for Web Access under Windows 2003 on page 107). To uninstall IIS, use the following procedure:
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. In the Windows Components panel, deselect IIS.
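You can also test programmatically whether a port is free before installing. This sketch (an illustration we wrote, not part of the installer) attempts to bind the port; a bind failure means another service, such as IIS, already owns it:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already bound to the given TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
        except OSError:
            return True  # bind failed: port taken (or not permitted)
    return False

if port_in_use(80):
    print("Port 80 is busy: change the IIS port, or uninstall/disable IIS")
else:
    print("Port 80 is free for the Agent Recovery Service")
```

Note that on UNIX-like systems, binding ports below 1024 requires elevated privileges, so a permission error is also reported as "busy" by this simple check.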

4.5.4 Create Windows user ID to install Device server and Data server
In order to install the Device server and Data server, you must have a Windows user ID with the required rights. We created a unique user ID, as described in Table 4-5 on page 67. It is a good practice to use the worksheets in Appendix A, Worksheets on page 519 to record the user IDs and passwords used during the installation of TotalStorage Productivity Center.

4.5.5 User IDs and password to be used and defined


In this section, we describe the user IDs and passwords you need to define or set up during TotalStorage Productivity Center installation. Table 4-3 points you to the appropriate table that contains the user IDs and passwords used during the installation of TotalStorage Productivity Center.
Table 4-3 Index to tables describing required user IDs and passwords

Installing DB2 and Agent Manager: Table 4-4 on page 66
Installing Device server or Data server: Table 4-5 on page 67
Installing Data agent or Fabric agent: Table 4-6 on page 67
DB2 administration server user: Table 4-7 on page 68
Certificate authority password: Table 4-8 on page 69
Common agent registration: Table 4-9 on page 69
Common agent service logon user ID and password: Table 4-10 on page 70
Host authentication password: Table 4-11 on page 70
NAS filer login user ID and password: Table 4-12 on page 70
Resource manager registration user ID and password: Table 4-13 on page 70
WebSphere Application Server administrator user ID and password: Table 4-14 on page 71

Important: To verify the valid characters you could use for these user IDs and passwords, refer to 11.5, Valid characters for user ID and passwords on page 511. Table 4-4 through Table 4-14 on page 71 contain information about the user IDs and passwords used during the installation of the TotalStorage Productivity Center prerequisites and components.
Table 4-4 Installing DB2 and Agent Manager
- OS: All
- Description: log on to Windows as a local Administrator
- Used when: logging on to Windows to install DB2 and Agent Manager
- Group: Administrators
- ITSO's user ID / password: Administrator / password


Table 4-5 Installing Device server or Data server
- OS: All
- Description: add the user ID to the DB2 Admin group, or assign these user rights: Log on as a service; Act as part of the operating system; Adjust memory quotas for a process; Create a token object; Debug programs; Replace a process level token. On Linux or UNIX, give root authority.
- Created when: must be created before starting the Device server and Data server installation
- Used when: logging on to Windows to install the Device server and Data server
- Group: Administrators
- ITSO's user ID / password: tpcadmin / tpcadmin (group Administrators)

Table 4-6 Installing Data agent or Fabric agent
- OS: All
- Description: user rights required: Act as part of the operating system; Log on as a service. On Linux or UNIX, give root authority.
- Created when: must be created before starting the Data agent or Fabric agent installation
- Used when: logging on to Windows to install the Data agent or Fabric agent
- Group: Administrators
- ITSO's user ID / password: tpcadmin / tpcadmin (group Administrators)

To install a GUI or CLI you do not need any particular authority or special user ID.


Table 4-7 DB2 administration server user
- OS: All
- Description: used to run the DB2 administration server on your system; used by the DB2 GUI tools to perform administration tasks (see the rules below)
- Created when: specified when DB2 is installed
- Used when: used by the DB2 GUI tools to perform administration tasks
- ITSO's user ID / password: db2tpc / db2tpc

DB2 user ID and password rules


DB2 user IDs and passwords must follow these rules:
- UNIX user names and passwords cannot be more than eight characters long. They cannot start with a numeric digit or end with $.
- Windows 32-bit user IDs and passwords can contain 1 to 20 characters.
- Group and instance names can contain 1 to 8 characters.
- User IDs cannot be any of the following: USERS, ADMINS, GUESTS, PUBLIC, LOCAL.
- User IDs cannot begin with: IBM, SQL, SYS.
- User IDs cannot include accented characters.
- UNIX users, groups, and instance names must be lowercase.
- Windows 32-bit users, groups, or instance names can be any case.
DB2 creates a user group with the following administrative rights: Act as a part of an operating system; Create a token object; Increase quotas; Replace a process-level token; Log on as a service.
Note: Adding the user ID used to install TotalStorage Productivity Center to the DB2 Admin group gives the user ID the necessary administrative rights.
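The naming rules above can be captured in a small validation helper. The sketch below is our own illustration (it is not part of DB2 or the TPC installer) and implements only the documented rules:

```python
import unicodedata

RESERVED = {"USERS", "ADMINS", "GUESTS", "PUBLIC", "LOCAL"}
RESERVED_PREFIXES = ("IBM", "SQL", "SYS")

def valid_db2_user_id(user_id: str, platform: str = "windows") -> bool:
    """Check a candidate DB2 user ID against the documented naming rules."""
    uid = user_id
    if not uid:
        return False
    max_len = 8 if platform == "unix" else 20
    if len(uid) > max_len:
        return False
    if platform == "unix":
        if uid != uid.lower():              # UNIX names must be lowercase
            return False
        if uid[0].isdigit() or uid.endswith("$"):
            return False
    if uid.upper() in RESERVED:             # USERS, ADMINS, GUESTS, ...
        return False
    if uid.upper().startswith(RESERVED_PREFIXES):  # IBM, SQL, SYS
        return False
    # no accented characters (detect combining marks after decomposition)
    if any(unicodedata.combining(c) for c in unicodedata.normalize("NFD", uid)):
        return False
    return True

print(valid_db2_user_id("db2tpc"))    # True
print(valid_db2_user_id("sysadmin"))  # False: begins with SYS
```

For example, the ITSO user ID db2tpc passes, while sysadmin is rejected because of the SYS prefix.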


Table 4-8 Certificate authority password
- OS: All
- Description: this password locks the CARootKeyRing.jks file. Specifying a value for this password is optional; you need to specify it only if you want to be able to unlock the certificate authority files.
- Created when: specified when you install Agent Manager
- Used when: we recommend that you create a password
- Default: if not specified, one is generated automatically
- ITSO's password: tpctpc

Important: Do not change the Agent Registration password under any circumstances. Changing this password will render the certificates unusable.
Table 4-9 Common agent registration passwords
- OS: All
- Description: the password required by the common agent to register with the Agent Manager
- Created when: specified when you install Agent Manager
- Used when: during common agent, Data agent, and Fabric agent installation
- Default: changeMe
- ITSO's password: changeMe


Table 4-10 Common agent service logon user ID and password
- OS: Windows
- Description: creates a new service account for the common agent to run under
- Created when: specified when you install the Data agent or Fabric agent (local only)
- Group: Administrators
- Default: if you do not specify anything, itcauser is created
- ITSO's user ID / password: tpcadmin / tpcadmin

Table 4-11 Host authentication password
- OS: All
- Created when: specified when you install the Device server
- Used when: when you install the Fabric agent, to communicate with the Device server
- Password: must be provided
- ITSO's password: tpctpc

Table 4-12 NAS filer login user ID and password
- OS: Windows
- Used when: specified when you run NAS discovery
Table 4-13 Resource manager registration user ID and password
- OS: All
- Created when: specified when you install the Device server and Data server
- Used when: when the Device server and Data server register with the Agent Manager
- Defaults: user ID manager, password password
- ITSO's user ID / password: manager / password


Table 4-14 WebSphere Application Server administrator user ID and password
- OS: All
- Description: you can use tpcadmin to avoid creating a new user ID
- Created when: specified when you install the Device server
- Used when: when the Device server has to communicate with WebSphere
- Default: if not provided, one is created
- ITSO's user ID / password: tpcadmin / tpcadmin

4.6 DB2 install for Windows


In this section, we show a typical installation of DB2 8.2. Before beginning the installation, it is important that you log on to your system as a local administrator with Administrator authority for Windows or root authority for UNIX and Linux (see Table 4-4 on page 66). Attention: If you update DB2 from an older version, for example from DB2 7.2 to DB2 8.2, the TotalStorage Productivity Center installer might not recognize the DB2 version. See 11.2, DB2 installation known issues on page 511. To begin the installation of DB2, follow these steps: 1. Insert the IBM TotalStorage Productivity Center Prerequisite Software Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Go to the DB2 Installation image path and double-click setup.exe. You will see the first panel, as shown in Figure 4-8 on page 72. Select Install Product to proceed with the installation.


Figure 4-8 DB2 setup welcome

2. The next panel allows you to select the DB2 product to be installed. Click Next to proceed as shown in Figure 4-9 on page 73.


Figure 4-9 Select product

The InstallShield Wizard starts (see Figure 4-10).

Figure 4-10 Preparing to install

3. The DB2 Setup wizard panel is displayed, as shown in Figure 4-11 on page 74. Click Next to proceed.


Figure 4-11 setup wizard

4. You have to read and click I accept the terms in the license agreement (Figure 4-12).

Figure 4-12 license agreement

5. To select the installation type, accept the default of Typical and click Next to continue (see Figure 4-13 on page 75).


Figure 4-13 Typical installation

6. Accept the defaults and proceed with Install DB2 Enterprise Server Edition on this computer (see Figure 4-14). Click Next to continue.

Figure 4-14 Installation action

7. The panel shown in Figure 4-15 on page 76 shows defaults for the drive and directory to be used as the installation folder. You can change these or accept the defaults, then click Next to continue.


Figure 4-15 Installation folder

8. Set the user information for the DB2 Administration Server; choose the domain of this user. If it is a local user, leave the field blank. Type a user name and password of the DB2 user account that you want to create (Figure 4-16 on page 77). You can refer to Table 4-7 on page 68. DB2 creates a user with the following administrative rights:
- Act as a part of an operating system
- Create a token object
- Increase quotas
- Replace a process-level token
- Log on as a service


Figure 4-16 User Information

9. Accept the defaults in the panel shown in Figure 4-17, and click Next to continue.

Figure 4-17 Administration contact list

10.Click OK when the warning window shown in Figure 4-18 on page 78 opens.


Figure 4-18 Warning

11. In the Configure DB2 instances panel, accept the default and click Next to continue (see Figure 4-19).

Figure 4-19 Configure DB2 instances

12. Accept the defaults, as shown in Figure 4-20 on page 79. Verify that Do not prepare the DB2 tools catalog on this computer is selected. Click Next to continue.


Figure 4-20 Prepare db2 tools catalog

13. In the panel shown in Figure 4-21, click Defer the task until after installation is complete and then click Next to continue.

Figure 4-21 Health Monitor

14. The panel shown in Figure 4-22 on page 80 is presented. Click Install to continue.


Figure 4-22 Start copying files

The DB2 installation proceeds and you see a progress panel similar to the one shown in Figure 4-23.

Figure 4-23 DB2 Enterprise Server Edition installation progress

15. When the installation completes, click Finish, as shown in Figure 4-24 on page 81.


Figure 4-24 DB2 setup wizard

16. If you see the DB2 product updates panel shown in Figure 4-25, click No because you have already verified that your DB2 version is at the latest recommended and supported level for TotalStorage Productivity Center, as mentioned in Databases supported on page 61.

Figure 4-25 DB2 product updates

17. Click Exit First Steps (Figure 4-26 on page 82) to complete the installation.


Figure 4-26 Universal Database First Steps panel

Verifying the installation


To verify the DB2 installation, check that the db2tpc user has been created and included in the DB2ADMNS group. Open a Command Prompt window and enter the db2level command to check the installed version, as shown in Figure 4-27.

Figure 4-27 db2level command output
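The db2level check can also be scripted, for example by capturing the command output and extracting the version token. The sample string below is only a hypothetical approximation of what db2level prints (the exact wording varies by release and fix pack), so treat this as a sketch:

```python
import re

def db2_version(db2level_output):
    """Extract the quoted 'DB2 v...' informational token, if present."""
    match = re.search(r'"(DB2 v[\d.]+)"', db2level_output)
    return match.group(1) if match else None

# Hypothetical sample of db2level output; real wording varies by release.
sample = 'Informational tokens are "DB2 v8.1.7.445", "s040914", "WR21342".'
print(db2_version(sample))  # DB2 v8.1.7.445
```

In practice you would feed the function the real command output (for example, captured with subprocess on a machine where DB2 is installed) and compare the result against the level required by TotalStorage Productivity Center.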

Figure 4-28 on page 83 shows the DB2 Windows services created at the end of the installation.


Figure 4-28 Windows services showing DB2 services

4.6.1 Agent Manager installation for Windows


This is a typical installation of Agent Manager 1.2.2. When you install the Agent Manager, you also install the embedded version of IBM WebSphere Application Server - Express V5.0 (WebSphere Express), without the administrative console for managing WebSphere itself that was present in TotalStorage Productivity Center V2.3. 1. To start the Agent Manager installer, run the appropriate program from the EmbeddedInstaller directory (Table 4-15).
Table 4-15 EmbeddedInstaller directory commands

Microsoft Windows: setupwin32.exe (if a Java error occurs: setupwin32.exe -is:javahome ..\jre\windows)
AIX: setupAix.bin (if a Java error occurs: setupAix.bin -is:javahome ../)
Linux: setupLinux.bin (if a Java error occurs: setupLinux.bin -is:javahome ../)
Linux on Power PC: setupLinuxPPC.bin
Solaris: setupSolaris.bin

Note: Log on with a user ID that has administrative authority on Windows and root authority on UNIX or Linux. 2. The Installation Wizard starts; you see a panel similar to the one in Figure 4-29 on page 84.


Figure 4-29 Install wizard panel

3. Select a language and click OK (see Figure 4-30).

Figure 4-30 Language selection panel

4. Read the license agreement and select I accept the terms of the license agreement. Click Next to continue (see Figure 4-31).

Figure 4-31 License panel

5. Figure 4-32 on page 85 shows the Directory Name for the installation. Click Next to accept the default or click Browse to install to a different directory.


Figure 4-32 Installer directory name

6. The Agent Manager Registry information panel is displayed, as shown in Figure 4-33. On this panel specify the type of database, Database Name or Directory and type of Database Connection. You can accept the defaults and click Next to continue.

Figure 4-33 Wizard installing


7. In the next panel, shown in Figure 4-34, enter the following database information:

Database Software Directory
Enter the directory where DB2 is installed on your system. The default directory is C:\Program Files\IBM\SQLLIB (Microsoft Windows) or /opt/IBM/SQLLIB (UNIX or Linux).

Database User Name
Record this name in Table A-4 on page 521.

Database Password
Record this password in the tables in Appendix A, Worksheets on page 519.

Host name of the Database Server
Record this host name in the tables in Appendix A, Worksheets on page 519.

Database Port
This port number is required for a remote database.

After entering the information, click Next to continue.

Figure 4-34 Agent Manager database information

8. Enter the following information in the window in Figure 4-35 on page 87:

Host name
Enter the alias or fully qualified host name. Review the preinstallation task mentioned in 4.5.1, Verify primary domain name systems on page 62. If you specify an IP address, you see the warning panel shown in Figure 4-36 on page 88.

Application Server Name for Agent Manager
Accept the default name or enter a different name.


Registration port
The default port is 9511 for server-side SSL. Refer to 3.3.1, TCP/IP ports used on page 46.

Secure port
The default port is 9512 for client authentication (two-way SSL). Refer to 3.3.1, TCP/IP ports used on page 46.

Public Port and Secondary Port for the Agent Recovery Service
The public communication port default is 9513. Refer to 3.3.1, TCP/IP ports used on page 46. Do not use port 80 for the agent recovery service.

Start the Agent Manager after the installation is complete / Autostart the Agent Manager each time the system restarts
We recommend that you select both Start agent manager and Autostart the agent manager.

To check for other applications that are using port 80, run the netstat -an command and look for port 80 in the LISTENING state. If an application is using port 80, stop that application and then continue with the installation of the Agent Manager.

Note: If you want the Agent Recovery Service to run, you must stop any service using port 80. If a service is using port 80, the Agent Recovery Service installs, but does not start.

Figure 4-35 Host name and ports panel (do not use port 80)
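The port 80 check described above can be scripted. The following sketch filters netstat-style output for a listener on port 80; the sample output here is simulated (real `netstat -an` column layout varies by operating system), so apply the same filter to the live command's output.

```shell
# Sketch: detect a listener on port 80 before installing the Agent Manager.
# netstat_sample stands in for real `netstat -an` output (an assumption;
# the exact column layout varies by OS).
netstat_sample='  TCP    0.0.0.0:80     0.0.0.0:0      LISTENING
  TCP    0.0.0.0:9511   0.0.0.0:0      LISTENING'
port80=$(printf '%s\n' "$netstat_sample" | grep -c ':80 .*LISTENING')
if [ "$port80" -gt 0 ]; then
  echo "port 80 in use: stop that service before installing"
else
  echo "port 80 free"
fi
```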


9. If you specify an IP address instead of a fully qualified host name for the WebSphere Application Server, you see the panel shown in Figure 4-36. We recommend you click the Back button and specify a fully qualified host name.

Figure 4-36 Warning IP specified panel

10.In the Security Certificates panel (see Figure 4-37), it is highly recommended that you accept the defaults to generate new certificates for a secure environment. Click Next to continue.

Figure 4-37 Create security certificates


11. In the panel shown in Figure 4-38, specify the security certificate settings. To create certificates, you must specify a Certificate Authority password; you also need this password to examine the certificate files after they are generated. Make sure that you record this password in the worksheets in Appendix A, Worksheets on page 519. The second password specified on this panel is the Agent Registration password. The default Agent Registration password is changeMe; we recommend that you specify a unique password and record it in the worksheets provided in Appendix A, Worksheets on page 519. After entering the passwords, click Next to continue.

Figure 4-38 Certificate password panel

12.The User Input Summary panel is displayed (see Figure 4-39 on page 90). If you want to change any settings, click Back and return to the window where you set the value. If you do not need to make any changes, click Next to continue.


Figure 4-39 Input summary

13.Review the summary information panel (see Figure 4-40) and click Next to continue.

Figure 4-40 Summary info

The Agent Manager installation starts and you see several messages similar to those in Figure 4-41 on page 91 and Figure 4-42 on page 91.


Figure 4-41 Installation progress panel

Figure 4-42 is the database IBMCDB creation status panel.

Figure 4-42 Registry database

14.The Summary of Installation and Configuration Results panel is displayed in Figure 4-43 on page 92. Verify that the Agent Manager has successfully installed all of its components. Review the panel and click Next to continue.


Figure 4-43 Summary of Agent Manager configuration options

15.The last panel (Figure 4-44) shows that the Agent Manager has been successfully installed. Click Finish to complete the Agent Manager installation.

Figure 4-44 Finish the Agent Manager install

Verifying the installation


You can verify the installation by running the healthcheck utility. From a command prompt, navigate to the toolkit\bin directory under the Agent Manager installation directory (by default, C:\Program Files\IBM\AgentManager\toolkit\bin) and run healthcheck.


Verify that the ARS.version field shows the level you have installed (in our installation it is 1.2.2.2) and that at the end you see the message Health Check passed as shown in Figure 4-45.

Figure 4-45 Healthcheck utility
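The healthcheck verification can also be scripted so that a failed check is caught automatically. In this sketch the utility's output is simulated; in practice, capture the output of healthcheck (healthcheck.sh on UNIX) from the toolkit\bin directory instead.

```shell
# Sketch: fail loudly if the health check did not pass. hc_out is a
# simulated stand-in for the real healthcheck output.
hc_out='ARS.version = 1.2.2.2
Health Check passed'
case "$hc_out" in
  *"Health Check passed"*) hc_status="passed" ;;
  *)                       hc_status="failed" ;;
esac
echo "Agent Manager health check: $hc_status"
```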

After the Agent Manager installation completes, you can verify the connection to the database (see Figure 4-46 on page 94). From a command prompt, enter:

db2cmd
db2 connect to IBMCDB user db2tpc using db2tpc


Figure 4-46 DB2 command line CONNECT
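The connection test can also be verified programmatically. The sketch below parses the Local database alias line from the connection output; the output text is simulated here (an assumption about the DB2 8.2 layout), so in practice pipe in the real output of the db2 connect command instead.

```shell
# Sketch: confirm that the connection output names the IBMCDB alias.
# conn_out is simulated `db2 connect` output (layout is an assumption).
conn_out=' Database Connection Information

 Database server        = DB2/NT 8.2.0
 SQL authorization ID   = DB2TPC
 Local database alias   = IBMCDB'
db_alias=$(printf '%s\n' "$conn_out" | sed -n 's/.*Local database alias *= *//p')
echo "connected to: $db_alias"
```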

4.7 Install TotalStorage Productivity Center components


Now that the prerequisites have been installed, we install the TotalStorage Productivity Center components. Before starting the installation, verify that DB2 8.2 Enterprise Edition fix pack 7a has been installed and started.

Important: On Windows, log on to your system as a local administrator with database authority. On UNIX or Linux, the user ID must have root authority.

1. For Windows, if autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the TotalStorage Productivity Center CD-ROM drive or directory, and double-click setup.exe.

2. Choose your language and click OK (see Figure 4-47).

Figure 4-47 Language selection panel

3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 4-48 on page 95).


Figure 4-48 License panel

4. Figure 4-49 on page 96 shows the choice between typical and custom installation. You have the following options:

Typical installation
Installs all of the components on the same computer; you select Servers, Agents, and Clients.

Custom installation
Installs each component separately.

Installation licenses
Installs the TotalStorage Productivity Center licenses. The license is on the CD. You only need to run this option when you add a license to a TotalStorage Productivity Center package that is already installed on your system. For example, if you have installed the TotalStorage Productivity Center for Data package, its license is installed automatically with the product. If you later decide to enable TotalStorage Productivity Center for Fabric, run the installer and select Installation licenses to install the license key from the CD; you do not have to install the IBM TotalStorage Productivity Center for Fabric product.

In this chapter, we document the Custom installation. Click Next to continue.


Figure 4-49 Custom installation

5. In the Custom installation, you can select all the components in the panel shown in Figure 4-50; this is the recommended installation scenario. In our scenario, we show the installation in stages: as the first step, we select only Create database schema and click Next to proceed (see Figure 4-50).

Figure 4-50 Custom installation component selection

6. To start the Database creation, you must specify a DB2 user ID. We suggest you use the same DB2 user ID you created before (see Table 4-7 on page 68). Click Next, as shown in Figure 4-51 on page 97.


Figure 4-51 DB2 user and password

7. Enter your DB2 user ID and password again (see Table 4-7 on page 68). Do not accept the default of Use local database; click Create local database instead. By default, a database named TPCDB is created. Click Next to continue (Figure 4-52).

Figure 4-52 DB2 user and create local database

8. The next panel allows you to change the default space assigned to the database. At this time, you do not need to change these values; you can accept the defaults. You must specify the Schema name; in our installation, we chose TPC. For better performance, we recommend that you:
- Allocate the TEMP DB on a different physical disk than the TotalStorage Productivity Center components.
- Create larger Key and Big databases.

9. Select System managed (SMS) and click OK to proceed (see Figure 4-53 on page 98). To understand the advantages of an SMS database versus a DMS database, refer to 11.1, Selecting an SMS or DMS tablespace on page 510.


Figure 4-53 DB schema space

Figure 4-54 shows the Database schema installation progress panel. Wait for the installation to complete.

Figure 4-54 installing DB

10.Upon completion, the successfully installed panel is displayed. Click Finish to continue (Figure 4-55 on page 99).


Figure 4-55 Installation summary information

4.7.1 Verifying installation


To check the installation, start DB2 Control Center, verifying that you have two DB2 instances in your environment, as shown in Figure 4-56.

Figure 4-56 Verifying DB2 installation

Attention: Do not edit or modify anything in DB2 Control Center. This could cause serious damage to your tablespace. Simply use DB2 Control Center to browse your configuration.

Log files
Check for errors and Java exceptions in the log files at the following locations:
- Install<timestamp>.log in the system temp directory or <InstallLocation>
- <InstallLocation>\dbschema\log


Look for dbschema.out, dbschema.err, and DBSchema.log in <InstallLocation>\dbschema\log. Also check:
- <InstallLocation>\log
- <InstallLocation>\TPC.log

Check for the success message at the end of the Install<timestamp>.log file to confirm a successful installation.

4.7.2 Installing Data and Device Servers, GUI, and CLI


In our environment, we performed a custom installation of Data Server, Device Server, GUI, and CLI.

Preinstallation tasks
To install the Data Server and Device Server components, you must log on to Windows 2003 with a user that has the following rights:
- Log on as a service
- Act as part of the operating system
- Adjust memory quotas for a process
- Create a token object
- Debug programs
- Replace a process-level token

Be certain the following tasks are completed:
- We recommend that you create a user ID for installation. We created the user ID TPCADMIN (refer to Table 4-5 on page 67).
- The database schema must be installed successfully before you start the Data server installation.
- An accessible Agent Manager must be available before you start the Device server installation.
- The Data server must be successfully installed before you install the GUI.
- The Device server must be successfully installed before you install the CLI.

Custom installation
To perform a custom installation, follow these steps:

1. Start the TotalStorage Productivity Center installer.

2. Select the components you want to install. In our scenario, we select the four server components, as shown in Figure 4-57 on page 101.

Tip: We recommend that you install the Data Agent and Device Agent in a separate step. If you install all the components at the same time and one of them fails for any reason (for example, space or passwords), the installation suspends and a rollback occurs, uninstalling all the previously installed components.


Figure 4-57 Installation selection

Specify the DB2 user ID and password defined in Table 4-7 on page 68 in the panel shown in Figure 4-58, and click Next.

Figure 4-58 User ID and password

3. Enter the DB2 user ID and password and click Use local database. Click Next to continue (Figure 4-59 on page 102).


Figure 4-59 Use local database selection

4. In the panel in Figure 4-60 on page 103, enter the following information:

Data Server Name
Enter the fully qualified host name of the Data Server.

Data Server Port
Enter the Data Server port. The default is 9549. See 3.3.1, TCP/IP ports used on page 46 for more details.

Device Server Name
Enter the fully qualified host name of the Device Server.

Device Server Port
Enter the Device Server port. The default is 9550.

TPC Superuser
Enter the Administrators group for the TPC Superuser. We created the user ID TPCADMIN and added it to the existing Administrators group. See 4.5.5, User IDs and password to be used and defined on page 66 for more details.

Host Authentication Password
This is the password used by the Fabric agents to communicate with the Device Server. Remember to record this password. See Table 4-11 on page 70.

WebSphere Application Server admin ID and Password
You can use the TPC Superuser here. In our case, we used TPCADMIN. See Table 4-14 on page 71 for further details.

Click Next to continue.


Figure 4-60 Component information for installation

5. In the panel shown in Figure 4-61 on page 104, enter the Agent Manager information. You must specify the following:

Hostname or IP address
The fully qualified name or IP address of the Agent Manager server. For further details about the fully qualified name, refer to 4.5.1, Verify primary domain name systems on page 62.

Port (Secured)
The port number of the Agent Manager server. If it is not in use by any other application, use the default port 9511. See 3.3.1, TCP/IP ports used on page 46 for further details.

Port (Public)
The public communication port. If it is not in use by any other application, use the default of 9513.

User ID
The user ID used to register the Data Server or Device Server with the Agent Manager. The default is manager. You previously specified this user ID during the Agent Manager installation (see Figure 4-38 on page 89).

Password
The password used to register the Data Server or Device Server with the Agent Manager. The default is password. You previously specified this password during the Agent Manager installation (see Figure 4-38 on page 89).

Password - common agent registration password
The password used by the common agent to register with the Agent Manager. This was specified when you installed the Agent Manager. The default is changeMe. See Table 4-9 on page 69 for further details.

Click Next to continue.


Figure 4-61 Agent manager

6. The Summary information panel is displayed. Review the information, then click Install to continue (see Figure 4-62).

Figure 4-62 Summary of installation

The installation starts. You might see several messages related to Data Server installation similar to Figure 4-63 on page 105.


Figure 4-63 Installing Data Server

You might also see several messages about the Device Server installation, as shown in Figure 4-64.

Figure 4-64 Installing Device Server

7. After the GUI and CLI installation messages, you see the summary information panel (Figure 4-65 on page 106). Read and verify the information and click Finish to complete the installation.


Figure 4-65 Component installation completion panel

Verifying installation
At the end of the installation, the Windows Services panel shows that the Data Server and Device Server services (shown in Figure 4-66) have been installed.

Figure 4-66 Windows service

Check that the Administrators group contains the newly created TPC user ID. The user ID TSRMsrv1 is created by default by the install program.

Log files for data server


Check the logs for any errors or Java exceptions:
- Install<timestamp>.log in the system temp directory or <InstallLocation>
- <InstallLocation>\data\log (look for dataserver.out, dataserver.err, and DataServer.log)
- <InstallLocation>\log


From <InstallLocation>\data\log, check:
- Cimom_000001.log
- createdLuns.log
- guidinstallog.txt
- Scheduler_000001.log
- Server_000001.log
- TPCD_000001.log
- TSRMsrv1.out

The Install<timestamp>.log file should not contain any exceptions, and it should show install successful at the bottom. The Server_000001.out file indicates that the server is ready to accept connections.
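The log checks above lend themselves to a small script: no exceptions anywhere in the log, and a success message on the last line. A generated sample file stands in for the real Install<timestamp>.log here; point log at the file in your <InstallLocation> instead.

```shell
# Sketch of the log check; the sample file is an assumption standing in
# for the real Install<timestamp>.log.
log=/tmp/Install_sample.log
printf 'component ok\ncomponent ok\nINSTALL SUCCESSFUL\n' > "$log"
if ! grep -qi 'exception' "$log" && tail -1 "$log" | grep -qi 'successful'; then
  log_status="clean"
else
  log_status="suspect"
fi
echo "install log: $log_status"
```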

Log files for device server


The log files for the Device server are:
- <InstallLocation>\device\apps\was\logs
- <InstallLocation>\TPC.log
- <InstallLocation>\device.log
- <InstallLocation>\device\Mgrlog.txt

If the installation is successful, you see the Server server1 open for e-business message in <InstallLocation>\device\apps\was\logs\server1\SystemOut.log. In case of failure, you can find errors and exceptions there.
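The open for e-business check can likewise be scripted. A sample log line stands in for the real SystemOut.log here (the WSVR0001I message ID shown is an assumption about the exact WebSphere log text; the phrase to match comes from the text above).

```shell
# Sketch: confirm the Device server's WebSphere instance started.
# /tmp/SystemOut.sample.log stands in for
# <InstallLocation>/device/apps/was/logs/server1/SystemOut.log.
sysout=/tmp/SystemOut.sample.log
echo 'WSVR0001I: Server server1 open for e-business' > "$sysout"
if grep -q 'open for e-business' "$sysout"; then
  dev_status="started"
else
  dev_status="not started"
fi
echo "Device server: $dev_status"
```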

Log files for GUI


Check the log files for any errors. The log files for the GUI are:
- Install<timestamp>.log in the system temp directory or <InstallLocation>
- <InstallLocation>\gui\log (look for gui.out, gui.err, and GUI.log)

Log files for CLI


Check the log files for any errors. The log files for the CLI are:
- <InstallLocation>
- <InstallLocation>\cli\log

4.8 Configuring the GUI for Web Access under Windows 2003
You can configure the TotalStorage Productivity Center V3.1 user interface to be accessible from a Web browser. After this is done, a user can access the TotalStorage Productivity Center GUI by browsing to the URL. The TotalStorage Productivity Center GUI applet is downloaded into the browser and executed. It looks and acts exactly as though you were interacting with the native server. You can install the interface on any of the TotalStorage Productivity Center servers, management consoles, or workstations.


4.8.1 Installing Internet Information Services (IIS)


To configure the Web server on the TotalStorage Productivity Center server, make sure that IIS is installed. It is not installed by default, but it is part of Windows Server 2003. To install it, follow these directions:

1. On the Windows taskbar, select Start → Control Panel → Add or Remove Programs.

2. Select Add/Remove Windows Components from the task bar on the left (see Figure 4-67).

Figure 4-67 Add/Remove Windows Components icon

3. Highlight the entry for Application Server. Click Details to continue (Figure 4-68).

Figure 4-68 Windows Components panel

4. Check the Internet Information Services (IIS) box. Click OK to continue (Figure 4-69 on page 109).


Figure 4-69 Installing IIS

5. You are returned to the panel in Figure 4-68 on page 108. Click Next to install IIS. An Installation Progress window shows the installation progress. Have your Windows Server 2003 CD-ROM available or have a 2003 I386 directory installed on a hard disk (see Figure 4-70). If you have Service Pack 1 installed, you also need the Service Pack 1 CD-ROM or an SP1 I386 directory available.

Figure 4-70 Service Pack media request

6. When the installation has completed successfully, click the Finish button to complete the Windows Component Wizard (see Figure 4-71 on page 110).


Figure 4-71 Installation complete window

7. Cancel the Add or Remove Programs dialog box. You now have IIS installed.

4.8.2 Configuring IIS for the TotalStorage Productivity Center GUI


You can now configure IIS to serve the TotalStorage Productivity Center user interface.

1. Open the IIS configuration panel by clicking Start → All Programs → Administrative Tools → Internet Information Services (IIS) Manager (Figure 4-72).

Figure 4-72 Open the IIS configuration panel

2. When the Internet Information Services (IIS) Manager opens, expand the tree in the Explorer panel to display Web Sites → Default Web Site. Right-click Default Web Site, and click Properties in the context menu (see Figure 4-73 on page 111).


Figure 4-73 IIS Default Web Site Properties

3. The Default Web Site Properties panel opens. There are three tabs that you must configure:

Web Site tab
On the Web Site tab (see Figure 4-74 on page 112), you can change the Description to TPC V3.1 GUI.

Attention: Because the Agent Manager is configured to use port 80 for the Agent Recovery Service, you must change the default TCP port to something other than 80. Port 8080 is a good alternative.


Figure 4-74 Default Web Site Properties - Web Site tab

Home Directory tab On the Home Directory tab (Figure 4-75), change the Local Path to the GUI directory for TotalStorage Productivity Center. The default is C:\Program Files\IBM\TPC\gui.

Figure 4-75 Default Web Site Properties Home Directory tab


Documents tab On the Documents tab (see Figure 4-76), click the Add button, and add TPCD.html to the list. Highlight the new name in the list, and click Move Up to move TPCD.html to the top of the list.

Figure 4-76 Default Web Site Properties Documents tab

4. Click OK to save these changes. Close the IIS Manager by clicking the X in the upper right corner of the panel.

4.8.3 Launch the TotalStorage Productivity Center GUI


To start the TotalStorage Productivity Center GUI, follow these steps: 1. Start your Web browser (Internet Explorer), and enter the URL you have just created. If you changed the port, you must incorporate the port into the URL also. The format of the URL is:
http://<hostname>:<port>

For example, if the host name of your TotalStorage Productivity Center server is fred.mycompany.com and you changed the port to 8080, the URL is:

http://fred.mycompany.com:8080

If you left the port set to 80, the URL is:


http://fred.mycompany.com

If you start the Web browser on your TotalStorage Productivity Center server machine, you can use localhost rather than the network name:
http://localhost


2. The IBM TotalStorage Productivity Center for Data panel is displayed (see Figure 4-77). This is the anchor page for the TotalStorage Productivity Center GUI java applet, and must remain open as long as the TotalStorage Productivity Center GUI is running.

Figure 4-77 TotalStorage Productivity Center for Data panel

3. A security certificate approval panel is displayed (see Figure 4-78 on page 115). Depending on network transmission rates, it could take a few minutes for the panel to open. Click Yes to accept the certificate. (If you click No, the TotalStorage Productivity Center GUI does not load, and you must relaunch the TotalStorage Productivity Center GUI URL to restart.)


Figure 4-78 Security certificate

At this point, the Java applet for the TotalStorage Productivity Center GUI downloads. The applet jar file is 15.6 MB and can take some time to load into your browser the first time; be patient, because no progress bar is displayed while it downloads. Once loaded, the applet jar file remains in your browser cache, so subsequent starts of the TotalStorage Productivity Center GUI load much faster.

4. After the applet has loaded, it launches the TotalStorage Productivity Center GUI. In the center of the GUI, the Sign On panel opens. It should be prefilled with the Server address and access port (9549 for TotalStorage Productivity Center V3.1). Enter your TotalStorage Productivity Center Server user ID and password, and click OK to continue (see Figure 4-79).

Figure 4-79 IBM TotalStorage Productivity Center Sign on panel

The TotalStorage Productivity Center GUI is displayed (see Figure 4-80 on page 116), and has all the functionality of the native GUI on the TotalStorage Productivity Center server.


Figure 4-80 TotalStorage Productivity Center V3.1 GUI


Chapter 5. TotalStorage Productivity Center installation on AIX


In this chapter we describe the installation of TotalStorage Productivity Center on AIX. This includes the installation of the prerequisite components - DB2 and Agent Manager. Of the available installation paths, Typical and Custom, we used the custom installation for our environment.


5.1 TotalStorage Productivity Center installation


IBM TotalStorage Productivity Center is a unified software product which contains multiple, modular components as shown in Figure 5-1.

Figure 5-1 presents TPC Version 3 Release 1 as a single application with modular components. Highlights from the figure:

- TPC Standard Edition: Disk, Data, and Fabric components
- Streamlined installation and packaging
- Single user interface: file system, database, fabric, and storage management; asset, capacity, and performance reporting
- Single database with correlated host, fabric, and storage information
- Consistent reporting capabilities (scheduled and ad hoc)
- Data export capabilities (HTML, CSV)
- A single set of services for consistent administration and operations: policy definitions, event handling, and resource groups
- New in V3.1: storage topology viewer, role-based administration, AIX support for the TPC server, fabric performance management, tape discovery and asset/capacity reporting, and performance management for the DS4000
- TPC Standard Edition provides the full suite; components are also orderable separately

Figure 5-1 A single application

TotalStorage Productivity Center provides an integrated storage infrastructure management solution that is designed to allow you to manage every point of your storage infrastructure, including hosts, network and fabric, and physical disks. It can help simplify and automate the management of devices, data, and storage networks. TotalStorage Productivity Center V3.1 offers a simple, easy-to-install package with management server support added for IBM AIX V5.3, integrating IBM DB2 as the management server database. The default installation directory on AIX is /opt/IBM/TPC; you can change this path during installation setup. There are two types of installation: typical and custom.

5.1.1 Typical installation


The typical installation allows you to install all the components on the same computer, by selecting Servers, Agents and Clients. It is a local installation only.

5.1.2 Custom installation


The Custom installation allows you to install each component separately, and to deploy remote agents (Fabric and Data) on a different computer.


5.1.3 CD layout and components


You can install the components from different CDs:

CD1 (OS: Windows, AIX, and Linux Red Hat 3):
- Database Schema
- Data Server
- Device Server
- GUI
- CLI
- Local Data Agent
- Local Device Agent
- Remote installation of Device Agent

CD2 (OS: Windows, AIX, Linux Red Hat 3, Linux Power, Linux s390 (zLinux), Solaris, HP-UX):
- Local Data Agent
- Local Device Agent
- Remote installation of Data Agent
- Remote installation of Device Agent

CD3:
- Data Upgrade for all supported platforms

If you need to perform a remote installation of Data Agents on an operating system other than AIX, you need to copy the contents of both CD1 and CD2 to one location.

5.2 Configuration
The IBM TotalStorage Productivity Center components are:
- Data Server
- Device Server
- DB2
- Agent Manager
- Graphical user interface (GUI)
- Command-line interface (CLI)

The Data Server and Device Server must be installed on the same server. You can install all the components on one server, or you can use a two-server configuration. In a two-server configuration, you install the components as follows:

Server 1:
- DB2
- Agent Manager

Server 2:
- Device Server
- Data Server
- GUI
- CLI

You can install DB2 on a remote server.


5.3 Hardware Prerequisites


The minimum hardware prerequisites are listed below. For AIX on IBM eServer iSeries and IBM eServer pSeries:
- Processor: minimum 1.0 GHz
- 4 GB of RAM
- Network connectivity

For AIX, you need 500 MB in /tmp, 1.3 GB in /opt, and up to approximately 80 GB of hard disk space for databases.
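The space prerequisites can be checked with a short script before installing. The free-space figures in this sketch are hard-coded stand-ins; on a real AIX system, derive them from df output (for example, df -m /tmp /opt, with the column to parse being an assumption to verify on your system).

```shell
# Sketch: compare available space against the stated prerequisites
# (500 MB in /tmp, 1.3 GB in /opt). The *_free_mb values are stand-ins
# for numbers parsed from real df output.
tmp_free_mb=800
opt_free_mb=2000
if [ "$tmp_free_mb" -ge 500 ] && [ "$opt_free_mb" -ge 1300 ]; then
  space_ok="yes"
else
  space_ok="no"
fi
echo "prerequisite space available: $space_ok"
```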

5.4 Software Prerequisites


Table 5-1 shows the platforms supported by IBM TotalStorage Productivity Center for AIX server components.
Table 5-1 Platform support for Data server, Device server, GUI, and Agent Manager

Platform               Data server, Device server,   GUI       Agent Manager v1.2 (a)
                       database schema (DB2), CLI
IBM AIX 5.1 (32-bit)   NO                            NO        YES
IBM AIX 5.1 (64-bit)   NO                            NO        YES
IBM AIX 5L (32-bit)    NO                            NO        YES
IBM AIX 5L (64-bit)    NO                            NO        YES
IBM AIX 5.2 (32-bit)   NO                            NO        YES
IBM AIX 5.2 (64-bit)   NO                            NO        YES
IBM AIX 5.3 (32-bit)   YES (b)                       YES (b)   YES (32- and 64-bit)

(a) Install fix pack 2; you can optionally use the Agent Manager you installed with V2.3.
(b) With the AIX 5300-01 maintenance level and APAR IY70336.
Table 5-2 shows the platforms supported for installing and deploying the Data agent and Fabric agent.

Important: Data agents and Fabric agents at the 2.x version are supported by IBM TotalStorage Productivity Center V3.1 managers.

Table 5-2 Platform support for Data agent and Fabric agent

Platform                                                          Data agent and Fabric agent
Windows 2000 Advanced Server                                      Only Data agent
Windows 2000 Datacenter                                           Only Data agent
Windows Server 2003 Standard Edition                              YES
Windows Server 2003 Enterprise Edition                            YES
Red Hat Enterprise Linux AS V3.0, IBM eServer xSeries             YES
Red Hat Enterprise Linux AS V3.0, IBM eServer pSeries on POWER5   Only Data agent
Red Hat Enterprise Linux AS V3.0, IBM eServer iSeries on POWER5   Only Data agent
UnitedLinux 1.0, IBM eServer xSeries                              Only Data agent
UnitedLinux 1.0, IBM eServer zSeries                              Only Data agent
SUSE LINUX Enterprise Server 8, IBM eServer pSeries on POWER4     Only Data agent
SUSE LINUX Enterprise Server 8, zSeries                           Only Data agent
SUSE LINUX Enterprise Server 9, IBM eServer xSeries, pSeries on
  POWER5, iSeries on POWER5, zSeries (Data agent only)            Data agent (all); Fabric agent (xSeries only)
IBM AIX 5.1 (32-bit)                                              YES, with AIX 5100-05 maintenance level
IBM AIX 5.1 (64-bit)                                              YES, in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5L (32-bit)                                               YES, in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5L (64-bit)                                               YES, in compatibility mode with AIX 5100-05 maintenance level
IBM AIX 5.2 (32-bit)                                              YES, with AIX 5200-02 maintenance level
IBM AIX 5.2 (64-bit)                                              YES, in compatibility mode with AIX 5200-02 maintenance level
IBM AIX 5.3 (32-bit)                                              YES, with AIX 5300-01 maintenance level and APAR IY70336
Solaris 8                                                         YES
Solaris 9                                                         YES
HP-UX 11 and 11i                                                  Only Data agent

Chapter 5. TotalStorage Productivity Center installation on AIX

121

5.4.1 Databases supported


At the time of writing, we used the information contained on the following Web site as a reference:
http://www-1.ibm.com/support/docview.wss?rs=1133&uid=ssg1S1002813

The Agent Manager repository is supported only on DB2 Enterprise Server Edition Version 8.2 with fix pack 7a or higher. Only one database instance is created for IBM TotalStorage Productivity Center on DB2; the default database name for the repository is TPCDB. The Agent Manager repository uses its own database, whose default name is IBMCDB.

Data agent monitored databases


The Data agent can monitor these databases:
- DB2 7.2 (32-bit only)
- DB2 8.1 (32- or 64-bit)
- DB2 8.2 (32- or 64-bit)
- Microsoft SQL Server 7.0
- Microsoft SQL Server 2000
- Oracle 8i
- Oracle 9i
- Oracle 10g

Sybase support is not included in TotalStorage Productivity Center V3.1.

Note: We recommend that you install a single instance of IBM DB2 UDB Enterprise Server Edition version 8.2 with fix pack 7a or higher as your repository for both the Agent Manager and IBM TotalStorage Productivity Center.

5.5 Preinstallation steps for AIX


The prerequisite components for IBM TotalStorage Productivity Center V3.1 are:
- IBM DB2 UDB Enterprise Server Edition V8.2 with fix pack 7a or higher
- Agent Manager 1.2, 1.2.2 or later (V1.1 is not supported with TotalStorage Productivity Center V3.1)

In addition, you must perform the steps outlined below.

5.5.1 Verify primary domain name servers


Before starting your installation, we recommend that you verify that a primary domain name system (DNS) suffix is set. This may require a computer restart. Follow these steps:
1. Log on to the system as a user with root-level authority.
2. Display the contents of the file /etc/resolv.conf. Two ways to do this are shown below: cat /etc/resolv.conf

or
lsnamsv -C


3. The file should contain a reference to one or more name servers, as well as the name of the DNS domain for the host. A sample file may look like this:
nameserver 1.2.3.4 domain mycompany.com

4. If the file does not contain those two references, edit the file to insert the IP address of a DNS server and the domain name to be used by the host. Save the file when you have finished your edits.
Alternatively, you can use SMIT to accomplish this. From the main SMIT menu, choose Communications Applications and Services → TCP/IP → Minimum Configuration & Startup. From the menu that appears, select the appropriate network interface. Then, in the NAMESERVER options, edit the properties for Internet ADDRESS (dotted decimal) and DOMAIN name. Change the START Now option to yes and press Enter. When the changes have been committed and the OK status appears, press F10 to exit SMIT.
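The check in steps 2 through 4 can also be scripted. The sketch below validates a resolv.conf-style file for the two required entries; a sample file stands in for /etc/resolv.conf, and the helper name is illustrative.

```shell
# Sketch: script the resolv.conf check from the steps above. A sample
# file stands in for /etc/resolv.conf; the helper name is illustrative.
cat > /tmp/resolv.sample <<'EOF'
nameserver 1.2.3.4
domain mycompany.com
EOF

check_resolv() {
    # $1 = path to a resolv.conf-style file
    if grep -q '^nameserver' "$1" && grep -q '^domain' "$1"; then
        echo "DNS configuration looks complete"
    else
        echo "DNS configuration is missing entries"
    fi
}

check_resolv /tmp/resolv.sample   # prints "DNS configuration looks complete"
```

On a live system, point the helper at /etc/resolv.conf instead of the sample file.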

5.5.2 User IDs, passwords, and groups


We have included worksheets in Appendix A, "Worksheets" on page 519 to help you keep track of the user IDs, passwords, and groups required to install and configure TotalStorage Productivity Center. We reprint two of those sheets here to show the user IDs, passwords, and groups we used in our environment.

Note: You should create user IDs, groups, and passwords that conform to the requirements and standards of your organization whenever possible. However, we recommend that you do not use any characters other than letters and numbers in the user IDs and passwords used by the TotalStorage Productivity Center components. Some of the user IDs, passwords, and groups we used in creating this redbook would not be considered secure enough for many production deployments.

Table 5-3 shows the password we use when generating the security certificates in 5.8, "Agent Manager installation for AIX" on page 146.
Table 5-3 Password used to lock the key file

Default key file name | Password
agentTrust.jks | tpctpc

Table 5-4 shows the user IDs and passwords we use for the various components of TotalStorage Productivity Center. You may wish to refer to this table, or the worksheets in the Appendix as you read the remainder of this chapter.
Table 5-4 User IDs and passwords used in our environment

Element | Default user ID | Our user ID | Our password
DB2 DAS user | db2admin | db2tpc | db2tpc
DB2 instance owner | db2inst1 | db2inst2 | passw0rd
DB2 fenced user | db2fenc1 | db2fenc1 | passw0rd
Resource Manager | manager (1) | manager (1) | password
Agent Manager | AgentMgr | AgentMgr | changeMe (2)
Common Agent | itcauser (1) | itcauser (1) | changeMe (2)


Element | Default user ID | Our user ID | Our password
WebSphere Administrator | (user defined) | tpcadmin | tpcadmin
Host authentication | (user defined) | tpctpc | tpctpc
TotalStorage Productivity Center administrator | (user defined) (3) | tpcadmin (3) | tpcadmin

(1) This user ID is created by the installer and cannot be changed.
(2) Despite what it says, we recommend that you do NOT change this password.
(3) This user ID is created manually.

Some of these user IDs and passwords will be created by the installers, but there is one user ID that we recommend you create yourself: the TotalStorage Productivity Center administrator. The next section discusses the creation of this user.

5.5.3 Create the TotalStorage Productivity Center user ID and group


Prior to installing TotalStorage Productivity Center, we recommend that you create a user ID that will be used to administer TotalStorage Productivity Center. This user, as well as any user that will be authorized to manage your storage environment, must be made a member of the adm AIX group. All defined AIX users on the TotalStorage Productivity Center server are automatically granted read-only privileges to use the graphical user interface (GUI), but only users in the adm group are authorized to use the GUI to make changes to the environment. Follow these steps to create the administrative user and configure the adm group:
1. If you have not already done so, log on to the system as a user with root-level authority.
2. Launch the SMIT interface by typing smit at a command prompt.
3. From the main SMIT menu, choose Security & Users → Users → Add a User.
4. Complete the Add a User form according to your organization's standards. When you have completed the form, press Enter to create the user. When the user has been created and the OK status appears, press F3 three times to return to the Security & Users menu.

Note: The user ID you create is subject to the following constraints:
- It must contain only lower-case alphabetic and numeric characters (a-z, 0-9) and be 8 characters or less.
- It must not start with a numeric digit.
- The password must be 8 characters or less.
- The password may contain upper- or lower-case alphabetic and numeric characters.

5. From the Security & Users menu, highlight Groups and press Enter.
6. From the Groups menu, highlight Change / Show Characteristics of a Group and press Enter.
7. In the Group NAME field, type adm and press Enter. The properties for the adm group will display.
8. Highlight the USER List field. You may type user IDs directly into this field, separating multiple user IDs with a comma. Alternatively, you can press F4 to display a list of users to add to the group. Move the cursor to the desired user ID and press F7 to select it. Multiple users can be selected by moving the cursor to each desired user ID and pressing F7 while the user ID is highlighted. When you have finished selecting user IDs to add to the group, press Enter. You will be returned to the Change Group Attributes menu.
9. Press Enter to commit the changes you have made to the adm group. When the OK status appears, press F10 to exit SMIT (F12 if using the graphical version).
10. Set the initial password for the user ID you just created by typing passwd username at a command prompt, where username is the user ID you just created. Follow the on-screen prompts to enter the new password two times.

Note: After DB2 is installed and you have applied the latest fix pack, you must add the root user to the group db2grp1. The group is created automatically during the installation of DB2. Adding the root user to this group allows the root user to source the instance owner's environment prior to installing TotalStorage Productivity Center. Refer to 5.7.2, "Add the root user to the DB2 instance group" on page 145 for details.
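The user ID constraints in the note above can be checked before you create the user. The sketch below is a hypothetical helper in plain POSIX shell; the user IDs passed to it are examples, not required names.

```shell
# Sketch: validate a candidate user ID against the constraints in the
# note above (lower-case letters and digits only, no leading digit,
# 8 characters or less). Pure POSIX shell; the IDs below are examples.
valid_tpc_userid() {
    case "$1" in
        [0-9]*)        echo "invalid: starts with a digit"; return 1 ;;
        *[!a-z0-9]*)   echo "invalid: illegal character"; return 1 ;;
        ?|??|???|????|?????|??????|???????|????????)
                       echo "valid"; return 0 ;;
        *)             echo "invalid: longer than 8 characters"; return 1 ;;
    esac
}

valid_tpc_userid tpcadmin   # prints "valid"
valid_tpc_userid 9admin     # prints "invalid: starts with a digit"
```

The case patterns are tried in order, so a leading digit is reported before the length is checked.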

5.5.4 Creating and sizing file systems and logical volumes


At a minimum, the installation of IBM TotalStorage Productivity Center V3.1 requires 500 MB of free space in /tmp and 1.3 GB of free space in /opt. In addition, if you are installing any of the products from a downloaded image, you will need additional space to house the image, plus space for the files extracted from it. For best performance, we recommend creating separate logical volumes and file systems to house the various components of TotalStorage Productivity Center. For Enterprise-class deployments, these should ideally be on separate disk devices. For example, you could create a volume for DB2 application code, another for DB2 databases, a third for DB2 temporary storage, and a fourth for DB2 log files (only if you choose database-managed storage). In addition, you could create a volume for Agent Manager application code and one for TotalStorage Productivity Center server code. For all deployments, we recommend creating separate file systems to house each component of IBM DB2, Agent Manager, and TotalStorage Productivity Center whenever possible.

Tip: If you performed a default installation of AIX, it is likely that you will need to extend the logical volumes and file systems that house /tmp and /opt prior to beginning the installation procedure for TotalStorage Productivity Center. For detailed instructions on creating or extending logical volumes and file systems in AIX, consult AIX 5L Version 5.3 System Management Guide: Operating Systems and Devices, SC23-4910.
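A quick way to confirm the minimums above before starting is a small df-based check. This is a sketch only; it assumes a df that accepts the POSIX -P flag and -m for 1 MB units (true on AIX 5L and Linux), and the thresholds follow the text: 500 MB in /tmp and 1.3 GB in /opt.

```shell
# Sketch: confirm the free-space minimums before starting the install.
# Assumes df accepts POSIX -P and -m (1 MB units), as on AIX 5L and Linux.
check_space() {
    # $1 = mount point, $2 = required free space in MB
    free=$(df -Pm "$1" 2>/dev/null | awk 'NR==2 {print $4}')
    free=${free:-0}
    if [ "$free" -ge "$2" ]; then
        echo "$1: OK ($free MB free, $2 MB required)"
    else
        echo "$1: insufficient space ($free MB free, $2 MB required)"
    fi
}

check_space /tmp 500
check_space /opt 1300
```

If a check reports insufficient space, extend the file system as described in the tip above before running the installers.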

5.5.5 Verify port availability


TotalStorage Productivity Center communicates on a variety of TCP ports. Table 5-5 on page 126 lists the ports used by the various components of TotalStorage Productivity Center and indicates the direction of communication. You will need to ensure that your servers and agents can communicate with each other on all of these ports, including traversing any intervening firewalls or routers.


Table 5-5 TCP ports used by TotalStorage Productivity Center components

Component | Port number | Inbound to or outbound from the server | Inbound to or outbound from the agent
Remote agent deployment for UNIX agents | 22 (SSH) | Outbound | Both
Agent Manager recovery service (optional) or Web GUI (optional) | 80 (HTTP) | Inbound | Outbound
Remote agent deployment for Windows agents | 139 (NetBIOS) | Outbound | Both
Remote agent deployment for UNIX agents | 512 (REXEC) | Outbound | Both
Remote agent deployment for UNIX agents | 514 (RSH) | Outbound | Both
Remote agent deployment for UNIX agents | 601 | Inbound | Outbound
Remote agent deployment for all agents; Graphical User Interface | 2078 | Inbound | Outbound
Remote agent deployment for all agents; Common Agent | 3000+ (high ports) | Inbound | Both
Remote agent deployment for all agents; Common Agent | 9510 | Outbound | Inbound
Agent Manager | 9511 | Inbound | Outbound
Agent Manager | 9512 | Both | Both
Agent Manager | 9513 | Inbound | Outbound
Common Agent (no access needed) | 9514 | Local to server | None
Common Agent (no access needed) | 9515 | Local to server | None
Data Server; Web GUI | 9549 | Both | Both
Device Server | 9550 | Both | Both
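Once the ports are open through any intervening firewalls, you can spot-check reachability from a server or agent host. The sketch below relies on the shell's /dev/tcp device (a bash feature); 127.0.0.1 is a stand-in for your real TotalStorage Productivity Center server host name.

```shell
# Sketch: spot-check reachability of server-side TPC ports from Table 5-5.
# Uses the shell's /dev/tcp device (bash); 127.0.0.1 is a placeholder
# for the real TPC server host.
probe_port() {
    # $1 = host, $2 = TCP port
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "$2 open"
    else
        echo "$2 closed or filtered"
    fi
}

for port in 9510 9511 9512 9513 9549 9550; do
    probe_port 127.0.0.1 "$port"
done
```

A port reported "closed or filtered" on a host that should be serving it usually points at a firewall or router between the server and the agent.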


5.6 DB2 installation for AIX


After you have verified that your system meets the minimum system requirements for installing TotalStorage Productivity Center, including adequate free disk space, the first component to install is IBM DB2 UDB Enterprise Server Edition V8.2. Once it is successfully installed, you will then install the DB2 fix pack. At a minimum, fix pack 7a is required. Follow the steps in the sections below to perform the installation of IBM DB2 UDB.

5.6.1 Accessing the installation media with CD


If you are accessing the installation media using a downloaded image, skip to the next section.
1. If you have not already done so, log on to the system as the root user.
2. Insert the CD into the CD-ROM or DVD-ROM drive on your system.
3. Create a mount point for the media. For example, if you wish to mount the disc at /cdrom, type the following command: mkdir /cdrom
4. Next, mount the disc in read-only mode at the mount point you created in the previous step. For example: mount -o ro /dev/cd0 /cdrom
5. Change to the newly mounted file system. For example: cd /cdrom
6. Proceed to the section entitled "Preparing the display", below.

5.6.2 Accessing the installation media with a downloaded image


1. If you have not already done so, log on to the system as the root user.
2. Create a temporary directory to contain the installation image and the compressed image files. This directory must be created on a file system which has approximately 2 GB of free space. Also, the directory must NOT contain a space anywhere in its path. For example, to create a directory in /usr called tarfiles, type the following command: mkdir /usr/tarfiles
3. Download or copy the installation image to the temporary directory you created.
4. Change to the directory where you have stored the image. For example: cd /usr/tarfiles
5. Extract the image files by following the instructions supplied at the repository from which you downloaded the image. This may involve running the tar or gunzip commands, or a combination of both. For example: tar -xvf db2.tar
6. Change to the installation directory which you extracted from the image. For example: cd ese.sbcsaix1
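The staging rules above (no spaces in the path, roughly 2 GB free) can be rehearsed with a stand-in archive. All paths and file names in this sketch are illustrative only; a dummy archive takes the place of the downloaded install image.

```shell
# Sketch: rehearse the staging rules above with a stand-in archive.
# All paths and file names here are illustrative only.
stage=/tmp/tarfiles.demo
mkdir -p "$stage"

# The installers require a path with no embedded spaces.
case "$stage" in
    *' '*) echo "error: staging path contains a space"; exit 1 ;;
esac

# Build a dummy archive in place of the downloaded install image.
echo demo > /tmp/demo.txt
tar -cf "$stage/image.tar" -C /tmp demo.txt

# Extract it exactly as you would the real image.
cd "$stage" && tar -xf image.tar && echo "extracted into $stage"
```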

5.6.3 Preparing the display


This version of the DB2 installer uses a graphical, Java-based interface. If you are installing with a local graphical display, proceed to the next section, 5.6.4, "Beginning the installation" on page 128. However, if you are installing from a remote terminal session, you must set up an X-Windows display prior to beginning the installation process. First, start your local X-Windows server application (examples are Hummingbird Exceed or Cygwin). After your local X-Windows server application is running, set the DISPLAY variable on your host. You must know the IP address of the system from which you intend to perform the installation. For example, if the IP address is 2.3.4.5, type the following command at the server's command prompt: export DISPLAY=2.3.4.5:0.0
You can verify that the X-Windows environment is properly set up by executing the following command on the host: xclock
If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-2:

Figure 5-2 Graphical clock display

5.6.4 Beginning the installation


Follow these steps to perform the installation of IBM DB2 UDB: 1. At the command prompt on the host, type the following command: ./db2setup The DB2 setup launchpad opens, and appears similar to the one shown in Figure 5-3.


Figure 5-3 DB2 Launchpad

2. Click Install products to begin the installation. A new window will display, similar to the one in Figure 5-4 which asks you which products you would like to install.

Figure 5-4 DB2 product selection

3. Select the option DB2 UDB Enterprise Server Edition and click Next. The DB2 setup wizard will now display, and will be similar to the one shown in Figure 5-5.


Figure 5-5 Welcome to the DB2 Setup wizard

4. Click Next. The license agreement opens, and is similar to the one shown in Figure 5-6.

Figure 5-6 DB2 Software License Agreement


5. You must click Accept, then click Next to proceed. The Installation Type window opens and is similar to the one shown in Figure 5-7 on page 131.

Figure 5-7 DB2 installation type

6. Select the Typical installation option, then click Next. The installation action window opens and is similar to the one shown in Figure 5-8 on page 132.


Figure 5-8 DB2 installation action

7. Select Install DB2 UDB Enterprise Server Edition on this computer and click Next. The DAS user window opens and is similar to the one shown in Figure 5-9 on page 133.


Figure 5-9 DB2 Administration Server user information

8. If you would like the installer to create a DB2 Administration Server (DAS) user ID, you must enter a unique user name for the DAS user in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank and check the Use default boxes, the system will assign a UID and GID for you. Optionally, you can check the Existing user button and enter the name of an existing user ID which will become the DAS user. When you have completed this form, click Next. The Set up a DB2 instance window opens and appears similar to the one shown in Figure 5-10 on page 134.


Figure 5-10 Set up a DB2 instance

9. Select the option Create a DB2 Instance - 32 bit and click Next. The instance partitioning window opens and appears similar to the one shown in Figure 5-11 on page 135.


Figure 5-11 DB2 instance partitioning

10.Select Single-partition instance and click Next. The DB2 instance owner window opens and is similar to the one shown in Figure 5-12 on page 136.


Figure 5-12 DB2 instance owner

11.If you would like the installer to create a DB2 instance owner user ID, you must enter a unique username for the instance owner in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank, and check the Use default boxes, the system will assign a UID and GID for you. Optionally, you can check the Existing user button, and enter the name of an existing user ID which will become the instance owner. After you have completed this form, click Next. The DB2 fenced user window opens and is similar to the one shown in Figure 5-13 on page 137.


Figure 5-13 DB2 fenced user

12.If you would like the installer to create a DB2 fenced user ID, you must enter a unique username for the fenced user in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank, and check the Use default boxes, the system will assign a UID and GID for you. Optionally, you can check the Existing user button, and enter the name of an existing user ID which will become the fenced user. When you have completed this form, click Next. The Prepare the DB2 tools catalog window opens and is similar to the one shown in Figure 5-14 on page 138.


Figure 5-14 DB2 tools catalog

13.Click Do not prepare the DB2 tools catalog on this computer, then click Next. The Administration contact window opens and is similar to the one shown in Figure 5-15 on page 139.


Figure 5-15 DB2 administration contact

14.Choose the options on this screen which pertain to your specific environment. If you already have DB2 servers in your environment, it can benefit you to use a contact list on an existing DB2 server. If so, select Remote and enter the name of the remote DB2 server from which to obtain the contact list. Otherwise, choose the default options of Local and Enable notification. The local host name is displayed in the Notification SMTP server field by default. You can change this option to suit your environment. After you have completed this form, click Next. The health monitor window opens, similar to the one shown in Figure 5-16 on page 140.


Figure 5-16 DB2 health monitor contact

15. Enter information in this screen in accordance with your particular environment. If you do not want to specify a contact, choose Defer this task until after installation is complete, then click Next. The Start copying files window opens, similar to the one shown in Figure 5-17 on page 141.


Figure 5-17 DB2 start copying files

16.Scroll through the window to review the installation summary. When you are ready to proceed, click Finish. The DB2 installer begins the product installation. A progress screen opens and appears similar to the one shown in Figure 5-18.

Figure 5-18 DB2 installation progress

17.When DB2 has been installed successfully, an installation summary screen opens, similar to the one shown in Figure 5-19 on page 142.


Figure 5-19 DB2 post installation steps

18.Review the information in the Post-install steps tab to see if there are any additional tasks you need to complete. You can also select the Status report tab. It appears similar to the one shown in Figure 5-20.

Figure 5-20 DB2 installation status report

19.Each of the items in the status report should indicate Success. Click Finish to close the installer.


5.6.5 Verifying the DB2 installation


To verify that DB2 was successfully installed, first change to the instance owner user ID by using the su command. For example, if your instance owner user ID is db2inst1, type the following at the host command prompt:
su - db2inst1

This will log you on to the system as the instance owner. Then, type the following commands:
db2level
exit

The output of the db2level command will appear similar to the text shown in Figure 5-21:

DB21085I Instance "db2inst1" uses "32" bits and DB2 code release "SQL08020" with level identifier "03010106". Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7". Product is installed at "/usr/opt/db2_08_01". Figure 5-21 Output of the db2level command
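The reported FixPak level can also be checked mechanically. The sketch below parses db2level-style text; the sample line mirrors Figure 5-21, and on a live system you would pipe the output of the real db2level command into the same sed expression instead.

```shell
# Sketch: extract and check the FixPak level from db2level-style output.
# The sample line mirrors Figure 5-21; on a live system, pipe the output
# of the real db2level command through the same sed expression.
sample='Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7".'
fixpak=$(printf '%s\n' "$sample" | sed -n 's/.*FixPak "\([0-9]*\)".*/\1/p')
if [ "$fixpak" -ge 7 ]; then
    echo "FixPak $fixpak meets the 7a-or-higher minimum"
else
    echo "FixPak $fixpak is below the minimum; apply fix pack 7a or higher"
fi
```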

5.6.6 Removing the CD from the server


If you accessed the installation media from the CD, you should now unmount the CD and remove it from the system. To do this, complete the following steps: 1. Type the following at a host command prompt:
umount /cdrom

2. Remove the CD from the drive by pressing the button on the front panel of the CD-ROM or DVD-ROM drive. This will eject the media tray. 3. Remove the CD from the media tray, and close the media tray by pressing the button again.

5.7 Installing the DB2 fix pack


TotalStorage Productivity Center requires you to install IBM DB2 UDB with fix pack 7a or higher. At the time of this writing, fix pack 10 was the latest available, and that is what we used during our testing. The procedure below is the one we followed for installing fix pack 10. You should always consult the README file for specific installation instructions for the version of the fix pack you are installing.

5.7.1 Obtaining and installing the latest DB2 fix pack


Complete the following steps to download and install the latest DB2 fix pack:
1. If you have not already done so, log in to your DB2 server as the root user.
2. Create a temporary directory to contain the fix pack image and the compressed image files. This directory must be created on a file system which has approximately 2 GB of free space. Also, the directory must NOT contain a space anywhere in its path. For example, to create a directory in /usr called tarfiles, type the following command: mkdir /usr/tarfiles


3. Download the latest IBM DB2 UDB fix pack from the IBM support FTP site. We downloaded fix pack 10 from the following location:
ftp://ftp.software.ibm.com/ps/products/db2/fixes2/english-us/db2aix5v8/fixpak/ FP10_U803920/FP10_U803920.tar.Z

4. Change to the directory where you stored the fix pack image. For example, if you downloaded the file to /usr/tarfiles, type the following command:
cd /usr/tarfiles

5. Extract the compressed image files. For version 10 of the fix pack, the command is:
gunzip -c FP10_U803920.tar.Z | tar -xvf -

6. Switch to the instance authority. For example, if your DB2 instance is db2inst1, type the following command:
su - db2inst1

7. Source the environment by issuing the following command:


. $HOME/sqllib/db2profile

8. Type the following commands to shut down the DB2 environment:


db2 force applications all
db2 terminate
db2stop
db2licd -end
$HOME/sqllib/bin/ipclean
exit

9. Switch to the DAS user authority. For example, if your DB2 DAS user is db2tpc, type the following command: su - db2tpc
10. Type the following commands to source the environment and shut down DB2 DAS:
. $HOME/das/dasprofile
db2admin stop
exit
11. As the root user, issue the following commands to unload shared libraries and disable the DB2 fault monitor:
/usr/sbin/slibclean
cd /usr/opt/db2_08_01/bin
./db2fmcu -d
./db2fm -i db2tpc -D
Where db2tpc is the user ID of your DB2 DAS user.
Note: The location listed above is the default for a DB2 installation. However, if you elected to install DB2 at another location, you should change into that directory structure instead.
12. Change to the directory which was created automatically when the fix pack files were uncompressed. For version 10 of the fix pack, the directory is named fixpak.s050811. To do this, type the following command: cd fixpak.s050811
13. Install the fix pack by issuing the following command: ./installFixPak -y


14. After the fix pack has been successfully installed, you must bind the database instance to the updated code. To do this, you must issue the db2iupdt command. For example, if your instance name is db2inst1 and you installed DB2 in the default location, type the following command: /usr/opt/db2_08_01/instance/db2iupdt db2inst1
15. Next, you must update the DB2 DAS. For example, if your DB2 DAS user ID is db2tpc, type the following command: /usr/opt/db2_08_01/instance/dasupdt db2tpc
16. Next, you must update the DB2 instance owner's user profile to change the number of shared memory segments allowed for a process. To do this, edit the userprofile file located in the sqllib directory under the instance owner's home directory. For example, if your instance is db2inst1, type the following command to change to that directory: cd /home/db2inst1/sqllib
17. Then, edit the userprofile file contained in that directory. Add the following lines to the file, then save the file:
EXTSHM=ON
export EXTSHM
db2set DB2ENVLIST=EXTSHM
18. Next, you must restart DB2. To do this, switch to the instance authority. For example, if your DB2 instance is db2inst1, type the following command: su - db2inst1
19. Source the environment by issuing the following command: . $HOME/sqllib/db2profile
20. Type the following commands to start the instance and exit from the instance authority:
db2start
exit
21. Finally, you must log in as the DAS user and restart DB2 DAS. To do this, switch to the DAS user authority. For example, if your DB2 DAS user is db2tpc, type the following command: su - db2tpc
22. Type the following commands to source the environment and start DB2 DAS:
. $HOME/das/dasprofile
db2admin start
exit

5.7.2 Add the root user to the DB2 instance group


You must add the root user to the group you created when you created the database instance. If you used the default group ID, the group name is db2grp1. This step is necessary in order to perform the TotalStorage Productivity Center installation. Follow these steps to add the root user to the instance group:
1. Type the following at a command prompt: smit group
The SMIT interface will display, and will show the Groups menu.
2. Select the option Change / Show Characteristics of a Group.

3. In the Group NAME field, enter the name of the group to be modified. If you used the default group ID, enter the name db2grp1 and press Enter. 4. Highlight the USER list field and press F4. The USER list menu appears. 5. Highlight the root user ID and press F7 to select it. 6. Highlight the DB2 instance owner user ID and press F7 to select it. For example, select the user ID db2inst1. Then press Enter. The Change Group Attributes screen will reappear. Ensure that both the root user and the DB2 instance owner user ID appear in the USER list field, then press Enter. 7. When the OK status appears, press F10 to exit SMIT (F12 if using the graphical version).
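After making the change, you can confirm the membership from the command line. This sketch assumes the default group name db2grp1; the id -Gn command is available on both AIX and Linux.

```shell
# Sketch: verify that a user belongs to the DB2 instance group before
# starting the TPC installer. Assumes the default group name db2grp1.
in_group() {
    # $1 = user name, $2 = group name
    if id -Gn "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"; then
        echo "$1 is a member of $2"
    else
        echo "$1 is NOT a member of $2 - add it before installing"
    fi
}

in_group root db2grp1
```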

5.8 Agent Manager installation for AIX


After you have completed the installation of IBM DB2 UDB, and applied the latest fix pack, then you can install Agent Manager. For small and medium-sized deployments, Agent Manager should be installed on the same server as DB2. For Enterprise deployments, Agent Manager can be installed on a separate server if desired. The only change to the install procedure will be connecting to a remote DB2 server, instead of a local one. If you are upgrading from TotalStorage Productivity Center V2.3, you do not need to reinstall Agent Manager. Agent Manager 1.2 will work and is supported with IBM TotalStorage Productivity Center V3.1. By default, the Agent Manager installer will also install IBM WebSphere Application Server Express, V5.0 (WebSphere Express), unless it finds an existing copy of the full version of WebSphere Application Server (WAS) V5.1.1.3 or higher. If the installer does find an existing copy of WAS V5.1.1.3 or higher, the existing WAS installation will be used for Agent Manager.

5.8.1 Accessing the installation media using CD


If you are accessing the installation media using a downloaded image, skip to 5.8.2, "Accessing the installation media using a downloaded image" on page 146.
1. If you have not already done so, log on to the system as the root user.
2. Insert the CD into the CD-ROM or DVD-ROM drive on your system.
3. If you have not already done so, create a mount point for the media. For example, if you wish to mount the disc at /cdrom, type the command: mkdir /cdrom
4. Next, mount the disc in read-only mode at the mount point you created in the previous step. For example: mount -o ro /dev/cd0 /cdrom
5. Change to the newly mounted file system. For example: cd /cdrom
6. Proceed to the section entitled "Preparing the display".

5.8.2 Accessing the installation media using a downloaded image


1. If you have not already done so, log on to the system as the root user. 2. Create a temporary directory to contain the installation image and the compressed image files. This directory must be created on a file system which has approximately 2 GB of free space. Also, the directory must NOT contain a space anywhere in its path. For example, to create a directory in /usr called tarfiles, type the following command: mkdir /usr/tarfiles


3. Download or copy the installation image to the temporary directory you created.
4. Change to the directory where you have stored the image. For example: cd /usr/tarfiles
5. Extract the image files by following the instructions supplied at the repository from which you downloaded the image. This may involve running the tar or gunzip commands, or a combination of both. For example: tar -xvf agentmanager.tar
6. Change to the installation directory that was extracted from the image. For example: cd EmbeddedInstaller

5.8.3 Preparing the display


This version of the Agent Manager installer uses a graphical, Java-based interface. If you are installing with a local graphical display, proceed to 5.8.4, Beginning the installation on page 147. However, if you are installing from a remote terminal session, you must set up an X-Windows display before beginning the installation process.

First, start your local X-Windows server application, for example, Hummingbird Exceed or Cygwin. Once it is running, set the DISPLAY variable on the host. You must know the IP address of the machine from which you intend to perform the installation. For example, if your IP address is 2.3.4.5, type the following command at the server's command prompt:

export DISPLAY=2.3.4.5:0.0

You can verify that the X-Windows environment is properly set up by executing the xclock command on the host. If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-22.

Figure 5-22 Graphical clock display
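The DISPLAY setup above can be condensed as follows. The IP address 2.3.4.5 is the example value from the text; substitute the address of the workstation running your X-Windows server.

```shell
#!/bin/sh
# Build and export the DISPLAY value for a remote X-Windows session.
# 2.3.4.5 is the example workstation IP from the text.
MYIP=2.3.4.5
DISPLAY="${MYIP}:0.0"
export DISPLAY
echo "$DISPLAY"
# With DISPLAY exported, running xclock should pop up the clock in
# Figure 5-22 (not run here, since it needs a live X server).
```

The :0.0 suffix selects display 0, screen 0 on the workstation, which is the usual default for a single X server.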

5.8.4 Beginning the installation


Follow these steps to perform the installation of Agent Manager:
1. At the command prompt on the host, type the following command: ./setupAix.bin
2. The Agent Manager setup installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-23 on page 148.

Chapter 5. TotalStorage Productivity Center installation on AIX


Figure 5-23 Agent Manager installation language selection

3. Select the language you wish to use for the installation from the drop-down box, then click OK. The Agent Manager setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-24.

Figure 5-24 Agent Manager license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation location window displays, and appears similar to the one shown in Figure 5-25 on page 149.


Figure 5-25 Agent Manager installation location

5. The default location for installing Agent Manager is /opt/IBM/AgentManager. You may choose to change this location to suit your requirements. Once you have entered the installation location path in the Directory Name field, click Next. The installing panel displays, and appears similar to the one shown in Figure 5-26.

Figure 5-26 Agent Manager installation progress

Note: At this point, if you have an existing version of Agent Manager installed, it will be detected and you will be given a choice to upgrade to the latest version. If your Agent Manager version is older than 1.2, you must upgrade it. If you wish to upgrade, click Next. If not, click Cancel. Either way, the Agent Manager installer will continue.


6. The Agent Manager Registry Information window displays, and appears similar to the one shown in Figure 5-27.

Figure 5-27 Agent Manager registry information

7. Click the option DB2 Universal Database at the top of the screen. The Database Name or Directory field contains the name of the database which Agent Manager will use. The default database name is IBMCDB. We recommend that you do not change this name. Then, select the Type of Database Connection used in your environment. If you are installing Agent Manager on the same server on which you installed DB2, then choose Local database. If you are installing Agent Manager on a separate server, then choose Remote database. Once you have made your selections, click Next. The Database Connection Information window displays, and appears similar to the one shown in Figure 5-28 on page 151.


Figure 5-28 Agent Manager database connection information

8. The Database Software Directory is the sqllib directory under the home directory of the instance owner. For example, if your instance name is db2inst1, then you would enter /home/db2inst1/sqllib. If you are installing Agent Manager on the same server as you installed DB2, you can also click Browse to navigate the directory structure and select the directory. Enter the user ID for the instance owner in the Database User Name field. Enter the password for the instance owner in the Database Password field. If you are installing Agent Manager on a separate machine from where you installed DB2, you must enter the host name of the DB2 server in the Host Name of the Database Server field, and the TCP port number in the Database Port field. The default port number for DB2 databases is 50000. Once you have completed this form, click Next. The WebSphere Application Server Information window displays, and appears similar to the one shown in Figure 5-29 on page 152.


Figure 5-29 Agent Manager WebSphere Application Server information

9. Enter the fully qualified host name in the Host Name or Fully Qualified Host Name field. We recommend that you do not enter an IP address in this field. If you do, an information panel will display asking you to confirm the use of an IP address instead of a host name.

The default name AgentManager is entered in the Application Server Name for Agent Manager field. We recommend that you do not change this from the default setting.

The defaults of 9510, 9511, and 9512 are entered in the Registration Port, Secure Port, and Public Port and Secondary Port for the Agent Recovery Service fields, respectively. We recommend that you do not change these from their default settings.

The Do not use port 80 for the agent recovery service option is unchecked by default, which means that port 80 will be used for this service. We recommend that you check this box; see the note below regarding this option.

The Start the agent manager after the installation is complete and Autostart the agent manager each time this system restarts options are checked by default. We recommend that you leave these options checked.

Note: By default, the Agent Manager agent recovery service will use port 80 for its communication. However, if your server functions as a Web server, or if you will install the IBM TotalStorage Productivity Center Graphical User Interface (GUI) Web access component, you should check the box labeled Do not use port 80 for the agent recovery service. This will avoid conflicts with Web services.

Once you have completed this form, click Next. The Security Certificates screen displays, and appears similar to the one shown in Figure 5-30 on page 153.


Figure 5-30 Agent Manager security certificates

10. The default option is Create certificates for this installation. We recommend that you accept the default.

Tip: Earlier versions of TotalStorage Productivity Center encountered difficulty creating and distributing security certificates. In those versions, we recommended using the demonstration certificates. However, TotalStorage Productivity Center V3.1 has resolved these issues, and can create and distribute the certificates correctly. We have therefore updated our recommendation to create certificates during the installation process.

Click Next after you have made your selection. If you chose the option to create certificates, the Security Certificate Settings window displays, and appears similar to the one shown in Figure 5-31 on page 154.


Figure 5-31 Agent Manager security certificate settings

11. The Certificate Authority Name field contains TivoliAgentManagerCA by default. This name must be unique in your environment. We recommend that you use the default setting unless it has already been used in your environment. The Security Domain field is automatically populated with your DNS information. We recommend that you accept this default setting. The Certificate Authority Password field is blank by default. If you leave it blank, a random password will be generated by the system. We recommend that you enter a password in this field, in case you need to unlock the certificate files in the future. The default Agent Registration Password is changeMe. Despite its name, we recommend that you do not change it. Once you have completed this form, click Next. The User Input Summary screen displays, and appears similar to the one shown in Figure 5-32 on page 155.


Figure 5-32 Agent Manager user input summary

12.Review the settings displayed on this screen. If you need to make changes, click Back. If you are ready to proceed with the installation, click Next. When you do, the Summary Information screen displays, and appears similar to the one shown in Figure 5-33.

Figure 5-33 Agent Manager summary information

13.When you have reviewed the summary information, click Next. The installation progress screen displays, and appears similar to the one shown in Figure 5-34 on page 156.


Figure 5-34 Agent Manager installation progress

14.When the installation has finished, the installation results screen displays, and is similar to the one shown in Figure 5-35.

Figure 5-35 Agent Manager installation results

15.When you have reviewed the results, click Next. The installation summary screen displays, and is similar to the one shown in Figure 5-36 on page 157.


Figure 5-36 Agent Manager installation completed

16.Click Finish to close the installation wizard.

5.8.5 Removing the CD from the server


If you accessed the installation media using a CD, you should now unmount the CD and remove it from the system. To do this, complete the following steps:
1. Type the following at a host command prompt: umount /cdrom
2. Press the button on the front panel of the CD-ROM or DVD-ROM drive to eject the media tray.
3. Remove the CD from the media tray, and close the media tray by pressing the button again.

5.9 Installing IBM TotalStorage Productivity Center for AIX


After you have completed the installation of DB2 and Agent Manager, then you can install TotalStorage Productivity Center. In this chapter we document the Custom install. We show how you can select each component individually. While we do recommend the Custom install, you can select all components to be installed in one invocation of the installer.

5.9.1 Order of component installation


After installing the prerequisites, install the components in the following order:
1. DB Schema
2. Data Server
3. Device Server
4. GUI and CLI
5. Data agent and Device agent

Tip: We recommend that you install the Database Schema first, and then install Data Server and Device Server in separate steps. If you install all the components in one step and any part of the installation fails for any reason (for example, space or passwords), the installation suspends and rolls back, uninstalling all the previously installed components.

5.9.2 Accessing the installation media with a CD


If you are accessing the installation media with a downloaded image, skip to the next section. Follow these steps to perform the installation of IBM TotalStorage Productivity Center V3.1:
1. If you have not already done so, log on to the DB2 system as the root user.
2. Insert the CD into the CD-ROM or DVD-ROM drive on your system.
3. If you have not already done so, create a mount point for the media. For example, to mount the disc at /cdrom, type the mkdir /cdrom command.
4. Mount the disc in read-only mode at the mount point you created in the previous step, for example, mount -o ro /dev/cd0 /cdrom.
5. Change to the newly mounted file system, for example, cd /cdrom.
6. Proceed to 5.9.4, Preparing the display on page 158.

5.9.3 Accessing the installation media with a downloaded image


Follow these steps to install from a downloaded image:
1. If you have not already done so, log on to the DB2 system as the root user.
2. Create a temporary directory to contain the installation image and the compressed image files. This directory must be created on a file system which has approximately 2 GB of free space. Also, the directory must not contain a space anywhere in its path. For example, to create a directory in /usr called tarfiles, type the following command: mkdir /usr/tarfiles
3. Download or copy the installation image to the temporary directory you created.
4. Change to the directory where you have stored the image, for example: cd /usr/tarfiles
5. Extract the image files by following the instructions supplied at the repository from which you downloaded the image. This might involve running the tar or gunzip commands, or a combination of both, for example, tar -xvf TPC_3.1.0_aix_disk1.tar.

5.9.4 Preparing the display


This version of the TotalStorage Productivity Center installer uses a graphical, Java-based interface. If you are installing with a local graphical display, proceed to 5.9.5, Sourcing the environment on page 159. However, if you are installing from a remote terminal session, you must set up an X-Windows display before beginning the installation process. First, start your local X-Windows server application. Examples are Hummingbird Exceed or Cygwin.


When your local X-Windows server application is running, set the DISPLAY variable on the host. You must know the IP address of the machine from which you intend to perform the installation. For example, if your IP address is 2.3.4.5, type the following command at the server's command prompt:
export DISPLAY=2.3.4.5:0.0

You can verify that the X-Windows environment is properly set up by executing the xclock command on the host. If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-37.

Figure 5-37 Graphical clock display

5.9.5 Sourcing the environment


If you will be installing TotalStorage Productivity Center on the same server on which you installed DB2, you need to source the DB2 environment. For example, if your DB2 instance is named db2inst1, type the following command:
. /home/db2inst1/sqllib/db2profile

Note: You must install TotalStorage Productivity Center as the root user. However, you must still source the environment with the instance owner.
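The leading dot in the command above is what makes sourcing work: the profile runs in the current shell, so the environment it exports persists for the installer. A minimal illustration, using a hypothetical stand-in file rather than the real db2profile:

```shell
#!/bin/sh
# Why the leading dot matters: ". file" runs the script in the current
# shell, so variables it exports persist afterward. The file below is a
# hypothetical stand-in for /home/db2inst1/sqllib/db2profile.
cat > /tmp/demo_db2profile <<'EOF'
DB2INSTANCE=db2inst1
export DB2INSTANCE
EOF

. /tmp/demo_db2profile
echo "active instance: $DB2INSTANCE"
```

Running the profile without the dot (as a child process) would set DB2INSTANCE only in the child, and the installer started afterward would not see it.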

5.9.6 Assigning file system ownership


If you created file systems to contain DB2 tables and temporary space, you must change the owner of those file systems to the DB2 instance owner so that the appropriate database files can be created. For example, if you created the file system /dbfiles to house DB2 tables, and created the file system /dbtemp to house DB2 temporary storage, and your instance owner is db2inst1, issue the following command on your DB2 server:
chown db2inst1 /dbfiles /dbtemp
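A quick way to confirm the ownership change took effect is to read the owner column back from ls -ld. This sketch uses temporary stand-in directories and the current user instead of /dbfiles, /dbtemp, and db2inst1, so it runs without root:

```shell
#!/bin/sh
# Sketch of the ownership change plus a verification step. The paths
# and owner below are stand-ins for /dbfiles, /dbtemp, and db2inst1.
OWNER=$(id -un)                 # stand-in for db2inst1
mkdir -p /tmp/dbfiles /tmp/dbtemp
chown "$OWNER" /tmp/dbfiles /tmp/dbtemp

# Verify: the third column of ls -ld is the owning user.
actual=$(ls -ld /tmp/dbfiles | awk '{print $3}')
[ "$actual" = "$OWNER" ] && echo "ownership ok: $actual"
```

On the real server, substitute db2inst1 and the actual file system paths, and run the commands as root.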

5.9.7 Installing the database schema


Follow these steps to perform the installation of the database schema for TotalStorage Productivity Center:
1. At the command prompt on the DB2 host, type the ./setup.sh command.
2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-38 on page 160.


Figure 5-38 TotalStorage Productivity Center installation language selection

3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-39.

Figure 5-39 TotalStorage Productivity Center license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-40 on page 161.


Figure 5-40 TotalStorage Productivity Center installation types

5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. Once you have filled out this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-41.

Figure 5-41 TotalStorage Productivity Center component selection

6. Deselect all the options except Create database schema. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-42 on page 162.


Figure 5-42 TotalStorage Productivity Center database administrator

7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one shown in Figure 5-43.

Figure 5-43 TotalStorage Productivity Center database schema information

8. You must enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. Then choose the database connection type to use for TotalStorage Productivity Center:
- If you are upgrading from a current version of TotalStorage Productivity Center, choose the option Use local database. Enter the port, database name, full path, and instance name in the appropriate fields.
- If this is a new installation (not an upgrade), choose the option Create local database. The default database name is TPCDB. We recommend that you do not change this name.
After you have selected the database connection type that suits your requirements, click Schema creation details. The schema details window displays, and appears similar to the one shown in Figure 5-44 on page 163.


Figure 5-44 TotalStorage Productivity Center database schema details

9. The default entry in the Schema name field is TPC. We recommend that you do not change this from the default setting. You then have the option of placing the various table spaces in different directories or file systems, and of setting an initial database size. For all but the largest Enterprise deployments, database sizes of 200 MB should be sufficient for initial creation. For best performance in medium and Enterprise deployments, you should consider placing the table spaces on separate file systems and on separate disk devices. If you have already created these file systems, enter their paths in the Normal, Key, Big, and Temp fields, or click the Browse button to search for them. The Normal, Key, and Big table spaces can be housed in the same file system. The Temp table space should be housed on a separate file system for best performance. The differences between choosing System managed (SMS) and Database managed (DMS) containers are discussed in 11.1, Selecting an SMS or DMS tablespace on page 510. If you select Database managed, you can enter a path in which to house log files, and an initial size. Log files should be housed on a separate file system from the table spaces for best performance. For all but the largest Enterprise deployments, an initial size of 20 MB should suffice. After you have filled out the form, click OK. You will be returned to the database schema information window. In that window, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-45 on page 164.


Figure 5-45 TotalStorage Productivity Center install summary

10.Click Install to begin the database schema installation. A progress screen opens, and is similar to the one shown in Figure 5-46.

Figure 5-46 TotalStorage Productivity Center installation progress

11.When the installation is complete, the installation results window displays, and appears similar to the one shown in Figure 5-47 on page 165.


Figure 5-47 TotalStorage Productivity Center installation results

12.Click Finish to exit the installer.

5.9.8 Installing Data server


When you have finished creating the database schema, you are ready to install Data server. Follow these steps to complete the installation process for Data server:
1. At the command prompt on the Data server host, in the installation media directory, type the ./setup.sh command.
2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-48.

Figure 5-48 TotalStorage Productivity Center installation language selection

3. Select the language you want to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-49 on page 166.


Figure 5-49 TotalStorage Productivity Center license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-50.

Figure 5-50 TotalStorage Productivity Center installation types

5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. Once you have filled out this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-51 on page 167.


Figure 5-51 TotalStorage Productivity Center component selection

6. Deselect all the options except Data server. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-52.

Figure 5-52 TotalStorage Productivity Center database administrator

7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one found in Figure 5-53 on page 168.


Figure 5-53 Data server database schema information

8. Enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. If you are installing Data server on the same machine on which DB2 is installed, check the option Use local database. The local database information should be populated automatically. Highlight the local database to be used. If you are installing Data server on a separate machine, check the option Use remote database. The default database name is TPCDB, but if you created a database with a different name, enter it here. Enter the DB2 server's host name in the Host name field. Enter the communication port number in the Port field. The default port number for DB2 is 50000. You will also need to enter the path to the JDBC driver. If your instance name is db2inst1, then the default path is /home/db2inst1/sqllib/java/db2jcc.jar. You can enter the path directly, or click Browse to search for it. When you have selected your database connection information and completed the form, click Next. The Data server information window displays, and appears similar to the one shown in Figure 5-54.

Figure 5-54 Data server information
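The JDBC driver path in step 8 is derived from the instance owner's home directory. This sketch builds and checks that path; db2inst1 and the sqllib/java location follow the example in the text, but verify them on your own server:

```shell
#!/bin/sh
# Derive and check the DB2 JDBC driver path for the Data server form.
# db2inst1 is the example instance owner from the text; adjust
# INSTANCE_HOME for your environment.
INSTANCE_HOME=${INSTANCE_HOME:-/home/db2inst1}
JDBC_PATH="$INSTANCE_HOME/sqllib/java/db2jcc.jar"
echo "JDBC driver path: $JDBC_PATH"

if [ -f "$JDBC_PATH" ]; then
    echo "driver present"
else
    echo "driver not found -- locate db2jcc.jar and enter its path manually"
fi
```

If the file is not at this location, search under the instance owner's sqllib directory for db2jcc.jar and enter that path in the installer instead.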


9. If it is not already displayed, you must enter the fully qualified host name of the server on which you are installing TotalStorage Productivity Center in the Data server name field. The default port of 9549 is listed in the Data server port field. You may change this to suit your requirements, but we recommend that you do not change it. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. If you want to perform advanced security role mapping, click Security roles. If you do, the security roles mapping screen displays, and appears similar to the one shown in Figure 5-55.

Figure 5-55 Data server security roles mapping

You can optionally enter group names to map to each specific role in TotalStorage Productivity Center. This allows for more customized control over your management environment. When you have finished filling out the form, click OK. The Data server information screen will display. If you want to discover Network Attached Storage (NAS) devices in your environment, click NAS discovery. The NAS discovery options window will display, and will appear similar to the one shown in Figure 5-56.

Figure 5-56 Data server NAS discovery options


You can add login information in the User name and Password fields in order to attach to Network Appliance storage devices. You can also add Simple Network Management Protocol (SNMP) community strings to search during the discovery process. To add an SNMP community, enter the community name in the SNMP community field and click Add. When you have finished filling out the form, click OK. The Data server information screen will display. When you have finished making your selections, click Next. The Agent Manager information panel displays, and appears similar to the one shown in Figure 5-57.

Figure 5-57 Agent Manager information

10. Enter the fully qualified host name of the Agent Manager server in the Hostname or IP address field. The Port (secured) and Port (Public) fields will be populated with the defaults of 9511 and 9513, respectively. We recommend that you do not change them. In the User ID field, enter manager. In the Password field for that ID, enter password. These are the defaults and cannot be changed. The default agent registration password is changeMe. Enter it, or the agent registration password you created during the Agent Manager installation, into the final Password field. When you have finished filling out the form, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-58.

Figure 5-58 Data server installation summary


11.Click Install. Data server installation begins. A progress window displays, and appears similar to the one shown in Figure 5-59.

Figure 5-59 Data server installation progress

12.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-60.

Figure 5-60 Data server installation results

13.Click Finish to close the installer.

5.9.9 Installing Device server


Follow these steps to complete the installation process for Device server:
1. At the command prompt on the Device server host, in the installation media directory, type the ./setup.sh command.
2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-61 on page 172.


Figure 5-61 TotalStorage Productivity Center installation language selection

3. Select the language you want to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-62.

Figure 5-62 TotalStorage Productivity Center license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-63 on page 173.


Figure 5-63 TotalStorage Productivity Center installation types

5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. After you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-64.

Figure 5-64 TotalStorage Productivity Center component selection

6. Deselect all the options except Device server. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-65 on page 174.


Figure 5-65 TotalStorage Productivity Center database administrator

7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one found in Figure 5-66.

Figure 5-66 Device server database schema information

8. Enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. If you are installing Device server on the same machine on which DB2 is installed, check the option Use local database. The local database information should be populated automatically. Highlight the local database to be used. If you are installing Device server on a separate machine, check the option Use remote database. The default database name is TPCDB, but if you created a database with a different name, enter it here. Enter the DB2 server's host name in the Host name field. Enter the communication port number in the Port field. The default port number for DB2 is 50000. You will also need to enter the path to the JDBC driver. If your instance name is db2inst1, then the default path is /home/db2inst1/sqllib/java/db2jcc.jar. You can enter the path directly, or click Browse to search for it.

Once you have selected your database connection information and filled out the form, click Next. The Device server information window displays, and appears similar to the one shown in Figure 5-67.

Figure 5-67 Device server information

9. If it is not already displayed, you must enter the fully qualified host name of the server on which you are installing TotalStorage Productivity Center in the Device server name field. The default port of 9550 is listed in the Device server port field. You may change this to suit your requirements, but we recommend that you do not change it. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. Enter a password in the Host authentication password field that will be used for fabric agents to communicate with the device server. The password should contain eight characters or less, and should contain only alphanumeric characters. The password is case-sensitive. Enter a user ID and password in the WAS admin ID and Password fields. This user ID and password are used only during the installation process by the Device server so that it can communicate with WebSphere. If you did not have a previous version of WebSphere installed, these entries can be anything. However, if you are installing Device server with an existing version of WebSphere, these must be the authentication credentials used by the installed version of WebSphere. If you want to perform advanced security role-mapping, click Security roles. If you do, the security roles mapping screen displays, and appears similar to the one shown in Figure 5-68 on page 176.

Chapter 5. TotalStorage Productivity Center installation on AIX


Figure 5-68 Device server security roles mapping

You can optionally enter group names to map to each specific role in TotalStorage Productivity Center. This allows for more customized control over your management environment. When you have finished filling out the form, click OK. The Device server information screen will display. When you have finished making your selections, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-69.

Figure 5-69 Device server installation summary
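Before committing to a host authentication password in step 9, its constraints (eight characters or less, alphanumeric only) can be pre-checked with a small shell function — a hypothetical helper for illustration, not part of the product:

```shell
# Hypothetical validator for the host authentication password rules
# described above: 1-8 characters, letters and digits only.
valid_hostauth_pw() {
    case "$1" in
        ""|*[!A-Za-z0-9]*) return 1 ;;   # empty or non-alphanumeric
    esac
    [ "${#1}" -le 8 ]                     # at most eight characters
}

valid_hostauth_pw tpc4host && echo "tpc4host: acceptable"
valid_hostauth_pw 'tpc-pass' || echo "tpc-pass: rejected (hyphen)"
```

Remember that the password itself is case-sensitive, so record it exactly as entered.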

10.Click Install. Device server installation begins. A progress window displays, and appears similar to the one shown in Figure 5-70 on page 177.


Figure 5-70 Device server installation progress

11.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-71.

Figure 5-71 Device server installation results

12.Click Finish to close the installer.

5.9.10 Installing agents


TotalStorage Productivity Center Data agents and Fabric agents use the same installation instructions. You can install either agent (or both agents together) using these directions. Follow these steps to complete the installation process for the agents.

1. At the command prompt on the agent host, in the installation media directory, type the ./setup.sh command.

2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-72 on page 178.


Figure 5-72 TotalStorage Productivity Center installation language selection

3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-73.

Figure 5-73 TotalStorage Productivity Center license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-74 on page 179.


Figure 5-74 TotalStorage Productivity Center installation types

5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. After you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-75.

Figure 5-75 TotalStorage Productivity Center component selection

6. Deselect all the options except the agent(s) you wish to install. For example, you can leave Fabric agent checked. When you have finished selecting the agent(s) to install, click Next. The Data server and Device server information window displays, and appears similar to the one shown in Figure 5-76 on page 180.


Figure 5-76 Data server and Device server information

7. If they are not already displayed, you must enter the fully qualified host name of the Data server or the Device server in the appropriate fields, depending on which agents you are installing. The default ports of 9549 for Data server and 9550 for Device server are listed in the Data Server port and Device server port fields, respectively. You may change these to suit your requirements, but we recommend that you do not change them. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. If you are installing Fabric agent, you will need to enter the password used for authenticating with the Device server in the Host authentication password field. If you are installing Data agent, you can click Data agent options. If you do, the data agent options window displays, and is similar to the one shown in Figure 5-77.

Figure 5-77 Data agent options

There are two options: Agent should perform a scan when first installed, and Agent may run scripts sent by server. We recommend that you leave both options checked. However, if you are installing into a production environment and the agent host is heavily utilized, you may elect to uncheck the option Agent should perform a scan when first installed. This will use fewer resources on the agent host during the installation process.


The agent will not collect statistics about itself until a scan is scheduled by the TotalStorage Productivity Center administrator. When you have completed the options on this form click OK. The Data server and Device server information window will display. When you have finished making your selections, click Next. The common agent selection window displays, and appears similar to the one shown in Figure 5-78.

Figure 5-78 Common agent selection

8. If you have an existing Common agent installed, you can select the option Select an existing common agent from the list below. Highlight the common agent you want to use. If not, select the option Install the new common agent at the location listed below. You can change the installation path to suit your requirements by entering the path directly in the path field, or by clicking Browse. When you have made your selection, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-79.

Figure 5-79 Agent installation summary

9. Click Install. Agent installation begins. A progress window displays, and appears similar to the one shown in Figure 5-80 on page 182.


Figure 5-80 Agent installation progress

10.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-81.

Figure 5-81 Agent installation results

11.Click Finish to close the installer.

5.9.11 Installing the Java Graphical User Interface and the Command Line Interface
The TotalStorage Productivity Center Java Graphical User Interface (GUI) and Command Line Interface (CLI) use the same installation instructions. You can install either interface (or both interfaces together) using these directions. You can install the interfaces on any of the TotalStorage Productivity Center servers, management consoles, or workstations. Follow these steps to complete the installation process for the interfaces.

1. At the command prompt on the interface host, in the installation media directory, type the following command:
./setup.sh


2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-82.

Figure 5-82 TotalStorage Productivity Center installation language selection

3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-83.

Figure 5-83 TotalStorage Productivity Center license agreement

4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-84 on page 184.


Figure 5-84 TotalStorage Productivity Center installation types

5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. When you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-85.

Figure 5-85 TotalStorage Productivity Center component selection

6. Deselect all the options except the interfaces you want to install. For example, you can leave GUI checked. When you have finished selecting the interfaces to install, click Next. The Data server and Device server information window displays, and appears similar to the one shown in Figure 5-86 on page 185.


Figure 5-86 Data server and Device server information

7. If they are not already displayed, you must enter the fully qualified host name of the Data server and/or the Device server in the appropriate fields. The default ports of 9549 for the Data server and 9550 for the Device server are listed in the Data Server port and Device server port fields, respectively. You may change these to suit your requirements, but we recommend that you do not change them. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. You will also need to enter the password used for authenticating with the Device server in the Host authentication password field. When you have finished making your selections, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-87.

Figure 5-87 Interface installation summary

8. Click Install. Interface installation begins. A progress window displays, and appears similar to the one shown in Figure 5-88 on page 186.


Figure 5-88 Interface installation progress

9. When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-89.

Figure 5-89 Interface installation results

10.Click Finish to close the installer.

5.10 Installing the user interface for access with a Web browser
This section details how to set up the TotalStorage Productivity Center Graphical User Interface (GUI) for use with a Web browser. To distribute the GUI with a Web browser, two prerequisites must be met:

1. The Java-based GUI must be installed on the system that will act as the Web server.

2. A Web application server must be installed. Examples are Microsoft Internet Information Server (IIS) and IBM HTTP Server. For more information about installing IIS on Windows 2003, refer to 4.8.1, Installing Internet Information Services (IIS) on page 108.


After the prerequisite software has been installed, distributing the GUI with a Web interface is mostly a matter of configuring the Web application server.

5.10.1 Distributing the Graphical User Interface with a Web browser


Follow the instructions in this section to set up your environment to distribute the Graphical User Interface (GUI) with the Web when you are using AIX systems. 1. If you have not already done so, log in as the root user on the AIX system which you will use as a Web server. 2. Install the TotalStorage Productivity Center GUI if it is not already installed on this system. Refer to Installing the Java Graphical User and the Command Line Interface on page 182 for detailed instructions. 3. If you already have a Web application server installed, proceed to Configuring the Web application server on page 191. If not, you can follow the instructions in the next section to download and install IBM HTTP Server.

Installing IBM HTTP Server


If you do not already have a Web application server installed, you can download and install IBM HTTP Server. This package is based on the Apache HTTP Server open source project. Your server must have a minimum of 512 MB of RAM and 1 GB of free disk space in order to install IBM HTTP Server. You will also need an additional 1 GB of free space to temporarily house the software image and uncompressed installation files. Follow these steps to download an installation image of IBM HTTP Server:

1. Enter the following URL into your Web browser:
http://www.ibm.com/software/webservers/httpservers

2. Click the link on that page to download the latest version of IBM HTTP Server. As of this writing, the latest version was 6.0.2.0.

3. Follow the instructions for downloading the image to a temporary location on your system.

Note: Registration at the ibm.com Web site is required. There is no charge to download the software.

4. Change to the location where you stored the image file. For example:
cd /tarfiles

5. Extract the compressed image files. For example, if you downloaded Version 6.0.2.0, you would type the following command:
tar -xvf ihs.6020.aix.ppc32.tar

6. Change to the installation directory that was created automatically during the image extraction. For example:
cd IHS
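Before continuing, you may want to confirm the space requirement noted above (1 GB for the product plus 1 GB of temporary space). A rough preflight check using the POSIX df -P output format might look like this — the filesystem to check is an assumption; point it at wherever you stage and install the server:

```shell
# Rough free-space check: 1 GB install + 1 GB temporary files = 2048 MB.
# df -Pm prints a fixed six-column POSIX format; column 4 is "Available".
need_mb=2048
avail_mb=$(df -Pm /tmp | awk 'NR==2 {print $4}')

if [ "$avail_mb" -ge "$need_mb" ]; then
    echo "sufficient space: ${avail_mb} MB available"
else
    echo "insufficient space: ${avail_mb} MB available, ${need_mb} MB needed"
fi
```

AIX also accepts df -Pm, so the same check works on the target system.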

Preparing the display


This version of the IBM HTTP Server installer uses a graphical, Java-based interface. If you are installing using a local graphical display, proceed to the next section entitled Beginning the installation on page 188. However, if you are installing from a remote terminal session, you must set up an X-Windows display prior to beginning the installation process.


First, you must start your local X-Windows server application. Examples are Hummingbird Exceed and Cygwin. Once your local X-Windows server application is running, you must set the DISPLAY variable on the host. You must know the IP address of the machine from which you intend to perform the installation. For example, if your IP address is 2.3.4.5, type the following command at the server's command prompt:

export DISPLAY=2.3.4.5:0.0

You can verify that the X-Windows environment is properly set up by executing the following command on the host:

xclock

If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-90:

Figure 5-90 Graphical clock display
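The display setup above can be sketched as a short sequence. The address is a placeholder — substitute the IP of the workstation running your X-Windows server application:

```shell
# Sketch: point the installer's X output at a remote X server, keeping
# any DISPLAY value that is already set. 2.3.4.5 is a placeholder.
DISPLAY="${DISPLAY:-2.3.4.5:0.0}"
export DISPLAY
echo "X clients will draw on $DISPLAY"
# On a configured system, running "xclock" should now open a clock window.
```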

Beginning the installation


To start the installation, follow these steps: 1. In the installation directory, type the following command:
./install

The graphical installation displays, and appears similar to the one shown in Figure 5-91.

Figure 5-91 IBM HTTP Server installer welcome screen

2. Click Next. The license agreement screen displays, and appears similar to the one shown in Figure 5-92 on page 189.


Figure 5-92 IBM HTTP Server license agreement

3. Choose I accept the terms in the license agreement and click Next. The installation location screen displays, and appears similar to the one shown in Figure 5-93.

Figure 5-93 IBM HTTP Server installation location

4. The default installation location is /usr/IBMIHS. You may change the installation location to suit your requirements by entering a path in the Directory Name field or by clicking Browse to search for it. Once you have entered the installation location, click Next. The setup type screen displays, and appears similar to the one shown in Figure 5-94 on page 190.


Figure 5-94 IBM HTTP Server installation type selection

5. Select Typical, then click Next. The installation summary screen displays, and appears similar to the one shown in Figure 5-95.

Figure 5-95 IBM HTTP Server installation summary

6. Click Next to begin the installation. A progress screen displays, and appears similar to the one shown in Figure 5-96 on page 191.


Figure 5-96 IBM HTTP Server installation progress

When the installation is complete, an installation results screen displays, and appears similar to the one shown in Figure 5-97.

Figure 5-97 IBM HTTP Server installation results

7. Click Finish to close the installer.

Configuring the Web application server


This section assumes that you are configuring a single instance of a Web application server and that no other Web services are configured (with the exception of the standard TotalStorage Productivity Center services). If you are installing on a shared Web application server on which other Web services are configured, you must tailor the configuration shown here to your specific environment.


Configuring the IBM HTTP Server for AIX


1. Edit the httpd.conf file. If you installed IBM HTTP Server in the default location, you can find the file at /usr/IBMIHS/conf/httpd.conf. In Section 2 of the file, locate the following line:
DocumentRoot "/usr/IBMIHS/htdocs/en_US"

Change the text in quotes to point to the location where the TotalStorage Productivity Center GUI is installed. If you installed it at the default location, your edited line would look like this:
DocumentRoot "/opt/IBM/TPC/gui"

Note: The path is case-sensitive.

2. Farther down in the same section, locate the following line:
<Directory "/usr/IBMIHS/htdocs/en_US">

Change the text in quotes to point to the location where the TotalStorage Productivity Center GUI is installed. If you installed it at the default location, your edited line would look like this:
<Directory "/opt/IBM/TPC/gui">

Note: The path is case-sensitive.

3. Farther down in the same section, locate the following line:
DirectoryIndex index.html index.html.var

At the end of the line, add a space, and then type TPCD.html. Your edited line will look like this:
DirectoryIndex index.html index.html.var TPCD.html

4. Save the changes you made to the file.

Important: If you installed the IBM HTTP Server on the same machine that is running Agent Manager, and Agent Manager is using TCP port 80 for the agent recovery service, you will have to configure the Web server to listen on a port other than 80, which is the default for HTTP requests. To do this, locate the line in Section 1 of the httpd.conf file that looks like this:
Listen 80

Change the number 80 to an unused port. For example, you might configure the HTTP server to listen on port 8000. Then, when you attempt to access the GUI with the Web server, you must append the port number to the URL. For example:
http://myserver.com:8000

5. Start your Web application server. For IBM HTTP Server installed at the default location, the command is:
/usr/IBMIHS/bin/apachectl start

6. If you want the Web application server to start automatically every time the server is booted, edit the /etc/inittab file and add the following line to it:
ibmihs:2:once:/usr/IBMIHS/bin/apachectl start >/dev/console 2>&1
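The httpd.conf edits in steps 1 through 4 can also be scripted. The sketch below applies the same three changes with GNU sed (AIX sed has no -i option, so write to a temporary file there); it is demonstrated against a trimmed sample file rather than a live /usr/IBMIHS/conf/httpd.conf:

```shell
# Create a trimmed stand-in for httpd.conf; on a real system, point CONF
# at /usr/IBMIHS/conf/httpd.conf and back it up first.
CONF=/tmp/httpd.conf.sample
cat > "$CONF" <<'EOF'
DocumentRoot "/usr/IBMIHS/htdocs/en_US"
<Directory "/usr/IBMIHS/htdocs/en_US">
DirectoryIndex index.html index.html.var
EOF

# Repoint DocumentRoot and <Directory> at the TPC GUI location, and
# append TPCD.html to the DirectoryIndex line.
sed -i \
    -e 's|/usr/IBMIHS/htdocs/en_US|/opt/IBM/TPC/gui|g' \
    -e 's|^DirectoryIndex index.html index.html.var$|& TPCD.html|' \
    "$CONF"

cat "$CONF"
```

As in the manual procedure, the paths are case-sensitive, so copy them exactly.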


7. Open a Web browser and enter your Web application server's fully qualified name in the URL field, for example:
http://myserver.com

8. You should initially see the TotalStorage Productivity Center welcome screen, which appears similar to the one shown in Figure 5-98.

Figure 5-98 TotalStorage Productivity Center welcome screen

You must leave this window open while you are using the GUI. The GUI applet will be downloaded to your system automatically. You must answer Yes to the security questions regarding the installation of the security certificate. When the GUI finishes loading, the logon box displays, and appears similar to the one shown in Figure 5-99.

Figure 5-99 GUI logon

9. Enter a valid user ID and password in the appropriate fields and click OK. The GUI displays, and appears similar to the one shown in Figure 5-100 on page 194.


Figure 5-100 Web GUI


Chapter 6. Agent deployment
TotalStorage Productivity Center uses different methods to collect information from the various systems and to interact with them to give you a complete view of the environment and a single point of control for your storage infrastructure. These methods are:

- SMI-S through Common Information Model (CIM) agents to communicate and interact with storage subsystems, tape libraries, and switches
- Simple Network Management Protocol (SNMP)
- Software agents on the attached servers (Data Agent and Fabric Agent)
- Proprietary interfaces (for only a few devices)

This chapter discusses the function of the Data Agent and the Fabric Agent and provides a description of the various methods and necessary steps to roll out the agent infrastructure to your managed servers and computers.

Copyright IBM Corp. 2006. All rights reserved.


6.1 Functional overview: Which agents do I need?


Although you can run TotalStorage Productivity Center without the Data Agents and the Fabric Agents being present on your servers, this gives you only limited functionality and an incomplete view of your infrastructure. As shown in Figure 6-1, the Data Agents and Fabric Agents interact with each component of the TotalStorage Productivity Center product.

Figure 6-1 TotalStorage Productivity Center V3.1 Information Flow

The TotalStorage Productivity Center components collect data about the infrastructure using the Data Agents and Fabric Agents along with other information sources (CIM agents and SNMP) and, in turn, feed the central TotalStorage Productivity Center database. The TotalStorage Productivity Center for Data and Fabric components also use the Data Agent and the Fabric Agent to initiate and perform management and configuration tasks on the servers and the fabric devices. The TotalStorage Productivity Center Topology Viewer (see Chapter 9, Topology viewer on page 415), as a central common interface to all TotalStorage Productivity Center components, reads the data from the central database fed by the various product components, and combines and correlates it. As a result, this produces a complete end-to-end view of your server and storage infrastructure.

6.1.1 Types of agents


TotalStorage Productivity Center uses four different types of agents to gather data about the devices and servers to be managed and monitored. Different combinations of these agents are required to effectively enable the functions of Data Manager, Fabric Manager, Disk Manager, and Tape Manager. In addition to these manager functions, the topology viewer is greatly affected by the proper discovery of all the managed entities in the management scope of TotalStorage Productivity Center.

CIMOM agents
These agents are provided by the vendor of the storage device, fabric switch, or tape library. For storage, they are needed for storage asset information, provisioning, alerting, and performance monitoring. For fabric switches, they are used only for performance monitoring. For tape libraries, they are used for asset and inventory information.

Data agents
These are the traditional Tivoli Storage Resource Manager agents. They are installed on every computer system you want TotalStorage Productivity Center to manage. This is commonly referred to as an Agents Everywhere methodology. These agents collect information from the server on which they are installed. Asset information, file and file system attributes, and any other information needed from the computer system is gathered. Data agents can also gather information about database managers installed on the server, Novell NDS tree information, and NAS device information. In TotalStorage Productivity Center, you can create pings, probes, and scans to run against the servers that have Data agents installed. Data agents can be remotely installed only by running the TotalStorage Productivity Center agent installer from the TotalStorage Productivity Center server machine. This will install both the common agent and the Data agent.

Fabric agents
These are the traditional Tivoli SAN Manager agents. They are installed on computer systems that have fiber connectivity (through HBAs) into the SAN fabrics you want to manage and monitor. Fabric agents use scanners to collect information. The scanners are written in O/S native code, and communicate through the HBA to collect fabric topology information, port state information, and zoning information. They also can identify other SAN attached devices (if they are in the same zone). Using O/S system calls, they collect information about the system on which they are installed. Fabric agents are discovered during the agent install process, and do not need to be discovered separately, nor is it possible to do so. You can only remotely deploy Fabric agents from the TotalStorage Productivity Center server. If you run the agent installer from the TotalStorage Productivity Center server to remotely deploy Fabric Agents, the common agent must already be installed, and registered to the Agent Manager with which TotalStorage Productivity Center is associated. If you install the Fabric agent locally on the server, the installer installs the common agent for you.

Out of Band Fabric (OOBF) agents


Out Of Band Fabric (OOBF) agents are used to collect topology information from fabric switches through the IP network, using SNMP queries to the switches. These agents discover less information than fabric agents. You must have an OOBF agent pointing to each switch in each SAN fabric you are managing. If the switches are behind private IP networks, as is normally the case with McData, you will not be able to use OOBF agents. OOBF agents are also required to collect zoning information for Brocade switches (where the admin user ID and password are needed), and VSAN information for Cisco switches. TotalStorage Productivity Center supports only SNMP V1 queries. If a switch that you register an OOBF agent against uses SNMP V2 or V3, the agent does not work. You must either reconfigure the switch to use SNMP V1, if possible, or rely on fabric agents to collect switch information for these switches.

6.1.2 TotalStorage Productivity Center component use of agents


Although all TotalStorage Productivity Center components use the server agents, the contribution of the agents to the various components differs considerably. The following considerations provide some guidelines to determine if the deployment of a server agent infrastructure should be pursued. The TotalStorage Productivity Center for Disk mainly operates on the basis of SMI-S through CIM agents. Complete subsystem and tape-information collection, device management,

performance management, monitoring, and alerting are performed through this path. There are reports within the TotalStorage Productivity Center for Disk component, correlating storage subsystem information and information about computers, which would not be available without the presence of Data Agents on the servers. Still, the capabilities of TotalStorage Productivity Center for Disk can be exploited almost completely without a server agent infrastructure. The TotalStorage Productivity Center for Fabric is designed to use three communication paths to gather information and interact with the fabric devices. However, at the time of writing, SMI-S can be used solely for switch-performance data collection and performance monitoring and alerting. Fabric configuration and status collection as well as fabric management (such as zone control) are either performed out-of-band or in-band through the Fabric Agents. Which of these two ways TotalStorage Productivity Center actually employs is determined by the particular fabric components and is documented on the TotalStorage Productivity Center for Fabric Support Web site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

To sum up, a decision whether or not the Fabric Agent should be deployed is largely determined by the fabric components which are to be managed. TotalStorage Productivity Center for Data receives almost all the information it provides in its reports and repositories solely through the Data Agents on the managed servers and computers. It also relies on the presence of a Data Agent infrastructure to perform policy driven management. Although TotalStorage Productivity Center for Data can also receive some information directly from the storage subsystems through CIM agents without the presence of any Data Agents, we recommend this only for very special requirements because this would limit the overall product capabilities to a large extent. The TotalStorage Productivity Center Topology Viewer is able to deliver a complete and meaningful end-to-end view of the server and storage-infrastructure only if both agents, the Data Agent and the Fabric Agents, are present on the managed computers. This end-to-end view of the infrastructure and the graphical presentation of the links and relations between the different elements of the environment is one of the big advantages of the Topology Viewer. Without the presence of the Data Agent and the Fabric Agents, most information about the attached servers and computers would be missing. If you use switches which support out-of-band discovery of the SAN infrastructure, only the SAN ports of the servers and computers would be depicted in the Topology Viewer. They would appear in the Other Group with their WWNNs as a label. In this case, you would be able to label the depicted SAN ports manually and classify them as Computers. If you use switches which support only in-band fabric discovery, the servers and computers would be missing in the Topology Viewer altogether. In both cases, the relationship between storage subsystem volumes and the managed computers would not be presented in the Topology Viewer. 
The following figures show two examples to give you an idea of the capabilities of the Topology Viewer with and without the server agents having been deployed. Figure 6-2 on page 199 shows an environment where several computers are present, but none of them has any agents installed. The switch is an IBM 2005-B32 and supports out-of-band infrastructure discovery. TotalStorage Productivity Center is able to discover the HBAs of the computers. The Topology Viewer depicts them in the category Other and labels them with their WWNNs. The status is unknown.


Figure 6-2 Topology viewer without agents deployed to the managed servers

Figure 6-3 on page 200 shows the same environment, but this time the Data Agent and the Fabric Agent have been deployed on the computers. Now TotalStorage Productivity Center discovers them correctly as Computers and is able to display status information and even more details in the subsequent layers of the Topology Viewer.


Figure 6-3 Topology viewer with agents deployed to the managed servers

Recommendation: Except for limited, well-defined requirements, we strongly recommend the deployment of the server agents infrastructure regardless of the TotalStorage Productivity Center component being used.

6.2 Agent infrastructure overview


The TotalStorage Productivity Center server agent infrastructure is built on the Tivoli Common Agent Services infrastructure and is described in detail in 2.5, Tivoli Common Agent Services on page 32 and in the IBM TotalStorage Productivity Center Installation and Configuration Guide, GC32-1774. At the heart of the Tivoli Common Agent Services infrastructure, the Tivoli Agent Manager provides authentication and authorization services and maintains a registry of configuration information about the managed computer systems in your environment. Those computers run the Tivoli Common Agent software which acts as a container to application-specific agent code such as the TotalStorage Productivity Center Data and Fabric Agent. Those application-specific agents are called subagents. The Common Agent software provides common services, shared machine resources, and secure connectivity for the subagents. Management applications such as the TotalStorage Productivity Center use the services of the Agent Manager to communicate securely with and to obtain information about the computer systems running the Tivoli Common Agent software. They are also able to interact directly with the subagents using their native protocols. Figure 6-4 on page 201 shows an overview of the TotalStorage Productivity Center server agent infrastructure:

IBM TotalStorage Productivity Center: The Next Generation

[Figure content: the Tivoli Common Agent (port 9510) hosts the Fabric Agent and Data Agent subagents. The Tivoli Agent Manager Server, with its Agent Registration Database, provides authentication and authorization services over ports 9511, 9512, and 9513. The IBM TotalStorage Productivity Center Server, with the central TPC database, communicates with the agents over port 9549 and the native protocol on port 9550.]
Figure 6-4 TotalStorage Productivity Center server agent infrastructure

The Tivoli Agent Manager can be installed on the same server as the TotalStorage Productivity Center Server or can run on a different machine (for example, in larger installations or where a Tivoli Common Agent Services infrastructure is already in place). The Tivoli Agent Manager is installed in a separate step with its own installer. The installation is discussed in detail in 4.6.1, "Agent Manager installation for Windows" on page 83. The Tivoli Common Agent, which must be present on each managed computer as the container for the Data Agent and Fabric Agent, is installed during the TotalStorage Productivity Center Agent installation with the TotalStorage Productivity Center Installer, and is discussed in detail in the subsequent sections.

6.3 Agent deployment options


There are two ways of deploying the Tivoli Common Agent and the TotalStorage Productivity Center Data and Fabric Agents: local installation and remote installation. If you choose local installation, you can either install the agents interactively or perform an unattended (silent) installation. Remote installation can only be performed interactively.

6.3.1 Local installation


Local installation means that you must be logged on to the computer on which you want to install the agent code. You can install the Data Agent and the Fabric Agent in any order, or install both at one time. In either case, the TotalStorage Productivity Center Installer checks whether a Tivoli Common Agent is already installed; if not, TotalStorage Productivity Center installs this component. You can perform the local installation either interactively, using a GUI, or in unattended (silent) mode. Unattended mode is useful in cases where you have to script the installation or do not have access to a GUI (for example, telnet access only).

Local agent installation might be practical for a limited number of computers, but becomes rather elaborate and time-consuming as the number of managed computers grows.

6.3.2 Remote installation


Remote installation is the process of pushing the agent code from a central computer over the network to any number of remote computers on which you want to install the TotalStorage Productivity Center agents. The TotalStorage Productivity Center Installer pushes the agent code to the target computers concurrently, so the software can be installed onto a large number of remote computers at one time. You can install all required agent components remotely: the Tivoli Common Agent as well as the Data Agent and the Fabric Agent. The supported operating systems for the target computers are Windows, UNIX, and LINUX.

In a remote installation, the Tivoli Common Agent is only installed with the Data Agent, and it must already be present when you attempt a remote installation of the Fabric Agent. Consequently, you must always install the Data Agent first when performing remote installations. Alternatively, you can choose to install the Data Agent and the Fabric Agent together; in this case, the TotalStorage Productivity Center Installer manages the proper sequence.

The remote installation of the Data Agent can be performed from any computer running one of the supported operating systems and having a network connection to the remote target computers. The Fabric Agent installation must be performed from the system where the TotalStorage Productivity Center Server is installed and running. A remote agent installation is always interactive; unattended (silent) remote installation is not supported at the time of writing. In the following sections we guide you through both agent deployment methods.

6.4 Local installation of Data and Fabric Agents


This section guides you through the local installation of the TotalStorage Productivity Center Data Agent and Fabric Agent. Before you can start to install the agents, you must verify that the TotalStorage Productivity Center Server and the Tivoli Agent Manager are installed and running and can both be reached over the network. For a successful installation, you must provide the following information:
- The host name or the IP address of the Data Server
- The port to communicate with the Data Server
- The host name or the IP address of the Device Server
- The port to communicate with the Device Server
- The host authentication password
- The host name or the IP address of the Tivoli Agent Manager Server
- The ports to communicate with the Tivoli Agent Manager Server
- The Common Agent registration password
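Before starting the installer, it can save time to confirm that these servers actually answer on their ports from the target computer. The following sketch is a hypothetical helper, not part of the product: the host name gallium.almaden.ibm.com and the default ports 9549, 9550, 9511, and 9513 come from this chapter, and the check relies on bash's /dev/tcp pseudo-device, so run it with bash.

```shell
#!/bin/bash
# Hypothetical pre-install connectivity check -- not part of the TPC installer.
# Host and ports are the chapter's defaults; adjust for your environment.
check_port() {
  host="$1"; port="$2"
  # bash opens a TCP connection when redirecting through /dev/tcp/<host>/<port>
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port unreachable"
  fi
}

# check_port gallium.almaden.ibm.com 9549   # Data Server
# check_port gallium.almaden.ibm.com 9550   # Device Server
# check_port gallium.almaden.ibm.com 9511   # Agent Manager (secure)
# check_port gallium.almaden.ibm.com 9513   # Agent Manager (public)
```

If any port reports unreachable, resolve the network or firewall issue before launching the agent installation.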


6.4.1 Interactive installation


Start the TotalStorage Productivity Center Installer by running setup.exe on Microsoft Windows systems or setup.sh on UNIX and LINUX systems. These programs are in the root directory of the installation CD. In the following screen captures, we show the dialog windows of an installation on a Windows platform. In this book we install the Data Agent and the Fabric Agent at the same time, but you can of course install the Data Agent or the Fabric Agent separately. 1. In the first panel, select the preferred language for the installer as shown in Figure 6-5.

Figure 6-5 Local interactive installation, language selection

Click OK. 2. The International Program License Agreement is shown (Figure 6-6). Read the terms and select I accept the terms of the license agreement.

Figure 6-6 Local interactive installation, License agreement

Click Next to continue. 3. In Figure 6-7 on page 204 you can choose the type of installation. We recommend that you always use Custom Installation when you install the agents. Select Custom Installation. In this same panel, you can also choose the installation path of the agents. The default is C:\Program Files\IBM\TPC under Windows and /opt/IBM/TPC under UNIX and LINUX. In our example we keep the defaults. Note that the installer does not only install files in the location you specify in this panel; some files are also installed to the C:\Program Files\Tivoli\ep directory under Windows and the /usr/tivoli/ep directory under LINUX and UNIX.


Make sure the installation location you specify in this panel is empty; otherwise, the installation fails.

Figure 6-7 Local interactive installation, Type of installation, installation location

Click Next to continue. 4. In the panel in Figure 6-8, select which components of TotalStorage Productivity Center you want to install. Select only Data Agent and Fabric Agent. Deselect any other options.

Figure 6-8 Local interactive installation, component selection

Click Next to continue. 5. In the panel shown in Figure 6-9 on page 205, enter the following information:
Data Server Name: This is the fully qualified host name or the IP address of the machine on which the TotalStorage Productivity Center Data Server and Device Server are running. At the time of writing, the Data Server and the Device Server must be installed on the same machine, so the Data Server Name and the Device Server Name will always be the same. In our environment, the TotalStorage Productivity Center Server is on gallium.almaden.ibm.com.

Data Server Port: The Data Agent uses the Data Server Port to communicate with the Data Server. It is set when you install the Data Server. We recommend keeping the default, 9549.
Device Server Name: This is the fully qualified host name or the IP address of the Device Server. In TotalStorage Productivity Center V3.1 it must match the Data Server Name. In our environment the name of the Data Server and Device Server is gallium.almaden.ibm.com.
Device Server Port: The Fabric Agent uses the Device Server Port to communicate with the Device Server. It is set when installing the Device Server. We recommend keeping the default, 9550.
Host authentication password: This is the password used by the Fabric Agent to communicate with the Device Server. You specify this password when you install the Device Server.
6. Select options for the Data Agent, as shown in Figure 6-9.

Figure 6-9 Local interactive installation, server and agent settings

Click Data Agent Options. The panel in Figure 6-10 on page 206 is displayed. Here, you have two options:
Agent should perform a scan when first installed: This option is selected by default. We suggest you accept this default, so that your Data Server receives a solid information base about your computer right after installation. Deselect this option if you do not want the Data Agent to perform an initial scan of your computer after installation.
Agent may run scripts sent by server: This option is checked by default. The advantage of enabling this option is that you can store scripts in the server's \scripts directory and do not have to keep a copy of the script on every agent computer. When a script must be run on a particular agent, the server accesses the script from its local \scripts directory and sends it to the appropriate agent. If you deselect Agent may run scripts sent by server, you must make sure that the script is stored in every agent's \scripts directory.

Note: If a script with the same name exists on both the server and the agent, the script stored on the agent will take precedence. This is useful if you want to run a special version of a script on one of your agents, while running a different version of the same script across all the other agents in your environment.

Figure 6-10 Local interactive installation, Data Agent setting

Click OK to continue. This brings you back to the panel shown in Figure 6-9 on page 205, where you click Next. 7. In the next panel, shown in Figure 6-11 on page 207, you must enter the fully qualified host name or IP address of the Tivoli Agent Manager. The Tivoli Agent Manager must already be installed and running. It may run on the same machine as your TotalStorage Productivity Center Server, or on a separate machine. In our environment, we installed the Tivoli Agent Manager on the TotalStorage Productivity Center Server. You must also specify the ports which the agents use to communicate with the Tivoli Agent Manager. They are specified during the installation of the Agent Manager. We recommend keeping the default ports, which are 9511 (secure) and 9513 (public). Finally, enter the Common Agent registration password. This is the password required by the Common Agent to register with the Agent Manager. It is specified when you install the Agent Manager. The default password is changeMe. Note: If you do not specify the correct Agent Manager password, you are not permitted to continue the installation.


Figure 6-11 Local interactive installation, Tivoli Agent Manager information

Click Next to continue. 8. The Common Agent selection panel is displayed (Figure 6-12). If a Tivoli Common Agent is already running (for example, when you install a Fabric Agent and a Data Agent is already installed, or vice versa), you can choose to install your agent under the control of this Common Agent by selecting it in the lower selection box. If a Common Agent is not already installed on the system, you must elect to install it and specify a location. The default location is C:\Program Files\IBM\TPC\ca under Windows and /opt/IBM/TPC/ca under UNIX and LINUX.

Figure 6-12 Local interactive installation, Common Agent options

9. If you click Window Service Info in Figure 6-12, you open the Common Agent Service Information panel (Figure 6-13 on page 208). This information is optional. You can enter a Common Agent service name, user ID, and password that the Installer will use to create a Windows service for the Common Agent. Otherwise, the default user ID itcauser is created.


Figure 6-13 Local interactive installation, Common Agent services name and user information

10.Enter the information and click OK. This returns you to the panel shown in Figure 6-12 on page 207, where you click Next. 11.The Summary information panel is displayed (see Figure 6-14). You can review some of the information you have entered during the installation process.

Figure 6-14 Local interactive installation, summary information panel

Click Install to continue. The installer begins to install the Data Agent first (Figure 6-15 on page 209) and then the Fabric Agent (Figure 6-16 on page 209).


Figure 6-15 Local interactive installation, installing the Data Agent

Figure 6-16 Local interactive installation, installing the Fabric Agent

Important: Although you could cancel the installation while the progress bars are displayed, we strongly recommend that you do not; it might leave your system in an inconsistent state. Finally, a panel is displayed announcing that the installation has finished successfully, as shown in Figure 6-17 on page 210.


Figure 6-17 Local interactive installation, installation complete

12.Click Finish to exit the installer.

6.4.2 Unattended (silent) installation


For an unattended (silent) installation of the agents, you must set up a special response file for the TotalStorage Productivity Center Installer. TotalStorage Productivity Center provides an example response file for the agent installation which you can modify according to your environment. The name of this response file is setup_agents.iss. It is located in the root directory of Disk 1 of the installation CDs. There are variables for all the information you can supply during the interactive installation. They are summarized in Example 6-1.
Example 6-1 Local unattended installation, response file variables
-V LICENSE_ACCEPT_BUTTON="true"
-V LICENSE_REJECT_BUTTON="false"
-P installLocation="/opt/IBM/TPC"
-V varCreateDBSchm="false"
-V varInstallDataSrv="false"
-V varInstallDevSrv="false"
-V varInstallGUI="false"
-V varInstallCLI="false"
-V varInstallDataAgt="true"
-V varInstallDevAgt="true"
-V varAMHostname="gallium.almaden.ibm.com"
-V varAMRegPort="9511"
-V varAMPubPort="9513"
-V varCAPort="9510"
-V varCAPassword="changeMe"
-V varCAInstallLoc="/opt/IBM/TPC/ca"
#-V varCASvcName=
#-V varCASvcUsrID=
#-V varCASvcUsrPW=
-V varInstallNewCA="true"
-V varUseOldCA="false"
-V varDataSrvName="gallium.almaden.ibm.com"
-V varDataSrvPort="9549"


-V varDevSrvName="gallium.almaden.ibm.com"
-V varDevSrvPort="9550"
-V varHostAuthUsrPW="tpctpc"
#-V varDataAgtScan="true"
#-V varDataAgtScripts="true"

However, as with the interactive installation, most of the variables are typically used with their default values and need not be touched when preparing the response file. Normally, you must review at least the variables shown in bold in Example 6-1 on page 210 (the values shown are for a LINUX system, for example). You should also check that the target directory you specify is empty; otherwise, the unattended (silent) installation fails. After you have modified and reviewed the response file according to your needs, start the installer with the following command, executed from within the directory in which the response file is located, as shown in Example 6-2.
Example 6-2 Microsoft Windows and LINUX and UNIX local unattended installation
setup.exe -options "setup_agents.iss" -silent      (for Windows)
./setup.sh -options "setup_agents.iss" -silent     (for LINUX and UNIX)

The installer exits with a return code which can be used in your scripts. In addition, you should verify that the installation has completed successfully using the methods summarized in 6.6, Verifying the installation on page 223.
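A wrapper around the silent installation can act on that return code. The sketch below is a hypothetical example, not part of the product: it assumes a zero return code indicates success (a common convention -- verify against your installer's documentation) and uses the response file name from this section.

```shell
#!/bin/sh
# Hypothetical wrapper around the silent agent installation.
# Assumes return code 0 means success -- verify with your release notes.
run_silent_install() {
  installer="$1"    # for example, ./setup.sh
  "$installer" -options "setup_agents.iss" -silent
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "agent installation succeeded"
  else
    echo "agent installation failed with return code $rc" >&2
  fi
  return "$rc"
}

# Example: run_silent_install ./setup.sh
```

After a successful run, still verify the installation using the methods in 6.6, "Verifying the installation" on page 223.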

6.5 Remote installation of Data and Fabric Agents


This section guides you through the necessary steps to perform a remote installation of the TotalStorage Productivity Center Data and Fabric Agents. You can only perform a remote installation interactively; at the time of writing, an unattended (silent) remote installation is not supported. You can perform remote agent installations to machines running Windows (same domain or different domain), LINUX, or UNIX operating systems.

You must install the Data Agent before installing the Fabric Agent to a remote computer, because the Common Agent is only installed with the Data Agent and the remote installation of the Fabric Agent requires that the Common Agent be installed and running on the target computer. So either install the Data Agent first and then the Fabric Agent, or install both agents together. In the latter case, the TotalStorage Productivity Center Installer manages the proper sequence.


6.5.1 Preparing the remote installation


Before you can start to install the agents, you must verify that the TotalStorage Productivity Center Server and the Tivoli Agent Manager are installed and running. You can install the Data Agents from any workstation having a network connection to the target machine. However, you can perform a remote installation of the Fabric Agent only from the server where the Device Server is installed and running.
1. For a successful installation, you must provide the information summarized here:
- The fully qualified host name or the IP address of the computers on which you want to install the agents
- A user ID and password that has administrative privileges on each of the target computers. For Windows systems, the user ID must be a local administrative account on the target computer (not a domain administrative account). When installing to a foreign Windows domain, the domain from which you are installing has to trust the foreign domains, and your login must be an administrator on the local box (the computer from which you are installing) and a domain administrator on the foreign domains.
- The host name or the IP address of the Data Server
- The port to communicate with the Data Server
- The host name or the IP address of the Device Server
- The port to communicate with the Device Server
- The host authentication password
- The host name or the IP address of the Tivoli Agent Manager Server
- The ports to communicate with the Tivoli Agent Manager Server
- The Common Agent registration password
Disk 1 of the TotalStorage Productivity Center installation CDs permits only a remote installation of the Fabric Agent, which requires a Common Agent already running on the target computer.
Note: For a remote installation of the Data Agent and the Common Agent, you must use Disk 2 of the TotalStorage Productivity Center installation CDs together with the CD containing the cross-platform agent files.
2. For a remote installation of the Data Agent and the Common Agent, you must copy the installation files from Disk 2 of the product installation CDs together with the cross-platform agent files to a local directory of the machine from which you initiate the remote installation. To do this, copy the content of Disk 2 of the TotalStorage Productivity Center installation CD to a local directory of your hard drive, <C:\TPCinstall> for Microsoft Windows or </TPCinstall> for LINUX. If you have an electronic image of the product, unzip or untar the image to the respective directories. This results in a tree structure similar to the one shown in Figure 6-18 on page 213.


Figure 6-18 Remote installation, Tree structure installation disk 2

3. Insert the cross-platform agent CD. It contains a directory for each supported agent platform, and in each directory there is a file called upgrade.zip. Copy this file, for each platform on which you want to install the product remotely, into the <C:\TPCinstall>\data\upgrade (Windows) or </TPCinstall>/data/upgrade (LINUX or UNIX) directory of your installation directory. In our example, we perform a remote installation to Windows, AIX, and LINUX systems. Our installation directory looks similar to Figure 6-19.

Figure 6-19 Remote installation, installation directory with cross-platform agent files
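The copy step above can be scripted on a UNIX or LINUX staging machine. The following sketch is hypothetical: the mount point /mnt/cdagent and the per-platform subdirectory layout under data/upgrade are assumptions, so compare the result against Figure 6-19 before you start the installer.

```shell
#!/bin/sh
# Hypothetical helper: stage the cross-platform agent files for a remote install.
# The paths passed in are assumptions -- adjust for your environment.
stage_agent_files() {
  cd_dir="$1"        # mounted cross-platform agent CD, e.g. /mnt/cdagent
  install_dir="$2"   # local copy of installation Disk 2, e.g. /TPCinstall
  for platform_dir in "$cd_dir"/*/; do
    [ -d "$platform_dir" ] || continue
    platform=$(basename "$platform_dir")
    if [ -f "$platform_dir/upgrade.zip" ]; then
      # Keep each platform's upgrade.zip in its own subdirectory to avoid clashes
      mkdir -p "$install_dir/data/upgrade/$platform"
      cp "$platform_dir/upgrade.zip" "$install_dir/data/upgrade/$platform/"
      echo "staged $platform"
    fi
  done
}

# Example (paths are assumptions):
# stage_agent_files /mnt/cdagent /TPCinstall
```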


4. If you are installing the Data Agent to a LINUX system, you must perform a last check before you can start the installation. Verify that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes. To set the parameter, follow these steps:
a. Go to the following directory: /etc/ssh.
b. Use a text editor such as vi to open the /etc/ssh/sshd_config file and change the PasswordAuthentication parameter to yes.
c. Stop the daemon by running the following command: /etc/init.d/sshd stop
d. Start the daemon by running the following command: /etc/init.d/sshd start
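These steps can be automated with a small helper such as the hypothetical sketch below. It assumes GNU sed (standard on LINUX) for in-place editing, and you should still review the file and restart sshd as described above.

```shell
#!/bin/sh
# Hypothetical helper: ensure PasswordAuthentication is set to yes in sshd_config.
ensure_password_auth() {
  cfg="$1"    # normally /etc/ssh/sshd_config
  if grep -qE '^[[:space:]]*PasswordAuthentication[[:space:]]+yes' "$cfg"; then
    echo "already enabled"
  else
    # Replace an existing directive, or append one if none is present
    if grep -qE '^[[:space:]]*PasswordAuthentication' "$cfg"; then
      sed -i 's/^[[:space:]]*PasswordAuthentication.*/PasswordAuthentication yes/' "$cfg"
    else
      echo "PasswordAuthentication yes" >> "$cfg"
    fi
    echo "updated"
  fi
}

# After updating /etc/ssh/sshd_config, restart the daemon:
# /etc/init.d/sshd stop && /etc/init.d/sshd start
```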

6.5.2 Performing the remote installation


The following steps show a remote installation done from a Windows system to Windows (non-domain), AIX, and LINUX computers. The installation dialog does not differ if the remote installation is performed from AIX or LINUX. In our example we installed the Data Agent and the Fabric Agent together. To invoke the remote installer, go to <C:\TPCinstall> on Windows and double-click setup.exe. On LINUX or AIX, go to the /TPCinstall/ directory and run setup.sh. 5. In the first panel, select the preferred language for the installer (Figure 6-20) and click OK.

Figure 6-20 Remote installation, language selection

6. The International Program License Agreement is shown (see Figure 6-21). Read and accept the terms by selecting I accept the terms of the license agreement.

Figure 6-21 Remote installation, license agreement

Click Next.


7. In the panel shown in Figure 6-22, choose Custom Installation. Click Next to continue.

Figure 6-22 Remote installation, type of installation / installation location

8. In the panel in Figure 6-23, select which components you want to install. Select Remote Data Agent and Remote Fabric Agent. Deselect other options. Click Next.

Figure 6-23 Remote installation, component selection

9. In the next panel (Figure 6-24 on page 216), enter the Host authentication password. This is the password used by the Fabric Agent to communicate with the Device Server. You specify this password when you install the Device Server. All other information is already set correctly because we are installing from the machine which is running the Data Server and the Device Server. Click Next to continue.


Figure 6-24 Remote installation, server and agent settings

10.Next, a panel similar to the one shown in Figure 6-25 is displayed, where you enter the remote computers on which you want to install the Data Agents and the Common Agents. You must enter remote LINUX and UNIX computers manually by host name or IP address. Microsoft Windows computers can be added either manually or from Microsoft Active Directory if you have an Active Directory environment.

Figure 6-25 Remote installation, select remote computers

11.We enter our target computers manually, so click Manually Enter Agents. You will see the panel in Figure 6-26 on page 217.


Figure 6-26 Remote installation, manually add agents

12.In Figure 6-26, enter the fully qualified host name or the IP address of the computer on which you want the Data Agent and Common Agent to be installed. Note: You can add multiple computers in this panel if they share the same user ID and password. If computers do not share the same user ID and password, you must add them individually. In our example we added five remote systems, each individually, because our target machines have different user IDs and passwords as shown in Figure 6-27 on page 218.


Figure 6-27 Remote Installation, list of remote computers to install the Data Agent

Note: Right-click a column name to filter or sort the listed computers. If you filter the names in the computer list, computers you selected for an agent installation that do not match the filter criteria disappear from the list; however, the agents are still installed on those unlisted computers. 13.When you are satisfied with your list of target computers, click Next. 14.The panel shown in Figure 6-28 on page 219 is displayed. Here, you specify the settings for the Common Agent service on your Windows target machines. This information is optional. You can enter a Common Agent service name, user ID and password, and a listener port that the installer will use to create a Windows service for the Common Agent. Otherwise, itcauser is created and a random listener port is used by default. We recommend that you keep these defaults. This panel corresponds to the panel shown in Figure 6-13 on page 208 for the local installation. Click Next.


Figure 6-28 Remote installation, Windows Common Agent service settings

15.The TotalStorage Productivity Center Installer runs a mini-probe on all the computers you selected, to verify all prerequisites. The status for each computer will change several times. Finally you will see the panel in Figure 6-29.

Figure 6-29 Remote installation, ready to install

In this panel, the TotalStorage Productivity Center Installer shows you the default installation directory for each target computer.

Note: The default installation directory for a remote installation of the Data Agent differs from the default for a local installation. The remote installation defaults are:
C:\Program Files\Tivoli\ep for Windows
/usr/tivoli/ep for LINUX and UNIX
The defaults for a local installation are:
C:\Program Files\IBM\TPC for Windows
/opt/IBM/TPC for LINUX and UNIX
In this panel you can also select two settings for the Data Agents:
Agent should perform a scan when first installed: This option is enabled by default. We suggest you accept this default, so that your Data Server receives a solid information base about your computer right after installation. Deselect this option if you do not want the Data Agent to perform an initial scan of your computer after installation.
Agent may run scripts sent by server: This option is enabled by default. The advantage of enabling this option is that you can store scripts in the server's \scripts directory and do not have to keep a copy of the script on every agent computer. When a script must be run on a particular agent, the server accesses the script from its local \scripts directory and sends it to the appropriate agent. If you deselect Agent may run scripts sent by server, you must make sure that the script is stored in every agent's \scripts directory.
Note: If a script with the same name exists on both the server and the agent, the script stored on the agent takes precedence. This is useful if you want to run a special version of a script on one of your agents, while running a different version of the same script across all the other agents in your environment.
Click Install to continue. The TotalStorage Productivity Center Installer starts to install the Data Agents and the Common Agent on the remote target computers.
The installation status (see Figure 6-30 on page 221) is shown in the upper pane and you can monitor the installation log in the lower pane. When the installation is complete, you will see the window in Figure 6-31 on page 221.


Figure 6-30 Remote installation, progress

When completed, you will see the status Probed for all successfully installed Data Agents as shown in Figure 6-31.

Figure 6-31 Remote installation, installation complete

Note: You can review the Installation log for each computer by double-clicking the computer name or the IP address.


16.Click Done to continue. The remote installation of the Data Agent and the Common Agent is now finished. The remote computers are now ready for the remote installation of the Fabric Agents because they are running a Common Agent. Because we selected to install both the Data Agent and the Fabric Agent, the TotalStorage Productivity Center Installer opens the panel shown in Figure 6-32, where we can select on which of the remote computers we want to install the remote Fabric Agents. 17.Select the remote computers where the Fabric Agents are to be deployed. Click Next.

Figure 6-32 Remote installation, select remote computers to deploy Fabric Agents

18.Review the list of the computers as shown in Figure 6-33 and click Next.

Figure 6-33 Remote installation, list of selected computers for remote Fabric Agent installation

You can see a panel as shown in Figure 6-34 on page 223, indicating the progress of the remote Fabric Agent deployment.


Figure 6-34 Remote installation, Fabric Agent deployment progress panel

19.The TotalStorage Productivity Center Installer (see Figure 6-35) shows you a panel with a summary of the remote Fabric Agent deployment process. Click Next.

Figure 6-35 Remote installation, Fabric Agent deployment complete

20.Verify that the installation has completed successfully using the methods summarized in 6.6, Verifying the installation on page 223.

6.6 Verifying the installation


After installing the server agents, you should check that the agents have registered successfully with the TotalStorage Productivity Center Server; they do this automatically, without any further discovery. 1. To check the communication between the TotalStorage Productivity Center Server and the agents, start the TotalStorage Productivity Center graphical user interface and log on. 2. In the Navigation Tree shown in Figure 6-36 on page 224, select Administrative Services → Agents and expand Data Agents and Fabric Agents. Look for an entry for each of the newly installed agents.

3. Right-click the entries and check whether the TotalStorage Productivity Center Server can reach the agent and whether the agent is up and running. The same context menu also provides a look at the log files and the possibility to set up a trace for each agent.

Figure 6-36 Verify agent installation

4. On the servers, the installation process creates a directory structure similar to the one shown in Figure 6-37 on page 225 for a Windows server. For UNIX and LINUX systems, the tree is created under /opt/IBM/ by default and otherwise looks the same.


Figure 6-37 Directory tree for Data and Fabric Agent installation

Note: The remote installer, however, creates a different tree structure by default. It installs the Data Agent and the Common Agent to the following directory: C:\Program Files\Tivoli\ep on Windows and /usr/tivoli/ep on UNIX and LINUX. 5. Under Windows, look for a service called IBM Tivoli Common Agent (see Figure 6-38).

Figure 6-38 Windows Services after agent installation

The Data Agent and Fabric Agent do not show up as a service. They run under the context of the Common Agent.


6. Under UNIX and LINUX, look for two processes:
- The nonstop process, which launches the Common Agent process
- The Common Agent itself
Run the ps -ef command to show results similar to Figure 6-39.

Figure 6-39 UNIX and LINUX process status after agent installation
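If you prefer to script this check, the following is a minimal sketch in Python, assuming a UNIX-like system where the ps command is available. The keyword "nonstop" is illustrative; match it against the exact command strings in your own ps output.

```python
import subprocess

def find_processes(keyword):
    """Return the ps -ef output lines that contain the given keyword."""
    result = subprocess.run(["ps", "-ef"], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines() if keyword in line]

# "nonstop" is an illustrative keyword for the watchdog process that launches
# the Common Agent; verify the exact command string on your own system first.
for line in find_processes("nonstop"):
    print(line)
```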

6.6.1 Logfiles
The agent installation process creates a number of logs, which you can check to retrace the installation process and to monitor activity during normal operation. These logs provide detailed information and are especially useful after a failed installation, to determine the reason for the failure and to troubleshoot it. They are spread over several locations.

Note: The default <InstallLocation> differs between local and remote installations.

Data Agent logs


The following installation logs are for the Data Agent when installed locally:
- <InstallLocation>\log\subagents\TPC\Data\install\ for Windows
- <InstallLocation>/log/subagents/TPC/Data/install/ for UNIX and LINUX

The following installation logs are for the Data Agent when installed remotely for Windows:
- <InstallLocation>\logs\
- <InstallLocation>\logs\install\
- <InstallLocation>\subagents\TPC\Data\log\

The following installation logs are for the Data Agent when installed remotely for UNIX and LINUX:
- <InstallLocation>/logs/
- <InstallLocation>/subagents/TPC/Data/log/


The following operational logs are for the Data Agent when installed locally for Windows:
- <InstallLocation>\ca\subagents\TPC\Data\log\<hostname>\

The following operational logs are for the Data Agent when installed locally for UNIX and LINUX:
- <InstallLocation>/ca/subagents/TPC/Data/log/<hostname>/

The following operational logs are for the Data Agent when installed remotely for Windows:
- <InstallLocation>\logs\
- <InstallLocation>\subagents\TPC\Data\log\<hostname>

The following operational logs are for the Data Agent when installed remotely for UNIX and LINUX:
- <InstallLocation>/logs/
- <InstallLocation>/subagents/TPC/Data/log/<hostname>

Fabric Agent logs


The following installation logs are for the Fabric Agent when installed locally for Windows:
- <InstallLocation>\log\subagents\TPC\Fabric\install\

The following installation logs are for the Fabric Agent when installed locally for UNIX and LINUX:
- <InstallLocation>/log/subagents/TPC/Fabric/install/

The following installation logs are for the Fabric Agent when installed remotely for Windows:
- <InstallLocation>\log\subagents\TPC\Fabric\install\
- <InstallLocation>\subagents\TPC\Fabric\log\

The following installation logs are for the Fabric Agent when installed remotely for UNIX and LINUX:
- <InstallLocation>/log/subagents/TPC/Fabric/install/
- <InstallLocation>/subagents/TPC/Fabric/log/

The following operational logs are for the Fabric Agent when installed locally for Windows:
- <InstallLocation>\ca\subagents\TPC\Fabric\log\<hostname>\

The following operational logs are for the Fabric Agent when installed locally for UNIX and LINUX:
- <InstallLocation>/ca/subagents/TPC/Fabric/log/<hostname>/

The following operational logs are for the Fabric Agent when installed remotely for Windows:
- <InstallLocation>\logs\
- <InstallLocation>\subagents\TPC\Fabric\log\

The following operational logs are for the Fabric Agent when installed remotely for UNIX and LINUX:
- <InstallLocation>/logs/
- <InstallLocation>/subagents/TPC/Fabric/log/


Common Agent logs


The following installation and operational logs are for the Common Agent when installed locally for Windows:
- <InstallLocation>\ca\logs\
- <InstallLocation>\ca\logs\install\

The following installation and operational logs are for the Common Agent when installed locally for UNIX and LINUX:
- <InstallLocation>/ca/logs/
- <InstallLocation>/ca/logs/install/

The following installation and operational logs are for the Common Agent when installed remotely for Windows:
- <InstallLocation>\logs\
- <InstallLocation>\logs\install\

The following installation and operational logs are for the Common Agent when installed remotely for UNIX and LINUX:
- <InstallLocation>/logs/
- <InstallLocation>/logs/install/
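Because the log locations follow a regular pattern, a small helper can assemble them for a given agent. This is a convenience sketch only (not part of the product), assuming the local-installation layout listed above; the /opt/IBM/TPC value is an illustrative install location.

```python
import os

def install_log_dir(install_location, agent):
    """Installation log directory for a locally installed Data or Fabric Agent."""
    return os.path.join(install_location, "log", "subagents", "TPC", agent, "install")

def operational_log_dir(install_location, agent, hostname):
    """Operational log directory for a locally installed Data or Fabric Agent."""
    return os.path.join(install_location, "ca", "subagents", "TPC", agent, "log", hostname)

# Example with an illustrative UNIX/LINUX install location:
print(install_log_dir("/opt/IBM/TPC", "Data"))
print(operational_log_dir("/opt/IBM/TPC", "Fabric", "myhost"))
```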

6.7 Uninstalling Data and Fabric Agent


For the Data Agent, TotalStorage Productivity Center V3.1 offers two methods of uninstallation: remote and local. The remote uninstallation procedure is independent of the installation method (local or remote). The local uninstallation procedures differ slightly, depending on how the installation was performed, because TotalStorage Productivity Center V3.1 uses different installers for remote and local installation. The following sections guide you through the different uninstallation procedures.

6.7.1 Remote uninstallation


You can uninstall the Data Agent remotely from the TotalStorage Productivity Center Server GUI. This method is supported only for the Data Agent; the Fabric Agent cannot be uninstalled remotely.
1. To uninstall the Data Agent, start the TotalStorage Productivity Center GUI and log on.
2. In the Navigation Tree, as shown in Figure 6-40 on page 229, select Administrative Services → Agents and expand Data Agents.
3. Right-click the entry and select Delete. This erases the entry for the Data Agent from the Navigation Tree and uninstalls the Data Agent on the remote computer.


Figure 6-40 Remote Data Agent uninstall

The context menu for the Fabric Agents also offers a Delete option. However, that procedure only erases the entries for the Fabric Agents from the Navigation Tree and the TotalStorage Productivity Center Repository; it does not uninstall the agent on the remote computer.

Note: If you perform a remote uninstallation of the Data Agent on a remote computer where no Fabric Agent is installed, the remote uninstallation process also uninstalls the Common Agent. Otherwise, the Common Agent remains on the target computer.

6.7.2 Local uninstallation


If you installed the Data Agent and the Fabric Agent locally, you can uninstall the Data Agent, the Fabric Agent, and the Common Agent in a single step. The following example shows the uninstallation procedure on a Windows system; the dialog boxes are the same on UNIX and LINUX systems.
1. To invoke the uninstall on a Windows computer, select Start → Settings → Control Panel → Add/Remove Programs, as shown in Figure 6-41 on page 230.


Figure 6-41 Local agent uninstall, Add or Remove Programs

2. Select the entry TotalStorage Productivity Center and click Change/Remove. On a UNIX or LINUX machine, go to the directory /opt/IBM/TPC/_uninst/ and run the uninstall program. In the first panel, shown in Figure 6-42, select the preferred language for the uninstaller.

Figure 6-42 Local agent uninstall, language selection

Click OK to continue. The TotalStorage Productivity Center Installer welcome panel opens, as shown in Figure 6-43 on page 231.


Figure 6-43 Local agent uninstall, welcome panel

Click Next.
3. In the panel shown in Figure 6-44, select the components that you want to uninstall. The TotalStorage Productivity Center Installer offers all components that it has detected on your system. In our example, the Data Agent and the Fabric Agent are installed, and we want to uninstall them both in one step. Although there is a check box to force the uninstallation of the Common Agent, it is not necessary to check it, because the Common Agent is uninstalled automatically when the last subagent is removed.

Figure 6-44 Local agent uninstall, select component to uninstall

Click Next to continue. You can see a summary of the components which the TotalStorage Productivity Center Installer is about to uninstall, as shown in Figure 6-45 on page 232.


Figure 6-45 Local agent uninstall, summary panel

Click Next to continue.
4. The system uninstalls the selected components. The panel in Figure 6-46 opens when the uninstallation is finished. On a Microsoft Windows machine, you must reboot. Click Finish.

Figure 6-46 Local agent uninstall, uninstall complete

If you installed the Fabric Agent and the Data Agent remotely, you cannot uninstall both agents and the Common Agent in one step. We recommend that you uninstall the Fabric Agent first, and then uninstall the Data Agent, which also uninstalls the Common Agent.
1. To invoke the uninstall on a Microsoft Windows computer, select Start → Settings → Control Panel → Add/Remove Programs. You now see a separate entry for the Data Agent. Invoke the uninstallation for the Fabric Agent by selecting the TotalStorage Productivity Center entry and clicking Change/Remove, as shown in Figure 6-47 on page 233.


Figure 6-47 Local agent uninstall, Add or Remove Programs for remotely deployed agents

2. The uninstallation dialog box is the same as described previously, except that the installer only offers to uninstall the Fabric Agent. You cannot select the Data Agent.
3. When the uninstallation of the Fabric Agent is complete, you must reboot your system and again select Start → Settings → Control Panel → Add/Remove Programs.
4. The TotalStorage Productivity Center entry is now gone. Select TotalStorage Productivity Center for Data - Agent, as shown in Figure 6-48 on page 234, and click Change/Remove.


Figure 6-48 Local agent uninstall, Add or Remove Programs for remotely deployed agents

A different installer is now presented: the one that you used to perform the remote installation of the Data Agent, as shown in Figure 6-49.
5. Although there is a radio button, you are not able to make any selections. Click Next.

Figure 6-49 Local agent uninstall, uninstall of remotely deployed Data Agent

A panel is shown (Figure 6-50 on page 235), where you can monitor the log of the uninstallation process in the lower pane.


Figure 6-50 Local agent uninstall, uninstall of remotely deployed Data Agent

When the uninstallation completes, a panel announces the successful uninstallation of the Data Agent (see Figure 6-51).

Figure 6-51 Local agent uninstall, uninstall of remotely deployed Data Agent completed

6. Click OK and restart your system.

6.8 Upgrading the Data Agent


The TotalStorage Productivity Center GUI allows you to upgrade your Data Agent infrastructure from a central point of management. This central upgrade is supported only for the Data Agent and the Common Agent; the Fabric Agent cannot be upgraded this way.
1. Before you can upgrade your Data Agents from your central TotalStorage Productivity Center GUI, you must copy the upgrade.zip files of the new Data Agent version, for each operating system that you want to upgrade, to the C:\Program Files\IBM\TPC\data\upgrade (for Windows) or /opt/IBM/TPC/data/upgrade (for LINUX and UNIX) path of your TotalStorage Productivity Center Server installation. Note that if you do not copy the upgrade.zip file, you will break all of your agents. The tree structure should look similar to Figure 6-52.

Figure 6-52 Data Agent upgrade, copy the upgrade.zip file to the server upgrade directories
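Before submitting the upgrade job, it can be worth confirming that an upgrade.zip file is in place for every platform you intend to upgrade. The sketch below assumes a per-platform subdirectory layout under the server upgrade path, as suggested by Figure 6-52; adjust the root path to your own server installation.

```python
import os

def missing_upgrade_files(upgrade_root):
    """Return platform subdirectories under upgrade_root that lack upgrade.zip."""
    missing = []
    for name in sorted(os.listdir(upgrade_root)):
        platform_dir = os.path.join(upgrade_root, name)
        if os.path.isdir(platform_dir) and "upgrade.zip" not in os.listdir(platform_dir):
            missing.append(name)
    return missing

# Default server path on LINUX and UNIX; use the Windows path on Windows.
root = "/opt/IBM/TPC/data/upgrade"
if os.path.isdir(root):
    print("missing upgrade.zip in:", missing_upgrade_files(root))
```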

2. After you have copied the files to the respective directories, launch the TotalStorage Productivity Center GUI and log on. In the Navigation Tree, select Administrative Services → Configuration and right-click Data Agent Upgrade, as shown in Figure 6-53 on page 237.


Figure 6-53 Data Agent upgrade, create an upgrade job

3. You will then see a panel similar to Figure 6-54 where you can select the computers for which you want to perform a Data Agent upgrade.

Figure 6-54 Data Agent upgrade, create an upgrade job, select computers to upgrade


You can select Computer Groups (if you have defined them in TotalStorage Productivity Center), single computers, or all computers that have Data Agents installed. Verify that the Enable box in the top right corner of the panel is checked.
4. In the When to Run tab, specify whether the upgrade should run immediately or be scheduled for a later time.
5. The Options tab provides options for the upgrade of the Data Agents. You can specify whether the Data Agent should be overwritten if the server already has the upgraded level installed, and you can select the correct language option.
6. In the Alert tab, you can choose which alerts the TotalStorage Productivity Center Server generates for the upgrade job.
7. After you have reviewed all tabs, select File → Save. You must specify a name for the job. The upgrade job is saved and runs either immediately or at the time you chose in the When to Run tab.
8. To check whether the upgrades have completed successfully, right-click Data Agent Upgrade and select Refresh. You can see an entry for the upgrade job that you submitted. Click the plus sign (+) to the left of your job name, and an entry with the time stamp of the submission of your job opens. Click this entry to see the log for the job in the right pane, as shown in Figure 6-55.

Figure 6-55 Data Agent upgrade, job log of the upgrade job

9. You can click the symbol next to the job log entry and examine the log for your upgrade job.


Chapter 7.

CIMOM installation and customization


This chapter provides a step-by-step guide to configuring Service Location Protocol (SLP) and the Common Information Model Object Manager (CIMOM) for the supported storage subsystems, tapes, and switches that are required to use TotalStorage Productivity Center.

Copyright IBM Corp. 2006. All rights reserved.


7.1 Introduction
After you have completed the installation of TotalStorage Productivity Center, you must install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents.

Note: For the remainder of this chapter, we refer to the TotalStorage Productivity Center, TotalStorage Productivity Center for Fabric, and TotalStorage Productivity Center for Data simply as TotalStorage Productivity Center.

TotalStorage Productivity Center uses SLP as the method for CIM clients to locate managed objects. The CIM clients may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device can be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter, we describe the steps for:
- Planning considerations for CIMOM
- Installing and configuring the CIM agent for Enterprise Storage Server and DS6000/DS8000
- Installing and configuring the CIM agent for the DS4000 family
- Configuring the CIM agent for SAN Volume Controller
- Planning considerations for Service Location Protocol (SLP)
- SLP configuration recommendations
- Setting up a Service Location Protocol Directory Agent (SLP DA)
- General performance guidelines

7.2 Planning considerations for CIMOM


The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device-specific management interface. Figure 7-1 shows an overview of the CIM agent.

Figure 7-1 CIM Agent Overview

You can install the CIM agent code on the same server that hosts the device-specific management interface, or you can install it on a separate server.


Attention: At this time, only a few devices come with an integrated CIM agent; most devices need an external CIMOM for CIM-enabled management applications (CIM clients) to be able to communicate with the device. For ease of installation, IBM provides the Integrated Configuration Agent Technology (ICAT), a bundle that includes the CIMOM, the device provider, and an SLP SA.

7.2.1 CIMOM configuration recommendations


The following recommendations are based on our experience in the ITSO lab environment:
- The CIMOM agent code that you plan to use must be supported by the installed version of TotalStorage Productivity Center. Refer to this link for the latest updates:
http://www-1.ibm.com/servers/storage/support/software/tpc/

- You must have the CIMOM-supported firmware level on the storage devices. If you have an incorrect version of the firmware, you might not be able to discover and manage the storage devices that have the incorrect level of software or firmware installed.
- The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. As a result, we recommend a dedicated server for the CIMOM agent. You can, however, configure the same CIMOM agent for multiple devices of the same type.
- Locate the server containing the CIMOM within the same data center as the managed storage devices. This is in consideration of firewall port requirements. Typically, it is a best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you might be able to limit the firewall openings to only those needed for TotalStorage Productivity Center communication with the CIMOM.
- We strongly recommend separate, dedicated servers for the CIMOM agents and TotalStorage Productivity Center. The reasons are resource contention, TCP/IP port requirements, and system services coexistence.
- We highly recommend separate systems for each CIMOM that you need to install. They do not need to be dedicated servers; it depends on the workload you expect the servers to support. This recommendation is based on the following:
  - CIMOMs by default use either port 5988 (HTTP) or port 5989 (HTTPS) for communication. If you collocate two CIMOMs, you will have a port conflict.
  - Most CIMOMs have different interoperability namespaces. If you collocate two CIMOMs, you could have an interoperability namespace mismatch.
- The user ID that the CIMOM uses to communicate with the storage device must have superuser, administrator, or equivalent read and write authorization on the storage device. This level of authorization is required to manage, manipulate, and configure the storage device, as well as to gather performance data. Read-only authorization is insufficient for all but basic inventory collection tasks.
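A quick way to check for a conflict on the default CIM ports before deploying a CIMOM is to test whether anything is already listening on them. The following is a minimal sketch; the host name localhost is an example and can be replaced with any candidate CIMOM server.

```python
import socket

def port_in_use(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (something is listening)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (5988, 5989):  # default CIM-XML HTTP and HTTPS ports
    state = "in use" if port_in_use("localhost", port) else "free"
    print(f"port {port}: {state}")
```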

7.3 SNIA certification


Support is provided for third-party disk systems and switches that are Storage Management Interface Specification (SMI-S) 1.0.2 or SMI-S 1.1 compatible. This support includes storage provisioning, as well as asset and capacity reporting. TotalStorage Productivity Center V3.1 implements many of its disk, tape, and fabric management functions through exploitation of the SMI-S 1.0.2 and 1.1 levels of the standard. SMI-S 1.1 supports all of the functions of SMI-S 1.0.2 plus additional functionality (such as performance management). To find out what is new in SMI-S 1.1 compared to 1.0.2, visit the following Web site (subscription needed): http://www.snia.org/smi/tech_activities/smi_spec_pr/spec You can find a list of all device vendors that participate in the SMI-S initiative and that have successfully passed the SNIA Conformance Testing Program at the following Web site: http://www.snia.org/ctp/conformingproviders If you select one of the vendors, you are taken to a vendor-specific page where certified devices are listed, together with the minimum software requirements for the SMI-S agent. We strongly suggest that you visit this Web site for the latest information about conforming SNIA CTP provider devices. Any new device that is SMI-S 1.0.2 or 1.1 compliant can be monitored and managed by TotalStorage Productivity Center because of the standard approach adopted on both sides.

7.4 Installing CIM Agent for ESS 800/DS6000/DS8000


Before starting TotalStorage Productivity Center Common Information Model Object Manager (CIMOM) discovery, you must first install and configure the related CIMOM. To manage IBM ESS 800, DS6000, and DS8000 disk subsystems using the SMI-S V1.1 specification, be certain that your storage subsystem is listed on the following Web sites:

SNIA conforming providers - IBM:
http://www.snia.org/ctp/conformingproviders/ibm

IBM Support for the IBM Common Information Model (CIM) Agent for DS Open API. The IBM Web site for the IBM CIM Agent for DS Open API code is located at:
http://www-1.ibm.com/support/search.wss?rs=1118&tc=STC4NKB&dc=D400&dtm

The software components have requirements that must be checked in advance. Verify the minimum ESS LIC level and ESS CLI version documented on these Web pages:

IBM ESS CIM Agent Compatibility Matrix:
http://www-1.ibm.com/support/docview.wss?rs=586&context=STHUUM&dc=DB520&dc=DA460&dc= DB540&uid=ssg1S1002397&loc=en_US&cs=utf-8&lang=en

IBM CIM Agent for DS Open API Compatibility Matrix


http://www-1.ibm.com/support/docview.wss?rs=1118&context=STC4NKB&dc=DB500&uid= ssg1S1002714&loc=en_US&cs=utf-8&lang=en

Figure 7-2 on page 243 shows the interaction between the DS Open API and the storage subsystems it manages.


Figure 7-2 DS Open API overview

7.4.1 CIM Agent and LIC level relationship for DS8000


You must identify the relationship between the CIM Agent level and the LIC level or Bundle Version on a DS8000. The compatibility matrix Web site refers to Bundle Versions. On a DS8000, you can determine the active LIC level by following these steps:
1. Log in to the HMC console.
2. Navigate through this path: Management Environment → <HMC Hostname> → Licensed Internal Code Maintenance → Licensed Internal Code Updates → Change Internal Code. In the new window, select Storage Facility; from the menu, choose Selected → Display Installed Code Levels.

3. In the new CDA Install History window, locate the latest entry. There you find the mapping from LIC level to Bundle Version. With this information, you can relate the CIM Agent, LIC level, and Bundle Version. Be sure that the prerequisites are met before proceeding with the software installation.


7.4.2 CIM Agent and LIC level relationship for DS6000


The following information helps you to determine the relationship between the CIM Agent level and the LIC level or Bundle Version on a DS6000.
1. Log on to the DS6000 Storage Manager Web interface.
2. Navigate through this sequence: Real-time Manager → Manage Hardware → Storage Units → (select your DS6000 check box) → Apply Firmware Update → Go.

3. The resulting panel holds the first part of the information that you need. The second part is found on the following Web site:
http://www-1.ibm.com/support/dlsearch.wss?rs=1112&lang=en&loc=en_US&r=10&cs= utf-8&rankfile=0&cc=&spc=&stc=&apar=include&q1=ssg1*&q2=&dc=D420&atrn=SWPlatform&atrv= all&atrn1=SWVersion&atrv1=all&tc=HW2A2&Go.x=17&Go.y=9

4. Here you can cross-reference the internal LIC level to a Release Level.
5. Lastly, on the IBM CIM Agent for DS Open API Compatibility Matrix site, you can find the CIM Agent version required for your DS6000.

7.4.3 ESS CLI Installation


The ESS CLI is required if you monitor and manage ESS disk subsystems. If you will not monitor any ESS Model Fxx, 7xx, or 800 devices, you do not need to install the CLI; go directly to 7.4.4, DS CIM Agent install on page 249.

The ESS CLI installation wizard detects whether an earlier level of the ESS CLI software is installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI. You must have a minimum ESS CLI level of 2.4.0.236. You should also uninstall the ESS CIM Agent and replace it with the CIM Agent for DS Open API.

To install the ESS CLI, insert the CD for the ESS CLI in the CD-ROM drive and run the setup. Alternatively, use Windows Explorer to select the directory where you stored the ESS CLI code.
1. The first panel is the InstallShield Wizard Welcome, as shown in Figure 7-3 on page 245. Click Next to continue.


Figure 7-3 ESS CLI InstallShield Wizard I

2. On the license agreement panel in Figure 7-4, select I accept the terms of the license agreement and click Next to continue.

Figure 7-4 ESS CLI License agreement

3. Verify the target operating system, as shown in Figure 7-5 on page 246, and click Next to continue.


Figure 7-5 ESS CLI choose target system panel

4. The panel in Figure 7-6 specifies the CLI installation directory. Accept the default or enter the appropriate directory, and click Next to continue.

Figure 7-6 ESS CLI Setup Status panel

5. The next panel is a summary of the installation requirements, as shown in Figure 7-7 on page 247. Click Next to continue.


Figure 7-7 ESS CLI selected options summary

6. A panel shows the progress of the installation. When the installation is complete, a panel similar to Figure 7-8 opens. Click Next to continue.

Figure 7-8 ESS CLI installation complete panel

7. The next panel contains the ESS CLI readme file (see Figure 7-9 on page 248). Read the information and click Next.


Figure 7-9 ESS CLI Readme

8. The next panel (see Figure 7-10) gives you the option to restart your system before proceeding with the ESS CIM Agent installation. You must restart, because the ESS CLI depends on environment variable settings that do not take effect for the ESS CIM Agent, which runs as a service, until you reboot your system.

Figure 7-10 ESS CLI Restart panel


9. After your server has restarted, verify that the ESS CLI is installed:
   a. Click Start → Settings → Control Panel.
   b. Double-click Add/Remove Programs.
   c. Verify that there is an IBM ESS CLI entry.
10. Verify that the ESS CLI is operational and can connect to the ESS. From a command prompt window, issue the following command:
esscli -u userid -p password -s 9.1.11.111 list server

Where:
- 9.1.11.111 represents the IP address of the Enterprise Storage Server
- userid represents the Enterprise Storage Server Specialist user name
- password represents the Enterprise Storage Server Specialist password for the user name

Figure 7-11 shows the response from the esscli command.

Figure 7-11 ESS CLI verification
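If you run this verification regularly, it can be wrapped in a short script. The sketch below simply shells out to the esscli command shown above and reports success based on its exit code; the argument order mirrors the example, and the function is an illustration rather than a supported tool.

```python
import shutil
import subprocess

def verify_ess_connection(userid, password, server, esscli="esscli"):
    """Run 'esscli ... list server' against an ESS; True if the CLI exits successfully."""
    if shutil.which(esscli) is None:
        raise FileNotFoundError(f"{esscli} not found on PATH")
    cmd = [esscli, "-u", userid, "-p", password, "-s", server, "list", "server"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0
```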

7.4.4 DS CIM Agent install


Before you install the DS CIM Agent, you must install the IBM TotalStorage Enterprise Storage System Command Line Interface (ESS CLI) if you plan to manage 2105-F20s or 2105-800s with this CIM agent. The DS CIM Agent installation program checks your system for the existence of the ESS CLI and provides the warning shown in Figure 7-15 on page 253 if no valid ESS CLI is found. The CIMOM installation also installs the Service Location Protocol (SLP). If any previous versions of SLP are on the system, uninstall them or stop the service prior to this installation. The latest DS Open API code can be downloaded from the following Web site:
http://www-03.ibm.com/servers/storage/support/software/cimdsoapi/downloading.html

At the time of writing, we used the code contained in the package ibm-ds-smis-agent-5.1.0.45.zip. To install the DS Open API on your Windows system, perform the following steps:
1. Log on to your system as the local administrator.
2. Insert the CIM Agent for DS Open API code CD into the CD-ROM drive. Alternatively, use Windows Explorer to select the directory where you stored the DS Open API code.


3. Start the setup.exe found in ......\ibm-ds-smis-agent-5.1.0.45\W2K. If you use a CD and have autorun mode set on your system, the Install Wizard launchpad starts automatically. Look for a launchpad similar to Figure 7-12. Open and review the readme file from the launchpad menu; then click Installation Wizard. The Installation Wizard starts the setup.exe program and shows the Welcome panel in Figure 7-13 on page 251. The DS CIM Agent program starts within 15 to 30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps:
   a. Use a command prompt or Windows Explorer to change to the Windows directory on the CD.
   b. If you are using a Command Prompt window, run launchpad.bat.
   c. If you are using Windows Explorer, double-click the launchpad.bat file.

Figure 7-12 DSCIM Agent launchpad

4. The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 7-13 on page 251).


Figure 7-13 DS CIM Agent welcome window

5. The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 7-14 on page 252).


Figure 7-14 DS CIM Agent license agreement

6. The window shown in Figure 7-15 on page 253 opens only if no valid ESS CLI is installed. If you do not plan to manage an ESS from this CIM agent, click Next.

Important: If you plan to manage an ESS from this CIM agent, click Cancel. Install the ESS CLI following the instructions in 7.4.3, ESS CLI Installation on page 244.


Figure 7-15 DS CIM Agent ESS CLI warning

7. The Destination Directory window opens. Accept the default directory and click Next to continue (see Figure 7-16 on page 254).


Figure 7-16 DS CIM Agent destination directory panel

8. The Updating CIMOM Port window opens (see Figure 7-17 on page 255). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup, we used the default port 5989.

Note: As mentioned throughout this book, we do not recommend using anything other than the default ports. Use the following commands to check which ports are in use:

netstat -a
netstat -an | find "598"


Figure 7-17 DS CIM Agent port window

9. The Installation Confirmation window opens (see Figure 7-18 on page 256). Click Install to confirm the installation location and file size.


Figure 7-18 DS CIM Agent installation confirmation

The Installation Progress window opens (see Figure 7-19 on page 257), indicating how much of the installation has completed.


Figure 7-19 DS CIM Agent installation progress

10. When the Installation Progress window closes, the Finish window opens (see Figure 7-20 on page 258). Select View post installation tasks to view the post-installation tasks readme when the wizard closes. We recommend that you review the post-installation tasks. Click Finish to exit the installation wizard.

Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the DS CIM Agent for Windows is installed.


Figure 7-20 DS CIM Agent install successful

11. If you selected the View post installation tasks box, the window shown in Figure 7-21 opens. Close the window when you have finished reviewing the post-installation tasks.

Figure 7-21 DS CIM Agent post install readme

12. The launch pad window (Figure 7-12 on page 250) opens. Click Exit.


7.4.5 Post-installation tasks


Figure 7-22 contains a short summary of the post-installation steps. In our lab environment, we did not perform all of them. Note that the terms ESS CIM Agent and DS Open CIM Agent are used interchangeably.
Summary:
1. Verifying the DS Open CIM Agent installation
   a. Verify the installation of the Service Location Protocol (SLP)
   b. Verify the installation of the ESS CIM Agent
2. Configuring the DS Open CIM Agent
   a. Configure the authorized CIMOM user
   b. Reset the password of the provided default CIMOM user name (optional, recommended)
   c. Configure the ESS devices
3. Configuring the DS Open CIM Agent to run in unsecure mode (optional)
4. Verifying connection to the ESS
Figure 7-22 Summary of the post installation tasks

Verify the SLP installation


To verify the SLP installation, follow these steps:
1. Verify that the Service Location Protocol is started. Select Start → Settings → Control Panel. Double-click Administrative Tools. Double-click Services.
2. Find SLP in the Services window list. For this component, the Status column should be marked Started, as shown in Figure 7-23.

Figure 7-23 Verify Service Location Protocol started


3. If SLP is not started, right-click SLP and select Start from the pop-up menu. Wait for the Status column to change to Started, and make sure that the Startup Type is Automatic.

Verify the DS CIM Agent installation


To verify the DS CIM Agent installation, follow these steps:
1. Verify that the CIMOM service is started. If you closed the Services window, select Start → Settings → Control Panel. Double-click Administrative Tools. Double-click Services.
2. Find the CIM Object Manager - DS Open API in the Services window list. For this component, the Status column should be marked Started and the Startup Type column should be marked Automatic, as shown in Figure 7-24.

Figure 7-24 DS CIM Object Manager started confirmation

3. If the CIM Object Manager is not started, right-click the CIM Object Manager - DS Open API and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the DS CIM Agent has been installed successfully on your Windows system. Next, perform the configuration tasks. Tip: All commands used for configuration and verification of the CIM Agent are described either in the ReadMe files or in the InstallGuide.pdf, which are part of the package. It is not advisable to use executables found in the working directory of the CIM Agent.

7.4.6 Configuring the DS CIM Agent for Windows


This task configures the DS CIM Agent after it has been successfully installed.

DS Open CIMOM and ESS subsystem considerations


Here are some issues to consider when using the DS Open CIMOM and ESS subsystems: The DS CIM Agent relies on ESS CLI connectivity from the DS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. We recommend that you verify their availability by launching the ESS Specialist browser from the ESS CIMOM server. You should log on to both ESS clusters for each ESS and be certain you are authenticated with the correct ESS passwords and IP addresses.

If the ESS is on a different subnet than the DS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. Set up the firewall between the ESS subsystem and the CIMOM to allow bidirectional traffic between them; rules might need to be put in place to open specific ports. You must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and from the CIMOM server to the ESS. Verify the connection between the ESS devices and the CIMOM server using the rsTestConnection command of the ESS CLI. When you are satisfied that you can authenticate and receive the ESS CLI heartbeat from all registered ESS subsystems, you can proceed with entering the ESS IP addresses. If the CIMOM agent fails to authenticate with an ESS, it will not start properly and may be very slow, because it retries the authentication. Figure 7-25 shows an example of the rsTestConnection.exe command to check the connection to your ESS subsystems.

C:\Program Files\IBM\ESScli>rsTestConnection.exe /v /s 9.12.6.29
rsWebTest: Using 9.12.6.29 as server name
rsWebTest: HeartBeat to the server was successful.
rsWebTest: command successful

Figure 7-25 Prerequisite is to check the connection to the ESS
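When you script this check across many ESS IP addresses, the captured rsTestConnection output can be tested for the heartbeat message. The following sketch assumes the output wording shown in Figure 7-25; other ESS CLI levels may word it differently.

```python
# Sketch only: decide from captured rsTestConnection output whether the ESS
# CLI heartbeat succeeded. The success strings are taken from the sample
# output above and are an assumption for other ESS CLI levels.
def heartbeat_ok(output: str) -> bool:
    return ("HeartBeat to the server was successful" in output
            and "command successful" in output)

sample = (
    "rsWebTest: Using 9.12.6.29 as server name\n"
    "rsWebTest: HeartBeat to the server was successful.\n"
    "rsWebTest: command successful\n"
)
print(heartbeat_ok(sample))  # True for output like Figure 7-25
```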

Perform the following steps to configure the DS CIM Agent:
1. Configure the ESS CIM Agent with the information for each Enterprise Storage Server that the ESS CIM Agent is to access. Select Start → Programs → CIM agent for the IBM TotalStorage DS Open API → Enable DS Communications, as shown in Figure 7-26. The setdevice.bat utility starts.

Figure 7-26 Configuring the ESS CIM Agent

Registering DS6000 and DS8000 subsystems


After you have started the interface to enable the DS Communications successfully, the CMD-Line Interface to the DS Communications opens (Figure 7-27 on page 262).


Figure 7-27 CMD Line Interface setdevice.bat

2. Type help to see the available commands, as shown in Figure 7-28. Note that the addess, lsess, and rmess commands are for the ESS-family subsystems. The addessserver, lsessserver, and rmessserver commands are for the DS6000, DS8000, and the Copy Services server in ESS-family subsystems.

Figure 7-28 Available commands in setdevice.bat

3. Enter the following commands for each DS6000, DS8000, or ESS copyservices server that is to be configured. Both clusters must be added to the CIM Agent.
addessserver <ip> <user> <password>

<ip> represents the IP address of the storage subsystem
<user> represents the DS Storage Server HMC or SMC user name
<password> represents the DS Storage Server password for the user name
Attention: If the user name or password entered is incorrect, or the DS CIM Agent cannot connect to the storage subsystem, an error occurs and the DS CIM Agent will not start and stop correctly. Use the following command to remove the entry that is causing the problem, and reboot the server.
rmessserver <ip>

Whenever you add or remove a storage subsystem from CIMOM registration, you must restart the CIMOM to pick up the updated device list. Figure 7-29 on page 263 shows a sample output of the systems in our lab environment.


Figure 7-29 List some ESS subsystems versus some DS8K subsystems
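When many subsystems must be registered, it can help to generate the setdevice commands from a list. The following is an illustrative sketch only: the subsystem entries and credentials are hypothetical, and the emitted lines are meant to be typed (or pasted) into the setdevice.bat session described above.

```python
# Sketch only: build addessserver/rmessserver command lines for the
# setdevice.bat session. The subsystem list below is hypothetical.
subsystems = [
    {"ip": "9.12.6.17", "user": "hmcuser", "password": "hmcpass"},  # example only
]

def registration_commands(entries):
    """One addessserver line per DS6000/DS8000 HMC or SMC endpoint."""
    return [f"addessserver {e['ip']} {e['user']} {e['password']}" for e in entries]

def removal_command(ip):
    """Use rmessserver to drop an entry that keeps the CIMOM from starting."""
    return f"rmessserver {ip}"

for cmd in registration_commands(subsystems):
    print(cmd)
```

Remember that the CIMOM must be restarted after any addessserver or rmessserver change, as noted above.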

Note: The CIMOM collects and caches the information from the defined storage subsystems at startup time. The first start might take longer than subsequent starts.

Registering ESS Devices


Enter the following command for each ESS subsystem that is to be configured. Both clusters must be added to the CIM Agent.
addess <ip> <user> <password>

<ip> represents the IP address of a cluster of the Enterprise Storage Server
<user> represents the Enterprise Storage Server Specialist user name
<password> represents the Enterprise Storage Server Specialist password for the user name
Type the addess command for each ESS cluster, as shown in Figure 7-30.

Figure 7-30 The addess command example


7.4.7 Restart the CIMOM


Perform the following steps to use the Windows Start menu to stop and restart the CIMOM. This is required so that the CIMOM can register new devices or unregister deleted devices.
1. Stop the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Stop CIMOM service. A Command Prompt window opens to track the stopping of the CIMOM. If the CIMOM has stopped successfully, the message shown in Figure 7-31 is displayed.

Figure 7-31 Stop DS Open CIM Agent

2. Restart the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Start CIMOM service. A Command Prompt window opens to track the progress of the starting of the CIMOM. If the CIMOM has started successfully, the message shown in Figure 7-32 is displayed.

Figure 7-32 Restart DS Open CIM Agent

Note: The restarting of the CIMOM might take a while, because it is connecting to the defined storage subsystems and is caching that information for future use. As an alternative, you can use the Services applet in the Windows Control Panel.
3. You should now have a look at the cimom.log file. Figure 7-33 on page 265 is a sample output from our lab; the most important events are the CIMOM server start, the device configuration messages, and the SLP registration.


CMMOM0200I SSG/SSD CIM Object Manager
CMMOM0203I **** CIMOM Server Started ****
CMMOM0204I CIMOM Version: 5.1.0.45
CMMOM0205I CIMOM Build Date: 01/05/06 Build Time: 07:26:32 PM
CMMOM0206I OS Name: Windows 2000 Version: 5.0
CIMServer[initialize]: Namespace \root\ibm initialized
CMMOM0400I Authorization module = com.ibm.provider.security.EnhancedAuthModule
CMMOM0410I Authorization is active
CMMOM0901I IndicationProcessor started
CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303
CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303
CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303
CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303
CMMOM0905I 4 indication subscriptions re-started
EssProvider[initialize]: ESS Provider is starting...
EssProvider[initialize]: CLI already configured for ESS 2105.22513. Adding aditional information 9.12.6.30
EssProvider[initialize]: End initialize ESS Provider NCache
EssProvider[initialize]: com.ibm.provider.common.ProviderContext@26a15b3f : finished for 2107.75BALB1
PrimeCache[run]: Alternative ip = 9.12.6.30
PrimeCache[run]: Starting Thread Creating cache for ip 9.12.6.29
EssProvider[initialize]: ESS 2105.22513 configured at 9.12.6.29
EssProvider[initialize]: ESS 2107.75BALB1 configured at Internal Service 9.12.6.17
EssProvider[initialize]: ESCON Connectivity interval set to: 60 minutes
CMMOM0403I Platform is Windows
CMMOM0404I Security server starting on port 5989
CMMOM0409I Server waiting for connections...
CMMOM0500I Registered service service:wbem:https://9.1.38.35:5989 with SLP SA
PrimeCache[run]: Ending Thread Creating cache for ip 9.12.6.29

Figure 7-33 Sample CIMOM Output after restarting the service

7.4.8 CIMOM user authentication


Use the setuser interactive tool to configure the CIMOM for the user who will have the authority to use the CIMOM. This user is the TotalStorage Productivity Center superuser. Upon installation of the CIM Agent for ESS, the provided default user name is superuser, with a default password of passw0rd. The first time you use the setuser tool, you must use this user name and password combination, as shown in Figure 7-34 on page 266. After you have defined other user names, you can start the setuser command by specifying other defined CIMOM user names. Additional user IDs must be eight characters or fewer. Consider deleting the default superuser user ID after you have defined additional user IDs to the CIMOM.


C:\Program Files\IBM\cimagent> setuser -u superuser -p passw0rd
Application setuser started in interactive mode
To terminate the application enter: exit
To get a help message enter: help
>>> help
Available commands:
?          exit       rmuser
adduser    h[elp]     setentry
chuser     lsuser     setoutput
>>>
Figure 7-34 Available commands for setuser.bat

The users that you configure to have authority to use the CIMOM are defined uniquely to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names. Note: We recommend that you change the superuser password to something other than the default, or delete the superuser user ID, after defining a new CIMOM user ID and password. Follow these steps to define users to the CIMOM:
1. Open a Command Prompt window and change directory to the CIM Agent directory, for example:
C:\Program Files\IBM\cimagent
2. Type setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session.
3. Type adduser cimuser cimpass in the setuser interactive session to define a new user, where cimuser represents the new user name to access the CIMOM and cimpass represents the password for that user name.
4. Close the setuser interactive session by typing exit.
The users that you configured now have authority to use the CIMOM.
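Given the eight-character limit on additional user IDs mentioned above, a small pre-check before running adduser can save a round trip. This is a sketch only; the checks other than the length limit are assumptions.

```python
# Sketch only: validate a CIMOM user ID/password pair before running
# "adduser" in the setuser session. The 8-character user ID limit comes
# from the text above; the non-empty checks are assumptions.
def valid_cimom_user(user: str, password: str):
    problems = []
    if not user:
        problems.append("user ID is empty")
    elif len(user) > 8:
        problems.append("user ID longer than 8 characters")
    if not password:
        problems.append("password is empty")
    return problems  # an empty list means the pair looks acceptable

print(valid_cimom_user("cimuser", "cimpass"))  # []
```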

7.5 Verifying connection to the storage subsystems


During this task, the connectivity of the DS Open API CIM Agent software to the Enterprise Storage Server (ESS) is verified. The connection to the ESS is through the ESS CLI software. If network connectivity fails, or if the user name and password that you set in the configuration task are incorrect, the CIM Agent cannot connect successfully to the ESS. The installation, verification, and configuration of the CIM Agent must be completed before you verify the connection to the storage subsystems defined to it. Verify that you have network connectivity to the ESS, DS8000, or DS6000 from the system where the CIM Agent is installed: issue a ping command to the storage subsystem and check that you see reply statistics from the storage subsystem IP address.


Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services panel, select the CIM Object Manager service and verify that the Status is shown as Started, as shown in Figure 7-35.

Figure 7-35 Verify ESS CIMOM has started

Verify that the SLP is active by selecting Start → Settings → Control Panel. Double-click Administrative Tools. Double-click Services. You should see a panel similar to Figure 7-23 on page 259. Ensure that the Status is Started. Verify that SLP has a dependency relationship with the CIMOM; this was configured automatically when you installed the CIM Agent software. Verify this by selecting Start → Settings → Control Panel. Double-click Administrative Tools. Double-click Services. Select Properties on Service Location Protocol, as shown in Figure 7-36.

Figure 7-36 SLP properties panel


Click Properties and select the Dependencies tab, as shown in Figure 7-37. You must ensure that CIM Object Manager has a dependency on Service Location Protocol. This should be the default.

Figure 7-37 SLP dependency on CIMOM

Verify the CIMOM registration with SLP by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Check CIMOM Registration. A window opens displaying the wbem services, as shown in Figure 7-38. These services have either registered themselves with SLP, or you have registered them explicitly with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be correctly listed here. It might take some time for a CIM Agent to register with SLP.

Figure 7-38 Verify CIM Agent registration with SLP

Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the CIMOM will attempt to contact each storage subsystem registered to it. Therefore, the startup may take some time, especially if it is not able to connect and authenticate to any of the registered devices. Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the


CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center superuser name and password so that TotalStorage Productivity Center has the authority to manage the CIMOM. The verifyconfig command checks the registration of the ESS CIM Agent and checks that it can connect to the ESSs. At the ITSO lab, we configured two ESSs, as shown in Figure 7-39.

Figure 7-39 The verifyconfig command
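The service URLs printed by verifyconfig and slptool share the service:wbem form shown above. If you post-process that output in a script, a sketch like this (assuming that URL shape) extracts host, port, and protocol:

```python
# Sketch only: pull (host, port, scheme) out of wbem service URLs such as
# "service:wbem:https://9.1.38.35:5989". The URL shape is taken from the
# figures in this chapter.
import re

SERVICE_RE = re.compile(r"service:wbem:(https?)://([\w.\-]+):(\d+)")

def parse_services(text):
    return [(m.group(2), int(m.group(3)), m.group(1))
            for m in SERVICE_RE.finditer(text)]

print(parse_services("service:wbem:https://9.1.38.35:5989"))
# [('9.1.38.35', 5989, 'https')]
```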

7.5.1 Adding your CIMOM to the TotalStorage Productivity Center GUI


Now that you have added devices to your CIMOM, they should be made known to TotalStorage Productivity Center. This is done by adding the CIMOM to TotalStorage Productivity Center, entering the information into a GUI panel similar to Figure 7-40.

Figure 7-40 Adding a CIMOM to TotalStorage Productivity Center

After entering the CIMOM information, test the connectivity to your CIMOM. Click the Test CIMOM connectivity before adding box. This is shown in Figure 7-41 on page 270.


Figure 7-41 Test the connectivity - successfully done

7.5.2 Problem determination


In case of errors, you can start debugging by examining the cimom.log file. This file is located in the C:\Program Files\IBM\cimagent directory. A sample is shown in Figure 7-42. The entries of specific interest are:
CMMOM0500I Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
CMMOM0409I Server waiting for connections

The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM Agent installation time. The second entry indicates that the CIMOM has started successfully and is waiting for connections.

Figure 7-42 CIMOM Log file
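The two log entries of interest can also be checked programmatically when you automate problem determination. A minimal sketch, assuming the message texts shown in the sample log:

```python
# Sketch only: scan cimom.log text for the two key startup messages
# (CMMOM0500I SLP registration and CMMOM0409I waiting for connections).
def cimom_started_ok(log_text: str) -> bool:
    has_slp_registration = "Registered service service:wbem" in log_text
    has_ready = "Server waiting for connections" in log_text
    return has_slp_registration and has_ready
```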


If you still have problems, refer to the DS Open Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the \doc directory at the root of the CIM Agent CD.

7.5.3 Confirming that ESS CIMOM is available


Before you proceed, you need to be sure that the DS CIMOM is listening for incoming connections. To do this, run a telnet command from the server where TotalStorage Productivity Center resides. A successful telnet to the configured port (indicated by a black screen with the cursor at the top left) tells you that the DS CIMOM is active. You selected this port during DS CIMOM code installation. If the telnet connection fails, you will see a panel like the one shown in Figure 7-43. In that event, you must investigate the problem until a telnet to the CIMOM port returns a blank screen.

Figure 7-43 Example of telnet fail connection
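The telnet test can also be scripted, which is convenient for checking several CIMOMs at once. A minimal sketch, with the host and port as placeholders for the values chosen at DS CIMOM installation time:

```python
# Sketch only: a scripted equivalent of the manual telnet check. Returns
# True if the CIMOM port accepts a TCP connection.
import socket

def cimom_listening(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (host and port are placeholders for your environment):
# cimom_listening("9.1.38.35", 5989)
```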

7.5.4 Start the CIM Browser


Another method to verify that your DS Open CIM Agent is up and running is to use the CIM Browser interface. The CIM Browser is an interface to the whole class tree that the CIMOM uses in its communication with the CIM providers. Attention: As with most CIM Agents, the DS Open CIM Agent is shipped with its own workshop, or browser, application. CIM Browsers are not intended as an end-user tool; they are a tool for the developers of management applications, which is why they are shipped with the CIM Agent. Be careful not to delete or destroy anything while using the workshop.

1. For Microsoft Windows machines, change the working directory to:


c:\Program Files\ibm\cimagent

2. Run startcimbrowser. The WBEM browser shown in Figure 7-44 on page 272 opens. The default user name is superuser and the default password is passw0rd. If you have already changed them using the setuser command, the new user ID and password must be provided. This should be set to the TotalStorage Productivity Center user ID and password.


Figure 7-44 CIMOM Browser Login Panel

When login is successful, you should see a panel similar to the one in Figure 7-45.

Figure 7-45 CIMOM Browser entry panel

3. Now that the CIM Browser has started, the following scenario takes you through the steps to find information about the installed subsystems. The scenario assumes that you have subsystems registered with your CIMOM. In the CIM Workshop, follow the path down to the element shown in Figure 7-46 on page 273.


Figure 7-46 Select your Physical Element

4. After selecting the physical element, select Instances → Show, as shown in Figure 7-47.

Figure 7-47 Show selected elements

In Figure 7-48 on page 274, you can see elements from our subsystem and their corresponding values.


Figure 7-48 Elements and their values

This concludes the usage sample for the CIM Workshop. Note that other CIMOMs ship with their own CIM workshop. It might look and behave a little differently from this one, but the principle is the same.

7.6 Installing CIM agent for IBM DS4000 family


To manage IBM DS4000 disk subsystems using SMI-S Specification V1.0.2, we need the software component mentioned on the sites listed in this section. The CIM Agent for the DS4000 family is provided by Engenio and is called the Engenio SANtricity SMI-S Provider. For the TotalStorage Productivity Center CIMOM code levels and the technical documentation, go to the following site:
http://www-03.ibm.com/servers/storage/support/software/

The following Engenio site lists the most current CIMOM levels. (Note: The information contained on this Web site is not for use with TotalStorage Productivity Center.)
http://www.engenio.com/products/smi_provider.html

Go to the following Web site for information about TotalStorage Productivity Center:
http://www.engenio.com/products/smi_provider_archive.html

Before going any further, be sure to run one of the supported firmware levels mentioned there. Attention: There are Web sites from Engenio where you can find the very latest SMI-S provider, but that code is inappropriate for use with TotalStorage Productivity Center. Throughout the remainder of this chapter, the terms DS4000 and FAStT are used interchangeably.


The code level used for this redbook is V1.1.0.614. This information can be found in the following file on your CIMOM server:
C:\Program Files\EngenioProvider\SMI_SProvider\doc\ChangeLog.txt

1. From the Web site mentioned previously, select the operating system of the server on which the Engenio SMI-S Provider is to be installed. For Windows, for example, you download a setup.exe file. Save it to a directory on the server on which you will be installing the Engenio SMI-S Provider.
2. Launch the setup.exe file to begin the Engenio SMI-S Provider installation. The InstallShield Wizard for Engenio SMI-S Provider window opens (see Figure 7-49). Click Next to continue.

Figure 7-49 Engenio SMI-S Provider welcome panel

3. The Engenio License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 7-50).

Figure 7-50 Engenio License Agreement

4. The System Info window opens. The minimum requirements are listed along with the install system's free disk space and memory attributes, as shown in Figure 7-51 on page 276. If the target system fails the minimum requirements evaluation, a notification window appears and the installation fails. Click Next to continue.

Figure 7-51 System Info window

5. The Choose Destination Location window opens. Click Browse to choose another location or click Next to begin the installation of the DS4000/FAStT CIM agent (see Figure 7-52).

Figure 7-52 Choose a destination

The InstallShield Wizard prepares and copies the files into the destination directory. Figure 7-53 on page 277 shows the installation progress.


Figure 7-53 Install progress

6. In the Enter IPs and Hostnames window, enter the IP addresses and hostnames of the DS4000 devices this CIM Agent will manage as shown in Figure 7-54.

Figure 7-54 DS4000 device list

7. Use the Add New Entry button to add the IP addresses or host names of the DS4000 devices with which this DS4000 CIM Agent will communicate. Enter one IP address or host name at a time until all the DS4000 devices have been entered and click Next (see Figure 7-55 on page 278).


Figure 7-55 Enter host name or IP address

Important: Do not enter the IP address of a DS4000 device in multiple DS4000 CIM Agents within the same subnet. This can cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the DS4000 devices. 8. If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button, opening Windows Explorer. Locate and select the file, then click Open to import the file contents. The file where all the IP-Addresses or hostnames are collected in is located in this Microsoft Windows directory:
C:\Program Files\EngenioProvider\SMI_SProvider\bin\arrayhosts.txt

When all the DS4000 device hostnames and IP addresses have been entered, your panel should list them all as shown in Figure 7-56.

Figure 7-56 Device list

Click Next to start the Engenio SMI-S Provider Service (see Figure 7-57 on page 279).


Figure 7-57 Provider Service starting

When the Service has started, the installation of the Engenio SMI-S Provider is complete (see Figure 7-58).

Figure 7-58 Installation complete

During the start of the service, the Engenio code processes all the entries in the arrayhosts.txt file. The configuration is stored in another file, named:
C:\Program Files\EngenioProvider\SMI_SProvider\bin\providerStore

Whenever you add or remove registered DS4000/FAStT controllers and restart the Engenio CIMOM, or run a new discovery, the providerStore and arrayhosts.txt files are updated with a new time stamp.

arrayhosts.txt file example


If you chose the default installation directory, the installer creates a file named arrayhosts.txt in the following path (see Figure 7-59):
C:\Program Files\EngenioProvider\SMI_SProvider\bin\arrayhosts.txt

In this file, the IP addresses of installed DS4000 units can be reviewed, added, or edited. After editing this file, you must restart the service.

Figure 7-59 Arrayhost file
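If you maintain the arrayhosts file from a script, the format is simply one host name or IP address per line. The following sketch makes that assumption; paths and host values are examples only, and the provider service must still be restarted after editing the real file.

```python
# Sketch only: write and re-read an arrayhosts.txt-style file (one DS4000
# host name or IP address per line). Host values below are examples.
from pathlib import Path

def write_arrayhosts(path, hosts):
    Path(path).write_text("\n".join(hosts) + "\n")

def read_arrayhosts(path):
    return [ln.strip() for ln in Path(path).read_text().splitlines() if ln.strip()]
```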


Verifying Engenio SMI-S Provider Service availability


You can verify that the Engenio SMI-S Provider service has started from the Windows Services window, as shown in Figure 7-60. If you change the contents of the arrayhosts file to add or delete DS4000 devices, you must restart the LSI Provider service using the Windows Services panel.

Figure 7-60 LSI Provider Service

Another method is to use the tool supplied in the installation directory of your DS4000 CIM Agent. The tool is named runservice.bat and can be found in the path:
C:\Program Files\EngenioProvider\SMI_SProvider\bin

Figure 7-61 shows the runservice.bat parameters and their output.

runservice.bat -s
Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name
Service Name: Engenio SMI-S Provider.
Stopping Engenio SMI-S Provider Server.
Engenio SMI-S Provider Server stopped.

runservice.bat -q
Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name
Service Name: Engenio SMI-S Provider.
Service Status: SERVICE_STOPPED

runservice.bat -t
Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name
Service Name: Engenio SMI-S Provider.
Starting Engenio SMI-S Provider Server.
Engenio SMI-S Provider Server started.

runservice.bat -q
Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name
Service Name: Engenio SMI-S Provider.
Figure 7-61 Sample usage of runservice.bat
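If you wrap runservice.bat in a script, the status line from the -q output can be parsed. This sketch assumes the output wording shown in Figure 7-61:

```python
# Sketch only: extract the service state from "runservice.bat -q" output.
def provider_state(output: str):
    for line in output.splitlines():
        if line.startswith("Service Status:"):
            return line.split(":", 1)[1].strip()
    return None  # no status line found

print(provider_state("Service Name: Engenio SMI-S Provider.\n"
                     "Service Status: SERVICE_STOPPED"))  # SERVICE_STOPPED
```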


7.6.1 Registering DS4000 CIM agent if SLP-DA is in place


If you had to set up an SLP DA structure, the DS4000 CIM Agent needs to be registered with an SLP DA. This might be the case when the DS4000 CIM Agent is in a different subnet from the IBM TotalStorage Productivity Center environment. The registration is not currently performed automatically by the CIM Agent. You register the DS4000 CIM Agent with the SLP DA from a command prompt using the slptool command; an example follows this paragraph. You must change the IP address to reflect the IP address of the workstation or server where you installed the DS4000 CIM Agent. The IP address of our DS4000 CIM Agent is 9.1.38.39 and the port is 5988. Execute this command on your SLP DA server if an SLP DA setup was chosen. In our ITSO lab, we did not use an SLP DA server. Go to the command line, switch to the directory listed here, and issue the following command. For example, switch to this directory:
C:\Program Files\IBM\cimagent\slp

Execute slptool register service:wbem:http://9.1.38.39:5988
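Because the registration URL is easy to mistype, it can help to build it programmatically; note that the URL uses forward slashes (http://). This is a sketch only; the host and port are the lab values from this section.

```python
# Sketch only: build the slptool registration URL for a CIM Agent.
# Note the forward slashes in "http://".
def slp_service_url(host, port, scheme="http"):
    return f"service:wbem:{scheme}://{host}:{port}"

print("slptool register " + slp_service_url("9.1.38.39", 5988))
# slptool register service:wbem:http://9.1.38.39:5988
```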

Important: You cannot have the DS4000 management password set if you are using IBM TotalStorage Productivity Center, because the Engenio CIMOM has no capability to keep track of the user ID and password combinations that would be necessary to access the managed DS4000/FAStT subsystems.

At this point, you can run the following command on the SLP DA server, or on any other system where SLP is installed, to verify that the DS4000 CIM Agent is registered with the SLP DA.
slptool findsrvs wbem

The response from this command will show the available services which you may verify.

7.6.2 Verifying and managing CIMOM availability


Verify that TotalStorage Productivity Center can authenticate and discover the CIMOM agent services that are either registered by the SLP DA or manually entered into the CIMOM registration. The Engenio Provider does not require a specific user ID and password to authenticate; enter any non-blank user ID and password when adding the CIMOM from the TPC GUI (see Figure 7-62 on page 282). Enter the following values in the IBM TotalStorage Productivity Center panel:
Host: the IP address or fully qualified host name of the Engenio SMI-S Provider server
Port: unsecure port 5988 (Engenio did not support secure communication at the time of writing)
Username: any user name
Password: any password (not null)
Interoperability Namespace: /interop
Protocol: HTTP
Display name: any name that helps you identify the CIM Agent


Figure 7-62 Add CIMOM

Starting the CIM Browser


Following are screen captures showing how to log on and find information about your subsystem. Attention: As with most CIM Agents, the Engenio Provider is shipped with its own workshop, or browser, application. CIM Browsers are not intended as an end-user tool; they are a tool for the developers of management applications, to be used when an error is encountered. That is why they are shipped with the CIM Agent. Be careful not to delete or destroy anything while using the workshop.

1. Start the cimworkshop.bat found in the path:


C:\Program Files\EngenioProvider\wbemservices\bin

2. Enter a value for userid and password. Because the CIMOM runs unauthenticated, you can use any values (see Figure 7-63 on page 283).

282

IBM TotalStorage Productivity Center: The Next Generation

Figure 7-63 Logon to the CIMWORKSHOP and entry screen

3. After you log on, you must switch to another namespace by using the drop-down button. For the IBM DS4000 it is LSISSI as shown in Figure 7-64 on page 284.


Figure 7-64 Change the namespace and navigate to the physical elements.

4. From the Action Menu, select Show Instances, as shown in Figure 7-65 on page 285.


Figure 7-65 Show instances - the detailed values of the elements.

7.7 Configuring CIMOM for SAN Volume Controller


The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and provides TotalStorage Productivity Center with access to SAN Volume Controller clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage Productivity Center user name and password. Figure 7-66 on page 286 shows the communication between TotalStorage Productivity Center and the SAN Volume Controller environment.


Figure 7-66 TotalStorage Productivity Center SVC communication

For additional details about how to configure the SAN Volume Controller Console, refer to the IBM Redbook IBM TotalStorage Introducing the SAN Volume Controller and SAN Integration Server, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage Productivity Center superuser name and password (the account specified in the TotalStorage Productivity Center configuration panel, as shown in 7.7.1, Adding the SVC TotalStorage Productivity Center for Disk user account on page 286) match an account defined on the SAN Volume Controller console. In our case we implemented the username TPCSUID and password ITSOSJ. This user ID and password combination must be used later when you authenticate the CIMOM agent in the TPC GUI. You might want to adopt a similar naming convention and set up the username and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center.

7.7.1 Adding the SVC TotalStorage Productivity Center for Disk user account
As stated previously, you should implement a unique user ID to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps:
1. Log in to the SAN Volume Controller console with a superuser account.
2. Click Users under My Work on the left side of the panel (see Figure 7-67 on page 287).


Figure 7-67 SAN Volume Controller console

3. Select Add a user in the drop-down under Users panel and click Go (see Figure 7-68).

Figure 7-68 SAN Volume Controller console Add a user


4. An introduction screen opens; click Next (see Figure 7-69).

Figure 7-69 SAN Volume Controller Add a user wizard

5. Enter the User Name and Password and click Next (see Figure 7-70 on page 289).


Figure 7-70 SAN Volume Controller Console Define users panel

6. Select your candidate cluster and move it to the right, under Administrator Clusters (see Figure 7-71). Click Next to continue.

Figure 7-71 SAN Volume Controller console Assign administrator roles

7. Click Next after you Assign service roles (see Figure 7-72 on page 290).


Figure 7-72 SAN Volume Controller Console Assign user roles

8. Click Finish after you Verify user roles (see Figure 7-73 on page 291).


Figure 7-73 SAN Volume Controller Console Verify user roles

9. After you click Finish, the Viewing users panel opens (see Figure 7-74).

Figure 7-74 SAN Volume Controller Console Viewing Users


Confirming SAN Volume Controller CIMOM availability


Before you proceed, verify that the CIMOM on the SAN Volume Controller console is listening for incoming connections. To do this, issue a telnet command from the server where TotalStorage Productivity Center resides. A successful telnet to port 5989 (indicated by a black screen with a cursor in the top left) tells you that the CIMOM on the SAN Volume Controller console is active. If the telnet connection fails, you see a panel similar to the one in Figure 7-75.

Figure 7-75 Example of telnet fail connection
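The interactive telnet test can also be scripted, for example when you want to check several CIMOMs at once. This is a sketch only: it assumes bash (for its /dev/tcp pseudo-device) and the coreutils timeout command, and the host name shown is a placeholder.

```shell
# Sketch: non-interactive check that a CIMOM port accepts connections.
# Requires bash (/dev/tcp) and the coreutils "timeout" command.
port_open() {
  # $1 = host, $2 = port; exit status 0 if something is listening
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example (placeholder host name):
#   port_open svcconsole.example.com 5989 && echo "SVC CIMOM is listening"
```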

7.7.2 Registering the SAN Volume Controller host in SLP


The next step in detecting a SAN Volume Controller is to register the SAN Volume Controller console manually with the SLP DA.

Attention: Starting with SVC V2.1, the default CIM port is 5999. When upgrading SVC from a version earlier than 2.1 to version 2.1 or later, the CIM port should be changed from 5989 to 5999. For detailed information, refer to this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCAFUL&context=STCKNX4&dc=DB500 &uid=ssg1S1002559&loc=en_US&cs=utf-8&lang=en

To register the SAN Volume Controller Console, perform the following command on the SLP DA server:
slptool register service:wbem:https://ipaddress:5999

Where ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SVC console registration.

Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center server, SLP registration is automatic, so you do not need to perform the SLP registration manually.
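If you register several consoles, it can help to build the service URL in one place so the format stays consistent. A minimal sketch; the helper name and the sample address are our own, not part of the product:

```shell
# Hypothetical helper that builds the SLP service URL for an SVC console.
svc_service_url() {
  # $1 = console IP address or host name, $2 = CIM port (5999 for SVC V2.1+)
  printf 'service:wbem:https://%s:%s' "$1" "$2"
}

# Example registration on the SLP DA server:
#   slptool register "$(svc_service_url 10.1.1.10 5999)"
```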

7.8 Configuring CIMOM for McData switches


Before running performance monitoring jobs against switches, you must determine whether the switch you want to monitor uses an embedded CIM agent or a proxy CIM agent.


As an example of exploiting SMI-S for switch monitoring, we focus our tests on the two McDATA switches available to us. Like Brocade, McDATA provides a proxy CIM Agent to connect to its devices. To manage a McDATA fabric using SMI-S V1.1, we need the software component mentioned on this site:
http://www.snia.org/ctp/conformingproviders/mcdata

At the time of writing, the current version is the McDATA SMI-S Interface V1.1.

The McDATA SMI-S Interface is a software product that provides SMI-S support for McDATA director and switch products. It exposes a WBEM (specifically, CIM-XML) interface for their management.

7.8.1 Planning the installation


These are the planning considerations we suggest you think through before installing the SMI-S Interface.

Prerequisites
The McDATA SMI-S Interface solution requires the hardware and software described in the following sections. Table 7-1 lists the switch hardware supported for this release of McDATA SMI-S Interface and the minimum levels of firmware required by the hardware.
Table 7-1   Supported hardware

Manufacturer   Product          Minimum firmware supported
McDATA         ED-5000          Firmware version 4.1
McDATA         Intrepid 6064    EOS version 7.0
McDATA         Intrepid 6140    EOS version 7.0
McDATA         Sphereon 3016    EOS version 7.0
McDATA         Sphereon 3032    EOS version 7.0
McDATA         Sphereon 3216    EOS version 7.0
McDATA         Sphereon 3232    EOS version 7.0
McDATA         Sphereon 4300    EOS version 7.0
McDATA         Sphereon 4500    EOS version 7.0
McDATA         Sphereon 4700    EOS version 7.0
McDATA         Intrepid 10000   E/OSn version 6.3

The McDATA SMI-S Interface, version 1.1, implementation supports the following operating system (OS) platforms:
- Windows 2000 with Service Pack 2 or higher
- Solaris 9

7.8.2 Supported configurations


McDATA SMI-S Interface communicates with a device using one of two modes:
- Direct Connection mode: the McDATA Provider communicates directly with the SAN device.
- EFCM Proxy mode: the McDATA Provider and the device communicate through EFCM only.
If you prefer to keep the EFCM Management Software as an additional interface to your switch infrastructure, you would most likely choose the second method, EFCM Proxy mode. The EFCM Proxy mode has the following attributes that differ from Direct Connection:
1. The McDATA Provider always manages the same set of devices as those managed by EFCM.
2. The McDATA Provider cannot add or delete a device from its list of managed devices; this must be done through EFCM. When EFCM adds a device, that device is automatically added to the McDATA Provider. When EFCM deletes a device, the device is removed from the McDATA Provider.
3. If EFCM is running when the McDATA Provider starts, any devices being managed by EFCM are automatically added to the McDATA Provider.

Direct Connection mode


Figure 7-76 on page 295 depicts the Direct Connection mode, in which a WBEM Server application supports the standard CIM/WBEM interfaces to a client and accesses the managed device using (possibly) nonstandard or proprietary means. Communications take place in Direct Connection mode, in which the data passes directly between the McDATA Provider and the device.


Figure 7-76 Direct Connection mode

EFCM Proxy model


Figure 7-77 on page 296 depicts the communications in EFCM Proxy mode. In this case, EFCM manages the switch and communications between the device and the McDATA Provider flow through EFCM. When EFCM is used to manage a device, EFCM assumes the management interface with a switch or director. As a result, McDATA SMI-S Interface communications must use EFCM to communicate with the switch.


Figure 7-77 EFCM Proxy model

If you use the EFCM, we recommend using the EFCM Proxy model.

Our lab setup


We installed the SMI-S Interface on a Microsoft Windows 2000 server and followed the EFCM Proxy model. Our SMI-S Interface points to an EFCM server that can be reached over IP.

7.8.3 Installing SMI-S Interface


The SMI-S Interface code can be downloaded from the McDATA Web site. To download the code, you must log in to the McDATA File Center.
http://www.mcdata.com/filecenter/template?page=index

The McDATA OPENconnectors SMI-S Interface User Guide can be downloaded from the Resource Library.
http://www.mcdata.com/wwwapp/resourcelibrary/jsp/navpages/index.jsp?mcdata_category=www_res ource&resource=index

In this section, we guide you through an EFCM Proxy model installation.

Note: The figures shown were taken from an SMI-S V1.0 installation. The installation at SMI-S V1.1 might vary slightly.


1. Locate the installation image and launch Setup.exe.
2. When the panel shown in Figure 7-78 appears, click Next to proceed.

Figure 7-78 Setup

3. You are presented with the License Agreement. Select I Accept the terms of the license agreement as shown in Figure 7-79 and click Next.

Figure 7-79 License Agreement

The installer gives you the option of installing Service Location Profile (SLP) software. If you already have SLP installed in your system, you can choose to install SMI-S Interface only. In this case, you receive a warning message. 4. To install both components, select SMI-S Interface - SLP as shown in Figure 7-80 on page 298 and then click Next.


Figure 7-80 Install options

5. You are prompted for the installation folder. Either change to a different directory or accept the default shown (see Figure 7-81). Click Next to continue.

Figure 7-81 Installation folder

6. The next panel is the summary panel (Figure 7-82 on page 299) that allows you to start the installation. Click Install to continue.


Figure 7-82 Ready to install

You can see the installation progress, as shown in Figure 7-83.

Figure 7-83 Installation progress

After the McDATA SMI-S interface is installed, the installer launches the SMI-S interface server configuration tool, as detailed in the next subsection.

7.8.4 Configuring the SMI-S interface


To configure the SMI-S Interface, follow these steps:
1. As soon as installation completes, a Java panel is displayed, as in Figure 7-84 on page 300. It prompts you for the following information:
- Network Address: specify the network address of the EFCM server for proxy connections.
- User ID: specify the user name used to log in to the EFCM server. This account must have been previously defined on EFCM.
- Password: specify the password used to log in to the EFCM server.
Select Test Connection to test the connection settings for the EFCM server.


2. A message is displayed indicating whether the test succeeded or failed. If the user ID or password is wrong, a message explains that the user ID could not be validated. If the network address is wrong, a message explains that the server could not be found at that address. If the test succeeds, a message explains that the server was found and the user ID validated.
3. Click OK to save the changes.

Figure 7-84 SMI-S Interface Server Configuration

4. When you click Test Connection, you are presented with the progress panel in Figure 7-85.

Figure 7-85 Test progress

5. When the connection is successfully tested, you see the message shown in Figure 7-86. Click OK to continue.

Figure 7-86 Connection successful

6. The final panel shown in Figure 7-87 on page 301 indicates the McDATA provider is successfully installed. Click Finish to exit the installer.


Figure 7-87 Installation complete

7. Verify the configuration in the CIMOM properties file as shown in Figure 7-88.
C:\McDATA_Provider\wbemservices\cimom\bin\mcdataProductInterface.Properties

Figure 7-88 CIMOM properties file

We suggest that you verify from the Windows Services panel that both the Service Location Protocol and SMI-S Interface Server services are running. If the JRE is not yet installed on your system, we recommend installing Java JRE 1.4.2 before starting the configuration tool. Use these steps to access the server configuration window:
1. Open a Command Prompt window.
2. Change the working directory to the wbemservices\cimom\bin directory. For Windows, the default location is C:\McDATA_Provider\wbemservices\cimom\bin.
3. Run the McDATA_server_configuration_tool.bat file. The login window is displayed, as in Figure 7-89.

Figure 7-89 SMI-S login


4. Enter your user ID and password in the appropriate fields. The default user ID is Administrator and the default password is password. Click Login.
5. The Server Configuration window (see Figure 7-90) opens. You can set options, see users, and perform other tasks.

Figure 7-90 SMI-S Interface server configuration panel

Use the HTTP Interface field to enable and disable the use of the Hypertext Transfer Protocol (HTTP) interface for the SMI-S Interface server. When the HTTP interface is enabled, a CIM client can communicate with the SMI-S Interface using HTTP. To enable the HTTP interface, select Enable. To disable it, clear the Enable check box.

Important: By default, HTTPS (secure) is always enabled.

6. If you click the Management Platform tab, you can import fabric definitions from EFCM into the SMI-S library. As shown in Figure 7-91 on page 303, you can:
- Import the zone library
- Import nicknames
- Update the login ID password


Figure 7-91 Management platform tab

7. When the connection is established successfully between the SMI-S Interface and EFCM, you can click Import Zone Library. You are presented with the panel in Figure 7-92, where the active zone set in your fabric is shown. By default, it is imported into the SMI-S library.

Figure 7-92 Import zone set library

You are now ready to work with McDATA SMI-S through TotalStorage Productivity Center.

7.8.5 Verifying the connection with TPC


At this point you are ready to test the TPC connection to the McDATA SMI-S Interface. You can either have TPC discover it automatically or add it manually using the Add CIMOM task.


You are prompted to provide the following data:
- Host: the IP address or the host name of the SMI-S Interface server
- Port: by default, port 5989 for HTTPS, or 5988 if you enabled HTTP previously
- Username: Administrator
- Password: password
- Interoperability Namespace: /interop
- Protocol: either HTTP or HTTPS, depending on your setting
- Display Name: any name you want to identify the CIM Agent
- Test CIMOM connectivity: we suggest that you leave the box checked and let TPC perform the connection test.

Figure 7-93 Add McDATA CIM Agent

Important: The Interoperability namespace for McDATA switches is /interop.

7.9 Configuring the CIM Agent for Brocade Switches


To manage a Brocade fabric using SMI-Specifications V1.1, we need the software component mentioned on this site:
http://www.snia.org/ctp/conformingproviders/brocade

At the time of writing, we are using Brocade SMI-S Agent V110.3.0a. The Brocade SMI Agent (SMI-A) is a proxy agent for multiple fabrics and resides on a separate host. The SMI-A does not require any modification or upgrade to deployed fabrics when it is deployed; all the support required in Brocade switches is already in place. At the time of writing, this link can be used to obtain the Brocade SMI-S Agent:
http://www.brocade.com/support/SMIAGENT.jsp

7.9.1 Planning the installation


Review the Brocade SMI-S Agent Developer's Guide (which includes installation instructions) and the most recent Brocade SMI-S release notes for supported platforms and minimum firmware revisions.


Your software environment must meet the following requirements before you install the SMI-A:
- A minimum of 256 MB of RAM
- One of the following operating systems (32-bit versions only):
  - Microsoft Windows 2000 Professional or Windows Server 2003
  - Sun Solaris version 8, 9, or 10
  - Linux Red Hat AS 3.0 or SUSE 9.1 Professional
- Sun Microsystems JRE version 1.4.2_06, which is bundled with the SMI-A and automatically installed with it

Attention: SMI-A 110.3.0 is not compatible with JDK 1.5.

The memory required for the SMI-A depends on the size of the fabric and the number of fabrics being managed. You should increase the memory accordingly as the number of fabrics (number of switches) being managed increases. You should also increase the memory heap size for the JVM based on the number of switches, switch ports, and devices.

7.9.2 Installing the CIM Agent


To install the CIM Agent, follow these steps: 1. Locate the installation image of the Brocade SMI Agent and launch the CD_Image\windows\install.exe file. You receive the installer window shown in Figure 7-94.

Figure 7-94 Launching the installer

2. You are prompted to accept the Brocade License Agreement. Select Accept the terms of the License Agreement and click Next, as shown in Figure 7-95 on page 306.


Figure 7-95 License Agreement

3. The installer checks the system prerequisites before starting the installation as shown in Figure 7-96. If the test is successful, click Next.

Figure 7-96 Disk space requirement

4. Before starting the installation, you are advised to close any other open applications (Figure 7-97 on page 307). Click Next to continue.


Figure 7-97 Caution

5. You are prompted for the installation directory. Either accept the default or enter a custom directory, but be careful: the path must not contain any spaces (see Figure 7-98).

Figure 7-98 Install directory

6. The installer notifies you that JRE 1.4.2 will be installed on your system. Click OK in the panel in Figure 7-99 to continue.

Figure 7-99 Java install warning


The installation progress panel is shown in Figure 7-100.

Figure 7-100 Install progress

7. When the code has been copied, you are presented with the Fabric Manager Server Configuration panel. The SMI-A can connect to the Brocade Fabric Manager server database, if available, and retrieve historical port statistics. Otherwise, you can leave those fields blank (this step can be done later) and proceed by clicking Next, as shown in Figure 7-101.

Note: Brocade Fabric Manager is not required.

Figure 7-101 Fabric Manager configuration

The SMI-A installation wizard provides options for enabling mutual authentication for clients and indications. This can also be done after installation, without rerunning the installation wizard.


8. If you enable mutual authentication, you should disable the CIM-XML client protocol adapter (CPA) for the SMI-A so that the clients can use only HTTPS communication. If you do not disable the CIM-XML CPA, then any client can communicate with the SMI-A using HTTP access. The client and server certificates that are used in the mutual authentication are only private certificates that are generated by Brocade and are not verified by any certificate authority. To avoid using certificates, just select No and click Next, as shown in Figure 7-102.

Figure 7-102 Mutual authentication

The Enable Mutual Authentication for Indications panel is shown in Figure 7-103 on page 310.
9. You can restrict delivery of indications using mutual SSL authentication to only those clients that are trusted by the SMI-A. By default, mutual authentication for indications is disabled, which means that the SMI-A uses SSL to send CIM-XML indications to a WBEM client listener but does not attempt to verify the identity of that listener. When mutual authentication for indications is enabled, only those clients whose certificates have been added to the SMI-A Indications TrustStore can use SSL to receive indications from the SMI-A. That is, the SMI-A must have a TrustStore that contains a certificate for an entry in the client's Indications KeyStore.


Figure 7-103 Mutual authentication for indications

The Enable Security for SMI Agent panel is shown in Figure 7-104.
10. When security is enabled, Windows authentication on the system where the SMI-A is installed is used by default for authenticating the user name and password. If domain authentication is enabled on Windows, the corresponding domain is used instead. If you do not want to enable security for the SMI Agent, select No and click Next.

Figure 7-104 Security settings

11. You are presented with the event settings panel (Figure 7-105 on page 311). The SMI-A delivers events in the form of two types of indications: alert indications and life-cycle indications. If you do not need events to be managed by the SMI-A, just click Next.


Figure 7-105 Eventing and ARR

12. In the logging settings panel, you can choose whether SMI-A logs go to the console, to a file, or both. To enable both, select both Yes buttons and then click Next, as shown in Figure 7-106.

Figure 7-106 logging settings

13.Select a file to be used as a log file (see Figure 7-107 on page 312).


Figure 7-107 Log file

14. Enter the details of each switch and director that you want to manage through the SMI-A, as shown in Figure 7-108. You need to add both clusters in the M10 switch.
- Proxy IP: IP address of the switch or director
- Username: user name of a switch administrator
- Password: password of the previously defined administrator
- Login-scheme: leave standard
- Number of RPC handles: leave the default of 5

Figure 7-108 Add switch

15.When all your switches have been entered successfully, you see the complete list in the Configure Multiple Proxy panel as shown in Figure 7-109 on page 313. Click Next to continue.


Figure 7-109 Switch list

16. All the configuration settings are saved to the files listed in Figure 7-110. Take note of their locations and click Next.

Figure 7-110 Configuration files

17. You can now choose whether to start the SMI-A as a Windows service. To do so, select Yes and click Next, as shown in Figure 7-111 on page 314.


Figure 7-111 Windows service

18. The panel in Figure 7-112 indicates that you have installed the Brocade SMI Agent successfully. Click Done to exit the installer.

Figure 7-112 Successful install

19.To check the Windows Service status, open the Services panel to verify that the Brocade SMI Agent is running as in Figure 7-113 on page 315.


Figure 7-113 Brocade SMI Agent service

7.9.3 SLP installation


By default, the Brocade SMI Agent does not install an SLP service. However, the SLP code is provided in the SMI Agent image. If you need to install it, complete the following tasks:
1. From a command prompt window, change to the directory where slpd.bat is located:
cd C:\<SMIAgent>\agent\bin

2. Run slpd -install to install the SLP service into Windows.

7.9.4 Changing the ports used in the CIM Agent


If you must use different ports for the Brocade CIM Agent, follow the steps described here.
1. Open a Linux shell.
2. Change to the directory:
/opt/SMIAgent/agent/server/jserver/bin

3. Issue the command to stop the server:
./stop_server
4. Issue the following command for each port:
echo HTTPPort=5970> cimxmlcpa.properties
echo HTTPSPort=5971> cimxmlscpa.properties
5. Issue the command to start the server:
./start_server
6. Verify that the CIMOM is now listening on the newly assigned ports.
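The echo steps can be wrapped in a small function so both properties files stay in sync. This is a sketch only; the function name is our own, and it assumes you run it from the jserver/bin directory given in step 2.

```shell
# Sketch: write the Brocade SMI-A CIM-XML port properties files.
# Run from /opt/SMIAgent/agent/server/jserver/bin (see step 2 above).
set_smia_ports() {
  # $1 = HTTP port, $2 = HTTPS port
  echo "HTTPPort=$1"  > cimxmlcpa.properties
  echo "HTTPSPort=$2" > cimxmlscpa.properties
}

# Example, matching the ports used in step 4:
#   ./stop_server && set_smia_ports 5970 5971 && ./start_server
```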

7.9.5 Connecting the CIM Agent with TPC


To connect the CIM Agent with TPC, follow these steps:
1. Enter the information in the fields for the CIM Agent from TPC, as shown in Figure 7-114 on page 316:
- Host: Brocade SMI Agent IP address (in our case, 9.1.39.171)
- Port: either 5989 (secure) or 5988 (unsecure) by default, or the appropriate port numbers if you have changed them
- Username: if you specified unsecure, you can enter any user name
- Password: if you specified unsecure, you can enter any password
- Interoperability Namespace: /interop
- Protocol: HTTPS (secure) or HTTP (unsecure)

- Display name: any name that helps you identify the CIM Agent
We suggest that you keep the default selection for Check the CIMOM connectivity before adding.
2. When all the entries have been filled in as requested, click Save.

Figure 7-114 Add CIMOM to TPC

Important: The Interoperability namespace is /interop for Brocade switches.

7.10 Configuring the CIM Agent for Cisco switches


To manage a Cisco fabric using SMI-Specifications V1.0.2, we need the software component mentioned on this site:
http://www.snia.org/ctp/conformingproviders/cisco

At the time of writing, the current version is Cisco SAN-OS V2.0; SNIA had not yet published the SAN-OS version that conforms to SMI-S 1.1. Each switch or director in the Cisco MDS 9000 Family includes an embedded CIM server. The CIM server communicates with any CIM client to provide SAN management compatible with SMI-S. The CIM server includes the following standard profiles, subprofiles, and features as defined in SMI-S:
- Service Location Protocol version 2 (SLPv2)
- Server profile
- CIM indications
- Fabric profile
- Zoning Control subprofile
- Enhanced Zoning and Enhanced Zoning Control subprofile
- Switch profile, including the Blade subprofile
- xmlCIM encoding and CIM operations over HTTP as specified by the WBEM initiative


- HTTPS using Secure Sockets Layer (SSL); HTTPS is optional, but provides enhanced security by encrypting communications between the CIM server and the CIM client.
Cisco MDS SAN-OS Release 2.0(1b) and later supports SLPv2, CIM indications, the Server profile, and SMI-S 1.0(2). The technical document that describes how to set up the Cisco CIM server is the Cisco MDS 9000 Family CIM Programming Reference, which can be found at the following URL:
http://www.cisco.com/en/US/products/ps5989/products_programming_reference_guide_ chapter09186a0080211ac0.html

Be sure to run the correct level of SAN-OS to support the needed SMI Specifications version.

7.10.1 Enabling and configuring the CIM Agent


The CIM server can be configured through the CLI. Configuring the CIM server involves enabling it. For added security, you can install an SSL certificate to encrypt the login information and enable HTTPS before enabling the CIM server. The CIM server requires HTTP, HTTPS, or both to be enabled. By default, HTTP is enabled and secure HTTPS is disabled. Using HTTPS encrypts all management traffic between the CIM client and the CIM server and is the recommended configuration. We describe the steps to configure the CIM server in unsecure mode. The steps to configure it in secure mode are similar, but you first need a valid certificate; you can use OpenSSL to create the private key and certificate needed by the CIM server. To configure a CIM server using the HTTP protocol in Cisco MDS 9000 Family products, follow these two steps from a telnet command line:
1. switch# config t
   Enters configuration mode.
2. switch(config)# cimserver enable
   Enables the CIM server using the default HTTP (non-secure) protocol.

7.10.2 Connecting the CIM Agent with TPC


You are now ready to add the CIMOM into the TPC GUI.
1. Fill in the fields, as described in Figure 7-115 on page 318:
- Host: Cisco MDS 9000 switch IP address
- Port: either 5989 (secure) or 5988 (unsecure)
- Username: a valid username
- Password: the password of the previously defined username
- Interoperability Namespace: for Cisco, it is root/cimv2
- Protocol: HTTPS (secure) or HTTP (unsecure)
- Display name: any name that can help you identify the CIM Agent


2. We suggest that you keep the default selection for Check the CIMOM connectivity before adding.

Figure 7-115 Add CIMOM

Important: The Interoperability namespace for Cisco switches is root/cimv2 (no leading slash).

7.11 Configuring the CIM Agent for IBM Tape Libraries


This topic provides a high-level overview of the procedures for installing the IBM TotalStorage SMI-S Agent for Tape on a Linux operating system. The IBM SMI-S Agent for Tape provides an interface that enables management applications to exploit tape library devices in a standardized way.

7.11.1 Prerequisites
All software required to run the IBM SMI-S Agent for Tape is contained on the CD-ROM. The recommended operating system and hardware requirements are listed here:
- IBM xSeries with an Intel Pentium 4 processor and a minimum of 550 MB of RAM
- SUSE Linux Enterprise Server 9
The SMI-S Agent for Tape code can be downloaded from the following Web site:
http://www-03.ibm.com/servers/storage/support/software/smisagent/downloading.html

7.11.2 SMI-S Agent for Tape Installation


Perform the following steps to prepare the SMI-S Agent for Tape installation on Linux systems. They are shown only as a sample and can vary on your system.
1. Download the SMI-S Agent from the Web site mentioned above.
2. Transfer the file IBM-Tape-SMIS-Agent-1.2.1.502.tgz to your Linux system, if you have not already downloaded the file onto that system.
3. Switch to the directory where you transferred the file.
IBM TotalStorage Productivity Center: The Next Generation

4. Decompress the file using this command:


gunzip -d IBM-Tape-SMIS-Agent-1.2.1.502.tgz

This results in a file named IBM-Tape-SMIS-Agent-1.2.1.502.tar.
5. Extract this file using the command:
tar -xvf IBM-Tape-SMIS-Agent-1.2.1.502.tar
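On systems with GNU tar (an assumption; SLES 9 ships GNU tar), steps 4 and 5 can be combined into a single command using the -z option, which decompresses and extracts in one pass. The sketch below demonstrates the command on a small throwaway archive so it can be verified anywhere; substitute the real file name on your system.

```shell
# Build a throwaway .tgz so the command can be tried safely.
mkdir -p demo && echo "payload" > demo/file.txt
tar -czf IBM-Tape-SMIS-Agent-demo.tgz demo

# Decompress and extract in one step (GNU tar's -z flag); equivalent to
# running gunzip -d followed by tar -xvf on the real agent archive.
tar -xzvf IBM-Tape-SMIS-Agent-demo.tgz
```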

The following screen captures show the installation process.
1. Go to the directory:
/tmp/TAPE-CIMOM-SLES9/SMI-S/LINUX

See Figure 7-116.
2. Start the graphical installation program by entering the command:

./launchpad-linux

Figure 7-116 Launch the graphical installation program

3. The SMI-S Agent for Tape main panel is presented as shown in Figure 7-117 on page 320. Select the Installation Wizard.


Figure 7-117 Main SMI-S Agent for Tape

4. The next panel is the wizard Welcome (see Figure 7-118). Refer to the Installation Guide and the Readme to modify the CIMOM installation afterwards.

Figure 7-118 Reference to the installation guide and Readme


5. The documentation can be found in the source directory from where you started the installation. These documents are not copied to the installation target directory.
KLCHL5H:/tmp/TAPE-CIMOM-SLES9/SMI-S/doc

In this directory are the following files:

-r-xr-xr-x  1 57686 1311   39310 Jan 10 16:28 Readme_smis.txt
-r-xr-xr-x  1 57686 1311 1253083 Jan 10 16:28 install_guide.pdf
-r--r--r--  1 57686 1311    1877 Jan 10 16:28 overview.unix.txt

6. Click Next on this panel and the License Agreement is shown. On this screen (see Figure 7-119), check the radio button to accept the license agreement. Click Next to continue.

Figure 7-119 Prompt to the License Agreement

7. The dialog box for the installation target is shown (see Figure 7-120). Accept the default installation directory path or make changes as necessary. Click Next to continue.

Figure 7-120 Default installation directory

8. The next panel is the summary of the installation information, as shown in Figure 7-121 on page 322. Click Install to start the installation.


Figure 7-121 Free space required

The next screen (see Figure 7-122) is the progress for the Tape Agent.

Figure 7-122 Installation in progress


9. After a few minutes, the Server Communication Configuration is displayed as shown in Figure 7-123. On this screen, you can choose other values for the ports, but we recommend that you keep the default values:
- HTTP Port is the port number that the SMI-S Object Manager server uses for HTTP transport. The default port is 5988. The port number must not be in use by another process on the system.
- HTTPS Port is the port that the SMI-S Object Manager server uses for secure HTTP transport (HTTPS). The default port is 5989. The port number must not be in use by another process on the system.
Click Next to continue, Back to go back, or Cancel to exit the installation.

Figure 7-123 Communications options dialog
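Before accepting the default ports, you can confirm that neither is already bound on the host. The following sketch shows the check against a canned netstat-style listing (sample data, not from a real system) so the logic can be followed offline; on a live system, pipe the output of `netstat -an` instead.

```shell
# Canned netstat-style listing (sample data for illustration only).
listing="tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5989 0.0.0.0:* LISTEN"

# Report whether each default CIMOM port appears as a listener.
for port in 5988 5989; do
  if printf '%s\n' "$listing" | grep -q ":$port "; then
    echo "port $port in use"
  else
    echo "port $port free"
  fi
done
```

With the sample listing above, port 5988 is reported free and port 5989 in use, so the installer would need a different HTTPS port.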

You can see the progress of the SMI-S Agent for Tape installation, as shown in Figure 7-124 on page 324.


Figure 7-124 The SMI-S Agent Service is started

10.After a successful installation you can see the screen in Figure 7-125. Note the location of the installation log file. Click Finish to exit the installer.

Figure 7-125 Installation was successful


11.You are taken back to the SMI-S Agent for Tape main page. Click Exit to exit the LaunchPad as shown in Figure 7-126.

Figure 7-126 Exit the Installation Wizard

12.After leaving the installation dialog box, you can go to a Linux shell to verify that the agent has been started. To do this, use the command shown in Figure 7-127.

KLCHL5H:/opt/IBM/smis # ps -ef | grep -i SMI
root 31041 1 0 11:01 pts/1 00:00:35 /opt/SMIAgent/jre/bin/java -server
 -classpath /opt/SMIAgent/agent/server/jserver/lib/wbemstartup.jar -Xmx512m
 -Djava.security.manager -Dconsole_logging=true
 -Dfile_logging=/opt/SMIAgent/agent/server/jserver/logr/test.log
 -Dprovider_xml=/opt/SMIAgent/agent/server/jserver/bin/provider.xml
 -DSMIAgentConfig_xml=/opt/SMIAgent/agent/server/jserver/bin/SMIAgentConfig.xml
 -Djava.security.policy=/opt/SMIAgent/agent/server/jserver/bin/jserver.policy
 -Dsun.rmi.transport.http.connectionTimeout=180000
 -DBaseDir=/opt/SMIAgent/agent/server
 org.wbemservices.wbem.bootstrap.StartWBEMServices
 /opt/SMIAgent/agent/server/jserver/bin/jserver.properties
root 475 1 0 14:37 ? 00:00:00 /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd
 -c /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd.conf

Figure 7-127 Check to see if the services are running

In the output, you can see the Java process which represents the CIMOM for the tape libraries. You should also have an snmptrapd (SNMP trap daemon) process running, because this is how the CIMOM interacts with the library.
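The check above can be scripted so that both processes are verified in one pass. This is a sketch only: the process listing is canned here (abbreviated from Figure 7-127) so the logic is verifiable offline; on the agent host, capture it with `ps_list="$(ps -ef)"` instead.

```shell
# Canned, abbreviated process listing standing in for `ps -ef` output.
ps_list="root 31041 1 0 11:01 /opt/SMIAgent/jre/bin/java StartWBEMServices
root 475 1 0 14:37 /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd"

# Report whether a given pattern appears in the process listing.
check_proc() {
  printf '%s\n' "$ps_list" | grep -q "$1" && echo "$1 running" || echo "$1 missing"
}

check_proc StartWBEMServices   # the Java CIMOM for the tape libraries
check_proc snmptrapd           # the SNMP trap daemon the CIMOM relies on
```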


7.11.3 Configuring the SMI-S Agent


This topic provides instructions for configuring the IBM TotalStorage SMI-S Agent for Tape on a Linux operating system using the scripts stored in the installation directory created by the installation program. In our lab, the directory is /opt/IBM/smis/tapeagent.
1. Use the setdevice.sh command to configure tape libraries. It allows a user to configure the SMI-S Agent with a particular library. When the SMI-S Agent is first started after installation, it is not configured with any library information. Set the library IP address using the setdevice.sh command. For 358x libraries, it is also used to set the port and community, as shown in Figure 7-128.

SetDevice -a<add|del|list> -t<3494|3581|3582|3583|3584> [-l<IP|Alias> [-p<port> [-c<community> [-s<snmpversion>]]]
Figure 7-128 Syntax diagram for setdevice

2. Check to see that there is no library registered in the CIMOM using the command shown in Figure 7-129.
./setdevice.sh -alist -t3584
CLI ei -n root/ibm IBMTSSML3584_DeviceConfiguration

Figure 7-129 List registered libraries - no library registered

3. To register the library to the CIMOM, enter the command shown in Figure 7-130.

./setdevice.sh -aadd -t3584 -l9.11.213.187 -p161 -cpublic -s2

Figure 7-130 The 3584 is registered with the CIMOM

The output is shown in Figure 7-131.


CLI im -n root/ibm IBMTSSML3584_Provider Register3584Library IpAddress=9.11.213.187 Port=161 Community=public SnmpVersion=2
Return Value= 0

Figure 7-131 Output from setdevice while registering the Library

4. Issue the ./setdevice.sh -alist command again. The output is shown in Figure 7-132 on page 327.


./setdevice.sh -alist -t3584
CLI ei -n root/ibm IBMTSSML3584_DeviceConfiguration
path= IBMTSSML3584_DeviceConfiguration.InstanceID="9.11.213.187"
//Instance of Class IBMTSSML3584_DeviceConfiguration
[Description ("The SettingData class represents configuration-related "
 "and operational parameters for one or more ManagedElements. "
 "A ManagedElement can have multiple SettingData objects associated "
 "with it. The current operational values for the parameters "
 "of the element are reflected by properties in the Element itself "
 "or by properties in its associations. These properties do not "
 "have to be the same values that are present in the SettingData "
 "object. For example, a modem might have a SettingData baud "
 "rate of 56Kb/sec but be operating at 19.2Kb/sec. Note: The "
 "CIM_SettingData class is very similar to CIM_Setting, yet both "
 "classes are present in the model because many implementations "
 "have successfully used CIM_Setting. However, issues have arisen "
 "that could not be resolved without defining a new class. Therefore, "
 "until a new major release occurs, both classes will exist in "
 "the model. Refer to the Core White Paper for additional information. "
 "SettingData instances can be aggregated together into higher- "
 "level SettingData objects using ConcreteComponent associations.") : Translatable]
instance of class IBMTSSML3584_DeviceConfiguration {
  string Caption;
  string Description;
  string InstanceID = "9.11.213.187";
  string ElementName;
  string Port = "161";
  string Community = "public";
  sint32 SnmpVersion = 2;
};

Figure 7-132 Displaying the Library via setdevice -alist -t3584
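If you have several libraries to register, the setdevice.sh calls shown above can be scripted. The sketch below is a dry run: the IP addresses are placeholders, and the echo only prints each command. Remove the echo to execute against a real agent.

```shell
# Placeholder library addresses; replace with your own.
libraries="9.11.213.187 9.11.213.188"

for ip in $libraries; do
  # Dry run: print the command that would register a 3584 over SNMP v2,
  # port 161, community 'public'. Remove 'echo' to run it for real.
  echo ./setdevice.sh -aadd -t3584 -l"$ip" -p161 -cpublic -s2
done
```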

7.12 Discovering endpoint devices


Discovery of switches, storage, or tape subsystems is the main task performed by CIMOM discovery. CIMOMs are discovered through SLP or by manually entering information about known CIMOMs (IP address, userid, password). TotalStorage Productivity Center manages and monitors storage subsystems (disk and tape) through their CIMOMs, so it must know some information about each CIMOM. The IBM TotalStorage Productivity Center discovery job performs the following steps:
1. Broadcasts to find Service Agents (SA) on the local subnet.
2. Looks in the Directory Agent (DA) registry if a DA has been specified.
The CIMOM discovery job has several purposes. It can scan the local subnet to find CIMOMs and contact CIMOMs that have been added through the dialog. After it has found a CIMOM, it attempts to log in and discover the subsystems being managed by the CIMOM. If a CIMOM is discovered by the discovery job but requires login information, you must add that in the Add CIMOM panel, then rerun the discovery job to collect the subsystem information. This causes an alert to be issued in the alert log, either a Subsystem


Discovered or Switch Discovered alert, depending on the type of CIM Agent. For more information refer to 8.8, Alerting on page 398.
1. To see if this kind of alert has been missed, look at the Alert log by navigating in the Navigation Tree to IBM TotalStorage Productivity Center → Alerting. Otherwise, you can look at the log of the CIMOM Discovery job. Every time you submit a CIMOM discovery, many logs are generated: one log for each CIM Agent and several overall logs.
2. If you look at the log of a specific CIM Agent, you can see output similar to that shown in Figure 7-133. The CIMOM located on host 9.1.39.171 has discovered four devices (switches):
- 100000051E34E895/IBM_2005_B32
- 1000006069201D74/itsosw2
- 1000006069201D4E/itsosw1
- 10000060691064CF/itsosw3

Figure 7-133 CIMOM discovery job log

Tip: This is a way to discern the relationship between a CIMOM and its managed devices, so if you need to perform maintenance on the CIMOM, you also know which devices will be affected by the outage.

When the setup of the CIM Agents is complete, and before running any probe against the managed subsystems or switches, you must rerun a CIMOM discovery job. The amount of time the CIMOM discovery job takes depends on the number of CIMOMs, the number of subsystems you have, and whether you are scanning the local subnet. The CIMOM discovery job can be run on a schedule; how often you run it depends on how dynamic your environment is. It must be run to detect a new subsystem. The CIMOM discovery job also performs basic health checks of the CIMOM and subsystem. For information about setting up the CIMOM discovery job refer to CIMOM discovery job on page 347.

7.13 Verifying and managing CIMOMs


You can verify that TotalStorage Productivity Center can authenticate and communicate with the CIM Agents using the TPC GUI.
1. Select the node Administrative Services → Agents → CIMOM and right-click the CIMOM on which you want to work. Select Test CIMOM Connection and have


TotalStorage Productivity Center check the communication and authentication with that CIMOM (see Figure 7-134).

Figure 7-134 Test CIMOM Connection

If the test is successful, you obtain the result shown in Figure 7-135.

Figure 7-135 Connection successful

2. If you need to remove a previously defined or discovered CIMOM, select Remove CIMOM, as shown in Figure 7-136 on page 330.


Figure 7-136 Remove CIMOM

7.14 Interoperability namespace summary table


The Interoperability Namespace for a CIMOM is discoverable through SLP and is populated automatically in the TotalStorage Productivity Center GUI for CIMOMs discovered through SLP. For CIMOMs that are entered manually rather than discovered through SLP, check the provider documentation for the correct Interoperability Namespace. Table 7-2 provides the Interoperability Namespaces as of the time this book was written.

Important: When the providers release new versions of their products, these values can change. Check your provider documentation for the correct Interoperability Namespace.
Table 7-2 Interoperability namespace summary

Vendor                       Interoperability Namespace
Cisco                        root/cimv2
Brocade                      /root/interop or /root/brocade1
                             Note: Contact your switch vendor for the correct namespace to use.
McDATA                       /interop
ESS, DS6000, DS8000, SVC     /root/ibm
Engenio                      /interop
EMC                          /root/emc
Hitachi                      /root/hitachi/dm35 for HiCommand 3.5
                             /root/hitachi/dm42 for HiCommand 4.0
                             /root/hitachi/dm42 for HiCommand 4.2
                             /root/hitachi/dm43 for HiCommand 4.3
HP                           /root
SUN StorEdge                 /root/sun3510 or /interop
                             Note: This is for a subsystem and not a switch.
IBM Tape                     /root/ibm
ADIC Tape                    root/cimv2
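If you script CIMOM registration, the table can be expressed as a small lookup helper. This is illustrative only: the namespace values are copied from Table 7-2, the vendor keywords are our own shorthand, and you should always verify the values against your provider documentation.

```shell
# Map a vendor keyword to its interoperability namespace (values from
# Table 7-2; keywords are shorthand chosen for this sketch).
namespace_for() {
  case "$1" in
    cisco|adictape)         echo "root/cimv2" ;;
    mcdata|engenio)         echo "/interop" ;;
    ess|ds6000|ds8000|svc|ibmtape) echo "/root/ibm" ;;
    emc)                    echo "/root/emc" ;;
    hp)                     echo "/root" ;;
    *)                      echo "unknown" ;;
  esac
}

namespace_for cisco    # prints root/cimv2
namespace_for ds8000   # prints /root/ibm
```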

7.15 Planning considerations for SLP


The Service Location Protocol (SLP) has three major components:
- Service Agent (SA)
- User Agent (UA)
- Directory Agent (DA)
The SA and UA are required components; the DA is optional. You might have to decide whether to use an SLP DA in your environment, based on the considerations described in the following sections.

7.15.1 Considerations for using SLP DA


You can choose to use a DA to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. By deploying one or more DAs, UAs must unicast to DAs for service and SAs must register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. You might consider using DAs in your enterprise if any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or time-outs during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.


- Your network does not have multicast enabled and consists of multiple subnets that must share services.
The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.

7.15.2 SLP configuration recommendation


Some configuration recommendations are provided for enabling TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration and SLP directory agent (DA) configuration.

Router configuration
Routers are one common source of problems when setting up communication for SLP. Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address and port: 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.

Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast time-to-live (TTL) for packets passed between subnets, which can result in the need to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).

SLP directory agent configuration


Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each subnet. Each of these DAs can discover all services within its own subnet, but no other services outside its own subnet. To allow TotalStorage Productivity Center to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center. This setup is described in 7.6, Installing CIM agent for IBM DS4000 family on page 274.

7.15.3 General performance guidelines


Here are some general performance considerations for configuring the TotalStorage Productivity Center environment:
- Do not overpopulate the SLP discovery panel with SLP agent hosts. Remember that TotalStorage Productivity Center includes a built-in SLP User Agent (UA) that receives information about SLP Service Agents and Directory Agents (DA) that reside in the same subnet as the TotalStorage Productivity Center server. You should have no more than one DA per subnet.
- Misconfiguring the TotalStorage Productivity Center CIMOM discovery preferences might impact performance of auto discovery or of device presence checking. It might also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available.
- Consider it mandatory to run the CIM Agent software on a separate host from the TotalStorage Productivity Center server. Attempting to run a full TotalStorage Productivity Center implementation on the same host as the CIM agent will result in dramatically increased wait times for data retrieval. You might also experience resource contention and port conflicts.

7.16 CIMOM registration


In general, you have two methods to make the CIMOMs known to your TotalStorage Productivity Center Server. You can manually register CIMOMs, or they can be automatically discovered.

7.16.1 Manual method to add a CIMOM


The first method is to enter them one-by-one using the TotalStorage Productivity Center GUI. This assumes that you either have the CIMOM in your local subnet together with the TotalStorage Productivity Center server, or you have no restrictions within your network infrastructure to reach CIMOMs that reside in other subnets.
1. From the tree structure, select Administrative Services → Agents → CIMOM. Right-click CIMOM and select Add CIMOM. Figure 7-137 shows several CIMOMs registered. This has to be done for each CIMOM that will be used to manage your storage subsystems.

Figure 7-137 Manual entry of new CIMOMs

7.16.2 Automated method to add CIMOMs


The second method is automatic discovery performed by SLP. When the CIMOMs have been set up using the correct userid and password combination, they are immediately available in the TotalStorage Productivity Center GUI (status green). If not, the TotalStorage Productivity Center administrator has to update the CIMOM entries within the

TotalStorage Productivity Center GUI to make them available and fully functioning. The DAs can be located in subnets other than the one where your TPC server is located. This setup reduces multicast traffic in your network. You can use the following procedure to set up one or more Service Location Protocol Directory Agents (SLP DAs) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. In this case, the discovered CIMOMs are displayed in the TPC GUI. Figure 7-138 shows a new entry for a CIMOM which is not fully operational, because it might have a userid and password combination that does not match what is configured in TPC.

Figure 7-138 New CIMOM discovered by SLP DA

The TPC Administrator determines which CIMOMs should be used and then provides the information that makes it possible for TPC to log in to them and use the CIMOMs. Perform the following steps to set up the SLP DAs:
1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center to discover.
2. Each device is associated with a CIM Agent, and there might be multiple CIM Agents for each of the identified subnets. Choose one of the CIM Agents for each of the identified subnets. (It is possible to choose more than one CIM Agent per subnet, but it is not necessary for discovery purposes.) The following steps change the run mode of the CIM Agent.
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. By default, it runs in SA mode; we change it to run in DA mode. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file:
   a. For example, if you have the DS CIM agent installed in the default installation directory path, go to the C:\Program Files\IBM\cimagent\slp directory.
   b. Look for the file named slp.conf.
   c. Make a backup copy of this file and name it slp.conf.bak.


d. Open the slp.conf file and scroll down until you find (or search for) the line
;net.slp.isDA = true

Remove the semicolon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false. Save the file.
e. Copy this file (or replace it if the file already exists) to the main Windows subdirectory for Windows machines (W2K: C:\WINNT; W2K3: C:\WINDOWS), or to the /etc directory for UNIX machines.
4. We recommend that you reboot the server where the SLP is being modified at this stage. Alternatively, you can choose to restart the SLP and CIMOM services. You can do this from your Windows desktop by selecting Start Menu → Settings → Control Panel → Administrative Tools → Services. Launch the Services GUI and locate the Service Location Protocol service. Right-click and select Stop. Another panel displays, asking if you want to stop the IBM CIM Object Manager service. Click Yes. You can start the SLP daemon again after it has stopped successfully. Alternatively, you can restart the CIMOM using the command line as described in 7.4.7, Restart the CIMOM on page 264.
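On a UNIX host, the backup and edit steps above can be scripted. The sketch below works on a throwaway copy of slp.conf (created inline) so it can be tried safely; on a real system, point it at the slp.conf in the CIM agent's slp directory. It assumes GNU sed for the -i (in-place) option.

```shell
# Create a sample slp.conf with the property commented out, as shipped.
conf=slp.conf.sample
printf '%s\n' ';net.slp.isDA = true' > "$conf"

cp "$conf" "$conf.bak"                       # step c: back up first
sed -i 's/^;\(net\.slp\.isDA\)/\1/' "$conf"  # step d: drop the leading semicolon
grep '^net.slp.isDA = true' "$conf"          # confirm the DA mode is enabled
```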

7.16.3 Configuring TotalStorage Productivity Center for SLP discovery


You can use the following panel in Figure 7-139 to enter a list of DA addresses.

Figure 7-139 Panel to add a DA to TPC

TotalStorage Productivity Center sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed. The prerequisite is to configure an SLP DA by changing the configuration of the SLP service agent (SA), as mentioned previously.

You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. However, the DA will automatically discover all other services registered with other SLP SAs in that subnet.

7.16.4 Registering the CIM Agent to SLP-DA


If everything with the SLP DA/SA setup is correct, you are not required to register any CIM Agents manually. In the event that it becomes necessary to register a CIM Agent manually, issue the following command on the SLP DA server.
1. First, locate the slptool command. On a Windows system, its location is, for example:
C:\Program Files\IBM\cimagent\slp.

2. Issue a command similar to the one shown here:


slptool register service:wbem:https://ipaddress:port

Where ipaddress is the CIM Agent IP address.
3. Issue a verifyconfig command, as shown in Figure 7-39 on page 269, to confirm that SLP is aware of the registration.

Attention: Whenever you update the SLP configuration as shown here, you might have to stop and start the slpd daemon. This enables SLP to register and listen on newly configured ports. Also, whenever you restart the SLP daemon, ensure that the CIMOM agent is also restarted; you can issue the startcimom.bat command, as shown in previous steps. Another alternative is to reboot the CIMOM server.
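With several CIM Agents to register, the slptool invocations can be generated in a loop. The sketch below is a dry run: the host:port pairs are placeholders, and the echo only prints each command. Remove the echo to perform the registrations.

```shell
# Placeholder CIM Agent endpoints; replace with your own host:port pairs.
agents="9.43.226.237:5989 9.11.209.188:5989"

for a in $agents; do
  # Dry run: print the slptool registration command for each agent.
  echo slptool register "service:wbem:https://$a"
done
```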

7.16.5 Creating slp.reg file


If you must make any manual entries into your SLP DA's configuration, it is helpful to make these entries persistent across restarts of your servers. To do this, follow the steps listed here.

Important: To avoid manually registering the CIMOMs outside of the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is C:\winnt\, or the equivalent in your operating system. The slpd daemon reads the slp.reg file on startup and rereads it whenever the SIGHUP signal is received.


slp.reg file example


Example 7-1 is a slp.reg file sample.
Example 7-1 slp.reg file ############################################################################# # # OpenSLP static registration file # # Format and contents conform to specification in IETF RFC 2614, see also # http://www.openslp.org/doc/html/UsersGuide/SlpReg.html # #############################################################################

#---------------------------------------------------------------------------# Register Service - SVC CIMOMS #----------------------------------------------------------------------------

service:wbem:https://9.43.226.237:5989,en,65535 # use default scopes: scopes=test1,test2 description=SVC CIMOM Open Systems Lab, Cottle Road authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini creation_date=04/02/20 service:wbem:https://9.11.209.188:5989,en,65535 # use default scopes: scopes=test1,test2 description=SVC CIMOM Tucson L2 Lab authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini creation_date=04/02/20 #service:wbem:https://9.42.164.175:5989,en,65535 # use default scopes: scopes=test1,test2 #description=SVC CIMOM Raleigh SAN Central #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20 #---------------------------------------------------------------------------# Register Service - SANFS CIMOMS #---------------------------------------------------------------------------#service:wbem:https://9.82.24.66:5989,en,65535 #Additional parameters for setting the appropriate namespace values #CIM_InteropSchemaNamespace=root/cimv2 #Namespace=root/cimv2 # use default scopes: scopes=test1,test2 #description=SANFS CIMOM Gaithersburg ATS Lab #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20 #service:wbem:https://9.11.209.148:5989,en,65535 #Additional parameters for setting the appropriate namespace values #CIM_InteropSchemaNamespace=root/cimv2 #Namespace=root/cimv2 # use default scopes: scopes=test1,test2 #description=SANFS CIMOM Tucson L2 Lab #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20


#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20


Chapter 8.

Getting Started with TotalStorage Productivity Center


TotalStorage Productivity Center V3.1 is a powerful management platform with a rich variety of functions, derived from three separate products integrated into a single console. This chapter provides a jump start into the central functions of TotalStorage Productivity Center and introduces how to get started using the product for your storage management tasks. It does not cover every aspect and detail of the functions in TotalStorage Productivity Center, but focuses on giving you an overview of the product capabilities, the structure of the user interface, and the relationships of the product components. TotalStorage Productivity Center V3.1 offers two user interfaces: a Java-based graphical user interface and a command line interface. It also offers an Application Programming Interface. This chapter covers only the use of the graphical user interface; you can find documentation of the command line interface in the TotalStorage Productivity Center Command-Line Interface Reference Version 3 Release 1. In this chapter we use a practical, learning-by-doing approach that guides you through the TotalStorage Productivity Center product using real-world examples which we used successfully in our ITSO lab environment. We illustrate the activities with screen captures where appropriate.

Copyright IBM Corp. 2006. All rights reserved.

339

8.1 Infrastructure summary


As a foundation for a better understanding of these examples, an overview of the infrastructure of our lab environment is provided. At the core of our lab environment is a Microsoft Windows 2003 server running the TotalStorage Productivity Center V3.1 Standard Edition. This server is connected to the SAN and is also running the TotalStorage Productivity Center Data and Fabric Agents. The host name of this server is gallium.almaden.ibm.com.

The SAN infrastructure is shown in Figure 8-1 on page 341. It consists of one Fibre Channel switch, an IBM 2005-B32. TotalStorage Productivity Center accesses this switch through SNMP, the advanced Brocade API, and through a CIMOM running on the KLCHL5H server. There are two storage subsystems connected to this SAN:
- SVC-2145-ITSOSVC01: a two-node SAN Volume Controller cluster, providing its own CIMOM on the Master Console with the host name ITSOSVC.almaden.ibm.com
- DS4500-ITSODS4500: a DS4500 storage subsystem, which is accessed through the CIMOM running on colorado.almaden.ibm.com
The DS4500 acts as the back-end storage to the SVC and is also accessed directly from one of the attached servers (the SVC Master Console). In addition to the TotalStorage Productivity Center server gallium, there are four more servers connected to the SAN. Each of those servers, with the exception of Helium, is running the TotalStorage Productivity Center Data and Fabric Agents. The four servers are:
- AZOV.almaden.ibm.com: AIX 5.3, two HBAs, accessing 14 Virtual Disks on the SVC
- colorado.almaden.ibm.com: Microsoft Windows 2003, one HBA, accessing six Virtual Disks on the SVC. This server runs the CIMOM for the DS4500.
- pqdi: Microsoft Windows 2003, one HBA, accessing 16 Virtual Disks on the SVC
- ITSOSVC.almaden.ibm.com: Windows 2003, two HBAs, accessing 16 LUNs on the DS4500. This is the Master Console for the SVC, providing its CIMOM. It is configured as an SLP Directory Agent.
- Helium.almaden.ibm.com: Microsoft Windows 2003, two HBAs, accessing two Virtual Disks on the SVC; no Data Agent, no Fabric Agent.
In addition to those SAN-connected servers, we have two servers without Fibre Channel connection:
- KLCHL5H.almaden.ibm.com: Linux SLES9, running a Data Agent but no Fabric Agent. This server runs the CIMOM for the 2005-B32 switch and a CIMOM for a remote tape library located in Tucson.
- KLCH4VZ.almaden.ibm.com: Microsoft Windows 2000, running no Data Agent and no Fabric Agent. This server runs the DS Open API (used for a DS8000 and an ESS).
Over a wide area network, we have further access to several storage subsystems and a tape library:
- DS6000-1750-13AB70A: a DS6000 system, accessed through a DS Open API CIMOM running on a remote server (9.43.252.17)
- DS8000-2107-75BALB1: a DS8000 system, accessed through the DS Open API CIMOM running on the KLCH4VZ server
- ESS-2105-22513: an ESS system, accessed through the DS Open API CIMOM running on the KLCH4VZ server

340

IBM TotalStorage Productivity Center: The Next Generation

- Tape-3584-7819833: A 3584 tape library, which is accessed through the tape CIMOM running on the KLCHL5H server

Figure 8-1 Lab environment (diagram). The diagram shows the IP network and SAN topology: KLCH4VZ (DS Open API, no Data or Fabric Agent, 9.1.38.35), KLCHL5H (Data Agent, Brocade CIMOM on ports 5970/71, tape CIMOM on port 5989, 9.1.39.70), colorado (Data and Fabric Agents, DS4000 CIMOM, 9.1.38.39), ITSOSVC, HELIUM, pqdi, and AZOV; the 2005-B32 switch (SNMP/API); and the WAN-attached subsystems in Poughkeepsie, Tucson, Almaden, and San Jose, plus the DS6000 accessed through the DS Open API at 9.43.252.17.

8.2 TotalStorage Productivity Center function overview


TotalStorage Productivity Center V3.1 consists of three product components that are integrated into one central console. You can purchase these components separately or together in one package, the TotalStorage Productivity Center V3.1 Standard Edition. The product components are:

- TotalStorage Productivity Center for Disk
- TotalStorage Productivity Center for Fabric
- TotalStorage Productivity Center for Data

Each product component provides a common set of functions:

- Collecting data about your infrastructure
- Retrieving and displaying data about your infrastructure, and monitoring it
- Making changes to your infrastructure

However, the components direct these functions at different parts of your infrastructure:

- TotalStorage Productivity Center for Disk addresses your storage and tape subsystems
- TotalStorage Productivity Center for Fabric addresses your SAN infrastructure
- TotalStorage Productivity Center for Data addresses your servers and computers, and the volumes, databases, and file systems on them

In this chapter, we look at the TotalStorage Productivity Center Standard Edition and our infrastructure from an end-to-end point of view, familiarizing you with TotalStorage Productivity Center along the basic functions shown in Figure 8-2 on page 342.

Chapter 8. Getting Started with TotalStorage Productivity Center

341

We are not looking at the individual product components. Instead we look at the infrastructure as a whole. We cover all functions except the policy management of your computers, which is beyond the scope of this chapter. Policy management is described in the IBM TotalStorage Productivity Center Users Guide, GC32-1775.

Figure 8-2 The basic TotalStorage Productivity Center function groups (diagram). The function groups are: collecting configuration, availability, and asset data; collecting performance data; retrieving and displaying data (viewing, reporting); monitoring and alerting; policy management; and configuring devices. They apply to servers and computers, the SAN infrastructure, tape libraries, and storage subsystems.

8.3 First steps


This section shows you how to start the TotalStorage Productivity Center GUI, explains some basic foundations of the GUI, and guides you through the initial setup and configuration steps necessary to get started.

8.3.1 Starting the GUI


TotalStorage Productivity Center provides two methods for running the graphical user interface: as a downloadable Java applet or as an installed application.

As an application
To start the IBM TotalStorage Productivity Center GUI on Microsoft Windows, click Start → Programs → IBM TotalStorage Productivity Center → Productivity Center. You can also double-click the IBM TotalStorage Productivity Center icon if it is installed on your desktop. To start the GUI on Linux, run the command TPC from a command prompt window. To start the GUI on AIX, issue the following command from a command prompt window: /usr/bin/TPC.


Note: For UNIX or Linux, to invoke the TPC command from a command prompt window, include /usr/bin in your PATH.

As a Java applet
The benefit of running the user interface as a Java applet is that you are not required to install the GUI component on every workstation. Users can simply access the IBM TotalStorage Productivity Center applet from any Web-enabled workstation in your environment, and the appropriate applets are downloaded automatically on an as-needed basis. To start and run the user interface as a Java applet: 1. Start a Web browser session. 2. Point your browser to the URL of your TotalStorage Productivity Center server and the port you set up when enabling the Web server on that machine (see 4.8, Configuring the GUI for Web Access under Windows 2003 on page 107, and 5.10, Installing the user interface for access with a Web browser on page 186).

8.3.2 Logging on
TotalStorage Productivity Center provides a role-based administration model, as described in Chapter 2, Key concepts on page 13. When installing the TotalStorage Productivity Center server, you must specify an operating system group whose members become superusers of the TotalStorage Productivity Center product. You can then log on to TotalStorage Productivity Center as any user in this operating system user group. In addition to the superuser role, TotalStorage Productivity Center provides nine additional roles to which you can assign operating system user groups. Additional details about the role-based administration model are in the IBM TotalStorage Productivity Center Users Guide, GC32-1775.
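As an illustration of how such a role-based model resolves a user's rights from operating system group membership, here is a toy sketch. The group and role names below are invented examples (except the superuser role mentioned above); this is not TPC's internal implementation.

```python
# Maps operating system groups to TPC roles (illustrative names only).
GROUP_ROLES = {
    "tpcadmins": "superuser",        # group chosen at server installation
    "diskops":   "disk_administrator",
}

def roles_for(user_groups):
    """Resolve a user's TPC roles from the OS groups they belong to."""
    return {GROUP_ROLES[g] for g in user_groups if g in GROUP_ROLES}
```

A member of the installation-time group would thus resolve to the superuser role, while groups that were never assigned map to no role at all.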

8.3.3 GUI basics


The tree-based navigation system of the user interface lets you see a hierarchical organization of the product features in the left pane while viewing detailed reports and charts in the right pane. The left pane is called the Navigation Tree and the right pane is called the Content Pane. Figure 8-3 on page 344 shows the Welcome Screen of the TotalStorage Productivity Center GUI. It is always displayed when you log on.


Figure 8-3 TotalStorage Productivity Center welcome screen

When you start IBM TotalStorage Productivity Center, the Navigation Tree is expanded to show all the high-level functions. You can drill down on an element in the tree by clicking it or by clicking the expand icon. When you right-click a node, a pop-up (context) menu displays, which lets you perform additional actions for the node.

If the Navigation Tree gets too large, or if you want to return it to its original state, right-click the major nodes of the tree and select Collapse Branch or Expand Branch. If you right-click the IBM TotalStorage Productivity Center node and select Collapse Branch, the entire Navigation Tree collapses. Then, right-click the main IBM TotalStorage Productivity Center node and select Expand Branch to return the Navigation Tree to its original state, expanded only to show the main functions.

The Content Pane opens on the right side of the main window. When you select a node in the Navigation Tree, the corresponding function window opens in the Content Pane. You can use the windows that open in the Content Pane to define and run the different functions (for example, monitoring jobs, alerts, and reports) available within TotalStorage Productivity Center. The information shown on the pages in the Content Pane varies, depending on the function with which you are working.

8.4 Initial configuration


After logging on to TotalStorage Productivity Center for the first time, verify that all required services are up and running. To do this, select Administrative Services → Services and look for the little green squares to the left of each service entry, as shown in Figure 8-4 on page 345.


Figure 8-4 TPC services up and running

Now that you have started the TotalStorage Productivity Center GUI successfully and have verified that all internal services are up and running, the first thing you want to do is collect information about your infrastructure. Before you can do this, however, you must establish communication with your infrastructure through the various channels that TotalStorage Productivity Center uses. Figure 8-5 gives an overview of the sequence of the next steps.

Figure 8-5 The first steps, sequence (diagram). The sequence is: 1. establish communication with your infrastructure (discover or register your CIMOMs, out-of-band fabric agents, Data Agents, and Fabric Agents, and verify registration and connectivity); 2. collect data about your infrastructure (discover elements and retrieve basic information through discovery, probe, and scan jobs, and collect performance data); 3. retrieve and display data, monitor and alert, and manage.

As shown in Figure 8-5, TotalStorage Productivity Center uses three major channels to communicate with the infrastructure:

- CIM Object Managers, to interact with storage subsystems, tape libraries, and SAN components



- Agents on the managed servers and computers: Data Agents interact with servers and computers, and Fabric Agents interact with Host Bus Adapters and SAN components through an in-band channel
- SNMP and proprietary APIs, to interact with SAN components out of band

We now show you how to set up all of these communication channels by guiding you through the configuration steps we used for our lab environment, described in 8.1, Infrastructure summary on page 340.

8.4.1 Configuring CIMOMs


There are two ways to make the CIMOMs of your infrastructure known to TotalStorage Productivity Center. Which one you use depends on how you have set up your CIMOM infrastructure. If all of your CIMOMs and the TotalStorage Productivity Center server are located in the same subnet, or if your CIMOMs are distributed across multiple subnets and you have set up a valid SLP infrastructure utilizing SLP Directory Agents, you can discover all of your CIMOMs automatically. Otherwise, you must enter the CIMOMs manually. We first run an automatic discovery to detect all CIMOMs that are reachable through our SLP infrastructure, and then manually add the CIMOMs that are still missing. To initiate an automatic CIMOM discovery, select Administrative Services → Discovery and click CIMOM (see Figure 8-6).

Figure 8-6 Configuring CIMOMs - initiate automatic CIMOM discovery
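The automatic discovery finds CIMOMs through the Service Location Protocol. As a rough illustration of what travels over the wire, the following sketch builds a minimal SLPv2 Service Request (RFC 2608). The service type string and the decision not to actually send the packet are our own simplifications, not part of the product.

```python
import struct

def slp_service_request(service_type="service:wbem", scopes="DEFAULT", xid=1):
    """Build a minimal SLPv2 Service Request (RFC 2608) as raw bytes.

    A real discovery would send this by UDP to port 427, either as a
    multicast in the local subnet or unicast to a Directory Agent.
    """
    def field(s):                        # 2-byte length + UTF-8 string
        return struct.pack(">H", len(s)) + s.encode()

    lang = b"en"
    # SrvRqst body: <PRList> <service-type> <scope-list> <predicate> <SPI>
    body = field("") + field(service_type) + field(scopes) + field("") + field("")
    total = 14 + len(lang) + len(body)   # 14-byte fixed header + language tag
    pkt  = struct.pack(">BB", 2, 1)      # version 2, function 1 = SrvRqst
    pkt += total.to_bytes(3, "big")      # 3-byte overall packet length
    pkt += struct.pack(">H", 0)          # flags
    pkt += (0).to_bytes(3, "big")        # next-extension offset (none)
    pkt += struct.pack(">H", xid)        # transaction ID
    pkt += struct.pack(">H", len(lang)) + lang
    return pkt + body
```

Directory Agents answer such requests with the URLs of the registered services, which is why entering DA addresses in the discovery job (as we do below) lets TotalStorage Productivity Center find CIMOMs outside its own subnet.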

We now introduce an important principle of operating TotalStorage Productivity Center. When we tell TotalStorage Productivity Center to perform a certain task or action, many of these tasks are handled as special objects, called jobs. We can define these jobs, save them, run them at a later time, schedule them for single or repeated runs, or run several of them simultaneously.
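The job principle described above can be pictured as a small data model. This is only an illustrative sketch of ours (the class and field names are invented, not TPC internals): a job definition is stored once, and every run adds a time-stamped entry, matching what you later see in the Navigation Tree.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Run status values correspond to the Navigation Tree icons:
# blue circle = running, green square = success,
# yellow triangle = warnings, red circle = errors.

@dataclass
class JobDefinition:
    name: str                          # for example, "CIMOM discovery"
    schedule: str = "run_now"          # "run_now", "once", or "repeated"
    runs: list = field(default_factory=list)

    def submit(self):
        """Start one run; it appears under the definition, labeled
        with the time stamp of its starting time."""
        run = {"started": datetime.now(), "status": "running"}
        self.runs.append(run)
        return run
```

One definition, many runs: re-submitting the same definition simply appends another time-stamped run entry.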


CIMOM discovery job


In this case, we want TotalStorage Productivity Center to discover CIMOMs, so we define a CIMOM discovery job. 1. Job definition works similarly for almost all types of jobs within TotalStorage Productivity Center. After clicking the Discovery → CIMOM job class in the Navigation Tree, you see three or more tabs in the Content Pane. 2. In the first tab, When to run, you define a schedule for the job: run it now, once at a later time, or multiple times. In this scenario, we select Run Now (see Figure 8-7).

Figure 8-7 When to Run

3. The next tab, Alert, lets you specify what to do when certain conditions arise at run time for the job you are defining. For a CIMOM discovery job, there is only one condition for which you can define a reaction: you can specify what kind of alerts TotalStorage Productivity Center triggers if the discovery job fails. You will find these first two tabs in almost all job definitions. 4. Figure 8-8 on page 348 shows the Alert tab with the single condition we can choose for this job, which is Job Failed.


Figure 8-8 Configuring CIMOMs - initiate automatic CIMOM discovery - Alert

5. In the last tab, Options, you enter information specific to the type of job you are defining. When defining a CIMOM discovery job, you can enter the IP addresses of the SLP Directory Agents that you want the discovery job to query for CIM Agents (see 2.2.1, SLP architecture on page 18). In our case, we enter the IP address of the SVC Master Console (ITSOSVC), which we have configured as an SLP Directory Agent.

Figure 8-9 Configuring CIMOMs - initiate automatic CIMOM discovery - Enter DA addresses

6. You have now entered all of the information that the CIMOM discovery job needs to run successfully. We can now either just save the job definition, or save it and have TotalStorage Productivity Center run the job at the time specified in the When to Run tab. To do the latter, click Enabled in the upper right corner of the Content Pane and select File → Save in the menu bar. A message box indicates that the CIMOM job has been submitted (see Figure 8-10 on page 349).


Figure 8-10 Job CIMOM submitted notice

Important: The CIMOM discovery is designed as a two-stage process:

1. The CIMOM discovery job locates all CIMOMs through the Service Location Protocol, by broadcasting in its subnet and by querying all SLP Directory Agents whose IP addresses have been entered in the job definition (in our case, 9.1.38.48).
2. The discovery job tries to log in to the CIMOMs it has discovered and to retrieve information about the elements each CIMOM manages. Up to this point, however, it was not possible to enter any user credentials for these logins, so the discovery job uses null as the user ID and password. This succeeds only for CIMOMs that have been set up to not require user authentication. It is therefore very likely that the first discovery job ends with errors, reporting that the discovery and retrieval of elements succeeded for only a few CIMOMs, if any. For the other CIMOMs, a second discovery job must be initiated after entering the user credentials, to retrieve the basic information for the elements behind those CIMOMs.

Here is how we can monitor our CIMOM discovery job. Only one CIMOM job definition can exist in the system; we defined and saved it in the previous steps. Every time we click Administrative Services → Discovery → CIMOM, we can view and change this job definition. The definition can run multiple times, and each run produces an entry below the Administrative Services → Discovery → CIMOM node of the Navigation Tree. This is the case for all types of jobs, and you will see this mechanism implemented throughout the whole TotalStorage Productivity Center user interface. Because we did not just save our CIMOM discovery job definition, but also started the job, TotalStorage Productivity Center has created such an entry for our job.
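This two-stage behavior explains why the first run fails for CIMOMs that require authentication. The following is a hypothetical model of ours (function and variable names invented, not product code), sketching stage 2 under that assumption:

```python
def run_discovery(located_cimoms, credentials=None):
    """Stage 1 has already located the CIMOMs (via SLP); stage 2 tries
    to log in to each one and retrieve its managed elements.

    located_cimoms maps a CIMOM name to whether it requires authentication.
    """
    credentials = credentials or {}
    results = {}
    for cimom, requires_auth in located_cimoms.items():
        user = credentials.get(cimom)
        if requires_auth and user is None:
            # The job logs in with null credentials, which only works
            # for CIMOMs configured without authentication.
            results[cimom] = "LOGIN FAILED"
        else:
            results[cimom] = "SUCCESS"
    return results

# First run: no credentials entered yet.
first = run_discovery({"ds4000": False, "svc": True, "ds-open-api": True})
# Second run, after entering user IDs and passwords for the two CIMOMs.
second = run_discovery({"ds4000": False, "svc": True, "ds-open-api": True},
                       credentials={"svc": "superuser", "ds-open-api": "admin"})
```

The first run succeeds only for the unauthenticated CIMOM (the DS4000 in our lab); the second run, after credentials are entered, succeeds for all of them.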

Monitoring CIMOM Discovery job


We can view this entry by expanding Administrative Services → Discovery, right-clicking CIMOM, and selecting Refresh Job List from the drop-down menu. Then expand the CIMOM node of the Navigation Tree. The entry for our job is named with the time stamp of its starting time. A little icon appears next to the entry: a blue circle indicates that the job is currently running, a red circle indicates that the job has completed with errors, and a green square indicates that the job has completed without errors. TotalStorage Productivity Center also uses a yellow triangle to indicate that a job has completed with warnings.


Figure 8-11 shows two entries. The upper one belongs to a job we ran earlier; the lower one is the job we just submitted, and the blue circle beside it indicates that the job is running. By clicking the job entry, we see a list of all logs for that job in the Content Pane. We can look at a log by clicking the icon next to its entry. This works even while the job is still running.

Figure 8-11 Configuring CIMOMs - CIMOM discovery running

We can update the status of the job by right-clicking the job entry under Administrative Services → Discovery → CIMOM and selecting Update Job Status. The status does not update unless we refresh it this way. Finally, we see the screen shown in Figure 8-12 on page 351, which indicates that our discovery job has completed with errors.


Figure 8-12 CIMOM discovery completed with errors

Examining the logs of the failed part of the job, we find that the errors are caused by failed logins, just as we expected. Other parts of the job have completed successfully: these are the logins to, and information retrievals from, the CIMOMs that do not require any authentication (in our example, the DS4000 CIMOM on the colorado server). We can view which CIMOMs our discovery job has detected (Figure 8-13 on page 352). Expand the Administrative Services → Agents node of the Navigation Tree, right-click the CIMOM node, select Refresh, and then expand the CIMOM node. Look for an entry for each of the discovered CIMOMs. The CIMOMs for which the login of the discovery job was successful are marked by a green square; the CIMOMs for which no login could be established are marked by a red circle. Our CIMOM discovery job detected three CIMOMs. The DS4000 CIMOM was discovered over two ports (secure and nonsecure). For this CIMOM, the login was successful, and the information for the elements managed by it could be retrieved. The other two CIMOMs are the DS Open API on the KLCH4VZ server (9.1.38.35) and the SVC CIMOM residing on the SVC Master Console ITSOSVC (9.1.38.38). Those CIMOMs require authentication, so the login to them was not successful, and TotalStorage Productivity Center could not retrieve the information for the managed elements.


Figure 8-13 CIMOMs discovered with SLP by automatic discovery job

TotalStorage Productivity Center has now discovered three of our six CIMOMs and was able to log in to one of them. Three CIMOMs are still missing.

User IDs and passwords


The next task is to enter the user IDs and passwords for the two CIMOMs that are already discovered and need authentication (the SVC CIMOM and the DS Open API), and then manually enter the three missing CIMOMs. 1. Click a CIMOM entry in the Navigation Tree to see all of the information TotalStorage Productivity Center holds for that CIMOM in the Content Pane (Figure 8-14 shows the information for the DS Open API CIMOM). TotalStorage Productivity Center shows us a connection state (in our example, we see LOGIN FAILED, as expected). 2. Enter the valid user ID and password, as well as a display name. Perform these two steps for each of the discovered CIMOMs.

Figure 8-14 Enter CIMOM user ID, password and display name


3. Save these entries by selecting File → Save in the menu bar. Note that we have selected the Test CIMOM connectivity before updating check box. This causes TotalStorage Productivity Center to connect to the CIMOM and try to log in to it with the credentials we have just specified. If this is successful, the status indication of the CIMOM turns green. However, TotalStorage Productivity Center does not retrieve the information about the elements managed by this CIMOM; this requires another discovery job. After updating all of our CIMOM definitions, we see the screen in Figure 8-15:

Figure 8-15 Configuration of automated CIMOMs complete

Manually entering CIMOMs


Next, we enter the three missing CIMOMs manually. Expand Administrative Services → Agents and right-click CIMOM. Select Add CIMOM from the drop-down menu. TotalStorage Productivity Center displays a dialog box (Figure 8-16 on page 354) where you enter the basic information about the CIMOM:

- IP address, port, and protocol (HTTP or HTTPS) of the CIMOM
- Interoperability namespace (refer to the manufacturer of the CIMOM for the correct namespace; it is usually /root/ibm for IBM storage and tape systems. Refer to 7.14, Interoperability namespace summary table on page 330 for more information.)
- User ID and password for the CIMOM, if required
- Display name (optional)


Figure 8-16 Add CIMOM dialog
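The fields of the Add CIMOM dialog map naturally onto a small record. The sketch below is our own hypothetical representation (the class and field names are invented); it only validates the protocol choice and defaults the display name, mirroring what the dialog asks for:

```python
from dataclasses import dataclass

@dataclass
class CimomEntry:
    host: str                       # IP address or host name
    port: int                       # for example 5988 (HTTP) or 5989 (HTTPS)
    protocol: str                   # "http" or "https"
    namespace: str = "/root/ibm"    # interoperability namespace; vendor-specific
    user_id: str = ""               # only if the CIMOM requires authentication
    password: str = ""
    display_name: str = ""          # optional

    def __post_init__(self):
        if self.protocol not in ("http", "https"):
            raise ValueError("protocol must be http or https")
        if not self.display_name:
            self.display_name = f"{self.host}:{self.port}"
```

A CIMOM that listens on both HTTP and HTTPS would be represented by two such entries, one per port, which is why some CIMOMs appear twice in the list.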

4. After entering our missing CIMOMs, we see the following CIMOM entries under Administrative Services Agents CIMOM (Figure 8-17).

Figure 8-17 Configuring CIMOMs - CIMOM configuration complete

Note: The Brocade CIMOM and the Engenio SMI-S provider listen for HTTP and HTTPS on two different ports, so each appears twice in the list.

We have now completed the configuration of our CIMOMs in TotalStorage Productivity Center. However, the retrieval of information about the managed elements (storage subsystems, tape library, and switch) has not yet occurred for the CIMOMs that require authentication, so we must run a further CIMOM discovery job. This job will not discover any new CIMOMs (as long as we did not add any new ones to our infrastructure in the meantime). However, it will now be able to log in to each of the configured CIMOMs and retrieve all information about the managed storage subsystems, tape libraries, and switches.


This CIMOM discovery job now completes without errors and produces the output shown in Figure 8-18.

Figure 8-18 Configuring CIMOMs - second CIMOM discovery job completed successfully

Logs
We should now inspect the logs to verify that all of our storage subsystems, tape libraries, and switches have been discovered successfully. We can also verify the discovery of a storage subsystem by inspecting the Alert Log. TotalStorage Productivity Center comes with a default alert configured that raises an entry in the storage subsystem Alert Log each time a new storage subsystem is discovered. View this Alert Log by selecting IBM TotalStorage Productivity Center → Alerting → Alert Log → Storage Subsystem. Note that a SAN Volume Controller is not considered a storage subsystem in this context, so an alert for the discovery of an SVC is not generated by default. TotalStorage Productivity Center also provides default alerts for the discovery of switches, fabrics, and endpoints. We cover alerting in greater detail in 8.8, Alerting on page 398. Now that we have successfully discovered all of the storage subsystems, tape libraries, and switches behind our CIMOMs, we can confirm that they show up where they should in TotalStorage Productivity Center.

Storage subsystems
The storage subsystems and SVCs are located under Disk Manager → Storage Subsystems, where you see a list of all storage subsystems and SVCs. Launch a detail page for each storage subsystem by clicking the symbol to the left of the storage subsystem name. On this detail page, you can enter information for the storage subsystem, such as a display name, some user-defined properties, and the address of the element manager, in case this address is not retrieved correctly through the CIMOM. In Figure 8-19 on page 356, you see a list of our five storage subsystems, including one SVC.


Figure 8-19 Configuring CIMOMs - list of discovered storage subsystems and SVCs

The tape library should be visible under Tape Manager → Tape Libraries. As with the storage subsystems, we see a list of all tape libraries and can launch a detail page for each library by clicking the symbol to the left of the tape library name. In our case, we see one tape library (Figure 8-20).

Figure 8-20 Configuring CIMOMs - list of discovered tape libraries

Fabrics
The fabrics that the discovery job has detected are listed under Fabric Manager → Fabrics. You can see a list of all discovered fabrics in Figure 8-21 on page 357; however, a list of all discovered switches is not available here. You can see the switches in the Topology Viewer; the use of this part of the TotalStorage Productivity Center GUI is covered in detail in Chapter 9, Topology viewer on page 415. Note that the fabric information we see does not necessarily come from the CIMOM, because we have already installed some Fabric Agents in our infrastructure, which deliver information to TotalStorage Productivity Center without any discovery process (see 8.4.2, Verifying Data and Fabric Agents on page 357).


Figure 8-21 Configuring CIMOMs: list of discovered fabrics

Important: You should set up a CIMOM discovery job for running in recurrent intervals to detect changes in your infrastructure automatically. The length of the interval should be adjusted to the frequency of those changes in your specific environment.

8.4.2 Verifying Data and Fabric Agents


You do not need to perform any discovery process for your Data and Fabric Agents. After installation, these agents register with TotalStorage Productivity Center by themselves and should be promptly available. All you must do is verify that the agents you have deployed are up and running, and that TotalStorage Productivity Center is able to communicate with them. Select Administrative Services → Agents and expand the Data and Fabric nodes. Look for an entry for each Data Agent and Fabric Agent installed in your environment. By clicking the entry for an agent, you see a panel with general and detailed information for that agent. By right-clicking the entry, you display a context menu; select Check to verify that the agent is up and running (Figure 8-22).

Figure 8-22 Verifying Data and Fabric Agent connectivity - check agents
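Conceptually, the Check action is a liveness probe over all registered agents. The loop below is a hypothetical sketch of ours; the agent objects and their ping method are invented stand-ins for the real agent protocol.

```python
class StubAgent:
    """Stand-in for a Data or Fabric Agent endpoint (illustrative only)."""
    def __init__(self, reachable=True):
        self.reachable = reachable

    def ping(self):
        if not self.reachable:
            raise ConnectionError("agent not responding")

def check_agents(agents):
    """Return an up/unreachable status per agent, as the Check action does."""
    status = {}
    for name, agent in agents.items():
        try:
            agent.ping()
            status[name] = "up"
        except ConnectionError:
            status[name] = "unreachable"
    return status
```

An unreachable agent does not abort the check of the others; each agent is probed independently and reported with its own status.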


You can also view a list of all computers with agents installed by selecting Data Manager → Reporting → Asset → By Computer (see Figure 8-23). Note that our computer Helium, which has no agents installed, is not listed in this asset list.

Figure 8-23 Verifying Data and Fabric Agent connectivity - computer asset list

8.4.3 Configuring out-of-band fabric connectivity (SNMP/API)


Although we have already established two communication paths to our switch (CIMOM and in-band with the Fabric Agents), we must also set up the SNMP and Brocade API-based out-of-band connectivity. We must do this for our 2005-B32 switch because, at the time of writing, Brocade does not support the GS-3 standard and still requires zone management to be performed through its proprietary API over IP. This is different for switches from other vendors, as documented on the following Web site: http://www-306.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html The steps for the out-of-band fabric discovery are very similar to the discovery of the CIMOMs. We can either define a job for automatic discovery or add the switches manually. 1. To define an out-of-band discovery job, select Administrative Services → Discovery and click Out of Band Fabric. 2. In the Content Pane, you see several tabs for the definition of the job. For our out-of-band discovery job, we must enter the IP addresses of the subnets we want to scan in the Options tab. The switch in our infrastructure has an IP address of 9.1.39.25, so we add this subnet to the list of subnets (Figure 8-24 on page 359).


Figure 8-24 Configuring out of band fabric connectivity - set up discovery job

3. After submitting the job by selecting File → Save in the menu bar, you can monitor it by expanding Administrative Services → Discovery → Out of Band Fabric and clicking the job run entry. In the Content Pane, you can examine the logs. 4. Our out-of-band discovery job detects one switch (our 2005-B32), so TotalStorage Productivity Center creates an entry under Administrative Services → Agents → Out of Band Fabric, representing this switch and labeled with its host name. Click the entry to see the details for that particular switch. TotalStorage Productivity Center automatically subscribes to the following traps: LinkUp, LinkDown, Activated, Deactivated, PowerOn, PowerOff, UnknownEvent, LIPReinitialize, LIPReset, and RSCN, so these events are reported without any further configuration. 5. To activate the advanced Brocade API (which we need for zoning support on the 2005-B32 switch), select Enable Advanced Brocade Discovery and enter the user ID and password for the switch, as shown in Figure 8-25.

Figure 8-25 Configuring Out of Band Fabric connectivity - enable Advanced Brocade API


6. Save this information by selecting File → Save in the menu bar. The out-of-band fabric discovery for our 2005-B32 switch is now complete. As an alternative, we could have entered our switch manually by expanding Administrative Services → Agents and right-clicking Out of Band Fabric. This displays a context menu where we select Add. TotalStorage Productivity Center then opens a dialog box where we enter the host name or IP address of the switch to add, as shown in Figure 8-26. In this case, we could enable the Advanced Brocade API right away.

Figure 8-26 Configuring Out of Band Fabric connectivity - manually enter switch
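An out-of-band discovery job essentially walks the address range of each subnet you list and queries every address with SNMP. The address enumeration can be sketched with Python's standard ipaddress module; the /24 prefix below is our assumption for the 9.1.39.x lab subnet.

```python
import ipaddress

def discovery_targets(subnet="9.1.39.0/24"):
    """List the host addresses an out-of-band fabric discovery job
    would probe with SNMP in the given subnet (sketch only; the real
    job also sends the SNMP queries and evaluates the answers)."""
    network = ipaddress.ip_network(subnet, strict=False)
    return [str(host) for host in network.hosts()]
```

For the lab subnet this yields 254 candidate addresses, one of which is the 2005-B32 switch at 9.1.39.25.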

8.5 Collecting data about your infrastructure


TotalStorage Productivity Center uses all of the communication paths set up in 8.4, Initial configuration on page 344 to collect information about the infrastructure it manages. Basically, information from the managed infrastructure can enter TotalStorage Productivity Center in two ways: event-driven, through CIMOM indications and SNMP traps, or through data collection jobs. In this section, we guide you through configuring TotalStorage Productivity Center to collect data about your infrastructure. TotalStorage Productivity Center collects data by running data collection jobs. The jobs differ in the type of information collected and the types of elements from which the information is retrieved. The most common types of data collection jobs available within TotalStorage Productivity Center are:

Discovery jobs locate data sources and collect basic information about these data sources. In the scope of this chapter, we run discovery jobs against CIMOMs and out-of-band fabric agents. When a discovery job is run against a storage subsystem CIMOM, the job locates all storage subsystems behind this CIMOM and retrieves all information the CIMOM holds for these systems. The discovery job, however, does not cause the CIMOM to log in to the storage subsystems and retrieve more detailed information. When a discovery job is run against a fabric, the job retrieves all available information for the fabric, if supported by the switches.

Probe jobs collect detailed statistics on all the assets of the managed infrastructure, such as computers, disk controllers, hard disks, clusters, fabrics, storage subsystems, LUNs, tape libraries, and file systems. Probe jobs can also discover information about new or removed disks and file systems. Probe jobs can be directed against any elements in the managed infrastructure. In our examples, we run Probe jobs against storage subsystems, fabrics, and computers.

Scan jobs collect statistics about the actual storage consumption. Scans are always directed against a Data Agent and deliver very detailed information about the file systems, files, and databases of computers.

Ping jobs gather statistics about the availability of the managed computers. Ping jobs issue ICMP echo requests (pings) and consider a computer available if it answers. This is purely ICMP-based; there is no measurement of individual application availability. Like Scans, Pings can be directed against computers only.

Performance Monitor jobs collect statistics about the performance of storage subsystems and switches. Performance Monitor jobs can be run against storage subsystems, SVCs, and switches, and always need a CIMOM to communicate with the elements.

Figure 8-27 shows an overview of the types of data collection jobs and the components from which they gather information.

Figure 8-27 Collecting Data - various data collection jobs

In 8.4, Initial configuration on page 344, we show in detail how to define and run discovery jobs. In the following sections we create Probe jobs, Scan jobs, Ping jobs and Performance Monitor jobs for our infrastructure, and show the detailed steps for each of the data collection jobs.
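The job types above differ in which element classes they can target. As a quick illustration — a hypothetical sketch of the mapping described in this section, not part of TPC's actual API — the rules could be expressed as:

```python
# Hypothetical mapping of TPC data collection job types to the element
# classes they may target, per the descriptions in this section.
VALID_TARGETS = {
    "discovery":           {"cimom", "out_of_band_fabric_agent"},
    "probe":               {"computer", "storage_subsystem", "fabric", "tape_library"},
    "scan":                {"computer"},             # Data Agent file systems only
    "ping":                {"computer"},             # ICMP availability checks
    "performance_monitor": {"storage_subsystem", "svc", "switch"},  # via CIMOM
}

def can_run(job_type: str, element_type: str) -> bool:
    """Return True if a job of job_type may be directed at element_type."""
    return element_type in VALID_TARGETS.get(job_type, set())
```

For example, `can_run("scan", "switch")` is False, reflecting that Scans are directed only against computers with Data Agents.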

8.5.1 Creating Probes


To create a Probe, follow these steps: 1. Expand IBM TotalStorage Productivity Center → Monitoring, right-click Probes, and click Create Probe (Figure 8-28 on page 362).



Figure 8-28 Collecting Data - create Probes, what to Probe

2. In the What to Probe tab in the Content Pane, select the infrastructure elements against which your Probe job will run. You could define one Probe job for all of the elements in your infrastructure. However, we recommend defining multiple Probe jobs to improve the granularity of your data collection strategy. For our infrastructure, we define one Probe job for each element category (storage subsystems, tape libraries, computers, and fabrics). 3. To select all computers for your first Probe job, expand the Computers node of the Available column and look for the list of computers. See Figure 8-29 on page 363. Select All computers, because we want to include all computers (including computers added in the future) in our Probe job.


Figure 8-29 Collecting Data - create Probes, select all computers

We chose to run the Probe job repeatedly, once a day at 1:00 am. 4. Review the When to Run and Alert tabs, and set the schedule and Alert options accordingly. Select Enabled in the upper right corner of the Content Pane. Lastly, select File → Save in the menu bar. TotalStorage Productivity Center asks for a name for the Probe job. We name our job Probe Computers. Note: A Probe job gathers detailed information about your infrastructure. Therefore, we recommend running Probe jobs at recurring intervals. To determine those intervals, you should take many individual factors into account. In any case, there is a trade-off between information currency and the overhead generated by the Probe jobs. Generally, you might want to have one Probe job per element per day. 5. We have now defined our first Probe job for all our computers. We scheduled the job to run each day at 1:00 am. However, we want the job to run right now as well. See Figure 8-30 on page 364. Expand IBM TotalStorage Productivity Center → Monitoring → Probes, and right-click the entry for the job we have just defined. In the context menu, select Run now. This runs the computer Probe job right away without modifying it.


Figure 8-30 Collecting Data - create Probe, run now

6. We now repeat the steps for our other infrastructure elements: storage subsystems, tape libraries, and fabrics. After defining Probe jobs for all of those element classes and running them immediately, we see the screen shown in Figure 8-31.

Figure 8-31 Collecting Data - all Probe jobs defined and execution started

In Figure 8-31, we see an entry for each Probe job we have just defined under the IBM TotalStorage Productivity Center → Monitoring → Probes node of the Navigation Tree. Their names are built from the user ID we used to log on to TotalStorage Productivity Center and the name we specified when saving the job definition.

Each Probe job has one entry for the run we initiated when we selected that the job should run now. Another entry is created each day at 1:00 am. The blue circles next to the entries for the job runs indicate that the jobs are still running. The little yellow triangle next to our run of the Probe Computers job indicates that the job run completed with warnings. We can examine the logs in the Content Pane to find the reasons for the warnings. Tip: TotalStorage Productivity Center allows you to create individual groups of infrastructure elements within an element class. For example, you could create computer groups or storage subsystem groups. You can then use these groups to specify the elements for your Probe job. Refer to the TotalStorage Productivity Center V3.1 User's Guide for details on grouping.

8.5.2 Creating Scans


Creating Scan jobs works similarly to creating Probe jobs, with the difference that Scans can be directed only at computers with Data Agents installed and their file systems. 1. To create a Scan job, expand Data Manager → Monitoring, right-click Scans, and click Create Scan (Figure 8-32).

Figure 8-32 Collecting Data - create Scans, file systems

2. In the Filesystems tab (Figure 8-33 on page 366), select either computers or single file systems of computers. We would also be able to select groups of file systems if we had defined them before. Selecting a computer implicitly selects all the file systems of that computer. In our case, we select all the computers of our infrastructure, with all their file systems, to be scanned.


Figure 8-33 Collecting Data - create Scans, all computers and file systems selected for Scan

Note: Only file systems found by Probe jobs are available for Scans. 3. The Directory Groups tab allows us to specify directory groups, which let us direct the Scan to specific directories within a file system without having to scan the entire file system. Directory groups can be defined either by expanding Data Manager → Monitoring → Groups, right-clicking Directory, and clicking Create Directory Group, or directly from the Directory Groups tab of the Scan job definition.

Profiles
In the Profiles tab, we can select one or more profiles for our Scan job. Profiles allow us to specify what statistical information is gathered and to fine-tune and control which files are scanned. Profiles are generally used to: Limit the files to be scanned. Specify the file attributes to be gathered. Select the summary view (directories and file systems, user groups, or operating system user groups). Set statistic retention periods. TotalStorage Productivity Center provides a selection of default profiles and allows you to create user-defined profiles. Profiles can be defined either by expanding Data Manager → Monitoring, right-clicking Profiles, and clicking Create Profile, or directly from the Profiles tab of the Scan job definition. The creation and use of profiles in Scan jobs is documented in detail in the TotalStorage Productivity Center V3.1 User's Guide.


Each of the default profiles lets us select a specific statistic needed for the default reports that TotalStorage Productivity Center provides. Because we want to gather all of the statistical information available from our computers and file systems, we select all of the default profiles and apply them to file systems and directories, as shown in the Content Pane (Figure 8-34).

Figure 8-34 Collecting Data - create Scans, select all default profiles

In the When to Run tab, we choose to run our Scan job each morning at 5:00 am. After reviewing the When to Run and Alert tabs, setting the schedule and Alert options accordingly, and checking Enabled in the upper right corner of the Content Pane, we select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Scan job. We name it Scan Computers. We also want our job to execute immediately, so we expand Data Manager → Monitoring → Scans, right-click the entry for the job we have just defined, and select Run now. By selecting the entry created for the run, we can then examine the logs for our Scan job (Figure 8-35 on page 368).


Figure 8-35 Collecting Data - create Scans, examine job logs

8.5.3 Creating Pings


Creating a Ping is quite straightforward. 1. Expand Data Manager → Monitoring, right-click Pings, and select Create Ping.

Figure 8-36 Collecting Data - create pings, select computers

2. In the Computers tab, select the computers that our Ping job will ping. For our Ping job, we select all computers. 3. In the When to Run tab, we schedule the frequency of our Ping. We want TotalStorage Productivity Center to run a ping against all our computers every 10 minutes. So, we select to run the job immediately and repeatedly, with Run Now and the ping repeating every 10 minutes (Figure 8-37 on page 369).


Figure 8-37 Collecting Data - create pings, set frequency

4. In the Options tab, we can specify how often the Ping statistics are saved to the database repository. By default, TotalStorage Productivity Center keeps its Ping statistics in memory for one hour before flushing them to the database and calculating an average availability. We can change the flushing interval to another amount of time or to a number of Pings (for example, to calculate availability after every 10 Pings). The system availability is calculated as:
(Count of successful Pings) / (Count of Pings)

A lower interval can increase database size, but gives you more accuracy in the availability history. We chose to save each Ping to the database as it occurs, which means each saved value is either 100% or 0%, but we get a more granular view of the availability of our servers. 5. For a Ping job, we must specify a condition in the Alert tab: what kind of Alert TotalStorage Productivity Center generates if a computer is not reachable a certain number of times. We choose to generate an email if a computer is not reachable more than 5 times.

Figure 8-38 Collecting Data - create pings, set Alert


6. After checking that Enabled in the upper right corner of the Content Pane is selected, we select File → Save in the menu bar. TotalStorage Productivity Center again asks us for a name for our Ping job. We name it Ping Computers. 7. By selecting the entry created for the run, we can examine the logs of our Ping job.
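The availability formula used for Ping jobs — successful Pings divided by total Pings — and the effect of the flush interval can be sketched in a few lines of Python. This is an illustration only; names such as `flushed_averages` are our own, not part of TPC:

```python
def availability(ping_results):
    """Availability = (count of successful pings) / (count of pings)."""
    if not ping_results:
        return 0.0
    return sum(1 for ok in ping_results if ok) / len(ping_results)

def flushed_averages(ping_results, flush_every):
    """Average availability per flush interval (e.g. after every 10 pings).
    flush_every=1 reproduces the granular 100%/0% history described above."""
    return [availability(ping_results[i:i + flush_every])
            for i in range(0, len(ping_results), flush_every)]
```

With `flush_every=1`, every saved value is 1.0 or 0.0, matching our choice to save each Ping individually at the cost of a larger repository.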

8.5.4 Creating Performance Monitors


TotalStorage Productivity Center collects data about the performance of the managed infrastructure by running Performance Monitor jobs. These Performance Monitor jobs work on the basis of SMI-S and require a CIMOM. TotalStorage Productivity Center stores the collected performance data in the central database. The data is used in predefined or user-defined performance reports, charts, and alerts, and is also the basis for the performance overlays and status indications in the topology viewer (see Chapter 9, Topology viewer on page 415). The data also serves as the basis for the calculations performed by the Volume Performance Advisor feature. TotalStorage Productivity Center allows you to create Performance Monitor jobs for storage subsystems, SVCs, and switches.

Create a storage subsystem Performance Monitor


To create a Performance Monitor job for a storage subsystem, follow these steps: 1. Expand Disk Manager → Monitoring, right-click Subsystem Performance Monitors, and select Create Subsystem Performance Monitor, as shown in Figure 8-39.

Figure 8-39 Collecting Data - create subsystem Performance Monitor

2. In the left column of the Storage Subsystems tab of the Content Pane, we see a list of all subsystems in our infrastructure that are available for performance monitoring. We could select all our storage subsystems to be monitored within this one Performance Monitor job; however, we recommend specifying a job for each storage subsystem for which you want to collect performance data. This improves flexibility, because you can specify different intervals and durations for the jobs, as well as different alert conditions (what to do if the Performance Monitor fails). 3. We begin with the definition of a Performance Monitor job for our SVC system, so we select the entry for the SVC and move it to the Selected subsystems column (Figure 8-40 on page 371).


Figure 8-40 Collecting Data - create subsystem Performance Monitor - select storage subsystem

4. In the Sampling and Scheduling tab, we can specify the resolution (sampling interval) and duration of our performance data collection, as well as a point in time to start the Performance Monitor. The interval lengths (sampling interval, resolution) we can choose from are determined by the storage subsystem. The SVC offers intervals between 15 minutes and one hour. We choose 15 minutes. We also choose to collect performance data for 24 hours, starting immediately (Figure 8-41).

Figure 8-41 Collecting Data - create subsystem Performance Monitor - set time line

5. If we click Advanced next to the selection for the interval length, TotalStorage Productivity Center lets us specify a frequency. This frequency may be greater than or equal to the interval length. If the frequency is larger than the interval length, not every sample gathered is saved in the TotalStorage Productivity Center database, which saves disk space. However, every sample that contains a Constraint Violation is saved regardless (see 8.8, Alerting on page 398). In our example, we leave the frequency equal to the interval length, so that every sample of performance data retrieved from the SVC is saved in the central database. 6. In the Alert tab, we specify what Alerts TotalStorage Productivity Center will trigger when the performance collection job fails. Note: The Alert tab does not specify conditions that are raised when performance thresholds are exceeded. Those Alerts are specified in a different way. Refer to 8.8, Alerting on page 398. 7. After verifying that Enabled is selected, we select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Performance Monitor job. We name it SVC Performance Monitor. 8. Our job definition has created an entry under the Disk Manager → Monitoring → Subsystem Performance Monitors node. We can see it by right-clicking this node and selecting Refresh. Below this entry, we see a further entry for each run of the job. Because we chose to start the execution of the job immediately, we find an entry for our run, marked by a blue circle indicating that the job is currently running.

Figure 8-42 Collecting Data - subsystem Performance Monitor running

9. If we select the entry for the run of our Performance Monitor job, we see the job log in the Content Pane. We can examine the job log by clicking the symbol next to it. The log shows that the Performance Monitor begins with a re-retrieval of the storage subsystem configuration data (which is also retrieved by Probe jobs). We then see entries in the log for the performance data retrieval for every interval. In our example in Figure 8-43 on page 373, we see that the Performance Monitor job has found that the SVC has I/O Group, 3 MDisk Groups, 13 MDisks, and 16 VDisks. It has retrieved an incomplete sample of performance data for some reason and has then inserted 29 performance records into the database.
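The frequency-versus-interval behavior described above — only every Nth sample reaches the database, except that samples with a Constraint Violation are always saved — can be sketched as follows. This is our own illustration; TPC's internal retention logic is not documented here:

```python
def keep_sample(index, interval_s, frequency_s, has_constraint_violation):
    """Decide whether sample number `index` (0-based, one sample per interval)
    is written to the database.  Samples are kept when they align with the
    save frequency, or always when they contain a Constraint Violation."""
    samples_per_save = max(1, frequency_s // interval_s)
    return has_constraint_violation or index % samples_per_save == 0
```

With a 15-minute interval (900 s) and a one-hour frequency (3600 s), only every fourth sample is kept; with frequency equal to the interval, as in our SVC example, every sample is kept.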


Figure 8-43 Collecting Data - subsystem Performance Monitor log

10. We define a Performance Monitor for each of our storage subsystems except the DS4500 (performance monitoring for the DS4500 is not available with Version 3.1.0 of TotalStorage Productivity Center). After completing the definitions, we see the screen shown in Figure 8-44.

Figure 8-44 Collecting Data - all subsystem Performance Monitors created


Create a switch Performance Monitor


Note: At the time of writing, switch performance monitoring is only possible through CIMOMs. In-band or SNMP/API out-of-band performance monitoring is not available. To create a Performance Monitor for our 2005-B32 switch, follow these steps: 1. Expand Fabric Manager → Monitoring, right-click Switch Performance Monitors, and select Create Switch Performance Monitors.

Figure 8-45 Collecting Data - create switch Performance Monitor

2. In the left column of the Switches tab of the Content Pane, we see a list of all switches in our infrastructure that are available for performance monitoring. In our case, we see our only switch, the 2005-B32. We could select all available switches (if there were more) to be monitored within this Performance Monitor job; however, we again recommend specifying a job for each switch for which you want to collect data. 3. We select the entry for the 2005-B32 switch and move it to the Selected switches column. 4. In the Sampling and Scheduling tab, we set the interval, frequency, and schedule for the switch Performance Monitor as we did for the storage subsystem Performance Monitors. We choose an interval length and frequency of 5 minutes for our Performance Monitor job and select to have it Run Now and run for 24 hours (Figure 8-46 on page 375).


Figure 8-46 Collecting Data - create switch Performance Monitor - set time line

5. In the Alert tab, we specify what Alerts TotalStorage Productivity Center shall trigger when the Performance Monitor job fails. Again, these Alerts have nothing to do with Alerts you might want generated when certain performance thresholds are exceeded. 6. We select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Performance Monitor job. We name it Switch Performance Monitor. 7. If we refresh the Switch Performance Monitors node, we see an entry for our switch Performance Monitor job definition. If we expand the entry, we see our job run. Clicking it allows us to examine the job log while the Performance Monitor is running. As with storage subsystem Performance Monitors, the switch Performance Monitor starts by retrieving the configuration data of the switch. We find in the log that the Performance Monitor has detected 32 ports (Figure 8-47).

Figure 8-47 Collecting Data - switch Performance Monitor log


8.6 Retrieving and displaying data about your Infrastructure


We have now set up the basic data collection jobs for our infrastructure. We should give the system some time to collect a certain amount of data (especially performance data) and can then start to view the information about our infrastructure. TotalStorage Productivity Center offers basically two approaches to look at your infrastructure: The Topology Viewer is designed to provide a powerful graphical and tabular representation of the physical and logical resources that have been discovered in your storage environment. It depicts the relationships among resources and is able to provide status and performance information within the graphical view. The Topology Viewer is a central component of TotalStorage Productivity Center and plays a vital role in your day-to-day storage management activities, so we describe it in a dedicated chapter (see Chapter 9, Topology viewer on page 415). Predefined and user-defined reports, tables, and charts are another way to view information about your system, besides the graphical depiction of the topology of your infrastructure through the Topology Viewer. There is great variety and flexibility within TotalStorage Productivity Center in defining and viewing these reports. In this section, we show you how to access some of the most important and common reports, tables, and charts of data about your infrastructure. We have organized the section along the different classes of elements in our infrastructure (storage subsystems, fabrics, tape libraries, and computers). We generate views on assets, configuration, and performance data for all of these classes.

8.6.1 Viewing data about your storage subsystems


In this section, we show you how to look at storage subsystem data.

Viewing asset and configuration data


We start viewing asset and configuration data about our storage subsystems by following these steps: 1. Expand Disk Manager and select Storage Subsystems, which gives us a list of all the storage subsystems and SVCs in our infrastructure (Figure 8-48).

Figure 8-48 Viewing storage subsystem data - storage subsystem list


We see the five storage subsystems of our infrastructure: one DS8000, one DS6000, one ESS, one DS4500, and an SVC. 2. By clicking the symbol next to the entries of the storage subsystems, we see a detail panel with the basic information about these storage subsystems in the Content Pane. Figure 8-49 shows the detail panel for our DS8000 storage subsystem.

Figure 8-49 Viewing storage subsystem data - storage subsystem detail panel

3. We now return to the storage subsystem list by selecting the Storage Subsystems tab. 4. Next, we want to display information about the volumes of our storage subsystems. We can get this information by highlighting a storage subsystem and clicking Volumes (Figure 8-50).

Figure 8-50 Viewing storage subsystem volume information - select storage subsystem

5. Because we have highlighted our DS8000 storage subsystem, we have to choose the extent pool for which we would like the volumes listed in the next panel. We have two extent pools for open systems configured on our DS8000. They are named ITSO_TPC_EP0 and ITSO_TPC_EP1 (see Figure 8-51 on page 378). We select ITSO_TPC_EP0.


Figure 8-51 Viewing storage subsystem volume information - select extent pool

We receive the list of volumes in this extent pool, as shown in Figure 8-52.

Figure 8-52 Viewing storage subsystem volume information - volume list

6. If we click the icon next to a certain volume, we can view a detail screen for this volume as shown in Figure 8-53 on page 379.


Figure 8-53 Viewing storage subsystem volume information - volume detail panel

7. Besides the information about the volumes in our storage subsystems, we can find a very detailed view of the assets of our storage subsystems if we expand Data Manager → Reporting → Asset → By Storage Subsystem (see Figure 8-54), and then double-click the storage subsystem asset.

Figure 8-54 Viewing storage subsystem asset information


Reports
We can browse through the details of the storage subsystem configurations by expanding and collapsing the nodes for the logical entities. TotalStorage Productivity Center also presents information about storage subsystems in the form of reports. There are predefined reports within TotalStorage Productivity Center, and you can create your own reports. All the predefined reports for storage subsystems are performance-related and can be found under IBM TotalStorage Productivity Center → My Reports → System Reports → Disk. We discuss them when looking at the storage subsystem performance data.

Custom reports
Because there are no predefined reports related to asset and configuration data of storage subsystems, we generate these reports ourselves. As an example, we show how to do this by generating a report that lists the LUN mappings of the Virtual Disks of our SVC that are bigger than 100 MB in capacity. Note: The concept of reports is widely used within the TotalStorage Productivity Center user interface and works similarly for all kinds of reports. 1. To generate the report for our SVC LUN mapping, we expand Disk Manager → Reporting → Storage Subsystems → LUN to HBA Assignment and select By Storage Subsystem. In the Content Pane, we see all the data that would be available for a LUN to HBA Assignment report. By default, TotalStorage Productivity Center includes all available columns in the report. We are able to deselect some of the included columns and to reorder them according to our needs.

Figure 8-55 Viewing storage subsystem information, create reports, select data

2. We then click Selection. TotalStorage Productivity Center shows us a list of all available storage subsystems and we select which storage subsystems we want to include in our report (Figure 8-56 on page 381).


Figure 8-56 Viewing storage subsystem information, create reports, select storage subsystem

3. We want to generate our report for our SVC system only, so we deselect all storage subsystems except the SVC. Clicking OK brings us back to the panel shown in Figure 8-55 on page 380. 4. Click Filter. We can now specify the conditions that have to be met for each single row of the report. The filter, of course, applies only to the storage subsystems that have been selected for the report (in our case, the SVC). We want only Virtual Disks that exceed a capacity of 100 MB to be reported, so on the next panel we select LUN Capacity as the column on which to apply a filter, as shown in Figure 8-57.

Figure 8-57 Viewing storage subsystem information, create reports, apply filter

5. Specify the condition the capacity of the LUN has to meet to be included in our report. We select >= as the Operation and enter 100M as Value 1. If you are not sure about the notation of the values, use the Edit button. 6. Clicking OK brings us back to the panel shown in Figure 8-55 on page 380. 7. We have now successfully defined our report. We save the report by selecting File → Save in the menu bar. We are prompted to specify a report name, and we name the report SVC LUN mapping. The report definition is saved under IBM TotalStorage Productivity Center → My Reports → tpcadmins Reports and can be generated by simply clicking it. The report can also be generated directly from the panel shown in Figure 8-55 on page 380 by clicking Generate Report, even without saving the report definition.
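The >= 100M filter behaves like a simple predicate over the report rows. A sketch of that behavior follows, assuming decimal units for the K/M/G suffixes (TPC's actual value notation may differ, which is why the Edit button is offered):

```python
def parse_capacity(value: str) -> int:
    """Parse a capacity such as '100M' or '2G' into bytes.
    Assumption: decimal units (M = 10**6), for illustration only."""
    units = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
    suffix = value[-1].upper() if value else ""
    if suffix in units:
        return int(float(value[:-1]) * units[suffix])
    return int(value)

def filter_rows(rows, min_capacity="100M"):
    """Keep report rows whose 'lun_capacity' (bytes) meets the >= condition."""
    threshold = parse_capacity(min_capacity)
    return [r for r in rows if r["lun_capacity"] >= threshold]
```

Each row surviving the filter corresponds to one LUN mapping line in the generated report.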


Figure 8-58 Viewing storage subsystem information, example report

The report shows us 19 LUN mappings for Virtual Disks greater than 100 MB in capacity. It shows us the HBAs to which the LUNs are mapped and all the other information we selected to be included in our report. Many reports within TotalStorage Productivity Center allow us to zoom into more details of the rows of the report. In our example, if we click the little icon to the left of a single row, we see a detail panel for the Virtual Disk of this specific LUN mapping, showing us on which Managed Disks the extents of this Virtual Disk are located, as shown in Figure 8-59.

Figure 8-59 Viewing storage subsystem information, example report, SVC Virtual Disk details


Now that you know how to generate reports within TotalStorage Productivity Center, you can generate further reports related to asset and configuration information for your storage subsystems under the Disk Manager → Reporting → Storage Subsystems node of the Navigation Tree.

Viewing performance data


TotalStorage Productivity Center presents performance data for your storage subsystems in the form of reports and charts. There are predefined storage subsystem performance reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Disk, and you can generate your own performance reports under Disk Manager → Reporting → Storage Subsystem Performance. Reports are always the base for charts: if you want to generate a chart, you first have to generate the corresponding report. To see how this process works, we first try some of the predefined reports. 8. Expand IBM TotalStorage Productivity Center → My Reports → System Reports → Disk and see the list of system-supplied performance reports (Figure 8-60).

Figure 8-60 Viewing storage subsystem performance, list of predefined reports

9. We can generate one of the reports simply by clicking it. In the example in Figure 8-61 on page 384, we generate the Subsystem Performance report.


Figure 8-61 Viewing storage subsystem performance, predefined report Subsystem Performance

We see an aggregation of the overall performance of the storage subsystems in our infrastructure for which performance data has been collected and is available (remember, we have no support for DS4000 performance management with the code bundle used to write this book). Note: A SAN Volume Controller is not considered a storage subsystem in terms of performance management. At the time of writing, performance data cannot be aggregated for an entire SVC cluster. For this reason, there is no SVC performance data on the storage subsystem level. Performance data for an SVC is instead reported on the IO-Group, MDisk-Group, and VDisk level. This might change in future releases of TotalStorage Productivity Center. 10. By clicking the Selections tab, we can review and also change the definition for this predefined report and regenerate it. 11. We can now generate a chart from our predefined report. The chart generation process is initiated by clicking the little pie chart icon in the top left corner of the Content Pane (see Figure 8-62). Each time you see this icon, you are able to generate a chart from a report or parts of a report.

Figure 8-62 Viewing storage subsystem performance, generate chart

384

IBM TotalStorage Productivity Center: The Next Generation

12.TotalStorage Productivity Center offers us a menu where we can select the chart type, the rows of the report to include in the chart, and the performance metric to chart (see Figure 8-63). We select a History Chart over all available samples, include all storage subsystems, and choose Total IO Rate as the metric.

Figure 8-63 Viewing storage subsystem performance, generate chart, chart type and metrics

If we select just Chart as the chart type, the system draws a bar diagram for all three storage subsystems representing the average Total IO Rate over the complete time for which samples are available. Because we selected a History Chart, we see the result in Figure 8-64.

Figure 8-64 Viewing storage subsystem performance, history chart

We see the history of the Total IO Rate for our three storage subsystems over all available samples. The DS6000 has not reported any IO activity in the monitored time frame. We could now limit the time scale and the resolution and redraw the chart.
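Limiting the resolution of a history chart amounts to aggregating fine-grained samples into coarser time buckets. The following is a minimal sketch of that idea, assuming samples arrive as (timestamp, rate) pairs; the function name and sample values are invented for illustration and are not part of TotalStorage Productivity Center:

```python
from collections import defaultdict
from datetime import datetime

def downsample_hourly(samples):
    """Average (timestamp, io_rate) samples into one value per hour."""
    buckets = defaultdict(list)
    for ts, rate in samples:
        # Truncate each timestamp to the start of its hour.
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(rate)
    # One averaged sample per hour, in chronological order.
    return sorted((hour, sum(r) / len(r)) for hour, r in buckets.items())

samples = [
    (datetime(2006, 9, 1, 10, 5), 120.0),
    (datetime(2006, 9, 1, 10, 35), 80.0),
    (datetime(2006, 9, 1, 11, 10), 200.0),
]
print(downsample_hourly(samples))  # hour 10:00 -> 100.0, hour 11:00 -> 200.0
```

A coarser resolution trades detail for readability, which is useful when a chart spans many days of five-minute samples.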


Note that TotalStorage Productivity Center also generates trends for the future performance of our storage subsystems, depicted by the dotted prolongation of the performance graphs. This can be very useful to foresee performance bottlenecks and determine appropriate measures to prevent them from occurring.

In addition to the predefined storage subsystem performance reports, you can generate your own reports under Disk Manager → Reporting → Storage Subsystem Performance. Generating these performance reports works similarly to generating reports about asset and configuration data. TotalStorage Productivity Center allows you to generate your own storage subsystem reports by:

- Storage Subsystem
- Controller
- IO-Group (SVC)
- Array
- Managed Disk Group (SVC)
- Volume
- Managed Disk (SVC)
- Port
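The dotted trend prolongation can be thought of as an extrapolation of the collected samples. The sketch below uses a least-squares straight-line fit as one plausible way to do this; it is not TotalStorage Productivity Center's actual forecasting algorithm, and the sample values are invented:

```python
def linear_trend(samples):
    """Least-squares fit y = a + b*x over (x, y) samples; returns (a, b)."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Hourly Total IO Rate samples (hour index, ops/s); values are invented.
history = [(0, 100.0), (1, 110.0), (2, 120.0), (3, 130.0)]
a, b = linear_trend(history)
print(a + b * 6)  # extrapolated value for hour 6 -> 160.0
```

Projecting the fitted line past the last sample gives the dotted continuation of the graph; a rising slope flags a potential future bottleneck.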

8.6.2 Viewing data about your fabrics


In this section we show you how to look at fabric data.

Viewing asset and configuration data


Asset and configuration data about the fabrics in our infrastructure is presented in the Topology Viewer and at two further locations in the TotalStorage Productivity Center user interface. We can view a list of fabrics, including information about the zoning configuration, under Fabric Manager → Fabrics, and we can generate a number of predefined reports on our SAN infrastructure assets under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric.

1. We start by expanding Fabric Manager and selecting Fabrics. We see a list of the fabrics in our infrastructure, as shown in Figure 8-65. In our environment, there is one fabric, which consists of just one switch.

Figure 8-65 Viewing fabric data - fabric list

2. We can select the fabric and click Zone Configuration. TotalStorage Productivity Center shows us a panel with the zoning information of this fabric (see Figure 8-66 on page 387). We can also change the zoning in these panels; we discuss zoning changes in 8.9, "Configuring your storage subsystems and switches" on page 409.


Figure 8-66 Viewing fabric data - zoning information

Besides this information, TotalStorage Productivity Center offers us a number of predefined reports to view asset and configuration data about our SAN infrastructure. We find these reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric, as shown in Figure 8-67 on page 388. These reports show the following data:

- Port Connections lists information about all free and occupied ports on the switches in your SAN. These reports provide information about the port connections and status, as well as the port and switch IDs, and other details.
- SAN Assets (All) lists information about all assets discovered by TotalStorage Productivity Center.
- SAN Assets (Connected Devices) lists information about all assets discovered by TotalStorage Productivity Center that are currently connected to the SAN.
- SAN Assets (Switches) lists all the switches discovered by TotalStorage Productivity Center.
- Port Errors lists information about errors being generated by ports on the switches in your SAN. These reports provide information about the error frames, dumped frames, link failures, port and switch IDs, and other details. This report can only be generated for switches that have performance data collected.


Figure 8-67 Viewing fabric data - reports

Viewing performance data


TotalStorage Productivity Center provides several predefined reports to view switch performance data and also allows the generation of user-defined, customized reports. We find the predefined fabric performance reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric. The following reports are available:

- Switch Performance displays a list and charts on the overall performance of the switches in your infrastructure. Overall performance means that TotalStorage Productivity Center aggregates the performance data over all ports in the switches.
- Top Switch Port Data Rate Performance displays a list of the 25 ports in your fabric which show the highest data rates (averaged over the time for which performance data has been collected).
- Top Switch Port Packet Rate Performance displays a list of the 25 ports in your fabric which show the highest packet rates (averaged over the time for which performance data has been collected).

Figure 8-68 on page 389 shows the Top Switch Port Data Rate Performance report for our infrastructure as an example.
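The Top Switch Port reports rank ports by their time-averaged rates. The following is a minimal sketch of that ranking; the function name and sample values are invented for illustration and are not part of TotalStorage Productivity Center:

```python
def top_ports_by_rate(port_samples, n=25):
    """Rank ports by their average data rate, highest first."""
    averages = {
        port: sum(rates) / len(rates)
        for port, rates in port_samples.items()
    }
    # Sort by average rate, descending, and keep the top n entries.
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:n]

samples = {  # port number -> collected data-rate samples (MB/s), invented
    5: [12.0, 18.0],
    6: [40.0, 60.0],
    12: [5.0, 5.0],
}
print(top_ports_by_rate(samples, n=2))  # [(6, 50.0), (5, 15.0)]
```

The predefined report fixes n at 25; averaging before ranking prevents a single short burst from dominating the list.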


Figure 8-68 Viewing fabric performance data - Top Switch Port Data Rate Performance report

In addition to the predefined reports, we can generate our own customized reports and charts by expanding Fabric Manager → Reporting → Switch Performance and selecting By Port. As an example, we generate a chart showing the performance development of the ports to which our SAN Volume Controller is connected. We have a two-node SAN Volume Controller in our infrastructure, so it is connected to eight ports of our 2005-B32 switch.

1. To determine which ports the SVC is connected to, we look at the Port Connections report under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric. We sort the report by Device by clicking the column header. We see the list shown in Figure 8-69.

Figure 8-69 Viewing fabric performance data - generate customized report, determine ports to report

We note that our SVC is connected to ports 5, 6, 12, 13, 14, 15, 21, and 22.

2. Expand Fabric Manager → Reporting → Switch Performance, select By Port, and click Selection in the Content Pane.


3. TotalStorage Productivity Center shows us a panel where we can select the ports we want to include in our report. Note that in this panel TotalStorage Productivity Center offers us only the worldwide names of the ports, so we have to map them to the port numbers. In our example, port 5 is 200500051E34E895, port 6 is 200500051E34E895, and so on. We click Deselect All and then select our eight ports (Figure 8-70).

Figure 8-70 Viewing fabric performance data - generate customized report, select ports to report

4. We click OK and then Generate Report. TotalStorage Productivity Center shows us the performance report for the eight ports we selected.

5. We now want to generate the chart, so we select the little pie chart icon in the top left corner of the Content Pane. In the dialog panel which opens, we specify the type of chart and the metric TotalStorage Productivity Center should use to generate our chart. We select History Chart and, as a metric, Total Port Data Rate, as shown in Figure 8-71 on page 391.


Figure 8-71 Viewing fabric performance data - generate customized report, specify chart

6. After clicking OK, TotalStorage Productivity Center shows us our performance history chart depicting the Total Port Data Rate for the eight SAN Volume Controller Ports, as shown in Figure 8-72.

Figure 8-72 Viewing fabric performance data - customized performance chart

We can save the report by selecting File → Save in the menu bar. Note that this saves the report definition, but not the definition for our chart.

8.7 Viewing data about your tape libraries


You can view a list of the tape libraries in your infrastructure by following these steps:

1. Expand Tape Manager and select Tape Libraries, as shown in Figure 8-73 on page 392.


Figure 8-73 Viewing tape libraries - tape library list

2. In our example, we see the one tape library we have in our lab environment. TotalStorage Productivity Center shows us the status of the library, the number of drives, the number of cartridges, and the maximum number of cartridges. We can launch a detail panel for our tape library by clicking the icon next to the entry (Figure 8-74).

Figure 8-74 Viewing tape libraries - tape library detail panel

3. In this panel, you can enter a name for your tape library and some user-defined properties. Selecting the Tape Libraries tab of the Content Pane returns you to the list of tape libraries.

4. You can now generate lists for drives, media changers, IO ports, and cartridges. You can also launch the element manager of your tape library. Figure 8-75 shows a list of the available drives in our lab tape library as an example.


Figure 8-75 Viewing tape libraries - list of drives

There are no further reports, tables or charts for tape libraries available with TotalStorage Productivity Center V3.1.

8.7.1 Viewing data about your computers and file systems


TotalStorage Productivity Center organizes the viewing of data about your computers and file systems in a way very similar to the viewing of asset and configuration information about your storage subsystems. TotalStorage Productivity Center basically offers you the following ways to look at your servers and computers:

- Predefined reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Data
- User-defined reports under Data Manager → Reporting
- Navigation Tree-based asset-repository browsing under Data Manager → Reporting → Asset
- Topology Viewer, described in Chapter 9, "Topology viewer" on page 415

Note: Only servers and computers with Data Agents installed contribute to this information and data.

Navigation Tree-based asset reporting


We start with the Navigation Tree-based asset reporting, which we already know from viewing asset and configuration data of our storage subsystems.

1. Expand Data Manager → Reporting → Asset as shown in Figure 8-76 on page 394. TotalStorage Productivity Center allows us to browse our assets grouped by cluster, by computer, and by operating system type. Each of the groupings leads to the same information.


Figure 8-76 Viewing computer and file system information - asset browsing, grouping

2. By drilling further into the nodes of the individual computers, we find all important storage-related information, such as disk controllers, disks, file systems, network shares, and so on. Clicking an element in the Navigation Tree always shows a detail page about the element in the Content Pane. Figure 8-77 on page 395 shows a fully expanded asset tree for a Microsoft Windows 2003 server to provide an overview of the information you can browse.


Figure 8-77 Viewing computer and file system information: asset browsing, overview

Predefined reports
Next we look at the system-supplied predefined reports on computers and file systems. You can find these reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Data. TotalStorage Productivity Center offers the following predefined reports:

- Access Time Summary Report is a summary of the number of files in your environment and when they were last accessed.
- Disk Capacity Summary Report includes disk capacity, per disk, per computer, per cluster, per computer group, per domain, or for the whole environment.
- Access File Summary Report is an overview of the information for files by directory, directory group, file system, file system group, cluster, computer, computer group, domain, and for the entire network.
- Disk Defects Report lists any disk defects on the computers being monitored by TotalStorage Productivity Center through Data Agents.
- Most at Risk Files Report displays information about the oldest files that have been modified and have not yet been backed up or archived since they were modified.
- Oldest Orphaned Files Report displays information about files that have the oldest creation date and no longer have their owners registered as users on the computer or network.
- Most Obsolete Files Report lists information about files that have not been accessed or modified for the longest period of time.
- Storage Access Times Report lists when files were last accessed and how long ago they were accessed.
- Storage Availability Report displays the availability of the computers that are monitored with pings.
- Storage Capacity Report lists storage capacity information about each computer within your organization.
- Storage Modification Times Report provides information about files within the network that were modified.
- Total Freespace Report displays the total amount of unused storage across a network.
- User Quota Violations Report lists which users within the enterprise have violated a Data Manager quota.
- User Space Usage Report lists information about storage statistics related to a specific user within your enterprise.
- Wasted Space Report lists information about storage statistics on non-OS files not accessed in the last year and orphan files.

Some of the reports allow you to drill down into the single rows of the report, and some allow you to generate charts from the report.

1. As an example, we take a look at the Disk Capacity Summary Report. When generated, it first shows us a one-row report summarizing the total disk capacity of all computers in our infrastructure running Data Agents, as shown in Figure 8-78.

Figure 8-78 Computer and file system information, predefined reports, and Disk Capacity Summary

2. In our environment, the six computers that have Data Agents installed have a total disk capacity of 19.26 TB. We can now drill into this row and see the disk capacities for each of those six computers, as shown in Figure 8-79 on page 397.


Figure 8-79 Viewing computer and file system information, Disk Capacity per computer

3. By clicking the pie chart icon in the top left corner of the Content Pane, we can generate a graphical depiction of this disk capacity distribution in the form of a bar chart (see Figure 8-80).

Figure 8-80 Viewing computer and file system information, Disk Capacity per computer, bar chart

4. In addition to these predefined reports, you can generate a wide variety of user-defined reports under Data Manager → Reporting, as shown in Figure 8-81 on page 398.


Figure 8-81 Viewing computer and file system information, user defined reports

The generation of these reports works similarly to the generation of the reports we highlighted in previous sections. Refer to the IBM TotalStorage Productivity Center User's Guide, GC32-1775, for more details on the available user-defined reports.

8.8 Alerting
You can set up TotalStorage Productivity Center to examine the data about your infrastructure, as it enters the system, for certain configurable conditions, and to trigger events as a reaction to those conditions. For this purpose TotalStorage Productivity Center uses two mechanisms: Alerts and Constraints. Alerts and Constraints have different goals but are closely related, as we will see. First, we define what we mean by Alerts and Constraints.

Alerts
TotalStorage Productivity Center Alerts cause an entry in the TotalStorage Productivity Center Alert Log, found under IBM TotalStorage Productivity Center → Alerting → Alert Log. In addition to this entry, an Alert can optionally cause one of the following actions:

- SNMP Trap
- Tivoli Enterprise Console (TEC) Event
- Login Notification (a notification of a TotalStorage Productivity Center user, which is received upon login)
- Windows Event Log Entry
- The execution of a script on any server in the infrastructure running a Data Agent
- E-mail

Constraints
TotalStorage Productivity Center Constraints are configurable conditions (often, but not always, thresholds) which cause TotalStorage Productivity Center to log the detection of such a condition in its database as a Constraint Violation. These Constraint Violations can be viewed in reports and charts and can also be used to trigger certain actions.

As we saw when defining our data collection jobs in the previous chapters, we can define Alerts for jobs within TotalStorage Productivity Center to notify the user when a job completes with errors. In addition to those more implicit Alert definitions, we can define Alerts for many other storage-related events in our environment. In this chapter we concentrate on Alerts and Constraints for:

- Computers and their file systems and directory structures
- Storage subsystems
- Fabrics, switches, and SAN endpoints

TotalStorage Productivity Center handles Alerts and Constraints a little differently for computers, file systems, and directories on the one hand, and for storage subsystems, fabrics, switches, and SAN endpoints on the other, as we will see in the following sections.

Alerts and Constraints for computers, file systems and directories


TotalStorage Productivity Center handles Alerts and Constraints for computers, file systems, and directories relatively independently. You define the Alerts under Data Manager → Alerting and the Constraints under Data Manager → Policy Management → Constraints. The only logical connection between the two is that a Constraint can be configured to trigger an Alert. Figure 8-82 shows this relationship.

Figure 8-82 Alerts and Constraints for computers, file systems, and directories

TotalStorage Productivity Center allows you to define very granular conditions for Alerts and Constraints for computers, file systems, and directories. This is especially true for Constraints, which can become rather complex. However, we start our discussion with Alerts.

Creating an Alert
We now define an Alert and a Constraint for our infrastructure as an example. We want to receive an e-mail if the free space on the root file system of our AIX server (AZOV) falls below a threshold of 200 MB. We define this condition as an Alert.

Chapter 8. Getting Started with TotalStorage Productivity Center

399

We further want to create a Constraint Violation each time a certain user (tpcadmin) stores an mp3 file on the file system of the C: drive of the pqdi server, and to create an entry in the TotalStorage Productivity Center Alert Log if the total capacity of all these mp3 files exceeds 5 MB.

1. We start by setting up our Alert. Expand Data Manager → Alerting, right-click Filesystem Alerts, and select Create Alert (see Figure 8-83).

Figure 8-83 Creating file system Alert

2. In the Content Pane (Figure 8-84), we must specify the condition which should trigger TotalStorage Productivity Center to raise the Alert. In our example, the condition is that a file system's free space falls below 200 MB. We further specify that TotalStorage Productivity Center should send an e-mail to a certain user should the condition arise.

Figure 8-84 Creating file system Alert, specify alert condition and action

3. We then select the Filesystems tab of the Content Pane. Here we define the file system to which our condition will apply. In our case, we select the root file system of the AZOV machine (see Figure 8-85 on page 401).

400

IBM TotalStorage Productivity Center: The Next Generation

Figure 8-85 Creating file system Alert, specify file system

4. We now save the Alert by selecting File → Save in the menu bar. We name the Alert AZOV filesystem freespace alert (see Figure 8-86).

Figure 8-86 Creating file system Alert, saved Alert

Now the Alert is active. At the next Scan, when TotalStorage Productivity Center detects that the free space of the root file system of AZOV has fallen below the configured threshold of 200 MB, an Alert is raised in the TotalStorage Productivity Center Alert Log and an e-mail is sent to the recipient specified in the Alert definition. Figure 8-87 shows the TotalStorage Productivity Center Alert Log.


Figure 8-87 Creating file system Alert, Alert Log entry

You can bring up a detail screen for the Alert by clicking the icon next to the Alert Log entry.
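The file system Alert we just defined amounts to a simple free-space comparison at Scan time. The following is a minimal sketch of that check; the function name and message format are invented and this is not TotalStorage Productivity Center code:

```python
import shutil

WARN_BYTES = 200 * 1024 * 1024  # 200 MB, the threshold from our Alert definition

def filesystem_alert(path, threshold=WARN_BYTES):
    """Return an alert message if free space on path is below threshold."""
    free = shutil.disk_usage(path).free
    if free < threshold:
        return "Filesystem %s freespace %d bytes below threshold %d" % (
            path, free, threshold)
    return None  # condition not met, no alert

print(filesystem_alert("/"))
```

In TotalStorage Productivity Center the comparison happens on the Data Agent during a Scan, and the resulting Alert then fans out to the configured actions (Alert Log entry, e-mail, and so on).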

Constraint definition
In this section we define our Constraint. Remember, we want to create a Constraint Violation each time a certain user (tpcadmin) stores an mp3 file on the file system of the C: drive of the pqdi server. In addition, we want to create an entry in the TotalStorage Productivity Center Alert Log if the total capacity of all these mp3 files exceeds 5 MB.

1. Expand Data Manager → Policy Management → Constraints. You can see a number of predefined Constraints which are already active when you install TotalStorage Productivity Center.

2. Right-click Constraints and select Create Constraint, as shown in Figure 8-88.

Figure 8-88 Creating Constraint

3. Unlike the Alert definition, we first have to select the file system for which we want to define our Constraint. We select the C:\ file system of the pqdi server, as shown in Figure 8-89.


Figure 8-89 Creating Constraint, select file system

4. Select the File Types tab of the Content Pane. Here, you specify the types of files which you want to trigger the Constraint Violation. In our example, we select the predefined mp3 type. We could also define new file types in the panel shown in Figure 8-90.

Figure 8-90 Creating Constraint, select file types

5. We further want to limit the Constraint to one specific user, tpcadmin. We select this user in the Users tab of the Content Pane, shown in Figure 8-91 on page 404.


Figure 8-91 Creating Constraint, select user

6. In the Options tab, we could specify some more parameters for our Constraint, such as the number of violating file names TotalStorage Productivity Center stores in its database, and some conditions which would trigger an Alert other than those defined in the Constraint.

7. In the Alert tab, we specify the condition that an Alert should be raised if the total size of all violating mp3 files stored by this user exceeds 5 MB (Figure 8-92).

Figure 8-92 Creating Constraint, specify Alert condition


8. Save your Constraint and name it pqdi mp3 constraint.

9. Log on to pqdi using the tpcadmin user and store more than 5 MB of mp3 files on the C:\ file system. This causes the generation of a Constraint Violation and an entry in the Alert Log on the next run of a Scan job against pqdi.

10.You can view the Constraint Violation by expanding Data Manager → Reporting → Usage Violations → Constraints and selecting By Computer (Figure 8-93 on page 405).

Figure 8-93 Creating Constraint, Constraint Violation report

11.We can drill even further into the Constraint Violation and list the files which caused the violation. In our case, the files are file1.mp3 to file4.mp3 and are stored in the root directory of C:\, as shown in Figure 8-94.

Figure 8-94 Creating Constraint, Constraint Violation details


Because we defined it in our Constraint definition, the Constraint Violation has also triggered an Alert entry in the TotalStorage Productivity Center Alert Log: the combined size of the four mp3 files exceeds 5 MB (see Figure 8-95).

Figure 8-95 Creating Constraint, Constraint triggered Alert
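Conceptually, the Constraint evaluation during a Scan filters the collected file records by type and owner, and then compares the combined size against the Alert threshold. The sketch below illustrates that logic with an invented record layout; it is not the Data Agent's actual data model:

```python
def mp3_violations(scan_records, user, limit_bytes=5 * 1024 * 1024):
    """Filter a Scan's (path, owner, size) records for .mp3 files stored by
    `user`, and decide whether their combined size violates the threshold."""
    files = [(path, size) for path, owner, size in scan_records
             if owner == user and path.lower().endswith(".mp3")]
    total = sum(size for _, size in files)
    return files, total > limit_bytes

records = [  # (path, owner, size in bytes) -- invented sample data
    ("C:\\file1.mp3", "tpcadmin", 2 * 1024 * 1024),
    ("C:\\file2.mp3", "tpcadmin", 4 * 1024 * 1024),
    ("C:\\report.doc", "tpcadmin", 1024),
    ("C:\\song.mp3", "other", 1024),
]
files, alert = mp3_violations(records, "tpcadmin")
print(files, alert)  # two matching files, 6 MB total -> alert is True
```

Each matching file becomes a Constraint Violation entry; the Alert fires only when the aggregate size crosses the limit, which mirrors the behavior we saw above.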

Alerts and Constraints for storage subsystems, fabrics and switches


TotalStorage Productivity Center supports Alerts for storage subsystems, fabrics, switches, and SAN endpoints. Constraints can exist for storage subsystems and switches. For these element classes, it is not possible to define Alerts and Constraints independently: you can define Alerts only. For some of the Alert conditions you specify during the Alert definition, however, a Constraint is automatically generated. These conditions are the Threshold conditions available as Alert conditions for storage subsystems and switches. Figure 8-96 shows an overview of the relationships between Alerts and Constraints for storage subsystems, fabrics, switches, and SAN endpoints.

Figure 8-96 Alerts and Constraints for storage subsystems, fabrics, and switches

Although an Alert definition can generate a Constraint definition, and you can report on the Constraint Violations caused by those Constraint definitions, there is no straightforward way to view the Constraint definition itself.


Important: TotalStorage Productivity Center comes with three default Constraints which cause Constraint Violations in the Constraint Violation reports and may cause elevated status indications in the Topology Viewer. These Constraints, however, do not raise a TotalStorage Productivity Center Alert Log entry when violated. These predefined default Constraints are storage subsystem Constraints only. Their metrics and conditions are:

- NVS_Full is defined as DASD fast write operations delayed due to NVS space constraints, divided by all I/O requests. Default values: critical level: 10, warning level: 3.
- Cache_HoldTime means the cache holding time in seconds for this subsystem. Default values: critical level: 30 sec., warning level: 60 sec.
- Disk Utilization is expressed in percentages: critical level is 80%, warning level is 50%.

TotalStorage Productivity Center creates Constraint definitions for storage subsystems and switches only if you define Alerts with Threshold conditions. These conditions are all performance related, which means that Constraint Violations are only raised while performance data is being collected and entering the TotalStorage Productivity Center system.

Here we create an Alert definition and an implicit Constraint definition for the condition that the Total Data Rate for our DS8000 subsystem exceeds 5 MB/s.

1. Expand Disk Manager → Alerting, right-click Storage Subsystem Alerts, and select Create Storage Subsystem Alert, as shown in Figure 8-97.

Figure 8-97 Creating storage subsystem Alert

2. In the Content Pane, we must specify the condition for our Alert. Select Total Data Rate Threshold and enter the values for the metric. When defining storage subsystem Alerts, we can establish boundaries for the normal expected subsystem performance; when a collected performance data sample falls outside the range we set, an Alert is generated. The upper boundaries are Critical Stress and Warning Stress; the lower boundaries are Warning Idle and Critical Idle. We can leave some of the values empty, in which case TotalStorage Productivity Center simply does not check the performance data against that boundary. In our example, shown in Figure 8-98 on page 408, we only define the Critical Stress value, at 5 MB/s.


Figure 8-98 Creating storage subsystem Alert, specifying the alert condition

We do not specify any additional actions to be taken by TotalStorage Productivity Center when the Alert is raised.

3. In the Storage Subsystems tab, select the storage subsystem for which the Alert (and the implicit Constraint Violation) shall be raised. We select our DS8000 system, as shown in Figure 8-99.

Figure 8-99 Creating storage subsystem Alert, selecting the storage subsystem

4. We now save our storage subsystem Alert definition and name it DS8000 Total Data Rate. Each time a performance data sample is retrieved by the Performance Monitor job running for our DS8000 storage subsystem, TotalStorage Productivity Center inspects the performance data against the threshold we specified in our Alert. When TotalStorage Productivity Center detects that the Total Data Rate exceeds 5 MB/s, it raises an Alert in the Alert Log and also logs a Constraint Violation, because the Alert condition for this Alert was a Threshold condition.

5. We can view this Constraint Violation by expanding Disk Manager → Reporting → Storage Subsystem Performance and selecting Constraint Violations. Select the Total Data Rate Threshold column and generate the report. The corresponding Alert can be viewed in the TotalStorage Productivity Center Alert Log.

Note: If a Performance Monitor job is currently running for a specific subsystem, fabric, or switch, you are able to define Alerts and implicit Constraints for that same element. However, these definitions will not be in effect unless you stop the Performance Monitor, delete it, redefine it, and restart it.

You can define Alerts for fabrics, switches, and SAN endpoints, and implicit Constraint definitions for switches, in the same way as described above, under Fabric Manager → Alerting. The Constraint Violations can be viewed under Fabric Manager → Reporting → Switch Performance by selecting Constraint Violations and generating the desired report. The Alerts can be viewed in the TotalStorage Productivity Center Alert Log.
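The four threshold boundaries evaluated against each performance sample can be sketched as follows. The function name and severity labels are invented for illustration; TotalStorage Productivity Center's internal evaluation is not documented here:

```python
def classify_sample(value, critical_stress=None, warning_stress=None,
                    warning_idle=None, critical_idle=None):
    """Check a performance sample against the four optional boundaries.

    Boundaries left as None are not checked, matching the behavior of
    leaving a value empty in the Alert definition panel.
    """
    if critical_stress is not None and value >= critical_stress:
        return "critical stress"
    if warning_stress is not None and value >= warning_stress:
        return "warning stress"
    if critical_idle is not None and value <= critical_idle:
        return "critical idle"
    if warning_idle is not None and value <= warning_idle:
        return "warning idle"
    return "normal"

# Only Critical Stress set, as in our DS8000 example (5 MB/s):
print(classify_sample(7.2, critical_stress=5.0))  # critical stress
print(classify_sample(3.1, critical_stress=5.0))  # normal
```

Any classification other than "normal" would correspond to a raised Alert and, because Threshold conditions are involved, a logged Constraint Violation.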

8.9 Configuring your storage subsystems and switches


TotalStorage Productivity Center allows you to perform basic configuration tasks on the storage subsystems, including the SAN Volume Controllers, and the switches in your environment. With TotalStorage Productivity Center you can:

- Create volumes on your storage subsystems and Virtual Disks on your SAN Volume Controllers, and assign them to host ports.
- Add Managed Disk Candidates to Managed Disk Groups on your SAN Volume Controller.
- Delete Volumes and Virtual Disks.
- Change the zoning configuration of your switches.

To enter the Volume / Virtual Disk creation dialog, follow these steps:

1. Expand Disk Manager and select Storage Subsystems. This shows the list of all available storage subsystems and SVCs in our infrastructure. Select a storage subsystem or SAN Volume Controller and click Create Volume or Create Virtual Disk to start the Volume / Virtual Disk creation process, as shown in Figure 8-100 on page 410. In our scenario, we selected the SVC.

Chapter 8. Getting Started with TotalStorage Productivity Center


Figure 8-100 Configuring storage subsystems and switches, invoke volume / virtual disk creation

2. Enter the zoning configuration dialog box by expanding Fabric Manager and selecting Fabrics. This shows a list with the available fabrics. Select a fabric and click Zone Configuration. It is also possible to enter the zone configuration dialog box directly from the Create Volume / Virtual Disk dialog box shown in Figure 8-101.

Figure 8-101 Configuring storage subsystems and switches, invoke zoning configuration

We will now create a Virtual Disk on our SAN Volume Controller and assign it to our SVC Master Console ITSOSVC.almaden.ibm.com (see Figure 8-1 on page 341) as an example. There is no zone between this computer and the SVC, so we also need to create this zoning configuration.
1. Enter the Create Virtual Disk dialog box by selecting the SAN Volume Controller and clicking Create Virtual Disk in the screen shown in Figure 8-100.
2. TotalStorage Productivity Center shows a panel where you specify the properties of the Virtual Disk you want to create. Create one (1) Virtual Disk with a size of 500 MB, name it TPC_CREATED, and let IO-Group 1 manage it. Specify that the Virtual Disk is to be created in the DS4500_R10 Managed Disk Group. Click Next (see Figure 8-102 on page 411).


IBM TotalStorage Productivity Center: The Next Generation

Figure 8-102 Create Virtual Disk

3. We now see a list of all WWPNs TotalStorage Productivity Center has located. The information about the WWPNs comes from various sources (from the switches and from the host definitions in the subsystems), and the WWPNs are not ordered by fabric or any other criteria. As a result, we must look for one of our Master Console HBAs, which we can locate easily because the system presents us with meaningful names for the HBAs of this computer.
4. Select one of the two HBAs (we could also select both) by moving the HBA to the Assigned Ports column (see Figure 8-103). Then click Next.

Figure 8-103 Create Virtual Disk, select host port


5. TotalStorage Productivity Center now recognizes that there is no zone between the SVC and the Master Console. It asks whether it should create a new zone, update an existing one, or do nothing. Select Create a new zone, name it SVCMC_ITSOSVC01 (see Figure 8-104), and click Next.

Figure 8-104 Create Virtual Disk, create zone

6. View the summary page as in Figure 8-105, where you can review your configuration request.

Figure 8-105 Create Virtual Disk and zone, summary page

7. Click Finish. TotalStorage Productivity Center now creates three jobs: one for the creation of the Virtual Disk, one for the LUN mapping, and one for the creation of the new zone.


The first two jobs, related to the configuration changes of the SVC (Virtual Disk creation and LUN mapping), create entries under Disk Manager → Monitoring → Jobs (see Figure 8-106). The job for the creation of the new zone creates an entry under Fabric Manager → Monitoring → Jobs.

Figure 8-106 Create Virtual Disk and zone, job submitted

8. We can monitor the jobs and examine their logs by clicking the entries and drilling into the job logs (see Figure 8-107).

Figure 8-107 Create Virtual Disk and zone, job entries

The green squares and the entries in the job logs indicate that all the jobs have completed successfully, so our SVC Master Console is now able to access the Volume.
9. Look at your zone configuration and verify that the new zone has been created. Enter the zone configuration dialog box by expanding Fabric Manager and selecting Fabrics, then select your fabric and click Zone Configuration. Expand the active zone set to see the new zone SVCMC_ITSOSVC01 with the correct ports (see Figure 8-108 on page 414).


Figure 8-108 Create Virtual Disk and zone, new zone

10. As a last step, verify that the new Virtual Disk has been created and that it was assigned to our Master Console HBA. To do this, expand Disk Manager → Reporting → Storage Subsystems → LUN to HBA Assignments and select By Storage Subsystem. Select the SVC to be included in your report and generate the report (see Figure 8-109).

Figure 8-109 Create Virtual Disk and zone, LUN to HBA Assignment Report


Chapter 9. Topology viewer
Within IBM TotalStorage Productivity Center, the Topology Viewer is designed to provide an extended graphical topology view: a graphical representation of the physical and logical resources (for example, computers, fabrics, and storage subsystems) that have been discovered in your storage environment. In addition, the Topology Viewer depicts the relationships among resources (for example, the disks comprising a particular storage subsystem). Detailed tabular information (for example, attributes of a disk) is also provided. With all the information that the Topology Viewer provides, you can more easily and quickly monitor and troubleshoot your storage environment.

The overall goal of the Topology Viewer is to provide a central location to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation to the environment. The Topology Viewer UI supports this flexibility by fostering a better cognitive mapping between the entities within the environment, and by providing data about entities and access to additional tasks and functions associated with the current environmental view and the user's role.

The Topology Viewer uses the TotalStorage Productivity Center database as the central repository for all data it displays. It reads the data from the database at user-defined intervals and, if necessary, updates the displayed information automatically.

The Topology Viewer is an easy-to-use and powerful new tool within TotalStorage Productivity Center that will make a storage manager's life easier. But as is true for every tool, you first have to understand the basics, the concepts, and the dos and don'ts to get the most out of it. This chapter guides you through the Topology Viewer.
After reading this chapter you will know how the Topology Viewer is used, where to find the information you want to see, and how to navigate among the views within the tool. But first things first: we start with some definitions.

Copyright IBM Corp. 2006. All rights reserved.


9.1 Design principles and concepts


This section provides an overview of the design principles. Definitions of the basic terms are followed by a description of the general layout of the display, including how to navigate among the views. The last three sections cover what is displayed where.

9.1.1 Progressive information disclosure


The purpose of progressive information disclosure is to reduce the complexity of a system. As used in the Topology Viewer design, information is initially presented to the user at a high level. The presentation is progressively and selectively enriched with more detail, unveiling complexity as the user becomes more interested in particular aspects of the system. Figure 9-1 shows how a Storage Subsystem is displayed at the first layer of the Topology Viewer: a group of Storage Subsystems rendered as an icon with a label (DS8000).

Figure 9-1 Overview display with no details

The same object displayed in a more detailed view of the Topology Viewer discloses much more detail. In Figure 9-2, for an object such as the DS8000, the device details such as the number of Disks, Pools, Volumes and LUNs are presented. By clicking the plus sign at the upper right of the displayed Entity Groups, you can see even more information. Entity Groups are described in the following sections.

Figure 9-2 Enriched View


9.1.2 Semantic Zooming


Graphical topology displays appear simple and manageable when the environment is relatively simple, with a small number of entities and connections. Scaling is an issue in all kinds of complex displays.

Semantic Zooming is essentially a scaling technique applied to information abstractions, as opposed to graphical features as in graphical zooming. While graphical zooming changes the scale of the graphical representation of the objects in a view, semantic zooming changes the level of information abstraction.
Semantic Zooming is used in the Topology Viewer to address the scaling issue and is key to the design of the Topology Viewer. It is implemented with four distinct zoom levels, or levels of abstraction. The four levels are defined as:
1. Overview (everything): a global view which shows a highly aggregated view of the entire environment.
2. L0 (level 0, large groups of similar stuff): a groups view focusing on a class of entities. It shows several groups of entities that correspond to the selected topology class. The classes are Computers, Fabrics, Storage Subsystems, and Other.
3. L1 (level 1, related groups of stuff): a group view which focuses on one group of selected entities and shows their immediate neighbors, such as a group of computers.
4. L2 (level 2, one entity): the detail view, the most detailed view, which focuses on a selected entity and its neighbors.
The four levels of Semantic Zooming within the Topology Viewer are used and described in detail in 9.2, Getting started on page 434. For example, at a high level an object representing a storage subsystem on L0 may be rendered as an icon with a title only, as shown in Figure 9-3.

Figure 9-3 Topology Viewer Overview (L0 Storage)
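The difference between graphical and semantic zooming can be illustrated with a small Python sketch. This is a simplification written for this book, not the product's internal model; the entity data and level mapping are illustrative:

```python
# Illustrative sketch of semantic zooming: each zoom level selects a
# different information abstraction for the same entity instead of
# scaling the same drawing. The entity data below is made up; the
# level names follow the text (Overview, L0, L1, L2).

SUBSYSTEM = {
    "label": "DS8000",
    "groups": {"Disks": 128, "Pools": 4, "Volumes": 64},
    "ports": ["port-0", "port-1"],      # illustrative port names
}

def render(entity, level):
    """Return the information abstraction shown at the given zoom level."""
    if level == "Overview":             # everything, highly aggregated
        return entity["label"]
    if level == "L0":                   # large groups: icon with label only
        return {"icon": entity["label"]}
    if level == "L1":                   # related groups: counts per entity group
        return {entity["label"]: dict(entity["groups"])}
    if level == "L2":                   # one entity: full detail, including ports
        return {entity["label"]: {**entity["groups"], "Ports": entity["ports"]}}
    raise ValueError("unknown zoom level: %s" % level)
```

The same `SUBSYSTEM` record is rendered four different ways; only the L2 abstraction exposes the individual ports.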

At lower levels, the same storage subsystem can be rendered as a box containing subentities representing disks and showing attributes of each individual disk. At yet another detail level, the ports of the storage subsystem might be shown, as well as status information for each port. The example in Figure 9-4 on page 418 shows the detailed view of a DS8000 Storage Subsystem on level L2.


Figure 9-4 Topology Viewer Details (L2 Storage)

Note that with graphical zooming, all of these details would be shown at every zoom level, often too small to be useful, adding clutter to the display. Semantic zooming thus reduces unnecessary detail and allows the user to focus on the details relevant to a particular task at a particular level.

9.1.3 Entities
An entity is defined as a physical device or logical resource discovered by TotalStorage Productivity Center. Each entity typically has a set of attributes; for example, an entity of class Computer has an OS Type attribute. Additionally, entities may have user-defined attributes called User Defined Properties (UDPs) that can be set from the Topology Viewer. All discovered entities have a conventional identification associated with them, similar to a label. These labels are displayed in the graphical as well as the tabular view.

Entity classes
The Topology Viewer assigns every entity discovered by TotalStorage Productivity Center to one of the following four entity classes:
- Computers
- Fabrics (includes switches)
- Storage (includes tape libraries)
- Other
The Topology Viewer provides specific views for each of these four classes. For each class, three separate views exist, matching the three detail levels (L0-L2) defined in 9.1.2, Semantic Zooming on page 417. Including the special Overview, the Topology Viewer thus facilitates the display of thirteen different views. The table in Figure 9-5 on page 419 shows these views in a matrix.


Figure 9-5 Overview views of Topology Viewer

Navigation between the single views, or cells, is simple and flexible. To drill down to a single computer, follow these steps:
1. Starting at the Overview, select the entity class Computer; view C0 displays. This is the Level 0 view of computers (groups of similar stuff).
2. Within C0, select one specific group; view C1 displays. This is the Level 1 view of computers (related groups of stuff).
3. Open the C2 view by selecting one single computer within the group displayed in C1. This displays the L2 view of a computer (one entity).
This is just an example; there are always multiple ways to find the requested information about the entities. Other examples, and how to navigate between views, are described in detail in 9.2.2, Navigation on page 436.
Note: The Storage views include both storage subsystems and tape libraries.

Entity class of Others


The first three entity classes are self-explanatory. The fourth one, the Other class, needs some explanation. TotalStorage Productivity Center puts all discovered entities that it was not able to identify as either Computer, Fabric, or Storage into this Other entity class. Assume TotalStorage Productivity Center successfully discovered a fabric with all its switches, and several CIMOMs with Storage Subsystems connected to this fabric. It also recognizes, through the installed Agents, several computers. The discovered switches also report WWNN and WWPN information back to TotalStorage Productivity Center that it cannot associate with a recognized computer's HBAs or with previously discovered FC ports of a Storage Subsystem. The TotalStorage Productivity Center Topology Viewer groups these WWPNs by WWNN and displays them as entities with a state of Unknown within the class Other. The operations supported by the Topology Viewer on these entities are:
- Assign a name to the entity
- Assign a type to the entity


The following example shows how this is done within the Topology Viewer.
1. Assume that we have one entity in class Other and that we have expanded the group. The Topology Viewer displays information similar to Figure 9-6.

Figure 9-6 Other Entity detailed view

The entity possesses two FC ports connected to a switch, but the entity type is unknown to TotalStorage Productivity Center.
2. Add information derived from other sources to this entity. In this case, the type and a descriptive name are added. To add the detailed information, move the cursor over the unknown entity and right-click. The Background Context Menu is displayed. Select Launch Detail Panel and the panel in Figure 9-7 opens.

Figure 9-7 Update Entity of type Other

3. By defining a type, in our example Computer, the Topology Viewer is able to display this entity in the appropriate class. We also change the entity's label from the originally displayed WWNN to TPC1. Opening the Computer Level 0 view (C0) displays the information in Figure 9-8 on page 421.


Figure 9-8 Display updated entity in new class

The changed entity is now displayed with the label TPC1 in the class Computers.
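The grouping of unidentified WWPNs by WWNN described above can be sketched as follows. This is an illustrative Python sketch written for this book, not product code; the WWN values and record fields are made up:

```python
# Illustrative sketch (not product code) of how WWPNs that cannot be
# matched to a known computer HBA or storage FC port are grouped by
# WWNN and surfaced as "Unknown" entities in the Other class.
# All WWN values below are made up.

from collections import defaultdict

def group_unknown_ports(reported_ports, known_wwpns):
    """reported_ports: iterable of (wwnn, wwpn) pairs seen by the switches."""
    unknown = defaultdict(list)
    for wwnn, wwpn in reported_ports:
        if wwpn not in known_wwpns:     # not a recognized HBA or FC port
            unknown[wwnn].append(wwpn)
    # One entity per WWNN, labeled with the WWNN until the user renames it.
    return [{"label": wwnn, "ports": ports, "state": "Unknown", "class": "Other"}
            for wwnn, ports in unknown.items()]

entities = group_unknown_ports(
    [("10:00:00:00:c9:aa:aa:aa", "10:00:00:00:c9:aa:aa:ab"),
     ("10:00:00:00:c9:aa:aa:aa", "10:00:00:00:c9:aa:aa:ac"),
     ("10:00:00:00:c9:bb:bb:bb", "10:00:00:00:c9:bb:bb:bc")],
    known_wwpns={"10:00:00:00:c9:bb:bb:bc"},
)
```

Here the two unmatched WWPNs share a WWNN, so they surface as a single Unknown entity with two ports, like the two-port entity in the example above.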

Entity groups
Entities are the displayable elements within the Topology Viewer. On the overview level entities can be computers, fabrics or storage subsystems. On the detailed views entities such as ports, HBAs, disks, pools, LUNs, and volumes will be shown.
Because the number of entities can become overwhelming in a real system, grouping of entities is employed to reduce the number of entities shown in the topology view. The Topology Viewer groups entities that share some similarity. Depending on the entity type, several grouping criteria are available. Users can change the default grouping algorithm so that entities are regrouped dynamically based on the selected criterion; by doing this, they can work most effectively for their current tasks. Figure 9-9 shows grouping of Disk Subsystem entities in one single group:

Figure 9-9 Single group of Disk subsystems

To change the grouping algorithm, move the cursor over the specific group, right-click, then select the group by option. Depending on the type of the entities in the group, different grouping criteria are available. Changing the grouping criterion in our example from Single Group to Group by Health Status produces the view of two groups shown in Figure 9-10.

Figure 9-10 Disk Subsystem grouped by Health Status
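The regrouping behavior can be sketched as follows. This is an illustrative Python sketch, not product code; the entity records are made up:

```python
# Illustrative sketch of dynamic regrouping (not product code); the
# entity records below are made up.

from collections import defaultdict

ENTITIES = [
    {"label": "DS8000", "health": "Normal"},
    {"label": "DS4500", "health": "Normal"},
    {"label": "SVC",    "health": "Critical"},
]

def group_by(entities, criterion=None):
    """Group entities by an attribute; None means one single group."""
    if criterion is None:
        return {"Storage": list(entities)}
    groups = defaultdict(list)
    for entity in entities:
        groups[entity[criterion]].append(entity)
    return dict(groups)

single = group_by(ENTITIES)               # default: one single group
by_health = group_by(ENTITIES, "health")  # regrouped by health status
```

Switching the criterion from the default single group to health status splits the same three subsystems into a Normal group and a Critical group, mirroring Figures 9-9 and 9-10.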


The arrangement of entities within a group (the sort order) can also be modified to best suit your current tasks. By clicking the plus/minus sign in the upper right corner of an Entity Group, the group can be expanded or collapsed. In the graphical topology view, as well as in the tabular view, a default grouping is applied so that collections of entities are shown as groups whenever possible.

9.1.4 Information Overlays


With Information Overlays, interactive displays can be created that provide information in various overlays, activated when the user performs an operation. Information Overlays are another technique used in the topology design to enrich the visual display of the environment with information relevant to the user's tasks, such as system health and performance. While this technique is traditionally used in geographical maps, it is well suited to the visualization of complex systems. Within the Topology Viewer, user tasks are supported by colored and iconic status overlays of information such as:
- Health status of entities or groups of entities
- Performance overlays
- Policy compliance/violations
An entity with health overlay (upper square) and performance overlay (bar chart) enabled is presented in the graphical view of the Topology Viewer as shown in Figure 9-11.

Figure 9-11 Overlays graphical view

Overlays are also displayed in the tabular view. The same overlays as shown in Figure 9-11 are shown in the tabular view in Figure 9-12.

Figure 9-12 Overlays Tabular View

In TotalStorage Productivity Center V3.1, Health Overlays are also applied to Connections by displaying them in different colors (see 9.1.6, Connections on page 424). This may be expanded to other overlays in future releases. Overlays can be turned on and off by setting the specific option in the Background Context Menu. This menu is available at all times for the background of the graphical views; the background is any part of the graphical view that is not displaying a group entity, individual entity, or connection. To invoke the context menu, right-click in the background. Selecting Global Settings displays the overlay options.


Note: Overlay settings are not persistent; therefore, you have to activate them for each session. Performance overlays are only visible on the detailed level, which is the L2 view.

9.1.5 Layout
The primary launch point for the Topology Viewer is the standard navigation area (the node tree) on the left side of the TotalStorage Productivity Center interface. The Topology Viewer provides several views of the environment, each split into two synchronized subviews displayed in the Topology Viewer's panel:
- Graphical Topology View, displayed in the upper part of the panel
- Tabular View, displayed in the lower part of the panel
The graphical and tabular subviews are displayed together as one view and are always synchronized; they both display the same entities. Changes in one subview update the other, and vice versa. After launching the Topology Viewer, the top-level view is always open. If multiple views are open, only one is displayed in the panel. Each view creates its own tab, as shown in Figure 9-13. By selecting one of the tabs you can quickly switch between the views.

Figure 9-13 Topology Viewer Panel layout

Graphical View
The Graphical View is a visual and spatial rendering of the entities and their relationships in the environment, using icons, boxes, and lines to show entities and their logical and physical connections. Entities are rendered as icons with labels. Groups of entities are rendered as boxes with labels; depending on whether a group is collapsed or expanded, the individual entities in the group are visible, or a summary of the group's content is provided. Logical and physical connections among entities are rendered on demand as lines connecting the relevant entities or groups of entities. The advantage of the graphical view is that relationships among entities are rendered spatially, allowing users to spatially map the environment and orient themselves for their tasks.


Tabular View
The Tabular View displays a textual rendering of the environment using a set of tables organized in tabs by class of entities with each table organized in rows and columns showing different attributes of the entities in the environment.

Mini-map
To further facilitate effective navigation, the graphical view contains a mini-map window that represents an abstract overview of the whole topology view, thereby providing an environmental and work context for the user. The mini-map indicates the visible portion of the topology and facilitates user interaction to pan the visible portion.

Figure 9-14 Topology Viewer MiniMap

The mini-map window renders the whole topology view in abstract form, reducing the level of detail and ensuring that the complete mini-map is visible at all times. Essentially, the map shows only the top and second level of detail of the current view and groups are represented as colored boxes based on the aggregated health value of that group. Note: In TotalStorage Productivity Center V3.1, the health overlay is shown. Future versions of topology might allow users to select what overlay is surfaced in the mini-map.

9.1.6 Connections
The Topology Viewer UI supports both physical and logical connections among entities in the environment. Connections are shown in both graphical and tabular views on demand. The connection option is turned on or off via the Background Context Menu (see 9.1.4, Information Overlays on page 422). Connections between entities are only shown at L1 and L2 view levels. In these views connections between entities are shown on demand, when the user selects entities or groups.

Health Status Overlays


Health Status Overlays can be applied to connections if they are turned on via the Background Context Menu (see 9.1.4, Information Overlays on page 422). Unless the Health Overlay is turned on, connections are shown as black lines. Figure 9-15 on page 425 shows how connections are displayed for one single switch in the graphical as well as in the tabular view.


Figure 9-15 Topology Viewer Connections

A connection to a collapsed group can represent either a single connection or several connections. Single connections are drawn as thin lines, but multiple connections are indicated using thicker lines. Therefore, Figure 9-15 indicates that the switch is connected to more than one entity in the collapsed group.

9.1.7 Status propagation


Entities within the Topology Viewer have a status. The Topology Viewer distinguishes two types of status: one signifies the actual Health status of an entity, the other covers the actual Performance status. There are five unique Health states and four unique Performance states. As already shown in 9.1.4, Information Overlays on page 422, an entity's state is visualized by means of overlays and displayed in the topology view as well as in the tabular view of the Topology Viewer.

Health status
Health status is shown on connections. Individual connections can show a normal (green) or critical (red) state. If several connections are aggregated and shown as one (thicker) line, this line may also show a warning state (yellow) according to the normal aggregation rules. Aggregation rules are discussed in the following sections. The table in Figure 9-16 on page 426 gives an overview of the actual states and their graphical representation within the Topology Viewer.


Figure 9-16 Topology Viewer states

The Health Status Overlay displays the operational status of the entities, such as Computers, Storage Subsystems, and Switch Ports. Entities can show the following health values:
- Normal: The entity is operating normally.
- Warning: At least some part of the entity is not operating or has serious problems.
- Critical: The entity is either not operating or has serious operational problems.
- Missing: The entity was discovered and recognized by TotalStorage Productivity Center, but is no longer discovered as of the latest refresh cycle.
- Unknown: The entity was discovered but not recognized by TotalStorage Productivity Center.

Examples of entities with Unknown health values are:
- A computer without any TotalStorage Productivity Center Agent, connected to a fabric or switch monitored by TotalStorage Productivity Center
- A Disk Subsystem or Tape Library not registered to a CIMOM, or registered to a CIMOM not discovered by TotalStorage Productivity Center, connected to a fabric or switch monitored by TotalStorage Productivity Center
The example in Figure 9-17 shows three expanded entity groups. Each entity displays its specific state. The group state is displayed at the top, to the left of the group label.

Figure 9-17 Group States


Performance status
The Performance Overlay displays whether predefined internal thresholds have been reached or exceeded for entities such as Computers, Storage Subsystems, and Switch Ports. The Topology Viewer scans the PMM_Exception table for existing constraint violations. The Topology Viewer shows the performance status of an entity as one of the following:
- Normal: The entity is operating at the expected performance.
- Warning: At least one part of the entity is not operating at the expected performance.
- Critical: The entity is operating below the expected performance.
- Unavailable: Used if the entity is not available in certain situations.

The performance status is aggregated up to a high-level entity, such as a switch or subsystem, based on the combination of all the Performance Manager exceptions from the different entities underneath it. To get more information about a performance state of Critical or Warning shown in the Topology Viewer, select Disk Manager → Reporting → Storage Subsystem Performance → Constraint Violations. Generate the report for the specific subsystem, look for the first critical or warning entry, and display it to get the details of date, time, and values. Then search for the alert timestamp in the Alerts report to determine the most recent alert. An example of the Performance Overlay is shown in Figure 9-21 on page 430.

Aggregation rules
Multiple states from single entities are aggregated into group states or states of entities shown in upper layers. The table in Figure 9-18 on page 428 explains how entity states are aggregated within the graphical and tabular view of the Topology Viewer.


Figure 9-18 Basic aggregate status

Following are the health and performance state aggregation rules:
- If all entities are in the same state, the aggregated state reflects this state.
- An aggregated Critical state (red) occurs if:
  - For a single entity, all the entries in the database are critical.
  - One entity is in the Missing state and at least one other entity shows a Critical state (shown in the first and second rows). The state remains critical as long as the entries are maintained in the PMM_Exception table.
- An aggregated Normal state occurs if:
  - One entity is in the Normal state and all others are in the Normal or Undefined state (fourth row from top).
  - One entity is in the Undefined state and all others are in the Normal or Undefined state (fifth row from top).
- An aggregated Warning state (yellow) occurs if:
  - At least one entry for an entity is in the Warning state, or Critical and Warning state entities are combined.
  - All other combinations not described under the Critical or Normal state aggregate to the Warning state.
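These aggregation rules can be summarized in a short Python sketch. This is a simplification written for illustration, not the product's code: it works on plain state strings and ignores the per-entity database entries that the product's table covers in more detail:

```python
# Illustrative simplification of the health/performance aggregation
# rules (not product code): it works on plain state strings and
# ignores the per-entity database entries behind each state.

def aggregate(states):
    """Aggregate a list of entity states into one group state."""
    unique = set(states)
    if len(unique) == 1:                      # all entities in the same state
        return next(iter(unique))
    if unique <= {"Missing", "Critical"}:     # Missing plus Critical entities
        return "Critical"
    if unique <= {"Normal", "Undefined"}:     # Normal plus Undefined only
        return "Normal"
    return "Warning"                          # every other combination
```

For example, a group of one Missing and one Critical entity aggregates to Critical, while a mix of Normal and Critical entities aggregates to Warning.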


States displayed within the tabs on top of the Topology View are aggregated for a specific high-level entity class (refer to Entity classes on page 418). For Computers and Storage, only the states of these entities are used for aggregation. For Fabric, only the switch status is used (Level 2). For Level 1 and Level 0, the states of all displayed entities are used.

9.1.8 Hovering
Hovering the mouse pointer over the status overlay icons reveals short textual descriptions of the status information, as shown in Figure 9-19.

Figure 9-19 Hovering on Status Overlays

Hovering displays performance details only where the data is actually available at that level of detail. For example, performance data is available for individual ports and switches as a whole. However, no aggregated performance data is being collected for port groups. Therefore, hovering over a port group's performance overlay icon reveals only a textual description of the overlay status. The first example in Figure 9-20 shows performance details when hovering over the performance overlay icon of a single port:

Figure 9-20 Hovering on Performance Overlays on a SAN switch

The example in Figure 9-21 on page 430 shows performance information when hovering over the performance overlay icon of the disk pool. Note that only a textual description of the overlay status is displayed.


Figure 9-21 Hovering on Performance Overlay on Disk Pools

9.1.9 Pinning
The simple selection paradigm is sufficient for most selection activities; however, it does not allow for retention of the selected entities for future reference. In the Topology Viewer, pinning entities is introduced as a way to overcome this problem. Pinning provides a method to retain a list of selected entities for future reference or quick access. Pinned entities are typically of high interest to the user, for example for monitoring purposes, and easy access to them is essential. Through pinning, the Topology Viewer can provide functionality similar to NetView SmartSets. Pinning marks entities for longer durations; the pins remain visible as you change views in the Topology Viewer. It can be used to easily refer to a small number of entities throughout a session and provides a direct path to find such items (see 9.2.2, Navigation on page 436). To pin an entity in the graphical view, open the Background Context Menu (see 9.1.4, Information Overlays on page 422) and select the Pin or Unpin menu option. In Figure 9-22 you see a pinned SVC at level S0 with Health state Missing, as indicated in the upper left of the icon. Note that in an expanded group, pinning is shown on each pinned entity.

Figure 9-22 Pinned entity

Figure 9-23 on page 431 shows the pinned entity as it is displayed in the Overview panel of the Topology Viewer. Note that in a collapsed group pinned entities are surfaced.


Figure 9-23 Pinned entity in Overview panel

As you can easily see from this example, pinning in the Topology Viewer offers an easy-to-use and powerful method to select single or multiple entities that you want to monitor.

Note: TotalStorage Productivity Center V3.1 supports only a single pin list and the pin list is not persistent. If the Topology Viewer is closed and reopened, the pins are not displayed. Future releases of the Topology Viewer might extend the pinning concept to multiple persistent pin lists and might introduce additional pinning functionality.

9.1.10 Zone and zonesets


The Topology Viewer also allows users to examine zones and zonesets. Zone information is shown in a Zone tab in the table view. The Zone tab is deactivated by default and can be turned on using the global settings dialog box of the Background Context Menu (see 9.1.4, Information Overlays on page 422). The Zone tab works similarly to the locate function: selecting a zone or zoneset in the Zone tab highlights zone and zoneset members in the graphical view. The reason behind this design is the amount of data that would need to be sent to the Topology Viewer to fully populate the Zone tab. As in all the tabs in the tabular view, the entities listed in the Zone tab are only those entities that are displayed in the current view (for example, a specific switch entity). However, zone or zoneset selections apply to all open views in the Topology Viewer. Thus, switching to another view might indicate other entities in the selected zone in that particular view. Figure 9-24 on page 432 shows the graphical view for a server (AZOV) with two HBAs connected to the SVC (ITSOSVC01).

Chapter 9. Topology viewer

431

Figure 9-24 Zoning example graphical view

The zoning information for this configuration is shown in the tabular view of the Topology Viewer, as shown in Figure 9-25.

Figure 9-25 Zoning tabular view

In our example, each server HBA port is zoned to two ports of each of the two SVC nodes. We defined two zones (HBA1_ITSOSVC01 and HBA2_ITSOSVC01) containing five ports each. In the Label column, you can see the specific port WWPNs. Note: The TotalStorage Productivity Center V3.1 Topology Viewer provides no control functions; therefore, you cannot make zone changes. The Zone tab is a pure visualization of zone membership. Details about supported zoning actions can be found in 8.9, Configuring your storage subsystems and switches on page 409. At the time of writing this book, zoning information can be gathered in-band through the Fabric Agent for the supported McData and Cisco switches and out-of-band with the IP/API interface for Brocade switches.
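The membership relation behind the Zone tab can be sketched as a simple mapping from zone names to port WWPNs. The zone names below are from our example; the WWPN values are placeholders, not our lab's actual ports.

```python
# Two zones of five ports each (one HBA port plus two ports on each SVC node),
# mirroring the structure shown in Figure 9-25. All WWPNs are placeholders.
zones = {
    "HBA1_ITSOSVC01": {
        "20000000C932A752",                      # AZOV fcs0 port (placeholder)
        "5005076801400001", "5005076801400002",  # SVC node 1 ports (placeholders)
        "5005076801400101", "5005076801400102",  # SVC node 2 ports (placeholders)
    },
    "HBA2_ITSOSVC01": {
        "20000000C932A753",                      # AZOV fcs1 port (placeholder)
        "5005076801400001", "5005076801400002",
        "5005076801400101", "5005076801400102",
    },
}

def zones_containing(zones, wwpn):
    """Return the zones that include the given port WWPN: the reverse of
    selecting a zone in the Zone tab, which highlights its members."""
    return sorted(name for name, members in zones.items() if wwpn in members)

print(zones_containing(zones, "5005076801400001"))
# ['HBA1_ITSOSVC01', 'HBA2_ITSOSVC01']
```

The shared SVC ports appear in both zones, which is exactly what makes the highlight overlay useful when zones overlap.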


9.1.11 Removing entities from the database


If an entity is in a missing state, you might want to remove it from the view and the database. To remove a missing entity from the database, perform one of the following tasks:
- In the graphical view, select the entity, right-click to open the Background Context Menu, and select Remove from Database from the context menu.
- In the tabular view, select the missing entity and select Remove from Database from the action menu in the table view.
Note: If volumes are deleted on a storage subsystem already discovered by TotalStorage Productivity Center, the Topology Viewer displays these entities with Health Status missing. They have to be explicitly removed by the user.
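The selection rule is straightforward: only entities whose health state is missing are candidates for Remove from Database. A minimal sketch, with illustrative entity names and states:

```python
# Illustrative entities with health states; only "missing" entities qualify
# for removal, as described above.
entities = [
    {"name": "AZOV",          "health": "normal"},
    {"name": "old_volume_01", "health": "missing"},
    {"name": "ITSOSVC01",     "health": "warning"},
]

def removable(entities):
    """Return names of entities eligible for Remove from Database."""
    return [e["name"] for e in entities if e["health"] == "missing"]

print(removable(entities))  # ['old_volume_01']
```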

9.1.12 Refreshing the Topology Viewer


The Refresh function either manually or automatically refreshes the contents of the views from the TotalStorage Productivity Center database. The Refresh function is accessed through the Background Context menu. Right-click anywhere on the background of the Topology Viewer's graphical view to open the Background Context menu as shown in Figure 9-26. Refresh View triggers an update of the currently displayed window with fresh data from the database. Refresh All Views updates all currently open views. Opening a new view also generates a refresh request from the TotalStorage Productivity Center database server to ensure the most current information is presented.

Figure 9-26 Topology Viewer Refresh Views

The Topology Viewer refreshes its views from the database every five minutes by default. This can be changed by right-clicking anywhere on the background of the Topology Viewer's graphical view. This opens the Background Context menu. Select Refresh Settings and update the setting based on your installation needs as shown in Figure 9-27 on page 434. This is a persistent setting.
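The refresh behavior amounts to a repeating timer with a configurable interval (300 seconds by default, matching the five-minute setting above). The sketch below shows one way to express that; it is illustrative only and performs no database I/O.

```python
import threading

class ViewRefresher:
    """Repeatedly invokes a refresh callback at a configurable interval,
    analogous to the Topology Viewer's Refresh Settings."""

    def __init__(self, refresh_fn, interval_seconds=300):  # 300 s = 5-minute default
        self.refresh_fn = refresh_fn
        self.interval = interval_seconds
        self._timer = None

    def start(self):
        self.refresh_fn()  # refresh immediately ...
        self._timer = threading.Timer(self.interval, self.start)
        self._timer.daemon = True
        self._timer.start()  # ... and schedule the next refresh

    def stop(self):
        if self._timer:
            self._timer.cancel()
```

A real implementation would also persist the chosen interval, since the dialog setting survives across sessions.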


Figure 9-27 Topology Viewer Refresh Rate Setting

9.2 Getting started


This section describes the content of the Topology Viewer, navigation between views, and how to launch it.

9.2.1 Launch the Topology Viewer


The Topology Viewer is launched from one of the five possible entry points provided by the tree view on the left side of the TotalStorage Productivity Center V3.1 user interface. Closing the Topology Viewer is done by closing the panel like any other panel of the TotalStorage Productivity Center V3.1 UI. Launch the Topology Viewer by selecting IBM TotalStorage Productivity Center → Topology as shown in Figure 9-28.

Figure 9-28 Topology Viewer Launch Points

Four child nodes are added under the Topology node:
- Computers
- Fabrics
- Storage
- Other
They provide four additional launch points for the Topology Viewer and serve as a fast track directly to one of the four entity class views the Topology Viewer provides. When you launch the Topology Viewer through one of the child nodes, it opens with two views: the selected child view at L0 is displayed, and the Overview tab is opened as well, so an additional tab is displayed in the panel. As described in 9.1.5, Layout on page 423, this view remains open as long as the Topology Viewer is active. Launching the Topology Viewer by selecting IBM TotalStorage Productivity Center → Topology shows the view in Figure 9-29 on page 435.


Figure 9-29 Topology Viewer Overview tab

On the top left corner of the Topology Viewer panel the Overview tab is displayed. Additional tabs are added as you open new views. The graphical view shows the four entity classes (Computers, Fabrics, Storage, and Other). For each class, the actual number of entities and the aggregated Health status is displayed. This overlay is activated by default. The synchronized tabular view presents the same information. On top of the tabular view you find the Action drop-down list. It provides most of the functions available through the Background Context Menu (accessed through a right-click in the graphical view). The additional Locate drop-down list allows you to search for items in the tabular view. Found items are highlighted. Tip: If you want to enjoy the Topology Viewer in premium cinemascope quality, select View from the TotalStorage Productivity Center UI and deselect the menu option tree. You can undo this by selecting tree again. On the top right corner of the Topology Viewer panel shown in Figure 9-29, the mini-map is displayed. It indicates the visible portion of the topology and facilitates user interaction to pan the visible portion (see 9.1.5, Layout on page 423). You can easily pan to the part of the topology you want to see by one of the following methods:
- Move the cursor to the mini-map, press the left mouse button, and move the cursor.
- From any point in the Topology Viewer, press the middle mouse button and move the cursor.


- From any point in the Topology Viewer, press and hold the Ctrl and Alt keys and move the cursor to navigate.
If you find it too hard to pan within the mini-map, simply resize the mini-map window and make it bigger or smaller. This reduces or increases the speed of the movement of the visible part of the graphical view. Note: TotalStorage Productivity Center V3.1 supports launching the Topology Viewer by using one of the five entry points as described above. In a future release, the Topology Viewer might also be launchable from a number of tasks such as reports.

9.2.2 Navigation
Navigating between the different views in the Topology Viewer is easy and simple. The only things you have to remember before you start are:
- There are four different entity classes (Computer, Fabric, Storage, and Other).
- There are three semantic zooming levels per class (L0: groups of similar entities, L1: a related group of entities, L2: one entity), plus the shared Overview.
- There are a total of 13 views.
The table in Figure 9-30 provides a simplified version of the different views available in the Topology Viewer. For the sake of simplicity, we omitted the entity class Other. A detailed description of this class is given in 9.1.3, Entities on page 418. The table in Figure 9-30 on page 436 shows the four entity classes (the columns) and the zooming levels. Knowing where you are is good. Knowing where you can go from there is better. The possible navigation between levels is shown by the arrows in the table. For example, you can go from the Storage view S0 directly to the detail view S2.

Figure 9-30 Topology Viewer UI navigation

Remember that pinned entities (see 9.1.9, Pinning on page 430) always offer the possibility to go directly to the specific entity view by simply clicking them. The following sections discuss the details of the single views of the Topology Viewer. We added a little navigator icon (see Figure 9-31 on page 437) to specific views only to help you


remember quickly which of the views is presented. Within this icon, we used the same terminology as in Figure 9-30 on page 436 (for example, F2 for entity class Fabric, view L2). Remember that this navigator icon is added only for educational purposes and is not part of the TotalStorage Productivity Center GUI.

Figure 9-31 Navigator icon
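The view space described above (four entity classes, three zoom levels each, plus the shared Overview) can be enumerated to confirm the count of 13 views. The navigation rule encoded here is a simplification of Figure 9-30, not the exact arrow set.

```python
classes = ["Computers", "Fabrics", "Storage", "Other"]
levels = ["L0", "L1", "L2"]

# One shared Overview plus three levels for each of the four classes:
views = ["Overview"] + [f"{c}-{lvl}" for c in classes for lvl in levels]
print(len(views))  # 13

def can_navigate(src, dst):
    """Simplified rule from Figure 9-30: the Overview reaches every view, and
    within one entity class any zoom level is reachable (for example, S0 to S2)."""
    return src == "Overview" or src.split("-")[0] == dst.split("-")[0]

print(can_navigate("Storage-L0", "Storage-L2"))    # True
print(can_navigate("Storage-L0", "Computers-L2"))  # False
```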

9.2.3 Computer
This section describes the levels within the Computer view.

L0: Computers
This is the default view upon entering the graphical view by selecting Computers in the navigation menu. The view renders all known computer groups. Entities always belong to some group, so there are no ungrouped entities. The grouping criteria can be changed by the user (see 9.1.3, Entities on page 418). The example in Figure 9-32 shows the computer entities grouped by Health status, which is the default. Again, note that the navigator icon is not part of the Topology Viewer, but was added by us to help you identify where you are within the Topology Viewer.

Figure 9-32 Computers L0
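The default L0 grouping shown in Figure 9-32 is an aggregation by health state. A minimal sketch, using illustrative computer names and states:

```python
from collections import defaultdict

# Illustrative computers with health states; the default L0 view groups by health.
computers = [
    ("AZOV", "normal"), ("COLORADO", "normal"),
    ("GALLIUM", "warning"), ("HELIUM", "missing"),
]

def group_by_health(computers):
    """Bucket computers by health state, as the default grouping does."""
    groups = defaultdict(list)
    for name, health in computers:
        groups[health].append(name)
    return dict(groups)

print(group_by_health(computers))
# {'normal': ['AZOV', 'COLORADO'], 'warning': ['GALLIUM'], 'missing': ['HELIUM']}
```

Switching the grouping criteria (see 9.1.3) would simply key the buckets on a different attribute.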


L1: Computers
When the user navigates to the Computers L1 view, the view shows one Computers group and all known and related environmental entities for that Computers group. The Computers group can be used to get an overview. Related entities are displayed as collapsed groups by default, linked to the Computers group (see Figure 9-33). Physical connections are shown as a Fabric group, linked to the lower right. Logical connections to volumes are shown as a Device group, linked to the group towards the upper right. The Device group shows all related volumes. The Fabric group shows all connected switches and fabrics. More details can be shown by expanding the groups (see Figure 9-34 on page 439).

Figure 9-33 Computers L1 (C1)

Computers L1 tabular view


The tabular view presents additional information about switches, volumes, and connections. The Subsystem tab is used to provide information about a subsystem that is directly connected to a server, that is, connected without a switch or fabric. By expanding the two groups Device and Fabric, more details become visible. Connections can be displayed to get information about which volumes are owned by a computer and how it is connected to the fabric. With TotalStorage Productivity Center V3.1, displaying connections or relationships between computers and a storage subsystem is not supported. How to determine the storage subsystem that provides the volumes for a specific computer is shown in Figure 9-34 on page 439.


Figure 9-34 Computer L1 (C1) Devices

By selecting a specific computer, the connections tell you immediately which volumes on which disk subsystem are owned by this computer. More details about the volumes can be found in the tabular view by clicking the Volume tab. For more information about the volume attributes and where the volumes are in the specific subsystem, go to view S2 (L2: Storage Subsystem Disk on page 447). Selecting a single computer entity starts the more detailed L2: Computer view.

L2: Computer
When the user navigates to the Computer L2 view, the view shows an individual computer with its HBAs and ports, and all known and related environmental entities for that computer. The individual computer is the focus of this view and is displayed vertically in the middle of the view, left justified (see Figure 9-35 on page 440). As in the L1: Computers view, two groups are linked to the computer, containing Fabric and Device groups. The computer's HBAs and ports are shown when expanding the Fabric group. LUNs and Volumes for that computer are contained in the Device group.


Figure 9-35 Computer L2 (C2) Overview

Expanding the Device group reveals the relationship between this computer's LUNs and the volumes on the storage system providing them. The tabular view shows the size of the single volumes (see Figure 9-36 on page 441).


Figure 9-36 Computer L2 (C2) Devices

The Fabric group contains a group with the computer's HBAs (which can be expanded to explore individual ports for dual-port HBAs) and a group of connected switches, if applicable. Figure 9-37 on page 442 shows an example of an expanded Fabric group:


Figure 9-37 Computer L2 (C2) Fabrics

9.2.4 Fabric
The goal of this graphical view navigation class is to provide the user with fabric and switch information. A fabric is a network of entities that are attached through one or more switches. In addition, TotalStorage Productivity Center supports the display of Virtual SANs (VSANs) and Logical SANs (LSANs). Although SANs, VSANs, and LSANs are implemented and managed differently depending on your environment, they are conceptualized in the same manner through the Topology Viewer. Because of the conceptual similarities, they are grouped under the larger category of Fabrics in the Topology Viewer for ease in categorization and visualization. In general, the semantic zoom levels within this class allow you to view fabric components, view relationships and connections between switches and the entities of the fabrics, and view details of switches.

L0: Fabric
This is the default view upon entering the graphical view through selection of Fabric in the navigation menu. The view shows all known Fabrics and Virtual Fabrics as groups (Fabrics being physically networked entities rather than being defined by fabric or zone relationships). Each group represents one Fabric. Depending on what is discovered in the environment, there can be a mix of Fabrics displayed at this level.


Figure 9-38 Fabric-L0 (F0)

L1: Fabric
At the L1: Fabric level, both Fabrics and Virtual Fabrics are represented in the same manner. Computer groups and Other groups are shown on the left of the switches, vertically stacked and left-aligned. By default, computer groups are shown collapsed. Storage groups are shown on the right of the switches, vertically stacked and left-aligned. By default, storage groups are shown collapsed. Groups of tape libraries are stacked below subsystem groups. Groups of Other type entities are stacked below computer groups. Connections between the single groups, or, if expanded, between the entities can be displayed. Figure 9-39 on page 444 shows the L1: Fabric view with all groups expanded. The Other group shows an entity with a user-defined name of Helium. At this point, no type has been assigned.


Figure 9-39 Fabric-L1 (F1)

L2: Fabric (Switch)


At this level the focus is on one switch. When the user navigates to the L2 level, the view shows an individual switch, its ports, and all known and related environmental entities. Entities connected to the switch are shown as groups under the switch. By default, these groups are collapsed. The ordering of the groups is determined by entity types, with Computers on the left, followed by Switches, followed by Storage entities (first Subsystems and then Tape Libraries), and last by Others. Depending on the selected grouping algorithm, each of these groups might be split into several groups in this view. Connections between the single groups, or, if expanded, between the entities can be displayed. Figure 9-40 on page 445 shows the L2 Fabric view, with an expanded Storage Subsystem group showing the two fabric connections of a DS4500 and the eight connections of a 2-node SVC cluster to the switch. The Other group shows an entity with a user-defined name of Helium, but no type has been assigned so far.


Figure 9-40 Fabric-L2 (F2) Switch

9.2.5 Storage Subsystem


The goal of this graphical view navigation class is to provide the user with storage information. In general, the semantic zoom levels within this class allow the user to view subsystems and tape libraries, view relationships and connections between subsystems or tape libraries and the nearby entities within the environment, and view details of subsystems and tape libraries. Note that both disk subsystems and tape libraries are considered to be storage and are therefore shown in this group.

L0: Storage Subsystem


This is the default view upon entering the graphical view via selection of Storage in the navigation menu. The view will render all known subsystem and tape library groups as shown in Figure 9-41 on page 446.


Figure 9-41 Storage-L0 (S0) Disk

As this example shows, additional information is available in the tabular view of the panel.

L1: Storage Subsystem Disk


When you navigate to the Subsystems L1 level, the view renders a subsystems group and all known and related environmental entities for that subsystems group. Related entities are displayed as a group called Fabric linked to the lower right of the subsystems group as shown in Figure 9-42.

Figure 9-42 Storage-L1 (S1) Disk


L2: Storage Subsystem Disk


When you navigate to the Subsystems L2 view, the view shows an individual subsystem, its DAs and ports, and all known and related environmental entities for that subsystem. The subsystem's DAs and ports are shown as being a part of the subsystem's Fabric group, which is collapsed by default. The Fabric group also contains related entities, shown as initially collapsed groups. Disks, Pools, and Volumes of the subsystem are shown as a Device group, stacked above the subsystem. Figure 9-43 shows these groups in collapsed state for a 2-node SVC cluster.

Figure 9-43 Storage-L2 (S2) Disk

Expanding the Device group allows users to explore disk, pool, volume, and computer relationships and aids in understanding LUN masking and mapping. The following example covers this. The expanded Device group is shown in two figures. Figure 9-44 on page 448 shows the disk, pool, and volume relationships; in SVC terminology, the managed disk to managed disk group to vdisk relationship.


Figure 9-44 Storage L2 Disk, Pools, Volumes

Figure 9-45 shows the LUN masking and mapping aspects, the volume to LUN relationship; in SVC terminology, how the vdisks are mapped to hosts.

Figure 9-45 Storage L2 Volumes, LUNs


The Fabric group covers the fabric connectivity of the storage entity. Figure 9-46 shows the eight connections of a 2-node SVC cluster to our lab switch.

Figure 9-46 Storage L2 Fabric

L1: Storage Subsystem Tape


When the user navigates to the Tape Libraries L1 zoom level, the view shows a Tape Libraries group and all known and related environmental entities for that group. The view layout is the same as that for the L1: Storage Subsystem Disk expanded and collapsed views. Related entities are displayed as a group called Fabric, linked to the lower right of the tape libraries group.

L2: Storage Subsystem Tape


When the user navigates to the Tape Library L2 level, the view shows an individual tape library, its DAs, ports and media changers, and all known and related environmental entities for that tape library. The tape library's drives and ports will be shown as being a part of the tape library's Fabric group, which is collapsed by default. The Fabric group also contains related entities, shown as initially collapsed groups. Media changers of the tape library are shown as a Media group, stacked above the tape library as shown in Figure 9-47 on page 450.


Figure 9-47 Storage-L2 (S2) Tape

9.3 Printing
The TotalStorage Productivity Center Topology Viewer provides print functionality. Right-clicking the graphical view background or the tabular view opens the Topology Viewer's action menu. Selecting Print or Print Preview generates a one-page printout or preview of the content currently displayed in the graphical view part of the Topology Viewer. The content is scaled to fit one page if necessary. Printing the content of the tabular view is not supported with the current release of the Topology Viewer. Future versions of the Topology Viewer might provide additional print options, such as printing the contents of individual tabular view tabs and printing on multiple pages.

9.4 Summary
The Topology Viewer provides users with options to view their topology based on their work and in context with the fabric environment. Four primary entry points are provided: an overview, which displays the whole environment in a summary form, and three views, each displaying a Topology L0 view focused on the SAN as a whole, on computers within the SAN, or on storage subsystems within the SAN.


Unlike its counterparts, the TotalStorage Productivity Center V3.1 Topology Viewer employs a design approach that supports the concept of progressive information disclosure, primarily to manage the scale and complexity common in storage environments today. This concept is supported by allowing users to use semantic zoom levels and overlays in support of key work path scenarios. The overall goal of the Topology Viewer is to provide a central location to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation to the environment. The Topology Viewer is part of TotalStorage Productivity Center V3.1 and relies on the TotalStorage Productivity Center database as a central data repository. To be able to display information about the entities, the information has to be collected by the specific methods TotalStorage Productivity Center provides: discovery, probing, performance data collection, and alerting. To present an up-to-date view of the current environment, the Topology Viewer refreshes its views at intervals you can define.



Chapter 10. Managing and monitoring your storage subsystem


This chapter contains several scenarios for managing your storage infrastructure using TotalStorage Productivity Center. These hands-on scenarios were developed in our lab environment as described in 8.1, Infrastructure summary on page 340.
- Case study 1 describes the steps necessary to add a new server to an existing storage environment managed via TPC 3.1 and to provide new storage volumes to it. We use one Windows and one AIX computer throughout this example.
- Case study 2 takes you through how to detect and alert on unwanted files.
- Case study 3 describes the steps necessary to set up policy management to detect and act on a filesystem threshold-exceeded condition. This function is limited to AIX computers using a JFS filesystem.


10.1 Case study 1: adding new servers and storage


In this case study, we show how the following two new servers are integrated into our storage environment. For each server, we assign two volumes, do the zoning, and set up the monitoring. The environment we used is described here:
- AZOV: AIX 5.3, 2 HBAs, multipath driver SDDPCM 2.1.0.8 (MPIO). TPC agents installed: Data Agent and Fabric Agent.
- COLORADO: W2003 Server Standard Edition, multipath driver SDD 1.6.0.7-1. TPC agents installed: Data Agent and Fabric Agent.
Storage is provided by a two-node SVC cluster:
- ITSOSVC01, Version 3.1.0.2
- Master Console, Version 3.1.0.542
Managed disks for this SVC are provided by an IBM DS4500 disk subsystem. The SAN fabric in our example consists of a single switch, an IBM 2005 B32, Version 5.1.0b. The two servers, the SVC cluster, and the DS4500 are connected to this switch. The procedure used to develop this case study is described in the following sections:
- Server tasks lists the prerequisites on the server side: installing the agents and the multipath driver.
- Storage provisioning describes how the volumes are created and mapped to the host on the storage subsystem, and shows how the zoning for the new server is done with TPC.
- Storage monitoring shows the options and possibilities TPC offers to monitor the newly assigned storage.

10.1.1 Server tasks


At this point, we have installed all required software on the servers: the Data Agents, the Fabric Agents, and the appropriate multipath device driver. We have connected the servers to the switch. We can see the Data and Fabric Agents on these servers in the TPC Navigation Tree. 1. Select Administrative Services → Agents, as shown in Figure 10-1 on page 455.


Figure 10-1 Data and Fabric Agents

2. Probe the new computers by selecting them from the default probe under IBM TotalStorage Productivity Center → Monitoring → Probes → tpcadmin Probe Computers, as shown in Figure 10-2.

Figure 10-2 Probe restricted to just two computers


3. After the probe is submitted, check the results. Select the corresponding Job entry and drill down into the log details (see Figure 10-3).

Figure 10-3 Probing servers

4. The following screen captures display the status of the two servers (AZOV and COLORADO) before the disk volumes are added. Open the Topology Viewer; the computers in our lab are shown as in Figure 10-4.

Figure 10-4 Environment overview

The two new servers are displayed in the ITSO group, connected to the switch IBM_2005_B32. On the right-hand side, the two subsystems, including their connection to the SAN, are displayed. 5. Displaying server AZOV in the Topology Viewer (see Figure 10-5 on page 457) shows no SAN devices, zero LUNs, and zero volumes. On the SAN or fabric side, we can see the two single-port HBAs (fcs0, fcs1) with their specific WWPNs and the connections to switch IBM_2005_B32.


Figure 10-5 AZOV Topology view

6. Using a command line on server AZOV, we see only the three internal disks when we issue the command shown in Figure 10-6.

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
#

Figure 10-6 Output of lsdev command to display AZOV's internal disks

7. The output of the lsdev command to show the adapters is displayed in Figure 10-7.

# lsdev -Cc adapter -H
name status    location description
ent0 Available 1L-08    10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available 14-08    10/100 Mbps Ethernet PCI Adapter II (1410ff01)
fcs0 Available 1Z-08    FC Adapter
fcs1 Available 1D-08    FC Adapter

Figure 10-7 Output of lsdev command to display AZOV's adapters

8. The output of the lscfg command to display the WWPN of adapter fcs1 is shown in Figure 10-8 on page 458. This is necessary because we must use the WWPN later during the mapping of the host to subsystem ports. Make a note of which WWPN is to be used.


# lscfg -vl fcs1
  fcs1   U0.1-P2-I5/Q1   FC Adapter
....
  Device Specific.(Z7)........07433951
  Device Specific.(Z8)........20000000C932A753
  Device Specific.(Z9)........CS3.91A1
  Device Specific.(ZA)........C1D3.91A1
  Device Specific.(ZB)........C2D3.91A1
  Device Specific.(YL)........U0.1-P2-I5/Q1
#

Figure 10-8 Output of lscfg command to display AZOV's WWPNs
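When several adapters are involved, picking the WWPN out of lscfg output by eye is error prone. The sketch below pulls the Z8 field from text like Figure 10-8; the sample string is abbreviated from that figure, and treating Z8 as the WWPN follows the usage in this example (field meanings can vary by adapter).

```python
import re

# Abbreviated sample, mirroring the lscfg -vl fcs1 output in Figure 10-8.
lscfg_output = """\
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A753
Device Specific.(Z9)........CS3.91A1
"""

def wwpn_from_lscfg(text):
    """Extract the 16-hex-digit Z8 value from lscfg -vl output."""
    m = re.search(r"Device Specific\.\(Z8\)\.+([0-9A-F]{16})", text)
    return m.group(1) if m else None

print(wwpn_from_lscfg(lscfg_output))  # 20000000C932A753
```

A small script like this could loop over all fcs adapters when noting down the WWPNs for host-port mapping.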

9. The output of the pcmpath command in Figure 10-9 shows no path from AZOV to any devices in the SAN.

# pcmpath query device
#

Figure 10-9 Output of pcmpath command to display AZOV's disks seen by the SDD

10. Figure 10-10 and Figure 10-11 on page 459 show the same information for the server Colorado. Colorado also has no external disks assigned.

Figure 10-10 COLORADO Topology Viewer


Figure 10-11 COLORADO server view

10.1.2 Storage provisioning and zoning on Colorado


To work with storage and zoning on the Colorado server, follow these steps: 1. In the Explorer view, navigate to the Disk Manager → Storage Subsystems entry. You can see a list of the subsystems, which should look similar to Figure 10-12.

Figure 10-12 Select the storage subsystem to provide you the storage

2. In our case, we select the SVC-2145 subsystem, then click Create Virtual Disk. The Create Virtual Disk Wizard in Figure 10-13 on page 460 opens.


3. Enter the number of Vdisks to be created, in our case two (2), and the prefix of the name for the vdisks. The system will add the suffix of the name.

Figure 10-13 Enter the vdisk parameters

4. After you have entered all parameters, click the Next button. The Assign virtual disk to host ports panel of the wizard displays, as in Figure 10-14 on page 461.


Figure 10-14 Select the host port

5. After you have selected the host port for Colorado from the Available ports list and moved it into the right column named Assigned ports, click the Next button. The third panel of the wizard, shown in Figure 10-15 on page 462, opens. 6. Enter a name for the new zone, then select the ports in the storage subsystem that will be contained in the new zone you create. Click Next after entering the zone name. Important: If you encounter errors while creating the zone definitions, one reason might be that you have chosen a name that already exists somewhere in the alias list of the switch configuration. All names must be unique.


Figure 10-15 Select the ports of the storage subsystem

7. The Summary panel shown in Figure 10-16 on page 463 lists the settings you entered in the previous panels. IBM TotalStorage Productivity Center creates two VDisks and performs the necessary zoning steps to have these VDisks assigned to our server Colorado. Click Finish to continue.


Figure 10-16 Summary panel of the Wizard

8. The Create Virtual Disk wizard starts a virtual-disk-creation job, as indicated in Figure 10-17. The job log is shown in Figure 10-18 on page 464 and Figure 10-19 on page 464.

Figure 10-17 Finishing the Wizard will spawn two jobs


LUN-Definition Job Log


3/2/06 4:47:12 PM BTACS0000I Starting Control Process: createSVCDisks, Device Server RUN ID=1147, Job ID=1478.
3/2/06 4:47:12 PM HWN021675I Started creation of volume with size 2147483648 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM
3/2/06 4:47:16 PM HWN021676I Volume creation completed successfully. New volume COLORADO created with size 2147483648 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM.
3/2/06 4:47:16 PM BTACS0001I Finished Control Process: Device Server RUN ID=1147, Job ID=1478, Status=1, Return Code=0.

Figure 10-18 TPC log output after LUN creation is done on Colorado - Part 1 of 2

Port Mapping Job Log


3/2/06 4:47:18 PM BTACS0000I Starting Control Process: assignStorageVolumesToWWPNs, Device Server RUN ID=1173, Job ID=1504.
3/2/06 4:47:20 PM HWN021678I Started assignment of volume COLORADO on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B06C50B.
3/2/06 4:47:30 PM HWN021679I Finished assignment of volume COLORADO on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B06C50B.
3/2/06 4:47:30 PM BTACS0001I Finished Control Process: Device Server RUN ID=1173, Job ID=1504, Status=1, Return Code=0.

Figure 10-19 TPC log output after LUN to hostport assignment is done on Colorado - Part 2 of 2

9. We zoned the new disks manually at the switch in this case. Through the Disk Management view on Colorado in Figure 10-20 you can see that there are new disks.

Figure 10-20 New disks detected and available


10.1.3 Storage provisioning and zoning on AZOV (002-AZOV)


To define the new disks, follow these steps:
1. Start again by selecting the storage subsystem, in this case the SVC. The same VDisk creation wizard panels open (see Figure 10-21) as shown previously in 10.1.2, Storage provisioning and zoning on Colorado on page 459.

Figure 10-21 Create Virtual Disk Wizard - First Panel

On this panel, enter the parameters for the server AZOV. We chose two VDisks, each 4 GB in size. For the name, we specified only the prefix part. Click Next to proceed to the panel where you select the host ports to be used, shown in Figure 10-22 on page 466.
2. Note that AZOV has two host ports. After you have selected the host ports, click Next.


Figure 10-22 Assign the virtual disk to host ports panel

3. Select the appropriate Subsystem ports of the SVC as shown in Figure 10-23 on page 467. Click Next to proceed.


Figure 10-23 Assign the host ports to the subsystem ports

4. The next panel is the Summary panel displaying the information that has been entered (see Figure 10-24 on page 468). Review all the information you entered so far. When everything is correct, click Finish.


Figure 10-24 The summary screen of the Wizard

5. Note that TPC again starts two jobs. You can navigate to the Jobs node under the Disk Manager node in the explorer view (see Figure 10-25 and Figure 10-26) to locate and open the job logs.

Figure 10-25 The Wizard is finished

Figure 10-26 Jobs are displayed after finishing the wizard.


Figure 10-27 and Figure 10-28 show the output from the jobs created by the Create Virtual Disk Wizard.

C:\Program Files\IBM\TPC\device\log>type msg.control.1046.1339.log
3/1/06 5:42:10 PM BTACS0000I Starting Control Process: createSVCDisks, Device Server RUN ID=1046, Job ID=1339.
3/1/06 5:42:11 PM HWN021675I Started creation of volume with size 524288000 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM
3/1/06 5:42:14 PM HWN021676I Volume creation completed successfully. New volume TPC_CREATED created with size 524288000 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM.
3/1/06 5:42:14 PM BTACS0001I Finished Control Process: Device Server RUN ID=1046, Job ID=1339, Status=1, Return Code=0.

Figure 10-27 TPC's job control output - Create the VDisks

3/1/06 5:42:17 PM BTACS0000I Starting Control Process: assignStorageVolumesToWWPNs, Device Server RUN ID=1071, Job ID=1364.
3/1/06 5:42:18 PM HWN021678I Started assignment of volume TPC_CREATED on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B1A5996.
3/1/06 5:42:28 PM HWN021679I Finished assignment of volume TPC_CREATED on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B1A5996.
3/1/06 5:42:28 PM BTACS0001I Finished Control Process: Device Server RUN ID=1071, Job ID=1364, Status=1, Return Code=0.

Figure 10-28 TPC's job control output - Assign the host to the subsystem ports

In our scenarios, we have divided the steps of LUN creation, assignment, and zoning. In our lab, we did the zoning as a separate step. In 10.1.5, Manual zone configuration with TPC on page 473, we describe how to perform the zoning manually with TPC.

Verifying the results on AZOV


Go to a command line on the AZOV computer and issue the command cfgmgr, followed by the lsdev -Cc disk command. These commands produce the result shown in Figure 10-29.

hdisk0  Available  1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1  Available  1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2  Available  1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3  Available  1Z-08-01       SAN Volume Controller MPIO Device
hdisk4  Available  1Z-08-01       SAN Volume Controller MPIO Device

Figure 10-29 AZOV now has two more disks

A subsequent query using the command pcmpath query device produces the output shown in Figure 10-30 on page 470.


DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 60050768018300C4180000000000001A
==========================================================================
Path#      Adapter/Path Name   State   Mode    Select  Errors
    0      fscsi0/path0        CLOSE   NORMAL  0       0
    1      fscsi0/path1        CLOSE   NORMAL  0       0
    2      fscsi0/path2        CLOSE   NORMAL  0       0
    3      fscsi0/path3        CLOSE   NORMAL  0       0
    4      fscsi1/path4        CLOSE   NORMAL  0       0
    5      fscsi1/path5        CLOSE   NORMAL  0       0
    6      fscsi1/path6        CLOSE   NORMAL  0       0
    7      fscsi1/path7        CLOSE   NORMAL  0       0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 60050768018300C4180000000000001B
==========================================================================
Path#      Adapter/Path Name   State   Mode    Select  Errors
    0      fscsi0/path0        CLOSE   NORMAL  0       0
    1      fscsi0/path1        CLOSE   NORMAL  0       0
    2      fscsi0/path2        CLOSE   NORMAL  0       0
    3      fscsi0/path3        CLOSE   NORMAL  0       0
    4      fscsi1/path4        CLOSE   NORMAL  0       0
    5      fscsi1/path5        CLOSE   NORMAL  0       0
    6      fscsi1/path6        CLOSE   NORMAL  0       0
    7      fscsi1/path7        CLOSE   NORMAL  0       0

Figure 10-30 Output of pcmpath query device

These devices also showed up through SMIT, and we were able to create a volume group and a logical volume, and finally put a filesystem on them, as shown in Figure 10-31.

# lsvg -l SVC-DISKS
SVC-DISKS:
LV NAME       TYPE     LPs  PPs  PVs  LV STATE    MOUNT POINT
SVC-DISKS-LV  jfs2     1    1    1    open/syncd  /svc-storage
loglv00       jfs2log  1    1    1    open/syncd  N/A

Figure 10-31 The two new disks are in use by AZOV

10.1.4 Post activities for Colorado and AZOV


Probe the servers Colorado and AZOV again, using a selective probe to focus only on those servers. After the probe completes, you see the VDisks as resources of both computers. The screen captures in Figure 10-32 on page 471 and Figure 10-33 on page 472 show the Topology Viewer of the servers and their new volumes.


Figure 10-32 Colorado has its new volumes


Figure 10-33 AZOV has its new volumes

Figure 10-34 on page 473 shows the TotalStorage Productivity Center GUI navigation tree (Data Manager → Reporting) with the Colorado and AZOV servers and their disks. This information corresponds to the graphical display of the Topology Viewer in Figure 10-32 on page 471 and Figure 10-33.


Figure 10-34 Both computers and their new assets in the Assets View of the Explorer

The last screen capture in this sequence, in Figure 10-35, shows how the VDisks are presented within the SAN Volume Controller.

Figure 10-35 The SVC has four new VDisks created

10.1.5 Manual zone configuration with TPC


In the following sequence of screen captures, we show how to use TPC to modify an existing zone set so that it includes a new zone. This scenario was captured while defining the new zone for AZOV's external disks.
1. To start with zone configuration, navigate to Fabric Manager → Fabrics in the Navigation Tree of the TotalStorage Productivity Center GUI. The Fabric panel is displayed as shown in Figure 10-36 on page 474. Click Zone Configuration.


Figure 10-36 Entry panel for Zone Configuration

2. The panel in Figure 10-37 shows a tab with the name of the fabric, followed by the word Zoning. On this panel, you see the existing zones. Select the appropriate zone, then click Change.

Figure 10-37 The panel named for your fabric is displayed.

3. The window shown in Figure 10-38 on page 475 is displayed. Here you see a list of the zones contained in the active zoneset. Click Add to start the definition of the new zone.


Figure 10-38 The active zone-definition panel

4. Enter a name for the new zone and provide a description of the zone in the panel shown in Figure 10-39 on page 476. Click Next to proceed.


Figure 10-39 Zone Configuration - consists of multiple parts

5. On the screen in Figure 10-40, you must expand the Aliases section on the left-hand side to see the elements in your SAN.

Figure 10-40 Zone configuration Aliases


6. All elements are displayed by their aliases only, as shown in Figure 10-41. Make the selections necessary to include your host adapters and the subsystem adapters in the zone. The shaded aliases are the ones we selected. Click Next after you have made your selections.

Figure 10-41 All aliases have been selected

7. In the next two screens, TPC asks to which zone set the newly defined zone should belong. We have only one in our environment, as shown in Figure 10-42 on page 478.


Figure 10-42 Available Zone Sets

8. Select the zone set as shown in Figure 10-43. Now that all the information for the new zone has been entered, it can be written into the existing zone set. Click Finish to proceed.

Figure 10-43 The Zone Set is selected


The next screen that is displayed (see Figure 10-44) indicates that the zone set is being saved.

Figure 10-44 Write Operation in progress.

9. Now that the new zone is added, you have the following options (see Figure 10-45 on page 480):
- Click Update Only to write the updates to the zone set only. There will be differences between the active zone set and the one with which you are working.
- Click Update and Activate to have the changes written and activated in one step.


Figure 10-45 The new Zone is added to the list

10. Either action results in the panel shown in Figure 10-46. Open both sides of the view by clicking the plus (+) sign.

Figure 10-46 The displayed panel after Activation or Update of a Zone Set

The view expands and you can compare the content of the active and non-active zone sets, as shown in Figure 10-47 on page 481.


Figure 10-47 Zone Sets and their content.

In this case there are no differences, which indicates that the last action performed was an Update and Activate.

10.2 Case Study 2: Detect and alert on unwanted files


In this section, we describe the steps necessary to detect and alert on unwanted files. First, we set up a scan and a report to collect data about a specific filesystem. The purpose is to have a basis we can use to test and validate the ongoing process. After we have tested our report, we are able to detect the unwanted files. The next step is to set up the conditions that make up our usage-constraint policy. With this policy in place, TotalStorage Productivity Center for Data can raise an alert to report that there are unwanted files in the filesystem. The last step is the backup-archive operation, including the deletion of these unwanted files.

10.2.1 Prerequisite steps


In this section, we describe a few prerequisites that must be in place before you create the procedures for detecting and alerting on unwanted files.
1. The first prerequisite is that your Data agents are up and running. Figure 10-48 on page 482 shows the agents in our environment.


Figure 10-48 Data Agents must be up and running

2. The next important checkpoint is to complete a probe on all your agents. Figure 10-49 shows a probe running. When the probe completes successfully, the icon turns green.

Figure 10-49 Probe is running


Modify the report profile to be used


Reminder: Each report you run in TotalStorage Productivity Center can report only the amount and quality of data that preceding scans have collected in the database. With this in mind, check the settings of your scan job and the profile it uses. To modify the profile, navigate to Data Manager → Monitoring → Profiles in the TotalStorage Productivity Center GUI. We selected the TPCUser.Summary By File Type profile and entered 1000 in the field for the number of file types using the most space. This enables the scan to deliver more than the default number of file types into the database.

Figure 10-50 Modify the profile to be used

10.2.2 Create a Scan targeted to the specific filesystem


Here we show how TotalStorage Productivity Center lets you define your own job, scan, or report definitions. You start with the predefined definitions, modify them, and save them as your own customized set. First, define a target for the next operations. In our case, we decided to use the computer named Colorado and the filesystem on its C: drive. The next step is to create a customized scan job that is focused on this filesystem only.
1. Navigate to Data Manager → Monitoring → Scans and right-click. Select Create Scan as shown in Figure 10-51 on page 484.


Figure 10-51 Create a new Scan definition

The Create Scan window is displayed (see Figure 10-52 on page 485).


Figure 10-52 The Create Scan dialog

2. Select the filesystem that you want to scan. In our case, we select only the C: drive on Colorado. Expand the tree in the right-hand pane to select the filesystem, as shown in Figure 10-53.

Figure 10-53 Select the desired filesystem and give a description


Reminder: Each report you run in TotalStorage Productivity Center can report only the amount and quality of data that preceding scans have collected in the database. Keeping this in mind, check the settings of your scan job and its profiles. Select the appropriate profile to complete the scan definition by clicking the Profiles tab and selecting only the profile named TPCUser.Summary By File Type (Figure 10-54). The name of the profile indicates its intended use.

Figure 10-54 Select a profile to use in your SCAN

In our example, we modified this profile earlier, because otherwise it would limit the number of file types displayed and reported. Keep in mind that it might be necessary to increase this number again.
3. Enter a description and save the scan definition. It can be helpful to use the same text for both the description and the name of the scan.
4. Note that TPC auto-submits your scan. After a few minutes, you can check the results; you might have to refresh the job list. Verify that the scan job was successful, as shown in Figure 10-55 on page 487.


Figure 10-55 Save, submit, and check the results

10.2.3 Create a customized report for the scan results


Now we need a customized report to get a detailed view of the filesystem's data. Again, start with the predefined report section of the explorer view.
1. Navigate to the position shown in Figure 10-56 on page 488 in the TotalStorage Productivity Center GUI.


Figure 10-56 Select the report as base for your own report

In the screen in Figure 10-56, adjust the maximum number of returned rows per file type, because for this report it cannot be adjusted later. For better readability, we split this picture. After editing this value, save your report definition and give it a meaningful name.
2. In addition, define the selection criteria to include only the filesystems or computers you want in this report. To do this, click the Selection button in the upper right-hand corner. In the split screen in Figure 10-57 on page 489, the file type selection dialog box is in the upper left corner; do not change any entries there. In the lower right corner is the filesystem selection dialog box; change the entries to select C: on Colorado.
Note: The file types displayed in this list are the ones detected during each scan. If there are any new file types in your filesystem, this list grows automatically.


Figure 10-57 Select C: filesystem on Colorado

3. When you save your report, give it a meaningful name, as shown in Figure 10-58.

Figure 10-58 Save the customized report

4. Run your customized report to see what kind and amount of data is displayed. Navigate to your reports (tpcadmins.Reports), then click Generate Report as shown in Figure 10-59 on page 490. The result is presented immediately, because the report is generated from the information in the database. If you are going to do further tests with this report, always rescan your filesystem first.


Figure 10-59 Run your customized report

5. Check the results of your report. In our case, we want to make sure that certain files (for example, *.mp3 and *.mpg) are detected. Scroll down the list as shown in Figure 10-60 to see them.

Figure 10-60 Results of the customized report

Here you can see the entries for the mp3 and mpg files. If you cannot see the files you are looking for, the number you set in the Number of Rows field in your report might be too small. Correct this by repeating the steps described in 10.2.1, Prerequisite steps on page 481 and increasing the number of rows.

10.2.4 Set up the constraint definition


Now that you have a sample setup for specific scans and reports on the filesystem for this test case, it is time to define the constraint conditions.
1. Navigate to the structure shown in Figure 10-61 on page 491 in the explorer view. Select the Constraints entry and right-click to create a new one.


Figure 10-61 Create a new Constraint definition

2. The Create Constraint window opens (Figure 10-62). Define the scope of the constraint definition by selecting the filesystem, in our case C: on Colorado.

Figure 10-62 Define the scope of the constraint definition - part1

Important: In general, all of the possible selections that can be made within the Create Constraint dialog box are combined with a logical OR operation. Pay attention to this when you define the scope of your constraint.


Figure 10-63 Define the scope of the constraint definition - part 2

3. Select the File Types tab to choose the file types by which the constraint should be defined. In our case, we chose jpg and mp3 files in Figure 10-64.

Figure 10-64 Define the scope of the constraint definition - part 3


You can easily add your own patterns and have them added to the right-hand selection box as shown in Figure 10-65.

Figure 10-65 Define the scope of the constraint definition - part 4

4. You also have to define the users for whom the constraint definition is valid. Select only the TPCADMIN user. The screen in Figure 10-66 summarizes your input: it displays your choices of file types and users within one single filter statement named File Filter Text.

Figure 10-66 Select a user the constraint should be defined for.

5. In the last screen (see Figure 10-67 on page 494), you can add two definitions of how and where the alert should be raised. The first, the Triggering Condition, specifies that the alert is raised only if the filtered files together consume more than 1 megabyte. The second, the Triggered Actions, determines what happens when the alert condition occurs. We chose to write an entry of type Warning to the Windows event log.
6. Give the constraint definition a description and a name, and save it.


Figure 10-67 Give description and save it
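Before relying on the scan, you can approximate the triggering condition from a command line to predict whether an alert will be raised. The following shell sketch is our own illustration, not part of TPC: the function name is hypothetical, it checks only the top-level directory (a TPC scan also covers subdirectories), and it mirrors the mp3/jpg patterns and 1 MB threshold of our constraint.

```shell
# check_constraint: sum the sizes of files matching the constraint's
# patterns in a directory and compare against the 1 MB triggering threshold.
check_constraint() {
    dir=$1
    threshold=1048576        # 1 megabyte, as in our Triggering Condition
    total=0
    for f in "$dir"/*.mp3 "$dir"/*.jpg; do
        [ -f "$f" ] || continue          # skip unmatched glob patterns
        size=$(wc -c < "$f")
        total=$((total + size))
    done
    if [ "$total" -gt "$threshold" ]; then
        echo "violation: $total bytes"
    else
        echo "ok: $total bytes"
    fi
}

# Example call (the path is a placeholder): check_constraint /c/data
```

If this prints a violation line, the next scan of the same directory should raise the constraint alert.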

10.2.5 Rescan the filesystem to see if there is a violation


Use the predefined scan to rescan the filesystem. Because the constraint violation is now defined, you will see results in at least two locations.
1. The first location is the alert log. Check the alert log section in the explorer view; navigate to the section shown in Figure 10-68 on page 495.


Figure 10-68 Select the Alerting / Alert Log / Filesystem portion

2. When you click Filesystem, which is highlighted in red to indicate that there are alerts to display, you see on the right-hand side a screen similar to Figure 10-69.

Figure 10-69 Filesystem Alerts because of a constraint violation are being displayed.

3. You can drill further into the information about these alerts by clicking the magnifying glass icon. Figure 10-70 on page 496 shows one example of the alert details.


Figure 10-70 Detailed Information about a constraint violation

4. Now that you know that there are constraint violations, you might want to know exactly which files triggered them. If you are interested in the exact name, size, and location of these files, you must look into the constraint violations report. Navigate to Data Manager → Reporting → Usage Violations → Constraint Violations → By Filesystem in the explorer view. On the right, you see the two buttons Selection and Filter. Use the Selection button to display a list of all the computers, and select only the entries for Colorado and its filesystems. The result should look similar to Figure 10-71 on page 497.


Figure 10-71 Select only the rows of interest

5. In Figure 10-72, you can select any of the displayed entries to get detailed information about the files that caused the constraint violations.

Figure 10-72 Detailed information about the files causing the constraint violation


10.2.6 Activate Archive/Delete Operation


To enable an action to be performed against the identified files, you must enable one or more of the Triggered Actions in the dialog boxes shown in Figure 10-73 and Figure 10-74 on page 499.

Figure 10-73 Select either action or a combination

If you use a script to perform an action on the identified files, name the script in the dialog box shown in Figure 10-74 on page 499. You must create and store a script with this name in the /scripts directory of the TPC server.


Figure 10-74 Two possible options: TSM Archive then delete or a script

The scripts are either stored on the server (transferred to the clients at run time and deleted afterwards) or stored locally on the agent computer. The locations are shown in Table 10-1.

Table 10-1 Script location
Computer          Location of the scripts (can vary if you used other installation directories)
TPC Server        C:\Program Files\IBM\TPC\data\scripts
TPC Data Agent    C:\Program Files\Tivoli\ep\subagents\TPC\Data\scripts

In either directory, you can find sample scripts.
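As a sketch of what such a script might look like, the following logs and deletes the offending files. It is our own illustration, made under assumptions: we assume the script receives the violating file names as command-line arguments (check the sample scripts in the directories above for the exact interface your TPC level uses), the function name and log path are ours, and the TSM archive step is shown only as a comment.

```shell
#!/bin/sh
# archive_and_delete: hypothetical triggered-action script that logs,
# (optionally) archives, and deletes each file passed as an argument.
archive_and_delete() {
    log=${CONSTRAINT_LOG:-/tmp/constraint_action.log}
    for f in "$@"; do
        [ -f "$f" ] || continue      # skip names that are not regular files
        # A real archive step would go here, for example with the TSM client:
        #   dsmc archive "$f" -deletefiles
        echo "archived and deleted $f" >> "$log"
        rm -f "$f"                   # delete the offending file
    done
}

archive_and_delete "$@"
```

Logging before deleting gives you an audit trail of what the triggered action removed.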

10.3 Case Study 3: Policy management


In this section, we describe the steps necessary to set up policy management to detect and act on a filesystem threshold-exceeded condition. This function is limited to AIX computers using a JFS filesystem.


10.3.1 The steps to perform


This function can be demonstrated only on an AIX server with a JFS filesystem. In our lab environment, we chose the AIX server AZOV, prepared a volume group on local disks, and created a JFS-type logical volume in it.
1. Locate the Filesystem Extension node in the explorer tree, shown in Figure 10-75. Then select the AIX server and the filesystem that is eligible for extension.

Figure 10-75 FilesystemExtension Dialog: selected AZOV and /tpc_data

2. Proceed with the definition of the filesystem extension setup. The next three panels, Figure 10-76 on page 501 through Figure 10-78 on page 503, show the entries we made. We chose an extension of 5 percent, with the extension to happen when the filesystem reaches a usage level of 75 percent, because we specified 25 percent free space (see Figure 10-76 on page 501).


Figure 10-76 Definition of FilesystemExtension Criteria - part one of three

3. We selected to enforce this policy after each scan of the filesystem (see Figure 10-77 on page 502). Doing it this way is only one possibility. However, keep in mind that the policy and its execution is bound to the data found in the database, which, in turn, is refreshed and updated only after a new scan.


Figure 10-77 Definition of FilesystemExtension Criteria - part two of three

4. When you click the Alert tab, you can see the possible actions that can be triggered when a filesystem extension takes place, as shown in Figure 10-78 on page 503. We did not make any selections. Finally, we saved our definitions. Note that the definition appears in the explorer tree view under the node on which we are working.


Figure 10-78 Definition of FilesystemExtension Criteria: part three of three

10.3.2 Create a Scan and a Report targeted to a specific filesystem


To scan and report on the data in this specific filesystem, we followed these steps:
1. We created both a scan and a report dedicated to AZOV and the filesystem /tpc_data. Only the results of the scan and report are shown, in Figure 10-79 on page 504.


Figure 10-79 A scan of AZOV's filesystem before the filesystem was filled

2. We checked the situation from the AIX perspective and queried the volume group (see Figure 10-80). Note that 201 PPs are used.

# lsvg tpc_data
VOLUME GROUP:   tpc_data                VG IDENTIFIER:  0009cd9a00004c0000000109b2c77f62
VG STATE:       active                  PP SIZE:        128 megabyte(s)
VG PERMISSION:  read/write              TOTAL PPs:      270 (34560 megabytes)
MAX LVs:        256                     FREE PPs:       69 (8832 megabytes)
LVs:            2                       USED PPs:       201 (25728 megabytes)
OPEN LVs:       2                       QUORUM:         2
TOTAL PVs:      2                       VG DESCRIPTORS: 3
STALE PVs:      0                       STALE PPs:      0
ACTIVE PVs:     2                       AUTO ON:        yes
MAX PPs per VG: 32512                   MAX PVs:        32
MAX PPs per PV: 1016                    AUTO SYNC:      no
LTG size (Dynamic): 256 kilobyte(s)     BB POLICY:      relocatable
HOT SPARE:      no

Figure 10-80 AIX volume group usage before filesystem extension

3. We started to fill up the filesystem by creating large files until usage exceeded the defined threshold (> 75%). Figure 10-81 on page 505 shows the filesystem contents afterwards.


-rw-r--r--   1 root  sys    1073741312 Mar 01 14:28 dummyfile
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:29 dummyfile1
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:29 dummyfile2
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:30 dummyfile3
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:31 dummyfile4
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:32 dummyfile5
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:32 dummyfile6
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:33 dummyfile7
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:38 dummyfile8
-rw-r--r--   1 root  sys    1073741312 Mar 01 14:38 dummyfile9
drwxrwx---   2 root  system        512 Feb 28 14:36 lost+found
# df -k
Filesystem        1024-blocks      Free %Used  Iused %Iused Mounted on
/dev/tpc_data_lv     13107200   2209588   84%     27     1% /tpc_data

Figure 10-81 Filesystem filled - threshold reached
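The dummy files shown in Figure 10-81 can be produced with a small loop such as the following sketch. The function name and parameters are ours, not part of TPC; in our lab the call would be fill_fs /tpc_data 10 1024 (ten files of 1024 MB each).

```shell
# fill_fs: create NFILES dummy files of FILE_MB megabytes each in TARGET,
# then show the resulting filesystem usage.
fill_fs() {
    target=$1
    nfiles=$2
    file_mb=$3
    i=0
    while [ "$i" -lt "$nfiles" ]; do
        # write file_mb megabytes of zeros per file
        dd if=/dev/zero of="$target/dummyfile$i" bs=1048576 count="$file_mb" 2>/dev/null
        i=$((i + 1))
    done
    df -k "$target"      # check whether the threshold has been passed
}

# In our lab: fill_fs /tpc_data 10 1024
```

The closing df -k call lets you confirm immediately whether the usage level has crossed the 75 percent threshold.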

4. To demonstrate the function of TotalStorage Productivity Center for Data in principle, we ran a report against our filesystem (see Figure 10-82).

Figure 10-82 Filesystem populated - report based on old scan data

5. Rerun the defined scan to update the information in the database, then rerun the report. The scan you just performed leads to two results:
- The data in the database now shows the near real-time situation in the filesystem.
- The scan triggered the filesystem extension. The easiest way to verify this is to look at the AIX lsvg information, where you can see that 211 PPs are now used (see Figure 10-83 on page 506).


# lsvg -L tpc_data
VOLUME GROUP:   tpc_data                VG IDENTIFIER:  0009cd9a00004c0000000109b2c77f62
VG STATE:       active                  PP SIZE:        128 megabyte(s)
VG PERMISSION:  read/write              TOTAL PPs:      270 (34560 megabytes)
MAX LVs:        256                     FREE PPs:       59 (7552 megabytes)
LVs:            2                       USED PPs:       211 (27008 megabytes)
OPEN LVs:       2                       QUORUM:         2
TOTAL PVs:      2                       VG DESCRIPTORS: 3
STALE PVs:      0                       STALE PPs:      0
ACTIVE PVs:     2                       AUTO ON:        yes
MAX PPs per VG: 32512                   MAX PVs:        32
MAX PPs per PV: 1016                    AUTO SYNC:      no
LTG size (Dynamic): 256 kilobyte(s)     BB POLICY:      relocatable
HOT SPARE:      no
# df -k
/dev/tpc_data_lv  15073280  1799692  73%  21  1% /tpc_data

Figure 10-83 The filesystem is now extended

10.3.3 Define an alert for a filesystem threshold condition (optional)


If you define a filesystem alert for this kind of situation, as shown in the following figures, you will see detailed information about when and why the extension happened.
1. To define a filesystem alert, go to the appropriate node in the Navigation Tree view, as shown in Figure 10-84.

Figure 10-84 Start to define a Filesystem Alert.

2. Right-click and select Create Alert. The screen shown in Figure 10-85 on page 507 is displayed.
3. Select the desired filesystem by first going to the Filesystems tab.


Figure 10-85 Select the filesystem alert selecting the filesystem

4. After you have selected the filesystem, select the Alert tab, where you can define your conditions for this alert, as shown in Figure 10-86. Ensure that the Enabled check box is marked in the upper right-hand corner of the Alert tab screen.

Figure 10-86 Define the conditions for this alert.

Figure 10-87 on page 508 shows a filesystem alert entry.


Figure 10-87 The alert shows up and you can see detailed information about the cause.

This completes this exercise.


Chapter 11. Hints, tips and good to knows


This chapter provides useful information about the components of IBM TotalStorage Productivity Center, such as:
- Selecting an SMS or DMS tablespace
- DB2 installation known issues
- How to get rid of the Engenio provider 5989 port
- DB2 view for the CIMOM that discovered each subsystem
- Valid characters for user IDs and passwords
- How to change the timeout value
- Common Agent and Agent Manager documentation
- SLP configuration recommendation
- Tivoli Common Agent Services
- Verifying if a port is in use

Copyright IBM Corp. 2006. All rights reserved.


11.1 Selecting an SMS or DMS tablespace


There are a number of trade-offs to consider when determining which type of tablespace you should use to store your data. A tablespace can be managed using either system managed space (SMS) or database managed space (DMS). For an SMS tablespace, each container is a directory in the file space of the operating system, and the operating system's file manager controls the storage space. For a DMS tablespace, each container is either a fixed-size pre-allocated file or a physical device such as a disk, and the database manager controls the storage space.

Tables containing user data exist in regular table spaces. The system catalog tables exist in a regular tablespace. Tables containing long field data or large object data, such as multimedia objects, exist in large table spaces or in regular table spaces. The base column data for these columns is stored in a regular tablespace, while the long field or large object data can be stored in the same regular tablespace or in a specified large tablespace. Indexes can be stored in regular table spaces or large table spaces.

Temporary table spaces are classified as either system or user. System temporary table spaces are used to store internal temporary data required during SQL operations such as sorting, reorganizing tables, creating indexes, and joining tables. Although you can create any number of system temporary table spaces, it is recommended that you create only one, using the page size that the majority of your tables use. User temporary table spaces are used to store declared global temporary tables that hold application temporary data. User temporary table spaces are not created by default at database creation time.

Advantages of an SMS tablespace


These are some of the advantages of an SMS tablespace:
- Space is not allocated by the system until it is required.
- Creating a database requires less initial work, because you do not have to predefine containers.

A container is a physical storage device and is assigned to a tablespace. A single tablespace can span many containers, but each container can belong to only one tablespace.

Advantages of a DMS tablespace


These are the advantages of a DMS tablespace:
- The size of a tablespace can be increased by adding containers. Existing data is rebalanced automatically across the new set of containers to retain optimal I/O efficiency.
- A table can be split across multiple table spaces, based on the type of data being stored: long field data, indexes, or regular table data. You might want to separate your table data for performance reasons, or to increase the amount of data stored for a table. For example, you could have a table with 64 GB of regular table data, 64 GB of index data, and 2 TB of long data. If you are using 8 KB pages, the table data and the index data can be as much as 128 GB. If you are using 16 KB pages, it can be as much as 256 GB. If you are using 32 KB pages, the table data and the index data can be as much as 512 GB.
- The location of the data on the disk can be controlled, if this is allowed by the operating system.


- If all table data is in a single tablespace, a tablespace can be dropped and redefined with less overhead than dropping and redefining a table.
- In general, a well-tuned set of DMS table spaces will outperform SMS table spaces.

In general, small personal databases are easiest to manage with SMS table spaces. On the other hand, for large, growing databases, you will probably want to use SMS table spaces only for the temporary table spaces, and separate DMS table spaces, with multiple containers, for each table. In addition, you will probably want to store long field data and indexes in their own table spaces.

11.2 DB2 installation known issues


If you update DB2 from an older version, for example from DB2 7.2 to DB2 8.2, the TPC installer does not recognize the DB2 version. The workaround is:
1. Execute the following command in a DB2 command environment (on Windows 2003 you can use db2cmd):

db2 update dbm cfg using JDK_PATH <DB2_InstallLocation>\java\jdk

For example:

db2 update dbm cfg using JDK_PATH "C:\Program Files\IBM\SQLLIB\java\jdk"

2. Start TPC V3.1 installer.

11.3 How to get rid of Engenio provider 5989 port


In the CIMOM discovery definition, deselect the Scan Local Subnet option and specify the SLP DA to be contacted.

11.4 DB2 view for CIMOM discovered each subsystem


The following DB2 view in Figure 11-1 is used to determine which CIMOM discovered each subsystem.

create view TPC.T_ITSO_CIMOM2SS (CIMOM_URL, SUBSYSTEM_ID) as
  (select TPC.T_RES_REGISTERED_CIMOM.SERVICE_URL,
          TPC.T_RES_STORAGE_SUBSYSTEM.NAME
   from TPC.T_RES_REGISTERED_CIMOM, TPC.T_RES_STORAGE_SUBSYSTEM
   where REG_CIMOM_ID =
     (select CIMOM_ID
      from TPC.T_RES_CIMOM2NAMESPACE
      where CIM_NAMESPACE_ID =
        (select distinct CIM_NAMESPACE_ID
         from TPC.T_RES_CIMKEY_SUBSYSTEM
         where SUBSYSTEM_ID = TPC.T_RES_STORAGE_SUBSYSTEM.SUBSYSTEM_ID)))
Figure 11-1 DB2 view for CIMOM discovery of subsystems

11.5 Valid characters for user ID and passwords


This section lists possible valid characters for login sequences.


11.5.1 Typical installation


For a typical installation, user IDs and passwords can contain the following characters.

The Common User ID and Password (DB2 Administrator / User, WebSphere Application Server Administrator, Host Authentication, Common Agent Windows Service, and NAS Filer) must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: - _ .

The Common Agent Registration Password or Resource Manager Registration User ID / Password must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: ` ~ @ # % ^ & * ( ) - _ = + [ ] { } \ | ; : ' " , . < > / ?
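The character rules above can be checked mechanically before you start the installer. The following is a minimal sketch (not part of the product; the function name is ours) that validates a candidate value against the common user ID and password character set from the typical installation, that is, letters, digits, and the hyphen, underscore, and period:

```python
import string

# Allowed characters for the common user ID / password category
# (typical installation): letters, digits, hyphen, underscore, period.
COMMON_ALLOWED = set(string.ascii_letters + string.digits + "-_.")

def is_valid_common(value):
    """Return True if value is non-empty and every character is allowed."""
    return len(value) > 0 and all(ch in COMMON_ALLOWED for ch in value)
```

For example, `is_valid_common("db2admin")` passes, while a value containing a space or an exclamation mark is rejected.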

11.5.2 Custom installation


For a custom installation, use the following characters.

The DB2 Administrator User ID / Password or DB User ID / Password must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: ~ @ # % ^ & ( ) - _ { } .

The WebSphere Application Server Administrator User ID / Password or Host Authentication Password must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: - _ .

The Common Agent Windows Service User ID or NAS Filer User ID must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: ` ~ # % ^ & ( ) - _ { } ' .

The Common Agent Windows Service Password or NAS Filer Password must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: ` ~ @ # % ^ & * ( ) - _ = + [ ] { } \ | ; : ' " , . < > / ?


The Common Agent Registration Password or Resource Manager Registration User ID / Password must contain characters from the following categories:
- Uppercase characters: A through Z
- Lowercase characters: a through z
- Numeric characters: 0 through 9
- Nonalphanumeric characters: ` ~ @ # % ^ & * ( ) - _ = + [ ] { } \ | ; : ' " , . < > / ?

11.6 How to change timeout value


The timeout value in effect for the CIM client used by IBM TotalStorage Productivity Center can be configured by increasing the value of http.timeout in the table T_RES_CONFIG_DATA in the IBM TotalStorage Productivity Center database. The value is the number of milliseconds that may pass before the CIM client times out. You might have to increase the default value, depending on your environment. To change the http.timeout value, follow these steps:
1. Open the DB2 Control Center.
2. Connect to the TPCDB database (the default name for the IBM TotalStorage Productivity Center database).
3. To set the CIM client timeout value to 30 minutes (1800000 milliseconds), run the following SQL statement:

INSERT INTO "TPC"."T_RES_CONFIG_DATA" VALUES ('http.timeout', 1800000, 'CIM');

11.7 Common Agent and Agent Manager documentation


It will be necessary to gather documentation for diagnostic purposes should an error occur.

11.7.1 For an installed Common Agent


Go to the directory C:\Program Files\Tivoli\ep and run the service.bat or service.sh script to collect logs.

11.7.2 For an installed Agent Manager


In the <installation directory>\toolkit directory there is a LogCollector.readme file that describes how to collect the logs. LogCollector creates the LogCollector.zip file, which is located in the root of the Agent Manager installation tree.

11.7.3 Scripts to clean up TPC V3.1 component or complete install


It is not advisable to edit the Windows registry manually; any error could lead to unpredictable consequences. To help with this process, there is a Windows batch script that can be used to clean up an entire TPC V3.1 installation. It is available in the Cleanup.zip file.
1. Unzip the file Cleanup.zip.
2. Look for three files: sc.exe, InstallUtil.exe, and CleanUpTPC.bat.


3. sc.exe and InstallUtil.exe are required utility files; they are invoked from within the batch script. You need to invoke the batch file to do the cleanup.
4. Place all three files in one directory and invoke CleanUpTPC.bat from within that directory.

11.8 SLP configuration recommendation


Some configuration recommendations are provided to enable TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This section discusses router configuration, SLP Directory Agent configuration, and environment configuration.

Router configuration
Configure the routers in the network to enable general multicasting, or at least to allow multicasting for the SLP multicast address 239.255.255.253 on port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation.

SLP Directory Agent configuration


Configure the SLP Directory Agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the User Agent (UA). Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by Productivity Center for Disk. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no services outside it. Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed.

Configure an SLP DA by changing the configuration of the SLP Service Agent (SA) that is included as part of an existing Common Information Model (CIM) Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.

Note: The change from SA to DA does not affect the CIM Object Manager (CIMOM) service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.

Environment configuration
It can be advantageous to configure SLP DAs in the following environments:
- Where there are other non-TotalStorage Productivity Center SLP UAs that frequently perform discovery on the available services. A DA ensures that the existing SAs are not overwhelmed by too many service requests.
- Where there are many SLP SAs. A DA helps decrease the network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA. We particularly recommend configuring an SLP DA when there are more than 60 SAs that need to respond to any given multicast service request.


11.8.1 SLP registration and slptool


TotalStorage Productivity Center uses SLP discovery, which requires that all of the CIMOMs that TotalStorage Productivity Center discovers are registered using SLP. SLP can discover only CIMOMs that are registered in its IP subnet. For CIMOMs outside of the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command:
slptool register service:wbem:https://myhost.com:port

Here, myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
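The structure of such a service URL can also be checked programmatically before registration. The following is a minimal sketch (not part of TPC or slptool; the function name is ours) that splits a `service:wbem` URL into its scheme, host, and port components using the standard library:

```python
from urllib.parse import urlsplit

def parse_wbem_url(service_url):
    """Split an SLP WBEM service URL such as
    'service:wbem:https://myhost.com:5989' into (scheme, host, port)."""
    prefix = "service:wbem:"
    if not service_url.startswith(prefix):
        raise ValueError("not a service:wbem URL")
    parts = urlsplit(service_url[len(prefix):])
    return parts.scheme, parts.hostname, parts.port

print(parse_wbem_url("service:wbem:https://myhost.com:5989"))
```

This makes it easy to confirm that the host and port you intend to register (for example, 5989 for HTTPS CIM-XML) are the ones actually encoded in the URL.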

11.9 Tivoli Common Agent Services


This section provides some tips that can help you when you run into problems during the registration of the Common Agent or the Resource Manager.

11.9.1 Locations of configured user IDs


This section lists the user IDs and their locations for TotalStorage Productivity Center components.

Resource Manager
You can find the locations of the configured user ID for the Resource Manager in the files listed in Table 11-1.
Table 11-1 Resource Manager user ID and password

Component/server                 File
Tivoli Agent Manager             ...\AgentManager\config\Authorization.xml
Productivity Center for Data     ...\TPC\Data\config\ep_manager.config
Productivity Center for Fabric   ...\TPC\Fabric\manager\conf\AgentManager\config\endpoint.properties

The password of the Resource Manager is stored in the same files, but it is in a readable format only on the Tivoli Agent Manager. The Resource Manager certificate subdirectory contains a file called pwd, which holds the agent registration password that is required to open the certificate files.


Common Agent
The Common Agent does not have a user ID. Instead, it has a context name that the agent uses for communication, as shown in Table 11-2.

Table 11-2 Common Agent user ID and password

Component/server       File
Tivoli Agent Manager   ...\WebSphere\AppServer\installedApps\<server>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resources\AgentManager.properties
Common Agent           ...\Tivoli\ep\config\endpoint.properties

AgentManager.properties has the context name and the password stored in clear text, so it is easy to find the values if you did not note them during the installation. On the Common Agent, the password is encrypted and stored in the pwd file in the same directory as the certificate files, which are located in ...\Tivoli\ep\cert. If you go through the procedure of replacing the current certificates with new ones, do not forget to delete the pwd file, because it no longer matches the certificate file.
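Because AgentManager.properties and endpoint.properties are Java-style properties files, the stored values can be pulled out with a few lines of code rather than by eyeballing the file. The following is a minimal sketch (not part of the product) that reads the simple `key=value` lines used by these files; the key names you look up are whatever your files actually contain:

```python
def load_properties(path):
    """Read a Java-style .properties file into a dict.
    Handles only simple 'key=value' lines; lines starting with
    '#' or '!' are comments and are skipped."""
    props = {}
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith(("#", "!")):
                continue
            key, sep, value = line.partition("=")
            if sep:
                props[key.strip()] = value.strip()
    return props
```

For example, `load_properties(r"...\Tivoli\ep\config\endpoint.properties")` returns a dictionary you can inspect for the entries you need.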

Ikeyman.exe
On all systems that have one of the Tivoli Common Agent Services components installed, you will find a tool called ikeyman.exe. You can use this tool to open the certificate file, if you know the agent registration password. This is a quick way to verify that you still know the password that was used to lock a certificate file.

11.9.2 Tivoli Agent Manager status


There is no GUI for looking at the status and configuration of the Agent Manager. Nevertheless, there is a way to obtain information about the manager when DB2 is used as the Agent Manager's registry: open the DB2 Control Center, navigate to the tables view, and open a table as shown in Figure 11-2 on page 517.


Figure 11-2 Tivoli Agent Manager tables in DB2

The table called IP_ADDRESS (Figure 11-3) contains the IP addresses of all registered Common Agents and Resource Managers.

Figure 11-3 DB2 table with IP addresses


11.10 Verifying if a port is in use


A quick way to verify whether a port is in use, or to see whether a certain application is running (if you know its port), is to open a telnet connection to that system. Normally, when you do not specify a port with telnet, you are connected to port 23 on the target system. You can change this and try to connect to any other port by adding the port number after the target address. In the following example, we try to determine whether the Common Agent is running on a certain machine. Port 9510 is the default port of the Common Agent. From a command prompt, enter:
c:\telnet 9.1.38.104 9510

If the common agent is running, it listens for requests on that port and opens a connection. You simply see an empty screen. The common agent is not running if you see the message Connecting To 9.1.38.104...Could not open a connection to host on port 9510 : Connect failed.
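When you have many agents to verify, the same connect-and-see check can be scripted instead of run interactively. The following is a minimal sketch (not part of TPC) using a plain TCP connect; the address shown is the example machine from the text, so substitute your own:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be opened,
    False if the connection is refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 9510 is the default Common Agent port; 9.1.38.104 is the
    # example address from the text.
    print(port_open("9.1.38.104", 9510))
```

A True result corresponds to the empty telnet screen described above; False corresponds to the "Could not open a connection" message.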


Appendix A. Worksheets

This appendix contains worksheet examples that are meant for you to use during the planning and the installation of the TotalStorage Productivity Center. Decide whether you need to use them; unless you already have all or most of the information collected elsewhere, these worksheets are recommended. If the tables are too small for your handwriting, or you want to store the information in an electronic format, simply use a word processor or spreadsheet application. Alternatively, you can use our examples as a guide to create your own installation worksheets.

This appendix contains the following worksheets:
- User IDs and passwords
- Storage device information
  - IBM TotalStorage Enterprise Storage Server (ESS)
  - IBM Fibre Array Storage Technology (FAStT)
  - IBM SAN Volume Controller


User IDs and passwords


We created a table to help you record the user IDs and passwords that you will use during the installation of IBM TotalStorage Productivity Center, for reference during the installation of the components and for future add-ons and agent deployment. Use this table for planning purposes. You need one of the worksheets in the following sections for each machine where at least one of the components or agents of Productivity Center will be installed, because you might have multiple DB2 databases or logon accounts and you need to remember the IDs of each one individually.

Server information
Table A-1 contains detailed information about the servers that comprise the TotalStorage Productivity Center environment.
Table A-1 Productivity Center server

Server       Configuration information
Machine
Hostname
IP address   ____.____.____.____

In Table A-2, simply mark whether a manager or a component will be installed on this machine.
Table A-2 Managers and components installed

Manager/component                Installed (y/n)?
Productivity Center for Disk
Productivity Center for Fabric
Productivity Center for Data
Tivoli Agent Manager
DB2


User IDs and passwords for key files and installation


Use Table A-3 to note the password that you used to lock the key file.
Table A-3 Password used to lock the key files

Default key file name   Key file name   Password
agentTrust.jks

Enter the user IDs and passwords that you used during the installation in Table A-4. Depending on the selected managers and components, some of the lines are not used for this machine.

Table A-4 User IDs used on this machine

Element                                           Default/recommended user ID   Enter user ID   Enter password
DB2 User                                          db2admin (a)
DB2 Instance Owner                                db2inst1
Resource Manager                                  manager (b)
Common Agent                                      AgentMgr (b)
Common Agent                                      itcauser (b)
TotalStorage Productivity Center universal user   tpcsuid (a)
IBM WebSphere
Host Authentication

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.

Storage device information


This section contains worksheets that you can use to gather important information about the storage devices that will be managed by TotalStorage Productivity Center. You must have this information during the configuration of the Productivity Center. You need some of the information before you install the device-specific Common Information Model (CIM) Agent, because it sometimes depends on a specific code level. Determine whether there are firewalls in the IP path between the TotalStorage Productivity Center server or servers and the devices, which might not allow the necessary communication. In the first column of each table, enter as much information as possible to identify the devices later.


IBM TotalStorage Enterprise Storage Server, DS6000, DS8000


Use Table A-5 to collect the information about your storage devices. Important: Check the device support matrix for the associated CIM Agent.
Table A-5 Enterprise Storage Server

Name, location, organization | Both IP addresses | LIC level | User name | Password | CIM Agent host name and protocol


IBM DS4000
Use Table A-6 to collect the information about your DS4000 devices. Check the device support matrix before you install the CIM Agent for the correct level.
Table A-6 FAStT devices

Name, location, organization | Firmware level | IP address | CIM Agent host name and protocol


IBM SAN Volume Controller


Use Table A-7 to collect the information about your SVC devices. Keep in mind that the SVC CIMOM is part of the Master Console. For TPC to communicate with the SVC CIMOM, a valid account must be defined on the SVC.
Table A-7 SAN Volume Controller devices

Name, location, organization | Firmware level | Cluster IP address | User ID | Password | CIM Agent host name and protocol


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see "How to get IBM Redbooks" on page 526. Note that some of the documents referenced here may be available in softcopy only.
- TCP/IP Tutorial and Technical Overview, GG24-3376
- Exploring Storage Management Efficiencies and Provisioning: Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity Center with Advanced Provisioning, SG24-6373
- IBM TotalStorage SAN Volume Controller, SG24-6423

Other publications
These publications are also relevant as further information sources:
- IBM TotalStorage Productivity Center User's Guide, GC32-1775
- IBM TotalStorage Productivity Center Installation and Configuration Guide, GC32-1774
- IBM TotalStorage Enterprise Storage Server Command-Line Interface User's Guide, SC26-7494
- IBM TotalStorage Productivity Center Problem Determination Guide, GC32-1778
- IBM TotalStorage Productivity Center Messages, GC32-1776

Online resources
These Web sites and URLs are also relevant as further information sources: Tivoli software products index
http://www-306.ibm.com/software/tivoli/products/

Open Software Family


http://www.storage.ibm.com/software/index.html

Apache Software Foundation


http://www.apache.org

Engenio
http://www.engenio.com

FibreAlliance
http://www.fibrealliance.org

FibreAlliance MIB introduction


http://www.fibrealliance.org/fb/mib_intro.htm


The Static Registration File


http://www.openslp.org/doc/html/UsersGuide/SlpReg.html

Storage Networking Industry Association (SNIA)


http://www.snia.org

How to get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services


Index
A
addess command 263 addessserver command 262263 adduser command 266 administrative rights DB2 user 76 Advanced Brocade API 359 agent communication ports 206 Agent deployment unattended (silent) installation 210 agent deployment 201 Common Agent logs 228 interactive installation 203 LINUX system 214 local installation 201 log files 226 remote installation 202 run scripts 205 service 225 silent installation 201 agent install verification 223 Agent Manager 32 certificates 51 database connection 93 default password 103 default user ID 103 healthcheck utility 92 IBMCDB database 91 key file 51 registration port 9511 87 security certificate 89 server installation 103 server port 9511 103 TCP/IP ports 46 verifying the installation 92 Agent manager public communication port 87 Agent Manager installation 83 Agent Recovery Service 34, 65 Agent Registration password 89 Agent registration password 51 agent registration password 33 agent remote installation 211 agent types 196 Agent uninstallation procedures 228 agents required 196 agentTrust.jks file 33, 35, 51 aggregation rules 427 AIX installation directory 118 AIX group 124 AIX install adm group 124 administrator 124 Agent Manager 146 Agent Registration Password 154 Application Server Name for Agent Manager field 152 Common agent 181 DAS user ID 133 Data server 165 DB2 127 DB2 fenced user ID 137 DB2 fix pack 143 DB2 port 50000 168 DB2 UDB 128 Device server 171 Graphical User Interface 182 hardware prerequisites 120 Host authentication password 175 HTTP Server installer 187 httpd.conf file 192 IBMCDB database 150 installing agents 177 NAS discovery 169 ports 125 primary domain name servers 122 remote installation 119 root user 145 Schema name field 163 Security Certificate 153 Security Domain 154 shut down DB2 environment 144 SMIT 123 SNMP community 170 software prerequisites 120 su - db2inst1 143 superuser ID 124 TPCDB 162 verify DB2 installation 143 Web application server 191 Web browser 186 Web browser interface 187 AIX 
V5.3 installation 118 alert scan profile 483 unwanted files 481 alerts 398 computers, file systems, directories 399 storage subsystems, fabrics, switches 406 ARS.version field 93 ava applet 114

B
Brocade SMI Agent 304 changing ports 315 CIM-XML CPA 309 configuration files 313


connecting through TPC 315 event settings 310 installing 305 mutual authentication 309 Brocade switch 432

C
CD layout 57 Certificate Authority file 51 changeMe password 89, 103 CIM Agent 15, 2829, 35 agent code 28 client application 28 device 28 device provider 28 overview 31 CIM Browser interface 271 CIM Client 15 CIM Managed Object 15 CIM Object Manager 240 CIM Object Manager (CIMOM) 15, 28, 30, 35 CIM Provider 15 CIM request 28 CIM Server 15 CIM-compliant 28 CIMOM adding using the GUI 269 customization 240 DS Open API 242 ESS 800 242 IBM website 242 interoperability namespace 330 LIC-Level to Bundle Version 243 CIMOM (CIM Object Manager) 28, 30, 35 CIMOM alert log 328 CIMOM configuration 346 CIMOM configuration recommendations 241 CIMOM considerations 241 CIMOM discovery 30, 327 CIMOM discovery alert 347 CIMOM discovery job 347348 CIMOM registration 333 CIMOM SVC console 292 CIM-Workshop 274 Cisco 432 CISCO switch CIM server 317 Cisco switch 316, 432 Cleanup.zip file 513 client application CIM requests 28 collecting data 37 Common Agent 35, 516 demo certificates 33 subagent 33 Common Agent Registration password 206 Common Agent registration password 103 Common Agent Services Agent Manager 32 Resource Manager 32

Common HBA API 15 Common Information Model (CIM) 14 common schema 28 communication data paths 37 Computer L1 tabular view 438 Computer view 437 configured user ID locations 515 constraint Triggered Actions 493, 498 Triggering Condition 493 constraint definition 490 constraint violations 496 constraints 398 pre-defined 407 core schema 28 Custom install 57

D
DA benefit 23 Data Agent 196 Data agent install option 180 Data agent upgrade 235 data collection 360 Data Manager 53 Data Manager security issues 54 Data Server port 9549 102 Data server component 4 Data Server port 9549 205 Data user levels 54 database requirements 45 databases IBMCDB 62, 122 TPCDB 62, 122 DB2 file system ownership 159 DB2 Administration Server 76 DB2 database performance 97 DB2 database sizing 97 DB2 install 71 DB2 installation verify installation 82 DB2 log files 99 DB2 user account 76 DB2 user IDs 68 DB2 user rights 68 DB2 window services 82 db2grp1 group 125 db2level command 82 default installation directory 56 demo keys 33 demonstration certificates 153 device management 17 Device Server port 9550. 102 Device server component 4 Device Server port 9550 205 diagnostic 513 Directory Agent 18, 21, 36, 514


configuring for subnet 26, 514 discover all services (DAS) 514 Discovery 39 Distance Vector Multicast Routing Protocol 25 DMS table space 510 DNS suffix 62 domain name system 62 DS CIM Agent log files 257 verify install 260 DS CIMOM CIM Browser interface 271 restart 264 telnet command 271 DS Open API ESS considerations 260 firewall rules 261 port 5989 254 DS Open CIM Agent addessserver command 262 CIMOM.LOG File 264 configuring 260 post install tasks 259 DS8000 Extend Pools 377 DVMRP 25

Fabric L1 443 Fabric L2 444 fabrics 356 FAStT CIM Agent SLP registration 281 Fibre Channel 14 Common HBA API 15 Generic Service 14 filesystem scan 503 firewall configuration 26

G
General Parallel File System (GPFS) 54 GPFS (General Parallel File System) 54 GUI applet 107 GUI for web access 107 GUID 64 gunzip command 147

H
Health overlay health values 426 Health status 425 Health Status Overlays Topology Viewer Health Status Overlays 424 health value 426 healthcheck utility 92 Heatth status aggregation rules 425 High-Performance Parallel Interface (HIPPI) 14 HIPPI (High-Performance Parallel Interface) 14 Home Directory tab 112 Host authentication password 215 HTTP 14 httpd.conf file 192

E
EFCM Management Software 294 Engenio provider 274 arrayhosts.txt 279 providerStore 279 slptool command 281 Engenio SMI-S Provider service 280 entity 418 Entity Groups 416, 421 ESS CIM Agent addess command 263 addessserver command 263 install 249 setuser interactive tool 265 ESS CIMOM SLP registration 268 ESS CIMOM verification 267 ESS CLI install 244 verification 249 verifyconfig 268 esscli command 249 extension schema 28

I
IBM FAStT 519 IBM SMI-S Agent for Tape 318 IBMCDB database 45, 91 ICAT (Integrated Configuration Agent Technology) 28 IGMP (Internet Group Management Protocol) 25 IIS Port 8080 111 ikeyman utility 33 ikeyman.exe 516 Inband discovery 36 Information Overlays 422 infrastructure communication 345 installation check list 45 hardware 44 Internet Information Services 65 user ID privileges 50 user IDs 50 installation licenses 95 Integrated Configuration Agent Technology (ICAT) 28 Intelligent Peripheral Interface (IPI) 14

F
fabric asset and configuration data 386 customized reports 389 reports 387 Fabric Agent 196 Fabric Agents 357 Fabric agents 197


Internet Assigned Numbers Authority 24 Internet Group Management Protocol (IGMP) 25 Internet Information Server (IIS) 34 Internet Information Services 65, 108 IP address 517, 520 IP address assignment 24 IP multicasting 24 IP network 18 itcauser 207, 218 ITSO lab environment 340

nestat command 254 NETBIOS 64 netstat command 47

O
Open Shortest Path First (OSPF) 26 OS Type attribute 418 OSPF (Open Shortest Path First) 26 out of band discovery job 358 Out Of Band Fabric agents 197 outband discovery 36 Overlays tabular view 422 own subnet 514

J
Java applet 343

K
key file for Agent Manager 51

P
performance data 383 Performance Monitor 361 Performance Monitor jobs 370 Performance Overlay 427 performance status 427 PIM 26 Ping 361 Ping alert 369 ping command 266 Ping creation 368 pinning 430 PMM_Exception table 427 Policy management 499 port number 518 printing Topology Viewer 450 privileges for install user ID 50 Probe creation 361 progressive information disclosure 416 Protocol-Independent Multicast 26 provider component 30 proxy model 28 ps -ef command 226 public communication port 9513 103

L
Limited Edition 11 local agent uninstall 229 local subnet 26, 514 multicast service requests 514 log files CLI 107 Device server 107 GUI 107 LogCollector.readme file 513 logs CIMOM 355

M
management application 15 Management Information Base 17 Management Information Base (MIB) 17 McDATA interface configuring 299 Direct Connection mode 294 EFCM Proxy mode 295 TPC connection 303 McData switch 432 McData switches 292 mdisk to vdisk relationship 447 MIB compiler 17 Microsoft Windows domain 51 mini-map 424 mini-map navigation, Topology Viewer 435 MOSPF (Multicast Open Shortest Path First) 26 multicast 24 multicast group 24 individual hosts 25 multicast messages 19 Multicast Open Shortest Path First (MOSPF) 26 multicast request 21 multicast traffic 23

R
Redbooks Web site 526 Contact us xvi report constraint violation 402 reports servers and computers 393 SVC LUN mapping 380 Resource Manager 32, 515, 521 configured user ID 515 Reverse Path Forwarding 25 role based administration 343 rsTestConnection command 261 rsTestConnection.exe command 261

N
NAS discovery 169

S
Scan 361

530

IBM TotalStorage Productivity Center: The Next Generation

scan customization 486 filesystem 503 report 487 Scan creation 365 scan creation 483 Scan profiles 366 schema log files 99 schema name 97 security certificates 51 Security Certificates panel 88 Semantic Zooming 417 service accounts 52 Service Agent 19, 514 service information 19 Service Location Protocol multicast 22 Service Location Protocol (SLP) 18 service agent 18 user agent 18 service URL 20, 22 setuser command 266 setuser interactive tool 265 silent install response file 210 Simple Network Management Protocol 17 SLP active DA discovery 21 address 24 broadcast communication 23–24 CIM Agent 28 configuration recommendation 332, 514 DA configuration 514 DA considerations 331 DA discovery 21 DA functions 22 directory agent configuration 332, 514 Discovery 515 environment configuration 514 firewall configuration 26 messages 25 multicast address and port 332 multicast communication 23–24 multicast group 24 multicast messages 19 multicast request 21 multicast service request 21 passive DA discovery 22 port 427 26 port number 24, 515 registration 19 router configuration 332, 514 service agent 19 service type 19 slp.conf 334 starting 260 unicast 23, 331 unicast communication 23 user agent 19 User Datagram Protocol message 19

verify active 267 verify install 259 verifyconfig command 336 when to use DA 23, 331 SLP (Service Location Protocol) 18 SLP DA 18, 514 SLP discovery summary 515 SLP environment 18 slp.conf file 334 slptool 268 slptool command 515 SMI-S 16, 35 SMI-S initiative 242 SMS table space 510 SNIA (Storage Networking Industry Association) 15 SNIA certification 241 SNIA certified devices 16 SNIA Web site 11 SNMP 17 SNMP management application 17 station product 17 SNMP manager 17 SNMP trap 17 Standard Edition 12 startcimbrowser command 271 status propagation 425 storage device 13 inband management 14 Storage Management Initiative - Specification 16, 35 Storage Networking Industry Association (SNIA) 15 storage subsystem asset and configuration data 376 reports 380 volume information 377 Storage Subsystem view, Topology Viewer 445 storage subsystems 355 su command 143 subnet 514 query group membership 25 Subsystem Device Driver (SDD) 44 Subsystems L2 view 447 SVC CIMOM 285 console 285 register to SLP DA 292 SVC console account 286 TPC console account 286 verification 292 SVC Performance Monitor 370 SVC performance reporting 384 SVC zoning configuration 410 switch Performance Monitor 374

T
table space considerations 163 tape library 356 reports 391 Tape SMI-S Agent configuring 326 tar command 147 TCP/IP ports 46 telnet command 271, 292 timeout value 513 Tivoli Agent Manager 34, 200 Tivoli Common Agent Service 515 Tivoli Common Agent Services 32 Topology Viewer Action drop down 435 aggregated states 427 Computer L2 view 439 Computer view 437 Computers L1 view 438 Device Group 438 entity classes 418 entity connections 424 Entity Groups 416 Fabric group 438 Fabric view 442 Graphical View 423 health status 425 hovering 429 Information Overlays 422 introduction 8 launch 434 launch point 423 mini-map 424 navigation 436 Other entity class 419 pinning 430 printing 450 progressive information disclosure 416 refresh function 433 refresh view rate 433 removing entities 433 Semantic Zooming 417 status propagation 425 Tabular View 424 Tape Libraries 449 Unknown state 419 User Defined Properties 418 zones 431 zoom levels 417 Total Storage Productivity Center function in V3.1 8 TotalStorage Productivity Center Navigation Tree 344 agent infrastructure 200 basics 36 CIMOM configuration 346 component install 94 Data server 4 database requirements 45 Device server 4 first steps sequence 345 function summary 9 license 95 overview 2

performance considerations 332 role based administration 343 server 521 services 344 starting the GUI 342 structure 2 tape library support 9 zone creation 473 TotalStorage Productivity Center (TPC) 519 TotalStorage Productivity Center for Data 5 Data agent 6 Data manager 6 communication 36 Web server 6 TotalStorage Productivity Center for Disk 6 performance functions 7 TotalStorage Productivity Center for Fabric 7 communication 36 Fabric manager 7 TotalStorage Productivity Center GUI 110 TPCDB database 45 trap 17 troubleshooting cimom.log file 270 TSRMsrv1 user ID 106 Typical install 56 typical install 74

U
Uniform Resource Locator (URL) 19 Unknown health value 426 update zone configuration 479 upgrade.zip files 235 User Agent (UA) 18–20, 514 SLP User Agent interactions 21 User Datagram Protocol (UDP) 19 message 19 User Defined Properties 418 user ID 515, 519, 521 user interface 343 User Rights Assignments 50

V
verify ESS CLI 249 verifyconfig command 268–269, 292, 336 Virtual Disk creation 410

W
WBEM (Web-Based Enterprise Management) 14 WBEM architecture 15 WBEM browser 271 WBEM initiative 14 Web browser 186 Web server 6 Web-Based Enterprise Management (WBEM) 14 WebSphere Application Server IP address 88 Windows Services 51


worksheets DS4000 523 key file 521 SAN Volume Controller (SVC) 524 servers 520 storage devices 522

X
XML 30 xmlCIM 14 X-Windows display 128

Z
zone creation 473 zones 431 zoning information 432



Back cover

IBM TotalStorage Productivity Center: The Next Generation


Effectively use the IBM TotalStorage Productivity Center Efficiently manage your storage subsystems using one interface Easily customize reports for your environment
IBM TotalStorage Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. TotalStorage Productivity Center V3.1 is a rewrite of previous versions, and this IBM Redbook shows you how to access its functions as compared to previous releases. This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center. It provides an overview of the product components and functions, describes the required hardware and software environment, and provides a step-by-step installation procedure. Customization and usage hints and tips are also provided.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7194-00 ISBN 0738495948
