Mary Lovelace, Tom Conway, Werner Eggli, Marta Greselin, Hartmut Harder, Stefan Lein, Massimo Mastrorilli
ibm.com/redbooks
International Technical Support Organization

IBM TotalStorage Productivity Center: The Next Generation

September 2006
SG24-7194-00
Note: Before using this information and the product it supports, read the information in Notices on page xi.
First Edition (September 2006) This edition applies to Version 3, Release 1, of IBM TotalStorage Productivity Center (product number 5608-VC0).
Copyright International Business Machines Corporation 2006. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . xi
Trademarks . . . xii
Preface . . . xiii
The team that wrote this redbook . . . xiii
Become a published author . . . xvi
Comments welcome . . . xvi
Chapter 1. Introduction to IBM TotalStorage Productivity Center . . . 1
1.1 What is IBM TotalStorage Productivity Center? . . . 2
1.1.1 TotalStorage Productivity Center structure . . . 2
1.1.2 IBM TotalStorage Productivity Center components . . . 5
1.2 What is new in IBM TotalStorage Productivity Center V3.1 . . . 8
1.2.1 Changes between TPC V3.1 and V2 . . . 11
1.3 Licensing . . . 11
1.3.1 IBM TotalStorage Productivity Center Limited Edition . . . 11
1.3.2 IBM TotalStorage Productivity Center Standard Edition . . . 12
1.3.3 IBM TotalStorage Productivity Center for Replication . . . 12
Chapter 2. Key concepts . . . 13
2.1 Standards used in IBM TotalStorage Productivity Center . . . 14
2.1.1 ANSI standards . . . 14
2.1.2 Web-Based Enterprise Management . . . 14
2.1.3 Storage Networking Industry Association . . . 15
2.1.4 Simple Network Management Protocol . . . 17
2.1.5 Fibre Alliance MIB . . . 18
2.2 Service Location Protocol overview . . . 18
2.2.1 SLP architecture . . . 18
2.2.2 SLP communication . . . 23
2.2.3 Configuration recommendations . . . 25
2.3 Common Information Model . . . 28
2.4 Component interaction . . . 30
2.4.1 CIMOM discovery with SLP . . . 30
2.4.2 How CIM Agent works . . . 31
2.5 Tivoli Common Agent Services . . . 32
2.5.1 Tivoli Agent Manager . . . 34
2.5.2 Common Agent . . . 35
2.6 Communication in TotalStorage Productivity Center . . . 35
2.7 IBM TotalStorage Productivity Center basics . . . 36
2.7.1 Collecting data in general . . . 37
2.7.2 Role-based Administration . . . 40
Chapter 3. Installation planning and considerations . . . 43
3.1 Configuration . . . 44
3.2 Installation prerequisites . . . 44
3.2.1 Hardware . . . 44
3.2.2 Database . . . 45
3.3 Preinstallation check list . . . 45
3.3.1 TCP/IP ports used . . . 46
3.4 User IDs and security . . . 49
3.4.1 User IDs . . . 50
3.4.2 Increasing user security . . . 50
3.4.3 Certificates and key files . . . 51
3.4.4 Services and service accounts . . . 51
3.5 Starting and stopping the managers . . . 52
3.6 Server recommendations . . . 52
3.7 Supported subsystems, devices, filesystems, databases . . . 53
3.7.1 Storage subsystem support . . . 53
3.7.2 Tape library support . . . 53
3.7.3 File system support . . . 53
3.7.4 Network File System support . . . 54
3.7.5 Database support . . . 54
3.8 Security considerations . . . 54
Chapter 4. TotalStorage Productivity Center installation on Windows 2003 . . . 55
4.1 TotalStorage Productivity Center installation . . . 56
4.1.1 Typical installation . . . 56
4.1.2 Custom installation . . . 57
4.1.3 CD layout and components . . . 57
4.2 Configuration . . . 57
4.2.1 One-server environment . . . 58
4.2.2 Two-server environment . . . 58
4.3 Hardware prerequisites . . . 58
4.3.1 Hardware . . . 58
4.3.2 Disk space . . . 58
4.4 Software prerequisites . . . 59
4.4.1 Databases supported . . . 61
4.5 Preinstallation steps for Windows . . . 62
4.5.1 Verify primary domain name systems . . . 62
4.5.2 Activate NetBIOS settings . . . 64
4.5.3 Internet Information Services . . . 65
4.5.4 Create Windows user ID to install Device server and Data server . . . 65
4.5.5 User IDs and password to be used and defined . . . 66
4.6 DB2 install for Windows . . . 71
4.6.1 Agent Manager installation for Windows . . . 83
4.7 Install TotalStorage Productivity Center components . . . 94
4.7.1 Verifying installation . . . 99
4.7.2 Installing Data and Device Servers, GUI, and CLI . . . 100
4.8 Configuring the GUI for Web Access under Windows 2003 . . . 107
4.8.1 Installing Internet Information Services (IIS) . . . 108
4.8.2 Configuring IIS for the TotalStorage Productivity Center GUI . . . 110
4.8.3 Launch the TotalStorage Productivity Center GUI . . . 113
Chapter 5. TotalStorage Productivity Center installation on AIX
5.1 TotalStorage Productivity Center installation
5.1.1 Typical installation
5.1.2 Custom installation
5.1.3 CD layout and components
5.2 Configuration
5.3 Hardware Prerequisites
5.4 Software Prerequisites
5.4.1 Databases supported
5.5 Preinstallation steps for AIX . . . 122
5.5.1 Verify primary domain name servers . . . 122
5.5.2 User IDs, passwords, and groups . . . 123
5.5.3 Create the TotalStorage Productivity Center user ID and group . . . 124
5.5.4 Creating and sizing file systems and logical volumes . . . 125
5.5.5 Verify port availability . . . 125
5.6 DB2 installation for AIX . . . 127
5.6.1 Accessing the installation media with CD . . . 127
5.6.2 Accessing the installation media with a downloaded image . . . 127
5.6.3 Preparing the display . . . 127
5.6.4 Beginning the installation . . . 128
5.6.5 Verifying the DB2 installation . . . 143
5.6.6 Removing the CD from the server . . . 143
5.7 Installing the DB2 fix pack . . . 143
5.7.1 Obtaining and installing the latest DB2 fix pack . . . 143
5.7.2 Add the root user to the DB2 instance group . . . 145
5.8 Agent Manager installation for AIX . . . 146
5.8.1 Accessing the installation media using CD . . . 146
5.8.2 Accessing the installation media using a downloaded image . . . 146
5.8.3 Preparing the display . . . 147
5.8.4 Beginning the installation . . . 147
5.8.5 Removing the CD from the server . . . 157
5.9 Installing IBM TotalStorage Productivity Center for AIX . . . 157
5.9.1 Order of component installation . . . 157
5.9.2 Accessing the installation media with a CD . . . 158
5.9.3 Accessing the installation media with a downloaded image . . . 158
5.9.4 Preparing the display . . . 158
5.9.5 Sourcing the environment . . . 159
5.9.6 Assigning file system ownership . . . 159
5.9.7 Installing the database schema . . . 159
5.9.8 Installing Data server . . . 165
5.9.9 Installing Device server . . . 171
5.9.10 Installing agents . . . 177
5.9.11 Installing the Java Graphical User and the Command Line Interface . . . 182
5.10 Installing the user interface for access with a Web browser . . . 186
5.10.1 Distributing the Graphical User Interface with a Web browser . . . 187
Chapter 6. Agent deployment . . . 195
6.1 Functional overview: Which agents do I need? . . . 196
6.1.1 Types of agents . . . 196
6.1.2 TotalStorage Productivity Center component use of agents . . . 197
6.2 Agent infrastructure overview . . . 200
6.3 Agent deployment options . . . 201
6.3.1 Local installation . . . 201
6.3.2 Remote installation . . . 202
6.4 Local installation of Data and Fabric Agents . . . 202
6.4.1 Interactive installation . . . 203
6.4.2 Unattended (silent) installation . . . 210
6.5 Remote installation of Data and Fabric Agents . . . 211
6.5.1 Preparing the remote installation . . . 212
6.5.2 Performing the remote installation . . . 214
6.6 Verifying the installation . . . 223
6.6.1 Logfiles . . . 226
6.7 Uninstalling Data and Fabric Agent . . . 228
6.7.1 Remote uninstallation . . . 228
6.7.2 Local uninstallation . . . 229
6.8 Upgrading the Data Agent . . . 235
Chapter 7. CIMOM installation and customization . . . 239
7.1 Introduction . . . 240
7.2 Planning considerations for CIMOM . . . 240
7.2.1 CIMOM configuration recommendations . . . 241
7.3 SNIA certification . . . 241
7.4 Installing CIM Agent for ESS 800/DS6000/DS8000 . . . 242
7.4.1 CIM Agent and LIC level relationship for DS8000 . . . 243
7.4.2 CIM Agent and LIC level relationship for DS6000 . . . 244
7.4.3 ESS CLI Installation . . . 244
7.4.4 DS CIM Agent install . . . 249
7.4.5 Post-installation tasks . . . 259
7.4.6 Configuring the DS CIM Agent for Windows . . . 260
7.4.7 Restart the CIMOM . . . 264
7.4.8 CIMOM user authentication . . . 265
7.5 Verifying connection to the storage subsystems . . . 266
7.5.1 Adding your CIMOM to the TotalStorage Productivity Center GUI . . . 269
7.5.2 Problem determination . . . 270
7.5.3 Confirming that ESS CIMOM is available . . . 271
7.5.4 Start the CIM Browser . . . 271
7.6 Installing CIM agent for IBM DS4000 family . . . 274
7.6.1 Registering DS4000 CIM agent if SLP-DA is in place . . . 281
7.6.2 Verifying and managing CIMOM availability . . . 281
7.7 Configuring CIMOM for SAN Volume Controller . . . 285
7.7.1 Adding the SVC TotalStorage Productivity Center for Disk user account . . . 286
7.7.2 Registering the SAN Volume Controller host in SLP . . . 292
7.8 Configuring CIMOM for McData switches . . . 292
7.8.1 Planning the installation . . . 293
7.8.2 Supported configurations . . . 294
7.8.3 Installing SMI-S Interface . . . 296
7.8.4 Configuring the SMI-S interface . . . 299
7.8.5 Verifying the connection with TPC . . . 303
7.9 Configuring the CIM Agent for Brocade Switches . . . 304
7.9.1 Planning the installation . . . 304
7.9.2 Installing the CIM Agent . . . 305
7.9.3 SLP installation . . . 315
7.9.4 Changing the ports used in the CIM Agent . . . 315
7.9.5 Connecting the CIM Agent with TPC . . . 315
7.10 Configuring the CIM Agent for Cisco switches . . . 316
7.10.1 Enabling and configuring the CIM Agent . . . 317
7.10.2 Connecting the CIM Agent with TPC . . . 317
7.11 Configuring the CIM Agent for IBM Tape Libraries . . . 318
7.11.1 Prerequisites . . . 318
7.11.2 SMI-S Agent for Tape Installation . . . 318
7.11.3 Configuring the SMI-S Agent . . . 326
7.12 Discovering endpoint devices . . . 327
7.13 Verifying and managing CIMOMs . . . 328
7.14 Interoperability namespace summary table . . . 330
7.15 Planning considerations for SLP . . . 331
7.15.1 Considerations for using SLP DA . . . 331
7.15.2 SLP configuration recommendation . . . 332
7.15.3 General performance guidelines . . . 332
7.16 CIMOM registration . . . 333
7.16.1 Manual method to add a CIMOM . . . 333
7.16.2 Automated method to add CIMOMs . . . 333
7.16.3 Configuring TotalStorage Productivity Center for SLP discovery . . . 335
7.16.4 Registering the CIM Agent to SLP-DA . . . 336
7.16.5 Creating slp.reg file . . . 336
Chapter 8. Getting Started with TotalStorage Productivity Center . . . 339
8.1 Infrastructure summary . . . 340
8.2 TotalStorage Productivity Center function overview . . . 341
8.3 First steps . . . 342
8.3.1 Starting the GUI . . . 342
8.3.2 Logging on . . . 343
8.3.3 GUI basics . . . 343
8.4 Initial configuration . . . 344
8.4.1 Configuring CIMOMs . . . 346
8.4.2 Verifying Data and Fabric Agents . . . 357
8.4.3 Configuring out-of-band fabric connectivity (SNMP/API) . . . 358
8.5 Collecting data about your infrastructure . . . 360
8.5.1 Creating Probes . . . 361
8.5.2 Creating Scans . . . 365
8.5.3 Creating Pings . . . 368
8.5.4 Creating Performance Monitors . . . 370
8.6 Retrieving and displaying data about your Infrastructure . . . 376
8.6.1 Viewing data about your storage subsystems . . . 376
8.6.2 Viewing data about your fabrics . . . 386
8.7 Viewing data about your tape libraries . . . 391
8.7.1 Viewing data about your computers and file systems . . . 393
8.8 Alerting . . . 398
8.9 Configuring your storage subsystems and switches . . . 409
Chapter 9. Topology viewer . . . 415
9.1 Design principles and concepts . . . 416
9.1.1 Progressive information disclosure . . . 416
9.1.2 Semantic Zooming . . . 417
9.1.3 Entities . . . 418
9.1.4 Information Overlays . . . 422
9.1.5 Layout . . . 423
9.1.6 Connections . . . 424
9.1.7 Status propagation . . . 425
9.1.8 Hovering . . . 429
9.1.9 Pinning . . . 430
9.1.10 Zone and zonesets . . . 431
9.1.11 Removing entities from the database . . . 433
9.1.12 Refreshing the Topology Viewer . . . 433
9.2 Getting started . . . 434
9.2.1 Launch the Topology Viewer . . . 434
9.2.2 Navigation . . . 436
9.2.3 Computer . . . 437
9.2.4 Fabric . . . 442
9.2.5 Storage Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9.3 Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
9.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Chapter 10. Managing and monitoring your storage subsystem . . . . . . . . . . . . . . . . 453
10.1 Case study 1: adding new servers and storage . . . . . . . . . . . . . . . . . . . . . 454
10.1.1 Server tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
10.1.2 Storage provisioning and zoning on Colorado . . . . . . . . . . . . . . . . . . . . 459
10.1.3 Storage provisioning and zoning on AZOV (002-AZOV) . . . . . . . . . . . . . . . . 465
10.1.4 Post activities for Colorado and AZOV . . . . . . . . . . . . . . . . . . . . . . . 470
10.1.5 Manual zone configuration with TPC . . . . . . . . . . . . . . . . . . . . . . . . 473
10.2 Case study 2: Detect and alert unwanted files . . . . . . . . . . . . . . . . . . . . 481
10.2.1 Prerequisite steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
10.2.2 Create a Scan targeted to the specific filesystem . . . . . . . . . . . . . . . . . 483
10.2.3 Create a customized report for the scan results . . . . . . . . . . . . . . . . . . 487
10.2.4 Set up the constraint definition . . . . . . . . . . . . . . . . . . . . . . . . . . 490
10.2.5 Rescan the filesystem to see if there is a violation . . . . . . . . . . . . . . . 494
10.2.6 Activate Archive/Delete operation . . . . . . . . . . . . . . . . . . . . . . . . . 498
10.3 Case study 3: Policy management . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
10.3.1 The steps to perform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
10.3.2 Create a Scan and a Report targeted to a specific filesystem . . . . . . . . . . . 503
10.3.3 Define an alert for a filesystem threshold condition (optional) . . . . . . . . . 506
Chapter 11. Hints, tips and good to knows . . . . . . . . . . . . . . . . . . . . . . . . . 509
11.1 Selecting an SMS or DMS tablespace . . . . . . . . . . . . . . . . . . . . . . . . . . 510
11.2 DB2 installation known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
11.3 How to get rid of Engenio provider 5989 port . . . . . . . . . . . . . . . . . . . . . 511
11.4 DB2 view for CIMOM discovered each subsystem . . . . . . . . . . . . . . . . . . . . . 511
11.5 Valid characters for user ID and passwords . . . . . . . . . . . . . . . . . . . . . . 511
11.5.1 Typical installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
11.5.2 Custom installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
11.6 How to change timeout value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
11.7 Common Agent and Agent Manager documentation . . . . . . . . . . . . . . . . . . . . . 513
11.7.1 For an installed Common Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
11.7.2 For an installed Agent Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
11.7.3 Scripts to clean up TPC V3.1 component or complete install . . . . . . . . . . . . 513
11.8 SLP configuration recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
11.8.1 SLP registration and slptool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
11.9 Tivoli Common Agent Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
11.9.1 Locations of configured user IDs . . . . . . . . . . . . . . . . . . . . . . . . . . 515
11.9.2 Tivoli Agent Manager status . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
11.10 Verifying if a port is in use . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Appendix A. Worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
User IDs and passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Server information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
User IDs and passwords for key files and installation . . . . . . . . . . . . . . . . . . . 521
Storage device information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
IBM TotalStorage Enterprise Storage Server, DS6000, DS8000 . . . . . . . . . . . . . . . . 522
IBM DS4000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L, AIX, Cloudscape, DB2 Universal Database, DB2, DS4000, DS6000, DS8000, Enterprise Storage Server, ESCON, eServer, FlashCopy, ibm.com, IBM, iSeries, NetView, PowerPC, POWER4, POWER5, pSeries, Redbooks (logo), Redbooks, System Storage, Tivoli Enterprise Console, Tivoli Enterprise, Tivoli, TotalStorage, WebSphere, xSeries, z/OS, zSeries
The following terms are trademarks of other companies: Java, JDBC, JDK, JRE, JVM, Solaris, Sun, Sun Microsystems, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Active Directory, Internet Explorer, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM TotalStorage Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex, heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, at a single point, the management of storage devices, fabric, and data.

TotalStorage Productivity Center V3.1 is a rewrite of previous versions. This IBM Redbook shows you how to access the product functions and how they compare with the previous releases. It is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center. It provides an overview of the product components and functions, describes the required hardware and software environment, and provides a step-by-step installation procedure. Customization and usage hints and tips are also provided.

This book is not a replacement for existing IBM Redbooks or product manuals that detail the implementation and configuration of the individual products that make up IBM TotalStorage Productivity Center, or of those products as they were named in previous versions. We refer to those books as appropriate throughout this book.
From left to right: Massimo, Marta, Tom, and Mary
Mary Lovelace is a Consulting IT Specialist at the International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and systems support. She has written many redbooks on TotalStorage Productivity Center and z/OS storage products.

Tom Conway is an Infrastructure Architect in the United States of America. He has 16 years of experience in the Open Systems infrastructure field. He joined IBM in 2001 and became the Chief Engineer of the IBM Global Services SAN Interoperability Lab at the IBM National Test Center in Gaithersburg, Maryland. His areas of expertise include Open Systems server hardware, operating systems, networking, and storage hardware and software, including the IBM TotalStorage Productivity Center. He is an IBM Certified Professional Server Expert.

Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 20 years of experience in software development, project management, and consulting, concentrating on the telecommunications segment. Werner joined IBM in 2001 and works in presales as a Storage SE for Open Systems. His expertise is the design and implementation of IBM Storage Solutions (ESS/FAStT/LTO/NAS/SAN/SVC). He holds a degree as Dipl. Informatiker (FH) from Fachhochschule Konstanz, Germany.

Marta Greselin is an IT Specialist working for IBM Software Group in Italy. She joined IBM in 1999. Her role is Technical Sales Support in Tivoli. She has seven years of experience in selling and implementing proof-of-concept scenarios for Storage Management software solutions, both in Tivoli and TotalStorage Open Software. She holds a degree in Physics from Università Statale di Milano. Her areas of expertise include IBM Tivoli Storage Manager, IBM TotalStorage Productivity Center, IBM TotalStorage SAN Volume Controller, and IBM TotalStorage SAN File System.

Hartmut Harder is an IT Specialist based in Karlsruhe, Germany.
Before joining IBM in 1985, he completed his education as a certified engineer of electronics. He started with IBM ITS Delivery as a Hardware Specialist for Large System customers. Of his more than 20 years of IT experience, he has spent two thirds in several areas of Systems Management software products such as Tivoli Framework, Tivoli Monitoring, and Software Distribution
with NetView DM/2. For the last six years he has focused on storage-related products such as Tivoli Storage Manager. His working knowledge is gained from supporting customers with planning, implementing, and supporting IBM Storage Management solutions.

Stefan Lein is a Consulting IT Specialist working for Field Technical Sales Support in the IBM Storage Sales organization in Germany. He joined IBM in 1993 and has worked in several sales and technical roles. He has five years of experience providing presales and postsales support for IBM TotalStorage solutions for open systems. His areas of special expertise include IBM Disk Systems and the IBM Storage Software solution portfolio. Stefan is an IBM Certified Specialist for TotalStorage Networking and Virtualization Architecture and for Open Systems Storage Solutions. He holds a degree in Computer Science from the University of Applied Science in Nürnberg, Germany, and a degree in economical engineering from the University of Applied Science in Würzburg/Schweinfurt, Germany.

Massimo Mastrorilli is an Advisory IT Storage Specialist in Switzerland. He joined IBM Italy in 1989 and moved to IBM Switzerland, based in Lugano, seven years ago. He has 16 years of experience in implementing, designing, and supporting storage solutions in S/390 and Open Systems environments. His areas of expertise include IBM Tivoli Storage Manager, Storage Area Networks (SAN), and storage solutions for Open Systems. He is an IBM Certified Specialist for TSM, Storage Sales, and Open System Storage Solutions. He is a member of the Tivoli GRT Global Response Team.
Thanks to the following people for their contributions to this project:

Robert Haimowitz
Sangam Racherla
International Technical Support Organization

Diana Duan
Doug Dunham
Paul Lee
Curtis Neal
Jeanne Ostdiek
Scott Venuti
San Jose, California, IBM USA

Russ Warren
Storage Software Project Management
Research Triangle Park, North Carolina, IBM USA

Mike Griese
Technical Support Marketing
Rochester, Minnesota, IBM USA

Derek Jackson
Advanced Technical Support
Gaithersburg, Maryland, IBM USA
Tina Dunton
Nancy Hobbs
Sudhir Koka
Bryant Lee
Arvind Surve
Bill Tuminaro
Miki Walters
TotalStorage Productivity Center Development, San Jose, California, IBM USA

Eric Butler
Andreas Dieberger
Roberto Pineiro
Ramani Routray
IBM Research, San Jose, California, IBM USA
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to IBM TotalStorage Productivity Center
Logical Structure
The logical structure of TotalStorage Productivity Center V3.1 has three layers, as shown in Figure 1-1 on page 3. The infrastructure layer consists of basic functions such as messaging, scheduling, logging, and device discovery, together with a consolidated database shared by all components of TotalStorage Productivity Center to ensure consistent operation and performance. The application layer consists of the core TotalStorage Productivity Center management functions, built on the infrastructure layer, that provide the different disciplines of storage and data management. These application components are most often associated with the product components that make up the product suite: fabric management, disk management, replication management, and data management. The interface layer presents integration points for the products that make up the suite. The integrated graphical user interface (GUI) brings product and component functions together into a single representation that seamlessly interacts with the components to centralize the tasks of planning, monitoring, configuring, reporting, topology viewing, and problem resolution.
(Figure 1-1 shows the TotalStorage Productivity Center V3.1 logical structure: an interface layer with GUI, CLI, and WSDL interfaces; a management application layer with Fabric, Disk, Replication, Performance, Data, and other applications; and an infrastructure layer of shared services, including monitoring, Copy Services (eCS), CIM, ESSNI Agent, and SNMP client libraries, plus domain-specific plug-ins such as the CIM and SLP scanners and parsers, CIM XML parser, CIM processor profile, class mappers, and the database driver.)
Physical structure
IBM TotalStorage Productivity Center comprises the following elements:

- A data component: IBM TotalStorage Productivity Center for Data (formerly IBM Tivoli Storage Resource Manager)
- A fabric component: IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager)
- A disk component: IBM TotalStorage Productivity Center for Disk (formerly IBM TotalStorage Multiple Device Manager)
- A replication component: IBM TotalStorage Productivity Center for Replication (formerly IBM TotalStorage Multiple Device Manager Replication Manager)

IBM TotalStorage Productivity Center includes a centralized suite installer. IBM TotalStorage Productivity Center for Data and IBM TotalStorage Productivity Center for Fabric share a common agent to manage the fabric as well as the capacity utilization of file systems and databases. Figure 1-2 on page 4 shows the TotalStorage Productivity Center V3.1 physical structure.
The Data server is the control point for product scheduling functions, configuration, event information, reporting, and GUI support. It coordinates communication with agents and data collection from agents that scan file systems and databases to gather storage demographics and populate the database with the results. Automated actions can be defined to perform file system extension, data deletion, Tivoli Storage Manager backup or archiving, or event reporting when defined thresholds are encountered. The Data server is the primary contact point for GUI user interface functions. It also includes functions that schedule data collection and discovery for the Device server.

The Device server component discovers, gathers information from, analyzes the performance of, and controls storage subsystems and SAN fabrics. It coordinates communication with agents and data collection from agents that scan SAN fabrics.

The single database instance serves as the repository for all TotalStorage Productivity Center components. The Data agents and Fabric agents gather host, application, and SAN fabric information and send it to the Data server or Device server. The GUI allows you to enter and view information for all TotalStorage Productivity Center components. The command-line interface (CLI) allows you to issue commands for the major TotalStorage Productivity Center functions.
created, accessed, and modified, and by what group or user. This type of information enables system administrators to map the actual storage resource to the consumers of that resource. The ability to map storage consumption to storage hardware has become increasingly important as the size of open systems environments has increased.

In addition to understanding the current consumption and usage of data within the enterprise, TotalStorage Productivity Center for Data tracks this information over time. This historical view of storage consumption and utilization not only lets you see usage trends over time, it also enables the system administrator to see a projected use of storage into the future. The administrator can therefore plan the purchase of additional capacity proactively rather than simply reacting to running out of space.

The major components of TotalStorage Productivity Center for Data are:

Data Manager
The manager controls the discovery, reporting, and alert functions. It:
- Receives information from the agents and stores that information in the central repository.
- Issues commands to agents for jobs.
- Receives requests from clients for information and retrieves the requested information from the central data repository.

Data agents on managed systems
An agent resides on each managed system. Each agent:
- Runs probes and scans.
- Collects storage-related information about the volumes or file systems that are accessible to the managed system.
- Forwards information to the manager to be stored in the database repository.

Web server
The optional Web server permits remote Web access to the server.

Clients
Clients communicate directly with Data Manager to perform administration, monitoring, and reporting. A client can be a locally installed interface to Data Manager, or it can use the Web server to access the user interface through a Web browser.
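The agent-to-manager flow described above can be sketched as follows. This is a conceptual illustration only, not TPC code; all class and field names are invented for the example.

```python
# Conceptual sketch (not TPC code) of the agent-to-manager flow: agents
# run scans, forward results to the Data Manager, which stores them in
# the central repository and answers client queries from it.

class DataManager:
    def __init__(self):
        self.repository = {}                  # central repository: host -> scan results

    def receive(self, host, scan_result):     # agents forward their results here
        self.repository[host] = scan_result

    def query(self, host):                    # clients request stored information
        return self.repository.get(host)

class DataAgent:
    def __init__(self, host, manager):
        self.host, self.manager = host, manager

    def scan(self, filesystems):
        # collect storage-related information for accessible file systems
        result = {fs: {"capacity_gb": cap} for fs, cap in filesystems.items()}
        self.manager.receive(self.host, result)

mgr = DataManager()
DataAgent("colorado", mgr).scan({"/data": 500})
```

A client query against `mgr` then reads the same repository the agent populated, mirroring how the GUI and CLI read the single database instance.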
TotalStorage Productivity Center for Disk performance functions include:
- Collect and store performance data and provide alerts
- Provide graphical performance reports
- Help optimize storage allocation
- Provide volume contention analysis

Through data collection, threshold setting, and performance reports, performance can be monitored for the ESS, DS4000, DS6000, DS8000, SVC, and any other storage subsystem that supports the SMI-S block server performance subprofile. The performance function starts with the data collection task, which is responsible for capturing performance statistics for the devices and storing the data in the database. Thresholds can be set for certain performance metrics, depending on the type of device. Threshold checking is performed during data collection; when performance is outside the specified boundaries, alerts can be generated. After performance data has been collected, you can configure TotalStorage Productivity Center for Disk to present graphical or text reports on the historical performance behavior of specified devices. The performance reports provide information about the performance metrics and display past or current performance in graphical form.
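The threshold checking performed during data collection can be sketched roughly as follows. This is an illustrative model, not TPC code; the metric name, boundary values, and device names are assumptions for the example.

```python
# Conceptual sketch: applying configured thresholds to collected
# performance samples and generating alerts when a metric is outside
# its boundaries. All names and values are illustrative.

from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str          # e.g. "total_io_rate" (illustrative metric name)
    warning: float       # boundary that triggers a warning alert
    critical: float      # boundary that triggers a critical alert

def check_samples(samples, thresholds):
    """Return (device, metric, severity) alerts for out-of-bounds samples."""
    alerts = []
    by_metric = {t.metric: t for t in thresholds}
    for s in samples:
        t = by_metric.get(s["metric"])
        if t is None:
            continue                      # no threshold configured for this metric
        if s["value"] >= t.critical:
            alerts.append((s["device"], s["metric"], "critical"))
        elif s["value"] >= t.warning:
            alerts.append((s["device"], s["metric"], "warning"))
    return alerts

alerts = check_samples(
    [{"device": "DS8000-1", "metric": "total_io_rate", "value": 5200.0},
     {"device": "DS8000-2", "metric": "total_io_rate", "value": 100.0}],
    [Threshold("total_io_rate", warning=1000.0, critical=5000.0)],
)
```

In the product, the equivalent check runs as part of the data collection task and the resulting alerts feed the alerting and reporting functions.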
The components of TotalStorage Productivity Center for Fabric are:

Fabric Manager
The manager performs the following functions:
- Discovers SAN components and devices.
- Gathers data from agents on managed hosts, such as descriptions of SANs and host information.
- Generates Simple Network Management Protocol (SNMP) events when a change is detected in the SAN fabric.
- Forwards events to the Tivoli Enterprise Console or an SNMP console.
- Monitors switch performance by port and by constraint violations.

Fabric agents on managed hosts
Each agent performs the following functions:
- Gathers information about the SAN by querying switches and devices for attribute and topology information.
- Gathers event information detected by host bus adapters (HBAs).
Topology Viewer
Within TotalStorage Productivity Center, the Topology Viewer is designed to provide an extended graphical topology view: a graphical representation of the physical and logical resources (for example, computers, fabrics, and storage subsystems) that have been discovered in your storage environment. In addition, the Topology Viewer depicts the relationships among resources (for example, the disks comprising a particular storage subsystem). Detailed tabular information (for example, the attributes of a disk) is also provided. With all the information the Topology Viewer provides, you can monitor and troubleshoot your storage environment more easily and quickly.

The overall goal of the Topology Viewer is to provide a central location from which to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation in the environment. This flexibility affords better cognitive mapping between the entities within the environment, and provides data about entities as well as access to the additional tasks and functionality associated with the current environmental view and the user's role.

The Topology Viewer uses the TotalStorage Productivity Center database as the central repository for all the data it displays. It reads the data from the database at user-definable intervals and, if necessary, updates the displayed information automatically. Figure 1-3 on page 9 shows the Topology Viewer Overview view.
be SMI-S 1.0.2 or SMI-S 1.1 compliant. This support includes storage provisioning, as well as asset and capacity reporting.

- Consolidated and enhanced device discovery and control through CIMOM. All CIMOM-related information gathered during CIMOM discovery is shared by all TotalStorage Productivity Center components.
- Consistent reporting capabilities (scheduled and ad hoc). The scheduling capabilities of TotalStorage Productivity Center for Data have been extended to all components.
- Consolidated message logging.
- Data export capabilities (HTML, CSV).
- A single set of services for consistent administration and operations: policy definitions, event handling, and resource groups.
- A new command-line interface, tpctool, for configuration, fabric and disk management, and performance reporting.

TotalStorage Productivity Center for Fabric adds support for SMI-S-based fabrics, collecting performance statistics from IBM and third-party SAN fabrics. TotalStorage Productivity Center for Fabric is designed to provide an extended graphical topology view of your storage area network that displays the hosts, SAN fabric, and storage, showing the SAN connectivity and its availability, as well as the fabric performance metrics and the status of the ports on the SAN fabric.

TotalStorage Productivity Center V3.1 provides the following support for any disk subsystem (including non-IBM devices) that is SNIA SMI-S 1.0.2 or 1.1 compliant (for example, SNIA CTP provider certified). The support provided for these SMI-S compliant subsystems includes those functions enabled by support for the required profiles of the SMI-S standard.
Typically, that includes:
- Discovery of CIMOMs and storage subsystems (through SLP)
- Reporting on subsystem asset and capacity data (with details on storage subsystems, disk groups, disks, storage pools, and volumes)
- Monitoring
- Provisioning (volume creation and volume mapping/masking to a host server)
- Performance metrics for storage subsystem ports, subsystem volumes, and top-level storage computer systems (including overall performance metrics for the storage device)

Note that IBM relies on the testing and certification performed through SNIA for SMI-S compliance. The anticipated list of non-IBM disk subsystems that TotalStorage Productivity Center V3.1 will support through SMI-S compliance for the functions listed here includes:
- EMC Symmetrix
- EMC Clariion
- Engenio subsystems
- HDS Thunder 9500V
- HDS Lightning 9900V
- HPQ XP 512, XP 1024
- HPQ StorageWorks Virtual Array family
For a complete list of third-party device support of SMI-S, consult the SNIA Web site:
http://www.snia.org/ctp
1.3 Licensing
All IBM TotalStorage Productivity Center licenses offer a common set of general functions, such as the TotalStorage Productivity Center Administrative Services, Disk Management (not including performance management), Fabric Management (not including performance management), and Tape Management (reporting). IBM TotalStorage Productivity Center licensing is based on the full usable terabyte capacity of the associated storage devices to be managed.
- Display an end-to-end topology view of your storage infrastructure and health console.
- Enable a simple upgrade path to IBM TotalStorage Productivity Center Standard Edition (or single-priced modules).

IBM TotalStorage Productivity Center Limited Edition is a management option offered with IBM midrange and high-end storage systems. This tool provides storage administrators with a simple way to conduct device management for multiple storage arrays and SAN fabric components from a single integrated console, which is also the base of operations for the IBM TotalStorage Productivity Center suite. This offering is available with Productivity Center for Data, Fabric, and Disk.
Chapter 2. Key concepts
Industry standards and protocols form the basis of the IBM TotalStorage Productivity Center, and understanding them is important for installing and customizing the product. This chapter describes the standards on which the IBM TotalStorage Productivity Center is built, as well as the methods of communication used to discover and manage storage devices. Communication between the various components of IBM TotalStorage Productivity Center is also discussed, and diagrams show the relationships and interactions of the various elements in the IBM TotalStorage Productivity Center environment.
T11 committee
Since the 1970s, the objective of the ANSI T11 committee has been to define interface standards for high-performance and mass storage applications. Since that time, the committee has completed work on three projects:
- High-Performance Parallel Interface (HIPPI)
- Intelligent Peripheral Interface (IPI)
- Single-Byte Command Code Sets Connection (SBCON)

Currently, the group is working on Fibre Channel (FC) and Storage Network Management (SM) standards.
Hypertext Transfer Protocol over Secure Socket Layer (HTTPS) HTTP and HTTPS are used as a way to enable communication between a management application and a device that both use CIM. For more information go to:
http://www.dmtf.org/standards/wbem/
WBEM architecture
The WBEM architecture defines the following elements:

CIM Client
The CIM Client is a management application that uses CIM to manage devices. A CIM Client can reside anywhere in the network, because it uses HTTP(S) to talk to CIM Object Managers and Agents. TotalStorage Productivity Center incorporates a CIM Client.

CIM Managed Object
A CIM Managed Object is a hardware or software component that can be managed by a management application using CIM.

CIM Agent
The CIM Agent is either embedded in a device or installed on a server, using the CIM Provider to translate the device's proprietary commands into CIM calls, and it interfaces with the management application (the CIM Client). A CIM Agent is linked to one device.

CIM Provider
A CIM Provider is the element that translates CIM calls into device-specific commands, which is why a CIM Provider is, in most cases, delivered by the hardware vendor. You can think of it as a device driver. A CIM Provider is always closely linked to a CIM Object Manager or CIM Agent.

CIM Object Manager
A CIM Object Manager (CIMOM) is the part of the CIM Server that links the CIM Client to the CIM Provider. It enables a single CIM Agent to talk to multiple devices.

CIM Server
A CIM Server is the software that runs the CIMOM and the CIM Providers for a set of devices. This approach is used when the devices do not have an embedded CIM Agent. The term CIM Server is often not used; instead, people often say CIMOM when they really mean the CIM Server.
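To make the CIM Client-to-CIMOM exchange concrete, the following sketch builds a minimal CIM-XML request of the kind a CIM Client POSTs to a CIMOM over HTTP(S) (SMI-S CIMOMs conventionally listen on port 5989 for HTTPS). The namespace and class name are examples only, and no request is actually sent; consult the DMTF CIM-XML specification for the authoritative encoding.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: constructing a CIM-XML intrinsic method call
# (EnumerateInstances) as a CIM Client would before POSTing it to a
# CIMOM. The namespace "root/ibm" and class "CIM_StorageVolume" are
# example values only.

def enumerate_instances_payload(namespace, classname, msg_id="1001"):
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID=msg_id, PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace.split("/"):           # e.g. "root/ibm" -> NAMESPACE elements
        ET.SubElement(path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=classname)
    return ET.tostring(cim, encoding="unicode")

payload = enumerate_instances_payload("root/ibm", "CIM_StorageVolume")
```

On the server side, the CIMOM dispatches such a call to the appropriate CIM Provider, which translates it into the device-specific commands described above.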
Managers or components that use this standard include:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Data
Device management
An SNMP manager can read information from an SNMP agent to monitor a device; to do so, it must poll the device at regular intervals. The SNMP manager can also change the configuration of a device by setting certain values in the corresponding variables. Managers or components that use these standards include IBM TotalStorage Productivity Center for Fabric.
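To illustrate what a polling manager actually puts on the wire, the sketch below hand-encodes an SNMPv1 GetRequest for sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) using basic BER rules. This is a teaching sketch under simplifying assumptions (short-form lengths, small integers); a real manager would use an SNMP library, and the community string "public" is just an example.

```python
def ber_len(n: int) -> bytes:
    # Short-form BER length is enough for this small packet.
    assert n < 128
    return bytes([n])

def tlv(tag: int, payload: bytes) -> bytes:
    return bytes([tag]) + ber_len(len(payload)) + payload

def ber_int(value: int) -> bytes:
    # Minimal encoding for small non-negative integers.
    assert 0 <= value < 128
    return tlv(0x02, bytes([value]))

def ber_oid(arcs) -> bytes:
    # The first two arcs are packed into one byte; the rest fit in 7 bits here.
    body = bytes([arcs[0] * 40 + arcs[1]]) + bytes(arcs[2:])
    return tlv(0x06, body)

def snmpv1_get(community: bytes, oid) -> bytes:
    varbind = tlv(0x30, ber_oid(oid) + tlv(0x05, b""))       # OID + NULL value
    pdu = tlv(0xA0,                                          # GetRequest-PDU
              ber_int(1) +                                   # request-id
              ber_int(0) + ber_int(0) +                      # error-status, error-index
              tlv(0x30, varbind))                            # variable bindings
    return tlv(0x30, ber_int(0) + tlv(0x04, community) + pdu)  # version 0 = SNMPv1

# GetRequest for sysDescr.0 (1.3.6.1.2.1.1.1.0)
packet = snmpv1_get(b"public", [1, 3, 6, 1, 2, 1, 1, 1, 0])
```

A manager would send this packet by UDP to the agent's port 161 and decode the GetResponse the same way, which is what "polling at regular intervals" amounts to in practice.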
Traps
A device can also be set up to send a notification (called a trap) to the SNMP manager to inform it asynchronously of a status change. Depending on your existing environment and organization, it is likely that an SNMP management application is already in place. The managers or components that use this standard are:
- IBM TotalStorage Productivity Center for Fabric (sends and receives traps)
- IBM TotalStorage Productivity Center for Data (can be set up to send traps, but does not receive them)
Service Agent
The SLP SA is the component of the SLP architecture that works on behalf of one or more network services to advertise the availability of those services. The SA replies to external service requests by using IP unicasts, providing the requested information about the registered services if it is available. The SA can run in the same process as the service itself or in a different one. In either case, the SA supports registration and deregistration requests for the service (as shown in the right part of Figure 2-1). The service registers itself with the SA during startup and removes its registration during shutdown. In addition, every service registration is associated with a life-span value, which specifies how long the registration remains active. In the left part of the diagram, you can see the interaction between a UA and the SA.
A service is required to reregister itself periodically, before the life-span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if a service becomes inactive without removing the registration for itself, that old registration is removed automatically when its life span expires. The maximum life span of a registration is 65535 seconds (about 18 hours).
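The re-registration and life-span rules can be illustrated with a toy model of an SA's registration table. The class name and the example WBEM service URL below are hypothetical, and a fake clock is injected purely to demonstrate expiry; this is not how any real SA is implemented.

```python
import time

MAX_LIFESPAN = 65535  # seconds, the SLP maximum (about 18 hours)

class ServiceRegistry:
    """Toy model of an SLP SA's registration table with life-span expiry."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for demonstration/testing
        self._entries = {}           # service URL -> expiry timestamp

    def register(self, url: str, lifespan: int) -> None:
        lifespan = min(lifespan, MAX_LIFESPAN)
        self._entries[url] = self._clock() + lifespan

    def deregister(self, url: str) -> None:
        self._entries.pop(url, None)

    def active_services(self):
        # Drop registrations whose life span has elapsed, then return the rest.
        now = self._clock()
        self._entries = {u: t for u, t in self._entries.items() if t > now}
        return sorted(self._entries)

# A fake clock makes the expiry behavior easy to see.
now = [0.0]
reg = ServiceRegistry(clock=lambda: now[0])
reg.register("service:wbem:https://cimom.example.com:5989", lifespan=300)
now[0] = 301.0   # the service failed to re-register before its life span ended
print(reg.active_services())  # []
```

The stale entry disappears on its own, which is exactly why a live service must re-register before its previous registration expires.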
User Agent
The SLP UA is a process that works on behalf of the user to establish contact with a network service; it retrieves (or queries for) service information from the Service Agents or Directory Agents. The UA is the component of SLP most closely associated with a client application or a user who is searching for the location of one or more services in the network. You use the SLP UA by defining a service type that you want it to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locators (URLs) and any service attributes, and you can use a service's URL to connect to that service. The SLP UA locates the registered services based on a general description that the user or client application has specified. This description usually consists of a service type and any service attributes, which are matched against the service URLs registered in the SLP Service Agents. The SLP UA usually runs in the same process as the client application, although this is not required. The SLP UA processes find requests by sending multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with a
minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA in a unicast reply message. The SLP UA follows the multicast convergence algorithm and sends repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URLs and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, by using each service's URL (see Figure 2-2).
An SLP UA is not required to discover all matching services that exist in the network, only enough of them to provide useful results. This restriction is mainly the result of the transmission size limits for UDP packets, which can be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, UAs can recognize truncated service replies and establish TCP connections to retrieve all of the information about the registered services. With this type of UA and SA implementation, the only remaining exposure is when there are too many SAs within the multicast range, which can cut short the multicast convergence mechanism. The SLP administrator can mitigate this exposure by setting up one or more SLP DAs.

The SLP process shown in the diagram can be described as follows:
1. The application needs a specific service and issues a find request.
2. The UA starts a multicast service request.
3. One or more SAs answer, providing service information.
4. The application gets feedback from the UA.
5. The application contacts and uses the service on its own.

Note: TotalStorage Productivity Center acts as a User Agent in terms of SLP communication.
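The single UDP message that a UA multicasts can be sketched as bytes. The layout below follows the SLPv2 wire format defined in RFC 2608 (fixed header, then the SrvRqst body); the service type "service:wbem", the "DEFAULT" scope, and the XID value are illustrative assumptions.

```python
import struct

def slp_service_request(service_type: str, xid: int = 1) -> bytes:
    """Sketch of an SLPv2 Service Request (SrvRqst) per RFC 2608."""
    def sized(s: str) -> bytes:
        data = s.encode("utf-8")
        return struct.pack("!H", len(data)) + data

    body = (sized("")                 # previous-responder list (empty)
            + sized(service_type)     # e.g. "service:wbem"
            + sized("DEFAULT")        # scope list
            + sized("")               # predicate (no attribute filter)
            + sized(""))              # SLP SPI (no security)

    lang = b"en"
    flags = 0x2000                    # REQUEST-MCAST: message sent to a multicast address
    total = 14 + len(lang) + len(body)
    header = (struct.pack("!BB", 2, 1)        # version 2, function SrvRqst = 1
              + total.to_bytes(3, "big")      # 3-byte message length
              + struct.pack("!H", flags)
              + (0).to_bytes(3, "big")        # next-extension offset
              + struct.pack("!H", xid)
              + struct.pack("!H", len(lang)) + lang)
    return header + body

msg = slp_service_request("service:wbem")
# A UA would send this by UDP to the SLP multicast group 239.255.255.253, port 427,
# then listen for unicast SrvRply messages from the SAs in range.
```

Each SA within the multicast range parses this request, compares it against its registered services, and unicasts a reply, which is the exchange the convergence algorithm repeats until no new replies arrive.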
IBM TotalStorage Productivity Center: The Next Generation
Directory Agent
The SLP DA is an optional component of SLP that collects and caches network service broadcasts. The DA is used primarily to simplify SLP administration and to improve SLP performance. You can consider the SLP DA as an intermediate tier in the SLP architecture. It is placed between the UAs and the SAs so that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic in the network. It also protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment. Figure 2-3 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.
Figure 2-3 SLP Directory Agent interactions with User Agent and Service Agent
When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request. It also specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery. It is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service URLs and attributes. Figure 2-4 on page 22 shows the interactions of UAs and SAs with DAs, during active DA discovery.
The SLP DA functions similarly to an SLP SA, receiving registration and deregistration requests and responding to service requests with unicast service replies. There are a couple of areas where DAs provide more functionality than SAs. One, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL that contains the DA's IP address. This allows SAs and UAs to perform active discovery on DAs. The other difference is that when a DA first initializes, it sends a multicast DA advertisement message to advertise its services to any SAs (and UAs) that may already be active in the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is sometimes called passive DA discovery. When an SA finds a new DA through passive DA discovery, it sends registration requests for all of its currently registered services to that new DA. Figure 2-5 shows the interactions of DAs with SAs and UAs during passive DA discovery.
Unicast
The most common communication method, unicast, requires that the sender of a message identify exactly one target for that message. The target IP address is encoded within the message packet and is used by the routers along the network path to route the packet to the proper destination. If a sender wants to send the same message to multiple recipients, multiple messages must be generated and placed in the network, one per recipient. When there are many potential recipients for a particular message, this places an unnecessary strain on network resources, because the same data is duplicated many times and the only difference is the target IP address encoded within the messages.
Broadcast
In cases where the same message must be sent to many targets, broadcast puts much less strain on the network than unicast. Broadcasting uses a special IP address, 255.255.255.255, which indicates that the message packet is intended for all nodes in a network. As a result, the sender needs to generate only a single copy of the message and can still transmit it to all members of the network. The routers multiplex the message packet as it is sent along all possible routes to reach all possible destinations. This puts much less strain on network bandwidth, because only a single message stream enters the network, as opposed to one stream per recipient. However, it puts much more strain on the individual nodes (and routers), because every node receives the message even though most nodes are probably not interested in it; those unintended recipients must receive the unwanted message and discard it. Because of this inefficiency, routers in most network configurations are configured not to forward any broadcast traffic, which means that broadcast messages can reach only nodes on the same subnet as the sender.
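As a small illustration of the mechanics, most socket APIs make a sender opt in before it may use the limited-broadcast address at all. The snippet below is a generic sketch (the port number is an arbitrary example) and deliberately leaves the actual send commented out.

```python
import socket

# Sending to 255.255.255.255 requires explicitly enabling broadcast on the
# socket; even then, routers normally do not forward such packets beyond
# the local subnet.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# sock.sendto(b"hello", ("255.255.255.255", 9999))  # reaches the local subnet only
enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
sock.close()
print(enabled != 0)  # True
```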
Multicast
The ability of SLP to automatically discover services that are available in the network, without much setup or configuration, depends in large part on the use of IP multicasting. IP multicasting is a broad subject in itself, so only a brief overview is provided here. Multicasting can be thought of as a more sophisticated form of broadcast that aims to solve some of the inefficiencies inherent in the broadcasting mechanism. With multicasting, the sender of a message again generates only a single copy of the message, saving network bandwidth. Unlike broadcasting, however, not every member of the network receives the message; only those members who have explicitly expressed an interest in the particular multicast stream receive it. Multicasting introduces the concept of a multicast group, where each multicast group is associated with a specific IP address. A particular network node (host) can join one or more multicast groups, which notifies the associated router or routers that there is an interest in receiving multicast streams for those groups. When a sender, who does not necessarily have to be part of the group, sends messages to a particular multicast group, the messages are routed only to those subnets that contain members of that multicast group. This avoids flooding the entire network with the message, as is the case for broadcast traffic.
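The step of "joining a multicast group" is, at the host level, a socket membership request that the kernel signals to the local router (via IGMP). The sketch below shows this for the SLP group; it is generic illustration, not product code, and the join is wrapped in a try/except because it can fail on hosts without a multicast-capable interface.

```python
import socket
import struct

SLP_MCAST_GROUP = "239.255.255.253"   # SLP's reserved multicast address
SLP_PORT = 427                        # an SA would listen on this port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Membership request: 4-byte group address + 4-byte local interface address.
mreq = struct.pack("4s4s",
                   socket.inet_aton(SLP_MCAST_GROUP),
                   socket.inet_aton("0.0.0.0"))   # let the kernel pick the interface
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # no multicast-capable interface available on this host
sock.close()
```

After the join, the host receives datagrams addressed to 239.255.255.253 without any other node on the network being burdened by them.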
Multicast addresses
The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP addresses, has assigned the old Class D IP address range to be used for IP multicasting. Of this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses are reserved for router management and communication. Some of the 224.0.1.* addresses are reserved for particular standardized multicast applications. Each of the remaining addresses corresponds to a particular general purpose multicast group. The Service Location Protocol uses address 239.255.255.253 for all its multicast traffic. The port number for SLP is 427, for both unicast and multicast.
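These address ranges are easy to check programmatically; the old Class D range corresponds to 224.0.0.0/4 in CIDR notation. A small sketch using Python's standard ipaddress module:

```python
import ipaddress

# The old Class D range reserved by IANA for multicasting.
MULTICAST_RANGE = ipaddress.ip_network("224.0.0.0/4")

# SLP's reserved multicast address.
SLP_ADDRESS = ipaddress.ip_address("239.255.255.253")

print(SLP_ADDRESS in MULTICAST_RANGE)   # True
print(SLP_ADDRESS.is_multicast)         # True

# 224.0.0.* is the block reserved for router management and communication.
print(ipaddress.ip_address("224.0.0.1") in ipaddress.ip_network("224.0.0.0/24"))  # True
```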
SLP messages
In order to be able to provide a framework for service location, SLP agents communicate with each other using eleven different types of messages. The dialog between agents is usually limited to these very simple exchanges of request and reply messages shown in Figure 2-6.
Service Request (SrvRqst): Sent by UAs to SAs and DAs to request the location of a service.
Service Reply (SrvRply): Sent by SAs and DAs in reply to a SrvRqst, containing the location (service URL) of the requested service.
Service Registration (SrvReg): Sent by SAs to DAs containing information about a service that is available.
Service Deregister (SrvDeReg): Sent by SAs to inform DAs that a service is no longer available.
Service Acknowledge (SrvAck): A generic acknowledgement sent by DAs to SAs as a reply to SrvReg and SrvDeReg messages.
Attribute Request (AttrRqst): Sent by UAs to request the attributes of a service.
Attribute Reply (AttrRply): Sent by SAs and DAs in reply to an AttrRqst; contains the list of attributes that were requested.
Service Type Request (SrvTypeRqst): Sent by UAs to SAs and DAs requesting the types of services that are available.
Service Type Reply (SrvTypeRply): Sent by SAs and DAs in reply to a SrvTypeRqst; contains a list of the requested service types.
DA Advertisement (DAAdvert): Sent by DAs to let SAs and UAs know where they are.
SA Advertisement (SAAdvert): Sent by SAs to let UAs know where they are.
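On the wire, each of these eleven message types is identified by a one-byte function ID in the SLPv2 header. The table below reflects the assignments in RFC 2608; the small helper around it is an illustrative sketch.

```python
# SLPv2 message function IDs as assigned in RFC 2608.
SLP_FUNCTIONS = {
    1: "SrvRqst",       # Service Request
    2: "SrvRply",       # Service Reply
    3: "SrvReg",        # Service Registration
    4: "SrvDeReg",      # Service Deregister
    5: "SrvAck",        # Service Acknowledge
    6: "AttrRqst",      # Attribute Request
    7: "AttrRply",      # Attribute Reply
    8: "DAAdvert",      # DA Advertisement
    9: "SrvTypeRqst",   # Service Type Request
    10: "SrvTypeRply",  # Service Type Reply
    11: "SAAdvert",     # SA Advertisement
}

def message_name(packet: bytes) -> str:
    # In an SLPv2 packet, byte 0 is the version and byte 1 the function ID.
    return SLP_FUNCTIONS.get(packet[1], "unknown")

print(message_name(bytes([2, 1])))  # SrvRqst
```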
Router configuration
Most of the components that make multicasting work are implemented in router operating system software. As a result, the routers in the network must be configured properly for multicasting to work effectively. Unfortunately, there is a dizzying array of protocols and algorithms that can be used to configure particular routers to enable multicasting. The most common ones are:
- Internet Group Management Protocol (IGMP), which is used to register individual hosts in particular multicast groups and to query group membership on particular subnets.
- Distance Vector Multicast Routing Protocol (DVMRP), a set of routing algorithms that use a technique called Reverse Path Forwarding to decide how multicast packets are to be routed in the network.
- Protocol-Independent Multicast (PIM), which comes in two varieties: dense mode (PIM-DM) and sparse mode (PIM-SM). They are optimized for networks where either a large percentage of nodes require multicast traffic (dense) or a small percentage require it (sparse).
- Multicast Open Shortest Path First (MOSPF), an extension of OSPF, a link-state unicast routing protocol that attempts to find the shortest path between any two networks or subnets to provide the most optimal routing of packets.

The routers of interest are all those associated with subnets that contain one or more storage devices to be discovered and managed by TotalStorage Productivity Center. You can configure the routers in the network to enable multicasting in general, or at least to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427. This is the most generic solution and permits discovery to work the way the designers of SLP intended. To properly configure your routers for multicasting, refer to your router manufacturer's reference and configuration documentation. Although older hardware may not support multicasting, all modern routers do. However, in most cases, multicast support is disabled by default, which means that multicast traffic is sent only among the nodes of a subnet and is not forwarded to other subnets. For SLP, this means that service discovery is limited to only those agents that reside in the same subnet.
Firewall configuration
Where one or more firewalls sit between TotalStorage Productivity Center and the storage devices to be managed, the firewalls must be configured to pass traffic in both directions, because SLP communication is two-way. For example, when TotalStorage Productivity Center queries an SLP DA behind a firewall for its registered services, the response does not use the already opened TCP/IP session; instead, another connection is established in the direction from the SLP DA to TotalStorage Productivity Center. For this reason, port 427 should be opened in both directions; otherwise, the response is not received and TotalStorage Productivity Center does not recognize the services offered by that SLP DA.
SLP DA configuration
If router configuration is not feasible, another technique is to use SLP DAs to circumvent the multicast limitations. Because the UA sends all service requests to statically configured DAs by unicast instead of multicast, you can simply configure one DA for each subnet that contains storage devices to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet, although more can be configured without harm, perhaps for fault tolerance. Each of these DAs can discover all services within its own subnet, but no services outside it. To allow Productivity Center to discover all of the devices, you must statically configure it with the address of each of these DAs; you accomplish this by using the corresponding TPC panel. As described previously, Productivity Center unicasts service requests to each of these statically configured DAs, but also multicasts service requests on the local subnet on which Productivity Center is installed. Figure 2-7 on page 27 displays a sample environment where DAs have been used to bridge the multicast gap between subnets in this manner.
Figure 2-7 Recommended SLP configuration
You can easily configure an SLP DA by changing the configuration of the SLP SA included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA instead. The procedure to perform this configuration is explained in SLP directory agent configuration on page 332. Note that the change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function as normal, sending registration and deregistration commands to the DA directly.
- Core schema define the classes and relationships of objects.
- Common schema define common components of systems.
- Extension schema are the entry point for vendors to implement their own schema.
The CIM/WBEM architecture defines the following elements:

Agent code or CIM Agent: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. The Agent is embedded into a device, which can be hardware or software.

CIM Object Manager: The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or device provider, such as a CIM Agent.

Client application or CIM Client: A storage management program, such as TotalStorage Productivity Center, that initiates CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.

Device or CIM Managed Object: A Managed Object is a hardware or software component that can be managed by a management application by using CIM, for example, an IBM SAN Volume Controller.

Device provider: A device-specific handler that serves as a plug-in for the CIMOM; that is, the CIMOM uses the handler to interface with the device.

Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few devices come with an integrated CIM Agent; most need an external CIMOM so that management applications (CIM Clients) can talk to the device. For ease of installation, IBM provides the Integrated Configuration Agent Technology (ICAT), a bundle that includes the CIMOM, the device provider, and an SLP SA.
The CIM Agent or CIMOM translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it. In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the Embedded Model in Figure 2-8. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer be required to integrate incompatible feature-poor interfaces into their products. Component developers will no longer be required to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.
Figure 2-8 CIM Agent models: Proxy Model and Embedded Model
The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server that supports the device user interface.
The Common Agent provides the platform for application-specific agents. Depending on the tasks for which a subagent is used, the Common Agent is installed on application servers, desktop PCs, or notebooks.

Note: In various documentation, readme files, and directory and file names, you also see the terms Common Endpoint, Endpoint, or simply EP. These always refer to the Common Agent, which is part of the Tivoli Common Agent Services.

The Common Agent talks to the application-specific subagent, the Agent Manager, and the Resource Manager, but the actual system-level functions are invoked by the subagent (see Figure 2-13 on page 34). The information that the subagent collects is sent directly to the Resource Manager by using the application's native protocol. This makes it possible to have down-level agents in the same environment as the new agents that are shipped with TotalStorage Productivity Center.

Certificates are used to validate whether a requester is allowed to establish communication. Demo keys are supplied so that you can set up and configure a small environment quickly. Because every installation CD uses the same certificates, this is not secure; if you want to use Tivoli Common Agent Services in a production environment, we recommend that you use your own keys, which can be created during the Tivoli Agent Manager installation. One of the most important certificates is stored in the agentTrust.jks file, which can also be created during the installation of Tivoli Agent Manager. If you do not use the demo certificates, you must have this file available during the installation of the Common Agent and the Resource Manager. The file is locked with a password (the agent registration password) to secure access to the certificates. You can use the ikeyman utility in the java\jre subdirectory to verify your password.
Chapter 2. Key concepts
(The diagram shows the Data Server and Device Server acting as Resource Managers for the Data and Fabric subagents of the Common Agent, and the IBMCDB database, which holds the registration of all Agents and Resource Managers.)
Figure 2-13 Schematic diagram of Agent Manager and its services
Figure 2-13 is a simplified diagram showing the two most important services that the Agent Manager provides, along with the ports used for these services. A more detailed list of all the ports and their relationships can be found in the Installation and Configuration Guide.
The Recovery Service is located by a DNS entry with the unqualified host name TivoliAgentRecovery. During the installation, you also have to specify the agent registration password and the Agent Registration Context Root. The password is stored in the AgentManager.properties file on the Tivoli Agent Manager and is also used to lock the agentTrust.jks certificate file.

Important: A detailed description of how to change the password is available in the corresponding Resource Manager Planning and Installation Guide. Because changing it involves redistributing the agentTrust.jks files to all Common Agents, we encourage you to use your own certificates from the beginning.

To control access from the Resource Manager to the Common Agent, certificates are used to make sure that only an authorized Resource Manager can install and run code on a computer system. This certificate is stored in agentTrust.jks and locked with the agent registration password.
Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process:
1. The Agent sends commands through its Host Bus Adapters (HBAs) and the Fibre Channel network to gather information about the switches.
2. The switch returns the information through the Fibre Channel network and the HBA to the Agent.
3. The Agent queries the endpoint devices by using RNID and SCSI protocols.
4. The Agent returns the information to the Manager over the IP network.
5. The Manager responds to the new information by updating the database and redrawing the topology map if necessary.
Figure 2-14 Data paths within the TotalStorage Productivity Center environment
These data paths can be used on demand to collect data; in that case, the queries are triggered by manual interaction with the system. In general, however, you will set up your TotalStorage Productivity Center environment to perform data collection at intervals that you define. Each time these jobs run, another record of information is stored in the associated database tables. This continuously gathered information is the basis for all reports, statistics, performance charts, and other output that TotalStorage Productivity Center can produce. The frequency of this data collection is a critical factor for the timeliness of the data displayed in other elements of the GUI. This brings up a crucial fact: elements shown in the explorer, including elements on which you might want to perform a specific action, might be missing until the next data collection updates the database contents. The Topology Viewer looks up information in the database instead of collecting it from the managed systems; the principle is that only the scheduled jobs, scans, or probes update the information.
There are only two exceptions:
- A subsystem can issue an indication to its associated CIM Agent to have it update specific information in the database (for example, the addition of a newly created LUN).
- A device managed through SNMP (for example, a switch) can send a trap indicating a change of which TotalStorage Productivity Center should be notified.
In this version of the product, the information is stored in a single database. There is a second database for Tivoli Agent Manager, which is described later in this book.
Discovery
Discovery is the first task to perform after the infrastructure is completely rolled out. Discovery is the process of using the established data paths to see which devices are out there; that is, to see whether you can reach all of your installed storage subsystems. Repeat the discovery after you install new resources in your environment. A successful discovery involves several protocols and standards, which are explained in this chapter. To name only a few, you must properly set up your IP, SLP, and SNMP infrastructure so that protocols and standards such as SLP, WBEM, and SMI-S can find your hardware devices. The following table gives an overview of the various types of discovery.
Type of discovery / Data source / Top-level entities:

- CIMOM discovery (data source: CIMOM): Fabric (switches), Storage subsystems, Tape subsystems
- Out-of-band discovery: Fabric (switches). A fabric discovery gets all the fabric information.
- Agent discovery (data source: names or agents): NetWare trees, Computers, Files, Clusters
Probing
Probing is the process of gathering more detailed information from within the subsystems that were discovered before. The goal is to get statistics and asset information about components such as disk controllers, hard disks, clusters, fabrics, storage subsystems, LUNs, and so forth. When you perform probes on a regular basis, you become aware of changes in your environment, such as the addition or deletion of LUNs on a storage subsystem.
Reporting
Within IBM TotalStorage Productivity Center there are several reporting categories. Several dozen predefined reports are available in the areas of Data, Disk, and Fabric. Users can also define their own reports and export them to formats such as CSV, HTML, and PDF. The scope of these reports includes:
- Asset Reporting
- Availability Reporting
- Capacity Reporting
- Usage Reporting
- Usage Violation Reporting
- Backup Reporting
In addition to the predefined reports, users can define and customize their own reports. It is important to understand that every user-defined report is based on the standard definitions built into the predefined reports. User-defined reports inherit most of their definitions, but every aspect can be changed. Each user-defined report can be scheduled to run on a regular basis.
Another important concept in IBM TotalStorage Productivity Center in the context of reporting is grouping. Administrators can define reports, monitoring jobs, and scheduled scans so that they run against predefined or self-defined groups. A group is a collection of entities of the same type. The concept of grouping appears in several other areas of IBM TotalStorage Productivity Center.
Monitoring
Monitoring can be broken up into three main tasks, depending on the area of TPC in which you are working:
- Pings: the simplest way to test connectivity to devices.
- Scans: used, for example, to collect data from file systems on a computer.
- Probes: the most intensive queries, which collect data from subsystems.
Profiles
Profiles are a means to control what the contents of your scans should be. There are many predefined profiles, which can be modified, and you can also create your own.
Alerting
Each of the functions in IBM TotalStorage Productivity Center can be described as active, meaning it interacts directly with storage devices, computers, or SAN devices. As the administrator, you can define which actions should be performed to raise an alert. The possibilities range from plain logging, SNMP traps, and e-mail to issuing a TEC event, among others.
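As a sketch of the e-mail destination, the following Python builds (but does not send) the kind of notification an alert might generate. The function name, addresses, and message format are invented for illustration and are not TPC's actual alerting API:

```python
from email.message import EmailMessage

def build_alert_email(device, condition, recipient):
    """Build (but do not send) an alert notification e-mail.
    All names here are illustrative, not TPC's actual API."""
    msg = EmailMessage()
    msg["Subject"] = f"TPC alert: {condition} on {device}"
    msg["From"] = "tpc-alerts@example.com"   # hypothetical sender
    msg["To"] = recipient
    msg.set_content(f"Condition '{condition}' was detected on {device}.")
    return msg

alert = build_alert_email("DS8000-01", "LUN deleted", "storage-admin@example.com")
print(alert["Subject"])   # TPC alert: LUN deleted on DS8000-01
```

In a real deployment the message would be handed to an SMTP server; here only the construction step is shown.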
Policy Management
The widespread capabilities you can achieve with policies include:
- Quota definitions
- Definition of which file types, sizes, and so forth are acceptable (constraint definitions)
- Automation of backup/archive operations on identified files
- Filesystem extension on filesystems exceeding the specifications
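At its core, a filesystem-extension policy reduces to a utilization check like the following Python sketch. The 80% limit and the mount point are arbitrary examples, not TPC defaults:

```python
import shutil

def exceeds_threshold(used, total, limit_pct):
    """Return True when utilization crosses the policy limit (illustrative)."""
    return (used / total) * 100.0 >= limit_pct

# Check a real mount point; the path and the 80% limit are examples only.
usage = shutil.disk_usage("/")
if exceeds_threshold(usage.used, usage.total, 80):
    print("policy violated: consider filesystem extension")
```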
Role                               Authorization level
Superuser                          Has full access to all IBM TotalStorage Productivity Center functions.
Productivity Center Administrator  Has full access to operations in the Administration section of the GUI.
Disk Administrator                 Has full access to IBM TotalStorage Productivity Center disk functions.
Disk Operator                      Has access to reports only for IBM TotalStorage Productivity Center disk functions. This includes reports on tape devices.
Fabric Administrator               Has full access to IBM TotalStorage Productivity Center for Fabric functions.
Fabric Operator                    Has access to reports only for IBM TotalStorage Productivity Center for Fabric functions.
Data Administrator                 Has full access to IBM TotalStorage Productivity Center for Data functions.
Data Operator                      Has access to reports only for Data functions.
1. The following is an example of role mappings. In the TPC GUI, navigate to the Role-to-Group-Mappings view, as shown in Table 2-1 on page 40. In this panel there is an entry for each of the roles listed in the table above. The administrator has to collect the user entities into specific groups within the operating system on which the TPC server is running.
2. Navigate to your Windows user administration, create the groups, and populate them with the users you want in these groups. Table 2-17 on page 42 gives an example of how entering these groups looks. Other platforms use their own tools to administer users and groups, but the process is the same.
3. Enter the group name you used here in the Role-to-Group Mappings view, shown in Figure 2-18.
Figure 2-18 Edit the Group name for the Data Operators sample
Chapter 3.
3.1 Configuration
You can install the components of IBM TotalStorage Productivity Center on a variety of platforms. However, when all components of the IBM TotalStorage Productivity Center suite are installed on the same system, the only common platforms for the managers are:
- AIX 5.3
- Red Hat Enterprise Linux v3.0 (Intel only)
- Windows 2003 Server
- Windows 2003 Enterprise Server
Note: Refer to the following Web site for updated support summaries, including specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html
3.2.1 Hardware
The following hardware is required:
For all servers:
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Disk (optional)
- Up to 80 GB of free space for databases
For Intel servers running Microsoft Windows 2003 Server or Microsoft Windows 2003 Enterprise Server:
- Dual Pentium 4 or Intel Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- 4 GB of available disk space
For Intel servers running Red Hat Enterprise Linux v3.0:
- Dual Pentium 4 or Intel Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- 4 GB of available disk space
- 500 MB free space in /tmp
- 1.3 GB free space in /opt
For servers running AIX:
- 1 GHz or faster processor
- 4 GB DRAM
- 500 MB free space in /tmp
- 1.3 GB free space in /opt
3.2.2 Database
This section describes the database requirements. At the time of writing, we referenced the following Web site:
http://www-1.ibm.com/support/docview.wss?rs=1133&uid=ssg1S1002813
For the Agent Manager repository, IBM DB2 UDB v8.2 with FixPak 7a (or higher) is supported for local and remote installation. For the IBM TotalStorage Productivity Center server repository, IBM DB2 UDB v8.2 with FixPak 7a (or higher) is supported. The Data agent can monitor these databases:
- IBM DB2 UDB v7.2, 8.1, and 8.2
- Microsoft SQL Server 7.0 and 2000
- Oracle 8i, 9i, and 10g
- Sybase
Note: We recommend using IBM DB2 UDB V8.2 with Fixpak 7a (or higher) as the single repository for the Agent Manager as well as the TotalStorage Productivity Center.
5. Identify any firewalls and obtain the required authorization to pass network traffic through them.
6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.
Port   Used for
9511   Registering agents; registering resource managers; providing configuration updates; renewing and revoking certificates
9512   Querying the registry for agent information; requesting ID resets; requesting updates to the certificate revocation list
9513   Requesting Agent Manager information; downloading the truststore file; alternate port for the agent recovery service
80     Agent recovery service (optional, insecure)
Table 3-2 lists the ports for the different components of IBM TotalStorage Productivity Center.
Table 3-2 TCP/IP ports used by IBM TotalStorage Productivity Center Component session initiator (server perspective) Inbound / Outbound (server perspective) Firewall port Inbound / Outbound (agent perspective) Session initiator (agent perspecti ve)
Data server Device server Common Agent Agent Manager Agent Manager Agent Manager Yes No Yes No
9549 9550 9510 9511 9512 9513 Inbound Outbound Both Outbound No Yes Yes Yes
Component
Firewall port
Common Agent (no access needed) Common Agent (no access needed) Agent Manager Recovery Service PUSH UNIX PUSH WINDOWS No Yes Yes Inbound Outbound Outbound
9514
Local to server
9515
Local to server
80 SSH(22) NetBIOS sessions service (139) RSH (514) REXEC (512) 601 High ports 3000+ TPCD Server 2078 427 162
Outbound Both
Yes No
PUSH UNIX PUSH UNIX PUSH UNIX PUSH ALL PUSH ALL
Both Both
No No
Both
No
Both Inbound
You can find the port numbers in use on your system by running the netstat -a command (or netstat -ano, which also shows the PID using each port), as shown in Figure 3-1 on page 48 and Figure 3-2 on page 49.
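If you prefer to test a specific port programmatically rather than scan netstat output, a short Python check along these lines works on any platform. The helper name is our own, not a TPC utility:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to connect; an accepted connection means something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Demonstration: hold a port open ourselves, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_in_use(port))          # True while the listener is open
listener.close()
```

In practice you would pass one of the TPC ports from Table 3-2 (9549, 9550, and so on) to verify it is free before installation.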
Granting privileges
Grant privileges to the user ID used to install IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, and IBM TotalStorage Productivity Center for Fabric. These user rights are governed by the local security policy and are not initially set as defaults for administrators, so they might not be in effect when you log on as the local administrator. If the TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can optionally set them for you by updating the local security policy settings. Alternatively, you can set them manually before performing the installation. To set these privileges manually, follow these steps:
1. Click Start -> Settings -> Control Panel.
2. Double-click Administrative Tools.
3. Double-click Local Security Policy. The Local Security Settings window opens.
4. Expand Local Policies, then double-click User Rights Assignments to see the policies in effect on your system. For each policy to be added to the user, perform the following steps:
   a. Highlight the policy to be selected.
   b. Double-click the policy and look for the user's name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are selected.
   c. If the user name does not appear in the list for the policy, add the policy to the user:
      i. In the Local Security Policy Setting window, click Add.
      ii. In the Select Users or Groups window, under the Name column, highlight the user or group.
      iii. Click Add to place the name in the lower window.
      iv. Click OK to add the policy to the user or group.
5. After you set these user rights, either by using the installation program or manually, log off the system and then log on again for the user rights to take effect.
6. Restart the installation program to continue with the TotalStorage Productivity Center installation.
Table 3-3 Services and service accounts Element Service name Service account Comment
DB2 Agent Manager IBM WebSphere Application Server V5 Tivoli Agent Manager IBM Tivoli Common Agent C:\Program Files\tivoli\ep IBM TotalStorage Productivity Center for Data server IBM WebSphere Application Server V5 Fabric Manager
db2admin LocalSystem
The account needs to be part of Administrators and DB2ADMNS. You need to set this service to start automatically, after the installation.
Common Agent Productivity Center for Data Productivity Center for Fabric
DB2 Agent Manager Common Agent Productivity Center for Data Productivity Center for Fabric
Note that there are several services for db2 IBM WebSphere Application Server V5 - Tivoli Agent Manager IBM Tivoli Common Agent C:\Program Files\tivoli\ca IBM TotalStorage Productivity Center Data Server IBM WebSphere Application Server V5 - Device Server
Chapter 4. TotalStorage Productivity Center installation on Windows 2003
- Streamlined installation and packaging
- Single user interface
  - File system, database, fabric, and storage management
  - Asset, capacity, and performance reporting
- Single database
  - Correlated host, fabric, and storage information
- Consistent reporting capabilities (scheduled, ad hoc)
- Data export capabilities (HTML, CSV)
- Single set of services for consistent administration and operations
  - Policy definitions, event handling, resource groups
- NEW: Storage topology viewer
- NEW: Role-based administration
- NEW: AIX support for TPC server
- NEW: Fabric performance management
- NEW: Tape discovery and asset/capacity reporting
- NEW: Performance management for DS4000
IBM TotalStorage Productivity Center provides an integrated storage infrastructure management solution that is designed to allow you to manage every point of your storage infrastructure, from the hosts, through the network and fabric, to the physical disks. It can help simplify and automate the management of devices, data, and storage networks. IBM TotalStorage Productivity Center V3.1 offers a simple, easy-to-install package with management server support added for IBM AIX V5.3, and integrates IBM DB2 as the management server database. The default installation directory is:
- c:\Program Files\IBM\... (for Windows)
- /opt/IBM/... (for UNIX and Linux)
You can change this path during installation setup. There are two types of installation: typical and custom.
CD2:
CD3: Data upgrade for all platforms. For information about the deployment of agents, refer to Chapter 6, Agent deployment on page 195.
4.2 Configuration
The TotalStorage Productivity Center components are:
- Data server
- Device server
- Database
- Agents
- GUI
- CLI
There are two supported environments: a one-server and a two-server environment. In either case, you must install the Data server and Device server on the same computer.
4.3.1 Hardware
For Windows and Linux on Intel, IBM eServer xSeries servers or other Intel technology-compatible platforms, the hardware requirements are:
- Dual Pentium 4 or Xeon 2.4 GHz or faster processors
- 4 GB of RAM
- Network connectivity
For AIX on IBM eServer iSeries and IBM eServer pSeries:
- Minimum 1.0 GHz processor
- 4 GB of RAM
- Network connectivity
Windows 2000 Advanced Server Windows 2000 Datacenter Windows Server 2003 Standard Edition 32-bit or 64-bit Windows Server 2003 Enterprise Edition 32-bit or 64-bit Red Hat Enterprise Linux AS Version 3.0 IBM eServer xSeries Red Hat Enterprise Linux AS Version 3.0 IBM eServer pSeries on POWER5 Red Hat Enterprise Linux AS Version 3.0 IBM eServer iSeries on POWER5 United LInux 1.0 IBM eServer xSeries United LInux 1.0 IBM eServer zSeries SUSE LINUX Enterprise Server 8 IBM eServer pSeries on POWER4 zSeries
NO NO YES
NO NO YES
YES NO YES
YES
YES
YES
YES
YES
YES
NO
NO
NO
NO
NO
NO
NO NO NO
NO NO NO
NO NO NO
Platform
GUI
Agent Manager v1.2 (install fix pack 2, even though you can optionally use the Agent Manager you installed with v2.3)
SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5 iSeries on POWER5 zseries (Data agent only) IBM AIX 5.1 (32-bit) IBM AIX 5.1 (64-bit) IBM AIX 5L (32-bit) IBM AIX 5L (64-bit) IBM AIX 5.2 (32-bit) IBM AIX 5.2 (64-bit) IBM AIX 5.3 (32 bit)
NO
NO
NO
YES YES YES YES YES YES YES (32- and 64-bit)
YES YES NO
Table 4-2 shows the platforms supported for installing and deploying Data agents and Fabric agents. Important: Data agents and Fabric agents at version 2.x are supported by IBM TotalStorage Productivity Center V3.1 managers.
Table 4-2 Platform support for Data agent and Fabric agent Platform Data agent and Fabric agent
Windows 2000 Advanced Server Windows 2000 Datacenter Windows Server 2003 Standard Edition 32-bit or 64-bit Windows Server 2003 Enterprise Edition 32-bit or 64-bit Red Hat Enterprise Linux AS Version 3.0 IBM eServer xSeries
Platform
Red Hat Enterprise Linux AS Version 3.0 IBM eServer pSeries on POWER5 Red Hat Enterprise Linux AS Version 3.0 IBM eServer iSeries on POWER5 United LInux 1.0 IBM eServer xSeries United LInux 1.0 IBM eServer zSeries SUSE LINUX Enterprise Server 8 IBM eServer pSeries on POWER4 zSeries SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5 iSeries on POWER5 zseries (Data agent only) SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5 iSeries on POWER5 zseries (Data agent only) IBM AIX 5.1 (32-bit) IBM AIX 5.1 (64-bit)
Data agent (ALL) Fabric agent (xSeries only) YES with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES with AIX 5200-02 maintenance level YES in compatibility mode with AIX 5200-02 maintenance level YES with AIX 5300-01 maintenance level and APAR IY70336 YES YES Only Data agent
The Agent Manager repository is supported only on DB2 Enterprise Server Edition version 8.2 with fix pack 7a or higher. Only one database instance is created for IBM TotalStorage Productivity Center on DB2; the default database name for the repository is TPCDB. The Agent Manager repository uses its own database, whose default name is IBMCDB. The Data agent can monitor these databases:
- DB2 7.2 (32-bit only)
- DB2 8.1 (32- or 64-bit)
- DB2 8.2 (32- or 64-bit)
- Microsoft SQL Server 7.0
- Microsoft SQL Server 2000
- Oracle 8i
- Oracle 9i
- Oracle 10g
Sybase support is not included in TotalStorage Productivity Center V3.1.
Note: We recommend that you install a single instance of IBM DB2 UDB Enterprise Server Edition version 8.2 with fix pack 7a or higher as your repository for both the Agent Manager and IBM TotalStorage Productivity Center.
2. Click Properties. The System Properties panel is displayed as shown in Figure 4-2. 3. On the Computer Name tab, click Change.
4. Enter the host name in the Computer name field. Click More to continue (see Figure 4-3).
5. In the next panel, verify that Primary DNS suffix field displays a domain name. Click OK (see Figure 4-4 on page 64).
6. If you made any changes, you might need to restart your computer (see Figure 4-5).
On the WINS tab, select Enable NetBIOS over TCP/IP and click OK (Figure 4-7).
4.5.4 Create Windows user ID to install Device server and Data server
To install the Device server and Data server, you must have a Windows user ID with all the required rights. We created a unique user ID, as described in Table 4-5 on page 67. It is good practice to use the worksheets in Appendix A, Worksheets on page 519 to record the user IDs and passwords used during the installation of TotalStorage Productivity Center.
Chapter 4. TotalStorage Productivity Center installation on Windows 2003
- Installing DB2 and Agent Manager: Table 4-4 on page 66
- Installing Device server or Data server: Table 4-5 on page 67
- Installing Data agent or Fabric agent: Table 4-6 on page 67
- DB2 administration server user: Table 4-7 on page 68
- Certificate authority password: Table 4-8 on page 69
- Common agent registration: Table 4-9 on page 69
- Common agent service logon user ID and password: Table 4-10 on page 70
- Host authentication password: Table 4-11 on page 70
- NAS filer login user ID and password: Table 4-12 on page 70
- Resource manager registration user ID and password: Table 4-13 on page 70
- WebSphere Application Server administrator user ID and password: Table 4-14 on page 71
Important: To verify the valid characters you can use for these user IDs and passwords, refer to 11.5, Valid characters for user ID and passwords on page 511. Table 4-4 through Table 4-14 on page 71 contain information about the user IDs and passwords used during the installation of the TotalStorage Productivity Center prerequisites and components.
Table 4-4 Installing DB2 and Agent Manager item Installing DB2 and Agent Manager OS description created when used when
All
group
Administr ators
used to log on
Administrator / password
Table 4-5 Installing Device server or Data server item Installing Device server or Data server OS description created when used when
All
Add user ID to DB2 Admin group or assign the user rights: - Log on as a service - Act as part of the operating system - Adjust memory quotas for a process - Create a token object - Debug programs - Replace a process level token. On Linux or Unix give root authority
user ID
Has to be created before starting Device server and Data server installation.
group
password
Administr ators
Table 4-6 Installing Data agent or Fabric agent item Installing Data agent or fabric agent OS description created when used when
All
User rights: - act as part of the operating system - Log on as a service. On Linux or Unix, give root authority
user ID
group
Administr ators
To install a GUI or CLI you do not need any particular authority or special user ID.
Table 4-7 DB2 administration server item DB2 administration server user OS description created when used when
All
Used to run the DB2 administration server on your system. Used by the DB2 GUI tools to perform administration tasks. See rules below.
user ID
group
password
new user ID
new password
db2tpc / db2tpc
User IDs cannot begin with: IBM, SQL, or SYS. User IDs cannot include accented characters. UNIX users, groups, and instance names must be lowercase. Windows 32-bit users, groups, or instance names can be any case. DB2 creates a user group with the following administrative rights:
- Act as a part of an operating system
- Create a token object
- Increase quotas
- Replace a process-level token
- Log on as a service
Note: Adding the user ID used to install TotalStorage Productivity Center to the DB2 Admin group gives the user ID the necessary administrative rights.
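The naming rules above are easy to pre-check before you create the account. The following Python sketch encodes the documented rules; it is our own illustration, not DB2's validator, and covers only the rules listed here:

```python
def valid_db2_userid(name, platform="unix"):
    """Check a candidate user ID against the DB2 naming rules above.
    A sketch of the documented rules, not DB2's own validation."""
    if any(name.upper().startswith(p) for p in ("IBM", "SQL", "SYS")):
        return False                     # forbidden prefixes
    if not name.isascii():
        return False                     # no accented characters
    if platform == "unix" and name != name.lower():
        return False                     # UNIX names must be lowercase
    return True

print(valid_db2_userid("db2tpc"))        # True
print(valid_db2_userid("sysadmin"))      # False: begins with SYS
```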
Table 4-8 Certificate authority password item Certificate authority password OS description created when used when
All
This password locks the CARootKeyRing.jks file. Specifying a value for this password is optional. You need to specify this password only if you want to be able to unlock the certificate authority files.
group
password
tpctpc
Important: Do not change the Agent Registration password under any circumstances. Changing this password will render the certificates unusable.
Table 4-9 Common agent registration passwords item Common agent registration OS description created when used when
All
This is the password required by the common agent to register with the Agent Manager
user ID
Used during common agent, Data agent and Fabric agent installation
ITSOs user ID and password
group
password
changeMe
Table 4-10 Common agent service logon user ID and password item Common agent service logon user ID and password OS description created when used when
Windows
This creates a new service account for the common agent to run under.
user ID
Specified when you install Data agent or Fabric agent (only local).
password ITSOs user ID and password
group
Administra tors
tpcadmin / tpcadmin
Table 4-11 Host authentication password item Host authentication password OS description created when used when
All
Used when you install Fabric agent, to communicate with the Device server.
ITSOs user ID and password
group
user ID
password
must be provided
Table 4-12 NAS filer login user ID and password item NAS filer login user ID and password OS description created when
tpctpc
used when
Windows
group
Table 4-13 Resource manager registration user ID and password item Resource manager registration user ID and password OS description created when used when
ALL
Used when Device server and Data server have to register to Agent Manager
ITSOs user ID and password
group
manager / password
Table 4-14 WebSphere Application Server administrator user ID and password item WebSphere Application Server administrator user ID and password OS description created when used when
ALL
group
tpcadmin / tpcadmin
2. The next panel allows you to select the DB2 product to be installed. Click Next to proceed as shown in Figure 4-9 on page 73.
3. The DB2 Setup wizard panel is displayed, as shown in Figure 4-11 on page 74. Click Next to proceed.
4. Read the license agreement and select I accept the terms in the license agreement (Figure 4-12).
5. To select the installation type, accept the default of Typical and click Next to continue (see Figure 4-13 on page 75).
6. Accept the defaults and proceed with Install DB2 Enterprise Server Edition on this computer (see Figure 4-14). Click Next to continue.
7. The panel shown in Figure 4-15 on page 76 shows the default drive and directory to be used as the installation folder. You can change these or accept the defaults, then click Next to continue.
8. Set the user information for the DB2 Administration Server; choose the domain of this user. If it is a local user, leave the field blank. Type a user name and password of the DB2 user account that you want to create (Figure 4-16 on page 77). You can refer to Table 4-7 on page 68. DB2 creates a user with the following administrative rights: Act as a part of an operating system. Create a token object. Increase quotas. Replace a process-level token. Log on as a service.
9. Accept the defaults in the panel shown in Figure 4-17, and click Next to continue.
10.Click OK when the warning window shown in Figure 4-18 on page 78 opens.
11.In the Configure DB2 instances panel, accept the default and click Next to continue (see Figure 4-19).
12.Accept the defaults, as shown in Figure 4-20 on page 79. Verify that Do not prepare the DB2 tools catalog on this computer is selected. Click Next to continue.
13.In the panel shown in Figure 4-21, click Defer the task until after installation is complete and then click Next to continue.
14.The panel shown in Figure 4-22 on page 80 is presented, click Install to continue.
The DB2 installation proceeds and you see a progress panel similar to the one shown in Figure 4-23.
15.When the installation completes, click Finish, as shown in Figure 4-24 on page 81.
16.If you see the DB2 product updates panel shown in Figure 4-25, click No because you have already verified that your DB2 version is at the latest recommended and supported level for TotalStorage Productivity Center, as mentioned in Databases supported on page 61.
17. Click Exit First Steps (Figure 4-26 on page 82) to complete the installation.
Figure 4-28 on page 83 shows the DB2 Windows services created at the end of the installation.
Note: Log on with a user ID that has administrative authority on Windows and root authority on UNIX or Linux. 2. The Installation Wizard starts; you see a panel similar to the one in Figure 4-29 on page 84.
4. Read the license agreement and select I accept the terms of the license agreement. Click Next to continue (see Figure 4-31).
5. Figure 4-32 on page 85 shows the Directory Name for the installation. Click Next to accept the default or click Browse to install to a different directory.
6. The Agent Manager Registry information panel is displayed, as shown in Figure 4-33. On this panel specify the type of database, Database Name or Directory and type of Database Connection. You can accept the defaults and click Next to continue.
7. In the next panel, shown in Figure 4-34, enter the following database information:
- Database Software Directory: Enter the directory where DB2 is installed on your system. The default directory is C:\Program Files\IBM\SQLLIB (Microsoft Windows) or /opt/IBM/SQLLIB (UNIX or Linux).
- Database User Name: Record this name also in Table A-4 on page 521.
- Database Password: Record this password also in the tables in Appendix A, Worksheets on page 519.
- Host name of the Database Server: Record this host name in the tables in Appendix A, Worksheets on page 519.
- Database Port: This port number is required for a remote database.
After entering the information, click Next to continue.
8. Enter the following information in the window in Figure 4-35 on page 87:
- Host name: Alias or fully qualified host name. Review the preinstallation task mentioned in 4.5.1, Verify primary domain name systems on page 62. If you specify an IP address, you will see the warning panel shown in Figure 4-36 on page 88.
- Application Server Name for Agent Manager: Accept the default name or enter a different name.
- Registration port: The default port is 9511 for server-side SSL. Refer to 3.3.1, TCP/IP ports used on page 46.
- Secure port: The default port is 9512 for client authentication (two-way SSL). Refer to 3.3.1, TCP/IP ports used on page 46.
- Public Port and Secondary Port for the Agent Recovery Service: The public communication port default is 9513. Refer to 3.3.1, TCP/IP ports used on page 46. Do not use port 80 for the agent recovery service.
- Start the Agent Manager after the installation is complete.
- Autostart the Agent Manager each time the system restarts.
It is recommended that you select both Start agent manager and Autostart the agent manager. To check for other applications that are using port 80, run the netstat -an command and look for port 80 in the listening state. If an application is using port 80, stop that application and then continue with the installation of Agent Manager.
Note: If you want the Agent Recovery Service to run, you must stop any service using port 80. If any service is using port 80, the Agent Recovery Service installs, but does not start.
9. If you specify an IP address instead of a fully qualified host name for the WebSphere Application Server, you see the panel shown in Figure 4-36. We recommend you click the Back button and specify a fully qualified host name.
10.In the Security Certificates panel (see Figure 4-37), it is highly recommended that you accept the defaults to generate new certificates for a secure environment. Click Next to continue.
11.In the panel shown in Figure 4-38, specify the Security Certificate settings. To create Certificates, you must specify a Certificate Authority Password. You must specify this password to look at the certificate files after they are generated. Make sure you record this password in the worksheets in Appendix A, Worksheets on page 519. The second password specified in this panel is the Agent Registration password. The default Agent Registration password is changeMe. We recommend you specify a unique password and record it in the worksheets provided in Appendix A, Worksheets on page 519. After entering the passwords click Next to continue.
12.The User Input Summary panel is displayed (see Figure 4-39 on page 90). If you want to change any settings, click Back and return to the window where you set the value. If you do not need to make any changes, click Next to continue.
13.Review the summary information panel (see Figure 4-40) and click Next to continue.
The Agent Manager installation starts and you see several messages similar to those in Figure 4-41 on page 91 and Figure 4-42 on page 91.
14.The Summary of Installation and Configuration Results panel is displayed in Figure 4-43 on page 92. Verify that the Agent Manager has successfully installed all of its components. Review the panel and click Next to continue.
15.The last panel (Figure 4-44) shows that the Agent Manager has been successfully installed. Click Finish to complete the Agent Manager installation.
Verify that the ARS.version field shows the level you have installed (in our installation it is 1.2.2.2) and that at the end you see the message Health Check passed as shown in Figure 4-45.
After the completion of the Agent Manager installation, you can verify the connection to the database (see Figure 4-46 on page 94). From a command prompt, enter:
db2cmd db2 connect to IBMCDB user db2tpc using db2tpc
3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 4-48 on page 95).
4. Figure 4-49 on page 96 shows how to select typical or custom installation. You have the following options:
- Typical installation: Allows you to install all of the components on the same computer by selecting Servers, Agents, and Clients.
- Custom installation: Allows you to install each component separately.
- Installation licenses: This selection installs the TotalStorage Productivity Center licenses. The TotalStorage Productivity Center license is on the CD. You only need to run this option when you add a license to a TotalStorage Productivity Center package that has already been installed on your system. For example, if you have installed the TotalStorage Productivity Center for Data package, the license is installed automatically when you install the product. If you later decide to enable TotalStorage Productivity Center for Fabric, run the installer and select Installation licenses. This option allows you to install the license key from the CD without installing the IBM TotalStorage Productivity Center for Fabric product itself.
In this chapter, we document the Custom installation. Click Next to continue.
5. In the Custom installation, you can select all the components in the panel shown in Figure 4-50; this is the recommended installation scenario. In our scenario, we show the installation in stages. As the first step, we select the option Create database schema and click Next to proceed (see Figure 4-50).
6. To start the Database creation, you must specify a DB2 user ID. We suggest you use the same DB2 user ID you created before (see Table 4-7 on page 68). Click Next, as shown in Figure 4-51 on page 97.
7. Enter your DB2 user ID and password again (see Table 4-7 on page 68). Do not accept the default of Use Local Database; click Create local database. By default, a database named TPCDB is created. Click Next to continue (Figure 4-52).
8. The next panel allows you to change the default space assigned to the database. At this time, you do not need to change these values and can accept the defaults. You have to specify the schema name; in our installation we chose TPC. For better performance, we recommend that you:
- Allocate the TEMP DB on a different physical disk than the TotalStorage Productivity Center components.
- Create larger Key and Big databases.
9. Select System managed (SMS) and click OK to proceed (see Figure 4-53 on page 98). To understand the advantages of an SMS database versus a DMS database, refer to 11.1, Selecting an SMS or DMS tablespace on page 510.
Figure 4-54 is the Database schema installation progress panel. Wait for the installation to complete.
10.Upon completion, the successfully installed panel is displayed. Click Finish to continue (Figure 4-55 on page 99).
Attention: Do not edit or modify anything in the DB2 Control Center. Doing so could cause serious damage to your tablespace. Use the DB2 Control Center only to browse your configuration.
Log files
Check for errors and java exceptions in the log files at the following locations: Install<timestamp>.log file from system temp directory or <InstallLocation>. <InstallLocation>\dbschema\log
Look for dbschema.out, dbschema.err, and DBSchema.log.
- <InstallLocation>\log
- <InstallLocation>\TPC.log
Check for the success message at the end of the INSTALL<timestamp>.log file to confirm a successful installation.
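The log checks above can be sketched as a small shell helper. This is only a sketch: the log names and locations come from the text above, and the exact wording of the success message may differ between components.

```shell
# A small helper (sketch) for the log checks described above: a log is
# considered good if it contains no exceptions and reports a successful
# install. The exact message text may differ between components.
check_tpc_log() {
  log="$1"
  if grep -qi "exception" "$log"; then
    echo "FAIL: exceptions found in $log"
    return 1
  fi
  if grep -qi "successful" "$log"; then
    echo "OK: $log reports a successful install"
    return 0
  fi
  echo "WARN: no success message in $log"
  return 1
}
```

For example, run `check_tpc_log "<InstallLocation>/log/TPC.log"` after the schema installation completes.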
Preinstallation tasks
To install the Data Server and Device Server components, you must log on to Windows 2003 with a user that has the following rights:
- Log on as a service
- Act as part of the operating system
- Adjust memory quotas for a process
- Create a token object
- Debug programs
- Replace a process-level token
Be certain the following tasks are completed:
- We recommend that you create a user ID for installation. We created the user ID TPCADMIN (refer to Table 4-5 on page 67).
- The database schema must be installed successfully before you start the Data Server installation.
- An accessible Agent Manager must be available before you start the Device Server installation.
- The Data Server must be successfully installed prior to installing the GUI.
- The Device Server must be successfully installed prior to installing the CLI.
Custom installation
To perform a custom installation, follow these steps:
1. Start the TotalStorage Productivity Center installer.
2. Select the components you want to install. In our scenario, we select the four server components, as shown in Figure 4-57 on page 101.
Tip: We recommend that you install the Data Agent and Device Agent in a separate step. If you install all the components at the same time and one fails for any reason (for example, space or passwords), the installation suspends and a rollback occurs, uninstalling all the previously installed components.
Specify the DB2 user ID and password defined in Table 4-7 on page 68 in the panel in Figure 4-58 and click Next.
3. Enter the DB2 user ID and password and click Use local database. Click Next to continue (Figure 4-59 on page 102).
4. In the panel in Figure 4-60 on page 103, enter the following information:
- Data Server Name: Enter the fully qualified host name of the Data Server.
- Data Server Port: Enter the Data Server port. The default is 9549. See 3.3.1, TCP/IP ports used on page 46 for more details.
- Device Server Name: Enter the fully qualified host name of the Device Server.
- Device Server Port: Enter the Device Server port. The default is 9550.
- TPC Superuser: Enter the Administrators group for the TPC Superuser. We created the user ID TPCADMIN and added it to the existing Administrators group. See 4.5.5, User IDs and password to be used and defined on page 66 for more details.
- Host Authentication Password: This is the password used by the Fabric agents to communicate with the Device Server. Remember to record this password. See Table 4-11 on page 70.
- WebSphere Application Server admin ID and Password: You can use the TPC Superuser here. In our case we used TPCADMIN. See Table 4-14 on page 71 for further details.
Click Next to continue.
5. In the panel shown in Figure 4-61 on page 104, enter the Agent Manager information. You must specify the following:
- Hostname or IP address: Fully qualified name or IP address of the Agent Manager server. For further details about the fully qualified name, refer to 4.5.1, Verify primary domain name systems on page 62.
- Port (Secured): Port number of the Agent Manager server. If acceptable (not in use by any other application), use the default port 9511. See 3.3.1, TCP/IP ports used on page 46 for further details.
- Port (Public): The public communication port. If acceptable (not in use by any other application), use the default of 9513.
- User ID: The user ID used to register the Data Server or Device Server with the Agent Manager. The default is manager. You previously specified this user ID during the Agent Manager install (see Figure 4-38 on page 89).
- Password: The password used to register the Data Server or Device Server with the Agent Manager. The default is password. You previously specified this password during the Agent Manager install (see Figure 4-38 on page 89).
- Password - common agent registration password: The password used by the common agent to register with the Agent Manager. This was specified when you installed the Agent Manager. The default is changeMe. See Table 4-9 on page 69 for further details.
Click Next to continue.
6. The Summary information panel is displayed. Review the information, then click Install to continue (see Figure 4-62).
The installation starts. You might see several messages related to Data Server installation similar to Figure 4-63 on page 105.
You might also see several messages about the Device Server installation, as shown in Figure 4-64.
7. After the GUI and CLI installation messages, you see the summary information panel (Figure 4-65 on page 106). Read and verify the information and click Finish to complete the installation.
Verifying installation
At the end of the installation, the Windows Services panel shows that the Data Server and Device Server services (shown in Figure 4-66) have been installed.
Check that the Administrators group contains the newly created TPC user ID. The user ID TSRMsrv1 is created by default by the install program.
The INSTALL<timestamp>.log file should not contain any exceptions, and it should show install successful at the bottom. The Server_000001.out file indicates that the server is ready to accept connections.
4.8 Configuring the GUI for Web Access under Windows 2003
You can configure the TotalStorage Productivity Center V3.1 user interface to be accessible from a Web browser. After this is done, a user can access the TotalStorage Productivity Center GUI by browsing to the URL. The TotalStorage Productivity Center GUI applet is downloaded into the browser and executed. It looks and acts exactly as though you were interacting with the native server. You can install the interface on any of the TotalStorage Productivity Center servers, management consoles, or workstations.
3. Highlight the entry for Application Server. Click Details to continue (Figure 4-68).
4. Check the Internet Information Services (IIS) box. Click OK to continue (Figure 4-69 on page 109).
5. You are returned to the panel in Figure 4-68 on page 108. Click Next to install IIS. An Installation Progress window shows the installation progress. Have your Windows Server 2003 CD-ROM available or have a 2003 I386 directory installed on a hard disk (see Figure 4-70). If you have Service Pack 1 installed, you also need to have the Service Pack 1 CD-ROM or SP1 I386 directory available as well.
6. When the installation has completed successfully, click the Finish button to complete the Windows Component Wizard (see Figure 4-71 on page 110).
7. Cancel the Add or Remove Programs dialog box. You now have IIS installed.
2. When the Internet Information Services (IIS) Manager opens, expand the tree in the Explorer panel to display Web Sites and then Default Web Site. Right-click the Default Web Site name, and click Properties in the context menu (see Figure 4-73 on page 111).
3. The Default Web Site Properties panel opens. There are three tabs that you must configure: Web Site tab On the Web Site tab (see Figure 4-74 on page 112), you can change the Description to TPC V3.1 GUI. Attention: Because Agent Manager is configured to use port 80 for the Agent Recovery service, you must change the default port to something other than 80. Port 8080 is a good alternative.
Home Directory tab On the Home Directory tab (Figure 4-75), change the Local Path to the GUI directory for TotalStorage Productivity Center. The default is C:\Program Files\IBM\TPC\gui.
Documents tab On the Documents tab (see Figure 4-76), click the Add button, and add TPCD.html to the list. Highlight the new name in the list, and click Move Up to move TPCD.html to the top of the list.
4. Click OK to save these changes. Close the IIS Manager by clicking the X in the upper right corner of the panel.
If you start the Web browser on your TotalStorage Productivity Center server machine, you can use localhost rather than the network name:
http://localhost
2. The IBM TotalStorage Productivity Center for Data panel is displayed (see Figure 4-77). This is the anchor page for the TotalStorage Productivity Center GUI Java applet, and it must remain open as long as the TotalStorage Productivity Center GUI is running.
3. A security certificate approval panel is displayed (see Figure 4-78 on page 115). Depending on network transmission rates, it could take a few minutes for the panel to open. Click Yes to accept the certificate. (If you click No, the TotalStorage Productivity Center GUI does not load, and you must relaunch the TotalStorage Productivity Center GUI URL to restart.)
At this point, the Java applet for the TotalStorage Productivity Center GUI downloads. The applet JAR file is 15.6 MB and can take some time to load into your browser the first time. Be patient, because no progress bar is displayed while it downloads. After the applet JAR file has been loaded into your browser, it remains in your browser cache until you clear it. Subsequent starts of the TotalStorage Productivity Center GUI load much faster.
4. After the applet has loaded, it launches the TotalStorage Productivity Center GUI. In the center of the GUI, the Sign On panel opens. It should be prefilled with the server address and access port (9549 for TotalStorage Productivity Center V3.1). Enter your TotalStorage Productivity Center Server user ID and password, and click OK to continue (see Figure 4-79).
The TotalStorage Productivity Center GUI is displayed (see Figure 4-80 on page 116), and has all the functionality of the native GUI on the TotalStorage Productivity Center Server.
Chapter 5.
- Streamlined Installation and Packaging
- Single User Interface
  - File System, Database, Fabric and Storage management
  - Asset, Capacity and Performance Reporting
- Single Database
  - Correlated host, fabric, storage information
- Consistent Reporting Capabilities (Scheduled, Ad-Hoc)
- Data Export Capabilities (HTML, CSV)
- Single set of services for consistent administration and operations
  - Policy Definitions, Event Handling, Resource Groups
- NEW: Storage topology viewer
- NEW: Role based administration
- NEW: AIX support for TPC server
- NEW: Fabric performance management
- NEW: Tape discovery and asset/capacity reporting
- NEW: Performance Management for DS4000
TotalStorage Productivity Center provides an integrated storage infrastructure management solution that is designed to allow you to manage every point of your storage infrastructure, including hosts, network and fabric, and physical disks. It can help simplify and automate the management of devices, data, and storage networks. TotalStorage Productivity Center V3.1 offers a simple, easy-to-install package with management server support added for IBM AIX V5.3, integrating IBM DB2 as the management server database. The default installation directory on AIX is /opt/IBM/TPC; you can change this path during installation setup. There are two types of installation: typical and custom.
CD2:
CD3: Data Upgrade for all supported platforms. If you need to perform a remote installation of Data Agents on an Operating System other than AIX, you need to copy the contents of both CD1 and CD2 to one location.
5.2 Configuration
The IBM TotalStorage Productivity Center components are:
- Data Server
- Device Server
- DB2
- Agent Manager
- Graphical User Interface (GUI)
- Command Line Interface (CLI)
The Data Server and Device Server must be installed on the same server. You can install all the components on one server, or you can use a two-server configuration. In a two-server configuration, you would install the components as follows:
Server 1:
- DB2
- Agent Manager
Server 2:
- Device Server
- Data Server
- GUI
- CLI
- IBM AIX 5.1 (32-bit): YES
- IBM AIX 5.1 (64-bit): YES
- IBM AIX 5L (32-bit): YES
- IBM AIX 5L (64-bit): YES
- IBM AIX 5.2 (32-bit): YES
- IBM AIX 5.2 (64-bit): YES
- IBM AIX 5.3 (32-bit): YES (32- and 64-bit)
Table 5-2 shows the platforms supported for installing and deploying the Data agent and Fabric agent. Important: Data agents and Fabric agents at the 2.x version are supported by IBM TotalStorage Productivity Center V3.1 managers.
Table 5-2 Platform support for Data agent and Fabric agent
Platform | Data agent and Fabric agent
Windows 2000 Advanced Server Windows 2000 Datacenter Windows Server 2003 Standard Edition
Windows Server 2003 Enterprise Edition Red Hat Enterprise Linux AS Version 3.0 IBM eServer xSeries Red Hat Enterprise Linux AS Version 3.0 IBM eServer pSeries on POWER5 Red Hat Enterprise Linux AS Version 3.0 IBM eServer iSeries on POWER5 United Linux 1.0 IBM eServer xSeries United Linux 1.0 IBM eServer zSeries SUSE LINUX Enterprise Server 8 IBM eServer pSeries on POWER4 zSeries SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5 iSeries on POWER5 zSeries (Data agent only) SUSE LINUX Enterprise Server 9 IBM eServer xSeries, pSeries on POWER5 iSeries on POWER5 zSeries (Data agent only) IBM AIX 5.1 (32-bit) IBM AIX 5.1 (64-bit)
YES YES Only Data agent Only Data agent Only Data agent Only Data agent Only Data agent Only Data agent
Data agent (ALL) Fabric agent (xSeries only) YES with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES in compatibility mode with AIX 5100-05 maintenance level YES with AIX 5200-02 maintenance level YES in compatibility mode with AIX 5200-02 maintenance level YES with AIX 5300-01 maintenance level and APAR IY70336 YES YES Only Data agent
The Agent Manager repository is supported only on the following database: DB2 Enterprise Server Edition Version 8.2 with fix pack 7a or higher. Only one database instance is created for IBM TotalStorage Productivity Center on DB2. The default database name for the repository is TPCDB. The Agent Manager repository uses its own database; the default name of this database is IBMCDB.
or
lsnamsv -C
3. The file should contain a reference to one or more name servers, as well as the name of the DNS domain for the host. A sample file may look like this:
nameserver 1.2.3.4
domain mycompany.com
4. If the file does not contain those two references, edit the file to insert the IP address of a DNS server and the domain name that should be used by the host. Save the file when you have finished your edits. Alternatively, you can use SMIT to accomplish this. From the main SMIT menu, choose Communications Applications and Services, then TCP/IP, then Minimum Configuration & Startup. From the menu that appears, select the appropriate network interface. Then, in the NAMESERVER options, edit the properties for Internet ADDRESS (dotted decimal) and DOMAIN name. Change the START Now option to Yes and press Enter. When the changes have been committed and the OK status appears, press F10 to exit SMIT.
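The check in steps 3 and 4 can also be scripted. This is a sketch: the function takes the resolver file path as an argument (normally /etc/resolv.conf) and only verifies that the two references described above are present.

```shell
# Sketch: verify that a resolver file names at least one DNS server and a
# domain (or search) entry, as described in steps 3 and 4 above.
check_resolv() {
  if grep -q '^nameserver' "$1" && grep -qE '^(domain|search)' "$1"; then
    echo "resolver configured"
  else
    echo "resolver incomplete"
  fi
}
```

For example, `check_resolv /etc/resolv.conf` prints "resolver configured" when both entries are present.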
agentTrust.jks
tpctpc
Table 5-4 shows the user IDs and passwords we use for the various components of TotalStorage Productivity Center. You may wish to refer to this table, or the worksheets in the Appendix as you read the remainder of this chapter.
Table 5-4 User IDs and passwords used in our environment
Element | Default user ID | Our user ID | Our password
- DB2 DAS user
- DB2 instance owner
- DB2 fenced user
- Resource Manager
- Agent Manager
- Common Agent
1 - This user ID is created by the installer and cannot be changed. 2 - Despite what it says, we recommend that you do NOT change this password. 3 - This user ID is created manually.
Some of these user IDs and passwords will be created by the installers, but there is one user ID that we recommend you create, the TotalStorage Productivity Center administrator. The next section discusses the creation of this user.
add to the group. Move the cursor to the desired user ID and press F7 to select that user ID. Multiple users can be selected by moving the cursor to each desired user ID and pressing F7 while the user ID is highlighted. When you have finished selecting user IDs to add to the group, press Enter. You are returned to the Change Group Attributes menu.
9. Press Enter to commit the changes you have made to the adm group. When the OK status appears, press F10 to exit SMIT (F12 if using the graphical version).
10. Set the initial password for the user ID you just created by typing passwd username at a command prompt, where username is the user ID you just created. Follow the on-screen prompts to enter the new password two times.
Note: After DB2 is installed and you have applied the latest fix pack, you must add the root user to the group db2grp1. The group is created automatically during the installation of DB2. Adding the root user to this group allows the root user to source the instance owner's environment prior to installing TotalStorage Productivity Center. Refer to 5.7.2, Add the root user to the DB2 instance group on page 145 for details.
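The SMIT steps above also have a command-line equivalent. This is a sketch using AIX-only commands (mkuser, chgrpmem); the user ID tpcadmin and the group adm are assumptions from our environment. The function is guarded so that on a non-AIX system, or as a non-root user, it only prints the commands it would run.

```shell
# Sketch: create the TPC administrator ID and add it to a group from the
# command line instead of SMIT. AIX-specific commands, guarded so the
# function is harmless to run elsewhere.
create_tpc_admin() {
  user="$1"; group="$2"
  if [ "$(uname)" = "AIX" ] && [ "$(id -u)" -eq 0 ]; then
    mkuser "$user"                      # create the user ID
    chgrpmem -m + "$user" "$group"      # add it to the group
    passwd "$user"                      # set the initial password (prompts)
  else
    echo "not AIX/root; would run: mkuser $user; chgrpmem -m + $user $group; passwd $user"
  fi
}
create_tpc_admin tpcadmin adm
```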
Table 5-5 TCP ports used by TotalStorage Productivity Center components
Component | Port number | Inbound to or outbound from the server | Inbound to or outbound from the agent
Remote agent deployment for UNIX agents | 22 (SSH) | Outbound | Both
Agent Manager recovery service (optional) or Web GUI (optional) | 80 (HTTP) | Inbound | Outbound
Remote agent deployment for Windows agents | 139 (NetBIOS) | Outbound | Both
Remote agent deployment for UNIX agents | 512 (REXEC) | Outbound | Both
Remote agent deployment for UNIX agents | 514 (RSH) | Outbound | Both
Remote agent deployment for UNIX agents | 601 | Inbound | Outbound
Remote agent deployment for all agents; Graphical User Interface | 2078 | Inbound | Outbound
Remote agent deployment for all agents; Common Agent | | Inbound | Both
Agent Manager | | |
Agent Manager | | |
Agent Manager | | |
Common Agent (no access needed) | | |
Common Agent (no access needed) | | |
Data Server; Web GUI | 9549 | Both | Both
Device Server | 9550 | Both | Both
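Before installation, you can pre-check whether the default ports from Table 5-5 are free on your server. A sketch: capture the output of `netstat -an` to a file, then scan that file for LISTEN entries (the port list in the usage example is an assumption; substitute the ports you plan to use).

```shell
# Sketch: print any of the given ports that appear on a LISTEN line of a
# captured "netstat -an" output file. Capture first with:
#   netstat -an > /tmp/netstat.out
ports_in_use() {
  file="$1"; shift
  for port in "$@"; do
    if grep -E "[.:]${port}[[:space:]].*LISTEN" "$file" >/dev/null; then
      echo "$port in use"
    fi
  done
}
```

For example, `ports_in_use /tmp/netstat.out 9549 9550` prints each of the Data Server and Device Server default ports that is already taken.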
installation on page 128. However, if you are installing from a remote terminal session, you must set up an X-Windows display prior to beginning the installation process. First, start your local X-Windows server application; examples are Hummingbird Exceed or Cygwin. After your local X-Windows server application is running, set the DISPLAY variable on the host. You must know the IP address of the system from which you intend to perform the installation. For example, if the IP address is 2.3.4.5, type the following command at the server's command prompt:
export DISPLAY=2.3.4.5:0.0
You can verify that the X-Windows environment is properly set up by executing the following command on the host:
xclock
If the environment is successfully configured, you see a graphical clock display, similar to the one shown in Figure 5-2.
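You can also guard against a missing DISPLAY before launching the installer. This is a small sketch that fails fast with a reminder instead of letting the installer die later with an X11 error.

```shell
# Sketch: fail fast if DISPLAY is not set, instead of waiting for the
# graphical installer to fail with an X11 connection error.
check_display() {
  if [ -z "$DISPLAY" ]; then
    echo "DISPLAY is not set; run: export DISPLAY=<your-ip>:0.0"
    return 1
  fi
  echo "DISPLAY=$DISPLAY"
}
```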
2. Click Install products to begin the installation. A new window will display, similar to the one in Figure 5-4 which asks you which products you would like to install.
3. Select the option DB2 UDB Enterprise Server Edition and click Next. The DB2 setup wizard will now display, and will be similar to the one shown in Figure 5-5.
4. Click Next. The license agreement opens, and is similar to the one shown in Figure 5-6.
5. You must click Accept, then click Next to proceed. The Installation Type window opens and is similar to the one shown in Figure 5-7 on page 131.
6. Select the Typical installation option, then click Next. The installation action window opens and is similar to the one shown in Figure 5-8 on page 132.
7. Select Install DB2 UDB Enterprise Server Edition on this computer and click Next. The DAS user window opens and is similar to the one shown in Figure 5-9 on page 133.
8. If you would like the installer to create a DB2 Administration Server (DAS) user ID, you must enter a unique username for the DAS user in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank, and check the Use default boxes, the system assigns a UID and GID for you. Optionally, you can check the Existing user button, and enter the name of an existing user ID which will become the DAS user. When you have completed this form, click Next. The Set up a DB2 instance window opens and appears similar to the one shown in Figure 5-10 on page 134.
9. Select the option Create a DB2 Instance - 32 bit and click Next. The instance partitioning window opens and appears similar to the one shown in Figure 5-11 on page 135.
10.Select Single-partition instance and click Next. The DB2 instance owner window opens and is similar to the one shown in Figure 5-12 on page 136.
11.If you would like the installer to create a DB2 instance owner user ID, you must enter a unique username for the instance owner in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank, and check the Use default boxes, the system will assign a UID and GID for you. Optionally, you can check the Existing user button, and enter the name of an existing user ID which will become the instance owner. After you have completed this form, click Next. The DB2 fenced user window opens and is similar to the one shown in Figure 5-13 on page 137.
12.If you would like the installer to create a DB2 fenced user ID, you must enter a unique username for the fenced user in the User name field. You must also enter a password in both the Password and Confirm password fields. If you leave the UID and GID fields blank, and check the Use default boxes, the system will assign a UID and GID for you. Optionally, you can check the Existing user button, and enter the name of an existing user ID which will become the fenced user. When you have completed this form, click Next. The Prepare the DB2 tools catalog window opens and is similar to the one shown in Figure 5-14 on page 138.
13.Click Do not prepare the DB2 tools catalog on this computer, then click Next. The Administration contact window opens and is similar to the one shown in Figure 5-15 on page 139.
14.Choose the options on this screen which pertain to your specific environment. If you already have DB2 servers in your environment, it can benefit you to use a contact list on an existing DB2 server. If so, select Remote and enter the name of the remote DB2 server from which to obtain the contact list. Otherwise, choose the default options of Local and Enable notification. The local host name is displayed in the Notification SMTP server field by default. You can change this option to suit your environment. After you have completed this form, click Next. The health monitor window opens, similar to the one shown in Figure 5-16 on page 140.
15.Enter information in this screen also in accordance with your particular environment. If you do not want to specify a contact, choose Defer this task until after installation is complete, then click Next. The Start copying files window opens, similar to the one shown in Figure 5-17 on page 141.
16.Scroll through the window to review the installation summary. When you are ready to proceed, click Finish. The DB2 installer begins the product installation. A progress screen opens and appears similar to the one shown in Figure 5-18.
17.When DB2 has been installed successfully, an installation summary screen opens, similar to the one shown in Figure 5-19 on page 142.
18.Review the information in the Post-install steps tab to see if there are any additional tasks you need to complete. You can also select the Status report tab. It appears similar to the one shown in Figure 5-20.
19.Each of the items in the status report should indicate Success. Click Finish to close the installer.
This logs you on to the system as the instance owner. Then, type the following commands:
db2level exit
The output of the db2level command will appear similar to the text shown in Figure 5-21:
DB21085I Instance "db2inst1" uses "32" bits and DB2 code release "SQL08020" with level identifier "03010106". Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7". Product is installed at "/usr/opt/db2_08_01". Figure 5-21 Output of the db2level command
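If you capture the db2level output to a file, you can pull the FixPak level out of it with a one-line filter. This is a sketch, assuming the output format shown in Figure 5-21; the file name is an example.

```shell
# Sketch: extract the FixPak number from saved db2level output, to confirm
# which fix pack a DB2 instance is running. Capture first with:
#   db2level > /tmp/db2level.out
fixpak_of() {
  sed -n 's/.*FixPak "\([0-9][0-9]*\)".*/\1/p' "$1"
}
```

Against the sample output in Figure 5-21, `fixpak_of` reports 7, matching the fix pack 7 level installed with this DB2 image.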
2. Remove the CD from the drive by pressing the button on the front panel of the CD-ROM or DVD-ROM drive. This will eject the media tray. 3. Remove the CD from the media tray, and close the media tray by pressing the button again.
3. Download the latest IBM DB2 UDB fix pack from the IBM support FTP site. We downloaded fix pack 10 from the following location:
ftp://ftp.software.ibm.com/ps/products/db2/fixes2/english-us/db2aix5v8/fixpak/ FP10_U803920/FP10_U803920.tar.Z
4. Change to the directory where you stored the fix pack image. For example, if you downloaded the file to /usr/tarfiles, type the following command:
cd /usr/tarfiles
5. Extract the compressed image files. For version 10 of the fix pack, the command is:
gunzip -c FP10_U803920.tar.Z | tar -xvf -
6. Switch to the instance authority. For example, if your DB2 instance is db2inst1, type the following command:
su - db2inst1
9. Switch to the DAS user authority. For example, if your DB2 DAS user is db2tpc, type the following command:
su - db2tpc
10. Type the following commands to source the environment and shut down the DB2 DAS:
. $HOME/das/dasprofile
db2admin stop
exit
11. As the root user, issue the following commands to unload shared libraries and disable the DB2 fault monitor:
/usr/sbin/slibclean
cd /usr/opt/db2_08_01/bin
Note: The above location is the default for a DB2 installation. However, if you elected to install DB2 at another location, change into that directory structure instead.
./db2fmcu -d
./db2fm -i db2tpc -D
Where db2tpc is the user ID of your DB2 DAS user.
12. Change to the directory that was created automatically when the fix pack files were uncompressed. For version 10 of the fix pack, the directory is named fixpak.s050811. To do this, type the following command:
cd fixpak.s050811
13. Install the fix pack by issuing the following command:
./installFixPak -y
14. After the fix pack has been successfully installed, you must bind the database instance to the updated code. To do this, issue the db2iupdt command. For example, if your instance name is db2inst1 and you installed DB2 in the default location, type the following command:
/usr/opt/db2_08_01/instance/db2iupdt db2inst1
15. Next, you must update the DB2 DAS. For example, if your DB2 DAS user ID is db2tpc, type the following command:
/usr/opt/db2_08_01/instance/dasupdt db2tpc
16. Next, you must update the DB2 instance owner's user profile to update the number of shared memory segments allowed for a process. To do this, edit the userprofile located in the sqllib directory under the instance owner's home directory. For example, if your instance is db2inst1, type the following command to change to that directory:
cd /home/db2inst1/sqllib
17. Then, edit the userprofile contained in that directory. Add the following lines to the file, then save the file:
EXTSHM=ON
export EXTSHM
db2set DB2ENVLIST=EXTSHM
18. Next, you must restart DB2. To do this, switch to the instance authority. For example, if your DB2 instance is db2inst1, type the following command:
su - db2inst1
19. Source the environment by issuing the following command:
. $HOME/sqllib/db2profile
20. Type the following commands to start the instance and exit from the instance authority:
db2start
exit
21. Finally, you must log in as the DAS user and restart the DB2 DAS. To do this, switch to the DAS user authority. For example, if your DB2 DAS user is db2tpc, type the following command:
su - db2tpc
22. Type the following commands to source the environment and start the DB2 DAS:
. $HOME/das/dasprofile
db2admin start
exit
3. In the Group NAME field, enter the name of the group to be modified. If you used the default group ID, enter the name db2grp1 and press Enter.
4. Highlight the USER list field and press F4. The USER list menu appears.
5. Highlight the root user ID and press F7 to select it.
6. Highlight the DB2 instance owner user ID and press F7 to select it. For example, select the user ID db2inst1. Then press Enter. The Change Group Attributes screen reappears. Ensure that both the root user and the DB2 instance owner user ID appear in the USER list field, then press Enter.
7. When the OK status appears, press F10 to exit SMIT (F12 if using the graphical version).
3. Download or copy the installation image to the temporary directory you created.
4. Change to the directory where you have stored the image, for example:
cd /usr/tarfiles
5. Extract the image files by following the instructions supplied at the repository from which you downloaded the image. This may involve running the tar or gunzip commands, or a combination of both, for example:
tar -xvf agentmanager.tar
6. Change to the installation directory that was created automatically when you extracted the image. For example:
cd EmbeddedInstaller
3. Select the language you wish to use for the installation from the drop-down box, then click OK. The Agent Manager setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-24.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation location window displays, and appears similar to the one shown in Figure 5-25 on page 149.
5. The default location for installing Agent Manager is /opt/IBM/AgentManager. You may choose to change this location to suit your requirements. Once you have entered the installation location path in the Directory Name field, click Next. The installing panel displays, and appears similar to the one shown in Figure 5-26.
Note: At this point, if you have an existing version of Agent Manager installed, it is detected and you are given a choice to upgrade to the latest version. If your Agent Manager version is older than 1.2, you must upgrade it. If you wish to upgrade, click Next. If not, click Cancel. Either way, the Agent Manager installer continues.
6. The Agent Manager Registry Information window displays, and appears similar to the one shown in Figure 5-27.
7. Click the option DB2 Universal Database at the top of the screen. The Database Name or Directory field contains the name of the database which Agent Manager will use. The default database name is IBMCDB. We recommend that you do not change this name. Then, select the Type of Database Connection used in your environment. If you are installing Agent Manager on the same server on which you installed DB2, then choose Local database. If you are installing Agent Manager on a separate server, then choose Remote database. Once you have made your selections, click Next. The Database Connection Information window displays, and appears similar to the one shown in Figure 5-28 on page 151.
8. The Database Software Directory is the sqllib directory under the home directory of the instance owner. For example, if your instance name is db2inst1, then you would enter /home/db2inst1/sqllib. If you are installing Agent Manager on the same server as you installed DB2, you can also click Browse to navigate the directory structure and select the directory. Enter the user ID for the instance owner in the Database User Name field. Enter the password for the instance owner in the Database Password field. If you are installing Agent Manager on a separate machine from where you installed DB2, you must enter the host name of the DB2 server in the Host Name of the Database Server field, and the TCP port number in the Database Port field. The default port number for DB2 databases is 50000. Once you have completed this form, click Next. The WebSphere Application Server Information window displays, and appears similar to the one shown in Figure 5-29 on page 152.
9. Enter the fully qualified host name in the Host Name or Fully Qualified Host Name field. We recommend that you do not enter an IP address in this field. If you do, an information panel displays, asking you to confirm the use of an IP address instead of a host name.
- The default name AgentManager is entered in the Application Server Name for Agent Manager field. We recommend that you do not change this from the default setting.
- The defaults of 9510, 9511, and 9512 are entered in the Registration Port, Secure Port, and Public Port and Secondary Port for the Agent Recovery Service fields, respectively. We recommend that you do not change these from their default settings.
- The Do not use port 80 for the agent recovery service option is unchecked by default, which means that port 80 will be used for this service. We recommend that you check this box. See the note below regarding this option.
- The Start the agent manager after the installation is complete and Autostart the agent manager each time this system restarts options are checked by default. We recommend that you leave these options checked.
Note: By default, the Agent Manager agent recovery service uses port 80 for its communication. However, if your server functions as a Web server, or if you will install the IBM TotalStorage Productivity Center Graphical User Interface (GUI) Web access component, you should check the box labeled Do not use port 80 for the agent recovery service. This avoids conflicts with Web services.
Once you have completed this form, click Next. The Security Certificates screen displays, and appears similar to the one shown in Figure 5-30 on page 153.
10. The default option is Create certificates for this installation. We recommend that you accept the default. Tip: Earlier versions of TotalStorage Productivity Center encountered difficulty creating and distributing security certificates. In those versions, we recommended using the demonstration certificates. However, TotalStorage Productivity Center V3.1 has resolved these issues, and can create and distribute the certificates correctly. We have therefore updated our recommendation to create certificates during the installation process. Click Next after you have made your selection. If you chose the option to create certificates, the Security Certificate Settings window displays, and appears similar to the one shown in Figure 5-31 on page 154.
11. The Certificate Authority Name field contains TivoliAgentManagerCA by default. This name must be unique in your environment. We recommend you use the default setting unless it has already been used in your environment. The Security Domain field is automatically populated with your DNS information. We recommend you accept this default setting. The Certificate Authority Password field is blank by default. If you leave it blank, a random password will be generated by the system. We recommend you enter a password in this field, in case you need to unlock the certificate files in the future. The default Agent Registration Password is changeMe. Despite its name, we recommend that you do not change it. Once you have completed this form, click Next. The User Input Summary screen displays, and appears similar to the one shown in Figure 5-32 on page 155.
12.Review the settings displayed on this screen. If you need to make changes, click Back. If you are ready to proceed with the installation, click Next. When you do, the Summary Information screen displays, and appears similar to the one shown in Figure 5-33.
13.When you have reviewed the summary information, click Next. The installation progress screen displays, and appears similar to the one shown in Figure 5-34 on page 156.
14.When the installation has finished, the installation results screen displays, and is similar to the one shown in Figure 5-35.
15.When you have reviewed the results, click Next. The installation summary screen displays, and is similar to the one shown in Figure 5-36 on page 157.
Tip: We recommend that you install the Database Schema first. Then, install Data Server and Device Server in a separate step. If you install all the components in one step and any part of the installation fails for any reason (for example, insufficient space or incorrect passwords), the installation suspends and rolls back, uninstalling all the previously installed components.
When your local X-Windows server application is running, you must set the DISPLAY variable on your host. You must know the IP address of the machine from which you intend to perform the installation. For example, if your IP address is 2.3.4.5, type the following command at the server's command prompt:
export DISPLAY=2.3.4.5:0.0
You can verify that the X-Windows environment is properly set up by executing the following command on the host:
xclock
If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-37:
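The two commands above can be combined into a small guard that fails fast when DISPLAY is missing before you launch the graphical installer; the address 2.3.4.5:0.0 is a placeholder for your own workstation.

```shell
# Sketch: export DISPLAY and verify it is set before starting the installer.
export DISPLAY=2.3.4.5:0.0   # placeholder - use your workstation's IP address
if [ -z "$DISPLAY" ]; then
    echo "DISPLAY is not set - the graphical installer cannot open" >&2
    exit 1
fi
echo "DISPLAY=$DISPLAY"
# xclock   # uncomment to confirm that a clock window opens on your workstation
```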
Note: You must install TotalStorage Productivity Center as the root user. However, you must still source the environment with the instance owner.
3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-39.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-40 on page 161.
5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. Once you have filled out this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-41.
6. Deselect all the options except Create database schema. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-42 on page 162.
7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one shown in Figure 5-43.
8. You must enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. Then you can choose which database connection type to use for TotalStorage Productivity Center: If you are upgrading from a current version of TotalStorage Productivity Center, choose the option Use local database. Enter the port, database name, full path, and instance name in the appropriate fields. If this is a new installation (not an upgrade), choose the option Create local database. The default database name is TPCDB. We recommend that you do not change this name. After you have selected the database connection type which suits your requirements, click Schema creation details. The schema details window displays, and appears similar to the one shown in Figure 5-44 on page 163.
9. The default entry in the Schema name field is TPC. We recommend that you do not change this from the default setting. You then have the option of placing the various table spaces in different directories or file systems, and of setting an initial database size. For all but the largest Enterprise deployments, database sizes of 200 MB should be sufficient for initial creation. For best performance in medium and Enterprise deployments, you should consider placing the table spaces on separate file systems and on separate disk devices. If you have already created these file systems, enter their paths in the Normal, Key, Big, and Temp fields, or click the Browse button to search for them. The Normal, Key, and Big table spaces can be housed in the same file system. The Temp table space should be housed on a separate file system for best performance. The differences between choosing System managed (SMS) and Database managed (DMS) containers are discussed in 11.1, Selecting an SMS or DMS tablespace on page 510. If you select Database managed, you can enter a path in which to house log files, and an initial size. Log files should be housed on a separate file system from the table spaces for best performance. For all but the largest Enterprise deployments, an initial size of 20 MB should suffice. After you have filled out the form, click OK. You will be returned to the database schema information window. In that window, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-45 on page 164.
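As a sketch of the separation described above — every path here is a placeholder (a demo directory under /tmp), whereas in a real deployment Normal, Key, and Big could share one file system, with Temp and the DMS log directory each on their own file system and disk device:

```shell
# Hypothetical table-space layout for the TPC database schema (paths are examples only).
TPC_TS_BASE=/tmp/tpc_demo              # stand-in for real mount points
mkdir -p "$TPC_TS_BASE/normal" "$TPC_TS_BASE/key" "$TPC_TS_BASE/big"   # may share one file system
mkdir -p "$TPC_TS_BASE/temp"           # separate file system for best performance
mkdir -p "$TPC_TS_BASE/logs"           # DMS log files, kept apart from the table spaces
ls "$TPC_TS_BASE"
```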
10.Click Install to begin the database schema installation. A progress screen opens, and is similar to the one shown in Figure 5-46.
11.When the installation is complete, the installation results window displays, and appears similar to the one shown in Figure 5-47 on page 165.
3. Select the language you want to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-49 on page 166.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-50.
5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. Once you have filled out this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-51 on page 167.
6. Deselect all the options except Data server. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-52.
7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one found in Figure 5-53 on page 168.
8. Enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. If you are installing Data server on the same machine on which DB2 is installed, check the option Use local database. The local database information should be populated automatically. Highlight the local database to be used. If you are installing Data server on a separate machine, check the option Use remote database. The default database name is TPCDB, but if you created a database with a different name, enter it here. Enter the DB2 server's host name in the Host name field. Enter the communication port number in the Port field. The default port number for DB2 is 50000. You will also need to enter the path to the JDBC driver. If your instance name is db2inst1, then the default path is /home/db2inst1/sqllib/java/db2jcc.jar. You can enter the path directly, or click Browse to search for it. When you have selected your database connection information and completed the form, click Next. The Data server information window displays, and appears similar to the one shown in Figure 5-54.
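Before filling out the form, you can confirm that the driver jar exists at the expected location. The instance owner db2inst1 and its home directory are assumptions here; substitute your own instance owner.

```shell
# Sketch: derive and verify the default DB2 JDBC driver path.
INSTANCE_HOME=/home/db2inst1                    # assumption: default instance owner
JDBC_JAR="$INSTANCE_HOME/sqllib/java/db2jcc.jar"
if [ -f "$JDBC_JAR" ]; then
    echo "JDBC driver found: $JDBC_JAR"
else
    echo "JDBC driver not found at $JDBC_JAR - check your instance home directory"
fi
```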
9. If it is not already displayed, you must enter the fully qualified host name of the server on which you are installing TotalStorage Productivity Center in the Data server name field. The default port of 9549 is listed in the Data server port field. You may change this to suit your requirements, but we recommend that you do not change it. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. If you want to perform advanced security role mapping, click Security roles. If you do, the security roles mapping screen displays, and appears similar to the one shown in Figure 5-55.
You can optionally enter group names to map to each specific role in TotalStorage Productivity Center. This allows for more customized control over your management environment. When you have finished filling out the form click OK. The Data server information screen will display. If you want to discover Network Attached Storage (NAS) devices in your environment, click NAS discovery. The NAS discovery options window will display, and will appear similar to the one shown in Figure 5-56.
You can add login information in the User name and Password fields in order to attach to Network Appliance storage devices. You can also add Simple Network Management Protocol (SNMP) community strings to search during the discovery process. To add an SNMP community, enter the community name in the SNMP community field and click Add. When you have finished filling out the form click OK. The Data server information screen will display. When you have finished making your selections, click Next. The Agent Manager information panel displays, and appears similar to the one shown in Figure 5-57.
10. Enter the fully qualified host name of the Agent Manager server in the Hostname or IP address field. The Port (secured) and Port (Public) fields will be populated with the defaults of 9511 and 9513, respectively. We recommend that you do not change them. In the User ID field, enter manager. In the Password field for that ID, enter password. These are the defaults and cannot be changed. The default agent registration password is changeMe. Enter it, or the agent registration password you created during the Agent Manager installation, into the final Password field. When you have finished filling out the form, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-58.
11.Click Install. Data server installation begins. A progress window displays, and appears similar to the one shown in Figure 5-59.
12.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-60.
3. Select the language you want to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-62.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-63 on page 173.
5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. After you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-64.
6. Deselect all the options except Device server. Then click Next. The database administrator information window displays, and appears similar to the one found in Figure 5-65 on page 174.
7. Enter the user ID for the DB2 instance owner in the Database administrator field, and the instance owner's password in the Password field, then click Next. The database schema information window displays, and appears similar to the one found in Figure 5-66.
8. Enter the DB2 instance owner's user ID in the DB user ID field, and the instance owner's password in the Password field. If you are installing Device server on the same machine on which DB2 is installed, check the option Use local database. The local database information should be populated automatically. Highlight the local database to be used. If you are installing Device server on a separate machine, check the option Use remote database. The default database name is TPCDB, but if you created a database with a different name, enter it here. Enter the DB2 server's host name in the Host name field. Enter the communication port number in the Port field. The default port number for DB2 is 50000. You will also need to enter the path to the JDBC driver. If your instance name is db2inst1, then the default path is /home/db2inst1/sqllib/java/db2jcc.jar. You can enter the path directly, or click Browse to search for it.
Once you have selected your database connection information and filled out the form, click Next. The Device server information window displays, and appears similar to the one shown in Figure 5-67.
9. If it is not already displayed, you must enter the fully qualified host name of the server on which you are installing TotalStorage Productivity Center in the Device server name field. The default port of 9550 is listed in the Device server port field. You may change this to suit your requirements, but we recommend that you do not change it. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. Enter a password in the Host authentication password field that will be used for fabric agents to communicate with the device server. The password should contain eight characters or less, and should contain only alphanumeric characters. The password is case-sensitive. Enter a user ID and password in the WAS admin ID and Password fields. This user ID and password are only used during the installation process by the Device server so that it can communicate with WebSphere. If you did not have a previous version of WebSphere installed, these entries can be anything. However, if you are installing Device server with an existing version of WebSphere, these must be the authentication credentials used by the installed version of WebSphere. If you want to perform advanced security role mapping, click Security roles. If you do, the security roles mapping screen displays, and appears similar to the one shown in Figure 5-68 on page 176.
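The stated password rules (eight characters or less, alphanumeric only, case-sensitive) can be checked with a simple pattern before you type the value into the wizard. This is an illustrative sketch; Tpc4fab is an example value, not a recommended password.

```shell
# Sketch: validate a candidate host authentication password against the rules above
# (1-8 characters, letters and digits only).
PW='Tpc4fab'   # example value only
if printf '%s' "$PW" | grep -Eq '^[A-Za-z0-9]{1,8}$'; then
    echo "password format OK"
else
    echo "password violates the stated rules"
fi
```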
You can optionally enter group names to map to each specific role in TotalStorage Productivity Center. This allows for more customized control over your management environment. When you have finished filling out the form, click OK. The Device server information screen will display. When you have finished making your selections, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-69.
10.Click Install. Device server installation begins. A progress window displays, and appears similar to the one shown in Figure 5-70 on page 177.
11.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-71.
3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-73.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-74 on page 179.
5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. After you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-75.
6. Deselect all the options except the agent(s) you wish to install. For example, you can leave Fabric agent checked. When you have finished selecting the agent(s) to install click Next. The Data server and Device server information window displays, and appears similar to the one shown in Figure 5-76 on page 180.
7. If they are not already displayed, you must enter the fully qualified host name of the Data server or the Device server in the appropriate fields, depending on which agents you are installing. The default ports of 9549 for Data server and 9550 for Device server are listed in the Data Server port and Device server port fields, respectively. You may change these to suit your requirements, but we recommend that you do not change them. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. If you are installing Fabric agent, you will need to enter the password used for authenticating with the Device server in the Host authentication password field. If you are installing Data agent, you can click Data agent options. If you do, the data agent options window displays, and is similar to the one shown in Figure 5-77.
There are two options: Agent should perform a scan when first installed, and Agent may run scripts sent by server. We recommend that you leave both options checked. However, if you are installing into a production environment and the agent host is heavily utilized, you may elect to uncheck the option Agent should perform a scan when first installed. This will use fewer resources on the agent host during the installation process.
The agent will not collect statistics about itself until a scan is scheduled by the TotalStorage Productivity Center administrator. When you have completed the options on this form click OK. The Data server and Device server information window will display. When you have finished making your selections, click Next. The common agent selection window displays, and appears similar to the one shown in Figure 5-78.
8. If you have an existing Common agent installed, you can select the option Select an existing common agent from the list below. Highlight the common agent you want to use. If not, select the option Install the new common agent at the location listed below. You can change the installation path to suit your requirements by entering the path directly in the path field, or clicking Browse. When you have made your selection, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-79.
9. Click Install. Agent installation begins. A progress window displays, and appears similar to the one shown in Figure 5-80 on page 182.
10.When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-81.
5.9.11 Installing the Java Graphical User Interface and the Command Line Interface
The TotalStorage Productivity Center Java Graphical User Interface (GUI) and Command Line Interface (CLI) use the same installation instructions. You can install either interface (or both interfaces together) using these directions. You can install the interfaces on any of the TotalStorage Productivity Center servers, management consoles, or workstations. Follow these steps to complete the installation process for the interfaces. 1. At the command prompt on the interface host, in the installation media directory, type the following command:
./setup.sh
2. The TotalStorage Productivity Center installer will open, and will prompt you to select an installation language. The prompt will appear similar to the one shown in Figure 5-82.
3. Select the language you wish to use for the installation from the drop-down box, then click OK. The TotalStorage Productivity Center setup wizard will then initialize. The first item to display will be the license agreement screen, and it will appear similar to the one shown in Figure 5-83.
4. Click the button labeled I accept the terms of the license agreement, then click Next. The installation types window displays, and appears similar to the one shown in Figure 5-84 on page 184.
5. Click Custom installation. In addition, you can change the TPC Installation Location from the default location of /opt/IBM/TPC to suit your requirements. When you have completed this form, click Next. The Select one or more components to install window displays, and appears similar to the one shown in Figure 5-85.
6. Deselect all the options except the interfaces you want to install. For example, you can leave GUI checked. When you have finished selecting the interfaces to install, click Next. The Data server and Device server information window displays, and appears similar to the one shown in Figure 5-86 on page 185.
7. If they are not already displayed, you must enter the fully qualified host name of the Data server and/or the Device server in the appropriate fields. The default ports of 9549 for Data server and 9550 for Device server are listed in the Data Server port and Device server port fields, respectively. You may change these to suit your requirements, but we recommend that you do not change them. The adm group is listed by default in the TPC superuser field. We recommend that you do not change it. You will also need to enter the password used for authenticating with the Device server in the Host authentication password field. When you have finished making your selections, click Next. The summary information window displays, and appears similar to the one shown in Figure 5-87.
8. Click Install. Interface installation begins. A progress window displays, and appears similar to the one shown in Figure 5-88 on page 186.
9. When the installation completes, an installation results window displays, and appears similar to the one shown in Figure 5-89.
5.10 Installing the user interface for access with a Web browser
This section details how to set up the TotalStorage Productivity Center Graphical User Interface (GUI) for use with a Web browser. To distribute the GUI with a Web browser, there are two prerequisites which must be met: 1. The Java-based GUI must be installed on the system that will act as the Web server. 2. A Web application server must be installed. Examples of this are Microsoft Internet Information Server (IIS) or IBM HTTP Server. For more information about installing IIS on Windows 2003, refer to 4.8.1, Installing Internet Information Services (IIS) on page 108.
After the prerequisite software has been installed, distributing the GUI with a Web interface is mostly a matter of configuring the Web application server.
2. Click on the link on that page to download the latest version of IBM HTTP Server. As of this writing, the latest version was 6.0.2.0. 3. Follow the instructions for downloading the image to a temporary location on your system. Note: Registration at the ibm.com Web site is required. There is no charge to download the software. 4. Change to the location where you stored the image file. For example:
cd /tarfiles
5. Extract the compressed image files. For example, if you downloaded Version 6.0.2.0, you would type the following command:
tar -xvf ihs.6020.aix.ppc32.tar
6. Change to the installation directory that was created automatically during the image extraction. For example:
cd IHS
First, you must start your local X-Windows server application. Examples are Hummingbird Exceed or Cygwin. Once your local X-Windows server application is running, you must set the DISPLAY variable on your host. You must know the IP address of the machine from which you intend to perform the installation. For example, if your IP address is 2.3.4.5, type the following command at the server's command prompt:
export DISPLAY=2.3.4.5:0.0
You can verify that the X-Windows environment is properly set up by executing the following command on the host:
xclock
If the environment is successfully configured, you will see a graphical clock display, similar to the one shown in Figure 5-37 on page 159:
The graphical installation displays, and appears similar to the one shown in Figure 5-91.
2. Click Next. The license agreement screen displays, and appears similar to the one shown in Figure 5-92 on page 189.
3. Choose I accept the terms in the license agreement and click Next. The installation location screen displays, and appears similar to the one shown in Figure 5-93.
4. The default installation location is /usr/IBMIHS. You may change the installation location to suit your requirements by entering a path in the Directory Name field or by clicking Browse to search for it. Once you have entered the installation location, click Next. The setup type screen displays, and appears similar to the one shown in Figure 5-94 on page 190.
5. Select Typical, then click Next. The installation summary screen displays, and appears similar to the one shown in Figure 5-95.
6. Click Next to begin the installation. A progress screen displays, and appears similar to the one shown in Figure 5-96 on page 191.
When the installation is complete, an installation results screen displays, and appears similar to the one shown in Figure 5-97.
Change the text in quotes to point to the location where the TotalStorage Productivity Center GUI is installed. If you installed it at the default location, your edited line would look like this:
DocumentRoot "/opt/IBM/TPC/gui"
Note: The path is case-sensitive.
2. Farther down in the same section, locate the following line:
<Directory "/usr/IBMIHS/htdocs/en_US">
Change the text in quotes to point to the location where the TotalStorage Productivity Center GUI is installed. If you installed it at the default location, your edited line would look like this:
<Directory "/opt/IBM/TPC/gui">
Note: The path is case-sensitive.
3. Farther down in the same section, locate the following line:
DirectoryIndex index.html index.html.var
At the end of the line, add a space, and then type TPCD.html. Your edited line will look like this:
DirectoryIndex index.html index.html.var TPCD.html
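Taken together, and assuming the default installation paths used throughout this chapter, the three edited lines in httpd.conf would read:

```
DocumentRoot "/opt/IBM/TPC/gui"
<Directory "/opt/IBM/TPC/gui">
DirectoryIndex index.html index.html.var TPCD.html
```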
4. Save the changes you made to the file.
Important: If you installed the IBM HTTP Server on the same machine that is running Agent Manager, and Agent Manager is using TCP port 80 for the agent recovery service, you will have to configure the Web server to listen on a different port than 80, which is the default for HTTP requests. To do this, locate the line in Section 1 of the httpd.conf file that looks like this:
Listen 80
Change the number 80 to an unused port. For example, you might configure the HTTP server to listen on port 8000. Then, when you attempt to access the GUI with the Web server, you must append the port number to the URL. For example:
http://myserver.com:8000
5. Start your Web application server. For IBM HTTP server installed at the default location, the command is:
/usr/IBMIHS/bin/apachectl start
6. If you want the Web application server to start automatically every time the server is booted, edit the /etc/inittab file and add the following line to it:
ibmihs:2:once:/usr/IBMIHS/bin/apachectl start >/dev/console 2>&1
7. Open a Web browser and enter your Web application server's fully qualified name in the URL field, for example:
http://myserver.com
8. You should initially see the TotalStorage Productivity Center welcome screen, which appears similar to the one shown in Figure 5-98.
You must leave this window open while you are using the GUI. The GUI applet will be downloaded to your system automatically. You must answer Yes to the security questions regarding the installation of the security certificate. When the GUI finishes loading the logon box displays, and appears similar to the one shown in Figure 5-99.
9. Enter a valid user ID and password in the appropriate fields and click OK. The GUI displays, and appears similar to the one shown in Figure 5-100 on page 194.
Chapter 6. Agent deployment
TotalStorage Productivity Center uses different methods to collect information from the various systems and to interact with them to give you a complete view of the environment and a single point of control for your storage infrastructure. These methods are:
- SMI-S through Common Information Model (CIM) agents, to communicate and interact with storage subsystems, tape libraries, and switches
- Simple Network Management Protocol (SNMP)
- Software agents on the attached servers (Data Agent and Fabric Agent)
- Proprietary interfaces (for only a few devices)
This chapter discusses the function of the Data Agent and the Fabric Agent and provides a description of the various methods and necessary steps to roll out the agent infrastructure to your managed servers and computers.
(Figure: data-collection overview — Topology Viewer, Data Agent, Fabric Agent, central TPC DB, computers, and SNMP)
The TotalStorage Productivity Center components collect data about the infrastructure using the Data Agents and Fabric Agents along with other information sources (CIM agents and SNMP) and, in turn, feed the central TotalStorage Productivity Center database. The TotalStorage Productivity Center for Data and Fabric components also use the Data Agent and the Fabric Agent to initiate and perform management and configuration tasks on the servers and the fabric devices. The TotalStorage Productivity Center Topology Viewer (see Chapter 9, Topology viewer on page 415), as a central common interface to all TotalStorage Productivity Center components, reads the data from the central database fed by the various product components, then combines and correlates it to produce a complete end-to-end view of your server and storage infrastructure.
CIMOM agents
These agents are provided by the vendor of the storage device, fabric switch, or tape library. For storage, they are needed for storage asset information, provisioning, alerting, and performance monitoring. For fabric switches, they are used only for performance monitoring. For tape libraries, they are used for asset and inventory information.
Data agents
These are the traditional Tivoli Storage Resource Manager agents. They are installed on every computer system you want TotalStorage Productivity Center to manage. This is commonly referred to as an Agents Everywhere methodology. These agents collect information from the server on which they are installed. Asset information, file and file system attributes, and any other information needed from the computer system is gathered. Data agents can also gather information about database managers installed on the server, Novell NDS tree information, and NAS device information. In TotalStorage Productivity Center, you can create pings, probes, and scans to run against the servers that have Data agents installed. Data agents can be remotely installed only by running the TotalStorage Productivity Center agent installer from the TotalStorage Productivity Center server machine. This will install both the common agent and the Data agent.
Fabric agents
These are the traditional Tivoli SAN Manager agents. They are installed on computer systems that have Fibre Channel connectivity (through HBAs) into the SAN fabrics you want to manage and monitor. Fabric agents use scanners to collect information. The scanners are written in O/S native code, and communicate through the HBA to collect fabric topology information, port state information, and zoning information. They can also identify other SAN-attached devices (if they are in the same zone). Using O/S system calls, they collect information about the system on which they are installed. Fabric agents are discovered during the agent install process, and do not need to be discovered separately, nor is it possible to do so. You can only remotely deploy Fabric agents from the TotalStorage Productivity Center server. If you run the agent installer from the TotalStorage Productivity Center server to remotely deploy Fabric Agents, the common agent must already be installed and registered with the Agent Manager with which TotalStorage Productivity Center is associated. If you install the Fabric agent locally on the server, the installer installs the common agent for you.
performance management, monitoring, and alerting are performed through this path. There are reports within the TotalStorage Productivity Center for Disk component that correlate storage subsystem information with information about computers; these reports would not be available without the presence of Data Agents on the servers. Apart from that, the capabilities of TotalStorage Productivity Center for Disk can be exploited almost completely without a server agent infrastructure. TotalStorage Productivity Center for Fabric is designed to use three communication paths to gather information and interact with the fabric devices. However, at the time of writing, SMI-S can be used solely for switch performance data collection and performance monitoring and alerting. Fabric configuration and status collection, as well as fabric management (such as zone control), are performed either out-of-band or in-band through the Fabric Agents. Which of these two paths TotalStorage Productivity Center actually employs is determined by the particular fabric components and is documented on the TotalStorage Productivity Center for Fabric support Web site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html
To sum up, the decision whether the Fabric Agent should be deployed is largely determined by the fabric components that are to be managed.

TotalStorage Productivity Center for Data receives almost all the information it provides in its reports and repositories solely through the Data Agents on the managed servers and computers. It also relies on the presence of a Data Agent infrastructure to perform policy-driven management. Although TotalStorage Productivity Center for Data can also receive some information directly from the storage subsystems through CIM agents without the presence of any Data Agents, we recommend this only for very special requirements because it would limit the overall product capabilities to a large extent.

The TotalStorage Productivity Center Topology Viewer is able to deliver a complete and meaningful end-to-end view of the server and storage infrastructure only if both agents, the Data Agent and the Fabric Agent, are present on the managed computers. This end-to-end view of the infrastructure and the graphical presentation of the links and relations between the different elements of the environment is one of the big advantages of the Topology Viewer. Without the presence of the Data Agent and the Fabric Agents, most information about the attached servers and computers would be missing. If you use switches which support out-of-band discovery of the SAN infrastructure, only the SAN ports of the servers and computers would be depicted in the Topology Viewer. They would appear in the Other group with their WWNNs as labels. In this case, you would be able to label the depicted SAN ports manually and classify them as Computers. If you use switches which support only in-band fabric discovery, the servers and computers would be missing from the Topology Viewer altogether. In both cases, the relationship between storage subsystem volumes and the managed computers would not be presented in the Topology Viewer.
The following figures show two examples to give you an idea of the capabilities of the Topology Viewer with and without the server agents having been deployed. Figure 6-2 on page 199 shows an environment where several computers are present, but none of them has any agents installed. The switch is an IBM 2005-B32 and supports out-of-band infrastructure discovery. TotalStorage Productivity Center is able to discover the HBAs of the computers. The Topology Viewer depicts them in the category Other and labels them with their WWNNs. The status is unknown.
Figure 6-2 Topology viewer without agents deployed to the managed servers
Figure 6-3 on page 200 shows the same environment, but this time the Data Agent and the Fabric Agent have been deployed on the computers. Now TotalStorage Productivity Center discovers them correctly as Computers and is able to display status information and even more details in the subsequent layers of the Topology Viewer.
Figure 6-3 Topology viewer with agents deployed to the managed servers
Recommendation: Except for limited, well-defined requirements, we strongly recommend the deployment of the server agent infrastructure regardless of the TotalStorage Productivity Center component being used.
The Tivoli Agent Manager can be installed on the same server as the TotalStorage Productivity Center Server or can run on a different machine (for example, in larger installations or in cases where a Tivoli Common Agent Services infrastructure is already in place). The Tivoli Agent Manager is installed in a separate step with its own installer. The installation is discussed in detail in 4.6.1, Agent Manager installation for Windows on page 83. The Tivoli Common Agent, which has to be present on each managed computer as the container for the Data Agent and Fabric Agent, is installed during the TotalStorage Productivity Center Agent installation with the TotalStorage Productivity Center Installer, and is discussed in detail in the subsequent sections.
Local agent installation might be practical for a limited number of computers, but becomes rather elaborate and time-consuming as the number of managed computers grows.
Click OK. 2. The International Program License Agreement is shown (Figure 6-6). Read the terms and select I accept the terms of the license agreement.
Click Next to continue. 3. In Figure 6-7 on page 204 you can choose the type of installation. We recommend that you always use Custom Installation when you install the agents. Select Custom Installation. In this same panel, you can also choose the installation path of the agents. The default is C:\Program Files\IBM\TPC under Windows and /opt/IBM/TPC under UNIX and LINUX. In our example we keep the defaults. Note that the installer does not only install files in the location you specify in this panel: some files are also installed to the C:\Program Files\Tivoli\ep directory under Windows and the /usr/Tivoli/ep directory under LINUX and UNIX.
Make sure the installation location you specify in this panel is empty; otherwise, the installer fails.
Click Next to continue. 4. In the panel in Figure 6-8, select which components of TotalStorage Productivity Center you want to install. Select only Data Agent and Fabric Agent. Deselect any other options.
Click Next to continue. 5. In the panel shown in Figure 6-9 on page 205, enter the following information: Data Server Name This is the fully qualified host name or the IP address of the machine on which the TotalStorage Productivity Center Data Server and Device Server are running. At the time of writing, the Data Server and the Device Server must be installed on the same machine, so the Data Server Name and the Device Server Name will always be the same. In our environment, the TotalStorage Productivity Center Server is on gallium.almaden.ibm.com.
Data Server Port: The Data Agent uses the Data Server Port to communicate with the Data Server. It is set when you install the Data Server. We recommend keeping the default, 9549.
Device Server Name: This is the fully qualified host name or the IP address of the Device Server. In TotalStorage Productivity Center V3.1 it must match the Data Server Name. In our environment, the name of the Data Server and Device Server is gallium.almaden.ibm.com.
Device Server Port: The Fabric Agent uses the Device Server Port to communicate with the Device Server. It is set when installing the Device Server. We recommend keeping the default, 9550.
Host authentication password: This is the password used by the Fabric Agent to communicate with the Device Server. You specify this password when you install the Device Server.
6. Select options for the Data Agent, as shown in Figure 6-9.
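Before deploying agents, it can help to confirm from a target computer that the Data Server and Device Server ports are reachable. The sketch below is only an example under assumptions: the host name is the one used in this chapter, the ports are the defaults named above, and the check uses the bash-specific /dev/tcp facility (a plain POSIX shell reports closed).

```shell
#!/bin/bash
# Hypothetical pre-install check: is a TCP port reachable?
# Uses bash's /dev/tcp; reports "closed" if the connect fails.
port_state() {
  host=$1; port=$2
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    exec 3>&-
    echo open
  else
    echo closed
  fi
}

# Defaults from this chapter: 9549 (Data Server), 9550 (Device Server).
TPC_SERVER=gallium.almaden.ibm.com   # example server name from this book
for port in 9549 9550; do
  echo "$TPC_SERVER:$port $(port_state "$TPC_SERVER" "$port")"
done
```

A closed or unreachable port at this point usually indicates a firewall or a nondefault port chosen during the server installation.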
Click Data Agent Options. The panel in Figure 6-10 on page 206 is displayed. Here, you have two options:
Agent should perform a scan when first installed: This option is selected by default. We suggest you accept this default, so that your Data Server receives a solid information base about your computer right after installation. Deselect this option if you do not want the Data Agent to perform an initial scan of your computer after installation.
Agent may run scripts sent by server: This option is selected by default. The advantage of enabling this option is that you can store scripts in the server's \scripts directory, and do not have to keep a copy of the script on every agent computer. When a script must be run on a particular agent, the server accesses the script from its local \scripts directory and sends it to the appropriate agent. If you deselect Agent may run scripts sent by server, you must make sure that the script is stored in every agent's \scripts directory.
Chapter 6. Agent deployment
Note: If a script with the same name exists on both the server and the agent, the script stored on the agent will take precedence. This is useful if you want to run a special version of a script on one of your agents, while running a different version of the same script across all the other agents in your environment.
Click OK to continue. This will bring you back to the panel shown in Figure 6-9 on page 205 where you will have to click Next. 7. In the next panel shown in Figure 6-11 on page 207 you must enter the fully qualified host name or IP address of the Tivoli Agent Manager. Tivoli Agent Manager must already be installed and running. The Tivoli Agent Manager may run on the same machine as your TotalStorage Productivity Center Server, or on a separate machine. In our environment, we installed the Tivoli Agent Manager on the TotalStorage Productivity Center Server. You also must specify the ports which the agents use to communicate with the Tivoli Agent Manager. They are specified during the installation of the Agent Manager. We recommend keeping the default ports, which are 9511 (secure) and 9513 (public). Finally, enter the Common Agent Registration password. This is the password required by the Common Agent to register with the Agent Manager. It is specified when you install the Agent Manager. The default password is changeMe. Note: If you do not specify the correct Agent Manager password, you are not permitted to continue the installation.
Click Next to continue. 8. The Common Agent selection panel is displayed (Figure 6-12). If a Tivoli Common Agent is already running (for example, when you install a Fabric Agent and a Data Agent is already installed, or vice versa), you can choose to install your agent under the control of this Common Agent by selecting it in the lower selection box. If a Common Agent is not already installed on the system, you must elect to install it and specify a location. The default location is C:\Program Files\IBM\TPC\ca under Windows and /opt/IBM/TPC/ca under UNIX and LINUX.
9. If you click Window Service Info in Figure 6-12, you open the Common Agent Service Information panel (Figure 6-13 on page 208). This information is optional. You can enter a Common Agent service name, user ID, and password that the installer will use to create a Windows service for the Common Agent. Otherwise, the default user ID itcauser is created.
Figure 6-13 Local interactive installation, Common Agent services name and user information
10.Enter the information and click OK. This returns you to the panel shown in Figure 6-12 on page 207, where you click Next. 11.The Summary information panel is displayed (see Figure 6-14). You can review some of the information you have entered during the installation process.
Click Install to continue. The installer begins to install the Data Agent first (Figure 6-15 on page 209) and then the Fabric Agent (Figure 6-16 on page 209).
Important: Although you could cancel the installation while the progress bars are displayed, we strongly recommend that you do not; doing so might leave your system in an inconsistent state. Finally, a panel is displayed announcing that the installation has finished successfully, as shown in Figure 6-17 on page 210.
However, as with the interactive installation, the variables will most often be used with their default values and need not be touched when preparing the response file. Normally, you need to review at least the variables shown in bold in Example 6-1 on page 210, which uses a LINUX system as an example. You should also check that the target directory you specify is empty; otherwise, the unattended (silent) installation fails. After you have modified and reviewed the response file according to your needs, start the installer with the following command, executed from the directory in which the response file is located, as shown in Example 6-2.
Example 6-2 Local unattended installation on Microsoft Windows, LINUX, and UNIX
setup.exe -options "setup_agents.iss" -silent     (Windows)
./setup.sh -options "setup_agents.iss" -silent    (LINUX and UNIX)
The installer exits with a return code which can be used in your scripts. In addition, you should verify that the installation has completed successfully using the methods summarized in 6.6, Verifying the installation on page 223.
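Because the installer exits with a return code, an unattended rollout script can branch on it. The wrapper below is a sketch: the installer command and response file are the ones shown in Example 6-2, and the log message wording is invented for illustration.

```shell
# Sketch: run the silent agent install and report its return code.
# "$1" is the installer command (setup.exe or ./setup.sh), "$2" the
# response file; both are placeholders taken from Example 6-2.
run_silent_install() {
  "$1" -options "$2" -silent
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "agent install OK"
  else
    echo "agent install FAILED (rc=$rc)"
  fi
  return "$rc"
}
```

For example, `run_silent_install ./setup.sh setup_agents.iss` logs a one-line summary and passes the installer's own exit status back to the calling script.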
3. Insert the cross-platform agent CD. It contains a directory for each supported agent platform. In each directory, there is a file called upgrade.zip. For each platform on which you want to install the product remotely, copy this file into the <C:\TPCinstall>\data\upgrade (Windows) or </TPCinstall>/data/upgrade (LINUX or UNIX) directory of your installation directory. In our example, we perform a remote installation to Windows, AIX, and LINUX systems. Our installation directory looks similar to Figure 6-19.
Figure 6-19 Remote installation, installation directory with cross-platform agent files
4. If you are installing the Data Agent on a LINUX system, you must perform a last check before you can start the installation. Verify that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes. To set the parameter, follow these steps: a. Go to the /etc/ssh directory. b. Use a text editor such as vi to open the /etc/ssh/sshd_config file and change the PasswordAuthentication parameter to yes. c. Stop the daemon by running the following command: /etc/init.d/sshd stop. d. Start the daemon by running the following command: /etc/init.d/sshd start.
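The edit in step b can also be scripted. The function below is only a sketch (it assumes GNU sed for the in-place edit): it rewrites, or appends if absent, the PasswordAuthentication line of a given sshd_config file, after which you would restart sshd as in steps c and d.

```shell
# Sketch: force "PasswordAuthentication yes" in an sshd_config file.
# Pass the path to the config file (normally /etc/ssh/sshd_config).
set_password_auth() {
  cfg=$1
  if grep -q '^[#[:space:]]*PasswordAuthentication' "$cfg"; then
    # Rewrite an existing (possibly commented-out) setting.
    sed -i 's/^[#[:space:]]*PasswordAuthentication.*/PasswordAuthentication yes/' "$cfg"
  else
    # No setting present; append one.
    echo 'PasswordAuthentication yes' >> "$cfg"
  fi
}
# Afterwards restart the daemon:
# /etc/init.d/sshd stop && /etc/init.d/sshd start
```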
6. The International Program License Agreement is shown (see Figure 6-21). Read and accept the terms by selecting I accept the terms of the license agreement.
Click Next.
7. In the panel shown in Figure 6-22, choose Custom Installation. Click Next to continue.
8. In the panel in Figure 6-23, select which components you want to install. Select Remote Data Agent and Remote Fabric Agent. Deselect other options. Click Next.
9. In the next panel (Figure 6-24 on page 216), enter the Host authentication password. This is the password used by the Fabric Agent to communicate with the Device Server. You specify this password when you install the Device Server. All other information is already set correctly because we are installing from the machine which is running the Data Server and the Device Server. Click Next to continue.
10.Next, a panel similar to the one shown in Figure 6-25 is displayed, where you enter the remote computers on which you want to install the Data Agents and the Common Agents. You must enter remote LINUX and UNIX computers manually by host name or IP address. Microsoft Windows computers can be added either manually or from Microsoft Active Directory if you have an Active Directory environment.
11.We enter our target computers manually, so click Manually Enter Agents. You will see the panel in Figure 6-26 on page 217.
12.In Figure 6-26, enter the fully qualified host name or the IP address of the computer on which you want the Data Agent and Common Agent to be installed. Note: You can add multiple computers in this panel if they share the same user ID and password. If computers do not share the same user ID and password, you must add them individually. In our example we added five remote systems, each individually, because our target machines have different user IDs and passwords as shown in Figure 6-27 on page 218.
Figure 6-27 Remote Installation, list of remote computers to install the Data Agent
Note: Right-click a column name to filter or sort the listed computers. If you filter the names in the computer list, computers you selected for an agent installation whose names do not match the filter criteria will not appear in the list; those agents will still be installed on the unlisted computers. 13.When you are satisfied with your list of target computers, click Next. 14.The panel shown in Figure 6-28 on page 219 is displayed. Here, you specify the settings for the Common Agent service on your Windows target machines. This information is optional. You can enter a Common Agent service name, user ID and password, and a listener port that the installer will use to create a Windows service for the Common Agent. Otherwise, itcauser is created and a random listener port is used by default. We recommend that you keep those defaults. This panel corresponds to the panel shown in Figure 6-13 on page 208 for the local installation. Click Next.
15.The TotalStorage Productivity Center Installer runs a mini-probe on all the computers you selected, to verify all prerequisites. The status for each computer will change several times. Finally you will see the panel in Figure 6-29.
In this panel, the TotalStorage Productivity Center Installer shows you the default installation directory for each target computer.
Note: The default installation directory for a remote Data Agent installation differs from the default for a local installation. The remote installation defaults are:
C:\Program Files\Tivoli\ep for Windows
/usr/tivoli/ep for LINUX and UNIX
The local installation defaults are:
C:\Program Files\IBM\TPC for Windows
/opt/IBM/TPC for LINUX and UNIX
In this panel you can also select two settings for the Data Agents:
Agent should perform a scan when first installed: This option is enabled by default. We suggest you accept this default, so that your Data Server receives a solid information base about your computer right after installation. Deselect this option if you do not want the Data Agent to perform an initial scan of your computer after installation.
Agent may run scripts sent by server: This option is enabled by default. The advantage of enabling this option is that you can store scripts in the server's \scripts directory, and do not have to keep a copy of the script on every agent computer. When a script must be run on a particular agent, the server accesses the script from its local \scripts directory and sends it to the appropriate agent. If you deselect Agent may run scripts sent by server, you must make sure that the script is stored in every agent's \scripts directory.
Note: If a script with the same name exists on both the server and the agent, the script stored on the agent takes precedence. This is useful if you want to run a special version of a script on one of your agents, while running a different version of the same script across all the other agents in your environment.
Click Install to continue. The TotalStorage Productivity Center Installer starts to install the Data Agents and the Common Agent on the remote target computers.
The installation status (see Figure 6-30 on page 221) is shown in the upper pane and you can monitor the installation log in the lower pane. When the installation is complete, you will see the window in Figure 6-31 on page 221.
When completed, you will see the status Probed for all successfully installed Data Agents as shown in Figure 6-31.
Note: You can review the Installation log for each computer by double-clicking the computer name or the IP address.
16.Click Done to continue. The remote installation of the Data Agent and the Common Agent is now finished. The remote computers are now ready for the remote installation of the Fabric Agents because they now run a Common Agent. Because we selected to install both the Data Agent and the Fabric Agent, the TotalStorage Productivity Center Installer opens the panel shown in Figure 6-32, where we can select on which of the remote computers we want to install the remote Fabric Agents. 17.Select the remote computers where the Fabric Agents should be deployed. Click Next.
Figure 6-32 Remote installation, select remote computers to deploy Fabric Agents
18.Review the list of the computers as shown in Figure 6-33 and click Next.
Figure 6-33 Remote installation, list of selected computers for remote Fabric Agent installation
You can see a panel as shown in Figure 6-34 on page 223, indicating the progress of the remote Fabric Agent deployment.
19.The TotalStorage Productivity Center Installer (see Figure 6-35) shows you a panel with a summary of the remote Fabric Agent deployment process. Click Next.
20.Verify that the installation has completed successfully using the methods summarized in 6.6, Verifying the installation on page 223.
3. Right-click the entries to check whether the TotalStorage Productivity Center Server can reach the agent and whether the agent is up and running. The same context menu also lets you view the log files and set up a trace for each agent.
4. On the servers, the install process creates a directory structure, which should be similar to the one shown in Figure 6-37 on page 225 for a Windows server. For UNIX and LINUX systems, the tree is created under /opt/IBM/ by default and otherwise looks the same.
Figure 6-37 Directory tree for Data and Fabric Agent installation
Note: The remote installer, however, creates a different tree structure by default. It installs the Data Agent and the Common Agent into the following directory: C:\Program Files\Tivoli\ep on Windows and /usr/tivoli/ep on UNIX and LINUX. 5. Under Windows, look for a service called IBM Tivoli Common Agent (see Figure 6-38).
The Data Agent and Fabric Agent do not show up as a service. They run under the context of the Common Agent.
6. Under UNIX and LINUX, look for two processes:
The nonstop process, which launches the Common Agent process
The Common Agent itself
Run the ps -ef command; the results are similar to Figure 6-39.
Figure 6-39 UNIX and LINUX process status after agent installation
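This check can be made scriptable with a small filter over the ps output. The process-name patterns below ("nonstop" and "agent") are assumptions based on the description above, not documented fixed process names; adjust them to what Figure 6-39 shows in your environment.

```shell
# Sketch: count candidate Common Agent processes in ps -ef output,
# read from stdin. The match patterns are assumptions.
count_agent_procs() {
  grep -i -e 'nonstop' -e 'agent' | grep -v grep | wc -l
}

# Usage: ps -ef | count_agent_procs
# A result of at least 2 would suggest both expected processes are up.
```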
6.6.1 Logfiles
The agent installation process creates a number of logs, which can be checked to retrace the installation process and also to monitor activity during normal operation. These logs provide detailed information and are especially useful in case of a failed installation, to determine the reason for the failure and to troubleshoot the installation. They are spread over several locations. Note: The default <InstallLocation> differs between local and remote installations.
The following operational logs are for the Data Agent when installed locally for Windows:
<InstallLocation>\ca\subagents\TPC\Data\log\<hostname>\
The following operational logs are for the Data Agent when installed locally for UNIX and LINUX:
<InstallLocation>/ca/subagents/TPC/Data/log/<hostname>/
The following operational logs are for the Data Agent when installed remotely for Windows:
<InstallLocation>\logs\
<InstallLocation>\subagents\TPC\Data\log\<hostname>
The following operational logs are for the Data Agent when installed remotely for UNIX and LINUX:
<InstallLocation>/logs/
<InstallLocation>/subagents/TPC/Data/log/<hostname>
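On a UNIX or LINUX system, the local-install log directory above can be assembled and listed as follows. This is a sketch: the install location shown is the local default from earlier in this chapter and is an assumption for your environment.

```shell
# Sketch: build and list the local Data Agent log directory.
INSTALL_LOCATION=/opt/IBM/TPC            # assumed local-install default
LOGDIR="$INSTALL_LOCATION/ca/subagents/TPC/Data/log/$(hostname)"
ls -l "$LOGDIR" 2>/dev/null || echo "no logs found in $LOGDIR"
```

An empty or missing directory at this point usually means the agent was installed remotely (use the <InstallLocation>/logs/ paths instead) or the install did not complete.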
Although the context menu for the Fabric Agents also offers the option to delete the agent, this procedure only erases the entries for the Fabric Agents from the navigation tree and the TotalStorage Productivity Center repository; it does not uninstall the agent on the remote computer. Note: If you perform a remote uninstallation of the Data Agent on a remote computer where no Fabric Agent is installed, the remote uninstallation process also uninstalls the Common Agent. Otherwise, the remote uninstallation process keeps the Common Agent on the target computer.
2. Select the entry TotalStorage Productivity Center and click Change/Remove. On a UNIX or LINUX machine, go to the directory /opt/IBM/TPC/_uninst/ and run the uninstall program. In the first panel, shown in Figure 6-42, select the preferred language for the uninstaller.
Click OK to continue. The TotalStorage Productivity Center Installer welcome panel, shown in Figure 6-43 on page 231, is displayed.
Click Next. 3. In the panel shown in Figure 6-44, you select the components you want to uninstall. The TotalStorage Productivity Center Installer offers all components which it has detected on your system. In our example, the Data Agent and the Fabric Agent are installed and we want to uninstall them both in one step. Although there is a check box to force the uninstallation of the Common Agent, it is not necessary to check this box, because the Common Agent is uninstalled automatically when the last subagent is removed.
Click Next to continue. You can see a summary of the components which the TotalStorage Productivity Center Installer is about to uninstall, as shown in Figure 6-45 on page 232.
Click Next to continue. 4. The system uninstalls the selected components. You can see the panel in Figure 6-46 when the uninstallation is finished. You must reboot a Microsoft Windows machine. Click Finish.
If you have installed the Fabric Agent and the Data Agent remotely, you cannot uninstall both agents and the Common Agent in one step. We recommend that you uninstall the Fabric Agent first, then uninstall the Data Agent, which uninstalls the Common Agent also. 1. To invoke the uninstall on a Microsoft Windows computer, select Start → Settings → Control Panel → Add/Remove Programs. You now see a separate entry for the Data Agent. The uninstallation for the Fabric Agent is invoked by selecting TotalStorage Productivity Center and clicking Change/Remove, as shown in Figure 6-47 on page 233.
Figure 6-47 Local agent uninstall, Add or Remove Programs for remotely deployed agents
2. The uninstallation dialog is the same as described previously, except that the installer offers to uninstall only the Fabric Agent. You are not able to select the Data Agent. 3. When the uninstallation of the Fabric Agent is complete, you must reboot your system and again select Start → Settings → Control Panel → Add/Remove Programs. 4. Now the TotalStorage Productivity Center entry is gone and you must select TotalStorage Productivity Center for Data - Agent, as shown in Figure 6-48 on page 234. Click Change/Remove.
Figure 6-48 Local agent uninstall, Add or Remove Programs for remotely deployed agents
Now a different installer is presented, which is the one you used to perform the remote installation of the Data Agent, as shown in Figure 6-49. 5. Although there is a radio button, you are not able to make any selections. Click Next.
Figure 6-49 Local agent uninstall, uninstall of remotely deployed Data Agent
A panel is shown (Figure 6-50 on page 235), where you can monitor the log of the uninstallation process in the lower pane.
Figure 6-50 Local agent uninstall, uninstall of remotely deployed Data Agent
When the uninstallation completes, a panel announces the successful uninstallation of the Data Agent (see Figure 6-51).
Figure 6-51 Local agent uninstall, uninstall of remotely deployed Data Agent completed
File\IBM\TPC\data\upgrade (for Windows) and /opt/IBM/TPC/data/upgrade (for LINUX and UNIX) path of your TotalStorage Productivity Center Server installation. Note that if you do not copy the upgrade.zip file, you will break all of your agents. The tree structure should look similar to Figure 6-52.
Figure 6-52 Data Agent upgrade, copy the upgrade.zip file to the server upgrade directories
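The per-platform copy in step 1 can be sketched as a small function. The source and destination layouts used here (one subdirectory per platform, both under the CD mount point and under the server's upgrade directory) are assumptions patterned on the description and Figure 6-52; adjust them to the layout you actually see on the CD.

```shell
# Sketch: copy upgrade.zip for each platform into the server's
# upgrade directory. Arguments: source dir, destination dir, platforms.
copy_upgrades() {
  src=$1; dest=$2; shift 2
  for platform in "$@"; do
    mkdir -p "$dest/$platform"
    cp "$src/$platform/upgrade.zip" "$dest/$platform/"
  done
}

# Example invocation (mount point and platform names are assumptions):
# copy_upgrades /mnt/cdrom /opt/IBM/TPC/data/upgrade windows aix linux
```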
2. After copying the files you need to the respective directories, launch the TotalStorage Productivity Center GUI and log on. In the navigation tree, select Administrative Services → Configuration and right-click Data Agent Upgrade, as shown in Figure 6-53 on page 237.
3. You will then see a panel similar to Figure 6-54 where you can select the computers for which you want to perform a Data Agent upgrade.
Figure 6-54 Data Agent upgrade, create an upgrade job, select computers to upgrade
You can either select Computer Groups (if you have defined them in TotalStorage Productivity Center) or select single computers or all computers which have Data Agents installed. Verify that the Enable box in the top right corner of the panel is checked. 4. In the When to Run tab, specify whether the upgrade should run immediately or be scheduled for a later time. 5. The Options tab gives you some options for the upgrade of the Data Agents. You can specify whether the Data Agent should be overwritten if the server already has the upgraded level installed, and you can select the correct language option. 6. In the Alert tab, you can choose which alerts the TotalStorage Productivity Center Server will generate for the upgrade job. 7. After you have reviewed all tabs, select File → Save. You must specify a name for the job. The upgrade job will be saved and run either immediately or at the time you chose in the When to Run tab. 8. To check whether the upgrades have completed successfully, right-click the Data Agent upgrade and select Refresh. You can see an entry for the upgrade job you submitted. Click the plus sign (+) to the left of your job name, and an entry with the time stamp of the submission of your job opens. Click this entry to see the log for the job in the right pane, as shown in Figure 6-55.
Figure 6-55 Data Agent upgrade, job log of the upgrade job
9. You can click the symbol next to the job log entry and examine the log for your upgrade job.
Chapter 7.
7.1 Introduction
After you have completed the installation of TotalStorage Productivity Center, you must install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents. Note: For the remainder of this chapter, we refer to TotalStorage Productivity Center, TotalStorage Productivity Center for Fabric, and TotalStorage Productivity Center for Data simply as TotalStorage Productivity Center. TotalStorage Productivity Center uses SLP as the method for CIM clients to locate managed objects. The managed devices may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device can be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter we describe the steps for:
Planning considerations for the CIMOM
Installing and configuring the CIM agent for Enterprise Storage Server and DS6000/DS8000
Installing and configuring the CIM agent for the DS4000 family
Configuring the CIM agent for SAN Volume Controller
Planning considerations for Service Location Protocol (SLP)
SLP configuration recommendations
Setting up a Service Location Protocol Directory Agent (SLP DA)
General performance guidelines
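If the OpenSLP tools happen to be installed on a management host, you can query for advertised WBEM services (the service type under which CIM agents register with SLP) from the command line. This is a sketch; whether slptool is present and whether any services are registered depends entirely on your environment.

```shell
# Sketch: list WBEM services advertised through SLP, if slptool
# (from the OpenSLP package) is available on this host.
check_slp() {
  if command -v slptool >/dev/null 2>&1; then
    slptool findsrvs service:wbem
  else
    echo "slptool not installed"
  fi
}
check_slp
```

An empty result with slptool installed usually means no CIM agent is advertising itself on the local subnet, which is one reason to consider an SLP Directory Agent, as discussed later in this chapter.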
You can install the CIM agent code on the same server as the device-specific management software, or you can install it on a separate server.
Attention: At this time, only a few devices come with an integrated CIM agent; most devices need an external CIMOM for CIM-enabled management applications (CIM clients) to be able to communicate with the device. To ease installation, IBM provides Integrated Configuration Agent Technology (ICAT), a bundle that includes the CIMOM, the device provider, and an SLP SA.
You must have the CIMOM-supported firmware level on the storage devices. If you have an incorrect version of the firmware, you might not be able to discover and manage the storage devices that have the incorrect level of software or firmware installed.

The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. As a result, we recommend a dedicated server for the CIMOM agent, although you can configure the same CIMOM agent for multiple devices of the same type.

Locate the server containing the CIMOM within the same data center as the managed storage devices. This is in consideration of firewall port requirements: it is typically a best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you might be able to limit the firewall openings to only the ports that TotalStorage Productivity Center needs to communicate with the CIMOMs.

We strongly recommend separate and dedicated servers for the CIMOM agents and TotalStorage Productivity Center, because of resource contention, TCP/IP port requirements, and system services coexistence. We also highly recommend a separate system for each CIMOM that you need to install. These do not need to be dedicated servers; it depends on the workload you expect the servers to support. This recommendation is based on the following:
- CIMOMs by default use either port 5988 (HTTP) or port 5989 (HTTPS) for communication. If you collocate two CIMOMs, you will have a port conflict.
- Most CIMOMs have different interoperability namespaces. If you collocate two CIMOMs, you could have an interoperability namespace mismatch.

The user ID that the CIMOM uses to communicate with the storage device must have superuser, administrator, or equivalent read and write authorization on the storage device.
This level of authorization is required to manage, manipulate, and configure the storage device, as well as to gather performance data. Read-only authorization is insufficient for all but basic inventory collection tasks.
TotalStorage Productivity Center implements many of its disk, tape, and fabric management functions through exploitation of the SMI-S 1.0.2 and 1.1 levels of the standard. SMI-S 1.1 supports all of the functions of SMI-S 1.0.2 plus additional functionality (such as performance management). If you are interested in finding out what is new in SMI-S 1.1 compared to 1.0.2, or just generally, you can visit the following Web site (subscription needed): http://www.snia.org/smi/tech_activities/smi_spec_pr/spec You can find a list of all device vendors that are participating in the SMI-S initiative and that have successfully passed the SNIA Conformance Testing Program by visiting the following Web site: http://www.snia.org/ctp/conformingproviders If you select one of the vendors, you are brought to a vendor-specific page where certified devices are listed together with the minimum software requirements for the SMI-S agent. We strongly suggest you visit this Web site to find the latest information about conforming SNIA CTP provider devices. Any device that is SMI-S 1.0.2 or 1.1 compliant can be monitored and managed by TotalStorage Productivity Center because of the standard approach adopted on both sides.
IBM Support for the IBM Common Information Model (CIM) Agent for DS Open API The IBM Web site for the IBM CIM Agent for DS Open API code is located at:
http://www-1.ibm.com/support/search.wss?rs=1118&tc=STC4NKB&dc=D400&dtm
The software components have requirements that have to be checked in advance. Verify the minimum ESS LIC level and ESSCLI version documented on these Web pages: IBM ESS CIM Agent Compatibility Matrix
http://www-1.ibm.com/support/docview.wss?rs=586&context=STHUUM&dc=DB520&dc=DA460&dc= DB540&uid=ssg1S1002397&loc=en_US&cs=utf-8&lang=en
Figure 7-2 on page 243 shows the interaction between the DS Open API and the storage subsystems it manages.
3. In the new CDA Install History window, you should be able to locate the latest entry, which maps a LIC level to a bundle version. This information makes it possible to find the relationship between the CIM Agent, LIC level, and bundle version. Be sure that the prerequisites are met before proceeding with the software installation.
3. The resulting panel holds the first part of the information that you need. The second part is found on the following Web site:
http://www-1.ibm.com/support/dlsearch.wss?rs=1112&lang=en&loc=en_US&r=10&cs= utf-8&rankfile=0&cc=&spc=&stc=&apar=include&q1=ssg1*&q2=&dc=D420&atrn=SWPlatform&atrv= all&atrn1=SWVersion&atrv1=all&tc=HW2A2&Go.x=17&Go.y=9
4. Here you can cross-reference the internal LIC level to a release level.
5. Lastly, on the IBM CIM Agent for DS Open API Compatibility Matrix site, you can find the CIM Agent version required for your DS6000.
1. On the license agreement panel in Figure 7-4, select I accept the terms of the license agreement and click Next to continue.
2. Verify the target operating system as shown in Figure 7-5 on page 246 and click Next to continue.
3. The panel in Figure 7-6 specifies the CLI installation directory. Accept the default or enter the appropriate directory and click Next to continue.
4. The next panel is a summary of the installation requirements as shown in Figure 7-7 on page 247. Click Next to continue.
5. You can see a panel showing the progress of the installation. When the installation is complete, you can see a panel similar to Figure 7-8. Click Next to continue.
6. The next panel contains the ESS CLI readme file (see Figure 7-9 on page 248). Read the information and click Next.
7. The next panel (see Figure 7-10) gives you the option to restart your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that will not be in effect for the ESS CIM Agent, which runs as a service, until you reboot your system.
8. After your server has restarted, verify that the ESS CLI is installed:
a. Click Start → Settings → Control Panel.
b. Double-click Add/Remove Programs.
c. Verify that there is an IBM ESS CLI entry.
9. Verify that the ESS CLI is operational and can connect to the ESS. From a command prompt window, issue the following command:
esscli -u userid -p password -s 9.1.11.111 list server
Where:
- 9.1.11.111 represents the IP address of the Enterprise Storage Server
- userid represents the Enterprise Storage Server Specialist user name
- password represents the Enterprise Storage Server Specialist password for the user name
Figure 7-11 shows the response from the esscli command.
At the time of writing, we used the code contained in the package ibm-ds-smis-agent-5.1.0.45.zip. To install the DS Open API on your Windows system, perform the following steps:
1. Log on to your system as the local administrator.
2. Insert the CIM Agent for DS Open API code CD into the CD-ROM drive. Alternatively, use Windows Explorer to select the directory where you have stored the DS Open API code.
3. Start setup.exe, found in ......\ibm-ds-smis-agent-5.1.0.45\W2K. If you use a CD and have autorun mode set on your system, the InstallShield Wizard launchpad starts automatically. Look for a launchpad similar to Figure 7-12. Open and review the readme file from the launchpad menu. Then click Installation Wizard. The Installation Wizard starts the setup.exe program and shows the Welcome panel in Figure 7-13 on page 251. The DS CIM Agent program starts within 15 - 30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps:
a. Use a command prompt or Windows Explorer to change to the Windows directory on the CD.
b. If you are using a Command Prompt window, run launchpad.bat.
c. If you are using Windows Explorer, double-click the launchpad.bat file.
4. The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 7-13 on page 251).
5. The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 7-14 on page 252).
6. The window shown in Figure 7-15 on page 253 only opens if no valid ESS CLI is installed. If you do not plan to manage an ESS from this CIM agent, click Next. Important: If you plan to manage an ESS from this CIM agent, click Cancel. Install the ESS CLI following the instructions in 7.4.3, ESS CLI Installation on page 244.
7. The Destination Directory window opens. Accept the default directory and click Next to continue (see Figure 7-16 on page 254).
8. The Updating CIMOM Port window opens (see Figure 7-17 on page 255). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup, we used default port 5989. Note: As mentioned throughout this book, we do not recommend using anything other than the default ports. Use the following commands to check which ports are in use:
netstat -a
netstat -an | find "598"
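An equivalent programmatic check (a sketch, not part of the installer) can confirm whether the default CIM-XML ports are free before installation by attempting to bind them:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if the given TCP port can be bound (nothing is using it)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Check the default CIM-XML ports: 5988 (HTTP) and 5989 (HTTPS).
free = {port: port_is_free(port) for port in (5988, 5989)}
```

If either port reports as busy, another CIMOM (or other service) is already using it, which is exactly the collocation conflict described earlier in this chapter.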
9. The Installation Confirmation window opens (see Figure 7-18 on page 256). Click Install to confirm the installation location and file size.
The Installation Progress window opens (see Figure 7-19 on page 257), indicating how much of the installation has completed.
10.When the Installation Progress window closes, the Finish window opens (see Figure 7-20 on page 258). Select View post installation tasks to view the post-installation tasks Readme when the wizard closes. We recommend that you review the post-installation tasks. Click Finish to exit the installation wizard (Figure 7-20 on page 258). Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the DS CIM Agent for Windows is installed.
11.If you checked the view post installation tasks box, then the window shown in Figure 7-21 opens. Close the window when you have finished reviewing the post installation tasks.
12.The launch pad window (Figure 7-12 on page 250) opens. Click Exit.
3. If SLP is not started, right-click SLP and select Start from the pop-up menu. Wait for the Status column to change to Started, and make sure that the Startup Type is Automatic.
3. If the CIM Object Manager is not started, right-click CIM Object Manager - DS Open API and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the DS CIM Agent has been installed successfully on your Windows system. Next, perform the configuration tasks. Tip: All commands used for configuration and verification of the CIM Agent are described either in the readme files or in the InstallGuide.pdf, which are part of the package. It is not advisable to use executables found in the working directory of the CIM Agent.
If the ESS is on a different subnet than the DS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. You must set up the firewall between the ESS subsystem and the CIMOM to allow bidirectional traffic between them; rules might need to be put in place to open specific ports. You must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Verify the connection between the ESS devices and the CIMOM server using the rsTestConnection command of the ESS CLI code. When you are satisfied that you can authenticate and receive the ESS CLI heartbeat from all registered ESS subsystems, you may proceed with entering the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, because it retries the authentication. Figure 7-25 shows an example of the rsTestConnection.exe command to check the connection to your ESS subsystems.
C:\Program Files\IBM\ESScli>rsTestConnection.exe /v /s 9.12.6.29
Using 9.12.6.29 as server name
HeartBeat to the server was successful.
command successful
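To repeat this heartbeat check for several ESS subsystems, you could script the calls. This sketch only assembles the rsTestConnection command lines; the install path and IP addresses are assumptions taken from this chapter's examples, and each command would be run with subprocess.run() on the CIMOM server.

```python
# Sketch: assemble rsTestConnection.exe invocations for a list of ESS
# subsystems. Path and addresses are illustrative examples only.
ESSCLI_DIR = r"C:\Program Files\IBM\ESScli"   # assumed ESS CLI install path
ess_addresses = ["9.12.6.29", "9.12.6.30"]    # example ESS cluster IPs

def connection_commands(addresses):
    """Build one rsTestConnection.exe command line per ESS address."""
    return [
        [ESSCLI_DIR + r"\rsTestConnection.exe", "/v", "/s", ip]
        for ip in addresses
    ]

commands = connection_commands(ess_addresses)
```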
Perform the following steps to configure the DS CIM Agent:
1. Configure the ESS CIM Agent with the information for each Enterprise Storage Server that the ESS CIM Agent is to access. Select Start → Programs → CIM agent for the IBM TotalStorage DS Open API → Enable DS Communications, as shown in Figure 7-26. The setdevice.bat utility starts.
2. Type help to see the available commands, as shown in Figure 7-28. Note that the addess, lsess, and rmess commands are for the ESS-family subsystems. The addessserver, lsessserver, and rmessserver commands are for the DS6000, DS8000, and the Copy Services server in the ESS-family subsystems.
3. Enter the following commands for each DS6000, DS8000, or ESS copyservices server that is to be configured. Both clusters must be added to the CIM Agent.
addessserver <ip> <user> <password>
Where:
- <ip> represents the IP address of the storage subsystem
- <user> represents the DS Storage Server HMC or SMC user name
- <password> represents the DS Storage Server password for the user name
Attention: If the user name or password entered is incorrect, or the DS CIM agent cannot connect to the storage subsystem, this causes an error and the DS CIM Agent will not start and stop correctly. Use the following command to remove the entry that is causing the problem, and reboot the server:
rmessserver <ip>
Whenever you add or remove a storage subsystem from CIMOM registration, you must restart the CIMOM to pick up the updated device list. Figure 7-29 on page 263 shows a sample output of the systems in our lab environment.
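When many subsystems must be registered before that single CIMOM restart, you can prepare the setdevice input up front. The following sketch (not a product utility; device addresses and credentials are placeholders) simply formats the addessserver lines for a list of devices:

```python
# Sketch: format setdevice.bat "addessserver" input lines for a batch of
# DS6000/DS8000 devices. Addresses and credentials below are placeholders.

def addessserver_lines(devices):
    """devices: iterable of (ip, user, password) tuples."""
    return [f"addessserver {ip} {user} {password}"
            for ip, user, password in devices]

lines = addessserver_lines([
    ("9.12.6.17", "admin", "secret"),   # placeholder credentials
])
```

Remember that after feeding these lines to setdevice, the CIMOM must still be restarted once so that it picks up the updated device list.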
Figure 7-29 List some ESS subsystems versus some DS8K subsystems
Note: The CIMOM collects and caches the information from the defined storage subsystems at startup time. The first time it is started might take longer than subsequent starts.
An example of the addess command is:
addess <ip> <user> <password>
Where:
- <ip> represents the IP address of the cluster of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name
Type this command for each ESS (as shown in Figure 7-30).
2. Restart the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Start CIMOM service. A Command Prompt window opens to track the progress of the CIMOM startup. If the CIMOM has started successfully, the message shown in Figure 7-32 is displayed:
Note: Restarting the CIMOM might take a while, because it connects to the defined storage subsystems and caches that information for future use. As an alternative, you can use the Services applet in the Windows Control Panel.
3. You should now have a look at the cimom.log file. Figure 7-33 on page 265 is a sample output from our lab; the most important events are highlighted in bold.
CMMOM0200I SSG/SSD CIM Object Manager CMMOM0203I **** CIMOM Server Started **** CMMOM0204I CIMOM Version: 5.1.0.45 CMMOM0205I CIMOM Build Date: 01/05/06 Build Time: 07:26:32 PM CMMOM0206I OS Name: Windows 2000 Version: 5.0 CIMServer[initialize]: Namespace \root\ibm initialized CMMOM0400I Authorization module = com.ibm.provider.security.EnhancedAuthModule CMMOM0410I Authorization is active CMMOM0901I IndicationProcessor started CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303 CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303 CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303 CMMOM0902I Indication subscription created for http://9.1.38.36:1272/1303 CMMOM0905I 4 indication subscriptions re-started EssProvider[initialize]: ESS Provider is starting... EssProvider[initialize]: CLI already configured for ESS 2105.22513. Adding aditional information 9.12.6.30 EssProvider[initialize]: End initialize ESS Provider NCache EssProvider[initialize]: com.ibm.provider.common.ProviderContext@26a15b3f : finished for 2107.75BALB1 PrimeCache[run]: Alternative ip = 9.12.6.30 PrimeCache[run]: Starting Thread Creating cache for ip 9.12.6.29 EssProvider[initialize]: ESS 2105.22513 configured at 9.12.6.29 EssProvider[initialize]: ESS 2107.75BALB1 configured at Internal Service 9.12.6.17 EssProvider[initialize]: ESCON Connectivity interval set to: 60 minutes CMMOM0403I Platform is Windows CMMOM0404I Security server starting on port 5989 CMMOM0409I Server waiting for connections... CMMOM0500I Registered service service:wbem:https://9.1.38.35:5989 with SLP SA PrimeCache[run]: Ending Thread Creating cache for ip 9.12.6.29
C:\Program Files\IBM\cimagent> setuser -u superuser -p passw0rd Application setuser started in interactive mode To terminate the application enter: exit To get a help message enter: help >>> help Available commands: ? exit rmuser adduser h[elp] setentry chuser lsuser setoutput >>> Figure 7-34 Available commands for setuser.bat
The users that you configure to have authority to use the CIMOM are defined uniquely to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names. Note: We recommend that you change the superuser password to something other than the default, or delete the superuser user ID after defining a new CIMOM user ID and password. Follow these steps to define users to the CIMOM:
1. Open a Command Prompt window and change directory to the CIM Agent directory, for example:
C:\Program Files\IBM\cimagent
2. Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM. 3. Type the command adduser cimuser cimpass in the setuser interactive session to define new users. cimuser represents the new user name to access the CIMOM. cimpass represents the password for the new user name to access the CIMOM. 4. Close the setuser interactive session by typing exit. 5. The users which you configure to have authority to use the CIMOM are now functional.
Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services panel, select the CIM Object Manager service and verify that the Status is shown as Started, as shown in Figure 7-35.
Verify that SLP is active by selecting Start → Settings → Control Panel. Double-click Administrative Tools, then double-click Services. You should see a panel similar to Figure 7-23 on page 259. Ensure that the Status is Started. Verify that the CIMOM has a dependency on SLP. This was configured automatically when you installed the CIM agent software. Verify this by selecting Start → Settings → Control Panel. Double-click Administrative Tools, then double-click Services. Select Properties on Service Location Protocol, as shown in Figure 7-36.
Click Properties and select the Dependencies tab, as shown in Figure 7-37. You must ensure that CIM Object Manager has a dependency on Service Location Protocol. This should be the default.
Verify CIMOM registration with SLP by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Check CIMOM Registration. A window opens displaying the WBEM services, as shown in Figure 7-38. These services have either registered themselves with SLP, or you have registered them explicitly with SLP using slptool. If you changed the default ports for a CIMOM during installation, the changed port number should be listed here. It might take some time for a CIM Agent to register with SLP.
Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the CIMOM will attempt to contact each storage subsystem registered to it. Therefore, the startup may take some time, especially if it is not able to connect and authenticate to any of the registered devices. Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the
CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center superuser name and password in order for TotalStorage Productivity Center to have the authority to manage the CIMOM. The verifyconfig command checks the registration of the ESS CIM Agent and checks that it can connect to the ESSs. At the ITSO lab, we configured two ESSs (as shown in Figure 7-39).
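The WBEM service URLs that SLP reports follow the pattern service:wbem:<scheme>://<host>:<port>. A small parser (a sketch; the sample line mirrors the registrations shown in this chapter) can extract the host and port for each registered CIMOM:

```python
import re

# Matches URLs of the form service:wbem:https://9.1.38.35:5989
SERVICE_RE = re.compile(r"service:wbem:(https?)://([\w.\-]+):(\d+)")

def parse_wbem_services(text):
    """Return (scheme, host, port) tuples for each WBEM service URL found."""
    return [(m.group(1), m.group(2), int(m.group(3)))
            for m in SERVICE_RE.finditer(text)]

# Sample output line such as slptool/verifyconfig might report
# (the trailing lifetime value is typical of SLP listings).
sample = "service:wbem:https://9.1.38.35:5989,65535"
services = parse_wbem_services(sample)
```

This makes it easy to confirm, for example, that a CIMOM registered on a non-default port actually appears with that port.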
After entering the CIMOM information, test the connectivity to your CIMOM. Click the Test CIMOM connectivity before adding box. This is shown in Figure 7-41 on page 270.
The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM agent installation time. The second entry indicates that it has started successfully and is waiting for connections.
If you still have problems, refer to the DS Open Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the \doc directory at the root of the CIM Agent CD.
2. Run startcimbrowser. The WBEM browser in Figure 7-44 on page 272 opens. The default user name is superuser and the default password is passw0rd. If you have already changed these using the setuser command, provide the new user ID and password. This should be set to the TotalStorage Productivity Center user ID and password.
When login is successful, you should see a panel similar to the one in Figure 7-45.
3. Now that the CIM Browser has started, the following scenario will take you through the steps to find information about the installed subsystems. The scenario assumes that you have subsystems registered within your CIMOM. In the CIM Workshop, follow the path down to the element shown in Figure 7-46 on page 273.
4. After selecting the physical element, select Instances → Show, as shown in Figure 7-47.
In Figure 7-48 on page 274, you can see elements from our subsystem and their corresponding values.
This concludes the usage sample for the CIM Workshop. Note that other CIMOMs have their own CIM workshops. They might look and behave a little differently than this one, but the principle is the same.
The following Engenio sites are useful for the most current CIMOM levels (Note: The information contained on this Web site is not for use with TotalStorage Productivity Center.):
http://www.engenio.com/products/smi_provider.html
Go to the following Web site for information about TotalStorage Productivity Center:
http://www.engenio.com/products/smi_provider_archive.html
Before going any further, be sure to run one of the supported firmware levels mentioned there. Attention: There are Engenio Web sites where you can find the very latest SMI-S provider, but that code is not appropriate for use with TotalStorage Productivity Center. Throughout the remainder of this chapter, the terms DS4000 and FAStT are used interchangeably.
The code level used for this redbook is V1.1.0.614. This information can be found in the following file on your CIMOM server:
C:\Program Files\EngenioProvider\SMI_SProvider\doc\ChangeLog.txt
1. From the Web site mentioned previously, select the operating system used for the server on which the Engenio SMI-S Provider is to be installed. For example, for Windows you download a setup.exe file. Save it to a directory on the server on which you will be installing the Engenio SMI-S Provider.
2. Launch the setup.exe file to begin the Engenio SMI-S Provider installation. The InstallShield Wizard for Engenio SMI-S Provider window opens (see Figure 7-49). Click Next to continue.
3. The Engenio License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 7-50).
4. The System Info window opens. The minimum requirements are listed along with the install system disk free space and memory attributes as shown in Figure 7-51 on
page 276. If the target system fails the minimum requirements evaluation, then a notification window will appear and the installation will fail. Click Next to continue.
5. The Choose Destination Location window opens. Click Browse to choose another location or click Next to begin the installation of the DS4000/FAStT CIM agent (see Figure 7-52).
The InstallShield Wizard prepares and copies the files into the destination directory. Figure 7-53 on page 277 shows the installation progress.
6. In the Enter IPs and Hostnames window, enter the IP addresses and hostnames of the DS4000 devices this CIM Agent will manage as shown in Figure 7-54.
7. Use the Add New Entry button to add the IP addresses or host names of the DS4000 devices with which this DS4000 CIM Agent will communicate. Enter one IP address or host name at a time until all the DS4000 devices have been entered and click Next (see Figure 7-55 on page 278).
Important: Do not enter the IP address of a DS4000 device in multiple DS4000 CIM Agents within the same subnet. This can cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the DS4000 devices.
8. If the list of host names or IP addresses has been previously written to a file, use the Add File Contents button, which opens Windows Explorer. Locate and select the file, then click Open to import the file contents. The file in which all the IP addresses or host names are collected is located in this Microsoft Windows directory:
C:\Program Files\EngenioProvider\SMI_SProvider\bin\arrayhosts.txt
When all the DS4000 device hostnames and IP addresses have been entered, your panel should list them all as shown in Figure 7-56.
Click Next to start the Engenio SMI-S Provider Service (see Figure 7-57 on page 279).
When the Service has started, the installation of the Engenio SMI-S Provider is complete (see Figure 7-58).
During the start of the service, the Engenio code processes all the entries in the arrayhosts.txt file. The configuration is stored in another file named:
C:\Program Files\EngenioProvider\SMI_SProvider\bin\providerStore
Whenever you change anything about the registered DS4000/FAStT controllers and restart the Engenio CIMOM, or when you run a new discovery, the providerStore and arrayhosts.txt files are updated with a new time stamp.
In this file, the IP addresses of installed DS4000 units can be reviewed, added, or edited. After editing this file, you must restart the service.
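If you maintain arrayhosts.txt by hand, a small helper can add entries without duplicating them. This is a sketch only (the helper is not part of the Engenio product; the file location is the one shown above):

```python
# Sketch: idempotently add a host entry to an arrayhosts-style file
# (one host name or IP address per line). Returns True if the entry
# was added, False if it was already present.

def add_array_host(path, host):
    try:
        with open(path) as f:
            existing = {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        existing = set()
    if host in existing:
        return False
    with open(path, "a") as f:
        f.write(host + "\n")
    return True
```

As noted above, any change made this way only takes effect after you restart the Engenio SMI-S Provider service.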
Another method is to use the tool supplied in the installation directory of your DS4000 CIM Agent. The tool is named runservice.bat and can be found in the path:
C:\Program Files\EngenioProvider\SMI_SProvider\bin
runservice.bat -s Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name Service Name: Engenio SMI-S Provider. Stopping Engenio SMI-S Provider Server. Engenio SMI-S Provider Server stopped. runservice.bat -q Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name Service Name: Engenio SMI-S Provider. Service Status: SERVICE_STOPPED runservice.bat -t Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name Service Name: Engenio SMI-S Provider. Starting Engenio SMI-S Provider Server. Engenio SMI-S Provider Server started. runservice.bat -q Looking in C:\Program Files\EngenioProvider\SMI_SProvider\bin\Service.ini for service name Service Name: Engenio SMI-S Provider. Figure 7-61 Sample usage of runservice.bat
Important: You cannot have the DS4000 management password set if you are using IBM TotalStorage Productivity Center, because the Engenio CIMOM has no capability to keep track of the user ID and password combinations that would be necessary to log in to the managed DS4000/FAStT subsystems.
At this point, you can run the following command on the SLP DA server, or any other system where SLP is installed, to verify that the DS4000 CIM agent is registered with the SLP DA:
slptool findsrvs service:wbem
The response from this command will show the available services which you may verify.
2. Enter a value for userid and password. Because the CIMOM runs unauthenticated, you can use any values (see Figure 7-63 on page 283).
3. After you log on, you must switch to another namespace by using the drop-down button. For the IBM DS4000, it is LSISSI, as shown in Figure 7-64 on page 284.
Figure 7-64 Change the namespace and navigate to the physical elements.
4. From the Action Menu, select Show Instances, as shown in Figure 7-65 on page 285.
For additional details on how to configure the SAN Volume Controller Console, refer to the IBM Redbook IBM TotalStorage Introducing the SAN Volume Controller and SAN Integration Server, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage Productivity Center superuser name and password (the account specified in the TotalStorage Productivity Center configuration panel, as shown in 7.7.1, Adding the SVC TotalStorage Productivity Center for Disk user account on page 286) matches an account defined on the SAN Volume Controller console. In our case, we implemented user name TPCSUID and password ITSOSJ. This user ID and password combination has to be used later when you authenticate the CIMOM agent in the TPC GUI. You might want to adopt a similar nomenclature and set up the user name and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center.
7.7.1 Adding the SVC TotalStorage Productivity Center for Disk user account
As stated previously, you should implement a unique user ID to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps: 1. Log in to the SAN Volume Controller console with a superuser account. 2. Click Users under My Work on the left side of the panel (see Figure 7-67 on page 287).
3. Select Add a user in the drop-down under Users panel and click Go (see Figure 7-68).
5. Enter the User Name and Password and click Next (see Figure 7-70 on page 289).
6. Select your candidate cluster and move it to the right, under Administrator Clusters (see Figure 7-71). Click Next to continue.
7. Click Next after you Assign service roles (see Figure 7-72 on page 290).
8. Click Finish after you Verify user roles (see Figure 7-73 on page 291).
9. After you click Finish, the Viewing users panel opens (see Figure 7-74).
To register the SAN Volume Controller Console, perform the following command on the SLP DA server:
slptool register service:wbem:https://ipaddress:5999
Where ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SVC console registration. Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center server, SLP registration will be automatic so you do not need to perform the SLP registration.
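The registration URL is simply the service:wbem prefix plus the console's address and the SVC CIMOM port (5999 in the command above). A sketch that builds and sanity-checks such a URL before you pass it to slptool (the example address is hypothetical):

```python
# Sketch: build the SLP registration URL for an SVC console, with a basic
# IPv4 sanity check. The address below is a placeholder, not a real console.

def svc_service_url(ipaddress: str, port: int = 5999) -> str:
    parts = ipaddress.split(".")
    if len(parts) != 4 or not all(p.isdigit() and 0 <= int(p) <= 255
                                  for p in parts):
        raise ValueError(f"not an IPv4 address: {ipaddress}")
    return f"service:wbem:https://{ipaddress}:{port}"

url = svc_service_url("9.43.86.49")   # placeholder console address
```

Catching a typo here is cheaper than registering a bad URL and waiting for a discovery to fail.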
As an example of exploiting SMI-S for switch monitoring, we focus our tests on the two McDATA switches available to us. McDATA, like Brocade, provides a proxy CIM Agent to connect to its devices. To manage a McDATA fabric using the SMI Specification V1.1, we need the software component listed on this site:
http://www.snia.org/ctp/conformingproviders/mcdata
The McDATA SMI-S Interface is a software product that provides SMI-S support for McDATA director and switch products. It exposes a WBEM (specifically, CIM-XML) interface for management.
Prerequisites
The McDATA SMI-S Interface solution requires the hardware and software described in the following sections. Table 7-1 lists the switch hardware supported for this release of McDATA SMI-S Interface and the minimum levels of firmware required by the hardware.
Table 7-1 Supported hardware

Manufacturer   Product          Minimum firmware supported
McDATA         ED-5000          Firmware version 4.1
McDATA         Intrepid 6064    EOS version 7.0
McDATA         Intrepid 6140    EOS version 7.0
McDATA         Sphereon 3016    EOS version 7.0
McDATA         Sphereon 3032    EOS version 7.0
McDATA         Sphereon 3216    EOS version 7.0
McDATA         Sphereon 3232    EOS version 7.0
McDATA         Sphereon 4300    EOS version 7.0
McDATA         Sphereon 4500    EOS version 7.0
McDATA         Sphereon 4700    EOS version 7.0
McDATA         Intrepid 10000   E/OSn version 6.3
The McDATA SMI-S Interface, version 1.1, implementation supports the following operating system (OS) platforms:
- Windows 2000 with Service Pack 2 or higher
- Solaris 9
Chapter 7. CIMOM installation and customization
If you use the EFCM, we recommend using the EFCM Proxy model.
The McDATA OPENconnectors SMI-S Interface User Guide can be downloaded from the Resource Library:
http://www.mcdata.com/wwwapp/resourcelibrary/jsp/navpages/index.jsp?mcdata_category=www_res ource&resource=index
In this section, we guide you through an EFCM Proxy Model installation.
Note: The figures shown were taken from an SMI-S level V1.0 installation. The installation at SMI-S level V1.1 might vary slightly.
1. Locate the installation image and launch Setup.exe.
2. When the panel shown in Figure 7-78 opens, click Next to proceed.
3. You are presented with the License Agreement. Select I Accept the terms of the license agreement as shown in Figure 7-79 and click Next.
The installer gives you the option of installing Service Location Protocol (SLP) software. If you already have SLP installed on your system, you can choose to install the SMI-S Interface only; in this case, you receive a warning message.
4. To install both components, select SMI-S Interface - SLP as shown in Figure 7-80 on page 298 and then click Next.
5. You are prompted for the installation folder. You can either change to a different directory or accept the default directory shown (see Figure 7-81). Click Next to continue.
6. The next panel is the summary panel (Figure 7-82 on page 299) that allows you to start the installation. Click Install to continue.
After the McDATA SMI-S interface is installed, the installer launches the SMI-S interface server configuration tool, as detailed in the next subsection.
2. A message displays indicating whether the test succeeded or failed. If the user ID or password fails, a message explains that the user ID could not be validated. If the network address fails, a message explains that the server could not be found at that address. If the test succeeds, a message confirms that the server was found and the user ID validated.
3. Click OK to save the changes.
4. When you click Test Connection, you are presented with the progress panel shown in Figure 7-85.
5. When the connection is successfully tested, you see the message shown in Figure 7-86. Click OK to continue.
6. The final panel shown in Figure 7-87 on page 301 indicates the McDATA provider is successfully installed. Click Finish to exit the installer.
7. Verify the configuration in the CIMOM properties file as shown in Figure 7-88.
C:\McDATA_Provider\wbemservices\cimom\bin\mcdataProductInterface.Properties
We suggest that you verify from the Windows Services panel that both the Service Location Protocol and SMI-S Interface Server services are running. If a JRE is not yet installed on your system, we recommend that you install Java JRE V1.4.2 before starting the configuration tool. Use these steps to access the server configuration window:
1. Open a Command Prompt window.
2. Change the working directory to the wbemservices\cimom\bin directory. For Windows, the default location is: C:\McDATA_Provider\wbemservices\cimom\bin
3. Run the McDATA_server_configuration_tool.bat file. The login window is displayed, as in Figure 7-89.
4. Enter your user ID and password in the appropriate fields and click Login. The default user ID is Administrator; the default password is password.
5. The Server Configuration window (see Figure 7-90) opens. You can set options, view users, and perform other tasks.
Use the HTTP Interface field to enable and disable the use of the Hypertext Transfer Protocol (HTTP) interface for the SMI-S Interface server. When the HTTP interface is enabled, a CIM client can communicate with the SMI-S Interface using HTTP. To enable the HTTP interface, select Enable; to disable it, clear the Enable check box.
Important: By default, HTTPS (secure) is always enabled.
6. If you click the Management Platform tab, you can import fabric definitions from EFCM into the SMI-S library. As shown in Figure 7-91 on page 303, you can:
- Import the Zone Library
- Import Nicknames
- Update the Login ID Password
7. When the connection between the SMI-S Interface and EFCM is established successfully, you can click Import Zone Library. You are presented with the panel in Figure 7-92, where the active zone set in your fabric is shown. By default, the zone sets are imported into the SMI-S library.
You are now ready to work with McDATA SMI-S through TotalStorage Productivity Center.
You are prompted to provide the following data:
- Host: The IP address or host name of the SMI-S Interface Server
- Port: By default, port 5989 is used for HTTPS, or 5988 if you enabled HTTP previously
- Username: Administrator
- Password: password
- Interoperability Namespace: /interop
- Protocol: Either HTTP or HTTPS, depending on your setting
- Display Name: Any name you want to identify the CIM Agent
- Test CIMOM connectivity: We suggest that you leave the box checked and let TPC perform the connection test
At the time of writing, we are using Brocade SMI-S Agent V110.3.0a. The Brocade SMI Agent (SMI-A) is a proxy agent for multiple fabrics. It resides on a separate host. The SMI-A does not require any modification or upgrade to deployed fabrics when it is deployed; all the support required in Brocade switches is already in place. At the time of writing, the following link can be used to obtain the Brocade SMI-S Agent:
http://www.brocade.com/support/SMIAGENT.jsp
Your software environment must meet the following requirements before you install the SMI-A:
- A minimum of 256 MB RAM
- One of the following operating systems (32-bit versions only):
  - Microsoft Windows 2000 Professional or Windows Server 2003
  - Sun Solaris version 8, 9, or 10
  - Linux Red Hat AS 3.0 or SUSE 9.1 Professional
Sun Microsystems JRE version 1.4.2_06 is bundled with the SMI-A and is automatically installed when the SMI-A is installed.
Attention: SMI-A 110.3.0 is not compatible with JDK 1.5.
The memory required for the SMI-A depends on the size of the fabric and the number of fabrics being managed. Increase the memory accordingly as the number of fabrics (number of switches) being managed increases. Also increase the memory heap size for the JVM based on the number of switches, switch ports, and devices.
2. You are prompted to accept the Brocade License Agreement. Select Accept the terms of the License Agreement and click Next, as shown in Figure 7-95 on page 306.
3. The installer checks the system prerequisites before starting the installation as shown in Figure 7-96. If the test is successful, click Next.
4. Before starting the installation, close any other open applications (Figure 7-97 on page 307). Click Next to continue.
5. You are prompted for the installation directory. Either accept the default or enter a custom directory, but be aware that the path must not contain any spaces (see Figure 7-98).
6. The installer notifies you that JRE 1.4.2 will be installed on your system. Click OK in the panel shown in Figure 7-99 to continue.
7. When the code has been copied, you are presented with the Fabric Manager Server Configuration panel. The SMI-A can connect to the Brocade Fabric Manager server database, if available, and retrieve historical port statistics. Otherwise, you can leave those fields blank (this step can be done later) and proceed by clicking Next, as shown in Figure 7-101.
Note: The Brocade Fabric Manager is not required.
The SMI-A installation wizard provides options for enabling mutual authentication for clients and indications. This can also be done after installation, without rerunning the installation wizard.
8. If you enable mutual authentication, you should disable the CIM-XML client protocol adapter (CPA) for the SMI-A so that the clients can use only HTTPS communication. If you do not disable the CIM-XML CPA, then any client can communicate with the SMI-A using HTTP access. The client and server certificates that are used in the mutual authentication are only private certificates that are generated by Brocade and are not verified by any certificate authority. To avoid using certificates, just select No and click Next, as shown in Figure 7-102.
The Enable Mutual Authentication for Indications panel is shown in Figure 7-103 on page 310.
9. You can restrict delivery of indications using mutual SSL authentication to only those clients that are trusted by the SMI-A. By default, mutual authentication for indications is disabled, which means that the SMI-A uses SSL to send CIM-XML indications to a WBEM client listener but does not attempt to verify the identity of that listener. When mutual authentication for indications is enabled, only those clients whose certificates have been added to the SMI-A Indications TrustStore can use SSL to receive indications from the SMI-A. That is, the SMI-A must have a TrustStore that contains a certificate for an entry in the client's Indications KeyStore.
The Enable Security for SMI Agent panel is shown in Figure 7-104.
10. When security is enabled, Windows authentication on the system where the SMI-A is installed is used by default for authenticating the username and password. If domain authentication is enabled on Windows, the corresponding domain is used instead. If you do not want to enable security for the SMI Agent, select No and click Next.
11. You are presented with the event settings panel (Figure 7-105 on page 311). The SMI-A delivers events in the form of two types of indications: alert indications and life-cycle indications. If you do not need events to be managed by the SMI-A, just click Next.
12. In the enable console or file logging panel, you can choose how to manage the SMI-A logs. If you want to enable both, select both Yes buttons and then click Next, as shown in Figure 7-106.
13.Select a file to be used as a log file (see Figure 7-107 on page 312).
14. Enter the details of each switch and director you want to manage through the SMI-A, as shown in Figure 7-108. You need to add both clusters in the M10 switch.
- Proxy IP: IP address of the switch or director
- Username: Username of a switch administrator
- Password: Password of the previously defined administrator
- Login-scheme: Leave standard
- Number of RPC handles: Leave the default of 5
15.When all your switches have been entered successfully, you see the complete list in the Configure Multiple Proxy panel as shown in Figure 7-109 on page 313. Click Next to continue.
16.All the configurations are saved to the configuration files listed in Figure 7-110. Take note of the locations and click Next.
17. You can now choose whether to start the SMI-A as a Windows service. If so, select Yes and click Next, as shown in Figure 7-111 on page 314.
18. The panel in Figure 7-112 indicates that you have installed the Brocade SMI Agent successfully. Click Done to exit the installer.
19.To check the Windows Service status, open the Services panel to verify that the Brocade SMI Agent is running as in Figure 7-113 on page 315.
3. Issue the command to stop the server:
./stop_server
4. Issue the following commands to assign the new ports:
echo HTTPPort=5970 > cimxmlcpa.properties
echo HTTPSPort=5971 > cimxmlscpa.properties
5. Issue the command to start the server:
./start_server
6. Verify that the CIMOM is now listening on the newly assigned ports.
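The echo commands in step 4 simply create one-line properties files next to the server scripts. The following self-contained sketch reproduces them in a scratch directory (in a real installation the files live in the SMI-A installation directory, and the netstat check is shown commented out because it depends on the running server):

```shell
# Work in a throwaway directory instead of the real SMI-A install path.
cd "$(mktemp -d)"

# Step 4: write the new port assignments. Note that the HTTP and HTTPS
# ports go into two different properties files.
echo "HTTPPort=5970"  > cimxmlcpa.properties
echo "HTTPSPort=5971" > cimxmlscpa.properties

cat cimxmlcpa.properties cimxmlscpa.properties

# Step 6: after restarting the server, you could verify the listeners with:
#   netstat -an | grep -E '597[01]'
```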
Display name: Any name that helps you identify the CIM Agent
We suggest that you keep the default selection for Check the CIMOM connectivity before adding.
2. When all the entries have been filled in as requested, click Save.
At the time of writing, the current version is Cisco MDS SAN-OS V2.0, and SNIA had not yet published which SAN-OS version conforms to SMI-S 1.1. Each switch or director in the Cisco MDS 9000 Family includes an embedded CIM server. The CIM server communicates with any CIM client to provide SAN management compatible with SMI-S. The CIM server includes the following standard profiles, subprofiles, and features as defined in SMI-S:
- Service Location Protocol version 2 (SLPv2)
- Server profile
- CIM indications
- Fabric profile
- Zoning Control subprofile
- Enhanced Zoning and Enhanced Zoning Control subprofile
- Switch profile, including the Blade subprofile
- xmlCIM encoding and CIM operations over HTTP as specified by the WBEM initiative
- HTTPS using Secure Sockets Layer (SSL): HTTPS is optional but provides enhanced security by encrypting communications between the CIM server and the CIM client.
Cisco MDS SAN-OS Release 2.0(1b) and later supports SLP V2, CIM indications, the Server profile, and SMI-S 1.0(2). The technical document describing how to set up the Cisco CIM server is the Cisco MDS 9000 Family CIM Programming Reference, which can be found at the following URL:
http://www.cisco.com/en/US/products/ps5989/products_programming_reference_guide_ chapter09186a0080211ac0.html
Be sure to run the correct level of SAN-OS to support the needed SMI Specifications version.
2. We suggest that you keep the default selection for Check the CIMOM connectivity before adding.
Important: The Interoperability namespace for Cisco switches is root/cimv2 (no leading slash).
7.11.1 Prerequisites
All software required to run the IBM SMI-S Agent for Tape is contained on the CD-ROM. The recommended operating system and hardware requirements are:
- IBM xSeries with an Intel Pentium 4 processor and a minimum of 550 MB of RAM
- SuSE Linux Enterprise Server 9
The SMI-S Agent for Tape code can be downloaded from the following Web site:
http://www-03.ibm.com/servers/storage/support/software/smisagent/downloading.html
This results in a file named IBM-Tape-SMIS-Agent-1.2.1.502.tar. This file has to be extracted using the following command:
tar -xvf IBM-Tape-SMIS-Agent-1.2.1.502.tar
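If you are unfamiliar with the tar flags, here is a self-contained round trip on a throwaway archive. The file and directory names mirror the ones above, but the contents are fabricated purely for illustration:

```shell
cd "$(mktemp -d)"

# Fabricate a small directory tree standing in for the real agent files.
mkdir -p TAPE-CIMOM-SLES9/SMI-S/LINUX
echo placeholder > TAPE-CIMOM-SLES9/SMI-S/LINUX/launchpad-linux

# Pack it, remove the originals, then extract the archive again.
tar -cf IBM-Tape-SMIS-Agent-1.2.1.502.tar TAPE-CIMOM-SLES9  # -c create, -f archive file
rm -r TAPE-CIMOM-SLES9
tar -xvf IBM-Tape-SMIS-Agent-1.2.1.502.tar                  # -x extract, -v verbose
ls TAPE-CIMOM-SLES9/SMI-S/LINUX
```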
The following screen captures show the installation process. 1. Go to the directory
/tmp/TAPE-CIMOM-SLES9/SMI-S/LINUX
See Figure 7-116. 2. Start the graphical installation program by entering the command:
./launchpad-linux
3. The SMI-S Agent for Tape main panel is presented as shown in Figure 7-117 on page 320. Select the Installation Wizard.
4. The next panel is the wizard Welcome panel (see Figure 7-118). Refer to the Installation Guide and the Readme if you need to modify the CIMOM installation later.
5. The documentation can be found in the source directory from which you started the installation. These documents are not copied to the installation target directory.
KLCHL5H:/tmp/TAPE-CIMOM-SLES9/SMI-S/doc
This directory contains the following files:
-r-xr-xr-x 1 57686 1311   39310 Jan 10 16:28 Readme_smis.txt
-r-xr-xr-x 1 57686 1311 1253083 Jan 10 16:28 install_guide.pdf
-r--r--r-- 1 57686 1311    1877 Jan 10 16:28 overview.unix.txt
6. Click Next on this panel and the License Agreement is shown. On this screen (see Figure 7-119), select the radio button to accept the license agreement. Click Next to continue.
7. The dialog box for the installation target is shown (see Figure 7-120). Accept the default installation directory path or make changes as necessary. Click Next to continue.
8. The next panel is the summary of the installation information, as shown in Figure 7-121 on page 322. Click Install to start the installation.
The next screen (see Figure 7-122) is the progress for the Tape Agent.
9. After a few minutes, the Server Communication Configuration panel is displayed, as shown in Figure 7-123. On this screen, you can choose other values for the ports, but it is recommended that you keep the defaults:
- HTTP Port is the port number that the SMI-S Object Manager server will use for HTTP transport. The default port is 5988. The port number must not be in use by another process on the system.
- HTTPS Port is the port that the SMI-S Object Manager server will use for secure HTTP transport (HTTPS). The default port is 5989. The port number must not be in use by another process on the system.
Click Next to continue, Back to go back, or Cancel to exit the installation.
You can see the progress of the SMI-S Agent for Tape installation, as shown in Figure 7-124 on page 324.
10.After a successful installation you can see the screen in Figure 7-125. Note the location of the installation log file. Click Finish to exit the installer.
11.You are taken back to the SMI-S Agent for Tape main page. Click Exit to exit the LaunchPad as shown in Figure 7-126.
12. After leaving the installation dialog box, you can go to a Linux shell to verify that the agent has been started. To do this, use the command shown in Figure 7-127.
KLCHL5H:/opt/IBM/smis # ps -ef | grep -i SMI root 31041 1 0 11:01 pts/1 00:00:35 /opt/SMIAgent/jre/bin/java -server -classpath /opt/SMIAgent/agent/server/jserver/lib/wbemstartup.jar -Xmx512m -Djava.security.manager -Dconsole_logging=true -Dfile_logging=/opt/SMIAgent/agent/server/jserver/logr/test.log -Dprovider_xml=/opt/SMIAgent/agent/server/jserver/bin/provider.xml -DSMIAgentConfig_xml=/opt/SMIAgent/agent/server/jserver/bin/SMIAgentConfig.xml -Djava.security.policy=/opt/SMIAgent/agent/server/jserver/bin/jserver.policy -Dsun.rmi.transport.http.connectionTimeout=180000 -DBaseDir=/opt/SMIAgent/agent/server org.wbemservices.wbem.bootstrap.StartWBEMServices /opt/SMIAgent/agent/server/jserver/bin/jserver.properties root 475 1 0 14:37 ? 00:00:00 /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd -c /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd.conf Figure 7-127 Check to see if the services are running
In the output, you can see the Java process that represents the CIMOM for the tape libraries. You should also see an SNMP trap daemon running, because this is how the CIMOM interacts with the library.
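A scripted version of the same check just filters the process list. In this sketch, the variable holds two simulated lines of ps -ef output (abbreviated from Figure 7-127) so the pipeline is reproducible; on a live system you would feed it from ps -ef instead:

```shell
# Simulated 'ps -ef' output; both paths contain "SMI", as in Figure 7-127.
SIMULATED_PS='root 31041 1 /opt/SMIAgent/jre/bin/java ... StartWBEMServices
root   475 1 /opt/IBM/smis/tapeagent/packages/snmp/bin/snmptrapd'

# Case-insensitive filter, equivalent to: ps -ef | grep -i SMI
printf '%s\n' "$SIMULATED_PS" | grep -i smi
```

Both the CIMOM Java process and the snmptrapd line survive the filter, which is what you want to see on a healthy agent host.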
2. Check that there is no library registered in the CIMOM by using the command shown in Figure 7-129.
./setdevice.sh -alist -t3584 CLI ei -n root/ibm IBMTSSML3584_DeviceConfiguration Figure 7-129 List registered libraries - no Library registered
3. To register the library with the CIMOM, enter the command shown in Figure 7-130.
./setdevice.sh -aadd -t3584 -l9.11.213.187 -p161 -cpublic -s2 Figure 7-130 The 3584 is registered with the CIMOM
4. Issue the ./setdevice.sh -alist command again. The output is shown in Figure 7-132 on page 327.
./setdevice.sh -alist -t3584 CLI ei -n root/ibm IBMTSSML3584_DeviceConfiguration path= IBMTSSML3584_DeviceConfiguration.InstanceID="9.11.213.187" //Instance of Class IBMTSSML3584_DeviceConfiguration [Description ("The SettingData class represents configuration-related " "and operational parameters for one or more ManagedElements. " "A ManagedElement can have multiple SettingData objects associated " "with it. The current operational values for the parameters " "of the element are reflected by properties in the Element itself " "or by properties in its associations. These properties do not " "have to be the same values that are present in the SettingData " "object. For example, a modem might have a SettingData baud " "rate of 56Kb/sec but be operating at 19.2Kb/sec. Note: The " "CIM_SettingData class is very similar to CIM_Setting, yet both " "classes are present in the model because many implementations " "have successfully used CIM_Setting. However, issues have arisen " "that could not be resolved without defining a new class. Therefore, " "until a new major release occurs, both classes will exist in " "the model. Refer to the Core White Paper for additional information. " "SettingData instances can be aggregated together into higher- " "level SettingData objects using ConcreteComponent associations.") : Translatable] instance of class IBMTSSML3584_DeviceConfiguration { string Caption; string Description; string InstanceID = "9.11.213.187"; string ElementName; string Port = "161"; string Community = "public"; sint32 SnmpVersion = 2; }; Figure 7-132 Displaying the Library via setdevice -alist -t3584
Discovered or Switch Discovered alert, depending on the type of CIM Agent. For more information, refer to 8.8, Alerting on page 398.
1. To see whether this kind of alert has been missed, look at the Alert Log by navigating in the Navigation Tree to IBM TotalStorage Productivity Center → Alerting. Otherwise, you can look at the log of the CIMOM discovery job. Every time you submit a CIMOM discovery, many logs are generated: one log for each CIM Agent and several overall logs.
2. If you look at the log of a specific CIM Agent, you can see output similar to that shown in Figure 7-133. The CIMOM located on the host with address 9.1.39.171 has discovered four devices (switches):
- 100000051E34E895/IBM_2005_B32
- 1000006069201D74/itsosw2
- 1000006069201D4E/itsosw1
- 10000060691064CF/itsosw3
Tip: This is a way to discern the relationship between a CIMOM and its managed devices, so if you need to perform maintenance on the CIMOM, you know which devices will be affected by the outage.
When the setup of the CIM Agents is complete, and before running any probe against the managed subsystems or switches, you must rerun a CIMOM discovery job. The amount of time the CIMOM discovery job takes depends on the number of CIMOMs, the number of subsystems you have, and whether you are scanning the local subnet. The CIMOM discovery job can be run on a schedule; how often you run it depends on how dynamic your environment is. It needs to be run to detect a new subsystem. The CIMOM discovery job also performs basic health checks of the CIMOM and subsystem. For information about setting up the CIMOM discovery job, refer to CIMOM discovery job on page 347.
TotalStorage Productivity Center checks the communication and authentication with that CIMOM (see Figure 7-134).
If the test is successful, you obtain the result shown in Figure 7-135.
2. If you need to remove a previously defined or discovered CIMOM, select Remove CIMOM, as shown in Figure 7-136 on page 330.
Vendor     Interoperability Namespace
Cisco      root/cimv2
Brocade    /root/interop or /root/brocade1
           (Note: Contact your switch vendor for the correct namespace to use.)
McDATA     /interop
IBM        /root/ibm
EMC        /interop or /root/emc
Vendor         Interoperability Namespace
Hitachi        /root/hitachi/dm35 for HiCommand 3.5
               /root/hitachi/dm42 for HiCommand 4.0
               /root/hitachi/dm42 for HiCommand 4.2
               /root/hitachi/dm43 for HiCommand 4.3
HP             /root
SUN StorEdge   /root/ibm or root/cimv2
Your network does not have multicast enabled and consists of multiple subnets that must share services. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.
Router configuration
Routers are one common source of problems when you set up the communication needed for SLP. Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address and port: 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.
Attention: Routers are sometimes configured to prevent passing multicast packets between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast time-to-live (TTL) for packets they pass between subnets, which can make it necessary to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).
information about SLP Service Agents (SAs) and Directory Agents (DAs) that reside in the same subnet as the TotalStorage Productivity Center server. You should have no more than one DA per subnet. Misconfiguring the TotalStorage Productivity Center CIMOM discovery preferences might impact the performance of auto discovery or of device presence checking. It might also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available. Consider it mandatory to run the CIM Agent software on a separate host from the TotalStorage Productivity Center server. Attempting to run a full TotalStorage Productivity Center implementation on the same host as the CIM Agent will result in dramatically increased wait times for data retrieval. You might also experience resource contention and port conflicts.
TotalStorage Productivity Center GUI to make them available and fully functioning. The DAs can be located in subnets other than the one where your TPC server is located; this setup reduces multicast traffic in your network. You can use the following procedure to set up one or more Service Location Protocol Directory Agents (SLP DAs) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. In this case, the discovered CIMOMs are displayed in the TPC GUI. The following screen shows a new entry for a CIMOM that is not yet fully operational, because its user ID and password combination might not match what is defined in TPC.
The TPC administrator determines which CIMOMs should be used, and then provides the information that makes it possible for TPC to log in to them and use them. Perform the following steps to set up the SLP DAs:
1. Identify the subnets that contain devices that you want TotalStorage Productivity Center to discover.
2. Each device is associated with a CIM Agent, and there might be multiple CIM Agents in each of the identified subnets. Choose one CIM Agent for each of the identified subnets. (It is possible to choose more than one CIM Agent per subnet, but it is not necessary for discovery purposes.) The following steps change the run mode of the CIM Agent.
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. By default, it runs in SA mode; we change it to run in DA mode. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file:
a. For example, if you have the DS CIM Agent installed in the default installation directory path, go to the C:\Program Files\IBM\cimagent\slp directory.
b. Look for the file named slp.conf.
c. Make a backup copy of this file and name it slp.conf.bak.
d. Open the slp.conf file and scroll down until you find (or search for) the line
;net.slp.isDA = true
Remove the semicolon (;) at the beginning of the line and ensure that this property is set to true (= true) rather than false. Save the file.
e. Copy this file (or replace it if the file already exists) to the main Windows directory on Windows machines (Windows 2000: C:\WINNT; Windows 2003: C:\WINDOWS), or to the /etc directory on UNIX machines.
4. We recommend that you reboot the server where the SLP configuration has been modified at this stage. Alternatively, you can restart the SLP and CIMOM services. You can do this from your Windows desktop by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services GUI, locate the Service Location Protocol service, right-click it, and select Stop. Another panel asks whether you also want to stop the IBM CIM Object Manager service; click Yes. You can start the SLP daemon again after it has stopped successfully. Alternatively, you can restart the CIMOM using the command line as described in 7.4.7, Restart the CIMOM on page 264.
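The slp.conf edit in step d can also be scripted. The following sketch reproduces the change on a scratch copy of the file (the real file lives under the CIM Agent installation directory, and the sed -i form shown is the GNU variant available on Linux):

```shell
cd "$(mktemp -d)"

# Scratch slp.conf containing only the commented-out DA property.
printf ';net.slp.isDA = true\n' > slp.conf
cp slp.conf slp.conf.bak    # step c: keep a backup copy first

# Step d: remove the leading semicolon so the SA runs in DA mode.
sed -i 's/^;\(net\.slp\.isDA = true\)/\1/' slp.conf
cat slp.conf
```

After the change, the file contains the uncommented line net.slp.isDA = true, and a restart of the SLP daemon (step 4) picks up the new mode.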
TotalStorage Productivity Center sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed. The prerequisite is to configure an SLP DA by changing the configuration of the SLP service agent (SA), as described previously.
You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and registers itself with the DA instead of the SA. In addition, the DA automatically discovers all other services registered with other SLP SAs in that subnet.
where ipaddress is the CIM Agent IP address.
3. Issue a verifyconfig command, as shown in Figure 7-39 on page 269, to confirm that SLP is aware of the registration.
Attention: Whenever you update the SLP configuration as shown here, you might have to stop and start the slpd daemon. This enables SLP to register and listen on the newly configured ports. Also, whenever you restart the SLP daemon, ensure that the CIMOM agent is also restarted. Otherwise, you can issue the startcimom.bat command, as shown in the previous steps. Another alternative is to reboot the CIMOM server.
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
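The registration entries above follow the SLP service-URL syntax: service:wbem:https://host:port, followed by a language tag and a lifetime, with attribute lines (description, authors) applying to the preceding registration and a leading # disabling an entry. As an illustrative sketch (not part of TotalStorage Productivity Center), a few lines of Python can pull the active registrations out of an slp.reg-style file:

```python
import re

# Matches active (uncommented) WBEM service registrations such as:
#   service:wbem:https://9.43.226.237:5989,en,65535
SERVICE_RE = re.compile(
    r"^service:wbem:(?P<scheme>https?)://(?P<host>[\d.]+):(?P<port>\d+),"
    r"(?P<lang>[a-z]+),(?P<lifetime>\d+)$"
)

def parse_slp_reg(text):
    """Return a list of dicts describing each active (uncommented) registration."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        m = SERVICE_RE.match(line)
        if m:
            entries.append({
                "host": m.group("host"),
                "port": int(m.group("port")),
                "lifetime": int(m.group("lifetime")),
            })
        elif entries and line.startswith("description="):
            # Attribute lines apply to the most recently seen registration.
            entries[-1]["description"] = line.split("=", 1)[1]
    return entries

sample = """\
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
#service:wbem:https://9.42.164.175:5989,en,65535
"""
print(parse_slp_reg(sample))
```

Note how the commented-out Raleigh entry is skipped, just as slpd itself would ignore it.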
Chapter 8. Getting Started with TotalStorage Productivity Center
Tape-3584-7819833. A 3584 tape library, which is accessed through the tape CIMOM running on the KLCHL5H server.
[Figure: lab infrastructure diagram showing the TPC V3.1 server gallium (DA+FA) connected over the IP network, WAN, and SAN to the managed servers (KLCH4VZ with the DS-OPEN-API at 9.1.38.35, ITSOSVC, HELIUM, KLCHL5H with the Brocade CIMOM on ports 5970/71 and the tape CIMOM on port 5989, pqdi, colorado with the DS4000 CIMOM at 9.1.38.39, and AZOV), the 2005-B32 switch (SNMP/API), and the managed devices DS8000-2107-75BALB1 (ITSO Poughkeepsie, DS-OPEN-API at 9.43.252.17), Tape-3584-7819833 (Tucson), SVC-2145-ITSOSVC01 (Almaden), and DS4500-ITSODS4500 (Almaden). DA = Data Agent; FA = Fabric Agent.]
We are not looking at the individual product components; instead, we look at the infrastructure as a whole. We cover all functions except the policy management of your computers, which is beyond the scope of this chapter. Policy management is described in the IBM TotalStorage Productivity Center User's Guide, GC32-1775.
Configuring Devices
As an application
To start the IBM TotalStorage Productivity Center GUI on Microsoft Windows, click Start → Programs → IBM TotalStorage Productivity Center → Productivity Center. You can also double-click the IBM TotalStorage Productivity Center icon if it is installed on your desktop. To start the GUI on Linux, open a command prompt window and run the command TPC. To start the GUI on AIX, open a command prompt window and issue the following command: /usr/bin/TPC.
Note: For UNIX or Linux, to invoke the TPC command from a command prompt window, include /usr/bin in your PATH.
As a Java applet
The benefit of running the user interface as a Java applet is that you are not required to install the GUI component on every workstation. Users can simply access the IBM TotalStorage Productivity Center applet from any Web-enabled workstation in your environment, and the appropriate applets are downloaded automatically on an as-needed basis. To start and run the user interface as a Java applet:
1. Start a Web browser session.
2. Point your browser to the URL of your TotalStorage Productivity Center server with the port you set up when enabling the Web server on that machine (see 4.8, Configuring the GUI for Web Access under Windows 2003 on page 107, and 5.10, Installing the user interface for access with a Web browser on page 186).
8.3.2 Logging on
TotalStorage Productivity Center provides a role-based administration model, as described in Chapter 2, Key concepts on page 13. When installing the TotalStorage Productivity Center server, you have to specify an operating system group whose members become superusers of the TotalStorage Productivity Center product. You can now log on to TotalStorage Productivity Center as one of the users of this operating system user group. In addition to the superuser role, TotalStorage Productivity Center provides nine additional roles to which you can assign operating system user groups. Additional details about the role-based administration model are in the IBM TotalStorage Productivity Center User's Guide, GC32-1775.
When you start IBM TotalStorage Productivity Center, the Navigation Tree is expanded to show all the high level functions. You can drill down on an element in the tree by clicking on it or by clicking on the expand icon. When you right-click a node a pop-up (context) menu displays, which lets you perform additional actions for the node. If the Navigation Tree gets too large, or if you want to return it to its original state, right-click the major nodes of the tree and select Collapse Branch or Expand Branch. If you right-click the IBM TotalStorage Productivity Center node and select Collapse Branch, the entire Navigation Tree collapses. Then, right-click the main IBM TotalStorage Productivity Center node and select Expand Branch to return the Navigation Tree to its original state, expanded only to show the main functions. The Content Pane opens on the right side of the main window. When you select a node in the Navigation Tree, the corresponding function window opens in the Content Pane. You can use the windows that open in the Content Pane to define and run the different functions (for example, monitoring jobs, alerts and reports) available within TotalStorage Productivity Center. The information shown on the pages in the Content Pane will vary, depending on the function with which you are working.
Now that you have started the TotalStorage Productivity Center GUI successfully and have verified that all the internal services are up and running, the first thing you obviously want to do is to collect information about your infrastructure. However, before you can start to do this, you first must establish the communication with your infrastructure through the various communication channels TotalStorage Productivity Center is going to use. Figure 8-5 gives an overview of the sequence of the next steps.
[Figure 8-5: overview of the configuration sequence — 1. Establish communication to your infrastructure (discover/register CIMOMs and Fabric Agents). 2. Collect data about your infrastructure (discover elements and retrieve basic information; probe; scan; collect performance data). 3. Retrieve and display data (monitoring, alerting, and managing).]
As shown in Figure 8-5, TotalStorage Productivity Center uses three major channels to communicate with the infrastructure:
- CIM Object Managers, to interact with storage subsystems, tape libraries, and SAN components.
- Agents on the managed servers and computers: Data Agents to interact with servers and computers, and Fabric Agents to interact with Host Bus Adapters and SAN components through an in-band channel.
- SNMP and proprietary APIs, to interact with SAN components out of band.
We now show you how to set up all of those communication channels by guiding you through the configuration steps we used for our lab environment, described in 8.1, Infrastructure summary on page 340.
We now introduce an important principle of operating TotalStorage Productivity Center: we tell it to perform a certain task or action. Many of these tasks are handled as special objects, called jobs, within TotalStorage Productivity Center. We can define these jobs, save them, run them immediately or at a later time, and schedule them for single or repeated runs.
3. The next tab, Alert, allows you to specify what to do when certain conditions arise at runtime for the job you are defining. For a CIMOM discovery job, there is only one condition for which you can define a reaction: you can specify what kind of Alerts TotalStorage Productivity Center will trigger if the CIMOM discovery job fails. You will find these first two tabs in almost all job definitions.
4. Figure 8-8 on page 348 shows a screen capture of the Alert tab with the single condition we can choose for this job, which is Job Failed.
5. In the last tab, Options, you can enter information specific to the type of job you are defining. When defining a CIMOM discovery job, you can enter the IP addresses of the SLP Directory Agents of the environment that you want the discovery job to query for CIM Agents (see 2.2.1, SLP architecture on page 18). In our case, we enter the IP address of the SVC Master Console (ITSOSVC), which we have configured as an SLP Directory Agent.
Figure 8-9 Configuring CIMOMs - initiate automatic CIMOM discovery - Enter DA addresses
6. You have now entered all the information the CIMOM discovery job needs to run successfully. We could now just save the job definition, or save it and have TotalStorage Productivity Center run the job at the point in time specified in the When to Run tab. To do the latter, click Enabled in the upper right corner of the Content Pane and select File → Save in the menu bar. A message box indicates that the CIMOM job has been submitted (see Figure 8-10 on page 349).
Important: The CIMOM discovery is designed as a two-stage process:
1. The CIMOM discovery job locates all CIMOMs through the Service Location Protocol by broadcasting in its subnet and by querying all SLP Directory Agents whose IP addresses have been entered in the job definition (in our case 9.1.38.48).
2. The discovery job tries to log in to the CIMOMs it has discovered and to retrieve information about the elements managed by each CIMOM. Up to this point, however, it was not possible to enter any user credentials for these logins, so the discovery job uses null as the user ID and password. This succeeds only for CIMOMs that have been set up not to require any user authentication. It is therefore very likely that the first discovery job ends with errors and with a status showing that the discovery and retrieval of elements succeeded for only a few CIMOMs, if any at all. For the other CIMOMs, a second discovery job must be initiated after entering the user credentials to retrieve the basic information for the elements behind those CIMOMs.
Here is how we can monitor our CIMOM discovery job. Only one CIMOM discovery job definition can exist in the system. We defined and saved this job definition in the previous steps. Every time we click Administrative Services → Discovery → CIMOM, we can view and change this job definition. The job definition can run multiple times; each run produces an entry below the Administrative Services → Discovery → CIMOM node of the Navigation Tree. This is the case for all types of jobs, and you will see this mechanism implemented throughout the whole TotalStorage Productivity Center user interface. Because we not only saved our CIMOM discovery job definition but also started the execution of the job, TotalStorage Productivity Center has created such an entry for our job.
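The two-stage behavior can be pictured with a small Python sketch (hypothetical class and names, not TotalStorage Productivity Center code): the first discovery pass logs in with null credentials, so only CIMOMs that require no authentication yield their managed elements; a second pass succeeds once credentials have been entered. The addresses are those of our lab CIMOMs; the password "secret" is a placeholder.

```python
class Cimom:
    """A discovered CIMOM; password=None models a CIMOM with no authentication."""
    def __init__(self, address, password=None):
        self.address = address
        self.password = password

    def login(self, password):
        # Succeeds if the CIMOM needs no authentication, or credentials match.
        return self.password is None or self.password == password

def run_discovery(cimoms, credentials):
    """Try to log in to each CIMOM; return addresses whose elements were retrieved."""
    retrieved = []
    for c in cimoms:
        if c.login(credentials.get(c.address)):  # missing credentials -> null login
            retrieved.append(c.address)
    return retrieved

cimoms = [Cimom("9.1.38.39"),                     # DS4000 CIMOM, no authentication
          Cimom("9.1.38.35", password="secret"),  # DS-OPEN-API, needs credentials
          Cimom("9.1.38.38", password="secret")]  # SVC CIMOM, needs credentials

first = run_discovery(cimoms, {})                 # stage 1: null credentials
second = run_discovery(cimoms, {"9.1.38.35": "secret", "9.1.38.38": "secret"})
print(first, second)
```

The first run retrieves elements only for the unauthenticated DS4000 CIMOM, mirroring why our first discovery job completed with errors; the second run, after credentials were entered, reaches all three.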
Figure 8-11 below shows two entries. The upper one belongs to a job we ran earlier and the lower one is the job we just submitted. The blue circle beside this entry indicates that the job is running. By clicking the job entry, we can see a list of all logs for that job in the Content Pane. We can look at the logs by clicking the icon next to a log entry. This works even if the job has not yet finished, but is still running.
We can update the status of the job by right-clicking Administrative Services → Discovery → CIMOM and selecting Update Job Status. The status does not update unless we refresh it this way. We finally see the screen in Figure 8-12 on page 351, which indicates that our discovery job has completed with errors.
Examining the logs of the failed part of the job, we find that the errors are caused by failed logins, just as we expected. Note that other parts of the job have completed successfully. These are the logins and retrievals of information for the managed elements behind those CIMOMs that do not require any authentication (in our example, the DS4000 CIMOM on the colorado server). We can view which CIMOMs our discovery job has detected (Figure 8-13 on page 352). Expand the Administrative Services → Agents node of the Navigation Tree, right-click the CIMOM node, select Refresh, and then expand the CIMOM node. Look for an entry for each of the discovered CIMOMs. CIMOMs for which the login of the discovery job succeeded are marked by a green square; CIMOMs for which no login could be established are marked by a red circle. Our CIMOM discovery job detected three CIMOMs. The DS4000 CIMOM was discovered over two ports (secure and nonsecure). For this CIMOM, the login was successful and the information for the elements managed by the CIMOM could be retrieved. The other two CIMOMs are the DS-OPEN-API on the KLCH4VZ server (9.1.38.35) and the SVC CIMOM residing on the SVC Master Console ITSOSVC (9.1.38.38). Those CIMOMs require authentication, so the login to them was not successful and TotalStorage Productivity Center could not retrieve the information for the managed elements.
TotalStorage Productivity Center has now discovered three of our six CIMOMs and was able to log in to one of those three. Three CIMOMs are still missing.
Figure 8-14 Enter CIMOM user ID, password and display name
3. Save these entries by selecting File → Save in the menu bar. Note that we have selected the Test CIMOM connectivity before updating box. This causes TotalStorage Productivity Center to connect to the CIMOM and try to log in with the credentials we have just specified. If this is successful, the status indication of the CIMOM turns green. However, TotalStorage Productivity Center does not retrieve the information about the elements managed by this CIMOM; this requires another discovery job. After updating all our CIMOM definitions, we see the screen in Figure 8-15:
4. After entering our missing CIMOMs, we see the following CIMOM entries under Administrative Services → Agents → CIMOM (Figure 8-17).
Note: The Brocade CIMOM and the Engenio SMI-S provider accept HTTP and HTTPS connections over two different ports, so they appear twice in the list.
We have now completed the configuration of our CIMOMs in TotalStorage Productivity Center. However, the retrieval of the information about the managed elements (storage subsystems, tape library, and switch) has not yet occurred for those CIMOMs that require authentication, so we have to run a further CIMOM discovery job. This discovery job will not find any new CIMOMs (as long as we did not add any new ones to our infrastructure in the meantime). However, it will now be able to log in to each of the configured CIMOMs and retrieve all information about the managed storage subsystems, tape libraries, and switches.
This CIMOM discovery job now completes without errors and produces the output shown in Figure 8-18.
Figure 8-18 Configuring CIMOMs - second CIMOM discovery job completed successfully
Logs
We should now inspect the logs to verify that all our storage subsystems, tape libraries, and switches have been discovered successfully. We can also verify the discovery of a storage subsystem by inspecting the Alert Log. TotalStorage Productivity Center comes with a default Alert configured that raises an entry in the storage subsystem Alert Log each time a new storage subsystem is discovered. View this Alert Log by selecting IBM TotalStorage Productivity Center → Alerting → Alert Log → Storage Subsystem. Note that a SAN Volume Controller is not considered a storage subsystem in this context, so an Alert for the discovery of an SVC is not generated by default. TotalStorage Productivity Center also provides default Alerts for the discovery of switches, fabrics, and endpoints. We cover TotalStorage Productivity Center Alerting in greater detail in 8.8, Alerting on page 398. Now that we have successfully discovered all our storage subsystems, tape libraries, and switches behind our CIMOMs, we can confirm that they show up in TotalStorage Productivity Center where they should.
Storage subsystems
The storage subsystems and SVCs are located under Disk Manager → Storage Subsystems, where you see a list of all storage subsystems and SVCs. Launch a detail page for each storage subsystem by clicking the symbol to the left of the storage subsystem name. On this detail page, you can enter information for the storage subsystem, such as a display name, user-defined properties, and the address of the element manager, in case this address is not retrieved correctly through the CIMOM. In Figure 8-19 on page 356, you see a list of our five storage subsystems, including one SVC.
Figure 8-19 Configuring CIMOMs - list of discovered storage subsystems and SVCs
The tape library should be visible under Tape Manager → Tape Libraries. As with the storage subsystems, we see a list of all tape libraries and can launch a detail page for each library by clicking the symbol to the left of the tape library name. In our case, we see one tape library (Figure 8-20).
Fabrics
The fabrics the discovery job has detected are listed under Fabric Manager → Fabrics. You can see a list of all discovered fabrics in Figure 8-21 on page 357; however, a list of all discovered switches is not available here. You can see a list of switches in the topology viewer. The use of this part of the TotalStorage Productivity Center GUI is covered in detail in Chapter 9, Topology viewer on page 415. Note that the fabric information we see does not necessarily come from the CIMOM, because we have already installed some Fabric Agents in our infrastructure, which deliver information to TotalStorage Productivity Center without any discovery process (see 8.4.2, Verifying Data and Fabric Agents on page 357).
Important: You should set up a CIMOM discovery job to run at recurring intervals to detect changes in your infrastructure automatically. The length of the interval should be adjusted to the frequency of those changes in your specific environment.
Figure 8-22 Verifying Data and Fabric Agent connectivity - check agents
You can also view a list of all computers with agents installed by selecting Data Manager → Reporting → Asset → By Computer. See Figure 8-23. Note that our computer Helium, which has no agents installed, is not listed in this asset list.
Figure 8-23 Verifying Data and Fabric Agent connectivity - computer asset list
Figure 8-24 Configuring out of band fabric connectivity - set up discovery job
3. After submitting the job by selecting File → Save in the menu bar, you can monitor the job by expanding Administrative Services → Discovery → Out of Band Fabric and clicking the job run entry. In the Content Pane, you can examine the logs.
4. Our out-of-band discovery job detects one switch (our 2005-B32), so TotalStorage Productivity Center creates an entry under Administrative Services → Agents → Out of Band Fabric, representing this switch and labeled with its host name. Click the entry to see the details for that particular switch. TotalStorage Productivity Center automatically subscribes to the following traps: LinkUp, LinkDown, Activated, Deactivated, PowerOn, PowerOff, UnknownEvent, LIPReinitialize, LIPReset, and RSCN, so these events are reported without any further configuration.
5. To activate the Advanced Brocade API (which we need for zoning support for the 2005-B32 switch), you must select Enable Advanced Brocade Discovery and enter the user ID and password for the switch, as shown in Figure 8-25.
Figure 8-25 Configuring Out of Band Fabric connectivity - enable Advanced Brocade API
6. Save this information by selecting File → Save in the menu bar. The out-of-band Fabric discovery for our 2005-B32 switch is now complete. Alternatively, if we had chosen to enter our switch manually, we could have done so by expanding Administrative Services → Agents and right-clicking Out of Band Fabric. This brings up a context menu where we can select Add. TotalStorage Productivity Center then opens a dialog box where we can enter the host name or IP address of the switch to add, as shown in Figure 8-26. In this case, we could enable the Advanced Brocade API right away.
Figure 8-26 Configuring Out of Band Fabric connectivity - manually enter switch
Discovery jobs locate data sources and collect basic information about them. In the scope of this chapter, we run discovery jobs against CIMOMs and out-of-band fabric agents. When a discovery job is run against a storage subsystem CIMOM, the job locates all storage subsystems behind this CIMOM and retrieves all information the CIMOM holds for these systems. The discovery job, however, does not cause the CIMOM to log in to the storage subsystems and retrieve more detailed information. When a discovery job is run against a fabric, the job retrieves all available information for the fabric, if supported by the switches.
Probe jobs collect detailed statistics on all the assets of the managed infrastructure, such as computers, disk controllers, hard disks, clusters, fabrics, storage subsystems, LUNs, tape libraries, and file systems. Probe jobs can also discover information about new or removed disks and file systems. Probe jobs can be directed against any elements in the managed infrastructure. In our examples, we run Probe jobs against storage subsystems, fabrics, and computers.
Scan jobs collect statistics about the actual storage consumption. Scans are always directed against a Data Agent and deliver very detailed information about the file systems, files, and databases of computers.
Ping jobs gather statistics about the availability of the managed computers. Ping jobs generate TCP/IP pings and consider a computer available if it gets an answer. This is purely ICMP-protocol based, and there is no measurement of individual application availability. Like Scans, Pings can be directed against computers only.
Performance Monitor jobs collect statistics about the performance of storage subsystems and switches. Performance Monitor jobs can be run against storage subsystems, SVCs, and switches, and always need a CIMOM to communicate with the elements.
Figure 8-27 shows an overview of the types of data collection jobs and the components from which they gather information.
[Figure 8-27: data collection job types and their sources — Discovery runs through CIMOMs and SNMP/API against SAN components; Probes run through CIMOMs, Fabric Agents, and Data Agents; Scans and Pings run through Data Agents against computers; all collected data is stored in the central TPC database.]
In 8.4, Initial configuration on page 344, we show in detail how to define and run discovery jobs. In the following sections, we create Probe jobs, Scan jobs, Ping jobs, and Performance Monitor jobs for our infrastructure and show the detailed steps for each of the data collection jobs.
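The target rules described above can be summarized in a small lookup table. This is an illustrative sketch with hypothetical names, not product code; the mapping itself follows the job descriptions in this section:

```python
# Valid targets for each data collection job type, as described in this chapter.
JOB_TARGETS = {
    "Discovery":           {"CIMOM", "out-of-band fabric"},
    "Probe":               {"computer", "fabric", "storage subsystem", "tape library"},
    "Scan":                {"computer"},
    "Ping":                {"computer"},
    "Performance Monitor": {"storage subsystem", "SVC", "switch"},
}

def can_run(job_type, target):
    """True if the given job type may be directed against the given target."""
    return target in JOB_TARGETS.get(job_type, set())

print(can_run("Scan", "computer"), can_run("Ping", "fabric"))
```

For example, can_run("Ping", "fabric") is False, because Pings, like Scans, can be directed against computers only.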
2. In the What to Probe tab in the Content Pane, select the infrastructure elements against which your Probe job will run. You could define one Probe job for all of the elements in your infrastructure; however, we recommend defining multiple Probe jobs to improve the granularity of your data collection strategy. For our infrastructure, we define one Probe job for each element category (storage subsystems, tape library, computers, and fabric).
3. To select all computers for your first Probe job, expand the Computers node of the Available column and look for the list of computers. See Figure 8-29 on page 363. Select All computers, because we want to include all computers (including computers added in the future) in our Probe job.
We chose to run the Probe job repeatedly, once a day at 1:00 am.
4. Review the When to Run and Alert tabs, and set the schedule and Alert options accordingly. Select Enabled in the upper right corner of the Content Pane. Lastly, select File → Save in the menu bar. TotalStorage Productivity Center asks for a name for the Probe job. We name our job Probe Computers.
Note: A Probe job gathers detailed information about your infrastructure. Therefore, we recommend running Probe jobs at recurring intervals. To determine those intervals, you should take many individual factors into account. In any case, there is a trade-off between information currency and the overhead generated by the Probe jobs. Generally, you might want to have one Probe job per element per day.
5. We have now defined our first Probe job for all our computers and scheduled it to run each day at 1:00 am. However, we want the job to run right now as well. See Figure 8-30 on page 364. Expand IBM TotalStorage Productivity Center → Monitoring → Probes, and right-click the entry for the job we have just defined. In the context menu, select Run now. This runs the computer Probe job right away without modifying it.
6. We now repeat the steps for our other infrastructure elements: storage subsystems, tape libraries, and fabrics. After defining Probe jobs for all of those element classes and running them, we see the screen in Figure 8-31.
Figure 8-31 Collecting Data - all Probe jobs defined and execution started
In Figure 8-31, we see an entry under the IBM TotalStorage Productivity Center → Monitoring → Probes node of the Navigation Tree for each Probe job we have just defined. Their names are built from the user ID we used to log on to TotalStorage Productivity Center and the name we specified when saving the job definition.
Each Probe job has one entry for the run we initiated when we selected that the job should run now. Another entry is created each day at 1:00 am. The blue circles next to the entries for the job runs indicate that the jobs are still running. The little yellow triangle next to our run of the Probe Computers job indicates that the job run completed with warnings. We can examine the logs in the Content Pane to find the reasons for the warnings.
Tip: TotalStorage Productivity Center allows you to create individual groups of infrastructure elements within an element class, for example, computer groups or storage subsystem groups. You can then use these groups to specify the elements for your Probe job. Refer to the TotalStorage Productivity Center V3.1 User's Guide for details on grouping.
2. In the Filesystems tab (Figure 8-33 on page 366), select either computers or single file systems of computers. We would also be able to select groups of file systems if we had defined them before.
Selecting a computer means implicitly selecting all the file systems of this computer. In our case, we select all the computers of our infrastructure, with all their file systems, to be scanned.
Figure 8-33 Collecting Data - create Scans, all computers and file systems selected for Scan
Note: Only file systems found by Probe jobs are available for Scans.
3. The Directory Groups tab allows us to specify directory groups, which let us direct the Scan to specific directories within a file system without having to scan the entire file system. Directory groups can be defined either by expanding Data Manager → Monitoring → Groups, right-clicking Directory, and clicking Create Directory Group, or directly from the Directory Groups tab of the Scan job definition.
Profiles
In the Profiles tab, we can select one or more profiles for our Scan job. Profiles allow us to specify what statistical information is gathered and to fine-tune and control what files are scanned. Profiles are generally used to:
- Limit the files to be scanned.
- Specify the file attributes to be gathered.
- Select the summary view: directories and file systems, user groups, or operating system user groups.
- Set statistic retention periods.
TotalStorage Productivity Center provides a selection of default profiles and allows you to create user-defined profiles. Profiles can be defined either by expanding Data Manager → Monitoring, right-clicking Profiles, and clicking Create Profile, or directly from the Profiles tab of the Scan job definition. The creation and use of profiles in Scan jobs is documented in detail in the TotalStorage Productivity Center V3.1 User's Guide.
Each of the default profiles gathers a specific statistic needed for the default reports that TotalStorage Productivity Center provides. Because we want to gather all of the statistical information available from our computers and file systems, we select all of the default profiles and apply them to both file systems and directories (Figure 8-34).
Figure 8-34 Collecting Data - create Scans, select all default profiles
In the When to Run tab, we choose to run our Scan job each morning at 5:00 am. After reviewing the When to Run and Alert tabs, setting the schedule and Alert options accordingly, and checking Enabled in the upper right corner of the Content Pane, we select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Scan job; we name it Scan Computers. We again want the job to execute immediately, so we expand Data Manager → Monitoring → Scan, right-click the entry for the job we have just defined, and select Run now. By selecting the entry created for the run, we can then examine the logs for our Scan job (Figure 8-35 on page 368).
2. In the Computers tab, select the computers that our Ping job will ping. For our Ping job, we select all computers.
3. In the When to Run tab, we schedule the frequency of our Ping. We want TotalStorage Productivity Center to run a ping against all our computers every 10 minutes, so we select to run the job immediately and repeatedly, with Run Now and the ping repeating every 10 minutes (Figure 8-37 on page 369).
4. In the Options tab, we can specify how often the Ping statistics are saved in the database repository. By default, TotalStorage Productivity Center keeps its Ping statistics in memory for one hour before flushing them to the database and calculating an average availability. We can change the flushing interval to another amount of time, or a number of Pings (for example to calculate availability after every 10 Pings). The system availability is calculated as:
(Count of successful Pings) / (Count of Pings)
A lower interval can increase database size but gives you a more accurate availability history. We selected to save each Ping into the database as it occurs, which means each saved record shows an availability of either 100% or 0%, but we get a more granular view of the availability of our servers.
5. For a Ping job, we must specify a condition in the Alert tab: what kind of Alert TotalStorage Productivity Center will generate if a computer is not reachable a certain number of times. We choose to generate an email if a computer is not reachable more than five times.
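The availability formula and the flush-interval trade-off above can be sketched in a few lines of Python (a hypothetical helper, not product code): group the Ping results into flush intervals and compute successful/total for each:

```python
def availability_history(ping_results, flush_count):
    """Compute availability per flush interval.

    ping_results: list of booleans (True = ping answered).
    flush_count: number of pings accumulated before flushing to the database.
    Availability = (count of successful pings) / (count of pings).
    """
    history = []
    for i in range(0, len(ping_results), flush_count):
        chunk = ping_results[i:i + flush_count]
        history.append(sum(chunk) / len(chunk))
    return history

results = [True, True, False, True]        # four 10-minute pings
print(availability_history(results, 4))    # one averaged record: [0.75]
print(availability_history(results, 1))    # per-ping records: [1.0, 1.0, 0.0, 1.0]
```

With flush_count=1 (our choice), each record is 100% or 0% but the history is granular; with a larger flush_count, fewer rows are written but each record is an average.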
6. After checking that Enabled in the upper right corner of the Content Pane is selected, we select File → Save in the menu bar. TotalStorage Productivity Center again asks us for a name for our Ping job; we name it Ping Computers.
7. By selecting the entry created for the run, we can examine the logs of our Ping job.
2. In the left column of the Storage Subsystems tab of the Content Pane, we see a list of all subsystems in our infrastructure that are available for performance monitoring. We could select all our storage subsystems to be monitored within this Performance Monitor job; however, we recommend specifying a separate job for each storage subsystem for which you want to collect performance data. This improves flexibility, because you can specify different intervals and durations for the jobs, as well as different alert conditions (what to do if the Performance Monitor fails).
3. We begin with the definition of a Performance Monitor job for our SVC system, so we select the entry for the SVC and move it to the Selected subsystems column (Figure 8-40 on page 371).
Figure 8-40 Collecting Data - create subsystem Performance Monitor - select storage subsystem
4. In the Sampling and Scheduling tab, we can specify the resolution (sampling interval) and duration of our performance data collection, as well as a point in time to start the Performance Monitor. The interval lengths we can choose are determined by the storage subsystem. The SVC offers intervals between 15 minutes and one hour; we choose 15 minutes. We also choose to collect performance data for 24 hours, starting immediately (Figure 8-41).
Figure 8-41 Collecting Data - create subsystem Performance Monitor - set time line
5. If we click Advanced next to the selection for the interval length, TotalStorage Productivity Center offers to specify a frequency. This frequency may be greater than or equal to the interval length. If the frequency is larger than the interval length, not every sample gathered is saved in the TotalStorage Productivity Center database, which saves disk space. However, every sample that contains a Constraint Violation is saved regardless (see 8.8, "Alerting" on page 398). In our example we leave the frequency equal to the interval length, so that every sample of performance data retrieved from the SVC is saved in the central database.
6. In the Alert tab, we specify what Alerts TotalStorage Productivity Center will trigger when the performance collection job fails.
Note: The Alert tab does not specify conditions which are raised when performance thresholds are exceeded. Those Alerts are specified in a different way; refer to 8.8, "Alerting" on page 398.
7. After verifying that Enabled is selected, we select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Performance Monitor job. We name it SVC Performance Monitor.
8. Our job definition has created an entry under the Disk Manager → Monitoring → Subsystem Performance Monitors node. We can see it by right-clicking this node and selecting Refresh. Below this entry, we see a further entry for each run of the job. Because we chose to start the execution of the job immediately, we find an entry for our run, marked by a blue circle indicating that the job is currently running.
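Our reading of the frequency versus interval-length behavior described in step 5 can be sketched as follows (hypothetical Python, not the actual TPC implementation):

```python
# Illustrative sketch: with frequency = k * interval length, only every
# k-th sample is written to the database, except that any sample
# containing a Constraint Violation is always saved.
def samples_to_save(samples, k):
    """samples: list of dicts with a 'violation' flag; k = frequency / interval."""
    saved = []
    for i, sample in enumerate(samples):
        if i % k == 0 or sample["violation"]:
            saved.append(sample)
    return saved

# Eight gathered samples; only sample 5 contains a Constraint Violation.
samples = [{"id": i, "violation": (i == 5)} for i in range(8)]
kept = samples_to_save(samples, 4)
print([s["id"] for s in kept])  # [0, 4, 5]
```

With k = 1 (frequency equal to the interval length, as in our example), every sample is saved; with k = 4, only two regular samples plus the violating one are kept.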
9. If we select the entry for the run of our Performance Monitor job, we see the job log in the Content Pane. We can examine the job log by clicking the symbol next to it. The log shows us that the Performance Monitor begins with a re-retrieval of the storage subsystem configuration data (which is also retrieved by Probe jobs). We then see entries in the log for the performance data retrieval for every interval. In our example in Figure 8-43 on page 373, we see that the Performance Monitor job has found that the SVC has 1 I/O Group, 3 MDisk Groups, 13 MDisks, and 16 VDisks. It has retrieved an incomplete sample of performance data for some reason and has then inserted 29 performance records into the database.
10. We define a Performance Monitor for each of our storage subsystems except the DS4500 (performance monitoring for the DS4500 is not available with Version 3.1.0 of TotalStorage Productivity Center). After completing the definitions, we see the panel shown in Figure 8-44.
2. In the left column of the Switches tab of the Content Pane, we see a list of all switches in our infrastructure which are available for performance monitoring. In our case we see our only switch, the 2005-B32. We could select all available switches (if there were more) to be monitored within this one Performance Monitor job; however, again we recommend defining a separate job for each switch for which you want to collect data. 3. We select the entry for the 2005-B32 switch and move it to the Selected switches column. 4. In the Sampling and Scheduling tab, we set the interval, frequency, and schedule for the switch Performance Monitor as we did for the storage subsystem Performance Monitors. We choose an interval length and frequency of 5 minutes for our Performance Monitor job and select Run Now with a duration of 24 hours (Figure 8-46 on page 375).
Figure 8-46 Collecting Data - create switch Performance Monitor - set time line
5. In the Alert tab, we specify what Alerts TotalStorage Productivity Center shall trigger when the Performance Monitor job fails. Again, these Alerts have nothing to do with the Alerts you might want generated when certain performance thresholds are exceeded. 6. We select File → Save in the menu bar. TotalStorage Productivity Center asks us for a name for our Performance Monitor job. We name it Switch Performance Monitor. 7. If we refresh the Switch Performance Monitors node, we see an entry for our switch Performance Monitor job definition. If we expand the entry, we see our job run. Clicking it allows us to examine the job log while the Performance Monitor is running. As with storage subsystem Performance Monitors, the switch Performance Monitor starts by retrieving the configuration data of the switch. We find in the log that the Performance Monitor has detected 32 ports (Figure 8-47).
We see the five storage subsystems of our infrastructure: one DS8000, one DS6000, one ESS, one DS4500, and one SVC. 2. By clicking the symbol next to the entry for a storage subsystem, we see a detail panel with basic information about that storage subsystem in the Content Pane. Figure 8-49 shows the detail panel for our DS8000 storage subsystem.
Figure 8-49 Viewing storage subsystem data - storage subsystem detail panel
3. We now return to the storage subsystem list by selecting the Storage Subsystems tab. 4. Next we want to display information about the volumes of our storage subsystems. We can get the information by highlighting a storage subsystem and clicking Volumes (Figure 8-50).
Figure 8-50 Viewing storage subsystem volume information - select storage subsystem
5. Because we have highlighted our DS8000 storage subsystem, in the next panel we have to choose the Extent Pool for which we want the volumes listed. We have two Extent Pools for open systems configured on our DS8000. They are named ITSO_TPC_EP0 and ITSO_TPC_EP1 (see Figure 8-51 on page 378). We select ITSO_TPC_EP0.
Figure 8-51 Viewing storage subsystem volume information - select Extent Pool
We receive the list of volumes in this Extent Pool, as shown in Figure 8-52.
6. If we click the icon next to a certain volume, we can view a detail screen for this volume as shown in Figure 8-53 on page 379.
Figure 8-53 Viewing storage subsystem volume information - volume detail panel
7. Besides the information about the volumes in our storage subsystem, we can find a very detailed view of the assets of our storage subsystem if we expand Data Manager → Reporting → Asset → By Storage Subsystem (see Figure 8-54), and then double-click the storage subsystem asset.
Reports
We can browse through the details of the storage subsystem configurations by expanding and collapsing the nodes for the logical entities. TotalStorage Productivity Center also presents information about storage subsystems in the form of reports. There are predefined reports within TotalStorage Productivity Center, and you can create your own reports. All the predefined reports for storage subsystems are performance-related and can be found under IBM TotalStorage Productivity Center → My Reports → System Reports → Disk. We discuss them when looking at the storage subsystem performance data.
Custom reports
Because there are no predefined reports related to asset and configuration data of storage subsystems, here we generate our own reports. As an example, we show how to do this by generating a report that lists the LUN mapping of the Virtual Disks of our SVC which are bigger than 100 MB in capacity. Note: The concept of reports is used widely within the TotalStorage Productivity Center user interface and works similarly for all kinds of reports. 1. To generate the report for our SVC LUN mapping, we expand Disk Manager → Reporting → Storage Subsystems → LUN to HBA Assignment and select By Storage Subsystem. In the Content Pane we see all the data which would be available for a LUN to HBA Assignment report. By default, TotalStorage Productivity Center includes all available columns in the report. We can deselect some of the included columns and reorder them according to our needs.
Figure 8-55 Viewing storage subsystem information, create reports, select data
2. We then click Selection. TotalStorage Productivity Center shows us a list of all available storage subsystems, and we select which storage subsystems we want to include in our report (Figure 8-56 on page 381).
Figure 8-56 Viewing storage subsystem information, create reports, select storage subsystem
3. We want to generate our report for our SVC system only, so we deselect all storage subsystems except the SVC. Clicking OK brings us back to the panel shown in Figure 8-55 on page 380. 4. Click Filter. We can now specify the conditions which have to be met by each single row of the report. The filter, of course, applies only to the storage subsystems which have been selected for the report (in our case, the SVC). We want only Virtual Disks that exceed a capacity of 100 MB to be reported, so on the next panel we select LUN Capacity as the column on which to apply a filter, as shown in Figure 8-57.
Figure 8-57 Viewing storage subsystem information, create reports, apply filter
5. We specify the condition that the capacity of a LUN has to meet to be included in our report. We select >= as the Operation and enter 100M as Value 1. If you are not sure about the notation of the values, use the Edit button. 6. Clicking OK brings us back to the panel shown in Figure 8-55 on page 380. 7. We have now successfully defined our report. We save the report by selecting File → Save in the menu bar. We are prompted to specify a report name, and name the report SVC LUN mapping. The report definition is saved under IBM TotalStorage Productivity Center → My Reports → tpcadmins Reports and can be generated by simply clicking it. The report can also be generated directly from the panel shown in Figure 8-55 on page 380 by clicking Generate Report, even without saving the report definition.
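Conceptually, the filter we just defined is a simple per-row predicate. The following sketch (hypothetical row structure and helper names, not the TPC data model; decimal units assumed for the M/G suffixes) illustrates the idea:

```python
# Illustrative sketch of a report row filter such as "LUN Capacity >= 100M".
def parse_size(value):
    """Parse values like '100M' or '2G' into bytes (decimal units assumed)."""
    units = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
    if value[-1] in units:
        return float(value[:-1]) * units[value[-1]]
    return float(value)

def row_filter(row, column="lun_capacity", op=">=", value="100M"):
    """Return True if the row satisfies the filter condition."""
    threshold = parse_size(value)
    if op == ">=":
        return row[column] >= threshold
    raise ValueError("unsupported operation: " + op)

rows = [{"lun": "vdisk0", "lun_capacity": parse_size("50M")},
        {"lun": "vdisk1", "lun_capacity": parse_size("2G")}]
print([r["lun"] for r in rows if row_filter(r)])  # ['vdisk1']
```

Only rows passing the predicate appear in the generated report, which is why the filter applies after the storage subsystem selection.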
The report shows us 19 LUN mappings for Virtual Disks greater than 100 MB in capacity. It shows us the HBAs to which the LUNs are mapped and all the other information we selected to be included in our report. Many reports within TotalStorage Productivity Center allow us to zoom into more details of the rows of the report. In our example, if we click the small icon to the left of a row, we see a detail panel for the Virtual Disk of this specific LUN mapping, showing us on which Managed Disks the extents of this Virtual Disk are located, as shown in Figure 8-59.
Figure 8-59 Viewing storage subsystem information, example report, SVC Virtual Disk details
Now that you know how to generate reports within TotalStorage Productivity Center, you can generate further reports related to asset and configuration information for your storage subsystems under the Disk Manager → Reporting → Storage Subsystems node of the Navigation Tree.
9. We can generate one of the reports simply by clicking it. In the example in Figure 8-61 on page 384, we generate the Subsystem Performance report.
Figure 8-61 Viewing storage subsystem performance, predefined report Subsystem Performance
We see an aggregation of the overall performance of the storage subsystems in our infrastructure for which performance data has been collected and is available (remember, we have no support for DS4000 performance management with the code bundle used to write this book). Note: A SAN Volume Controller is not considered a storage subsystem in terms of performance management. At the time of writing, performance data cannot be aggregated for an entire SVC cluster. For this reason, there is no SVC performance data at the storage subsystem level; performance data for an SVC is reported at the IO-Group, MDisk-Group, and VDisk level instead. This might change in future releases of TotalStorage Productivity Center. 10. By clicking the Selections tab, we can review and also change the definition of this predefined report and regenerate it. 11. We can now generate a chart from our predefined report. The chart generation process is initiated by clicking the little pie chart icon in the top left corner of the Content Pane (see Figure 8-62). Whenever you see this icon, you can generate a chart from a report or parts of a report.
12. TotalStorage Productivity Center offers us a menu where we can select the chart type, the rows of the report to be included in the chart, and the performance metric we want charted (see Figure 8-63). We choose to draw a History Chart over all available samples, include all storage subsystems in our chart, and select the Total IO Rate as the metric.
Figure 8-63 Viewing storage subsystem performance, generate chart, chart type and metrics
If we select just Chart as the chart type, the system draws a bar diagram for all three storage subsystems representing the average Total IO Rate over the complete time where samples are available. Because we selected a History Chart, we see the result in Figure 8-64.
We see the history of the total IO Rate for our three storage subsystems for all available samples. The DS6000 has not reported any IO activity in the monitored time frame. We could now limit the time scale and the resolution and redraw the chart.
Note that TotalStorage Productivity Center also generates trend projections for the future performance of our storage subsystems, depicted by the dotted extensions of the performance graphs. This can be very useful to foresee performance bottlenecks and determine appropriate measures to prevent them from occurring.
In addition to the predefined storage subsystem performance reports, you can generate your own reports under Disk Manager → Reporting → Storage Subsystem Performance. The generation of those performance reports works similarly to the generation of reports about asset and configuration data. TotalStorage Productivity Center allows you to generate your own storage subsystem performance reports by:
- Storage Subsystem
- Controller
- IO-Group (SVC)
- Array
- Managed Disk Group (SVC)
- Volume
- Managed Disk (SVC)
- Port
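The dotted trend lines amount to fitting a trend to the collected samples and extrapolating it forward. TPC's actual trending algorithm is not documented here; this is a plain least-squares sketch of the idea, with made-up sample values:

```python
# Illustrative sketch: fit a linear trend to performance samples and
# extrapolate it a few intervals into the future.
def linear_trend(ys):
    """Least-squares fit y = a*x + b over x = 0..n-1; returns (a, b)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

io_rate = [100.0, 110.0, 120.0, 130.0]  # hypothetical Total IO Rate samples
a, b = linear_trend(io_rate)
forecast = [a * x + b for x in range(4, 7)]  # extrapolate 3 intervals ahead
print(forecast)  # rising trend continues: 140, 150, 160
```

If the forecast crosses a known capacity limit, you have an early warning of the bottleneck before it occurs, which is the use case the text describes.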
2. We can select the fabric and click Zone Configuration. TotalStorage Productivity Center shows us a panel with the zoning information of this fabric (see Figure 8-66 on page 387). We can also change the zoning in these panels. We discuss zoning changes in 8.9, Configuring your storage subsystems and switches on page 409.
Besides this information, TotalStorage Productivity Center offers us a number of predefined reports to view asset and configuration data about our SAN infrastructure. We find these reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric, as shown in Figure 8-67 on page 388. These reports show the following data:
- Port Connections lists information about all free and occupied ports on the switches in your SAN. These reports provide information about the port connections and status, as well as the port and switch IDs, and other details.
- SAN Assets (All) lists information about all assets discovered by TotalStorage Productivity Center.
- SAN Assets (Connected Devices) lists information about all assets discovered by TotalStorage Productivity Center that are currently connected to the SAN.
- SAN Assets (Switches) lists all the switches discovered by TotalStorage Productivity Center.
- Port Errors lists information about errors being generated by ports on the switches in your SAN. These reports provide information about the error frames, dumped frames, link failures, port and switch IDs, and other details. This report can only be generated for switches that have performance data collected.
- Switch Performance displays a list and charts of the overall performance of the switches in your infrastructure. Overall performance means that TotalStorage Productivity Center aggregates the performance data over all ports of a switch.
- Top Switch Port Data Rate Performance displays a list of the 25 ports in your fabric which show the highest data rates (averaged over the time for which performance data has been collected).
- Top Switch Port Packet Rate Performance displays a list of the 25 ports in your fabric which show the highest packet rates (averaged over the time for which performance data has been collected).
Figure 8-68 on page 389 shows the Top Switch Port Data Rate Performance report for our infrastructure as an example.
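The two Top Switch Port reports boil down to ranking ports by their average rate over the collection period. A sketch of that selection with made-up sample data (N is 25 in the real reports):

```python
# Illustrative sketch of a "top N ports by average data rate" selection.
def top_ports_by_rate(port_samples, n=25):
    """port_samples: {port: [data-rate samples]}; returns top-n (port, avg) pairs."""
    averages = {port: sum(s) / len(s) for port, s in port_samples.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical per-port data-rate samples (MB/s).
samples = {5: [40.0, 60.0], 6: [10.0, 20.0], 12: [80.0, 100.0]}
print(top_ports_by_rate(samples, n=2))  # [(12, 90.0), (5, 50.0)]
```

Averaging over the whole collection period smooths out bursts, so a port with short, high spikes may rank below a steadily busy one.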
Figure 8-68 Viewing fabric performance data - Top Switch Port Data Rate Performance report
In addition to the predefined reports, we can generate our own customized reports and charts by expanding Fabric Manager → Reporting → Switch Performance and selecting By Port. As an example, we generate a chart showing the performance development of the ports to which our SAN Volume Controller is connected. We have a two-node SAN Volume Controller in our infrastructure, so it is connected to eight ports of our 2005-B32 switch. 1. To determine which ports the SVC is connected to, we look at the Port Connections report under IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric. We sort the report by Device by clicking the column header. We see the list shown in Figure 8-69.
Figure 8-69 Viewing fabric performance data - generate customized report, determine ports to report
We note that our SVC is connected to ports 5, 6, 12, 13, 14, 15, 21, and 22. 2. Expand Fabric Manager → Reporting → Switch Performance, select By Port, and click Selection in the Content Pane.
3. TotalStorage Productivity Center shows us a panel where we can select the ports we want included in our report. Note that in this panel TotalStorage Productivity Center offers us only the worldwide names of the ports, so we have to map them to the port numbers. In our example, port 5 is 200500051E34E895, port 6 is 200600051E34E895, and so on. We click Deselect All and then select our eight ports (Figure 8-70).
Figure 8-70 Viewing fabric performance data - generate customized report, select ports to report
4. We click OK and then Generate Report. TotalStorage Productivity Center shows us the performance report for the eight ports we selected. 5. We now want to generate the chart, so we select the little pie chart icon in the top left corner of the Content Pane. In the dialog panel which opens, we specify the type of chart and the metric we want TotalStorage Productivity Center to use to generate our chart. We select History Chart, and as a metric we select the Total Port Data Rate, as shown in Figure 8-71 on page 391.
Figure 8-71 Viewing fabric performance data - generate customized report, specify chart
6. After clicking OK, TotalStorage Productivity Center shows us our performance history chart depicting the Total Port Data Rate for the eight SAN Volume Controller Ports, as shown in Figure 8-72.
We can save the report by selecting File → Save in the menu bar. Note that this saves the report definition, but not the definition of our chart.
2. In our example, we see the one tape library we have in our lab environment. TotalStorage Productivity Center shows us the status of the library, the number of drives, the number of cartridges, and the maximum number of cartridges. We can launch a detail panel for our tape library by clicking the icon next to the entry (Figure 8-74).
3. In this panel, you can enter a name for your tape library and some user-defined properties. Selecting the Tape Libraries tab of the Content Pane returns you to the list of tape libraries. 4. You can now generate lists of drives, media changers, IO ports, and cartridges. You can also launch the element manager of the tape library. Figure 8-73 shows a list of the available drives in our lab tape library as an example.
There are no further reports, tables or charts for tape libraries available with TotalStorage Productivity Center V3.1.
Figure 8-76 Viewing computer and file system information - asset browsing, grouping
2. By drilling further into the nodes of the individual computers, we find all the important storage-related information, such as disk controllers, disks, file systems, network shares, and so on. Clicking an element in the Navigation Tree always shows a detail page about the element in the Content Pane. Figure 8-77 on page 395 shows a fully expanded asset tree for a Microsoft Windows 2003 server to provide an overview of the information you can browse.
Figure 8-77 Viewing computer and file system information: asset browsing, overview
Predefined reports
Next we look at the system-supplied predefined reports on computers and file systems. You can find those reports under IBM TotalStorage Productivity Center → My Reports → System Reports → Data. TotalStorage Productivity Center offers the following predefined reports:
- Access Time Summary Report is a summary of the number of files in your environment and when they were last accessed.
- Disk Capacity Summary Report includes disk capacity, per disk, per computer, per cluster, per computer group, per domain, or for the whole environment.
- Access File Summary Report is an overview of the information for files by directory, directory group, file system, file system group, cluster, computer, computer group, domain, and for the entire network.
- Disk Defects Report lists any disk defects on the computers being monitored by TotalStorage Productivity Center through Data Agents.
- Most at Risk Files Report displays information about the oldest files that have been modified and have not yet been backed up or archived since they were modified.
- Oldest Orphaned Files Report displays information about files that have the oldest creation date and no longer have their owners registered as users on the computer or network.
- Most Obsolete Files Report lists information about files that have not been accessed or modified for the longest period of time.
- Storage Access Times Report lists when files were last accessed and how long ago they were accessed.
- Storage Availability Report displays the availability of the computers that are monitored with pings.
- Storage Capacity Report lists storage capacity information about each computer within your organization.
- Storage Modification Times Report provides information about files within the network that were modified.
- Total Freespace Report displays the total amount of unused storage across a network.
- User Quota Violations Report lists which users within the enterprise have violated a Data Manager quota.
- User Space Usage Report lists information about storage statistics related to a specific user within your enterprise.
- Wasted Space Report lists information about storage statistics on non-OS files not accessed in the last year and on orphan files.
Some of the reports allow you to drill down into their single rows, and some allow you to generate charts from the report. 1. As an example, we take a look at the Disk Capacity Summary Report. When generated, it first shows us a one-row report summarizing the total disk capacity of all computers in our infrastructure running Data Agents, as shown in Figure 8-78.
Figure 8-78 Computer and file system information, predefined reports, and Disk Capacity Summary
2. In our environment, the six computers which have Data Agents installed have a total disk capacity of 19.26 TB. We can now drill into this row and see the disk capacities for each of those six computers, as shown in Figure 8-79 on page 397.
Figure 8-79 Viewing computer and file system information, Disk Capacity per computer
3. By clicking the pie chart icon in the top left corner of the Content Pane, we can generate a graphical depiction of this disk capacity distribution in the form of a bar chart (see Figure 8-80).
Figure 8-80 Viewing computer and file system information, Disk Capacity per computer, bar chart
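The summary-then-drill-down behavior of the Disk Capacity Summary Report can be pictured as a simple two-level aggregation. A sketch with made-up numbers (not our lab values):

```python
# Illustrative sketch: one summary total, expandable into per-computer rows.
def capacity_report(disks):
    """disks: list of (computer, capacity_tb) tuples.
    Returns the grand total and a per-computer breakdown."""
    per_computer = {}
    for computer, capacity in disks:
        per_computer[computer] = per_computer.get(computer, 0.0) + capacity
    total = sum(per_computer.values())
    return total, per_computer

# Hypothetical disks: azov has one disk, pqdi has two.
disks = [("azov", 0.5), ("pqdi", 0.25), ("pqdi", 0.25)]
total, by_computer = capacity_report(disks)
print(total)        # 1.0 (the one-row summary)
print(by_computer)  # {'azov': 0.5, 'pqdi': 0.5} (the drill-down rows)
```

The drill-down simply exposes the per-computer partial sums behind the single summary row; per-disk rows would be one level further down.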
4. In addition to these predefined reports you can generate a wide variety of user-defined reports under Data Manager Reporting as shown in Figure 8-81 on page 398.
Figure 8-81 Viewing computer and file system information, user defined reports
The generation of those reports works similarly to the generation of the reports we highlighted in previous sections. Refer to the IBM TotalStorage Productivity Center User's Guide, GC32-1775, for more details on the user-defined reports available.
8.8 Alerting
You can set up TotalStorage Productivity Center to examine the data about your infrastructure as it enters the system for certain configurable conditions, and to trigger different events as a reaction to those conditions. For this purpose, TotalStorage Productivity Center uses two mechanisms: Alerts and Constraints. Alerts and Constraints have different goals but are closely related, as we will see. First, we define what we mean by Alerts and Constraints.
Alerts
TotalStorage Productivity Center Alerts cause an entry in the TotalStorage Productivity Center Alert Log, found under IBM TotalStorage Productivity Center → Alerting → Alert Log. In addition to this entry, an Alert can optionally cause one of the following actions:
- SNMP Trap
- Tivoli Enterprise Console (TEC) Event
- Login Notification (a notification to a TotalStorage Productivity Center user, which will be received upon login)
- Windows Event Log Entry
- The execution of a Script on any server in the infrastructure running a Data Agent
- E-mail
Constraints
TotalStorage Productivity Center Constraints are configurable conditions (often, but not always, thresholds) which cause TotalStorage Productivity Center to log the detection of such a condition in its database as a Constraint Violation. These Constraint Violations can be viewed in reports and charts and can also be used to trigger certain actions.
As we have already seen when defining our data collection jobs in the previous chapters, we can define Alerts for jobs within TotalStorage Productivity Center to notify the user when a job completes with errors. In addition to those more implicit Alert definitions, we can define Alerts for many other storage-related events in our environment. In this chapter we concentrate on Alerts and Constraints for:
- Computers and their file systems and directory structures
- Storage subsystems
- Fabrics, switches, and SAN endpoints
TotalStorage Productivity Center handles Alerts and Constraints slightly differently for computers, file systems, and directories on the one hand, and for storage subsystems, fabrics, switches, and SAN endpoints on the other, as we will see in the following sections.
Figure 8-82 Alerts and Constraints for computers, file systems and directories
TotalStorage Productivity Center allows you to define a lot of very granular conditions for Alerts and Constraints for computers, file systems, and directories. This is especially true for Constraints, which can become rather complex. However, we start our discussion with Alerts.
Creating an Alert
We now define an Alert and a Constraint for our infrastructure as an example. We want to receive an e-mail if the free space of the root file system of our AIX server (AZOV) falls below a threshold of 200 MB. We define this condition as an Alert.
We further want to create a Constraint Violation each time a certain user (tpcadmin) stores an mp3 file on the file system of the C: drive of the pqdi server. We want to create an entry in the TotalStorage Productivity Center Alert Log if the total capacity of all these mp3 files exceeds 5 MB. 1. We start by setting up our Alert. Expand Data Manager → Alerting, right-click Filesystem Alerts, and select Create Alert (see Figure 8-83).
2. In the Content Pane (Figure 8-84), we must specify the condition which should trigger TotalStorage Productivity Center to raise the Alert. In our example, the condition is that a file system's free space falls below 200 Megabytes. We further specify that TotalStorage Productivity Center should send an e-mail to a certain user should the condition arise.
Figure 8-84 Creating file system Alert, specify alert condition and action
3. We then select the Filesystems tab of the Content Pane. Here we define the file system to which our condition will apply. In our case, we select the root file system of the AZOV machine (see Figure 8-85 on page 401).
4. We now save the Alert by selecting File → Save in the menu bar. We name the Alert AZOV filesystem freespace alert (see Figure 8-86).
Now the Alert is active. At the next Scan, when TotalStorage Productivity Center detects that the root file system of AZOV has fallen below the configured threshold of 200 MB, an Alert is raised in the TotalStorage Productivity Center Alert Log and an e-mail is sent to the recipient specified in the Alert definition. Figure 8-86 shows the TotalStorage Productivity Center Alert Log.
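The check TPC performs when Scan data arrives amounts to a threshold comparison plus an action. A sketch of that logic (our illustration only; the notify callback stands in for the e-mail action, and all names are hypothetical):

```python
# Illustrative sketch of the free-space Alert evaluation described above.
THRESHOLD_MB = 200

def check_filesystem_alert(fs_name, free_mb, notify):
    """Invoke notify (e.g. to send an e-mail) when free space is below threshold."""
    if free_mb < THRESHOLD_MB:
        notify("Alert: %s has only %d MB free (threshold %d MB)"
               % (fs_name, free_mb, THRESHOLD_MB))
        return True
    return False

messages = []
check_filesystem_alert("/", 150, messages.append)      # below threshold: alert
check_filesystem_alert("/home", 512, messages.append)  # plenty free: no alert
print(messages)  # one alert entry, for "/"
```

Because the check runs against Scan data, the Alert fires at the next Scan after the condition becomes true, not at the instant the file system fills up.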
You can bring up a detail screen for the Alert by clicking the icon next to the Alert Log entry.
Constraint definition
In this section we define our Constraint. Remember, we want to create a Constraint Violation each time a certain user (tpcadmin) stores an mp3 file on the file system of the C: drive of the pqdi server. In addition, we want to create an entry in the TotalStorage Productivity Center Alert Log if the total capacity of all these mp3 files exceeds 5 MB. 1. Expand Data Manager → Policy Management → Constraints. You see a number of predefined Constraints which are already active when you install TotalStorage Productivity Center. 2. Right-click Constraints and select Create Constraint, as shown in Figure 8-88.
3. Unlike the Alert definition, we first have to select the file system for which we want to define our Constraint. We select the C:\ file system of the pqdi server, as shown in Figure 8-90 on page 403.
4. Select the File Types tab of the Content Pane. Here you specify the types of files which you want to trigger the Constraint Violation. In our example, we select the predefined mp3 type. We could also define new file types in the panel shown in Figure 8-90.
5. We further want to limit the Constraint to one specific user, tpcadmin. We select this user in the Users tab of the Content Pane, shown in Figure 8-91 on page 404.
6. In the Options tab, we could specify some more parameters for our Constraint, such as the number of violating filenames TotalStorage Productivity Center stores in its database, and some conditions which would trigger an Alert other than those defined in the Constraint. 7. In the Alert tab, we specify the condition that an Alert should be raised if the total size of all violating mp3 files stored by this user exceeds 5 MB (Figure 8-92).
8. Save your Constraint and name it pqdi mp3 constraint. 9. Log on to pqdi using the tpcadmin user and store more than 5 MB of mp3 files on the C:\ file system. This causes the generation of a Constraint Violation and an entry in the Alert Log on the next run of a Scan job against pqdi. 10. You can view the Constraint Violation by expanding Data Manager → Reporting → Usage Violations → Constraints and selecting By Computer (Figure 8-93 on page 405).
11. We can drill even further into the Constraint Violation and list the files which caused the violation. In our case, the files are file1.mp3 to file4.mp3 and are stored in the root directory of C:\, as shown in Figure 8-94.
Because we defined it in our Constraint definition, the Constraint Violation has also triggered an Alert entry in the TotalStorage Productivity Center Alert Log. The reason is that the combined size of the four mp3 files is bigger than 5 MB (see Figure 8-95).
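The two-level logic of this Constraint, where every matching file is a violation and an Alert fires only when the combined size exceeds the limit, can be sketched as follows (hypothetical file records, not the TPC data model):

```python
# Illustrative sketch of the Constraint evaluated at Scan time.
ALERT_LIMIT_MB = 5

def evaluate_constraint(files, user="tpcadmin", ext=".mp3"):
    """Return (violating files, alert flag). Each matching file is a
    Constraint Violation; the Alert fires only on the combined size."""
    violations = [f for f in files
                  if f["owner"] == user and f["name"].endswith(ext)]
    total_mb = sum(f["size_mb"] for f in violations)
    return violations, total_mb > ALERT_LIMIT_MB

# Four hypothetical mp3 files of 1.5 MB each, stored by tpcadmin.
files = [{"name": "file%d.mp3" % i, "owner": "tpcadmin", "size_mb": 1.5}
         for i in range(1, 5)]
violations, alert = evaluate_constraint(files)
print(len(violations), alert)  # 4 True  (6 MB combined exceeds the 5 MB limit)
```

A single 1.5 MB file would still be a Constraint Violation, but no Alert Log entry would be created, which matches the behavior observed in our lab.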
Figure 8-96 Alerts and Constraints for storage subsystems, fabrics and switches
Although an Alert definition can generate a Constraint definition, and you can report on the Constraint Violations caused by those Constraint definitions, there is no straightforward way to view the Constraint definition itself.
Important: TotalStorage Productivity Center comes with three default Constraints which will cause Constraint Violations in the Constraint Violation Reports and may cause elevated status indications in the Topology Viewer. These Constraints however, do not raise a TotalStorage Productivity Center Alert Log when violated. Those predefined default Constraints are storage subsystem Constraints only. Their metrics and conditions are:
NVS_Full is defined as DASD fast write operations delayed due to NVS space constraints, divided by all I/O requests. Default values: critical level 10, warning level 3.
Cache_HoldTime is the cache holding time in seconds for this subsystem. Default values: critical level 30 sec., warning level 60 sec.
Disk Utilization is expressed as a percentage: critical level 80%, warning level 50%.
TotalStorage Productivity Center creates Constraint definitions for storage subsystems and switches only if you define Alerts with Threshold conditions. Those conditions are all performance related. This means that Constraint Violations are only raised while performance data is being collected and entering the TotalStorage Productivity Center system. Here we create an Alert definition, and an implicit Constraint definition, for the condition that the Total Data Rate for our DS8000 subsystem exceeds 5 MB/s.
1. Expand Disk Manager → Alerting, right-click Storage Subsystem Alerts and select Create Storage Subsystem Alert, as shown in Figure 8-97.
2. In the Content Pane we must specify the condition for our Alert. Select Total Data Rate Threshold and enter the values for the metric. When defining storage subsystem Alerts, we can establish boundaries for normal expected subsystem performance. When a collected performance data sample falls outside the range we set, an Alert is generated. The upper boundaries are Critical Stress and Warning Stress; the lower boundaries are Warning Idle and Critical Idle. We can leave some of the values empty; in this case, TotalStorage Productivity Center simply does not check the performance data against that boundary. In our example, shown in Figure 8-98 on page 408, we only define the Critical Stress value as 5 MB/s.
Figure 8-98 Creating storage subsystem Alert, specifying the alert condition
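The four-boundary check described in step 2 can be sketched as follows. This is a hypothetical illustration of the behavior, assuming an empty (None) boundary is simply skipped, as the text describes; it is not product code.

```python
def classify_sample(value, critical_stress=None, warning_stress=None,
                    warning_idle=None, critical_idle=None):
    """Check one performance data sample against the four optional boundaries."""
    if critical_stress is not None and value > critical_stress:
        return "critical stress"
    if warning_stress is not None and value > warning_stress:
        return "warning stress"
    if critical_idle is not None and value < critical_idle:
        return "critical idle"
    if warning_idle is not None and value < warning_idle:
        return "warning idle"
    return "normal"

# Only Critical Stress is set (5 MB/s), as in the example Alert definition.
print(classify_sample(7.2, critical_stress=5.0))  # critical stress
print(classify_sample(3.1, critical_stress=5.0))  # normal
```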
We do not specify any additional actions to be taken by TotalStorage Productivity Center when the Alert is raised. 3. In the Storage Subsystems tab, select the storage subsystem for which the Alert (and the implicit Constraint Violation) shall be raised. Select our DS8000 system as shown in Figure 8-99.
Figure 8-99 Creating storage subsystem Alert, selecting the storage subsystem
4. We now save our storage subsystem Alert definition and name it DS8000 Total Data Rate. Each time a performance data sample is retrieved by the Performance Monitor job running for our DS8000 storage subsystem, TotalStorage Productivity Center inspects the performance data against the threshold we specified in our Alert. When TotalStorage Productivity Center detects that the Total Data Rate exceeds 5 MB/s, it raises an Alert in the Alert Log and also logs a Constraint Violation, because the Alert condition for this Alert was a Threshold condition.
5. We can view this Constraint Violation by expanding Disk Manager → Reporting → Storage Subsystem Performance. Select Constraint Violations, select the Total Data Rate Threshold column, and generate the report. The corresponding Alert can be viewed in the TotalStorage Productivity Center Alert Log.
Note: If a Performance Monitor job is currently running for a specific subsystem, fabric, or switch, you can define Alerts and implicit Constraints for that same element. However, these definitions will not take effect unless you stop the Performance Monitor, delete it, redefine it, and restart it.
You can define Alerts for fabrics, switches, and SAN endpoints, and implicit Constraint definitions for switches, in the same way as described above, under Fabric Manager → Alerting. The Constraint Violations can be viewed under Fabric Manager → Reporting → Switch Performance by selecting Constraint Violations and generating the desired report. The Alerts can be viewed in the TotalStorage Productivity Center Alert Log.
Figure 8-100 Configuring storage subsystems and switches, invoke volume / virtual disk creation
2. Enter the zoning configuration dialog box by expanding Fabric Manager and selecting Fabrics. This shows a list with the available fabrics. Select a fabric and click Zone Configuration. It is also possible to enter the zone configuration dialog box directly from the Create Volume / Virtual Disk dialog box shown in Figure 8-101.
Figure 8-101 Configuring storage subsystems and switches, invoke zoning configuration
We will now create a Virtual Disk on our SAN Volume Controller and, as an example, assign this Virtual Disk to our SVC Master Console ITSOSVC.almaden.ibm.com (see Figure 8-1 on page 341). There is no zone between this computer and the SVC, so we need to create this zoning configuration as well.
1. We enter the Create Virtual Disk dialog box by selecting the SAN Volume Controller and clicking Create Virtual Disk in the screen shown in Figure 8-100.
2. TotalStorage Productivity Center shows a panel where you specify the properties of the Virtual Disk you want to create. Create one (1) Virtual Disk with a size of 500 MB, name it TPC_CREATED, and let IO-Group 1 manage it. Specify that the Virtual Disk is created in the DS4500_R10 Managed Disk Group. Click Next (see Figure 8-102 on page 411).
3. We now see a list of all WWPNs TotalStorage Productivity Center has located. The information about the WWPNs comes from various sources (the switches and the host definitions in the subsystems), and the WWPNs are not ordered by fabric or any other criteria. As a result, we must look for one of our Master Console HBAs, which we can locate easily because the system presents meaningful names for the HBAs of this computer.
4. Select one of the two HBAs (we could also select both) by moving the HBA to the Assigned Ports column (see Figure 8-103). Then click Next.
5. TotalStorage Productivity Center now recognizes that there is no zone between the SVC and the Master Console. It asks whether it should create a new zone, update an existing one, or do nothing. Select Create a new zone with the name SVCMC_ITSOSVC01 (see Figure 8-104) and click Next.
6. View the summary page as in Figure 8-105, where you can review your configuration request.
7. Click Finish. TotalStorage Productivity Center now creates three jobs: one for the creation of the Virtual Disk, one for the LUN mapping, and one for the creation of the new zone.
The first two jobs, which are related to the configuration changes of the SVC (Virtual Disk creation and LUN mapping), create entries under Disk Manager → Monitoring → Jobs (see Figure 8-106). The job for the creation of the new zone creates an entry under Fabric Manager → Monitoring → Jobs.
8. We can monitor the jobs and examine their logs by clicking the entries and drilling into the job logs (see Figure 8-107).
The green squares and the entries in the job logs indicate that all the jobs have completed successfully, so our SVC Master Console is able to access the Volume.
9. Look at your zone configuration and verify that the new zone has been created. Enter the zone configuration dialog box by expanding Fabric Manager → Fabrics, select our fabric, and click Zone Configuration. Expand the active zone set and see the new zone SVCMC_ITSOSVC01 with the correct ports (see Figure 8-108 on page 414).
10. As a last step, verify that the new Virtual Disk has been created and that it was assigned to our Master Console HBA. To do this, expand Disk Manager → Reporting → Storage Subsystems → LUN to HBA Assignments and select By Storage Subsystem. Select the SVC to be included in your report and generate the report (see Figure 8-109).
Figure 8-109 Create Virtual Disk and zone, LUN to HBA Assignment Report
Chapter 9. Topology viewer
Within IBM TotalStorage Productivity Center, the Topology Viewer is designed to provide an extended graphical topology view: a graphical representation of the physical and logical resources (for example, computers, fabrics, and storage subsystems) that have been discovered in your storage environment. In addition, the Topology Viewer depicts the relationships among resources (for example, the disks comprising a particular storage subsystem). Detailed tabular information (for example, the attributes of a disk) is also provided. With all the information the Topology Viewer provides, you can monitor and troubleshoot your storage environment more easily and quickly.

The overall goal of the Topology Viewer is to provide a central location to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation to the environment. The Topology Viewer UI supports cognitive mapping of the entities within the environment, and provides data about entities and access to additional tasks and functions associated with the current environmental view and the user's role.

The Topology Viewer uses the TotalStorage Productivity Center database as the central repository for all data it displays. It reads the data from the database at user-defined intervals and automatically updates the displayed information when necessary.

The Topology Viewer is an easy-to-use and powerful new tool within TotalStorage Productivity Center that will make a storage manager's life easier. But as is true for every tool, you first have to understand the basics, the concepts, and also the dos and don'ts to get the most out of it. This chapter guides you through the Topology Viewer.
After reading this chapter, you will know how the Topology Viewer is used, where to find the information you want to see, and how to navigate among the views within the tool. But first things first: we start with some definitions.
The same object displayed in a more detailed view of the Topology Viewer discloses much more detail. In Figure 9-2, for an object such as the DS8000, the device details such as the number of Disks, Pools, Volumes and LUNs are presented. By clicking the plus sign at the upper right of the displayed Entity Groups, you can see even more information. Entity Groups are described in the following sections.
Semantic Zooming is essentially a scaling technique applied to information abstractions, as opposed to graphical features as in graphical zooming. While graphical zooming changes the scale of the graphical representation of the objects in a view, semantic zooming changes the level of information abstraction.
Semantic Zooming is used in the Topology Viewer to address the scaling issue and is key to the design of the Topology Viewer. It is implemented with four distinct zoom levels, or levels of abstraction. The four levels are defined as:
1. Overview (everything): a global view which shows a highly aggregated view of the entire environment.
2. L0 (level 0, large groups of similar stuff): a groups view focusing on a class of entities. It shows several groups of entities that correspond to the selected topology class. The classes are Computers, Fabrics, Storage Subsystems, and Other.
3. L1 (level 1, related groups of stuff): a group view which focuses on one group of selected entities and shows their immediate neighbors, such as a group of computers.
4. L2 (level 2, one entity): the detail view, the most detailed view, which focuses on a selected entity and its neighbors.
The four levels of Semantic Zooming within the Topology Viewer are used and described in detail in 9.2, Getting started on page 434. For example, at a high level, an object representing a storage subsystem on L0 may be rendered as an icon with a title only, as shown in Figure 9-3.
At lower levels, the same storage subsystem can be rendered as a box containing subentities representing disks and showing attributes of each individual disk. At yet another detail level, the ports of the storage subsystem might be shown, as well as status information for each port. The example in Figure 9-4 on page 418 shows the detailed view of a DS8000 Storage Subsystem on level L2.
Note that with graphical zooming, all of these details would be shown at every zoom level, though often too small to be useful, adding clutter to the display. Semantic zooming thus reduces unnecessary detail and allows the user to focus on the details relevant to a particular level and a particular task.
9.1.3 Entities
An entity is defined as a physical device or logical resource discovered by TotalStorage Productivity Center. Each entity typically has a set of attributes; for example, an entity of class Computer has an OS Type attribute. Additionally, entities may have user-defined attributes, called User Defined Properties (UDPs), that can be set from the Topology Viewer. All discovered entities have a conventional identification associated with them, similar to a label. These labels are displayed in the graphical as well as the tabular view.
Entity classes
The Topology Viewer assigns every entity discovered by TotalStorage Productivity Center to one of the following four entity classes:
Computers
Fabrics (includes switches)
Storage (includes tape libraries)
Other
The Topology Viewer provides specific views for each of these four classes. For each class, three separate views exist, matching the three detail levels (L0-L2) defined in 9.1.2, Semantic Zooming on page 417. Including the special Overview, the Topology Viewer thus provides thirteen different views. The table in Figure 9-5 on page 419 shows these views in a matrix.
Navigation between the single views, or cells, is simple and flexible. To drill down to a single computer, follow these steps:
1. Starting at the Overview, select the entity class Computer, and view C0 is displayed. This is the Level 0 view of computers (groups of similar stuff).
2. Within C0, select one specific group, and view C1 is displayed. This is the Level 1 view of computers (related groups of stuff).
3. Open the C2 view by selecting one single computer within the group displayed in C1. This displays the L2 view of the computer (one entity).
This is just an example; there are always multiple ways to find the requested information about the entities. Other examples, and how to navigate between views, are described in detail in 9.2.2, Navigation on page 436.
Note: The Storage views include both storage subsystems and tape libraries.
The following example shows how this is done within Topology Viewer. 1. Assume that we have one entity in class Others. We expanded the group. The Topology Viewer displays information similar to Figure 9-6.
The entity possesses two FC ports connected to a switch, but the entity type is unknown to TotalStorage Productivity Center.
2. Add information derived from other sources to this entity. In this case, the Type and a descriptive name are added. To add the detailed information, move the cursor over the unknown entity and right-click. The Background Context Menu is displayed. Select Launch Detail Panel, and the panel in Figure 9-7 opens.
3. By defining a type, in our example Computer, the Topology Viewer is able to display this entity in the appropriate class. We also change the entity's label from the originally displayed WWNN to TPC1. Opening the Computer Level 0 view (C0) displays the information in Figure 9-8 on page 421.
The changed entity is now displayed as an entity with label TPC1 in class Computers.
Entity groups
Entities are the displayable elements within the Topology Viewer. At the overview level, entities can be computers, fabrics, or storage subsystems. In the detailed views, entities such as ports, HBAs, disks, pools, LUNs, and volumes are shown.
Because the number of entities can become overwhelming in a real system, grouping of entities is employed in order to reduce the number of entities shown in the topology view. The Topology Viewer provides grouping of entities that share some similarities. Depending on the entity type several grouping criteria are available. Users can change the default grouping algorithm so that entities can be regrouped dynamically based on the selected algorithm. By doing this, they can work most effectively based on their current tasks. Figure 9-9 shows grouping of Disk Subsystem entities in one single group:
To change the grouping algorithm, move the cursor over the specific group, right-click, and then select the Group by option. Depending on the type of the entities in the group, different grouping criteria are available. Changing the grouping criteria in our example from Single Group to Group by Health Status provides the view of two groups shown in Figure 9-10.
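The regrouping behavior described above can be sketched as a simple keyed grouping. This is a generic illustration, assuming entities are plain records and the grouping criterion is a key function; it is not Topology Viewer code.

```python
from collections import defaultdict

def group_entities(entities, key):
    """Group entities by the value the grouping criterion returns for each."""
    groups = defaultdict(list)
    for entity in entities:
        groups[key(entity)].append(entity)
    return dict(groups)

# Switching from "Single Group" to "Group by Health Status" splits one
# group of disk subsystems into one group per distinct health value.
subsystems = [
    {"name": "DS8000", "health": "Normal"},
    {"name": "SVC", "health": "Warning"},
    {"name": "DS4500", "health": "Normal"},
]
by_health = group_entities(subsystems, key=lambda e: e["health"])
print(sorted(by_health))  # ['Normal', 'Warning']
```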
Arrangement of entities within a group (sort order) can also be modified so that an ordering of entities is provided that best suits your current tasks. By clicking on the upper right corner (plus/minus sign) of an Entity Group, the group can be expanded or collapsed. In the graphical topology view as well as in the tabular view, a default grouping is applied so that collections of entities are shown as groups whenever possible.
Overlays are also displayed in the tabular view. The same Overlays as shown in Figure 9-11 are shown in the tabular view in Figure 9-12.
In TotalStorage Productivity Center V3.1, Health Overlays are also applied to Connections by displaying them in different colors; see 9.1.6, Connections on page 424. This may be expanded to other overlays in future releases. Overlays can be turned on and off by setting the specific option in the Background Context Menu. This menu is available at all times for the background of the graphical views. The background is any part of the graphical view that is not displaying a group entity, individual entity, or connection. To invoke the context menu, right-click in the background. Selecting Global Settings displays the overlay options.
Note: Overlay settings are not persistent, therefore you have to activate them for each session. Performance overlays are only visible on the detailed level, which is the L2 view.
9.1.5 Layout
The primary launch point for the Topology Viewer is the standard navigation area (the node tree) on the left side of the TotalStorage Productivity Center interface. The Topology Viewer provides several views of the environment. Each view is split into two synchronized subviews, displayed together in the Topology Viewer's panel:
Graphical Topology View, displayed in the upper part of the panel
Tabular View, displayed in the lower part of the panel
The graphical and tabular subviews are displayed together as one view and are always synchronized; they both display the same entities. Changes in one subview update the other, and vice versa. After the Topology Viewer is launched, the top-level view is always open. If multiple views are open, only one is displayed in the panel. Each view creates its own tab, as shown in Figure 9-13. By selecting a tab, you can quickly switch between the views.
Graphical View
The Graphical View is a visual and spatial rendering of the entities and their relationships to each other in the environment, using icons, boxes, and lines to show the entities and their logical and physical connections. Entities in the environment are rendered as icons with labels. Groups of entities are rendered as boxes with labels; depending on whether a group is collapsed or expanded, either a summary of the group's content or the individual entities in the group are visible. Logical and physical connections among entities are rendered on demand as lines connecting the relevant entities or groups of entities. The advantage of the graphical view is that relationships among entities are rendered spatially, allowing users to map the environment spatially and orient themselves for their tasks.
Tabular View
The Tabular View displays a textual rendering of the environment using a set of tables organized in tabs by class of entities with each table organized in rows and columns showing different attributes of the entities in the environment.
Mini-map
To further facilitate effective navigation the graphical view contains a mini-map window that represents an abstract overview of the whole topology view, thereby providing an environmental and work context for the user. The mini-map indicates the visible portion of the topology, and facilitates user interaction to pan the visible portion.
The mini-map window renders the whole topology view in abstract form, reducing the level of detail and ensuring that the complete mini-map is visible at all times. Essentially, the map shows only the top and second level of detail of the current view and groups are represented as colored boxes based on the aggregated health value of that group. Note: In TotalStorage Productivity Center V3.1, the health overlay is shown. Future versions of topology might allow users to select what overlay is surfaced in the mini-map.
9.1.6 Connections
The Topology Viewer UI supports both physical and logical connections among entities in the environment. Connections are shown in both graphical and tabular views on demand. The connection option is turned on or off via the Background Context Menu (see 9.1.4, Information Overlays on page 422). Connections between entities are only shown at L1 and L2 view levels. In these views connections between entities are shown on demand, when the user selects entities or groups.
A connection to a collapsed group can represent either a single connection or several connections. Single connections are drawn as thin lines, but multiple connections are indicated using thicker lines. Therefore, Figure 9-15 indicates that the switch is connected to more than one entity in the collapsed group.
Health status
Health status is shown on connections. Individual connections can show normal (green) or critical state (red). If several connections are aggregated to be shown as one (thicker) line, this line may also show warning state (yellow) according to the normal aggregation rules. Aggregation rules are discussed in following sections. The table in Figure 9-16 on page 426 gives an overview of the actual states and their graphical representation within the Topology Viewer.
The Health Status Overlay displays the operational status of entities such as Computers, Storage Subsystems, and Switch Ports. Entities can show the following health values:
Normal: Entity is operating normally.
Warning: At least some part of the entity is not operating or has serious problems.
Critical: Entity is either not operating or has serious operational problems.
Missing: Entity was discovered and recognized by TotalStorage Productivity Center, but is no longer discovered as of the latest refresh cycle.
Unknown: Entity was discovered but not recognized by TotalStorage Productivity Center.
Examples of Unknown health values are:
A computer without any TotalStorage Productivity Center Agent, connected to a fabric or switch monitored by TotalStorage Productivity Center
A disk subsystem or tape library not registered to a CIMOM, or registered to a CIMOM not discovered by TotalStorage Productivity Center, connected to a fabric or switch monitored by TotalStorage Productivity Center
The example in Figure 9-17 shows three expanded entity groups. Each entity displays its specific state. The group state is displayed at the top, to the left of the group label.
Performance status
The Performance Overlay displays the performance status of entities such as Computers, Storage Subsystems, and Switch Ports, indicating whether predefined internal thresholds have been reached or exceeded. The Topology Viewer scans the PMM_Exception table for the existing constraint violations. The Topology Viewer shows the performance status of an entity as one of the following:
Normal: Entity is operating at expected performance.
Warning: At least one part of the entity is not operating at expected performance.
Critical: Entity is operating below expected performance.
Unavailable: Used if the entity is not available in certain situations.
The performance status is aggregated up to a high-level entity, such as a switch or subsystem, based on the combination of all the Performance Manager exceptions from the different entities underneath the high-level entity. To get more information about a performance state of Critical or Warning shown in the Topology Viewer, select Disk Manager → Reporting → Storage Subsystem Performance → Constraint Violations. Generate the report for the specific subsystem, look for the first critical or warning entry, and display it to get the details of Date, Time, and Values. Then search for the alert timestamp in the Alerts report to determine the most recent alert. An example of the Performance Overlay is shown in Figure 9-21 on page 430.
Aggregation rules
Multiple states from single entities are aggregated into group states or states of entities shown in upper layers. The table in Figure 9-18 on page 428 explains how entity states are aggregated within the graphical and tabular view of the Topology Viewer.
Following are the health and performance state aggregation rules:
If all entities are in the same state, the aggregated state reflects this state.
An aggregated Critical State (red) occurs if:
For a single entity, all the entries in the database are critical.
One entity is in the Missing State and at least one other entity shows a Critical State (shown in the first and second rows). The state remains critical as long as the entries are maintained in the PMM_Exception table.
An aggregated Normal State occurs if:
One entity is in the Normal State and all others are in the Normal or Undefined State (fourth row from top).
One entity is in the Undefined State and all others are in the Normal or Undefined State (fifth row from top).
An aggregated Warning State (yellow) occurs if:
At least one entry for an entity is in the Warning State, or Critical and Warning State entities are combined.
All other combinations not described under Critical State or Normal State aggregate to the Warning State.
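The aggregation rules above can be condensed into a small function. This is an illustrative sketch of the rules as stated, assuming child states are passed in as strings; it is not product code.

```python
def aggregate_states(states):
    """Aggregate child entity states into one group state, per the rules above."""
    unique = set(states)
    if len(unique) == 1:                      # all entities in the same state
        return states[0]
    if "Missing" in unique and "Critical" in unique:
        return "Critical"
    if unique <= {"Normal", "Undefined"}:     # only Normal/Undefined present
        return "Normal"
    return "Warning"                          # every other combination

print(aggregate_states(["Normal", "Undefined"]))  # Normal
print(aggregate_states(["Missing", "Critical"]))  # Critical
print(aggregate_states(["Warning", "Critical"]))  # Warning
```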
States displayed within the tabs on top of the Topology View are aggregated to a specific high-level entity class (refer to Entity classes on page 418). For Computers and Storage, only the states of these entities are used for aggregation. For Fabric, only the switch status is used (Level 2). For Level 1 and Level 0, the states of all displayed entities are used.
9.1.8 Hovering
Hovering the mouse pointer over the status overlay icons reveals short textual descriptions of the status information, as shown in Figure 9-19.
Hovering displays performance details only where the data is actually available at that level of detail. For example, performance data is available for individual ports and switches as a whole. However, no aggregated performance data is being collected for port groups. Therefore, hovering over a port group's performance overlay icon reveals only a textual description of the overlay status. The first example in Figure 9-20 shows performance details when hovering over the performance overlay icon of a single port:
The example in Figure 9-21 on page 430 shows performance information when hovering over the performance overlay icon of the disk pool. Note that only a textual description of the overlay status is displayed.
9.1.9 Pinning
The simple selection paradigm is sufficient for most selection activities; however, it does not allow for retention of the selected entities for future reference. In the Topology view, pinning entities is introduced as a way to overcome this problem. Pinning provides a method to retain a list of selected entities for future reference or quick access. Pinned entities are typically of high interest to the user, such as for monitoring purposes, and easy access is essential. Through pinning, the Topology Viewer can provide functionality that is similar to NetView SmartSets. Pinning marks entities for longer durations. The pins remain visible as you change views in the Topology Viewer. Pinning can be used to easily refer to a small number of entities throughout a session and to provide a direct path to find such items (see 9.2.2, Navigation on page 436). To pin an entity in the graphical view, open the Background Context Menu (see 9.1.4, Information Overlays on page 422), and select the Pin or Unpin menu option. In Figure 9-22 you see a pinned SVC at level S0 with Health state Missing, as indicated in the upper left of the icon. Note that in an expanded group, pinning is shown on each pinned entity.
Figure 9-23 on page 431 shows the pinned entity as it is displayed in the Overview panel of the Topology Viewer. Note that in a collapsed group pinned entities are surfaced.
As you can easily see from this example, pinning in the Topology Viewer offers an easy-to-use and powerful method to select single or multiple entities you want to monitor.
Note: TotalStorage Productivity Center V3.1 supports only a single pin list and the pin list is not persistent. If the Topology Viewer is closed and reopened, the pins are not displayed. Future releases of the Topology Viewer might extend the pinning concept to multiple persistent pin lists and might introduce additional pinning functionality.
The zoning Information for this configuration will be shown in the tabular view of the Topology Viewer as shown in Figure 9-25.
In our example, each server HBA port is zoned to two ports of each of the two SVC nodes. We defined two zones (HBA1_ITSOSVC01 and HBA2_ITSOSVC01) containing five ports each. In the Label column, you can see the specific port WWPNs. Note: The TotalStorage Productivity Center V3.1 Topology Viewer provides no control functions, therefore you cannot make zone changes. The Zone tab is a pure visualization of zone membership. Details about supported zoning actions can be found in 8.9, Configuring your storage subsystems and switches on page 409. At the time of writing this book, zoning information can be gathered in-band through the Fabric Agent for the supported McData and Cisco switches, and out-of-band with the IP/API interface for Brocade switches.
The Topology Viewer refreshes its views from the database every five minutes by default. This can be changed by right-clicking anywhere in the background of the Topology Viewer's graphical view. This opens the Background Context Menu. Select Refresh Settings and update the setting based on your installation needs, as shown in Figure 9-27 on page 434. This is a persistent setting.
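The refresh behavior amounts to a simple polling loop. The sketch below is a hypothetical illustration of that pattern, with made-up function names; the real viewer reads from the central TotalStorage Productivity Center database and redraws its views.

```python
import time

def refresh_loop(read_repository, render, interval_seconds=300, cycles=None):
    """Poll the repository every interval_seconds and re-render the view.

    The 300-second default mirrors the Topology Viewer's five-minute refresh.
    cycles=None loops indefinitely; a finite value is convenient for testing.
    """
    count = 0
    while cycles is None or count < cycles:
        render(read_repository())          # fetch current data and redraw
        count += 1
        if cycles is None or count < cycles:
            time.sleep(interval_seconds)   # wait for the next refresh interval

# One refresh cycle with no waiting, just to show the flow:
frames = []
refresh_loop(lambda: {"entities": 42}, frames.append,
             interval_seconds=0, cycles=1)
print(frames)  # [{'entities': 42}]
```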
Four child nodes are added under the Topology node:
Computers
Fabrics
Storage
Other
They provide four additional launch points for the Topology Viewer, serving as a fast track directly to one of the four entity class views the Topology Viewer provides. When you launch the Topology Viewer through one of the child nodes, it opens with two views: the selected child view at L0 is displayed, and the Overview tab is opened as well, so an additional tab is displayed in the panel. As described in 9.1.5, Layout on page 423, this view remains open as long as the Topology Viewer is active. Launching the Topology Viewer by selecting IBM TotalStorage Productivity Center → Topology shows the view in Figure 9-29 on page 435.
In the top left corner of the Topology Viewer panel, the Overview tab is displayed. Additional tabs are added as you open new views. The graphical view shows the four entity classes (Computers, Fabrics, Storage, and Other). For each class, the actual number of entities and the aggregated Health status is displayed. This overlay is activated by default. The synchronized tabular view presents the same information. At the top of the tabular view you find the Action drop-down, which provides most of the functions available through the Background Context Menu (accessed through a right-click in the graphical view). The additional Locate drop-down allows you to search for items in the tabular view. Found items are highlighted. Tip: If you want to enjoy the Topology Viewer in premium cinemascope quality, select View from the TotalStorage Productivity Center UI and deselect the menu option Tree. You can undo this by selecting Tree again. In the top right corner of the Topology Viewer panel shown in Figure 9-29, the mini-map is displayed in the topology view. It indicates the visible portion of the topology and facilitates user interaction to pan the visible portion (see 9.1.5, Layout on page 423). You can easily pan to the part of the topology you want to see by one of the following methods:
Move the cursor to the mini-map, press the left mouse button, and move the cursor.
From any point in the Topology Viewer, press the middle mouse button and move the cursor.
- From any point in the Topology Viewer, press and hold the Ctrl and Alt keys and move the cursor to navigate.

If you find it too hard to pan within the mini-map, simply resize the mini-map window to make it bigger or smaller. This decreases or increases the speed of the movement of the visible part of the graphical view.

Note: TotalStorage Productivity Center V3.1 supports launching the Topology Viewer by using one of the five entry points described above. In a future release, the Topology Viewer might also be launchable from a number of tasks such as reports.
9.2.2 Navigation
Navigating between the different views in the Topology Viewer is simple. The only things you have to remember before you start are:
- There are four different entity classes (Computer, Fabric, Storage, and Other).
- There are four (semantic) zooming levels per class (Overview; L0: all groups of similar entities; L1: one group and its related entities; L2: one entity).
- There are a total of 13 views.
The table in Figure 9-30 provides a simplified version of the different views available in the Topology Viewer. For the sake of simplicity, we omitted the entity class Other. A detailed description of this class is given in 9.1.3, Entities on page 418. The table in Figure 9-31 on page 437 shows the four entity classes (the columns) and the four zooming levels. Knowing where you are is good; knowing where you can go from there is better. The possible navigation between levels is shown by the arrows in the table. For example, you can go from the Storage view S0 directly to the detail view S2.
Remember that pinned entities (see 9.1.9, Pinning on page 430) always offer the possibility to go directly to the specific entity view by simply clicking them. The following sections discuss the details of the single views of the Topology Viewer. We added a little navigator icon (see Figure 9-31 on page 437) to specific views to help you quickly remember which view is presented. Within this icon, we use the same terminology as in Figure 9-30 on page 436 (for example, F2 for entity class Fabric, view L2). Remember that this navigator icon is added for educational purposes only and is not part of the TotalStorage Productivity Center GUI.
9.2.3 Computer
This section describes the levels within the Computer view.
L0: Computers
This is the default view upon entering the graphical view by selecting Computers in the navigation menu. The view renders all known computer groups. Entities always belong to some group, so there are no ungrouped entities. The grouping criteria can be changed by the user; see 9.1.3, Entities on page 418. The example in Figure 9-32 shows the computer entities grouped by health status, which is the default. Again, note that the navigator icon is not part of the TotalStorage Productivity Center GUI; we added it to help you identify where you are within the Topology Viewer.
L1: Computers
When the user navigates to the Computers L1 view, the view shows one computer group and all known and related environmental entities for that group. Again, the group can be used to get an overview. Related entities are displayed as collapsed groups by default, linked to the computer group (see Figure 9-33). Physical connections are shown as a Fabric group, linked to the lower right. Logical connections to volumes are shown as a Device group, linked to the group towards the upper right. The Device group shows all related volumes. The Fabric group shows all connected switches and fabrics. More details can be shown by expanding a group (see Figure 9-34 on page 439).
By selecting a specific computer, the connections tell you immediately which volumes on which disk subsystem are owned by this computer. More details about the volumes can be found in the tabular view by clicking the Volume tab. For more information about the volumes attributes and where the volumes are in the specific subsystem, go to view S2 (L2: Storage Subsystem Disk on page 447). By selecting a single computer entity, the more detailed L2: Computer view is started.
L2: Computer
When the user navigates to the Computer L2 view, the view shows an individual computer with its HBAs and ports, and all known and related environmental entities for that computer. The individual computer is the focus of this view and is displayed vertically in the middle of the view, left-justified (see Figure 9-35 on page 440). As in the L1: Computers view, two groups are linked to the computer: a Fabric group and a Device group. The computer's HBAs and ports are shown when you expand the Fabric group. LUNs and volumes for that computer are contained in the Device group.
Expanding the Device group reveals the relationship between this computer's LUNs and the volumes on the storage system providing them. The tabular view shows the size of the single volumes (see Figure 9-36 on page 441).
The Fabric group contains a group with the computer's HBAs (which can be expanded to explore individual ports for dual-port HBAs) and a group of connected switches, if applicable. Figure 9-37 on page 442 shows an example of an expanded Fabric group.
9.2.4 Fabric
The goal of this graphical view navigation class is to provide the user with fabric and switch information. A fabric is a network of entities that are attached through one or more switches. In addition, TotalStorage Productivity Center is designed to support the display of Virtual SANs (VSANs) and Logical SANs (LSANs). Although SANs, VSANs, and LSANs are implemented and managed differently depending on your environment, they are conceptualized in the same manner through the Topology Viewer. Because of these conceptual similarities, they are grouped under the larger category of Fabrics in the Topology Viewer for ease of categorization and visualization. In general, the semantic zoom levels within this class allow you to view fabric components, view relationships and connections between switches and the entities of the fabrics, and view details of switches.
L0: Fabric
This is the default view upon entering the graphical view by selecting Fabric in the navigation menu. The view shows all known Fabrics and Virtual Fabrics as groups (Fabrics being physically networked entities, versus Virtual Fabrics being defined by fabric or zone relationships). Each group represents one Fabric. Depending on what is discovered in the environment, there can be a mix of Fabrics displayed at this level.
L1: Fabric
At the L1: Fabric level, both Fabrics and Virtual Fabrics are represented in the same manner. Computer groups and Other groups are shown to the left of the switches, vertically stacked and left-aligned; by default, computer groups are shown collapsed. Storage groups are shown to the right of the switches, vertically stacked and left-aligned; by default, storage groups are shown collapsed. Groups of tape libraries are stacked below subsystem groups. Groups of Other type entities are stacked below computer groups. Connections between the single groups or, if expanded, between the entities can be displayed. Figure 9-39 on page 444 shows the L1: Fabric view with all groups expanded. The Other group shows an entity with a user-defined name of Helium. At this point, no type has been assigned.
As this example shows, additional information is available in the tabular view of the panel.
Expanding the Device group allows you to explore disk/pool/volume/computer relationships and aids in understanding LUN masking and mapping. The following example covers this. The expanded Device group is shown in two figures. Figure 9-44 on page 448 shows the disk, pool, and volume relationships; in SVC terminology, the managed disk to managed disk group to VDisk relationship.
Figure 9-45 shows the LUN masking and mapping aspects, the volume to LUN relationship; in SVC terminology, how the VDisks are mapped to hosts.
The Fabric group covers the fabric connectivity of the storage entity. Figure 9-46 shows the eight connections of a 2-node SVC cluster to our lab switch.
9.3 Printing
The TotalStorage Productivity Center Topology Viewer provides print functionality. Right-clicking the graphical view background or the tabular view opens the Topology Viewer's action menu. Selecting Print or Print Preview generates a one-page printout or preview of the content currently displayed in the graphical view part of the Topology Viewer. The content is scaled to fit one page if necessary. Printing the content of the tabular view is not supported with the current release of the Topology Viewer. Future versions of the Topology Viewer might provide additional print options, such as printing the contents of individual tabular view tabs and printing on multiple pages.
9.4 Summary
The Topology Viewer provides users with options to view their topology based on their work and in context with the fabric environment. Four primary entry points are provided: an overview that displays the whole environment in summary form, and three views, each displaying a Topology L0 view focused on the SAN as a whole, on computers within the SAN, or on storage subsystems within the SAN.
Unlike its counterparts, the TotalStorage Productivity Center V3.1 Topology Viewer employs a design approach that supports the concept of progressive information disclosure, primarily to manage the scale and complexity common in storage environments today. This concept is supported by allowing users to use semantic zoom levels and overlays in support of key work path scenarios. The overall goal of the Topology Viewer is to provide a central location to view a storage environment, quickly monitor and troubleshoot problems, and gain access to additional tasks and functions within the TotalStorage Productivity Center UI without losing your orientation to the environment. The Topology Viewer is part of TotalStorage Productivity Center V3.1 and relies on the TotalStorage Productivity Center database as a central data repository. To be able to display information about the entities, the information must be collected by the specific methods TotalStorage Productivity Center provides: discovery, probing, performance data collection, and alerting. To present an up-to-date view of the current environment, the Topology Viewer refreshes its views at intervals you can define.
Chapter 10.
2. Probe the new computers by selecting them from the default probe under IBM TotalStorage Productivity Center → Monitoring → Probes → tpcadmin Probe Computers, as shown in Figure 10-2.
3. After the probe is submitted, check the results. Select the corresponding Job entry and drill down into the log details (see Figure 10-3).
4. The following screen captures display the status of the two servers (AZOV and Colorado) before the disk volumes are added. Open the Topology Viewer; the computers in our lab are shown as in Figure 10-4.
The two new servers are displayed in the ITSO group, connected to the switch IBM_2005_B32. On the right-hand side, the two subsystems, including their connection to the SAN, are displayed.
5. Displaying server AZOV in the Topology Viewer (see Figure 10-5 on page 457) shows no SAN devices, zero LUNs, and zero volumes. On the SAN or fabric side, we can see the two single-port HBAs (fcs0, fcs1) with their specific WWPNs and the connections to switch IBM_2005_B32.
6. Using a command line on server AZOV, we see only the three internal disks when we issue the command shown in Figure 10-6.
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
#
Figure 10-6 Output of lsdev command to display AZOV's internal disks
7. The output of the lsdev command to show the adapters is displayed in Figure 10-7.
# lsdev -Cc adapter -H
name status    location description
ent0 Available 1L-08    10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available 14-08    10/100 Mbps Ethernet PCI Adapter II (1410ff01)
fcs0 Available 1Z-08    FC Adapter
fcs1 Available 1D-08    FC Adapter
8. The output of the lscfg command to display the WWPN of adapter fcs1 is shown in Figure 10-8 on page 458. This is necessary because we must use it later during the mapping of the host to subsystem ports. Make a note of which WWPN is to be used.
# lscfg -vl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
....
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A753
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I5/Q1
Figure 10-8 Output of lscfg command to display AZOV's WWPNs
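If you script this step, the WWPN can be pulled out of the lscfg output instead of copying it by hand. The helper below is our own illustrative sketch (extract_wwpn is not a TPC or AIX command); it assumes the Device Specific.(Z8) line carries the WWPN, as in the output above.

```shell
# Hypothetical helper (not part of TPC or AIX): read `lscfg -vl fcsN`
# output on stdin and print the Device Specific.(Z8) value, which is
# the WWPN we noted from Figure 10-8.
extract_wwpn() {
  # Keep only the line containing "(Z8)", then strip everything up to
  # and including the dot padding.
  grep '(Z8)' | sed 's/.*(Z8)\.*//'
}
```

Under that assumption, `lscfg -vl fcs1 | extract_wwpn` on AZOV would print 20000000C932A753.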
9. The output of the pcmpath command in Figure 10-9 shows no path from AZOV to any devices in the SAN.
# pcmpath query device
#
Figure 10-9 Output of pcmpath command to display AZOV's disks seen by the SDD
10. Figure 10-10 and Figure 10-11 on page 459 show the same information for the server Colorado. Colorado also has no external disks assigned.
Figure 10-12 Select the storage subsystem to provide you the storage
2. In our case, we select the SVC-2145 subsystem, then click Create Virtual Disk. The Create Virtual Disk Wizard in Figure 10-13 on page 460 opens.
3. Enter the number of VDisks to be created, in our case two, and the name prefix for the VDisks. The system adds the suffix of the name.
4. After you have entered all parameters, click Next. The panel of the wizard where you assign the virtual disk to host ports is displayed, as in Figure 10-14 on page 461.
5. After you have selected the host port for Colorado from the Available ports list and moved it into the right column named Assigned ports, click Next. The third screen of the wizard, as shown in Figure 10-15 on page 462, opens.
6. Enter a name for the new zone, then select the ports in the storage subsystem which will be contained in the new zone you will create. Click Next after entering the zone name.
Important: If you encounter errors while creating the zone definitions, one reason might be that you have chosen a name that already exists somewhere in the alias list of the switch configuration. All names must be unique.
7. The Summary panel shown in Figure 10-16 on page 463 lists the settings you entered in the previous panels. IBM TotalStorage Productivity Center creates two VDisks and performs the necessary steps for the zoning to have these vdisks assigned to our server Colorado. Click Finish to continue.
8. The Create Virtual Disk wizard starts a virtual-disk-creation job, as indicated in Figure 10-17. The job log is shown in Figure 10-18 on page 464 and Figure 10-19 on page 464.
Figure 10-18 TPC log output after LUN creation is done on Colorado - Part 1 of 2
Figure 10-19 TPC log output after LUN to hostport assignment is done on Colorado - Part 2 of 2
9. We zoned the new disks manually at the switch in this case. Through the Disk Management view on Colorado in Figure 10-20 you can see that there are new disks.
Here you have to enter the parameters for the server AZOV. We chose two VDisks, each 4 GB in size. For the name, we specified only the prefix part. Click Next to proceed to the panel where you select the host ports to be used, shown in Figure 10-22 on page 466.
2. Note that AZOV has two host ports. After you have selected the host ports, click Next.
3. Select the appropriate Subsystem ports of the SVC as shown in Figure 10-23 on page 467. Click Next to proceed.
4. The next panel is the Summary panel displaying the information that has been entered (see Figure 10-24 on page 468). Review all the information you entered so far. When everything is correct, click Finish.
5. Note that TPC again starts two jobs. You can navigate to the Jobs node under the Disk Manager node in the explorer view (see Figure 10-25 and Figure 10-26) to locate and open the job logs.
Figure 10-27 and Figure 10-28 show the output from the jobs created by the Create Virtual Disk Wizard.
C:\Program Files\IBM\TPC\device\log>type msg.control.1046.1339.log
3/1/06 5:42:10 PM BTACS0000I Starting Control Process: createSVCDisks, Device Server RUN ID=1046, Job ID=1339.
3/1/06 5:42:11 PM HWN021675I Started creation of volume with size 524288000 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM
3/1/06 5:42:14 PM HWN021676I Volume creation completed successfully. New volume TPC_CREATED created with size 524288000 in pool DS4500_R10 on subsystem SVC-2145-ITSOSVC01-IBM.
3/1/06 5:42:14 PM BTACS0001I Finished Control Process: Device Server RUN ID=1046, Job ID=1339, Status=1, Return Code=0.
Figure 10-27 TPC's Job Control Output - Create the VDisks
3/1/06 5:42:17 PM BTACS0000I Starting Control Process: assignStorageVolumesToWWPNs, Device Server RUN ID=1071, Job ID=1364.
3/1/06 5:42:18 PM HWN021678I Started assignment of volume TPC_CREATED on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B1A5996.
3/1/06 5:42:28 PM HWN021679I Finished assignment of volume TPC_CREATED on subsystem SVC-2145-ITSOSVC01-IBM to initiator port 210000E08B1A5996.
3/1/06 5:42:28 PM BTACS0001I Finished Control Process: Device Server RUN ID=1071, Job ID=1364, Status=1, Return Code=0.
Figure 10-28 TPC's Job Control Output - Assign the host to the subsystem ports
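Because both job logs end with a BTACS0001I (Finished Control Process) line that carries the return code, a wrapper script can check a saved log for success. This is an illustrative sketch of ours (job_return_code is a hypothetical name), assuming the log format shown in the two figures above.

```shell
# Hypothetical helper: read a TPC job log on stdin and print the
# "Return Code=" value from the last BTACS0001I line. A return code
# of 0 indicates the job step finished successfully.
job_return_code() {
  grep 'BTACS0001I' | tail -1 | sed 's/.*Return Code=//; s/\.$//'
}
```

For example, `type msg.control.1046.1339.log | job_return_code` (or `job_return_code < logfile` on UNIX) would print 0 for the logs shown here.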
In our scenarios, we have divided the steps of LUN creation, assignment, and zoning. In our lab, we did the zoning manually. In 10.1.5, Manual zone configuration with TPC on page 473, we describe how this zoning can be done manually with TPC.
After the VDisk assignment, the lsdev output lists the three internal 16 Bit LVD SCSI Disk Drives plus two new SAN Volume Controller MPIO Devices.
A subsequent query using the command pcmpath query device produces the output shown in Figure 10-30 on page 470.
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 60050768018300C4180000000000001A
==========================================================================
Path#      Adapter/Path Name      State   Mode     Select  Errors
  0             fscsi0/path0      CLOSE  NORMAL         0       0
  1             fscsi0/path1      CLOSE  NORMAL         0       0
  2             fscsi0/path2      CLOSE  NORMAL         0       0
  3             fscsi0/path3      CLOSE  NORMAL         0       0
  4             fscsi1/path4      CLOSE  NORMAL         0       0
  5             fscsi1/path5      CLOSE  NORMAL         0       0
  6             fscsi1/path6      CLOSE  NORMAL         0       0
  7             fscsi1/path7      CLOSE  NORMAL         0       0

DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 60050768018300C4180000000000001B
==========================================================================
Path#      Adapter/Path Name      State   Mode     Select  Errors
  0             fscsi0/path0      CLOSE  NORMAL         0       0
  1             fscsi0/path1      CLOSE  NORMAL         0       0
  2             fscsi0/path2      CLOSE  NORMAL         0       0
  3             fscsi0/path3      CLOSE  NORMAL         0       0
  4             fscsi1/path4      CLOSE  NORMAL         0       0
  5             fscsi1/path5      CLOSE  NORMAL         0       0
  6             fscsi1/path6      CLOSE  NORMAL         0       0
  7             fscsi1/path7      CLOSE  NORMAL         0       0
Figure 10-30 Output of pcmpath query device
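The eight paths per VDisk in Figure 10-30 can also be tallied programmatically. The sketch below is ours, not an SDD tool, and it assumes the pcmpath query device layout shown above (a DEV#: header line followed by one fscsi path row per path).

```shell
# Illustrative sketch: summarize `pcmpath query device` output as
# "<device> <path count>" lines, assuming the layout of Figure 10-30.
count_paths() {
  awk '
    /^DEV#:/ {
      if (dev != "") print dev, paths   # flush the previous device
      dev = $5                          # DEVICE NAME: value is field 5
      paths = 0
    }
    /^ *[0-9]+ +fscsi/ { paths++ }      # one table row per path
    END { if (dev != "") print dev, paths }
  '
}
```

Run against the output above (`pcmpath query device | count_paths`), it would report 8 paths for each of hdisk3 and hdisk4.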
These devices showed up through Smitty as well, and we were able to create a volume group and a logical volume, and finally we put a filesystem on it, as shown in Figure 10-31.
Figure 10-34 on page 473 shows the TotalStorage Productivity Center GUI navigation tree under Data Manager → Reporting, with the Colorado and AZOV servers and their disks. This information corresponds to the graphical display of the Topology Viewer in Figure 10-32 on page 471 and Figure 10-33.
Figure 10-34 Both computers and their new assets in the Assets View of the Explorer
The last screen capture in this sequence, Figure 10-35, shows how the VDisks are presented within the SAN Volume Controller.
2. The panel in Figure 10-37 shows the tab with the name of the fabric, followed by the word Zoning. On this panel, you see the existing zones. Select the appropriate Zone, then click Change.
3. The window shown in Figure 10-38 on page 475 is displayed. Here you see a list of the zones contained in the active zoneset. Click Add to start the definition of the new zone.
4. Enter a name for the new zone and provide a description of the zone in the panel shown in Figure 10-39 on page 476. Click Next to proceed.
5. On the screen in Figure 10-40, you must expand the Aliases section on the left-hand side to see the elements in your SAN.
6. They are all displayed with their aliases only, as shown in Figure 10-41. Make the selections necessary to have your host adapters and the subsystem adapters in your zone. The shaded aliases are the ones we selected. Click Next after you have made your selections.
7. In the next two screens, TPC asks to which zone set the newly defined zone should belong. We have only one in our environment, as shown in Figure 10-42 on page 478.
8. Select the zone set as shown in Figure 10-43. After you have entered all the information for the new zone, it can be written into the existing zone set. Click Finish to proceed.
The next screen that is displayed (see Figure 10-44) indicates that the zone set is being saved.
9. Now that the new zone is added, you have the following options (see Figure 10-45 on page 480):
- Press Update Only to complete the updates on the zone set only. There will be differences between the active zone set and the one with which you are working.
- Press Update and Activate to have the changes written and activated in one step.
10. Either action results in the panel shown in Figure 10-46. Open both sides of the view by clicking the plus (+) sign.
Figure 10-46 The displayed panel after Activation or Update of a Zone Set
The view expands and you can see the content of the active versus the non-active Zone Sets, as shown in Figure 10-47 on page 481.
In this case you can see no differences. The last action performed seems to have been an Update and Activate.
2. The next important checkpoint is to complete a probe on all your agents. Figure 10-49 shows that we have a probe running. When the probe completes successfully, the icon turns green.
The Create Scan window is displayed (see Figure 10-52 on page 485).
2. Select the filesystem that you want to be scanned. In our case, we select Colorado - C: drive only. Expand the tree in the right-hand pane to select the filesystem, as shown in Figure 10-53.
Reminder: Each report you run in TotalStorage Productivity Center can report only the data, in both amount and quality, that preceding scans have collected in the database. Keeping this in mind, you have to check the settings of your scan job and its profiles. To do this, select the appropriate profile to complete the definitions of your SCAN by clicking the Profiles tab and selecting only the profile named TPCUser.Summary By File Type (Figure 10-54). The name of the profile indicates its planned use.
In our example, we modified this profile earlier, because otherwise it would reduce the number of displayed and reported file types. Keep in mind that it might be necessary to increase this number again.
3. Enter a description and save the SCAN definition. It might be helpful to use the same text for the description as for the name of the SCAN.
4. Note that TPC auto-submits your SCAN. After a few minutes, you can check the results; to do this, you might have to refresh the job list. Verify that the SCAN job was successful, as shown in Figure 10-55 on page 487.
Figure 10-56 Select the report as base for your own report
In the screen in Figure 10-56, adjust the maximum number of returned rows per file type, because for this report it cannot be adjusted later. For better readability, we have split this picture. After editing this value, save your report definition and give it a meaningful name.
2. In addition, you must define the selection criteria to include only the filesystems or computers you want in this report. To do this, click the Selection button in the upper right-hand corner. In the split screen in Figure 10-57 on page 489, you see the file type selection dialog box in the upper left corner. Do not change any entries here. In the lower right corner, you see the filesystem selection dialog box. Change the entries to reflect the selection of C: on Colorado.
Note: The file types displayed in this list are the ones detected during each scan. That means that if there are any new file types in your filesystem, this list grows automatically.
3. When you save your report, give it a meaningful name, as shown in Figure 10-58.
4. Run your customized report to see what kind and amount of data is displayed. To do this, navigate to your reports (tpcadmin's Reports), then click Generate Report, as shown in Figure 10-59 on page 490. The result is presented immediately, because the report is generated from the information in the database. If you are going to do further tests with this report, always rescan your filesystem first.
5. Check the results of your report. In our case, we want to make sure that certain files (for example, *.mp3 and *.mpg) are detected. Scroll down the list as shown in Figure 10-60 to see them.
Here you can see the entries for the mp3 and mpg files. If you cannot see the files you are looking for, the number you set in the Number of Rows field in your report might be too small. You can correct this by repeating the steps described in 10.2.1, Prerequisite steps on page 481 and increasing the number of rows.
2. The Create Constraint window opens (Figure 10-62). Define the scope of the constraint definition by selecting the filesystem, in our case C: on Colorado.
Important: In general, all of the possible selections which can be made from within the Create Constraint dialog box are combined with a logical OR operation. That means you have to pay attention when you define the scope of your constraint.
3. Select the File Types tab to choose the file types by which the constraint should be defined. In our case, we chose jpg and mp3 files in Figure 10-64.
You can easily add your own patterns and have them added to the right-hand selection box as shown in Figure 10-65.
4. You also have to define the users for whom the constraint definition should be valid. Select only the TPCADMIN user. The screen in Figure 10-66 summarizes your input, displaying your choices of file types and users within one single filter statement named File Filter Text.
5. In the last screen (see Figure 10-67 on page 494), you can add two definitions of how and where the alert should be displayed. The first, the Triggering Condition, means that the alert is raised only if the space consumed by all filtered files exceeds 1 megabyte. The second, the Triggered Actions, determines what should happen in case of an alert condition. We select to drop an entry of type Warning in the Windows event log.
6. Give a description, save, and name the constraint definition.
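The triggering condition can be mimicked outside TPC as a sanity check. The following sketch is our own illustration (check_constraint is a hypothetical name, and TPC evaluates constraints from its database, not like this): it sums the sizes of the filtered file types in one directory and compares the total to the 1 MB threshold.

```shell
# Illustrative only: emulate the constraint's triggering condition by
# summing *.jpg and *.mp3 sizes under one directory and comparing the
# total against a 1 MB threshold.
check_constraint() {
  dir=$1
  limit=$((1024 * 1024))    # triggering condition: 1 megabyte
  total=0
  for f in "$dir"/*.jpg "$dir"/*.mp3; do
    [ -f "$f" ] || continue                  # skip unmatched globs
    total=$((total + $(wc -c < "$f")))       # add file size in bytes
  done
  if [ "$total" -gt "$limit" ]; then
    echo "VIOLATION: $total bytes of filtered files"
  else
    echo "OK: $total bytes"
  fi
}
```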
2. When you click Filesystem, which is highlighted in red to indicate that there are some alerts to display, you see on the right-hand side a screen similar to Figure 10-69.
Figure 10-69 Filesystem Alerts because of a constraint violation are being displayed.
3. You can drill further into the information about these alerts by clicking the magnifying glass icon. Figure 10-70 on page 496 is just one example of a Detail for Alert.
4. Now that you know that there are constraint violations, you might want to know exactly which files triggered them. If you are interested in the exact name, size, and location of these files, there is only one way: look into the constraint violations report. Navigate to the following location in the explorer view: Data Manager → Reporting → Usage Violations → Constraint Violations → By Filesystem. On the right, you see the two buttons, Selection and Filter. Use the Selection button to display a list of all the computers, and select only the entries for Colorado and its filesystems. The result should look similar to Figure 10-71 on page 497.
5. In Figure 10-72, you can select one of the displayed entries to get all the detailed information about the files which caused the constraint violations.
Figure 10-72 Detailed information about the files causing the constraint violation
If you use a script to perform an action on the identified files, you name the script in the dialog box shown in Figure 10-74 on page 499. You must create and store a script with this name in the /scripts directory of the TPC server.
Figure 10-74 Two possible options: TSM Archive then delete or a script
The scripts are either stored on the server, transferred to the clients at run time and deleted afterwards, or stored locally on the agent computer. The locations are shown in Table 10-1.
Table 10-1 Script location Computer Location of the Scripts (can vary if you used other installation directories)
2. We can proceed with the definition of the filesystem extension setup. The next three panels, shown in Figure 10-26 on page 468 through Figure 10-28 on page 469, show the entries we made. We chose an extension of 5 percent, to happen when the filesystem reaches a usage level of 75 percent, as we specified 25 percent free space (see Figure 10-76 on page 501).
3. We selected to enforce this policy after each scan of the filesystem (see Figure 10-77 on page 502). Doing it this way is only one possibility. However, keep in mind that the policy and its execution are bound to the data found in the database, which, in turn, is refreshed and updated only after a new scan.
4. When you click the Alert tab, you can see the possible actions that can be triggered when a filesystem extension has taken place, as shown in Figure 10-78 on page 503. We did not make any selections. Finally, we saved our definitions. Note that the definition appears in the explorer tree view under the node on which we are working.
Figure 10-79 A scan of AZOV's filesystem before the filesystem was filled
2. We checked the situation from the AIX perspective and queried the volume group (see Figure 10-80). Note that there are 201 used PPs.
# lsvg tpc_data
VOLUME GROUP:       tpc_data          VG IDENTIFIER:  0009cd9a00004c0000000109b2c77f62
VG STATE:           active            PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write        TOTAL PPs:      270 (34560 megabytes)
MAX LVs:            256               FREE PPs:       69 (8832 megabytes)
LVs:                2                 USED PPs:       201 (25728 megabytes)
OPEN LVs:           2                 QUORUM:         2
TOTAL PVs:          2                 VG DESCRIPTORS: 3
STALE PVs:          0                 STALE PPs:      0
ACTIVE PVs:         2                 AUTO ON:        yes
MAX PPs per VG:     32512             MAX PVs:        32
MAX PPs per PV:     1016              AUTO SYNC:      no
LTG size (Dynamic): 256 kilobyte(s)   BB POLICY:      relocatable
HOT SPARE:          no
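The 75 percent trigger can be checked by hand from the PP counts in the lsvg output. The one-liner below is our own arithmetic sketch, not a TPC function:

```shell
# Our sketch: integer percentage of used PPs, computed from lsvg's
# USED PPs and TOTAL PPs values.
pp_used_pct() {
  echo $(( $1 * 100 / $2 ))
}
# With the values above: pp_used_pct 201 270 gives 74, still just
# below the 75 percent extension trigger.
```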
3. We started to fill up the filesystem by creating large files until usage exceeded the defined threshold (> 75%). Figure 10-81 on page 505 shows the filesystem contents afterwards.
-rw-r--r--   1 root sys    1073741312 Mar 01 14:28 dummyfile
-rw-r--r--   1 root sys    1073741312 Mar 01 14:29 dummyfile1
-rw-r--r--   1 root sys    1073741312 Mar 01 14:29 dummyfile2
-rw-r--r--   1 root sys    1073741312 Mar 01 14:30 dummyfile3
-rw-r--r--   1 root sys    1073741312 Mar 01 14:31 dummyfile4
-rw-r--r--   1 root sys    1073741312 Mar 01 14:32 dummyfile5
-rw-r--r--   1 root sys    1073741312 Mar 01 14:32 dummyfile6
-rw-r--r--   1 root sys    1073741312 Mar 01 14:33 dummyfile7
-rw-r--r--   1 root sys    1073741312 Mar 01 14:38 dummyfile8
-rw-r--r--   1 root sys    1073741312 Mar 01 14:38 dummyfile9
drwxrwx---   2 root system        512 Feb 28 14:36 lost+found
# df -k
Filesystem       1024-blocks    Free %Used Iused
/dev/tpc_data_lv    13107200 2209588   84%
Figure 10-81 Filesystem filled - threshold reached.
4. To demonstrate the function of TotalStorage Productivity Center for Data in principle, we ran a report against our filesystem (see Figure 10-82).
5. Rerun the defined scan to update the information in the database, and then rerun the report. The scan you just performed leads to two results:
- The data in the database now shows the near-real-time situation in the filesystem.
- The scan triggered the filesystem extension. The easiest way to verify that is by looking into the AIX lsvg information. Here you can see that we now use 211 PPs (see Figure 10-83 on page 506).
# lsvg -L tpc_data
VOLUME GROUP:       tpc_data                 VG IDENTIFIER:  0009cd9a00004c0000000109b2c77f62
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      270 (34560 megabytes)
MAX LVs:            256                      FREE PPs:       59 (7552 megabytes)
LVs:                2                        USED PPs:       211 (27008 megabytes)
OPEN LVs:           2                        QUORUM:         2
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
# df -k
/dev/tpc_data_lv       15073280   1799692   73%      21     1% /tpc_data
Figure 10-83 The filesystem is now extended
2. Click the right mouse button and select Create Alert; this results in the screen shown in Figure 10-85 on page 507.
3. Select the desired filesystem by first going to the Filesystems tab.
4. After you have selected the filesystem, select the Alert tab, where you can define the conditions for this alert, as shown in Figure 10-86. Ensure that the Enabled check box is marked in the upper right corner of the Alert tab screen.
Figure 10-87 The alert shows up and you can see detailed information about the cause.
Chapter 11.
If all table data is in a single table space, the table space can be dropped and redefined with less overhead than dropping and redefining each table. In general, a well-tuned set of DMS table spaces will outperform SMS table spaces. Small personal databases are easiest to manage with SMS table spaces. For large, growing databases, on the other hand, you will probably want to use SMS table spaces only for the temporary table spaces, and separate DMS table spaces, with multiple containers, for each table. In addition, you will probably want to store long field data and indexes in their own table spaces.
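As an illustration of the distinction, the DB2 DDL below sketches one SMS temporary table space and one DMS table space with multiple file containers. All names, paths, and sizes here are invented for this example; they are not taken from the product installation.

```sql
-- SMS table space: containers are directories, space is allocated on demand.
-- Suitable for temporary table spaces and small databases.
CREATE TEMPORARY TABLESPACE tpctemp
  MANAGED BY SYSTEM
  USING ('/db2/tpcdb/temp01');

-- DMS table space: pre-allocated file containers spread over two paths,
-- generally faster for large, growing tables (sizes are in pages).
CREATE TABLESPACE tpcdata
  MANAGED BY DATABASE
  USING (FILE '/db2/tpcdb/data01' 25600,
         FILE '/db2/tpcdb/data02' 25600);
```

Because the DMS containers are pre-allocated, extending them later is an administrative action (ALTER TABLESPACE), whereas the SMS directory grows and shrinks with the data.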
create view TPC.T_ITSO_CIMOM2SS (CIMOM_URL, SUBSYSTEM_ID) as
  (select TPC.T_RES_REGISTERED_CIMOM.SERVICE_URL,
          TPC.T_RES_STORAGE_SUBSYSTEM.NAME
     from TPC.T_RES_REGISTERED_CIMOM, TPC.T_RES_STORAGE_SUBSYSTEM
    where REG_CIMOM_ID =
          (select CIMOM_ID from TPC.T_RES_CIMOM2NAMESPACE
            where CIM_NAMESPACE_ID =
                  (select distinct CIM_NAMESPACE_ID
                     from TPC.T_RES_CIMKEY_SUBSYSTEM
                    where SUBSYSTEM_ID = TPC.T_RES_STORAGE_SUBSYSTEM.SUBSYSTEM_ID)))
Figure 11-1 DB2 view for CIMOM discovery of subsystems
Common Agent Registration Password or Resource Manager Registration User ID / Password must contain characters from the following categories:
   Uppercase characters: A through Z
   Lowercase characters: a through z
   Numeric characters: 0 through 9
   Nonalphanumeric characters: ` ~ @ # % ^ & * ( ) - _ = + [ ] { } \ | ; : ' " , . < > / ?
WebSphere Application Server Administrator User ID / Password or Host Authentication Password must contain characters from the following categories:
   Uppercase characters: A through Z
   Lowercase characters: a through z
   Numeric characters: 0 through 9
   Nonalphanumeric characters: - _ .
Common Agent Windows Service User ID or NAS Filer User ID must contain characters from the following categories:
   Uppercase characters: A through Z
   Lowercase characters: a through z
   Numeric characters: 0 through 9
   Nonalphanumeric characters: ` ~ # % ^ & ( ) - _ { } ' .
Common Agent Windows Service Password or NAS Filer Password must contain characters from the following categories:
   Uppercase characters: A through Z
   Lowercase characters: a through z
   Numeric characters: 0 through 9
   Nonalphanumeric characters: ` ~ @ # % ^ & * ( ) - _ = + [ ] { } \ | ; : ' " , . < > / ?
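The category rules above can also be checked programmatically before you start an installation. The following sketch is our own helper, not part of the product; it validates a candidate Common Agent registration password against the allowed character set listed for that credential:

```python
# Hypothetical helper (not shipped with TotalStorage Productivity Center):
# checks that a password uses only characters allowed for the
# Common Agent Registration Password / Resource Manager credentials.
import string

ALLOWED_SPECIALS = set("`~@#%^&*()-_=+[]{}\\|;:'\",.<>/?")
ALLOWED = (set(string.ascii_uppercase) | set(string.ascii_lowercase)
           | set(string.digits) | ALLOWED_SPECIALS)

def is_valid_registration_password(password):
    """Return True if the password is non-empty and every character
    falls into one of the allowed categories."""
    return bool(password) and all(ch in ALLOWED for ch in password)
```

Note that characters outside the listed categories, such as spaces, are rejected.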
3. The sc.exe and InstallUtil.exe utilities are required; they are invoked from within the batch script, so you need to invoke the batch file to perform the cleanup.
4. Place all three files in one directory and invoke CleanUpTPC.bat from within the same directory.
Router configuration
Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address 239.255.255.253 and port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation.
Environment configuration
It can be advantageous to configure SLP DAs in the following environments:
   Where other non-TotalStorage Productivity Center SLP UAs frequently perform discovery on the available services. A DA ensures that the existing SAs are not overwhelmed by too many service requests.
   Where there are many SLP SAs. A DA helps decrease the network traffic generated by the multitude of service replies, and it ensures that all registered services can be discovered by a given UA.
We particularly recommend configuring an SLP DA when more than 60 SAs need to respond to any given multicast service request.
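On a host that is to take the DA role, an OpenSLP-style slp.conf can enable it with a couple of directives. This is a sketch assuming an OpenSLP implementation; check your platform's SLP documentation for the exact file location and supported directives:

```
# slp.conf fragment (OpenSLP-style, example only)
# Make this host act as an SLP Directory Agent
net.slp.isDA = true
# Optionally restrict the scopes this DA serves
net.slp.useScopes = DEFAULT
```

After changing the file, restart the SLP daemon so that SAs and UAs on the subnet discover the new DA.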
Here, myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
Resource Manager
You can find the locations of the configured user ID for the Resource Manager in the files listed in Table 11-1.
Table 11-1 Resource Manager user ID and password Component/server File
Tivoli Agent Manager Productivity Center for Data Productivity Center for Fabric
The password of the Resource Manager is stored in the same files, but it is in a readable format only on the Tivoli Agent Manager. The Resource Manager certificate subdirectory contains a file called pwd that holds the agent registration password, which is required to open the certificate files.
Common Agent
The Common Agent does not have a user ID. Instead, it has a context name, which the agent uses for communication, as shown in Table 11-2.
Table 11-2 Common Agent user ID and password Component/Server File
AgentManager.properties stores the context name and the password in clear text, so it is easy to find the values if you did not note them during the installation. On the Common Agent, the password is encrypted and stored in the pwd file in the same directory as the certificate files, which are located in ...\Tivoli\ep\cert. If you go through the procedure of replacing the current certificates with new ones, do not forget to delete the pwd file, because it no longer matches the certificate file.
Ikeyman.exe
On all systems that have one of the Tivoli Common Agent Services components installed, you will find a tool called ikeyman.exe. You can use this tool to open the certificate file if you know the agent registration password. This is a quick way to verify that you still know the password that was used to lock a certificate file.
In the table called IP_ADDRESS (Figure 11-3), you find the IP addresses of all registered Common Agents and Resource Managers.
If the Common Agent is running, it listens for requests on that port and opens a connection; you simply see an empty screen. If the Common Agent is not running, you see the message: Connecting To 9.1.38.104...Could not open a connection to host on port 9510 : Connect failed.
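The same check can be scripted instead of using telnet interactively. The sketch below is a generic TCP reachability probe of our own, not a tool shipped with the product; the host address is an example:

```python
# Generic TCP port probe (example helper, not part of the product).
import socket

def agent_port_open(host, port=9510, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds,
    which indicates that something (e.g. the Common Agent) is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: agent_port_open("9.1.38.104") returns True only while the agent runs.
```

A refused or timed-out connection maps to the "Connect failed" case of the telnet test.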
Appendix A.
Worksheets
This appendix contains worksheet examples for you to use during the planning and installation of TotalStorage Productivity Center. Decide whether you need them; if, for example, you have already collected all or most of the information elsewhere, you may not need these worksheets. If the tables are too small for your handwriting, or you want to store the information in an electronic format, simply use a word processor or spreadsheet application. Alternatively, you can use our examples as a guide to create your own installation worksheets. This appendix contains the following worksheets:
   User IDs and passwords
   Storage device information
   IBM TotalStorage Enterprise Storage Server (ESS)
   IBM Fibre Array Storage Technology (FAStT)
   IBM SAN Volume Controller
Server information
Table A-1 contains detailed information about the servers that comprise the TotalStorage Productivity Center environment.
Table A-1 Productivity Center server Server Configuration information
In Table A-2, simply mark whether a manager or a component will be installed on this machine.
Table A-2 Managers and components installed Manager/component Installed (y/n)?
Productivity Center for Disk Productivity Center for Fabric Productivity Center for Data Tivoli Agent Manager DB2
Enter the user IDs and passwords that you used during the installation in Table A-4. Depending on the selected managers and components, some of the lines are not used for this machine.
Table A-4 User IDs used on this machine Element Default/ recommended user ID Enter user ID Enter password
DB2 User DB2 Instance Owner Resource Manager Common Agent Common Agent TotalStorage Productivity Center universal user IBM WebSphere Host Authentication
a. This account can have any name you choose. b. This account name cannot be changed during the installation.
IBM DS4000
Use Table A-6 to collect information about your DS4000 devices. Check the device support matrix for the correct CIM Agent level before you install.
Table A-6 FAStT devices Name, location, organization Firmware level IP address CIM Agent host name and protocol
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 526. Note that some of the documents referenced here may be available in softcopy only.
   TCP/IP Tutorial and Technical Overview, GG24-3376
   Exploring Storage Management Efficiencies and Provisioning: Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity Center with Advanced Provisioning, SG24-6373
   IBM TotalStorage SAN Volume Controller, SG24-6423
Other publications
These publications are also relevant as further information sources:
   IBM TotalStorage Productivity Center User's Guide, GC32-1775
   IBM TotalStorage Productivity Center Installation and Configuration Guide, GC32-1774
   IBM TotalStorage Enterprise Storage Server Command-Line Interface User's Guide, SC26-7494
   IBM TotalStorage Productivity Center Problem Determination Guide, GC32-1778
   IBM TotalStorage Productivity Center Messages, GC32-1776
Online resources
These Web sites and URLs are also relevant as further information sources: Tivoli software products index
http://www-306.ibm.com/software/tivoli/products/
Engenio
http://www.engenio.com
FibreAlliance
http://www.fibrealliance.org
Index
A
addess command 263 addessserver command 262263 adduser command 266 administrative rights DB2 user 76 Advanced Brocade API 359 agent communication ports 206 Agent deployment unattended (silent) installation 210 agent deployment 201 Common Agent logs 228 interactive installation 203 LINUX system 214 local installation 201 log files 226 remote installation 202 run scripts 205 service 225 silent installation 201 agent install verification 223 Agent Manager 32 certificates 51 database connection 93 default password 103 default user ID 103 healthcheck utility 92 IBMCDB database 91 key file 51 registration port 9511 87 security certificate 89 server installation 103 server port 9511 103 TCP/IP ports 46 verifying the installation 92 Agent manager public communication port 87 Agent Manager installation 83 Agent Recovery Service 34, 65 Agent Registration password 89 Agent registration password 51 agent registration password 33 agent remote installation 211 agent types 196 Agent uninstallation procedures 228 agents required 196 agentTrust.jks file 33, 35, 51 aggregation rules 427 AIX installation directory 118 AIX group 124 AIX install adm group 124 administrator 124 Agent Manager 146 Agent Registration Password 154 Application Server Name for Agent Manager field 152 Common agent 181 DAS user ID 133 Data server 165 DB2 127 DB2 fenced user ID 137 DB2 fix pack 143 DB2 port 50000 168 DB2 UDB 128 Device server 171 Graphical User Interface 182 hardware prerequisites 120 Host authentication password 175 HTTP Server installer 187 httpd.conf file 192 IBMCDB database 150 installing agents 177 NAS discovery 169 ports 125 primary domain name servers 122 remote installation 119 root user 145 Schema name field 163 Security Certificate 153 Security Domain 154 shut down DB2 environment 144 SMIT 123 SNMP community 170 software prerequisites 120 su - db2inst1 143 superuser ID 124 TPCDB 162 verify DB2 installation 143 Web application server 191 Web browser 186 Web browser interface 187 AIX 
V5.3 installation 118 alert scan profile 483 unwanted files 481 alerts 398 computers, file systems, directories 399 storage subsystems, fabrics, switches 406 ARS.version field 93 ava applet 114
B
Brocade SMI Agent 304 changing ports 315 CIM-XML CPA 309 configuration files 313
connecting through TPC 315 event settings 310 installing 305 mutual authentication 309 Brocade switch 432
C
CD layout 57 Certificate Authority file 51 changeMe password 89, 103 CIM Agent 15, 2829, 35 agent code 28 client application 28 device 28 device provider 28 overview 31 CIM Browser interface 271 CIM Client 15 CIM Managed Object 15 CIM Object Manager 240 CIM Object Manager (CIMOM) 15, 28, 30, 35 CIM Provider 15 CIM request 28 CIM Server 15 CIM-compliant 28 CIMOM adding using the GUI 269 customization 240 DS Open API 242 ESS 800 242 IBM website 242 interoperability namespace 330 LIC-Level to Bundle Version 243 CIMOM (CIM Object Manager) 28, 30, 35 CIMOM alert log 328 CIMOM configuration 346 CIMOM configuration recommendations 241 CIMOM considerations 241 CIMOM discovery 30, 327 CIMOM discovery alert 347 CIMOM discovery job 347348 CIMOM registration 333 CIMOM SVC console 292 CIM-Workshop 274 Cisco 432 CISCO switch CIM server 317 Cisco switch 316, 432 Cleanup.zip file 513 client application CIM requests 28 collecting data 37 Common Agent 35, 516 demo certificates 33 subagent 33 Common Agent Registration password 206 Common Agent registration password 103 Common Agent Services Agent Manager 32 Resource Manager 32
Common HBA API 15 Common Information Model (CIM) 14 common schema 28 communication data paths 37 Computer L1 tabular view 438 Computer view 437 configured user ID locations 515 constraint Triggered Actions 493, 498 Triggering Condition 493 constraint definition 490 constraint violations 496 constraints 398 pre-defined 407 core schema 28 Custom install 57
D
DA benefit 23 Data Agent 196 Data agent install option 180 Data agent upgrade 235 data collection 360 Data Manager 53 Data Manager security issues 54 Data Server port 9549 102 Data server component 4 Data Server port 9549 205 Data user levels 54 database requirements 45 databases IBMCDB 62, 122 TPCDB 62, 122 DB2 file system ownership 159 DB2 Administration Server 76 DB2 database performance 97 DB2 database sizing 97 DB2 install 71 DB2 installation verify installation 82 DB2 log files 99 DB2 user account 76 DB2 user IDs 68 DB2 user rights 68 DB2 window services 82 db2grp1 group 125 db2level command 82 default installation directory 56 demo keys 33 demonstration certificates 153 device management 17 Device Server port 9550. 102 Device server component 4 Device Server port 9550 205 diagnostic 513 Directory Agent 18, 21, 36, 514
configuring for subnet 26, 514 discover all services (DAS) 514 Discovery 39 Distance Vector Multicast Routing Protocol 25 DMS table space 510 DNS suffix 62 domain name system 62 DS CIM Agent log files 257 verify install 260 DS CIMOM CIM Browser interface 271 restart 264 telnet command 271 DS Open API ESS considerations 260 firewall rules 261 port 5989 254 DS Open CIM Agent addessserver command 262 CIMOM.LOG File 264 configuring 260 post install tasks 259 DS8000 Extend Pools 377 DVMRP 25
Fabric L1 443 Fabric L2 444 fabrics 356 FAStT CIM Agent SLP registration 281 Fibre Channel 14 Common HBA API 15 Generic Service 14 filesystem scan 503 firewall configuration 26
G
General Parallel File System (GPFS) 54 GPFS (General Parallel File System) 54 GUI applet 107 GUI for web access 107 GUID 64 gunzip command 147
H
Health overlay health values 426 Health status 425 Health Status Overlays Topology Viewer Health Status Overlays 424 health value 426 Health status aggregation rules 425 High-Performance Parallel Interface (HIPPI) 14 HIPPI (High-Performance Parallel Interface) 14 Home Directory tab 112 Host authentication password 215 HTTP 14 httpd.conf file 192
E
EFCM Management Software 294 Engenio provider 274 arrayhosts.txt 279 providerStore 279 slptool command 281 Engenio SMI-S Provider service 280 entity 418 Entity Groups 416, 421 ESS CIM Agent addess command 263 addessserver command 263 install 249 setuser interactive tool 265 ESS CIMOM SLP registration 268 ESS CIMOM verification 267 ESS CLI install 244 verification 249 verifyconfig 268 esscli command 249 extension schema 28
I
IBM FAStT 519 IBM SMI-S Agent for Tape 318 IBMCDB database 45, 91 ICAT (Integrated Configuration Agent Technology) 28 IGMP (Internet Group Management Protocol) 25 IIS Port 8080 111 ikeyman utility 33 ikeyman.exe 516 Inband discovery 36 Information Overlays 422 infrastructure communication 345 installation check list 45 hardware 44 Internet Information Services 65 user ID privileges 50 user IDs 50 installation licenses 95 Integrated Configuration Agent Technology (ICAT) 28 Intelligent Peripheral Interface (IPI) 14
F
fabric asset and configuration data 386 customized reports 389 reports 387 Fabric Agent 196 Fabric Agents 357 Fabric agents 197
Internet Assigned Numbers Authority 24 Internet Group Management Protocol (IGMP) 25 Internet Information Server (IIS) 34 Internet Information Services 65, 108 IP address 517, 520 IP address assignment 24 IP multicasting 24 IP network 18 itcauser 207, 218 ITSO lab environment 340
O
Open Shortest Path First (OSPF) 26 OS Type attribute 418 OSPF (Open Shortest Path First) 26 out of band discovery job 358 Out Of Band Fabric agents 197 outband discovery 36 Overlays tabular view 422 own subnet 514
J
Java applet 343
K
key file for Agent Manager 51
P
performance data 383 Performance Monitor 361 Performance Monitor jobs 370 Performance Overlay 427 performance status 427 PIM 26 Ping 361 Ping alert 369 ping command 266 Ping creation 368 pinning 430 PMM_Exception table 427 Policy management 499 port number 518 printing Topology Viewer 450 privileges for install user ID 50 Probe creation 361 progressive information disclosure 416 Protocol-Independent Multicast 26 provider component 30 proxy model 28 ps -ef command 226 public communication port 9513 103
L
CIMOM logs 355 Limited Edition 11 local agent uninstall 229 local subnet 26, 514 multicast service requests 514 log files CLI 107 Device server 107 GUI 107 LogCollector.readme file 513
M
management application 15 Management Information Base 17 Management Information Base (MIB) 17 McDATA interface configuring 299 Direct Connection mode 294 EFCM Proxy mode 295 TPC connection 303 McData switch 432 McData switches 292 mdisk to vdisk relationship 447 MIB compiler 17 Microsoft Windows domain 51 mini-map 424 mini-map navigation,Topology Viewer Mini-map navigation 435 MOSPF (Multicast Open Shortest Path First) 26 multicast 24 multicast group 24 individual hosts 25 multicast messages 19 Multicast Open Shortest Path First (MOSPF) 26 multicast request 21 multicast traffic 23
R
Redbooks Web site 526 Contact us xvi report constraint violation 402 reports servers and computers 393 SVC LUN mapping 380 Resource Manager 32, 515, 521 configured user ID 515 Reverse Path Forwarding 25 role based administration 343 rsTestConnection command 261 rsTestConnection.exe command 261
N
NAS discovery 169
S
Scan 361
scan customization 486 filesystem 503 report 487 Scan creation 365 scan creation 483 Scan profiles 366 schema log files 99 schema name 97 security certificates 51 Security Certificates panel 88 Semantic Zooming 417 service accounts 52 Service Agent 19, 514 service information 19 Service Location Protocol multicast 22 Service Location Protocol (SLP) 18 service agent 18 user agent 18 service URL 20, 22 setuser command 266 setuser interactive tool 265 silent install response file 210 Simple Network Management Protocol 17 SLP active DA discovery 21 address 24 broadcast communication 2324 CIM Agent 28 configuration recommendation 332, 514 DA configuration 514 DA considerations 331 DA discovery 21 DA functions 22 directory agent configuration 332, 514 Discovery 515 environment configuration 514 firewall configuration 26 messages 25 multicast address and port 332 multicast communication 2324 multicast group 24 multicast messages 19 multicast request 21 multicast service request 21 passive DA discovery 22 port 427 26 port number 24, 515 registration 19 router configuration 332, 514 service agent 19 service attributes 19 service type 19 slp.conf 334 starting 260 unicast 23, 331 unicast communication 23 user agent 19 User Datagram Protocol message 19
verify active 267 verify install 259 verifyconfig command 336 when to use DA 23, 331 SLP (Service Location Protocol) 18 SLP DA 18, 514 SLP discovery summary 515 SLP environment 18 slp.conf file 334 slptool 268 slptool command 515 SMI-S 16, 35 SMI-S intiative 242 SMS table space 510 SNIA (Storage Networking Industry Association) 15 SNIA certification 241 SNIA certified devices 16 SNIA Web site 11 SNMP 17 SNMP management application 17 station product 17 SNMP manager 17 SNMP trap 17 Standard Edition 12 startcimbrowser command 271 status propagation 425 storage device 13 inband management 14 Storage Management Initiative - Specification 16, 35 Storage Networking Industry Association (SNIA) 15 storage subsystem asset and configuration data 376 reports 380 volume information 377 Storage Subsystem view Topology Viewer Storage Subsystem 445 storage subsystems 355 su command 143 subnet 514 query group membership 25 Subsystem Device Driver (SDD) 44 Subsystems L2 view 447 SVC CIMOM 285 console 285 register to SLP DA 292 SVC console account 286 TPC console account 286 verification 292 SVC Performance Monitor 370 SVC performance reporting 384 SVC zoning configuration 410 switch Performance Monitor 374
T
table space considerations 163 tape library 356 reports 391 Tape SMI-S Agent Index
performance considerations 332 role based administration 343 server 521 services 344 starting the GUI 342 structure 2 tape library support 9 zone creation 473 TotalStorage Productivity Center for Data 5 Data agent 6 Data manager 6 communication 36 Web server 6 TotalStorage Productivity Center for Disk 6 performance functions 7 TotalStorage Productivity Center for Fabric 7 communication 36 Fabric manager 7 TotalStorage Productivity Center GUI 110 TPCDB database 45 trap 17 troubleshooting cimom.log file 270 TSRMsrv1 user ID 106 Typical install 56 typical install 74
U
Uniform Resource Locator (URL) 19 Unknown health value 426 update zone configuration 479 upgrade.zip files 235 User Agent (UA) 1820, 514 SLP User Agent interactions 21 User Datagram Protocol (UDP) 19 message 19 User Defined Properties 418 user ID 515, 519, 521 user interface 343 User Rights Assignments 50
V
verify ESS CLI 249 verifyconfig command 268269, 292, 336 Virtual Disk creation 410
W
WBEM (Web-Based Enterprise Management) 14 WBEM architecture 15 WBEM browser 271 WBEM initiative 14 Web browser 186 Web server 6 Web-Based Enterprise Management (WBEM) 14 Websphere Application Server IP address 88 Windows Services 51
worksheets DS4000 523 key file 521 SAN Volume Controller (SVC) 524 servers 520 storage devices 522
X
XML 30 xmlCIM 14 X-Windows display 128
Z
zone creation 473 zones 431 zoning information 432