ADMINISTRATOR'S GUIDE
P/N 069001067-02
EMC Corporation
Corporate Headquarters: (508) 435-1000, (800) 424-EMC2
171 South Street, Hopkinton, MA 01748-9103 Fax: (508) 435-5374 Service: (800) SVC-4EMC
Trademark Information
EMC2, EMC, MOSAIC:2000, Symmetrix, CLARiiON, and Navisphere are registered trademarks and EMC Enterprise Storage, The Enterprise Storage Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, Timefinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC Enterprise Storage Specialist, EMC Storage Logic, Universal Data Tone, E-Infostructure, Celerra, Access Logix, SnapView, MirrorView, and RemoteView are trademarks of EMC Corporation. Windows and Windows NT are registered trademarks of Microsoft Corporation. All other trademarks mentioned herein are the property of their respective owners.
Contents
Preface ............................................................... ix

Chapter 1   About EMC Navisphere Host Agent, CLI and ATF
            Terminology ........................................... 1-2
            Storage Components .................................... 1-3
            Navisphere Software ................................... 1-4
            Sample Storage Installations .......................... 1-6
            Storage-System Configurations with ATF ............... 1-9
                Unshared Direct Configuration ..................... 1-9
                Shared-or-Clustered Direct Configuration ......... 1-10
                Shared Switched Configuration .................... 1-11
                How ATF Handles Hardware Failures ................ 1-12
Chapter 2   Installing and Removing ATF, Host Agent, and CLI
Chapter 3
            Creating Windows NT Partitions on LUNs ............... 3-6
            Making JBOD Disks Available to Windows NT ............ 3-9
                Determining the Disk Number for Each Disk ........ 3-9
                Creating Windows NT Partitions on JBOD Disks .... 3-11
Chapter 4
Chapter 5
Chapter 6
Using ATF
            ATF Operation ......................................... 6-2
            Testing ATF ........................................... 6-2
            ATF Storage-System Device Names ....................... 6-3
            Adding Devices to a Server After ATF Installation ..... 6-3
            Failover Messages ..................................... 6-3
            Restoring the Original Path ........................... 6-4
            Verifying Restored LUNs Using Windows Event Viewer .... 6-5
            ATF Trespass Utility .................................. 6-6
Appendix A
Troubleshooting
            Troubleshooting Windows NT Problems ................... A-2
                Using the Windows NT Event Viewer ................. A-2
                Server Passed Self-Test but Operating System
                Does Not Boot from Boot Disk ...................... A-2
                Windows NT Does Not Display Disks ................. A-2
            Troubleshooting Windows 2000 Problems ................. A-3
                Using the Windows 2000 Event Viewer ............... A-3
                System Configuration Problems ..................... A-3
                Fibre Channel Problems ............................ A-4
                Boot Drive ........................................ A-4
                Windows 2000 Does Not Display Disks ............... A-4
Figures
1-1   Sample Navisphere Shared Storage Configuration ............. 1-7
1-2   Sample Navisphere Unshared Storage Configuration ........... 1-8
1-3   Sample Unshared Direct Configuration ....................... 1-9
1-4   Sample Shared Direct Configuration (FC4700) ............... 1-10
1-5   Sample Clustered Direct Configuration (Non-FC4700) ........ 1-10
1-6   Sample Shared Switched Configuration ...................... 1-11
      Example of Assignment of Disk Numbers to JBOD Disks ....... 3-10
Preface
This manual describes how to set up a Microsoft Windows NT or Windows 2000 server to use Fibre Channel disk-array storage systems. These storage systems may or may not have storage processors (SPs). Storage systems without SPs consist of disk-array enclosures (DAEs) in a JBOD (just a bunch of disks) configuration. This manual assumes that you are familiar with the following:

- The Windows NT or Windows 2000 operating system, whichever is running on the server.
- How the operating system handles physical disks and disk partitioning.

How This Manual Is Organized
Chapter 1    Introduces the Navisphere server software ATF (Application-Transparent Failover), the Host Agent, and the CLI that you will install.

Chapter 2    Describes how to install ATF, the Host Agent, and CLI.

Chapter 3    Describes making LUNs or JBOD disks in the storage system available to Windows NT.

Chapter 4    Describes making LUNs or JBOD disks in the storage system available to Windows 2000.

Chapter 5    Describes configuring the server for Windows NT or Windows 2000 Clusters.

Chapter 6    Describes how to use ATF on a Windows NT or Windows 2000 storage-system server.
Appendix A   Describes troubleshooting some problems you might encounter after configuring the storage systems.
Related Documentation
EMC Fibre Channel Storage System Model FC4700 Configuration Planning Guide, P/N 014003016
EMC Fibre Channel Storage Systems Models FC4500, FC5300, FC5500, and FC5700 Configuration Planning Guide, P/N 014003039
PCI Host Bus Adapter and Driver for Windows NT Installation Guide, P/N 014003033
PCI Host Bus Adapter and Driver for Windows 2000 Installation Guide, P/N 014003034
EMC Navisphere Command Line Interface (CLI) Version 5.X Reference, P/N 069001038
EMC FC-Series and C-Series Storage System and Navisphere Event Codes Version 5.X Reference, P/N 069001061
EMC Navisphere Manager 5.X Administrator's Guide, P/N 069001036
CAUTION    A caution contains information essential to avoid damage to the system or equipment. The caution may apply to hardware or software.

EMC uses the following type style conventions in this guide:

Boldface       Specific filenames or complete paths. Dialog box names and menu items in text. Selections you can make from the user interface, including buttons, icons, options, and field names. Emphasis in cautions and warnings.

Italic         New terms or unique word usage in text. Command line arguments when used in text.

Fixed space    Examples of specific command entries that you would type, displayed text, or program listings. For example:

               QUERY [CUU=cuu|VOLSER=volser]

Fixed italic   Variables in command examples, such as cuu and volser above.
Obtain technical support by calling your local sales office. If you are located outside the USA, call the nearest EMC office for technical assistance. For service, call:

United States:   (800) 782-4362 (SVC-4EMC)
Canada:          (800) 543-4782 (543-4SVC)
Worldwide:       (508) 497-7901

and ask for Customer Service.

Your Comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please e-mail us at techpub_comments@emc.com to let us know your opinion of, or any errors in, this manual.
1
About EMC Navisphere Host Agent, CLI and ATF
This chapter describes the EMC Navisphere storage-system management configurations and architecture. Major topics are

- Terminology ................................................... 1-2
- Storage Components ............................................ 1-3
- Navisphere Software ........................................... 1-4
- Sample Storage Installations .................................. 1-6
- Storage-System Configurations with ATF ........................ 1-9
Terminology
Term                            Meaning

Host Agent                      EMC Navisphere Agent that runs on a storage-system server.

SP Agent                        EMC Navisphere Agent that runs on the SPs in an FC4700 storage system.

managed Agent                   Host Agent or SP Agent managed by EMC Navisphere management software.

managed storage system          Storage system managed by EMC Navisphere management software.

RAID Group storage system       Storage system whose storage processors (SPs) are running Core or Base Software that has RAID Group functionality.

non-RAID Group storage system   Storage system whose SPs are running Core or Base Software that does not have RAID Group functionality.

shared storage system           Storage system with the EMC Access Logix option, which provides data access control (Storage Groups) and configuration access control. A shared storage system is always a RAID Group storage system.

unshared storage system         Storage system without the EMC Access Logix option.

FC-series storage system        FC4300/4500, FC4700, FC5600/5700, FC5400/5500, FC5200/5300, or FC5000 series storage system.

C-series storage system         C1000, C1900, C2x00, or C3000 series storage system.

JBOD storage system             Storage system without storage processors (SPs); that is, one that contains only DAEs and no DPEs.

ATF                             EMC Navisphere Application-Transparent Failover.

CDE                             CLARiiON Driver Extensions, which provide basic failover features when ATF is not present.

CLI                             EMC Navisphere Command Line Interface.

Event Monitor                   EMC Navisphere Event Monitor.

Manager                         EMC Navisphere Manager.
Storage Components
The basic components of a storage-system configuration are

- One or more storage systems.

- One or more servers connected to the storage systems directly or through hubs or switches. A server can run Novell NetWare, Microsoft Windows NT, Microsoft Windows 2000, or one of several UNIX operating systems, such as the IBM AIX or Sun Solaris operating system.

- A host computer (called a management station) that is running Navisphere storage-management software and is connected over a local area network (LAN) to any storage-system servers and any storage processors (SPs) in an FC4700 storage system.
For a non-FC4700, the management station must be connected via a fibre cable.
A management station can also be a server if it is connected to a storage system. The management station must run the Microsoft Windows NT or the Windows 2000 operating system.
Navisphere Software
The components of the Navisphere storage-system software are described in the following table.
Host Agent — Runs on: server. Required.

SP Agent — Runs on: FC4700 SPs. Required for FC4700 storage systems.

Manager — Runs on: a Windows 2000 or Windows NT host that is the Navisphere management station. Optional but recommended.

CLI — Runs on: server or the Navisphere management station. Optional; shipped with the Host Agent.

ATF (Application Transparent Failover) — Runs on: server. Optional.

CDE (Driver Extensions Software) — Runs on: server. Installation may be required, depending on configuration; shipped with the Host Agent.

Event Monitor — Runs on: the management station (where Manager runs). Optional; shipped with Manager.
Host Agent and SP Agent

You must install the Navisphere Host Agent on all servers connected to storage systems. The SP Agent is installed at the factory on the SPs in an FC4700 storage system. The Host Agents and SP Agents communicate with the management station's storage-management software. The Host Agent communicates with the Core Software in non-FC4700 storage systems, and the SP Agent communicates with the Base Software on the FC4700 SP it runs on.

Manager

Navisphere Manager, the storage-management software, provides a graphical user interface that lets you configure the storage systems connected to the servers on the LAN.

CLI

CLI is a command line interface to the Host Agent and SP Agent that can run on a storage-system server or a management station. It provides an alternative to Manager. With CLI, you can configure, control, and retrieve status from any managed storage system. You can also use CLI to automate disk-storage management functions by writing shell scripts or batch files. Installing CLI is optional; the Host Agent and SP Agent can function normally without it.

ATF

ATF (Application Transparent Failover) is an optional software package designed for high-availability installations. ATF is required for a host that has two host-bus adapters connected to each storage system, and is optional for other hosts. ATF works with storage systems to let applications continue running after the failure of an SP, host-bus adapter (HBA), switch, hub, or storage-system cable. Without human intervention, ATF can route I/O through a secondary path to the disk logical units (LUNs) the applications need. With hardware that provides two paths to each SP (shown later in this chapter), ATF uses multipath I/O, directing the operating system to route I/O through all available paths to SPs and LUNs. Using all the available paths increases system throughput by providing dynamic load balancing, making multipath I/O a major feature of ATF.
If a server has only one HBA connected to a storage system, you install the driver extensions software (CDE) instead of ATF. CDE provides basic failover functionality. CDE is part of the EMC Fibre Channel HBA driver package for Windows NT or Windows 2000.
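Because the CLI lends itself to scripting, a small sketch may help. Everything here is illustrative: the host names sp-a and sp-b are made up, and the getlun subcommand is an assumption — check the EMC Navisphere CLI Reference for the commands your revision supports.

```python
# Illustrative only: poll LUN status through the Navisphere CLI from a
# script. The navicli binary, the -h host names, and the getlun
# subcommand are assumptions; adjust them to your installation.
import shutil
import subprocess

def lun_status(host):
    """Return raw CLI output for one agent host, or a note if absent."""
    if shutil.which("navicli") is None:
        return "navicli not installed on this station"
    result = subprocess.run(["navicli", "-h", host, "getlun"],
                            capture_output=True, text=True)
    return result.stdout or result.stderr

for host in ("sp-a", "sp-b"):   # hypothetical agent hosts
    print(host, "->", lun_status(host))
```

A batch file or scheduled job wrapping calls like these could, for example, collect nightly status from every managed storage system.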
Event Monitor

Navisphere Event Monitor can monitor all storage systems on a network and notify personnel by e-mail, fax, page, or telephone if any specified storage-system event occurs.
Figure 1-1    Sample Navisphere Shared Storage Configuration

[Figure: a Windows NT management station (running Manager, Event Monitor, optional Integrator, Organizer, and Analyzer) connects over a LAN to Windows NT, Windows 2000, and UNIX servers (each running the Host Agent, ATF, and optional CLI) and to the SP Agents on SP A and SP B of FC4700 storage systems. Legend: management connection (FC4700 only); first data path and management connection; second data path and management connection.]
Figure 1-2 shows a sample Navisphere unshared storage configuration in which a Windows NT host is a management station for unshared storage systems connected directly to their servers. The management station shown is not a storage-system server, but it could be. The CLI is running on multiple hosts, but only one instance is needed to manage the storage systems connected to any server.
Figure 1-2    Sample Navisphere Unshared Storage Configuration

[Figure: a Windows NT management station (running Manager, Event Monitor, optional Integrator, Organizer, and Analyzer) connects over a LAN to Windows NT, Windows 2000, and UNIX servers, each directly connected to an unshared storage system with SP A and SP B.]
Storage-System Configurations with ATF

Unshared Direct Configuration

Each configuration requires two HBAs in the server connected to each storage system. For best SP performance, you can bind some LUNs on one SP and the other LUNs on the other SP. The SP that binds a LUN is the default owner of that LUN and determines the primary I/O path to that LUN. The route through the other SP to the LUN is the secondary path, which is available if a component in the primary route fails.
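The ownership rule can be pictured with a toy sketch. The LUN and SP names are made up; this models the rule, not ATF's implementation.

```python
# Model of default ownership: the SP that binds a LUN owns it and is
# the primary path; the route through the peer SP is the secondary
# path, used only when the primary route fails.
default_owner = {"LUN 0": "SP A", "LUN 1": "SP B"}  # set at bind time
path_ok = {"SP A": True, "SP B": True}              # path health

def route(lun):
    primary = default_owner[lun]
    secondary = "SP B" if primary == "SP A" else "SP A"
    return primary if path_ok[primary] else secondary

print(route("LUN 0"))     # primary path, through SP A
path_ok["SP A"] = False   # e.g., the cable to SP A fails
print(route("LUN 0"))     # I/O is rerouted through SP B
```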
Figure 1-3    Sample Unshared Direct Configuration

[Figure: one server with two adapters connected directly to SP A (Path 1) and SP B (Path 2) of a storage system.]
Shared-or-Clustered Direct Configuration
A shared direct configuration (Figure 1-4) uses the Access Logix option in the storage systems to control LUN access. It is supported for FC4700 storage systems only. A clustered direct configuration (Figure 1-5) uses clustering software. It is supported for non-FC4700 storage systems without the Access Logix option when the servers are running Windows NT or Windows 2000 Clustering software. This configuration provides the highest availability by protecting against host failure.
Figure 1-4    Sample Shared Direct Configuration (FC4700)

[Figure: Server 1 and Server 2, each with two adapters, connected directly to SP A (Path 1) and SP B (Path 2) of an FC4700 storage system.]

Figure 1-5    Sample Clustered Direct Configuration (Non-FC4700)

[Figure: clustered Server 1 and Server 2, each with two adapters, connected directly to SP A (Path 1) and SP B (Path 2) of a storage system.]
Shared Switched Configuration
Switches add flexibility and availability to a Fibre Channel site. Essentially, a switch lets you expand any of the previous configurations to include multiple servers and storage systems. A shared switched configuration uses the Access Logix option in the storage systems to control LUN access and one or two switch fabrics, depending on the switch hardware model. This configuration provides the highest availability when the servers run Windows NT or Windows 2000 Clustering software. A shared switched configuration can provide multiple paths to each SP, allowing ATF to use multipath I/O to storage system SPs and LUNs. Multipath I/O provides greater system throughput by sending I/O to all paths in round-robin sequence, spreading and dynamically balancing the overall I/O load.
Figure 1-6    Sample Shared Switched Configuration

[Figure: Server 1 and Server 2, each with two adapters, connected through two switch fabrics to SP A (Path 1) and SP B (Path 2) of a storage system.]
A shared switched configuration without multiple paths to each SP uses fewer switch ports and cables but does not provide multipath benefits. For an example of such a configuration, imagine the figure above with only one connection between each switch and each SP.
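The round-robin distribution described above can be sketched as follows — a simplification with invented path names; real ATF balances live I/O across hardware paths, not entries in a fixed list.

```python
# Sketch of multipath round-robin: successive I/Os are sent down the
# available paths in rotation, spreading the load across HBAs,
# fabrics, and SP ports.
from itertools import cycle

paths = ["HBA0 -> fabric 0 -> SP A", "HBA1 -> fabric 1 -> SP A"]
rotation = cycle(paths)

io_log = [next(rotation) for _ in range(4)]  # four I/Os, two per path
for entry in io_log:
    print(entry)
```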
How ATF Handles Hardware Failures
Table 1-1 describes ATF features for recovery from hardware failure. Note that for hardware failures of a disk, fan module, or power supply, redundant hardware will allow operation to continue regardless of ATF software.
Table 1-1    How ATF Handles Hardware Failures

Unshared Direct Configuration (one host with two HBAs; two SPs)
- HBA or HBA cable: ATF transfers the LUNs in the failed path to the working path.
- SP or SP cable: ATF transfers the LUNs in the failed path to the working path.
- LCC or SP-to-LCC cable: ATF transfers the LUNs in the failed path to the working path.

Shared Direct-or-Clustered Configuration (two hosts, each with two HBAs; two SPs)
- HBA or HBA cable, SP or SP cable, or LCC or SP-to-LCC cable: ATF transfers the LUNs in the failed path to the working path.
- Host: With Windows NT or Windows 2000 Clustering software, ATF transfers the LUNs in the failed path to the working path.

Shared Switched Configuration (multiple hosts, each with two HBAs; two switches or hubs; two SPs)
- HBA or HBA cable, SP or SP cable, or LCC or SP-to-LCC cable: ATF transfers the LUNs in the failed path to the working path.
- Host: With Windows NT or Windows 2000 Clustering software, ATF transfers the LUNs in the failed path to the working path. The number of hosts supported varies with the cluster software.
2
Installing and Removing ATF, Host Agent, and CLI
This chapter describes how to install ATF, the Host Agent, and the CLI on a Windows NT or Windows 2000 storage-system server. The Host Agent is required on all storage-system servers. ATF is required for all highly available configurations. It is also required for shared storage systems. CLI is optional for all storage systems. Major topics are

- Hardware and Software Requirements ............................ 2-2
- Installing ATF ................................................ 2-3
- Installing the Host Agent and CLI ............................. 2-6
- Using the Event Monitor Configuration File ................... 2-13
- Configuring Storage Systems .................................. 2-13
- Starting the Host Agent Service .............................. 2-14
- Stopping the Host Agent Service .............................. 2-14
- Removing CDE or ATF .......................................... 2-15
- Removing the Host Agent or CLI ............................... 2-17
For information about the specific revision of the storage-system Core or Base Software that is required for your revision of ATF and the Host Agent, see the release notes that shipped with the Navisphere media. The release notes also describe any changes to the installation process for the software revision that you are installing.
Installing ATF
CAUTION    If the Host Agent is already installed and you are going to replace it, remove it before installing ATF, as described in the section Removing the Host Agent or CLI on page 2-17. If you are not going to replace the Host Agent, you do not need to remove it before installing ATF. However, once Windows comes back up from the reboot required after the ATF installation, you must use the remote agent configuration feature of Manager or the Navisphere CLI remoteconfig command to rescan the bus to find the new ATF devices for any non-FC4700 storage systems connected to the server, and save the new Agent configuration file.

You will need to log in to the server as the Administrator or as someone with administrative privileges. Before installing ATF, disconnect any network drives.

1. Log in to the server as the Administrator or as someone with administrative privileges.

2. If a version of CDE or ATF is already installed, remove it as described in the section Removing CDE or ATF on page 2-15. If you do not know whether CDE or ATF is installed, open the Command Prompt dialog box and enter the following command:

   atf_console

   If you receive an error, neither CDE nor ATF is installed.

3. Insert the Navisphere ATF CD-ROM in the CD-ROM drive. The ATF Setup dialog box opens and displays the Welcome to the InstallShield Wizard for ATF dialog box. If the ATF Setup dialog box does not open, follow these steps:

   a. From the Windows taskbar, select Start > Run. A Run dialog box opens.

   b. In the Command Line field of the Run dialog box, enter

      drive:\SETUP    (for example, D:\SETUP)
If you did not remove an existing version of CDE or ATF, the Welcome to the InstallShield Wizard for ATF dialog box is not displayed in the ATF Setup dialog box, and you should do the following:

   a. Click OK when you are asked if you want to remove CDE or ATF.
   b. Select No, I will restart my computer later and click Finish.
   c. Repeat step 3.

4. In the Welcome to the InstallShield Wizard for ATF dialog box, click Next.

5. Click Yes to accept the license terms and proceed with the installation. The Choose Destination Location dialog box is displayed.

6. Click Next to accept the default destination folder, C:\Program Files\EMC\ATF. The Select Program Folder dialog box is displayed.

7. Click Next to select the default program folder that will hold the ATF software. The ATF files are installed as follows: the ATF driver (CLatf.SYS) is in the Windows drivers directory, and the ATF executable programs are in the program folder you specified in a previous step.

8. Leave Yes, I want to restart my computer now selected and click Finish. Windows shuts down and restarts, enabling the ATF software.

After Windows restarts, you can run the atf_console program supplied with the ATF package. While it is running, the atf_console program displays ATF events in a DOS window that is updated several times each minute. These events are also recorded in the Windows Event Viewer. If you need instant ATF status, you can either refresh the Event Viewer display or run the atf_console program. To run the atf_console program, click the program name in the ATF program folder, or use the taskbar path Start > Programs > ATF > atf_console.
9. If the Host Agent was installed when you installed ATF and you have non-FC4700 storage systems connected to the server, use either the remote agent configuration feature of Manager or the CLI command remoteconfig -setconfig to do the following:

   a. Clear devices.
   b. Scan for devices.

10. Verify that Windows recognized the storage-system SPs by starting the Event Viewer and examining the clatf messages. As with any Event Viewer event, you can double-click a clatf event to display a detailed explanation of the message. When Windows boots with ATF installed, it discovers each storage system and the SPs in it, and then records these events in the event log as clatf events. A sample text line from the Event log follows.
Found SP atf_sp0b at address [3/0/4/0]
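If you script checks against event text exported from the Event Viewer, a line like the sample above can be picked apart programmatically. The pattern below matches only the sample format shown here and may need adjusting for other revisions.

```python
# Sketch: pull the SP device name and address out of a clatf event
# line of the form shown in the sample above.
import re

line = "Found SP atf_sp0b at address [3/0/4/0]"
match = re.match(r"Found SP (\S+) at address \[([\d/]+)\]", line)
if match:
    sp_name, address = match.groups()
    print(sp_name, address)   # atf_sp0b 3/0/4/0
```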
11. Reconnect any network drives that you disconnected.

What Next?

If the Host Agent is not already installed or you want to update it or the CLI, go to the next section, Installing the Host Agent and CLI. If the Host Agent is already installed and you do not want to update it or the CLI, what you do next depends on whether you will use Navisphere Manager to manage the storage systems connected to the server.

Using Manager - If you are setting up a new server, go to the section Configuring Storage Systems on page 2-13. If you are just upgrading the Host Agent or CLI on an existing system, you are finished with the upgrade.

Not using Manager - Go to the section Using the Event Monitor Configuration File on page 2-13.
Installing the Host Agent and CLI

1. Ensure that all storage systems are connected to the server on which you are installing the Navisphere Agent.

2. Log in to the Windows management station as Administrator or someone who has administrative privileges.

3. If the Agent and/or CLI is already installed, then before continuing, remove each one as described on page 2-17. The installation program does not let you overwrite an existing Agent or CLI.

4. Insert the Navisphere Agent/CLI CD in the server's drive. Installation starts automatically. The setup program prepares the InstallShield Wizard for the installation, and the Navisphere Agent, CLI Setup dialog box opens.
5. If you do not see the Navisphere Agent, CLI Setup dialog box, follow these steps to start the installation:

   a. From the Windows taskbar, select Start > Run.

   b. In the Run dialog box, enter the following program name, and then click OK:

      drive:\setup.exe

      where drive is the letter for the CD drive.

6. In the Navisphere Agent, CLI Setup dialog box, click Next. A second Navisphere Agent, CLI Setup dialog box opens.
7. In the dialog box, select the Navisphere Agent check box to install the Agent, select the Navisphere CLI check box if you also want to install the CLI, and click Next.
If you have not already removed a previous version of the Agent, a dialog box opens to inform you that you must remove the installed version. To manually remove the previous version, click Yes and follow the instructions on page 2-17. To automatically remove the previous version of the Agent, click No. Once you have removed the previous version, repeat steps 4 through 7 to install the new version.

8. In the Navisphere Agent Setup dialog box, click Next.

9. In the License Agreement dialog box, read the license agreement, and click Yes to accept the terms.

10. In the Customer Information dialog box, enter the appropriate information, and click Next.

11. In the Choose Destination Location dialog box, click Next to select the default location. The default location is drive:\Program Files\EMC\Navisphere Agent.

12. In the Select Program Folder dialog box, click Next to select the default program folder (Navisphere). The setup program copies files to the destination folder, and then it displays the message Navisphere Agent Service installed successfully.

13. In the Navisphere Agent Installer dialog box, click OK.
14. The Initialize Privileged User List dialog box opens so you can add privileged users to a new or existing Host Agent configuration file.
In the Agent Configuration File you must specify at least one privileged user who can log in to the Navisphere management station and configure the Agent. If you do not specify a privileged user, an error message appears when you attempt to exit this dialog box.
If a Host Agent configuration file does not exist on the server, continue to step 15 to use the create or overwrite option. If a Host Agent configuration file exists on the server, go to step 16 to use the existing file option.

15. To create or overwrite a configuration file, follow these steps:

   a. Click Create/Overwrite File.
   b. Enter the pathname in Config File.

16. To use the existing configuration file, follow these steps:

   a. Click Use Existing File. The pathname of the file appears in Config File and its privileged users appear in the Privileged User List.
   b. If you want to use a different file, either enter the full pathname of the file or select Browse to find the file.

17. Make sure the Privileged User List contains entries for only those users allowed to configure storage systems connected to the server.

   To add a user to the list:

   a. Click Add. The Add Privileged User dialog box opens.
   b. In User Name, enter the person's user account name.
   c. In System Name, enter the name of the host running Manager (the Navisphere management station).
   d. Click OK.

   To remove a privileged user from the list:

   a. Select the privileged user name.
   b. Click Remove.

18. Click OK to save the new privileged user list. The program saves the Host Agent configuration file with the new privileged user entries and starts the Host Agent.

19. In the Navisphere Agent Setup dialog box, click Finish. If you are not installing the Navisphere CLI on this host, go to step 26. If you are installing the Navisphere CLI on this host, continue to step 20.

20. In the Navisphere CLI Setup dialog box, click Next.

21. In the License Agreement dialog box, read the Software License Agreement, and click Yes to accept the terms.

22. In the Customer Information dialog box, enter the appropriate information (probably the same as for the Agent), and click Next.

23. In the Choose Destination Location dialog box, click Next to select the default location. The default location is drive:\Program Files\EMC\Navisphere CLI.
24. In the Select Program Folder dialog box, click Next to select the default program folder (Navisphere). The setup program copies the files to the program folder.

25. In the InstallShield Wizard Complete dialog box, click Finish. The InstallShield Wizard Complete dialog box opens again.

26. In the InstallShield Wizard Complete dialog box, click Finish again.

27. Remove the Agent/CLI CD-ROM from the server's CD-ROM drive.

28. If you are managing non-FC4700 storage systems, edit the Host Agent configuration file using one of the following:

   - the Remote Agent Configuration feature of Manager, as described in the Manager Administrator's Guide.
   - the CLI remoteconfig command, as described in the CLI Reference Manual.
You must edit the Host Agent configuration file to add device entries for each storage system the server will communicate with.
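For orientation only, a privileged-user entry and a device entry in the Host Agent configuration file might look roughly like the fragment below. The syntax shown is hypothetical and varies by Agent revision; edit the file through Manager's Remote Agent Configuration feature or the CLI remoteconfig command rather than by copying this sketch.

```
# Hypothetical agent configuration fragment -- not verbatim syntax
user   administrator@mgmt-station    # a privileged user (name@system)
device auto auto                     # a scanned device entry for a
                                     # non-FC4700 storage system
```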
For information on starting and stopping the Host Agent service, see Starting the Host Agent Service on page 2-14 and Stopping the Host Agent Service on page 2-14. For information on using the CLI, see the EMC Navisphere Command Line Interface (CLI) Reference.
When a non-FC4700 storage system experiences heavy input/output traffic (that is, applications are using the storage system), information may not be reported to the Host Agent in a timely manner. In such a situation, the Host Agent may take several minutes to execute a storage-system management task. This behavior is most evident when one Host Agent is managing multiple storage systems. Also, if the SP event log is large and the Host Agent configuration file is set up to read all the events, it may take a few minutes for the Host Agent to start.
What Next?

What you do next depends on whether you will use Navisphere Manager to manage the storage systems connected to the server.

Using Manager - If you are setting up a new server, go to the section Configuring Storage Systems on page 2-13. If you are just upgrading the Host Agent or CLI on an existing system, you have finished the upgrade.

Using CLI - Continue to the next section, Using the Event Monitor Configuration File, for information on how you can monitor storage-system events.
What Next?
When the storage systems are configured, you must make the LUNs available to the operating system as described in the following chapters:

For Windows NT     Chapter 3
For Windows 2000   Chapter 4
Removing CDE or ATF

1. Log in to the server as Administrator or as someone who has administrative privileges.

2. From the Windows taskbar, select Start > Settings > Control Panel.

3. In the Control Panel dialog box, double-click Add/Remove Programs. The Add/Remove Program Properties (Windows NT) or Add/Remove Programs (Windows 2000) dialog box opens.

4. Select CDE or ATF and click the Add/Remove button (Windows NT) or the Change or Remove Programs button (Windows 2000). Windows displays a Confirm File Deletion dialog box.

5. Click OK. The system removes the CDE or ATF program group, drivers, and executable files. If you used an existing program folder when you installed CDE or ATF, the system does not delete the program folder itself.

6. If you will install another version of CDE or ATF, select No, I will restart my computer later and click Finish.

7. If you will not install another version of CDE or ATF, leave Yes, I want to restart my computer now selected and click Finish. Windows shuts down and restarts.
8. If the Host Agent is installed on the server, do the following:
a. On startup, when the Host Agent reports that no devices are being managed, click OK.
b. When the Host Agent reports that it has started, click OK. If you have non-FC4700 storage systems connected to the server, use either the remote agent configuration feature of Manager or the CLI command remoteconfig -setconfig to clear devices and then scan for devices. Auto Detect does not find FC4700 storage systems because they are managed through their SP Agents and not through the Host Agent on the server.
CAUTION: If you do not perform an Auto Detect, the storage system remains inaccessible to Navisphere Manager and the CLI.
c. When the scan is complete, save the new Agent configuration file using one of the following menu paths: File > Save or File > Save As.
d. When prompted about restarting the Agent, click Yes.
1. On the Windows server running the Host Agent that you want to remove, log in as Administrator or the equivalent.
2. Stop the Host Agent service as follows:
a. From the Windows taskbar, select Start > Settings > Control Panel > Services. The Services dialog box opens.
b. In the Services dialog box, if Navisphere Agent is started, select it and click Stop.
c. When asked to confirm your request to stop the Agent service, click Yes. Then close the Services dialog box.
3. In the Control Panel dialog box, double-click Add/Remove Programs. The Add/Remove Program Properties (Windows NT) or Add/Remove Programs (Windows 2000) dialog box opens.
4. Select Navisphere Agent and click the Add/Remove button (Windows NT) or the Change or Remove Programs button (Windows 2000). The InstallShield Wizard dialog box opens.
5. When asked to confirm the removal, click Yes.
6. If prompted to remove shared files, click Yes; if informed that the service has already been removed, click Yes or OK.
7. If the InstallShield Wizard dialog box opens again, click Finish.
8. If you want to remove the CLI, repeat steps 4 through 7, but in step 4 select Navisphere CLI.
9. When the Remove Programs From Your Computer dialog box opens, click OK. You have removed the application.
The drive:\Program Files\EMC folder, the drive:\Program Files\EMC\Navisphere Agent folder, and the drive:\Program Files\EMC\Navisphere Agent\agent.config file, are not removed. If you want to remove them, you must do so manually using Explorer.
Making LUNs or Disks Available to Windows NT
Before Windows NT can access the LUNs in a storage system with SPs or the disks in a storage system without SPs (JBOD configuration), you need to partition the LUNs or disks. This chapter describes how to plan and create your LUN or disk partitions, including how to fill out the relevant configuration worksheet. To help with the planning, you may want to refer to the appropriate storage-system configuration planning guide for your storage-system type. Major topics are Making LUNs Available to Windows NT ......................................3-2 Making JBOD Disks Available to Windows NT ............................3-9
Configuration Worksheet - Storage System with SPs
The Configuration Worksheet is useful when you create partitions.
Configuration Worksheet
First Fibre Channel adapter - Host slot number: ________
Second Fibre Channel adapter - Host slot number: ________
Worksheet columns: Adapter Name, Bus Number, RAID Group, Target ID, LUN Number, LUN Type, Drive Letter, Volume Label, File System Type, Capacity
Adapter Name - The description of the adapter. For any target LUN connected to the adapter, you can find the adapter name in the SCSI Adapters window under the Settings tab.
Bus Number - A calculated value based on the driver's mapping. For any target LUN connected to the adapter, you can find the bus number in the SCSI Adapters window under the Settings tab.
RAID Group - See the Manager documentation for information on RAID Groups.
Target ID - A calculated value based on the driver's mapping. For any target LUN connected to the adapter, you can find the target ID in the SCSI Adapters window under the Settings tab.
LUN Number - The logical unit (LUN) number. For any target LUN connected to the adapter, you can find the LUN number in the SCSI Adapters window under the Settings tab.
LUN Type - The type of LUN that the volume will be on. For any target LUN connected to the adapter, you can find the LUN type in the SCSI Adapters window under the General tab. The types are:
RAID 0 for a non-redundant array
RAID 1 for a mirrored pair
RAID 1/0 for a mirrored RAID 0
RAID 3 for a parallel access array
RAID 5 for an independent access array
DISK for an individual disk unit
Drive Letter - The letter assigned to the partition that you create. You use the Windows NT Disk Administrator to create partitions on a LUN. You fill this in while creating partitions.
Volume Label - The name that you specify when you create the volume group. You fill this in while creating partitions.
File System Type - FAT or NTFS.
Capacity - The user-accessible capacity of the logical volume. A volume containing a FAT file system must not exceed 2 Gbytes. You fill this in while creating partitions.
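As a planning aid, the file-system capacity limits noted on this worksheet can be checked programmatically. The sketch below is illustrative only; the table and function names are assumptions, not part of Navisphere:

```python
# Illustrative sketch (not an EMC tool): checking a planned partition
# size against the file-system limits stated in this chapter.
FS_LIMITS_BYTES = {          # hypothetical helper table
    "FAT": 2 * 1024**3,      # FAT volumes must not exceed 2 Gbytes
    "FAT32": 4 * 1024**3,    # FAT32 volumes must not exceed 4 Gbytes
    "NTFS": None,            # no limit relevant to this planning step
}

def partition_size_ok(fs_type, size_bytes):
    """Return True if a partition of size_bytes is allowed on fs_type."""
    limit = FS_LIMITS_BYTES[fs_type]
    return limit is None or size_bytes <= limit

# A 3-Gbyte partition fits on NTFS but not on FAT.
print(partition_size_ok("NTFS", 3 * 1024**3))  # True
print(partition_size_ok("FAT", 3 * 1024**3))   # False
```

Filling in the Capacity column with sizes that pass this check avoids a failed format later.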
What Next?
Creating Windows NT Partitions on LUNs
A LUN functions like a disk for Windows NT.
You can create a maximum of four partitions on one disk (LUN). Only one of the four partitions can be an extended partition; the others must be primary partitions. You can create an extended partition on a disk without creating any primary partitions. As you create partitions, finish filling out the Configuration Worksheet as outlined in this procedure.
1. From the Windows NT taskbar, select Start > Programs > Administrative Tools (Common) > Disk Administrator.
2. If you get a dialog box that calls this a first-time installation, click OK.
3. Click Yes to the confirmation message asking if you want to create a signature on your disk drive, and repeat this step for each new LUN you created. The Disk Administrator dialog box appears. You use it to create partitions on the disk.
Before you decide which file system you want to use, see the Windows NT on-line help topic, file systems supported by Windows. A partition containing a FAT file system cannot have a capacity greater than 2 Gbytes. A partition containing a FAT32 file system cannot have a capacity greater than 4 Gbytes.
4. To create a primary partition, follow these steps: a. Select the area of free space on the disk for the partition by clicking on the space. b. On the Partition menu, click Create. c. Enter the size of the partition. d. Click OK. e. Click Yes.
To create additional primary partitions, repeat steps a through e. You can create a maximum of either four primary partitions, or three primary partitions and one extended partition on a single disk. 5. To create an extended partition, follow these steps: a. Select the area of free space on the disk for the partition by clicking on the space. b. On the Partition menu, click Create Extended. c. Enter the size of the extended partition. d. Click OK. e. Create a logical drive in the extended partition as follows: Select an area of free space for the logical drive by clicking on the space. On the Partition menu, click Create. Enter the size of the logical drive. To create additional logical drives on the extended partition, repeat step e. 6. When you have created the desired partitions, save the changes by clicking Commit Changes Now on the Partition menu. 7. Click No to the confirmation message asking if you want to execute Rdisk.exe. 8. Enter the following information onto the Configuration Worksheet: Drive Letter Capacity Volume Label
9. Format each partition as follows:
a. Select the partition.
b. On the Tools menu, click Format.
c. Select the file system type and enter a label.
d. Select the appropriate format option.
e. Click OK.
f. Confirm the format operation by clicking Yes.
What Next?
What you do next depends on whether you will use the server in a Windows NT Cluster.
Using the server in a cluster - Go to Chapter 5, Configuring for Windows Clusters.
Not using the server in a cluster - Applications can start using the LUNs. ATF provides the applications with access to the LUNs via the alternate path if either the primary or secondary path fails. However, the original high availability provided by ATF is not restored until the failed path is fixed. Also, if the server is booted while only one path is working, you must reboot to restore high availability. For information on using ATF, see Chapter 6, Using ATF.
Windows NT represents a Fibre Channel arbitrated loop (FC-AL) on a Fibre Channel adapter as one Scsi port with six Scsi buses, Scsi Bus 0 through Scsi Bus 5, each with the same initiator, which is the adapter. Scsi Bus 0 is reserved for the adapter. Scsi Bus 1 through Scsi Bus 5 are for the nodes on the FC-AL loop. Each of these six Scsi buses can have 32 nodes, which are targets with Ids of 0 through 30. Target Id 31 is reserved for the adapter. Windows NT assigns disk numbers by scanning all Scsi buses for disk devices and naming the devices in the order in which it finds them. It names the first disk device it finds disk 0, the second disk 1, and so on. Figure 3-1 illustrates how Windows NT assigns disk numbers. This figure shows a sample Scsi tree displayed by the Windows NT regedt32 program, which you will use to determine the disk number for each disk. In the configuration shown by the tree, the server has one Scsi adapter and one Fibre Channel adapter, which Windows NT names Scsi Port 0 and Scsi Port 1, respectively. The Scsi adapter has two disks connected to its Scsi bus. The Fibre Channel adapter has two DAEs connected to its Fibre Channel loop. One DAE has ten disks and the other has five.
When Windows NT scans a Scsi bus, it finds the Target Ids in numeric order, that is, 0 - 30. However, regedt32 displays these Target Ids in alphanumeric order. Since regedt32 uses only one digit for Target Ids 0 - 9, it displays the Target Ids for each Scsi bus in the following order: 0, 1, 10 - 19, 2, 20 - 29, 3, 30, 4, 5, 6, 7, 8, 9.
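The alphanumeric ordering described above is simply string sorting, which a short sketch (illustrative only, not part of the product) can reproduce:

```python
# Sketch of the display behavior described above: the registry editor
# lists Target Ids 0-30 alphanumerically (as strings), not numerically.
numeric_order = list(range(31))                        # how NT scans the bus
display_order = sorted(str(i) for i in numeric_order)  # how they are displayed
print(", ".join(display_order))
# begins 0, 1, 10, 11 and ends 4, 5, 6, 7, 8, 9
```

Keep this ordering in mind when matching a displayed Target Id to a disk number.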
Figure 3-1: A sample Scsi tree as displayed by the Windows NT registry editor. Scsi Port 0 (the Scsi adapter, Initiator Id7) shows Target Id0 and Target Id1, Logical Unit Id0, assigned disk 0 and disk 1. Scsi Port 1 (the Fibre Channel adapter, Initiator Id31) shows, on Scsi Bus 1, Target Id0 through Target Id9 assigned disk 2 through disk 11 and Target Id10 through Target Id14 assigned disk 12 through disk 16; the remaining Scsi buses also list Initiator Id31.
In the example, Windows NT scans Scsi Port 0, Scsi Bus 0, and finds one internal Scsi adapter, Initiator Id7, with two internal Scsi devices, Target Id0, Logical Unit Id0 and Target Id1, Logical Unit Id0. It assigns the names disk 0 and disk 1 to these devices. Windows NT continues scanning and finds Scsi Port 1, Scsi Bus 0 with one Fibre Channel adapter, Initiator Id31. It scans Scsi Port 1, Scsi Bus 1 and finds ten disk devices in the first DAE (enclosure address 0). These disk devices are Target Id0, Logical Unit Id0, through Target Id9, Logical Unit Id0, and they have FC-AL addresses 0 through 9. Windows NT names them disk 2 through disk 11. Next, it finds five disk devices in the second DAE (enclosure address 1). These disk devices are Target Id10, Logical Unit Id0, through Target Id14, Logical Unit Id0, and they have FC-AL addresses 10 through 19. Windows NT names them disk 12 through disk 16.
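The naming rule walked through above can be sketched as follows. This is an illustration of the scan-order rule, not an EMC tool; the device tuples are assumptions that mirror the example configuration:

```python
# Hypothetical sketch of the rule described above: Windows NT assigns
# disk numbers in the order it discovers disk devices while scanning
# each Scsi port and bus. The tuples mirror the Figure 3-1 example.
devices_in_scan_order = (
    [("Port 0", "Bus 0", tid) for tid in (0, 1)]           # two internal Scsi disks
    + [("Port 1", "Bus 1", tid) for tid in range(10)]      # first DAE, ten disks
    + [("Port 1", "Bus 1", tid) for tid in range(10, 15)]  # second DAE, five disks
)

# Number the disks in discovery order: disk 0, disk 1, and so on.
disk_numbers = {dev: n for n, dev in enumerate(devices_in_scan_order)}
print(disk_numbers[("Port 0", "Bus 0", 1)])   # 1  (second internal disk)
print(disk_numbers[("Port 1", "Bus 1", 0)])   # 2  (first DAE disk)
print(disk_numbers[("Port 1", "Bus 1", 14)])  # 16 (last disk in second DAE)
```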
4. To create a primary partition, follow these steps: a. Select the area of free space on the disk for the partition by clicking on the space. b. On the Partition menu, click Create. c. Enter the size of the partition. d. Click OK. e. Click Yes. To create additional primary partitions, repeat steps a through e. You can create a maximum of either four primary partitions or three primary partitions and one extended partition on a single disk. 5. To create an extended partition, follow these steps: a. Select the area of free space on the disk for the partition by clicking on the space. b. On the Partition menu, click Create Extended. c. Enter the size of the extended partition. d. Click OK. e. Create a logical drive in the extended partition as follows: Select an area of free space for the logical drive by clicking on the space. On the Partition menu, click Create. Enter the size of the logical drive. To create additional logical drives on the extended partition, repeat step e. 6. When you have created the desired partitions, save the changes by clicking Commit Changes Now on the Partition menu. 7. Click No to the confirmation message asking if you want to execute Rdisk.exe. 8. Enter the following information onto the Configuration Worksheet: Drive Letter Capacity Volume Label
9. Format each partition as follows:
a. Select the partition.
b. On the Tools menu, click Format.
c. Select the file system type and enter a label.
d. Select the appropriate format option.
e. Click OK.
f. Confirm the format operation by clicking Yes.
What Next?
If you are using the server in a Windows NT cluster, go to Chapter 5, Configuring for Windows Clusters.
If you are not using the server in a Windows NT cluster, applications can start using the disks. ATF provides the applications with access to the disks via the alternate path if either the primary or secondary path fails. However, the original high availability provided by ATF is not restored until the failed path is fixed. Also, if the server is booted while only one path is working, you must reboot to restore high availability. For information on using ATF, see Chapter 6, Using ATF.
Making LUNs Available to Windows 2000
Before you can make the LUNs in a storage system available to Windows 2000, you need to partition the LUNs (disks). This chapter describes how to plan and create your LUN or disk partitions, including how to fill out the relevant configuration worksheet. To help with the planning, you may want to refer to the appropriate storage-system configuration planning guide for your storage-system type. Major topics are Determining the Disk Number for Each LUN...............................4-2 Creating Windows 2000 Partitions ..................................................4-5 Creating Volumes on a LUN ............................................................4-9
Configuration Worksheet The Configuration Worksheet is useful when you create partitions.
Configuration Worksheet
First Fibre Channel adapter - Host slot number: ________
Second Fibre Channel adapter - Host slot number: ________
Worksheet columns: Adapter Name, Target ID, RAID Group, LUN Number, LUN Type, Drive Letter, Volume Label, File System Type, Capacity
Adapter Name - The description of the adapter. For any target LUN connected to the adapter, you can find the adapter name in the Properties dialog box.
Target ID - A calculated value based on the driver's mapping. For any target LUN connected to the adapter, you can find the target ID in the Properties dialog box.
RAID Group - See the Navisphere documentation for information on RAID Groups.
LUN Number - The logical unit (LUN) number. For any target LUN connected to the adapter, you can find the LUN number in the Properties dialog box.
LUN Type - The type of LUN that the volume will be on. For any target LUN connected to the adapter, you can find the LUN type in the Properties dialog box (as part of the device name). The types are:
RAID 0 for a non-redundant array
RAID 1 for a mirrored pair
RAID 1/0 for a mirrored RAID 0
RAID 3 for a parallel access array
RAID 5 for an independent access array
DISK for an individual disk unit
SPARE for a global hot spare disk
Drive Letter - The letter assigned to the partition that you create. You use the Windows 2000 Disk Management folder to create partitions on a LUN. You fill this in while creating partitions.
Volume Label - The name that you specify when you create the volume group. You fill this in while creating partitions.
File System Type - FAT, FAT32, or NTFS.
Capacity - The user-accessible capacity of the logical volume. A volume containing a FAT file system must not exceed 2 Gbytes, and a volume containing a FAT32 file system must not exceed 4 Gbytes. You fill this in while creating partitions.
What Next?
3. If the signature is not written on the LUN, follow these steps to write the signature:
If the Write Signature and Upgrade Disk Wizard opens and you do not want to write a signature, close the dialog box. You cannot create partitions on a disk without a signature.
a. Click Next. b. In the Select Disk to Write Signature dialog box, select the disk for which you want to write a signature. c. Click Next. d. In the Select Disks to Upgrade dialog box, deselect the disks, and click Next. The Write Signature and Upgrade Disk dialog box displays the disks you selected. e. Click Finish. A display of the volumes appears in the right side of the dialog box. When the signature is written on a disk, the small red icon next to the disk name disappears. If you do not write a signature on a disk, this icon remains and its type displays as unknown. 4. If you close the Write Signature wizard and need to use it again, follow these steps: a. Right-click the disk without a signature, and click Write Signature. b. In the Write Signature dialog box, select the disks for which you want to write a signature. c. Click OK.
Before you decide which file system you want to use, see the Windows 2000 on-line help topic, file systems supported by Windows. A partition containing a FAT file system cannot have a capacity greater than 2 Gbytes. A partition containing a FAT32 file system cannot have a capacity greater than 4 Gbytes.
5. To create a primary partition, follow these steps: a. Right-click the area of free space on the disk detail panel for the partition, and click Create Partition.
b. In the Create Partition Wizard dialog box, click Next.
c. In the Select Partition Type dialog box, select primary partition and click Next.
d. In the Specify Partition Size dialog box, select the amount of disk space to use and click Next.
e. In the Assign Drive Letter or Path dialog box, click Assign a drive letter, specify an unused drive letter, and click Next.
f. If you need to format the volume, then in the Format Partition dialog box, select the Perform a Quick Format check box.
g. Click Next.
h. Click Finish.
To create additional primary partitions, repeat steps a through h above. You can create a maximum of either four primary partitions or three primary partitions and one extended partition on a single disk.
6. To create an extended partition, follow these steps:
a. Right-click the area of free space on the disk for the partition, and click Create Partition.
b. In the Create Partition Wizard dialog box, click Next.
c. In the Select Partition Type dialog box, select extended partition, and click Next.
d. In the Specify Partition Size dialog box, specify the size of the extended partition, and click Next. The status information for the partition appears in the dialog box.
e. Click Finish.
7. To create a logical drive in the extended partition, follow these steps:
a. Right-click the area of free space for the logical drive, and click Create Logical Drive.
b. In the Create Partition dialog box, click Next.
c. Enter the size of the logical drive, and click Next.
d. In the Assign Drive Letter or Path dialog box, select Assign a drive letter, specify an unused drive letter, and click Next.
Creating Windows 2000 Partitions
e. In the Format Partition dialog box, format the partition if you want to.
f. If you format the partition, choose the file system type you want to use, the allocation size, and the volume label.
g. Click Finish.
To create additional logical drives on the extended partition, repeat steps a through g. You can create as many logical drives as you want until you run out of space. A dark green border surrounding the logical drives you created represents the extended partition. If you want to create volumes on dynamic disks, continue to the next procedure.
2. Click Create Volume.
3. In the Create Volume Wizard dialog box, click Next.
4. In the Select Volume Type dialog box, click the type of volume you want to create, depending on the number of disks you have, and click Next. The Select Disks dialog box displays the list of dynamic disks.
5. In the Select Disks dialog box, select the dynamic disk, specify the size for this volume, and then click Next.
6. Assign a drive letter or path to access the volume, and click Next.
7. Accept the default, Format this volume, and specify the formatting information: file system, allocation unit size, and volume label.
8. Click Next. The Create Volume Wizard displays the new settings you selected.
9. Click Finish.
What Next?
What you do next depends on whether you will use the server in a Windows 2000 cluster.
Using the server in a cluster - Go to Chapter 5, Configuring for Windows Clusters.
Not using the server in a cluster - Applications can start using the LUNs. ATF provides the applications with access to the LUNs via the alternate path if either the primary or secondary path fails. However, the original high availability provided by ATF is not restored until the failed path is fixed. Also, if the server is booted while only one path is working, you must reboot to restore high availability. For information on using ATF, see Chapter 6, Using ATF.
Configuring for Windows Clusters
This chapter describes Windows NT and Windows 2000 Clusters and how to configure Navisphere applications to work with clusters.
A server connected directly to an FC4700 storage system cannot be part of a Windows NT or Windows 2000 Cluster.
Major topics are About Windows Clusters..................................................................5-2 Using Navisphere Applications with Clusters ..............................5-3 Configuring the Host Agent for Clusters .......................................5-4 Adding a Generic Service Resource to the Cluster Group...........5-5
3. Enter the name of the resource that controls the Host Agent. For example, enter Navisphere Agent.
4. Under Resource Type, click Generic Service; under Group, click Cluster Group, and click Next. The two servers with the Agent installed should be listed in the Possible Owners list. If not, highlight one or both servers, click the arrow button to move them into the Possible Owners list, and click Next. The Resource Dependencies list should be empty. If not, highlight all the resources in the list, click the arrow button to move them out of the Resource Dependencies list, and click Next.
5. In Service Name, enter Navisphere Agent, and click Next.
6. Click Finish. A message appears stating that the service has been successfully created.
7. Return to the main Cluster Administrator dialog box, and follow the path Clustername > Group > Cluster Group.
8. Right-click the Navisphere Agent resource, and then click Properties.
9. Click the Advanced tab, select the Don't restart check box, and click OK.
10. Bring the Agent resource online. To do so, right-click the Navisphere Agent resource and click Bring Online. The Agent service should start on the server that currently owns the Cluster Group.
What Next?
ATF provides the applications with access to the LUNs via the alternate path if the primary path fails. However, the original high availability provided by ATF is not restored until the failed path is fixed. Also, if the server is booted while only one path is working, you must reboot to restore high availability. For information on using ATF, see Chapter 6, Using ATF.
Using ATF
This chapter explains how to use ATF on a Windows NT or Windows 2000 storage-system server. ATF Operation ....................................................................................6-2 Testing ATF .........................................................................................6-2 ATF Storage-System Device Names ................................................6-3 Adding Devices to a Server After ATF Installation ......................6-3 Failover Messages ..............................................................................6-3 Restoring the Original Path ..............................................................6-4 Verifying Restored LUNs Using Windows Event Viewer............6-5 ATF Trespass Utility...........................................................................6-6
ATF Operation
While the host(s) are running normally, the ATF software takes no action and requires no management. With certain hardware configurations, ATF uses multipath I/O, automatically directing the operating system to route I/O through all available paths to SPs and LUNs.
Testing ATF
At least one LUN must be bound for the test to work.
1. Start I/O to one or more LUNs assigned to an SP (SP A or SP B). 2. Start failover in one of the following ways: For a Fibre Channel storage system - Disconnect the cable to the I/O port of one SP from either the SP or the HBA. For any type of storage system - Pull an SP with active I/O about 1 inch (2.5 cm) from its enclosure. To do this, you need to open the storage-system chassis, pull the SP, and close the chassis within a 2-minute period to avoid thermal shutdown. Removing an SP is explained in the hardware reference manual that shipped with the storage system.
After you disconnect the cable(s) or pull out the SP, I/O may slow for a short time.
After you pull out the SP, Manager should show each affected LUN moving from its primary path (the failed SP) to the secondary path. I/O may slow for a short time but should resume to almost normal after failover is complete. 3. When you are satisfied that failover worked correctly, either push the SP fully into its slot or reconnect the cable you disconnected. 4. After the SP becomes ready (indicated by the SP ready light, described in the storage-system manual), issue the atf_restore command as described on page 6-4. Again, Manager should show each LUN moving back to its primary path.
atf_spna or atf_spnb
Failover Messages
If a device fails and ATF automatically fails over to the other path, you can tell by any of these events: Applications continue running after an adapter, cable, hub, switch, or SP failure. You see messages about failover similar to the following in Event Viewer (displayed line by line as you press the down arrow key) and in the atf_console window, if atf_console is running.
13:34:57 Warning [5/0/0/0] SP marked as failed 13:34:57 Info [5/0/0/4] FAILOVER STARTED 13:34:57 Info [6/0/1/4] FAILOVER SUCCESSFUL To restore default paths for this array, execute: atf_restore atf_sp0
Write down the last lines, which will help you restore the original path later.
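Because the console lines have a fixed layout (time, severity, bracketed path, message), they are easy to pick apart when you review many of them. The helper below is a hypothetical sketch, not part of ATF:

```python
import re

# Sketch (hypothetical helper): pulling the fields out of an ATF console
# line such as "13:34:57 Info [5/0/0/4] FAILOVER STARTED".
LINE_RE = re.compile(r"^(\d\d:\d\d:\d\d)\s+(\w+)\s+\[([\d/]+)\]\s+(.*)$")

def parse_atf_line(line):
    """Return the fields of one console line, or None if it does not match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    time, level, path, message = m.groups()
    return {"time": time, "level": level, "path": path, "message": message}

print(parse_atf_line("13:34:57 Info [5/0/0/4] FAILOVER STARTED"))
```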
For more detail, including a bitmap of the LUNs restored, see the Windows Event Viewer messages for clatf. You can learn which LUNs were affected as shown in the next section.
Text explanation of the clatf message in Event Viewer, obtained by double-clicking the clatf event:
Description: 12:57:46 Trespassed LUNs for SP atf_sp0a [4/0/0/0] (mode 0)
Location of data in the event record (header info is in locations 0000-001F hex; the LUN bitmap is in locations 0020 and above):
Data Byte   Words
0000        00000000 00000000 00000000 00000000
0010        00000000 00000000 00000000 00000000
0020        c0000000 00000000 00000000 00000000
0030        00000000 00000000 00000000 00000000
First word of LUN bitmap Each digit indicates a set of LUNs affected by the operation, using hex values by byte: 0000 0000 8421 8421 Therefore, a value of c000 means that LUNs 0 and 1 were affected, since 8 indicates LUN 0 and 4 indicates LUN 1; 8 plus 4 equals c (hex).
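The nibble arithmetic above can be checked with a short sketch. The decoding function is an assumption for illustration, not an EMC utility; it applies the rule that the highest bit of the word stands for the lowest-numbered LUN:

```python
# Sketch (assumed helper): decoding one 32-bit word of the LUN bitmap.
# Bit 31 represents LUN 0, bit 30 represents LUN 1, and so on, which is
# why the value c0000000 marks LUNs 0 and 1 as affected.
def decode_lun_bitmap_word(word, first_lun=0):
    """Return the LUN numbers whose bits are set in one 32-bit word."""
    return [first_lun + i for i in range(32) if word & (1 << (31 - i))]

print(decode_lun_bitmap_word(0xC0000000))  # [0, 1]
```

Later words of the bitmap would be decoded the same way with first_lun raised by 32 per word.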
CAUTION: A trespass operation affects I/O to the disk units involved and may affect the operating system. Use this utility only when the system is idle or you know that you must transfer control of one or more disks.
You must run the utility from a command prompt window. Its directory is C:\Program Files\EMC\ATF. Use the form
atf_trespass atf_spn sp:mode[:lun]
where n is the number of the storage system (not SP) whose LUNs you want to trespass. The first storage system ATF found is number 0, the second 1, and so on. sp:mode[:lun] identifies the trespass with two or three fields separated by colons:
sp indicates the SP in the storage system: 0 means SP A; 1 means SP B.
mode 0 - Trespasses LUNs to the default owner SP. Affects all LUNs not currently owned by sp, but that should be owned by sp. You cannot include the lun argument with this form of the command, which restores all LUNs that this SP should own. To restore the LUNs of both SPs, use the atf_restore command.
mode 1 - Trespasses the LUN specified by lun (range 0 through 32 for SCSI or 0 through 222 for Fibre Channel) to SP sp. Without the lun argument, this mode trespasses all LUNs to SP sp.
For example, the commands
atf_trespass atf_sp0 0:0
atf_trespass atf_sp0 1:0
restore all LUNs in storage-system 0 to their default owner SP. These commands are equivalent to atf_restore atf_sp0.
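The sp:mode[:lun] argument format can be sanity-checked before running the utility. The parser below is a hypothetical illustration, not shipped with ATF:

```python
# Illustrative parser (an assumption, not an EMC tool) for the
# sp:mode[:lun] argument format described above.
def parse_trespass_arg(arg):
    """Split an sp:mode[:lun] string into (sp, mode, lun) fields."""
    fields = arg.split(":")
    if len(fields) not in (2, 3):
        raise ValueError("expected sp:mode or sp:mode:lun")
    sp = "SP A" if fields[0] == "0" else "SP B"   # 0 means SP A; 1 means SP B
    mode = int(fields[1])
    lun = int(fields[2]) if len(fields) == 3 else None
    return sp, mode, lun

print(parse_trespass_arg("0:0"))    # ('SP A', 0, None)
print(parse_trespass_arg("1:1:5"))  # ('SP B', 1, 5)
```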
A
Troubleshooting
This appendix describes how to troubleshoot some problems that you might encounter after configuring the storage system. Major topics are Troubleshooting Windows NT Problems ......................................A-2 Troubleshooting Windows 2000 Problems ....................................A-3
Server Passed Self-Test but Operating System Does Not Boot from Boot Disk
Look for conflicts with the adapter.
Fibre Channel Problems
Make sure that the FC devices were powered up before you powered up the PC.
Boot Drive
If your system has an integrated drive electronics (IDE) fixed disk device, it is assigned device number 80 and is the boot device. If your system does not have an IDE disk device, the first bootable disk device configured (the one with the lowest ID) is assigned device number 80 and is the boot device. If you do not have an IDE drive, set the motherboard BIOS parameters to None or Not Installed.
The storage system does not support booting.
Glossary
This glossary contains terms related to disk-array storage systems. Many of these terms are used in this manual.
A
Agent - See Host Agent or SP Agent.
admsnap - An interface to SnapView software that lets you start, activate, and stop sessions using commands typed on servers.
Access Logix - A software package that lets you specify the servers that can access LUNs in the storage system. Access Logix lets you define a Storage Group of one or more LUNs that any server can access.
ATF - Application-Transparent Failover software that provides applications with access to storage-system LUNs using an alternate path if a failure occurs in the primary path. See also primary path.
B
Base Software - Code that runs in the storage-system SPs and controls storage-system operation. You can update Base Software using Navisphere Manager. For non-FC4700 storage systems, Base Software is called Core Software.
bind - In the context of disk-array storage systems, the procedure by which you form one or more disks into one LUN (logical unit). You can bind disks as one of the following RAID types: RAID 5 (independent access array), RAID 3 (parallel access array), RAID 1 (mirrored pair), RAID 0 (nonredundant array), RAID 1/0 (mirrored RAID 0), disk (individual disk), or hot spare. Before the LUN can store data, you must make it available to the operating system. Unbinding reverses the bind process, changing a LUN into its original disks or disk parts. Unbinding destroys all user information on the LUN. You bind (and unbind) disks using the Navisphere Manager or CLI as described in the Navisphere Manager or CLI manual.
C
cache
See storage-system caching.

CDE
Software extensions to the HBA drivers that provide limited failover features for storage systems that do not use optional ATF software.

CLI
Command line interface, which is software that lets you bind disks into LUNs, create RAID Groups, unbind LUNs, set storage-system properties, ascertain storage-system status, and perform other configuration tasks. For more information on Navisphere CLI, refer to the Command Line Interface (CLI) reference manual.

Core Software
Code that runs in the SPs and controls the operation of the storage system. You can update Core Software using Navisphere Manager. For FC4700 systems, Core Software is called Base Software.
D
disk
A self-contained disk drive that slides into one slot in the front of the storage system. It consists of the carrier assembly, which holds the disk drive and the regulator board. Also called disk module.
E
Event Monitor
A software package that lets you define what happens when certain storage-system events (such as hardware failures) occur. Event Monitor can notify you via page or email, send an SNMP trap, or initiate a custom response when specified events occur. You can specify the events either with Event Monitor or by editing an Event Monitor configuration file that ships with the Host Agent.
F
fabric
See switch fabric.

FC-AL (Fibre Channel arbitrated loop)
An arrangement of Fibre Channel stations such that messages pass from one to the next in a ring.

FC-AL address ID
A number that identifies a device as a node on a Fibre Channel loop. You select the FC-AL address ID for an SP using switches on the back of the storage system. The default FC-AL address ID for SP A is 0; the default FC-AL address ID for SP B is 1.

Fibre Channel
A high-speed, serial, bidirectional, topology-independent, multi-protocol, highly scalable interconnection between computers, networks, and peripherals.

Fibre Channel adapter
The name for the printed-circuit board within the computer chassis that allows the host to access the storage system through the Fibre Channel.
H
host
See server.

Host Agent
Software that runs on the server that is connected to the storage system and that communicates with the Navisphere client applications and with the Base and Core Software on the storage system.

hot repair
See replace under power.

hot spare
A disk module bound as a global hot spare that can replace any failed disk module in a RAID 1, RAID 1/0, RAID 3, or RAID 5 LUN.
I
individual (disk) unit
A disk module bound as an individual unit, independent of any other disk modules in the cabinet. An individual unit has no inherent high-availability feature, but since the operating system supports software mirroring, you can make it highly available by software mirroring it.
L
logical unit (LUN)
A logical unit is one or more disks or parts of disks bound into a single entity, accessible by logical unit number (LUN). Logical unit is a SCSI term; this manual generally uses the term LUN. The RAID types are: RAID 5 (independent access array), RAID 3 (parallel access array), RAID 1 (mirrored pair), RAID 0 (nonredundant array), RAID 1/0 (mirrored RAID 0), disk (individual disk), or hot spare. The operating system sees the LUN, which might include more than one disk, as one contiguous span of disk space.

LUN
See logical unit (LUN).
M
Manager
A program with a graphical user interface that lets you bind disks into LUNs on a non-RAID Group storage system; create RAID Groups and bind LUNs on them on a RAID Group storage system; unbind LUNs; destroy RAID Groups; set storage-system properties; and ascertain storage-system status. Navisphere Manager lets you do this for multiple storage systems on multiple servers running Navisphere Agent. For more information on Navisphere Manager, refer to the Manager manual.

memory modules
See SP memory modules.

mirroring
Maintenance of a second copy of a LUN that provides continuous access if an image becomes inaccessible. The system and user applications continue running on the good image without interruption. There are two kinds of mirroring: hardware mirroring, in which the storage system maintains synchronization of the disk images, and software mirroring, in which the operating system maintains synchronization. With an FC4700 storage system, you can acquire the MirrorView software, which lets you maintain a mirror image of a LUN on a remote storage system miles away. An important benefit of MirrorView is disaster recovery. Mirroring and MirrorView are further explained in the FC4700 Fibre Channel configuration planning guide.

MirrorView
See mirroring.
multipath I/O
The process by which ATF, in a server that has two connections to each SP, distributes I/O for LUNs across all ports in round-robin fashion. Multipath I/O improves performance by balancing the load among the ports.
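The round-robin distribution described above can be sketched in a few lines. This is an illustrative model only, not ATF code; the class name and the (SP, port) path labels are hypothetical.

```python
from itertools import cycle

class RoundRobinMultipath:
    """Toy model (not ATF itself) of round-robin multipath I/O:
    each new request goes out the next port in a fixed rotation
    over the available SP ports."""

    def __init__(self, paths):
        self._paths = list(paths)          # hypothetical (SP, port) pairs
        self._rotation = cycle(self._paths)

    def route(self, request):
        # Each request takes the next path in the rotation,
        # spreading the load evenly across all ports.
        return next(self._rotation)

# A server with two connections to each SP has four paths.
paths = [("SP A", 0), ("SP A", 1), ("SP B", 0), ("SP B", 1)]
mp = RoundRobinMultipath(paths)
routes = [mp.route(f"io-{i}") for i in range(8)]
# With four paths, every fourth request reuses the same port.
```

The same idea applies however many ports exist; the rotation simply wraps around the path list.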
N
non-RAID Group storage system
A storage system with SPs running a revision of Core Software that does not support RAID Groups.
P
primary path
The major (first) path to a storage-system LUN. A server connected to two SPs in a storage system, or to two ports of each SP, has multiple paths to the storage-system LUNs. The primary path is established when each LUN is bound; all other paths are secondary. If a failure (such as an HBA, SP, or cable failure) occurs in a path, the optional ATF software can automatically transfer control to another path. On a server connected to two ports of each SP, ATF uses multipath I/O, which distributes I/O across all the ports in round-robin fashion to improve performance and balance the load among them. From the operating system view, each LUN connected on multiple paths has a different device name, based on the route through the HBA and SP, for each path.
R
RAID (redundant array of independent disks)
A technology with its own set of definitions. See RAID 0, RAID 1, RAID 1/0, RAID 3, and RAID 5.

RAID 0
Three or more disk modules bound as striped disks (the storage system reads and writes file information with more than one disk at a time). RAID 0 offers enhanced performance via simultaneous I/O to different modules, but does not intrinsically offer high availability. For high availability, you can software mirror the striped disks.

RAID 1
Two bound disk modules that the storage-system hardware will mirror.

RAID 1/0
Four, six, eight, ten, twelve, fourteen, or sixteen disk modules bound as a mirrored RAID 0 group. These disk modules make up two mirror images of two, three, four, five, six, seven, or eight modules each. A RAID 1/0 group combines the speed advantage of RAID 0 with the redundancy advantage of mirroring.

RAID 3
Five modules that use disk striping (as with RAID 5). The hardware maintains parity information that lets the group continue running and be rebuilt after a disk-module failure. With RAID 3, I/O occurs in smaller blocks than with RAID 5, and parity information is stored on one module, not distributed among all of them. RAID 3 works well for single-task applications that use I/Os of one or more 2-Kbyte blocks, aligned to start at disk addresses that are multiples of 2 Kbytes from the beginning of the logical disk.

RAID 5
Three to sixteen disk modules that use disk striping, in which the hardware writes to or reads from multiple modules simultaneously, with high availability provided by parity information on each module. The ideal number of disk modules in a RAID 5 group is five.

RAID Group
A group of disks on which you bind one or more LUNs of a specific RAID type. The RAID type is that of the first LUN bound on the group. Storage systems must run a specific revision of Core Software to support RAID Groups.

RAID Group storage system
A storage system with SPs running a version of Core Software that supports RAID Groups.

replace under power
The storage system lets you replace certain components while power remains on, allowing you to replace, for example, a disk without powering down the storage system. Applications continue while you replace the failed device.

route
See primary path.
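As a quick illustration of the capacity arithmetic implied by these RAID definitions, the sketch below counts the data-bearing disks in a group. It is a simplification for this glossary only; actual binding is done with Navisphere Manager or the CLI, and the 36-GB disk size is just an example figure, not one from this manual.

```python
def data_disks(raid_type, n_disks):
    """Data-bearing disks per RAID type, per the glossary definitions:
    RAID 0 stripes every disk; RAID 1 mirrors a pair; RAID 1/0 mirrors
    half the group onto the other half; RAID 3 and RAID 5 give up one
    disk's worth of capacity to parity."""
    if raid_type == "RAID 0":
        return n_disks
    if raid_type == "RAID 1":
        return 1                      # two disks, one mirrored copy
    if raid_type == "RAID 1/0":
        return n_disks // 2           # even count, two mirror images
    if raid_type in ("RAID 3", "RAID 5"):
        return n_disks - 1            # one disk's worth of parity
    raise ValueError(f"unknown RAID type: {raid_type}")

# Example: a five-disk RAID 5 group of 36-GB disks (illustrative size)
# yields 4 * 36 = 144 GB of usable space.
print(data_disks("RAID 5", 5) * 36)   # 144
```

The trade-off the definitions describe falls out directly: RAID 0 uses every disk for data but survives no failure, while RAID 1 and RAID 1/0 spend half the disks on redundancy.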
S
secondary path
A path other than the primary path to a LUN; see primary path.

server
In the context of disk-array storage systems, a processor that runs an operating system and uses a disk-array storage system for data storage and retrieval.
SnapView
A software package available with FC4700 storage systems that lets you capture the state of a production LUN at one moment in time and use that state for operations such as backup or live data analysis. I/O can continue with the production LUN while you use the snapshot for backup or analysis.

SP (storage processor)
A printed-circuit board with memory modules that control the disks in the storage-system chassis. The SP runs Core Software that controls the operation of the storage system. For higher availability, a site can use a second SP in a storage-system chassis.

SP Agent
Software that runs on the SPs in an FC4700 storage system and communicates with the Navisphere client applications and with the Base Software on the FC4700 storage system.

SP log
A time-ordered list of messages about storage-system events (such as errors) maintained by the SP. You can view this log using Navisphere Manager.

SP memory modules
Memory modules (RIMMs, DIMMs, or SIMMs) that provide the local storage for an SP. An SP must have at least two 4-Mbyte memory modules to support the storage-system cache.

SPS
See standby power supply (SPS).

standby power supply (SPS)
A unit that provides backup power for a Fibre Channel storage system with SPs in case of a power outage. An SPS is required for storage-system write caching. If power fails, the SPS allows the SP to write the cache image to the cache vault area on disk. You can replace an SPS under power, without interrupting applications.

striping
The arrangement of a LUN such that reads and writes can occur with multiple disk modules simultaneously and independently. By allowing multiple sets of read/write heads to work on the same task at once, disk striping can enhance performance. You can implement hardware disk striping by configuring disk modules as a RAID 5, RAID 3, RAID 0, or RAID 1/0 group. The size of the stripe is the stripe element size (area on each disk) multiplied by the number of data disks in the group. For RAID 5, RAID 0, and RAID 1/0 groups, you can select the stripe element size. Also see RAID (redundant array of independent disks).
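The stripe-size rule in the striping entry (stripe element size times the number of data disks) is simple enough to check with a short sketch. The function name and the 64-KB element size below are illustrative assumptions, not values from this manual.

```python
def stripe_size_kb(element_size_kb, group_disks, parity_disks=1):
    """Stripe size per the glossary: the stripe element size (area on
    each disk) multiplied by the number of data disks in the group.
    parity_disks is 1 for a RAID 5 or RAID 3 group, 0 for RAID 0."""
    data_disks = group_disks - parity_disks
    return element_size_kb * data_disks

# Hypothetical five-disk RAID 5 group with a 64-KB stripe element:
# 4 data disks * 64 KB = 256 KB per full stripe.
print(stripe_size_kb(64, 5))                   # 256
print(stripe_size_kb(64, 4, parity_disks=0))   # RAID 0, all disks data: 256
```

For a mirrored RAID 1/0 group, the data-disk count would be half the group, since the other half holds the mirror image.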
Storage Group
A group of LUNs on a storage system with the Access Logix option. A storage system can have multiple Storage Groups. A Storage Group can be connected to multiple servers, but a server can be connected to only one Storage Group in each storage system.

storage-system caching
The technique of storing disk-based data temporarily in SP memory, which can save time when data is written to and/or read from a LUN. For write caching, two SPs and an SPS are required. The two SPs mirror the cache data, which lets the storage system maintain cache integrity if a failure occurs in one SP.

switch
A device that connects one or more servers and one or more storage systems, providing flexible connections and storage capacity options. Switch zoning lets you restrict or grant access between servers and storage systems. Switches are available with 8, 16, or more ports. For larger configurations, you can connect (cascade) switches.

switch fabric
An alternate term for switch that covers both a single and a cascaded switch arrangement.
U
unbind
See bind.
Index
A
Access Logix, defined g-1
adapter name worksheet entry
  Windows 2000 4-4
  Windows NT 3-4
adapters, adding after ATF installation 6-3
Agent
  host, defined g-3
  SP, defined g-7
Application-Transparent Failover (ATF)
  configurations 1-9
  defined 1-2, g-1
  failover messages 6-3
  installing 2-2, 6-1
    Windows 2000 host 2-3
    Windows NT host 2-3
  introduced 1-5
  messages 2-4, 6-3
  removing 2-15
  requirements
    hardware 2-2
    software 2-2
  testing 6-1
atf_console program 2-4

B
Base Software, defined g-1
bind (LUNs), defined g-2
bus number worksheet entry 3-4

C
C-series storage system, defined 1-2
CLARiiON driver extensions (CDE)
  defined g-2
  removing 2-15
CLI
  defined g-2
  installing 2-6
  removing 2-17
  requirements
    hardware 2-2
    software 2-2
clusters 5-2
  adding a Generic Service resource 5-5
  configuring Host Agent for 5-4
  using Navisphere applications with 5-3
Configuration Worksheet
  Windows 2000 4-3
  Windows NT 3-3
configurations
  shared switched 1-11
  shared-or-clustered direct 1-10
  unshared direct 1-9
  with ATF 1-9
configuring clusters 5-4
Core Software, defined g-2

D
devices, adding after ATF installation 6-3
disks
  making available to Windows 2000 4-1
  making available to Windows NT 3-1
drive letter worksheet entry
  Windows 2000 4-4
  Windows NT 3-4

E
Event Monitor
  configuration file 2-13
  defined g-2
Event Viewer
  verifying restored LUNs 6-5
  Windows 2000 A-3
  Windows NT A-2

F
failover messages 6-3
FC-AL (Fibre Channel Arbitrated Loop) address ID 4-4
FC-series storage system, defined 1-2
file system type
  Windows 2000 4-4
  Windows NT 3-4

H
hardware requirements
  ATF 2-2
  CLI 2-2
  Host Agent 2-2
Host Agent 5-4
  defined 1-2, g-3
  installing 2-6
  removing 2-17
  requirements
    hardware 2-2
    software 2-2
  service
    starting 2-14
    stopping 2-14
hot spare, defined g-3

I
individual (disk) unit, defined g-3
installing
  CLI 2-6
  Host Agent 2-6

J
JBOD storage system, defined 1-2

L
logical unit (LUN), defined g-4
logical unit ID worksheet entry 4-4
LUN number worksheet entry
  Windows 2000 4-4
  Windows NT 3-4
LUN type worksheet entry
  Windows 2000 4-4
  Windows NT 3-4
LUNs (logical units) 3-1
  affected by trespass and restore operation 6-5
  defined g-4
  LUN number 4-4
  making available to
    Windows 2000 4-1
    Windows NT 3-1

M
managed Agent, defined 1-2
managed storage system, defined 1-2
management station, defined 1-3
Manager, defined g-4
mirroring g-4
multipath I/O 6-2, g-5

N
non-RAID Group storage system, defined 1-2

P
path, primary g-5
path, restoring 6-4
primary path g-5

R
RAID, defined g-5
RAID Group, defined g-6
RAID Group storage system, defined 1-2
RAID Group worksheet entry
  Windows 2000 4-4
  Windows NT 3-4
RAID types, defined g-5
removing
  ATF 6-7
  CLI 2-17
  Host Agent 2-17
restoring the original path 6-4
revisions, software needed to use ATF 2-2, 6-1

S
server, defined g-6
shared storage system, defined 1-2
shared switched configuration 1-11
shared-or-clustered direct configuration 1-10
SnapView, defined g-7
software requirements
  ATF 2-2
  CLI 2-2
  Host Agent 2-2
SP (storage processor)
  adding 6-3
  defined g-7
  FC-AL address ID 4-4
  log, defined g-7
SP Agent, defined 1-2, g-7
standby power supply (SPS), defined g-7
Storage Group, defined g-8
striping g-7
switch, defined g-8
switch fabric, defined g-8

T
target ID worksheet entry
  Windows 2000 4-4
  Windows NT 3-4
TCP/IP protocol requirement 2-3
terminology 1-2
testing ATF 6-1

U
unshared direct configuration 1-9
unshared storage systems, defined 1-2

V
volume capacity worksheet entry
  Windows 2000 4-4
  Windows NT 3-5
volume label worksheet entry
  Windows 2000 4-4
  Windows NT 3-4

W
Windows 2000 host
  clusters 5-2
    adding a Generic Service resource 5-5
    configuring Host Agent for 5-4
    using Navisphere applications with 5-3
  Configuration Worksheet 4-3
  installing
    ATF 2-3
    CLI 2-6
    Host Agent 2-6
  making LUNs available to 4-1
  removing
    ATF 2-15
    CDE 2-15
    CLI 2-17
    Host Agent 2-17
  requirements
    ATF 2-2
    CLI 2-2
    Host Agent 2-2
Windows host, see Windows NT host or Windows 2000 host
Windows NT host
  clusters 5-2
    adding a Generic Service resource 5-5
    configuring Host Agent for 5-4
    using Navisphere applications with 5-3
  Configuration Worksheet 3-3
  installing
    ATF 2-3
    CLI 2-6
    Host Agent 2-6
  making disks available to 3-1
  making LUNs available to 3-1
  removing
    ATF 2-15
    CDE 2-15
    CLI 2-17
    Host Agent 2-17
  requirements
    ATF 2-2
    CLI 2-2
    Host Agent 2-2
worksheet
  Windows 2000 4-4
  Windows NT 3-3