
Front cover

z/OS V1R3 DFSMS


Technical Guide
Learn the new features and functions in z/OS V1R3 DFSMS
Improve your business continuance and efficiency
Enhance data access and storage management

Mary Lovelace
Frank Byrne
Patrick Oughton
Jacqueline Tout

ibm.com/redbooks

International Technical Support Organization

z/OS V1R3 DFSMS Technical Guide

July 2002

SG24-6569-00

Take Note! Before using this information and the product it supports, be sure to read the general information in Notices on page ix.

First Edition (July 2002)

This edition applies to Version 1 Release 3 of z/OS, Program Number 5694-A01. This document was created or updated on July 17, 2002.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2002. All rights reserved.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Chapter 1. Release summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 The DFSMS family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 z/OS V1R3 DFSMS release focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2.1 SMS enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2.2 DFSMSdfp enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.3 DFSMShsm enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.4 DFSMSdss enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.5 DFSMSrmm enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.6 Advanced copy services enhancements . . . . . . . . . . . . . . . . . . . . . . . 6 Chapter 2. SMS enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.1 Data set allocation verses creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2 Dynamic volume count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2.1 Out-of-space failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2.2 Addressing out-of-space conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.3 What is supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.4 Enabling DVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.2.5 Everything you ever wanted to know about volumes . . . . . . . . . . . . 10 2.2.6 Allocation and candidate volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2.7 DVC and candidate volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.2.8 LISTCAT, volumes, and DVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.2.9 Deciding what value to specify for DVC . . . . . . . . . . . . . . . . . . . . . . 14 2.2.10 The size of the TIOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2.11 When changing the DVC is not dynamic. . . . . . . . . . . . . . . . . . . . . 15 2.2.12 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.2.13 Required maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2.14 Advantages of implementing DVC . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2.15 DVC, extend processing, and space constraint relief . . . . . . . . . . . 16 2.3 Extend storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.3.1 Defining extended storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . 
18 2.3.2 Rules for extend storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20


2.4 Overflow storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.4.1 Defining the overflow storage group . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.4.2 Using overflow storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.5 Automation assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 2.5.1 Message routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 2.5.2 SMF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.6 Data set separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.6.1 Allocation terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.6.2 Requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.6.3 The answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.6.4 Contents of the data set separation profile . . . . . . . . . . . . . . . . . . . . 32 2.6.5 Allocation examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 2.6.6 Restrictions for separation processing . . . . . . . . . . . . . . . . . . . . . . . 33 2.6.7 Usage considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 2.6.8 Required maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 2.7 Summary of factors influencing volume selection . . . . . . . . . . . . . . . . . . . 35 Chapter 3. DFSMSdfp enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.1 Large volume support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 3.1.1 Large volume design considerations. . . . . . . . . . . . . . . . . . . . . . . . . 40 3.1.2 3390-9 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1.3 Limitations of the 3390-9 solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1.4 Coexistence support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1.5 EXCP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.1.6 Interfaces and vendor code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.1.7 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.1.8 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.1.9 Required support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.2 IDCAMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.2.1 Changes to GDG base processing . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.2.2 Extended alias support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.3 Catalog management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 3.3.1 Defining catalogs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 3.3.2 Data set name validity checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 3.3.3 Performance, diagnostic, and nice-to-have. . . . . . . . . . . . . . . . . . . . 48 3.4 CONFIGHFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . 51 3.4.1 How it works today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.4.2 How it works with this release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.4.3 Other enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 3.5 VSAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 3.5.1 VSAM parameter definition support removed . . . . . . . . . . . . . . . . . . 53 3.5.2 System managed buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 3.6 Large real storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56


3.6.1 Media Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.7 REUSE for striped data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.8 Expiration date and retention period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.9 Record level sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.9.1 Coupling facility structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.9.2 Caching CIs larger than 4K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.9.3 Lock structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 3.10 OAM enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.10.1 Multiple object backup support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.10.2 Improved reliability and usability . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Chapter 4. DFSMShsm enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.1 The common recall queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 4.1.1 Our test environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 4.2 Which environments can use a CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 4.3 How to enable this function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 4.3.1 Defining the members of the CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . 73 4.3.2 Sizing the CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 4.3.3 Defining the CRQ structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 4.3.4 Accessing the CRQ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 4.4 Commands to manipulate the common recall queue . . . . . . . . . . . . . . . . 84 4.4.1 CANCEL command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 4.4.2 DELETE command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 4.4.3 HOLD AND RELEASE commands . . . . . . . . . . . . . . . . . . . . . . . . . . 84 4.4.4 RECALL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 4.4.5 QUERY command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 4.4.6 SETSYS command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.4.7 STOP command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.4.8 AUDIT Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.5 Using the CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.5.1 Placement of data on the queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 4.5.2 Processing when CRQ is full. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 4.5.3 Selection of requests from the queue . . . . . . . . . . . . . . . . . . . . . . . . 94 4.5.4 Disconnecting from the common recall queue . . . . . . . . . . . . . . . . . 96 4.5.5 Impact of HOLD and RELEASE commands . . . . . . . . . . . . . . . . . . . 98 4.6 Recovering from errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.6.1 Loss of a DFSMShsm or LPAR. . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 102 4.6.2 Loss of a CF or connectivity to a CF . . . . . . . . . . . . . . . . . . . . . . . . 102 4.6.3 Recall request processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.6.4 Auditing the CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 4.6.5 Rebuilding the CRQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 4.6.6 Additional diagnostic data collection . . . . . . . . . . . . . . . . . . . . . . . . 109 4.7 Other new enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109


4.7.1 Keyrange data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 4.7.2 DFSMShsm large volume support . . . . . . . . . . . . . . . . . . . . . . . . . 110 Chapter 5. DFSMSdss enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 5.1 HFS logical copy support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 5.1.1 z/OS view of an HFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 5.1.2 z/OS UNIX view of an HFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 5.1.3 How HFS logical copy works. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 5.1.4 Target HFS space allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 5.1.5 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 5.1.6 Usage considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 5.1.7 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 5.1.8 Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 5.2 Enhanced dump conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 5.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 5.2.2 DUMPCONDITIONING phase I with OW45674 . . . . . . . . . . . . . . . 115 5.2.3 DUMPCONDITIONING phase II with OW48234. . . . . . . . . . . . . . . 116 5.2.4 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 5.2.5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 5.3 Large volume support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Chapter 6. DFSMSrmm enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 6.1 Changes introduced with z/OS V1R3 DFSMS . . . . . . . . . . . . . . . . . . . . 120 6.1.1 Special character support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 6.1.2 Changed messages for improved diagnostics . . . . . . . . . . . . . . . . 121 6.1.3 HELP moved from SYS1.SEDGHLP1 to SYS1.HELP . . . . . . . . . . 121 6.1.4 OAM multiple object backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 6.2 New functions introduced since DFSMSrmm R10 . . . . . . . . . . . . . . . . . 122 6.2.1 Software MTL support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 6.2.2 Multi-volume alert in DFSMSrmm dialog. . . . . . . . . . . . . . . . . . . . . 122 6.2.3 Updated conversion tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 6.2.4 VSAM extended function support for DFSMSrmm CDS . . . . . . . . . 126 6.2.5 DFSMSrmm application programming interface . . . . . . . . . . . . . . . 127 6.2.6 PARMLIB options SMSACS and PREACS . . . . . . . . . . . . . . . . . . . 128 6.2.7 Storage location as home location . . . . . . . . . . . . . . . . . . . . . . . . . 129 6.2.8 Enhanced bin management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 6.2.9 DSTORE by location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 6.2.10 Extended extract file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 6.2.11 Report generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
140 6.2.12 Buffered tape mark support for A60 controller . . . . . . . . . . . . . . . 155 Chapter 7. Advanced copy services enhancements . . . . . . . . . . . . . . . . 157 7.1 Extended remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 7.1.1 XRC overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158


7.1.2 Multiple XRC . . . 159
7.1.3 Configuring XRC or MXRC . . . 160
7.1.4 Coupling XRC . . . 161
7.1.5 QUICKCOPY . . . 164
Appendix A. Record changes in z/OS V1R3 DFSMS . . . 165
A.1 VSAM RLS SMF record changes . . . 166
A.2 System managed buffering SMF record changes . . . 166
A.3 Open/Close/EOV SMF record changes . . . 166
A.4 OAM SMF record changes . . . 168
A.5 HSM FSR record changes . . . 169
Appendix B. Maintenance information . . . 171
B.1 APAR II12431 . . . 172
B.1.1 Error description . . . 172
B.2 APAR II12896 . . . 174
B.2.1 Problem conclusion . . . 174
B.3 APAR OW53834 . . . 176
B.3.1 Error description . . . 176
Glossary . . . 179
Related publications . . . 191
IBM Redbooks . . . 191
Other resources . . . 191
Referenced Web sites . . . 193
How to get IBM Redbooks . . . 193
IBM Redbooks collections . . . 193

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX, CICS, DB2, DFS, DFSMS/MVS, DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm, DFSORT, Enterprise Storage Server, Extended Services, IBM, IMS, Magstar, MORE, MVS, OS/390, Parallel Sysplex, Perform, RACF, Redbooks, Redbooks (logo), RMF, S/390, SP, TotalStorage, z/OS, and z/VM.

The following terms are trademarks of other companies: ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.


Preface
Each release of DFSMS builds upon the previous version to provide enhanced storage management, data access, device support, program management, and distributed data access for the z/OS platform in a system-managed storage environment.

This IBM Redbook provides a technical overview of the functions and enhancements in z/OS V1R3 DFSMS. It provides you with the information you need to understand and evaluate the content of this DFSMS release, along with practical implementation hints and tips. Also included are enhancements that were made available prior to this release through an enabling PTF and that have been integrated into this release.

z/OS V1R3 DFSMS includes catalog function enhancements that improve your ability to self-diagnose problems. New SMS function reduces out-of-space conditions and provides data set separation at the physical control unit level. The RLS coupling facility caching enhancements allow you to specify the amount of data that is cached in the coupling facility cache structure defined to DFSMS. VSAM enhancements include record level sharing (RLS) coupling facility (CF) caching of data records greater than 4K, and I/O processing with real addresses greater than 2 GB for most VSAM data sets.

DFSMShsm provides a common recall queue that is shared by multiple DFSMShsm hosts, allowing the recall workload to be balanced across the hosts. Object Access Method (OAM) and DFSMSdss provide data backup and recovery enhancements. DFSMSrmm incorporates reporting, storage location, and usability functions made available prior to z/OS V1R3 DFSMS.

This book is written for storage professionals and system programmers who have experience with the components of DFSMS. It provides sufficient information so that you can start prioritizing the implementation of new functions and evaluating their applicability in your DFSMS environment.


The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Mary Lovelace is a Consulting IT Specialist with the IBM International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage, and storage networking, including product education, system engineering, consultancy, and systems support. Prior to joining the ITSO, Mary worked with IBM Learning Services as the Global Storage Networking Education Program Manager.

Frank Byrne is a Senior IT Specialist in the United Kingdom providing day-to-day guidance and advice on the usage of z/OS and associated products. He has 35 years of experience in large systems support. His areas of expertise include parallel sysplex implementation, and he has written extensively on parallel sysplex usage.

Patrick Oughton is an MVS Systems Programmer for a leading bank in New Zealand. He has more than 15 years of experience in the MVS field. He provides both hardware and software support for the bank's mainframe environment.

Jacqueline Tout is a Senior IT Specialist in Australia. She has 16 years of experience in systems programming in an MVS environment. She holds a degree in Science/Botany from Sydney University. Her areas of expertise include systems and availability management; in particular storage, performance, and capacity management, as well as problem and change management. She has written extensively on storage and availability management.

Thanks to the following people for their contributions to this project:

Bob Haimowitz
International Technical Support Organization, Raleigh Center

Will Carney
Yolanda Cascajo Jimenez
Emma Jacobs
Yvonne Lyon
Deanna Polm
Journel Saniel
International Technical Support Organization, San Jose Center

Bob Birchfield
Charlie Burger
Patricia Choi
Joy Nakamura


Savur Rao
Mark Thomen
Dan Win
IBM San Jose

Stevan Allen
Pamela Baird
Harold Koeppel
Gene McGaha
Lisa Taylor
John Thompson
Glenn Wilcock
IBM Tucson

Mike Wood
IBM United Kingdom

Notice
This publication is intended to help storage administrators and system programmers evaluate and implement the features and functions in DFSMS z/OS V1R3. The information in this publication is not intended as the specification of any programming interfaces that are provided by Version 1 Release 3 of z/OS. See the PUBLICATIONS section of the IBM Programming Announcement for IBM z/OS V1R3 for more information about what publications are considered to be product documentation.

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

Use the online "Contact us" review redbook form found at:
ibm.com/redbooks

Send your comments in an Internet note to:


redbook@us.ibm.com

Mail your comments to the address on page ii.



Chapter 1. Release summary
DFSMS is now an integral element of the z/OS operating system. Each release of z/OS builds on the function DFSMS offers. This book provides a technical update on the DFSMS function delivered in z/OS V1R3: what is new, along with practical implementation information. Each chapter is devoted to a functional area. Those with an interest in DFSMSdss or VSAM, for example, can go directly to the particular chapter to see what is new or enhanced for that specific functional area.


1.1 The DFSMS family


The members of the DFSMS family provide an integrated approach to data and storage management. In a system-managed storage environment, the components of DFSMS automate and centralize storage management, based on policies you define for availability, space, performance, and security. These are the members of the DFSMS family:

- DFSMSdfp, a base element of z/OS
- DFSMSdss, an optional feature of z/OS
- DFSMShsm, an optional feature of z/OS
- DFSMSrmm, an optional feature of z/OS

1.2 z/OS V1R3 DFSMS release focus


z/OS V1R3 DFSMS provides new functions and enhancements that improve data access, and provide more flexibility in managing the system-managed storage environment. This release focuses on DFSMS customer requirements, data recovery enhancements, high performance data access, and support of z/OS initiatives. There are enhancements in this release that were made available in earlier releases of DFSMS through an enabling PTF. Those enhancements are described in this book along with the APAR information in the appropriate section.

1.2.1 SMS enhancements


The release includes the following new functions:

- Dynamic volume count, which allows the volume count of SMS data sets to be extended dynamically. The data class parameter Dynamic Volume Count determines the number of volumes that can be dynamically added to a data set when a new volume is needed, to minimize out-of-space situations.

- Designated extend storage groups, each of which is associated with a primary storage group. This allows a data set to extend to a designated extend storage group when the primary storage group is out of space, reducing out-of-space situations.

- The use of designated overflow storage groups, which serve as reserve areas for primary space allocations during periods of high space utilization. In volume selection, volumes in overflow storage groups are preferred over quiesced volumes.


- SMS data set allocation failure messages can now be written to the hardcopy console log, allowing automation products to take corrective action. New SMF type 42 subtype 10 records are also created if an allocation fails because of insufficient space.

- Data set separation, which automates the separation of specified data to reduce the impact of single points of failure. To use data set separation, you create a data set separation profile and specify it in the SMS base configuration. During volume selection for data set allocation, SMS attempts to separate, at the PCU level, the data sets that are listed in the profile. The high availability requirement for data sets such as the couple data set and backup couple data set (or software-duplexed DB2 logs) can be met by allocating them on separate storage controllers.

1.2.2 DFSMSdfp enhancements


The DFSMSdfp enhancements in this release are as follows:

- You can now define DASD volume sizes up to 32,760 cylinders (about 27.8 GB) on the IBM TotalStorage Enterprise Storage Server (ESS). The management of a few large volumes is easier than the management of a large number of small volumes. Previous performance issues with large volumes have been addressed with the PAV/MA feature of the ESS and the Dynamic CHPID Management (DCM) supported by the z/OS workload manager.

- The changes to GDG base processing are the addition of a last-altered date when a GDS is added to or removed from the GDG base, and removal of support for specifying a GDG base expiration date. The last activity date is displayed by LISTCAT ALL.

- Symbolic alias support has been enhanced to show the associated symbolic alias and the resolved symbolic alias in the LISTC output. Prior to this release, a LISTC of a data set alias would only display the associated unresolved symbolic alias.

- Catalog management has been enhanced as follows:
  - Catalog record sizes are forced to average and maximum record sizes of 4086 and 32400, respectively. This prevents the specification of invalid record sizes that waste space in catalogs and cause catalog extension records that require additional processing.
  - The data set naming rules equivalent to those enforced by JCL when cataloging data sets are enforced by catalog management. These rules were previously applied only for SMS managed data sets.


  - During SVC dump processing, if an address space being dumped has an active catalog request, the catalog address space is also dumped. This provides better problem determination in cases where the problem is related to the catalog address space (CAS).
  - The MODIFY command is enhanced to allow the performance statistics to be reset.

- The CONFIGHFS command for a path name can now be issued from any system in the sysplex; it is no longer limited to the system owning the HFS. In addition, ISPF has been enhanced to correctly display full details of HFS data sets owned or mounted on different systems in the sysplex.

- The IDCAMS DEFINE parameters KEYRANGE, REPLICATE, and IMBED are ignored with this release of DFSMS. They are no longer useful on newer control units, which have high bandwidth and deliver all the data through cache.

- System managed buffering is extended to support VSAM spheres containing alternate indexes. In addition, two attempts will be made to build DO buffer pools using less space before reverting to the DW access bias if insufficient space is available for the first build attempt.

- This release allows large (64-bit) real storage to be exploited for buffers for all VSAM record organizations.

- DFSMS supports greater-than-4K coupling facility caching for VSAM spheres that are opened for RLS processing. The SMS data class keyword RLS CF Cache Value and its values (NONE, UPDATESONLY, and ALL) are used to specify which data is placed in the coupling facility cache structures that are defined to DFSMS.

- The REUSE attribute is now available for VSAM striped data sets. DB2 table spaces defined with the REUSE attribute can take advantage of VSAM striping. This enhancement can provide a significant reduction in the elapsed time of batch jobs accessing large DB2 table spaces.

- You can now change the expiration date of an existing SMS managed non-VSAM data set using the JCL EXPDT or RETPD specification (similar to the existing capabilities for non-SMS managed data sets) when the data set is opened for output. (A hedged JCL sketch follows this list.)

- VSAM RLS used to cache directory entries and data control intervals in the coupling facility only if they were less than or equal to 4096 bytes. Now control intervals larger than 4096 bytes can be cached in the coupling facility. This requires that all of the sharing systems are at z/OS Version 1 Release 3 and that a G4 or later processor be used.


- Object access method (OAM) supports multiple object backup, which allows a system within an SMS complex to have more than one object backup storage group. Up to two object backup storage groups can be associated with each object storage group. Separate backup copies of objects, physically placed based on the object storage group to which the object belongs, may be specified.
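As a hedged sketch of the expiration date item above (the data set names and the retention value are made up), the new behavior means that JCL such as the following, which opens an existing SMS managed non-VSAM data set for output, now also resets its retention period:

//* Copy over an existing SMS managed data set and, with this release,
//* reset its retention period to 90 days when it is opened for output.
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROD.MONTHLY.INPUT,DISP=SHR
//SYSUT2   DD DSN=PROD.MONTHLY.REPORT,DISP=OLD,RETPD=90

See 3.8, "Expiration date and retention period" for details of this support.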

1.2.3 DFSMShsm enhancements


With the DFSMShsm Common Recall Queue in Coupling Facility (CF) enhancement, DFSMShsm uses the commonly accessible coupling facility for its recall work queues. This enables each DFSMShsm host to place its recall requests onto a common queue, balancing the recall workload across multiple DFSMShsm hosts. This support also allows a host that has a tape mounted for recall to process all recall requests requiring that tape, regardless of which host initiated the recall request. With DFSMShsm's multiple address space support, the common recall queue enhancement enables the number of concurrent recall tasks to be increased above the current limit of 15 tasks per z/OS image.

1.2.4 DFSMSdss enhancements


The DFSMSdss enhancements in this release are as follows:

- DFSMSdss will now copy an SMS managed or non-SMS managed HFS data set in one step instead of requiring a two-step process of dump and restore.

- The dump conditioning support allows both the source and target volumes to remain online during a full volume COPY operation. When DUMPCONDITIONING is specified on COPY statements, for SMS or non-SMS volumes, the target volume remains online and can be dumped to tape and later used to restore the original source volume. It can also be used as input on a subsequent full volume COPY as a means of recovering the original volume.

1.2.5 DFSMSrmm enhancements


DFSMSrmm enhancements documented in this redbook include those introduced in this release and those made available through PTFs to other releases:

- All the characters supported by z/OS V1R3 can be used when defining volume serial numbers.


- DFSMSrmm reporting has been enhanced by a new report generator function added to the DFSMSrmm ISPF dialog.

- DFSMSrmm provides support for VSAM extended addressability (EA) for the control data set (CDS). If you do not use an extended format (EF) data set for the DFSMSrmm CDS, the CDS size is limited to a maximum of 4 GB. Using an EF data set enables you to use VSAM functions such as multi-volume allocation, compression, or striping. EF also enables you to define a CDS that uses VSAM EA, allowing the CDS to grow above 4 GB.

- This release provides DFSMSrmm home location enhancements. You can now define DFSMSrmm storage locations for use as home locations. This enables you to have a location name for each non-system managed library and to better manage non-library resident volumes.

- DFSMSrmm extended bin management provides new options for efficient selection of bins during storage location management. Previously, DFSMSrmm users were required to confirm the completion of a tape volume move before the bin number could be reused by DFSMSrmm. The extended bin management options provide additional flexibility in when and how volume movement is performed by DFSMSrmm. The new choices include:
  - A choice of how bins are allocated to volumes. A volume can be assigned a bin in a storage location as long as a volume move has been started for the volume that was previously assigned, or the move completion can be required before a new volume is assigned to the bin location.
  - DFSMSrmm can assign volumes to bins in volume serial number sequence and bin number sequence.
  - Information can be obtained about where volumes reside and where they are moving to or moving from.
  - Storage location management processing can optionally be performed by location.

1.2.6 Advanced copy services enhancements


The advanced copy services enhancements in this release are as follows:

- Coupled Extended Remote Copy (CXRC) expands the capability of Extended Remote Copy (XRC) so that very large customers who have configurations consisting of thousands of primary volumes can be assured that all their volumes can be recovered to a consistent point in time.

- Multiple extended remote copy (MXRC) is an enhancement to XRC that allows up to five XRC sessions per LPAR.


Chapter 2. SMS enhancements
The SMS enhancements in z/OS V1R3 DFSMS are designed either to eliminate failures due to lack of space within a storage group or, if a failure still occurs, to ensure that information is available to:

- Automate recovery actions.
- Prevent further occurrences of the problem.
- Ensure that information is captured so that you can accurately diagnose the cause of the problem.

In this chapter we discuss the following enhancements for SMS managed data sets:

- Dynamic volume count
- Extend storage groups
- Overflow storage groups
- Automation assistance and reporting
- Data set separation

We then consider how these new facilities impact volume selection for SMS data sets.


2.1 Data set allocation versus creation


In the discussion of SMS enhancements, the terms allocation and creation are used quite often. For the purpose of this discussion, allocation refers to the z/OS function that associates devices with data sets; it is the process of providing access to a data set. It is invoked either through the dynamic allocation (SVC99) interface or when a job step is initiated. Data set creation is the process of acquiring space for a data set. It is the DADSM function that acquires space for data sets on DASD volumes.

2.2 Dynamic volume count


Volume count for a data set is currently set at the time the data set is created, or by using an ALTER ADDVOLUMES command. Dynamic volume count (DVC) provides the capability to dynamically add volumes to an SMS managed data set, for both VSAM and non-VSAM formats, when a data set extends.

2.2.1 Out-of-space failures


A data set may be unable to extend because of a shortage of space on the currently allocated volume(s), even though sufficient space is available elsewhere in the storage group. This results in an X37 ABEND for non-VSAM data sets and a return code for VSAM data sets.

Space constraint relief (SCR), introduced in DFSMS/MVS 1.4, helps to reduce the incidence of out-of-space conditions by:

- Removing the DADSM five-extent limit for a single allocation
- Increasing the number of extents for VSAM data sets from 123 to 255
- Allowing allocation to be retried with a smaller space request

Space constraint relief does not allow additional volumes to be used. Additional volumes can be added for a data set to use by running IDCAMS, specifying the ADDVOLUMES parameter of the ALTER command. However, an in-use data set must be closed, unallocated, reallocated, and reopened for this to take effect.
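As a hedged illustration of the manual ALTER ADDVOLUMES approach just described (the data set name is made up; check the Access Method Services reference for the exact syntax), an IDCAMS step similar to the following adds two non-specific candidate volumes to the data component of an SMS managed cluster:

//ADDVOL   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Add two non-specific candidate volumes (*) to the data     */
  /* component. The data set must still be closed, unallocated, */
  /* reallocated, and reopened before the new volumes are used. */
  ALTER BYRNEF.DEMO.KSDS.DATA -
        ADDVOLUMES(* *)
/*

For a KSDS, the index component may need a similar ALTER. DVC, described in the following sections, removes the need for this manual step in most cases.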


2.2.2 Addressing out-of-space conditions


DVC is a value that you can specify for a data class. If a data class contains a DVC value, and this value is greater than the current volume count for a data set, then the DVC is used to allow the data set to extend to additional volumes. z/OS still imposes a maximum of 59 volumes for a data set.

Unlike all other data class attributes, the DVC value effectively overrides the original requestor's specification for volume count (when the DVC value is greater than the current volume count). It is used after the data set's initial creation, for subsequent attempts to extend the data set. We discuss this further in 2.2.11, "When changing the DVC is not dynamic" on page 15.

Note: The DVC value replaces the original volume count; it is not added to it. The DVC value that exists when the data set is allocated is the one used when the data set extends.

2.2.3 What is supported


Although data classes can be assigned to non-SMS managed data sets, DVC support is only provided for SMS managed data sets. All SMS managed data sets are supported, with the following exceptions:

- SAM striped data sets
- Existing data sets with KEYRANGE and IMBED (note that it is no longer possible to allocate new data sets with these attributes)
- Data sets limited to a single volume, for example, PDS and PDSE data sets
- VSAM alternate indexes (AIXs)

Data sets allocated on a system level prior to z/OS V1R3 DFSMS can use DVC support once the operating system is upgraded and the data class changes are made. There is no support for DVC on systems prior to z/OS V1R3 DFSMS. Once a data set has extended using this support, the new extent will be recognized on lower-level systems.


2.2.4 Enabling DVC


The DVC value is specified in the data class as a subset of the Space Constraint Relief (SCR) definition; an extract from the ISMF Data Class Alter application is shown in Figure 2-1. Space Constraint Relief must be set to Y for the DVC value to be accepted by ISMF.
DGTDCDC2                       DATA CLASS ALTER                 Page 2 of 4
Command ===>

SCDS Name . . . : SYS1.SMS.SCDS
Data Class Name : JMETEST

To ALTER Data Class, Specify:
  Data Set Name Type . . . . . .          (EXT, HFS, LIB, PDS or blank)
    If Ext . . . . . . . . . . .          (P=Preferred, R=Required or blank)
    Extended Addressability . . . N       (Y or N)
    Record Access Bias . . . . .          (S=System, U=User or blank)
  Space Constraint Relief . . . . Y       (Y or N)
    Reduce Space Up To (%) . . .  0       (0 to 99 or blank)
    Dynamic Volume Count . . . .  15      (1 to 59 or blank)
  Compaction . . . . . . . . . .          (Y, N, T, G or blank)
  Spanned / Nonspanned . . . . .          (S=Spanned, N=Nonspanned or blank)

Figure 2-1 Defining DVC for a data class

By default, DVC is not enabled. This default also applies to existing data classes. Supported down-level systems will tolerate the specification of DVC in a data class but will take no action based on it. There is no support available to implement this function on down-level systems.

Note: DVC for VSAM striped data sets is supported even though they are not eligible for Space Constraint Relief.
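As a hedged sketch (the data set name is made up; the JMETEST data class is borrowed from Figure 2-1), a new allocation that is assigned a DVC-enabled data class, either explicitly as shown here or through the ACS routines, becomes eligible for DVC processing on later extends:

//* Allocate a new SMS managed data set with the JMETEST data class.
//* The storage class is assumed to be assigned by the ACS routines.
//ALLOC    EXEC PGM=IEFBR14
//NEWDS    DD DSN=JME.TEST.DVCDATA,
//            DISP=(NEW,CATLG),
//            DATACLAS=JMETEST,
//            SPACE=(CYL,(100,100),RLSE)

Because this data class specifies a DVC of 15, later allocations of the data set pass up to 15 volumes to allocation, even if the volume count recorded in the catalog is smaller.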

2.2.5 Everything you ever wanted to know about volumes


Data sets can have extents on one or more volumes. Here we discuss some of the terminology used for the rest of this chapter and how the exact values are calculated for DVC.

Primary and candidate volumes


A cataloged data set has a number of volumes associated with it, up to a maximum of 59. The volumes are in two categories, primary and candidate. We restrict this discussion to SMS managed DASD resident data sets, though some of the information is true for all data sets.


Primary volumes

Volumes with space allocated; that is, either data has been written to the volume, or an extent exists because the data set was defined with guaranteed space.

Candidate volumes

Volumes that do not have volume serial numbers associated with them; they are an indication of how many volumes the data set can extend to. These are most commonly defined by the volume count in the data class or the VOLUMES parameter of an IDCAMS DEFINE command.

Sources of volume information


For SMS managed data sets the information about the volumes, both primary and candidate, currently associated with a data set is stored in catalog records. When DVC values are changed in a new SMS configuration, SMS and the catalog address space (CAS) communicate to synchronize the DVC value specified in the data classes with the catalog information.

The task input/output table


Allocation builds the task input/output table (TIOT), which provides a link between a data set and the volumes it exists on. The TIOT entries contain the name of each DD statement and pointers, four bytes long, to the unit control blocks (UCBs) of the associated devices.

2.2.6 Allocation and candidate volumes


When a job allocates an existing data set, either through JCL or dynamic allocation, a request is made to catalog management to identify the volumes associated with that data set. The information passed back consists of specific entries, which equate to the primary volumes and non-specific entries which equate to the candidate volumes. The TIOT can now be built with the specific entries pointing to real UCBs and the non-specific to dummy UCB pointers, owned by DFSMS. The TIOT entry cannot be increased in size after its creation, therefore the dummy pointers are required to allow a data set to extend to a new volume. It is by increasing the number of these dummy pointer entries that DVC processing allows a data set to extend to more volumes than originally specified.


2.2.7 DVC and candidate volumes


DVC processing is performed at MVS allocation time. The DVC value affects the number of UCB entries built in the TIOT. The total number of UCB entries built in the TIOT is the greater of the volume count contained in the catalog entry for the data set, or the DVC value for the associated data class. When an attempt to extend a data set fails, EOV processing will check to see whether there are additional dummy UCBs in the current TIOT entry to extend to. If there are, then a volume is dynamically added with an internal ALTER ADDVOLUME operation. In the example shown in Figure 2-2, a VSAM cluster has been defined with six volumes, two of which have data on them. The DVC value specified in the data class is 20. Since the DVC value is greater than the catalog volume count, allocation is passed 20 volumes in total, 2 specific volumes and 18 non-specific volumes.

Specific volume count: 2 (volumes with data)
Nonspecific volume count: 4 (candidate volumes from catalog record)
Cluster dynamic volume count: 20 (from data class)
Specific volumes returned to allocation: 2
Nonspecific volumes returned to allocation: 18 (DVC value - specific count)
Total count of volumes returned to allocation: 20 (DVC value)

Figure 2-2 Volume calculation when DVC will be used

If the data class DVC value is 3, and the catalog volume count is 6 (2 specific and 4 non-specific), then the input to allocation would be as shown in Figure 2-3.

Specific volume count: 2 (volumes with data)
Nonspecific volume count: 4 (candidate volumes from catalog record)
Cluster dynamic volume count: 3 (from data class)
Specific volumes returned to allocation: 2
Nonspecific volumes returned to allocation: 4
Total count of volumes returned to allocation: 6

Figure 2-3 Volume calculation where DVC has no impact

In either case, allocation is not given any indication as to whether a DVC value was used. Other examples can be found in z/OS V1R3.0 DFSMS Using Data Sets, SC26-7410. Note that the DVC only applies to base clusters and not to upgrade alternate indexes (AIXs). For AIX and DVC considerations, refer to 2.2.12, "Other considerations" on page 15.


2.2.8 LISTCAT, volumes, and DVC


The LISTCAT option of IDCAMS obtains information only from catalog records. An extract of a LISTCAT is shown in Figure 2-4; this is a non-guaranteed space data set with a volume count of 5.

VOLUMES
  VOLSER------------MHLS1A     DEVTYPE------X'3010200F'
  VOLSER-----------------*     DEVTYPE------X'00000000'
  VOLSER-----------------*     DEVTYPE------X'00000000'
  VOLSER-----------------*     DEVTYPE------X'00000000'
  VOLSER-----------------*     DEVTYPE------X'00000000'

Figure 2-4 LISTCAT showing candidate volumes

This data set currently has one primary volume and four candidates; there is no indication of the DVC value. This is to be expected, because the DVC value in the data class can be changed at any time and is therefore only relevant when the data set is allocated. If the data set extended to five volumes, then the LISTCAT output would be as shown in Figure 2-5.

VOLUMES
  VOLSER------------MHLS1A     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1B     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1C     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1D     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1E     DEVTYPE------X'3010200F'

Figure 2-5 LISTCAT showing primary volumes only

If the data set now tried to extend to a sixth volume, there would be a failure if the DVC value was less than 6. If the DVC value was 6 or greater, then the LISTCAT would indicate the new primary volume, as shown in Figure 2-6.

VOLUMES
  VOLSER------------MHLS1A     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1B     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1C     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1D     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1E     DEVTYPE------X'3010200F'
  VOLSER------------MHLS1F     DEVTYPE------X'3010200F'

Figure 2-6 LISTCAT after DVC processing


There are now more volumes than the original definition; this is because primary volumes must be in the catalog records. This new volume in the catalog record is the only indication that DVC processing has been invoked successfully.
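LISTCAT extracts such as those shown in Figures 2-4 through 2-6 can be produced with a short IDCAMS step along these lines (a hedged sketch; the data set name is made up):

//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* List the catalog entry, including the volume information */
  LISTCAT ENTRIES(BYRNEF.DEMO.DVC) ALL
/*

Only the primary volumes and the cataloged candidate volumes appear in the output; as noted above, the DVC value itself is never shown.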

2.2.9 Deciding what value to specify for DVC


The DVC value you specify in a data class applies to all eligible data sets that are currently allocated or will be allocated using this data class. You need to carefully consider the implications of specifying a large DVC value globally. The costs and benefits depend on how many data classes you currently use and whether you wish to allow all the data sets assigned these data classes to extend without failure.

Remember that eventually even DVC will not prevent an extend failure, because the data set will either reach the maximum number of volumes allowed or reach its maximum number of extents. If today you identify data sets defined over large numbers of volumes or with a large number of extents, you should consider continuing to do so even after implementing DVC.

In the following sections we discuss some of the points you need to consider when selecting a value for DVC. Remember that each data class can have a unique DVC value to suit the data sets' particular requirements.

2.2.10 The size of the TIOT


The TIOT size is specified in the ALLOCxx PARMLIB member and has a default of 32K. This is sufficient for 1635 data sets on a single volume, or 129 data sets with the maximum of 59 volumes defined.

Note: This is not a consideration for DB2, because it uses an interface to dynamic allocation which creates an XTIOT.

Before specifying large values for DVC, on the assumption that all expansion will then be handled, consideration should be given to the effect on the TIOT size of jobs with large numbers of DD cards. If a data set with one primary and five candidate volumes is in a data class with a DVC of less than 7, then 24 bytes of TIOT space are required. If the same data set is using a data class with a DVC of 20, then it requires 80 bytes of TIOT space.
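If large DVC values make TIOT space a concern, the TIOT size can be raised through the ALLOCxx member of PARMLIB. A hedged sketch follows; verify the statement syntax and the valid range for your release in the MVS initialization and tuning documentation:

/* ALLOCxx: raise the TIOT size from the 32K default to 64K,  */
/* allowing more DD entries, or more volumes per DD entry.    */
TIOT SIZE(64)

A larger TIOT increases the storage used by every job step, so the value should be chosen with the typical number of DD statements per job in mind.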


2.2.11 When changing the DVC is not dynamic


The DVC value specified in the data class may be changed at any time, but it is only applied to a data set when the TIOT is built, that is, when the data set is allocated to a job. Increasing the DVC value has no effect on currently allocated data sets in the data class until they are unallocated and reallocated.

2.2.12 Other considerations


The DVC value applies to all data sets in the data class; is this what you want? If you have a critical data set that is currently defined with a volume count of 10, then the DVC value must be at least 11 to be of any use. Keep in mind that any other data sets in the data class can then also extend to a volume count of 11.

The DVC value only applies to the cluster; if the component name is used for allocation, the data set will not be eligible for extension using the DVC count. For example, suppose an ESDS is defined with a cluster name of BYRNEF.DEMO.DVC and a volume count of 2, in a data class with a DVC of 15. A job is run which allocates the data component, BYRNEF.DEMO.DVC.DATA. If the data set needs to be extended, only two volumes will be available. If the cluster name had been used, then the 15 volume count would have been used. (A hedged DEFINE sketch for this example appears at the end of this section.)

When specifying a DVC value with VSAM KSDSs that have alternate indexes (AIXs), consider that the volume count for the base cluster, plus any AIXs that are defined with the Upgrade attribute (when opening the data set for output), must not exceed 59. The calculation of the non-specific volume count is performed using the values specified for the base cluster. Volumes used by the alternate index(es) which are not being used by the base will be added on afterwards. An example of the calculation involved is shown in Figure 2-7, which is for a KSDS originally defined with a volume count of 10, in a data class with a DVC of 59. The DVC value is greater than the volume count and is therefore used for the calculation of non-specific volumes.

Specific volume count:                          4   (volumes with data in base cluster)
Specific volume count:                          1   (AIX, not being used by the base cluster)
Nonspecific volume count:                       6   (candidate volumes for base cluster)
Cluster dynamic volume count:                  59   (from data class)
Specific volumes returned to allocation:        5   (base + alternate index)
Nonspecific volumes returned to allocation:    55   (DVC - base specific)
Total count of volumes returned to allocation: 60

Figure 2-7 Maximum volume count calculation


In this instance, the index data is on a volume that does not contain data from the base cluster, which causes the total volume count to be 60. This is greater than the maximum of 59 allowed by z/OS and will cause a failure. If the index data had been on a volume that also contained data from the base cluster, then the total volume count would have been 59, and there would not have been a failure.
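Returning to the cluster-versus-component example above, a minimal IDCAMS sketch of that ESDS definition might look like the following. The data class name DCDVC15 is hypothetical (assumed to specify a Dynamic Volume Count of 15), and the record size and space values are arbitrary:

   DEFINE CLUSTER(NAME(BYRNEF.DEMO.DVC) -
          NONINDEXED -
          DATACLASS(DCDVC15) -
          VOLUMES(* *) -
          RECORDSIZE(80 80) -
          CYLINDERS(10 10))

A job that then allocates BYRNEF.DEMO.DVC.DATA directly on a DD statement does not benefit from the DVC of 15; only the two volumes from the original definition are available if the data set needs to extend.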

2.2.13 Required maintenance


Install the fix for APAR OW53521 (UW87249) before implementing z/OS V1R3 DFSMS. Without the fix for this APAR, catalog management does not recognize SMS configuration changes and thus does not update its tables of DVC and data class values. Install the fix for APARs OW54270 and OW54306 before implementing any data classes that specify a DVC value if you are planning to use DFSMSdss or DFSMShsm to manage data sets whose data class specifies DVC.

2.2.14 Advantages of implementing DVC


A primary advantage of using DVC instead of the existing Volume Count parameter of the data class is that excess candidate volumes added by DVC do not appear in the catalog record for a data set until (and unless) they actually become used. Candidate volumes added by Volume Count do appear in the catalog record for the data sets, and can greatly increase the amount of space required for catalog records when used heavily. Using DVC is preferable to specifying Volume Count in the data class. You should consider using DVC in place of data class Volume Count for the following reasons:

- DVC is immediately available after an update, while Volume Count is not referred to until the data set is redefined.
- DVC processing uses less catalog space.
- DVC provides the capability to dynamically add volumes to an SMS-managed data set, for both VSAM and non-VSAM formats, when a data set extends.

2.2.15 DVC, extend processing, and space constraint relief


The only function of DVC is to allow an increase in the number of volumes available to a data set. If, at time of allocation, the DVC value is greater than the volume count obtained from the catalog, then the DVC value is used to determine the number of candidate volumes.


Assuming that both DVC and space constraint relief are enabled, extend processing tries to get space in the following order when selecting from similar volumes:

1. On volumes in the current storage group (SG) which are below threshold
2. On volumes in the extend SG which are below threshold
3. On volumes in the current SG which are above threshold
4. On volumes in the extend SG which are above threshold
5. Space constraint relief processing is entered

For more information on SMS volume selection for data set creation and extends, refer to 2.7, Summary of factors influencing volume selection.

2.3 Extend storage groups


The extend storage group is a feature of the pool storage group type designed to be used for secondary allocation when there is insufficient space available in the primary storage group. Without this support, extend processing can only select volumes from within the same storage group as the initial allocation. Figure 2-8 shows an allocation extending to a second storage group.

IEF403I JMEALLOC - STARTED - TIME=16.13.51 - ASID=0019 - SC64
IGD17216I JOBNAME (JMEALLOC) PROGRAM NAME (IKJEFT01) 956
STEPNAME (UNLOAD  ) DDNAME (OUT     )
DATA SET (MHLRES4.TESTC.DSN5 )
WHICH WAS INITIALLY ALLOCATED TO STORAGE GROUP (SGMHL03)
WAS EXTENDED SUCCESSFULLY TO EXTEND STORAGE GROUP (SGMHL04)

Figure 2-8 An allocation extending to a second storage group

It has always been possible to increase the pool of volumes available to hold the first extent of an SMS-managed data set by specifying more than one pool storage group as a target for allocation in your ACS routines. However, once the data set's first extent has been allocated, only the volumes in the storage group that contains the first extent are considered as targets by end of volume (EOV) processing. Extend storage groups allow volumes in a second storage group, the extend storage group, to be considered as well.


For a data set to extend successfully to an extend storage group:

- An extend storage group must be defined in the active SMS configuration for the storage group that will contain the first extent of this data set.
- The data set must be capable of extending to a second (or additional) volume. This can be achieved if:
  - A candidate volume exists in the catalog entry, or the allocation is explicitly specified as multi-volume.
  - A value exists for Dynamic Volume Count in the data class assigned to this data set. We describe this in the section Dynamic volume count on page 8.

Data sets that are restricted to a single volume, for example data sets with DSORG=PO or VVDSs, are not able to use extend storage groups. There is no relaxation of the current limits on the total number of extents for data sets. To use an extend storage group, the data set must not have reached its maximum number of extents.

Volumes in your extend storage groups will tend to be selected by EOV processing when volumes in the primary storage groups are over their allocation high threshold. Once a storage group has begun to utilize its extend storage group, it will tend to continue to do so until the volumes in the primary storage group fall below their high allocation threshold. We discuss this further in Summary of factors influencing volume selection on page 35.

2.3.1 Defining extended storage groups


To enable an extend storage group to be used, you need to define two pool storage groups in ISMF:

- The first group is the target storage group; this is the storage group used for the initial creation of data sets. For this storage group, fill in the name of the second storage group in the Extend SG Name field on the ISMF Storage Group Define/Alter panel. You still need to add volumes to this storage group. Figure 2-9 shows how to specify the name of an extend storage group.

Note: If you specify multiple storage groups as targets for data set creation in your storage group ACS routines, it makes sense to ensure that each of these storage groups has an extend storage group specified.


DGTDCSG2                      POOL STORAGE GROUP ALTER
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS
Storage Group Name  : SGMHL03
To ALTER Storage Group, Specify:
  Description ==> TESTING FOR z/OS 1.3
              ==>
  Auto Migrate . . Y (Y, N, I or P)    Migrate Sys/Sys Group Name . .
  Auto Backup  . . Y (Y or N)          Backup Sys/Sys Group Name  . .
  Auto Dump  . . . N (Y or N)          Dump Sys/Sys Group Name  . . .
  Overflow . . . . N (Y or N)          Extend SG Name . . . . . . . . SGMHL04

  Dump Class . . .          (1 to 8 characters)      Dump Class . . .
  Dump Class . . .                                   Dump Class . . .
  Dump Class . . .

  Allocation/migration Threshold: High . . 85  (1-99)    Low . . 5  (0-99)
  Guaranteed Backup Frequency  . . . . . . NOLIMIT       (1 to 9999 or NOLIMIT)

ALTER SMS Storage Group Status . . . N (Y or N)
Use ENTER to Perform Verification and Selection;

Figure 2-9 Defining an extend storage group

- The second storage group, whose name matches the value you specified in the Extend SG Name field for the first storage group, must exist and be defined as a pool storage group. Your SCDS will not validate successfully if there is not a storage group defined that matches the extend storage group name. We illustrate this in Figure 2-10.

                              VALIDATION RESULTS

  VALIDATION RESULT:   ERRORS DETECTED
  SCDS NAME:           SYS1.SMS.SCDS
  ACS ROUTINE TYPE:    *
  DATE OF VALIDATION:  2002/03/20
  TIME OF VALIDATION:  19:27

  IGD06202I STORAGE GROUP SGMHL03 INCORRECTLY SPECIFIES EXTEND
            STORAGE GROUP NAME SGMHL05

Figure 2-10 Specifying an incorrect name for an extend storage group

You do not need to update your ACS routines to use an extend storage group.


This support does not allow you to define extend volumes within an existing storage group, or to define only a subset of volumes in an extend storage group as eligible for use by a specific data set. Storage groups defined as extend groups for another storage group are eligible to be used as normal pool storage groups; using an extend storage group as a conventional pool storage group is transparent to all users. Volumes in extend storage groups are not considered during the initial creation of a data set unless the extend storage group is included in your storage group ACS routine as a target for allocation.

Extend storage groups and DVC


Extend storage groups can be used in conjunction with DVC. Volumes in extend storage groups are eligible to be used for DVC extend processing if the number of volumes specified in the DVC field in the data set's data class has not yet been met, and the DVC value is greater than both the number of volumes specified at data set creation and the data class volume count value.

Guaranteed space
Data sets whose storage class specifies guaranteed space may be able to use extend storage groups if non-specific candidate volumes exist in their catalog entry after the initial data set creation, for example as the result of an IDCAMS ALTER ADDVOL. Data sets created with explicit volume serials only (without candidate volumes) will not be able to use extend storage groups unless the data class assigned to them specifies DVC and the extent being taken is driven by DVC processing. Volumes added by DVC processing are always added to the catalog as candidate volumes.
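As a minimal sketch of adding a non-specific candidate volume after the initial creation (the component name is hypothetical, and the command is shown against the data component):

   ALTER MHLRES4.TESTC.GSDATA.DATA ADDVOLUMES(*)

For an SMS-managed data set, the asterisk requests a non-specific candidate volume; a volume serial could be coded instead if a specific candidate is wanted.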

2.3.2 Rules for extend storage groups


The following rules apply to the use of extend storage groups:

- The storage group you specify as an extend storage group must be a pool storage group. To be effective, it must have volumes below their high allocation threshold.
- Only systems running z/OS V1R3 DFSMS will use extend storage groups during EOV extend processing. If the systems in your SMSplex are running a mixture of code levels, the lower level systems ignore the specification of the extend storage group.


- You can reference extend storage groups directly in your Automatic Class Selection (ACS) routines, so you can use extend storage groups for allocations other than data set extends.
- There can only be one extend storage group for any single pool storage group.
- If you specify multiple storage groups today for a particular allocation, then the storage group selected by allocation must specify an extend storage group for the use of the extend storage group to be successful.
- The storage group assigned as an extend storage group can also be defined as an overflow storage group. See the section on Overflow storage groups on page 23 for a description of overflow storage groups.
- When you wish to enable an extend storage group, the only changes required are those to the storage group definition in ISMF. You do not need to update your ACS routines to add references to the extend storage group.
- It is possible to use a single extend storage group for all other pool storage groups, effectively a one-to-many, star configuration. You could also design a one-to-one, ring, configuration where each storage group extends to the next storage group in the ring. We illustrate these different configurations in Figure 2-11.

A star configuration of your pool storage groups will allow you the most flexibility. However, it does require that you are able to dedicate volumes to an extend pool. In a ring configuration, if a single storage group fills, it becomes unavailable as an extend storage group, and may also impact the storage group that follows it as data sets extend into it. The advantage of the ring configuration is that you make use of resources that are already allocated.

[Figure: two possible configurations. In the star configuration, storage groups SG1 through SG5 all name a single SG Extend group as their extend storage group. In the ring configuration, SG1 through SG5 each name the next storage group in the ring as their extend storage group.]

Figure 2-11 Possible storage group configurations


You are not required to define an extend storage group for any storage group; you can define extend storage groups for just a subset of your pool storage groups. Even if you choose a ring configuration, data sets from one storage group are only permitted to extend to the extend group defined for the storage group that holds the first extent of the data set.

If you do define extend storage groups, there may be advantages in defining the volumes in the extend storage group as large volumes. These volumes are more likely to remain under their storage group's high allocation threshold, so they are more likely to be chosen for allocation. We discuss support for large volumes in Large volume support on page 40.

DFSMShsm space management will not be impacted by a data set having extended to an extend storage group. The first extent of any data set will still exist on a volume that is in the original storage group, and this determines how space management will process data sets. DFSMShsm availability management may be impacted if you use concurrent copy or any other copy process based on underlying hardware functions.

Before extend storage groups were introduced, all extents of a data set had to reside in the same storage group, so storage groups were often based on the underlying device types. With the use of an extend storage group, data sets can have extents in multiple storage groups. If you have previously configured separate storage groups for devices with different capabilities, for example RVAs and the IBM TotalStorage Enterprise Storage Server (ESS), you need to consider what devices are added to your extend storage group and whether you require more than one extend storage group (potentially one per device type).

For example, if a data set extends across multiple storage groups and you request a concurrent copy using hardware functions (for example, SNAPSHOT and FLASHCOPY), or you use a volume copy mechanism such as PPRC or XRC, this will not be successful. You need to ensure that the copy method that you rely on can support a data set potentially allocated across a number of different control units. If you are reliant on a hardware-based copy solution, then you may need to define a number of extend storage groups, at least one per physical control unit, or ensure that your extend storage group contains volumes from each control unit.

Allocation will favor volumes that meet requirements for specific hardware functions; we discuss this in Summary of factors influencing volume selection on page 35. But there are other factors, for example data set separation, that can outweigh these criteria.


If you rely on DFSMSdss or DFSMShsm full volume dumps for data recovery, you will need to ensure that your extend storage group is processed at the same time as the storage group that contains the start of the data set. Failure to do this will result in either an incomplete copy of the data set or a non-synchronized copy.

2.4 Overflow storage groups


Overflow storage groups are a new type of pool storage group introduced with z/OS V1R3 DFSMS. They are used for the initial data set creation when there is insufficient space available in the primary storage group for the first extent of a data set. If there is an overflow storage group defined and the ACS routines allow the use of the overflow storage group for this data set, a data set will be directed to the overflow storage group if all volumes in the primary storage groups are over their high allocation threshold.
Note: The storage group that contains the volume to which the data set was successfully allocated becomes the primary storage group. This means that the overflow storage group can become the data set's primary storage group.

Many customers currently achieve a similar result by using storage groups or volumes defined to SMS in QUINEW status; these are often called spill storage groups. The difference between spill and overflow storage groups is that with spill storage groups the volumes need to be in quiesced status, whereas volumes in an overflow group can be defined as enabled. Figure 2-12 shows a section of the JOBLOG output of a data set allocation that was directed to an overflow storage group.

IGD17223I JOBNAME (JMEREP01) PROGRAM NAME (IKJEFT01)
STEPNAME (S1      ) DDNAME (OUT     )
DATA SET (MHLRES4.TESTC.OVRF1 )
WAS ALLOCATED TO AN OVERFLOW STORAGE GROUP SGMHL02

Figure 2-12 Allocation directed to an overflow storage group

Overflow storage groups are only considered for the initial creation of a data set; they are not seen by EOV processing. There are two steps to consider when implementing an overflow storage group:

- Defining the overflow storage group to ISMF
- Adding the storage group to your storage group ACS routine


2.4.1 Defining the overflow storage group


You define a storage group as an overflow storage group using the ISMF Storage Group Application, either by defining a new storage group or by altering an existing one. We show the POOL STORAGE GROUP ALTER panel in Figure 2-13. All that is required is to specify Y in the Overflow field; the default value for this field is N. Any pool storage groups defined prior to z/OS V1R3 DFSMS are designated as non-overflow by default.

 Panel  Utilities  Help
------------------------------------------------------------------------------
DGTDCSG2                      POOL STORAGE GROUP ALTER
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS
Storage Group Name  : SGMHL02
To ALTER Storage Group, Specify:
  Description ==> TESTING FOR SMS 1.3 RESIDENCY
              ==>
  Auto Migrate . . Y (Y, N, I or P)    Migrate Sys/Sys Group Name . .
  Auto Backup  . . Y (Y or N)          Backup Sys/Sys Group Name  . .
  Auto Dump  . . . N (Y or N)          Dump Sys/Sys Group Name  . . .
  Overflow . . . . Y (Y or N)          Extend SG Name . . . . . . . .

  Dump Class . . .          (1 to 8 characters)      Dump Class . . .
  Dump Class . . .                                   Dump Class . . .
  Dump Class . . .

  Allocation/migration Threshold: High . . 85  (1-99)    Low . . 5  (0-99)
  Guaranteed Backup Frequency  . . . . . . NOLIMIT       (1 to 9999 or NOLIMIT)

ALTER SMS Storage Group Status . . . N (Y or N)
Use ENTER to Perform Verification and Selection;

Figure 2-13 Defining an overflow storage group

If you have a mixture of operating system levels in your SMSplex and wish to define overflow storage groups on those systems at z/OS V1R3 DFSMS, we recommend that the status of this group be set to QUINEW on all down-level systems, as they will not recognize the overflow status. There are no toleration PTFs available. We recommend that you use a one-to-many relationship between your overflow and normal pool storage groups. An overflow storage group must be a pool storage group. You can use it for non-overflow allocations, but each data set directed to the overflow storage group will receive message IGD17223I in both JOBLOG and SYSLOG. Overflow storage groups are only considered during the initial creation of a data set. Once the data set begins to extend, only the primary and extend storage groups (if present) are available.

Once a storage group has been defined as an overflow storage group, you must update your storage group ACS routines to add the overflow storage group. In Figure 2-14 we show a fragment of the ACS routines that we used to allow allocations to flow to an overflow storage group. Our overflow storage group was SGMHL02.

PROC 0 STORGRP
 SELECT(&DSN)
   WHEN(MHLRES4.TESTB.**)
     DO
       SET &STORGRP EQ 'SGMHL02'
       EXIT CODE(0)
     END
   WHEN(MHLRES4.TESTC.**)
     DO
       SET &STORGRP EQ 'SGMHL03','SGMHL02'
       EXIT CODE(0)
     END
   WHEN(MHLRES4.TESTD.**)
     DO
       SET &STORGRP EQ 'SGMHL04'
       EXIT CODE(0)
     END
   OTHERWISE
     DO
       SET &STORGRP = 'SGMHL01'
       EXIT CODE(0)
     END
 END /* END SELECT */

Figure 2-14 Storage group ACS routine assigning an overflow storage group

Storage groups defined with Overflow . . . Y can also be used as extend storage groups for other pools. When an overflow pool storage group contains more volumes than a non-overflow pool storage group, specified volume counts might result in volumes in the overflow storage group being preferred over volumes in the pool storage group during volume selection. This becomes more likely as the primary pools near their high allocation thresholds. We discuss this in more detail in Summary of factors influencing volume selection on page 35.

2.4.2 Using overflow storage groups


If you use storage groups in QUINEW status today, these should be converted to overflow storage groups. Volumes in an overflow storage group are preferred to volumes with either a volume status or storage group status of QUINEW.


Volumes from the overflow storage group will only be selected by allocation if there are insufficient volumes under their high allocation threshold value in any of the eligible non-overflow storage groups. Once an extent has been allocated on a volume in a non-overflow storage group, volumes in an overflow storage group are not considered by extend processing as targets unless the overflow storage group is also defined as an extend storage group.

Similarly, if the initial allocation is made to a volume in an overflow storage group, then all subsequent data set extents will be taken either in the overflow storage group or in its extend storage group, if one has been defined. The volumes in the ACS eligible storage groups will not be considered for extend processing, since the overflow pool storage group becomes the primary storage group in this case.

Unlike extend storage groups, overflow storage groups will contain the initial extent of a data set. You need to ensure that any space management or availability management functions enabled on the primary pool are also enabled on the overflow storage group. If you are relying on full volume processing for data archiving or recovery, you must ensure that data sets allocated to an overflow storage group are included. If you are using DFSMShsm to manage full volume dumps, you could either ensure that the overflow pool storage group is assigned the relevant dump classes, or you could use interval migration or command migration specifying CONVERT to move data back to the storage groups that should have initially contained this data.

Guaranteed space and overflow storage groups


If a data set is being allocated with a storage class that specifies guaranteed space, and specific volume serials are supplied, the following rules apply:

- If the volume specified is in the overflow storage group, the allocation will be honored as long as sufficient space exists on the selected volume to hold the primary extent of the data set, because the overflow pool storage group is one of those allowed by the storage group ACS routine.
- If the volume serial specified is not in the overflow pool storage group and there is not sufficient space on the selected volume to hold the primary extent of the data set, even after space constraint relief processing, then the allocation will fail.
- If the storage class specifies guaranteed space and there is no explicit volume serial passed in the allocation, then the volumes in an overflow pool storage group are eligible to be considered for the data set allocation.


2.5 Automation assistance


Even with the new functions provided by DVC, extend storage groups, and overflow storage groups, you may still see allocation or extend failures. Information about these failures is the subject of this section. Two related changes make up this support:

- A change to message routing for certain existing messages, plus several new messages
- A new SMF 42 record subtype

2.5.1 Message routing


Previously, the routing of all SMS messages issued during data set allocation had been altered to send messages only to the JOBLOG of the task allocating the data set. With z/OS V1R3 DFSMS, changes are made to route some messages back to your z/OS console. Only messages that relate to volume selection errors are written to SYSLOG. In Figure 2-15 and Figure 2-16 we show output from both JOBLOG and SYSLOG for the same allocation failure. Note that in the job shown in these figures DVC recovery was also attempted unsuccessfully, but there is no evidence of this in either log.

IEF344I JMEREP01 S1 OUT - ALLOCATION FAILED DUE TO DATA FACILITY SYSTEM ERROR
IGD17287I DATA SET MHLRES4.TESTC.OVRF2 COULD NOT BE ALLOCATED NORMALLY,
SPACE CONSTRAINT RELIEF (MULTIPLE VOLUMES) WILL BE ATTEMPTED
IGD17284I ALLOCATION ON STORAGE GROUP SGMHL03 WAS ATTEMPTED BUT ENOUGH
SPACE COULD NOT BE OBTAINED, PROCESSING CONTINUES FOR DATA SET
MHLRES4.TESTC.OVRF2
IGD17289I DATA SET MHLRES4.TESTC.OVRF2 COULD NOT BE ALLOCATED WITH SPACE
CONSTRAINT RELIEF(MULTIPLE VOLUMES). SPACE REDUCTION AND/OR 5 EXTENT
LIMIT RELIEF WILL BE ATTEMPTED
IGD17284I ALLOCATION ON STORAGE GROUP SGMHL03 WAS ATTEMPTED BUT ENOUGH
SPACE COULD NOT BE OBTAINED, PROCESSING CONTINUES FOR DATA SET
MHLRES4.TESTC.OVRF2

Figure 2-15 Joblog failure messages due to requested space not available


18.40.49 JOB05746  IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE  549
   549             DATA SET MHLRES4.TESTC.OVRF2
   549             JOBNAME (JMEREP01) STEPNAME (S1      ) PROGNAME (IKJEFT01)
   549             DDNAME (OUT     ) REQUESTED SPACE QUANTITY = 55 KB
   549             STORCLAS (STANDARD) MGMTCLAS (MCDB22) DATACLAS (JMETEST)
   549             STORGRPS (SGMHL03 )
18.40.49 JOB05746 -JMEREP01 S1 FLUSH  0  .00  .00  .00

Figure 2-16 Syslog messages from the same failing allocation

The following types of failures will cause information to be written to SYSLOG as well as to the JOBLOG:

- Extend failures
- Volume selection failures
- Volume redirection messages, that is, the selection of either an extend or overflow storage group

There are several benefits of writing these messages to SYSLOG:

- Storage administrators do not need access to JOBLOGs to determine the cause of problems. Often JOBLOGs have either been purged or archived by the time the storage administrator is informed of the problem, or storage administrators do not have security access to view JOBLOGs.
- Allocation failure messages will be available to automation products for further action, for example, scheduling DFSMShsm space management to perform a command migration of the pool that has reached its high allocation threshold.
- It will be possible to report on these failures using SYSLOG scanning tools.


Note: If you are using automation to trigger space management of a pool, you should immediately target the source pool, the one that the allocation was directed to, and some time later also target the extend or overflow pool that may have been used for the allocation. If overflow or extend processing was successful, it is probable that, for the short term at least, the data set that was allocated to these pools will still be in use.

You may wish to consider a regular scheduled sweep of your overflow and extend storage groups to move data that has landed there back to volumes in their proper storage groups. You could do this by generating, for example, DFSMShsm MIGRATE CONVERT commands, after you ensure that the primary storage groups are below their high allocation thresholds.
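A minimal sketch of such a sweep, assuming DFSMSHSM is the name of your DFSMShsm started task and SMSV01 is a volume in the overflow or extend storage group (both names are site-specific), is one command per volume, issued from the console or by your automation product:

   F DFSMSHSM,MIGRATE VOLUME(SMSV01) CONVERT

The CONVERT keyword asks DFSMShsm to move the data sets to other level 0 volumes rather than to migration volumes; your ACS routines then place them back in the storage groups that should normally contain them.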

2.5.2 SMF
A new SMF record subtype, SMF 42 subtype 10, is now created if an allocation fails because of insufficient space. These records are only produced for failures, not if the allocation succeeded because an overflow pool was selected. SMF 42 subtype 10 records are not produced in response to EOV or EOD ABENDs. This record contains the following information about the failing allocation:

- Job name
- Step name
- DD name
- Data set name
- Space quantity requested
- Data class
- Management class
- Storage group

The format of the record is described by macro IGWSMF. We have included the mapping of the relevant part of this macro in Open/Close/EOV SMF record changes on page 166.
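As a sketch of extracting these records for reporting (the data set names are hypothetical, and SYS1.MAN1 stands in for whichever SMF data set or SMF dump data set holds the records at your installation), the SMF dump program can select type 42 subtype 10 records:

//SMF42X   EXEC PGM=IFASMFDP
//SYSPRINT DD SYSOUT=*
//DUMPIN   DD DISP=SHR,DSN=SYS1.MAN1
//DUMPOUT  DD DSN=MHLRES4.SMF42.SUBT10,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(42(10)))
/*

The extracted records can then be mapped with IGWSMF and summarized with your preferred reporting tools.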


2.6 Data set separation


The data set separation function allows you to specify that two, or more, data sets should not be allocated to DASD attached to the same physical control unit (PCU).

2.6.1 Allocation terminology


In the discussion of data set separation, allocation refers to the DADSM function which allocates space for data sets on DASD volumes. It is not the z/OS function which associates devices with data sets.

2.6.2 Requirement
System critical data such as system configuration data sets, JES2 checkpoint data sets and logging for subsystems such as DB2 or IMS, is often held in two or more copies to protect against failure. Having both data sets on the same physical control units introduces a single point of failure. To avoid this situation currently requires either manual data set placement or storage groups which are aligned to PCUs. These strategies are required to be maintained through the life of the data sets, not just for their initial allocation. Application data may also benefit from being separated across PCUs to reduce recovery time in the event of a subsystem failure.

2.6.3 The answer


A data set separation profile has been introduced in z/OS V1R3 DFSMS. It is defined as part of the SMS base configuration and contains the names of data sets that require PCU separation.
Note: The DASD control unit must support the Read Configuration Data (RCD) command to be considered a separate PCU. All volumes behind control units that do not support the RCD command are considered to exist behind the same PCU.

You can use the DS QD,xxxx,1,RCD command to test which of your PCUs supports the RCD command. The profile information can be in a sequential data set or a member of a PDS or PDSE. The name of the sequential data set or member is specified in the base configuration, as shown in Figure 2-17.

DGTDBSA1                        SCDS BASE ALTER                    Page 1 of 2
Command ===>

SCDS Name . : SYS1.SMS.SCDS
SCDS Status : VALID

To ALTER SCDS Base, Specify:
  Description ===> BASE SMS CONFIG FOR OE
              ===>
  Default Management Class . .             (1 to 8 characters)
  Default Unit . . . . . . . .             (esoteric or generic device name)
  Default Device Geometry
    Bytes/Track . . . . . . .  56664       (1-999999)
    Tracks/Cylinder . . . . .  15          (1-999999)
  DS Separation Profile  ==>  'SYS1.SMS.SEP.PDSE(SEP1)'     (Data Set Name)

Figure 2-17 Altering the SMS base configuration

In this example, the profile data set is a member of a PDSE. The separation profile specified must exist and must not contain invalid syntax when you validate your SMS configuration; otherwise validation will fail. In Figure 2-18 we show the output from an unsuccessful validation; in this case the specified separation data set was a PDS, but no member name was specified in the base configuration.
********************************* Top of Data ****************************
                              VALIDATION RESULTS

  VALIDATION RESULT:   ERRORS DETECTED
  SCDS NAME:           SYS1.SMS.SCDS
  ACS ROUTINE TYPE:    *
  DATE OF VALIDATION:  2002/03/29
  TIME OF VALIDATION:  14:04

IGD06031I DATA SET SEPARATION PROFILE SYS1.SMS.SEP.PDS COULD NOT BE ACCESSED. SMS RETURN CODE 00000008 GET REASON CODE 000C6000

Figure 2-18 SMS configuration validation failure

Consider the following commands:


SET SMS=xx
SETSMS ACDS
SETSMS SCDS


When a new SMS configuration is activated as the result of one of these commands, the other systems in the SMSplex are notified and read the profile data set. Refer to Required maintenance on page 34 for APARs that affect data set separation.

2.6.4 Contents of the data set separation profile


The separation profile contains one or more SEP statements. Each SEP statement defines data sets that are to be physically separated. Each new line within a SEP statement requires a continuation character on the previous line, and the final character of a SEP statement must be a semicolon (;). The action to be taken if separation is not possible is specified in the profile by use of the FAIL keyword. There are two valid values for this keyword, NONE and PCU; an example is shown in Figure 2-19.

SEP FAIL(NONE) DSNS(MHLRES4.TEST1.SEP1, MHLRES4.TEST1.SEP2, MHLRES4.TEST1.SEP3);
SEP FAIL(PCU) DSNS(MHLRES4.TEST1.SEP4, MHLRES4.TEST1.SEP5);

Figure 2-19 Contents of a sample separation profile

The specification of FAIL(PCU) indicates that separation is required. The order of the data set names has no significance; the first data set to be allocated will be successful, if space is available. The allocation of the second and subsequent data sets will fail if space is not available on another PCU. The specification of FAIL(NONE) indicates that separation is preferred. Allocation will attempt to direct these data sets to separate PCUs if possible; if it is not possible to separate them, they can be allocated on any PCU. The complete syntax is described in z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402.

2.6.5 Allocation examples


Output for an allocation failure caused by a data set separation failure is shown in Figure 2-20. In this example SEP FAIL(PCU) was specified and the job failed with a JCL error.


IGD17206I VOLUME SELECTION HAS FAILED - THERE ARE NOT ENOUGH VOLUMES WITH
SUFFICIENT SPACE FOR DATA SET MHLRES4.TEST1.SEP2
IGD17277I THERE ARE (10) CANDIDATE VOLUMES OF WHICH (1) ARE ENABLED OR QUIESCED
IGD17290I THERE WERE 1 CANDIDATE STORAGE GROUPS OF WHICH THE FIRST 1 WERE
ELIGIBLE
THE CANDIDATE STORAGE GROUPS WERE:SGMHL01
IGD17279I 9 VOLUMES WERE REJECTED BECAUSE THE SMS VOLUME STATUS WAS DISABLED
IGD17279I 5 VOLUMES WERE REJECTED BECAUSE THEY DID NOT MEET SEPARATION CRITERIA

Figure 2-20 JOBLOG from allocation failure due to data set separation

JOBLOG messages for an allocation allowed to proceed even after a separation failure are shown in Figure 2-21. Separation was specified as SEP FAIL(NONE). This is the only notification received that the requested separation was not achieved. The job completed with RC=0 for the step that allocated this data set.

IGD17271I ALLOCATION HAS BEEN ALLOWED TO PROCEED FOR DATA SET
MHLRES4.TEST1.SEP2 ALTHOUGH VOLUME COUNT REQUIREMENTS COULD NOT BE MET
IGD101I SMS ALLOCATED TO DDNAME (DD5     )
        DSN (MHLRES4.TEST1.SEP5 )
        STORCLAS (STANDARD) MGMTCLAS (MCDB22) DATACLAS (JMETEST)
        VOL SER NOS= MHLS2A

Figure 2-21 JOBLOG from allocation allowed to proceed SEP FAIL(NONE)

2.6.6 Restrictions for separation processing


The following restrictions apply to data set separation checking:

- Data sets must be SMS managed. This may not be considered appropriate for the z/OS system data sets, such as sysplex couple and JES2 checkpoint data sets.
- The use of wild cards is not supported; each data set name must be fully specified.
- For GDS data sets, each absolute generation name must be coded. You cannot mask by GDG base name or specify relative generation numbers.
- You cannot use system symbols in a separation profile.
- In an SMSplex running different levels of DFSMS, when a change is made to the data set separation profile, an SMS control data set must be activated (SCDS) or switched to (ACDS) on a system running z/OS V1R3 DFSMS or higher. Lower level systems do not support SMS data set separation and will not communicate the need to refresh the data set separation profile to other systems in the SMSplex.


2.6.7 Usage considerations


Separation is enforced for both initial allocation and EOV processing. This means that a data set in your separation profile extending to a new volume can only extend to volumes on PCUs that do not currently hold an extent of this data set or of any other data set in the same SEP FAIL(PCU) group. Striped data sets are processed in the same way; each stripe must be on a unique PCU.

Here are some other factors you need to consider when coding separation profiles:

- Each SMS allocation has to check the separation profile, and large lists could have a performance impact on allocation and extend processing.
- Specifying a storage class with guaranteed space for other than the first data set allocated has no effect: data set separation overrides the specification of guaranteed space. This may result in allocation failures if SEP FAIL(PCU) is specified. If you specify a storage class with guaranteed space and SEP FAIL(PCU), and you specify volumes in the same PCU, the allocation of the second data set will fail. If SEP FAIL(NONE) had been specified, the second allocation would have succeeded, but you would not obtain your requested separation.
- A single separation profile is in effect for all systems in the SMSplex; it is not possible to run different profiles for different systems.
- Separation requirements are only considered at allocation; they are not checked when data sets are renamed.
- Separation requirements are only considered for DASD data set allocations.
- Separation is not maintained when data is migrated or backed up using DFSMShsm, but it is maintained during recall.

IBM Storage Systems ATS Center FLASH 10080 provides an unsupported tool, XISOLATE, that can be used to ensure that critical data sets are physically isolated. The tool can be obtained from:
http://www.ibm.com/support/techdocs/atsmastr.nsf/PubAllNum/Flash10080

2.6.8 Required maintenance


We recommend that you install the fixes for the APARs listed here:

- OW54037 is required to correct a problem in AOM processing of PCU information.
- OW54121 is required to enable separation data set changes to be communicated between systems.


2.7 Summary of factors influencing volume selection


Data set create and extend processing use an algorithm based on a points score to determine which volumes should be used when a data set is allocated or extended. Each volume is scored, and SMS presents up to three lists of volumes (primary, secondary, and tertiary) to allocation. SMS follows the sequence of steps below when selecting volumes during data set creation. The volume selected is based on the requirements specified for the data set in the SMS constructs assigned to the data set. There are two sets of rules: SMS chooses either conventional or striping volume selection. Before the volume selection process begins, SMS categorizes each volume in a storage group into one of four lists from which volumes are selected for data set allocation:
Primary      All the volumes in all the specified storage groups are candidates for the first, or primary, list. The primary list consists of online volumes that meet all the specified criteria in the storage class and data class, are below threshold, and whose volume status and storage group status are enabled. All volumes on this list are considered equally qualified to satisfy the data set creation request. Volume selection starts from this list.

Secondary    Volumes that do not meet all the criteria for the primary volume list are placed on the secondary list. If there are no primary volumes, SMS selects from the secondary volumes.

Tertiary     Volumes are marked for the tertiary list if the number of volumes in the storage group is less than the number of volumes requested. If there are no secondary volumes available, SMS selects from the tertiary candidates.

Rejected     Volumes that do not meet the required specifications (ACCESSIBILITY = CONTINUOUS, AVAILABILITY = STANDARD or CONTINUOUS, ENABLED or QUIESCED, ONLINE...) are marked rejected and are not candidates for selection.

After the system selects the primary space allocation volume, that volume's associated storage group is used to select any remaining volumes requested for the data set. If you specify an extend storage group, the data may be extended to the specified extend storage group.

Table 2-1 is from the manual z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402. It contains the best summary of volume selection.


Table 2-1 Volume selection preferences


Criteria              Preferences                                                      Value

Data set separation   Volume does not reside in the same physical control unit         4096
                      that has allocated a data set from which this data set
                      should be separated, as specified in the separation profile

Volume count          Volume resides in a storage group that has enough eligible       2048
                      volumes to satisfy the requested VOLUME COUNT

High threshold        Volume has sufficient space for the allocation without           1024
                      exceeding the storage group HIGH THRESHOLD value

SMS status            Volume and its associated SMS storage group status are            512
                      ENABLED

EOV extend            This is an end-of-volume extend, where an extend storage          256
                      group is specified and the volume does not reside in the
                      specified extend storage group

Nonoverflow           Volume resides in a non-overflow storage group                    128

IART                  Volume is mountable and an IART value greater than zero            64
                      was specified in the storage class

SNAPSHOT              Volume is in the same snapshot-capable controller as the           32
                      data set and this is a snap request

ACCESSIBILITY         Volume resides in a control unit that supports                     16
                      accessibility and the storage class accessibility is
                      preferred, or volume resides in a control unit that does
                      not support accessibility and the storage class
                      accessibility is standard

Availability          Volume resides in a control unit that supports availability         8
                      and the storage class availability value is preferred, or
                      volume resides in a control unit that does not support
                      availability and the storage class availability value is
                      standard

Extended format       Volume resides in a control unit that supports extended             4
                      format and the data class IF EXT value is preferred

Millisecond response  Volume provides the requested response time that is                 2
(MSR)                 specified or defaulted in the storage class direct MSR or
                      sequential MSR

If a criterion is not met or not specified, it is assigned a value of zero (0). SMS adds the values for each volume in the preference list and prefers the volumes with the highest cumulative score for allocation. The z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402, contains some additional information on volume selection, including the example that follows in Table 2-2. In this example SMS has returned two volumes, A and B. Volume B receives the higher preference score and will be the one selected by allocation for this data set.
Table 2-2 Volume preferencing example
Volume selection preferencing criteria                                        Score

Volume A
  Volume does not satisfy data set separation                                     0
  Volume and its associated Storage Group SMS status are ENABLED                512
  Volume resides in a non-overflow Storage Group                                128
  Volume resides in a control unit that supports ACCESSIBILITY and the           16
  Storage Class ACCESSIBILITY value is PREFERRED
  Total preference value for Volume A                                           656

Volume B
  Volume satisfies data set separation                                         4096
  Volume's associated Storage Group SMS status is QUIESCED                         0
  Volume does not reside in a non-overflow Storage Group                           0
  Volume resides in a control unit that does not support ACCESSIBILITY,            0
  and the Storage Class ACCESSIBILITY value is PREFERRED
  Total preference value for Volume B                                          4096


If multiple volumes are returned to allocation as being available, allocation will select one. If no volumes are returned, the data set cannot be allocated, and SMS will perform space constraint relief (if specified in the data class) and repeat the selection process.


Chapter 3.  DFSMSdfp enhancements

In this chapter we describe the changes introduced in the DFP component. The following topics are covered:

- Large volumes
- IDCAMS
- Catalog management
- CONFIGHFS
- VSAM
- CICS Record Level Sharing (RLS)
- Object Access Method (OAM)

The changes are in a variety of areas, some of which have implications for the user community, while others are internal and do not require any action.


3.1 Large volume support


The IBM TotalStorage Enterprise Storage Server (ESS) initially supported custom volumes of up to 10017 cylinders, the size of the largest standard volume, the 3390 model 9. This was the limit set by the operating system software. The IBM TotalStorage ESS Large Volume Support (LVS) enhancement, announced in November 2001, has now increased the upper limit to 32760 (x'7FF8') cylinders, approximately 27.8 GB. The enhancement is provided as a combination of IBM TotalStorage ESS licensed internal code (LIC) changes and system software changes, available for z/OS, OS/390, and z/VM. Large volumes can only be defined on the IBM TotalStorage Enterprise Storage Server (ESS) or equivalent vendor storage subsystems.

The problem of concurrent, non-related requests has been addressed by the introduction of Parallel Access Volumes (PAVs) for the IBM TotalStorage ESS, which enable more than one I/O operation to be active on a volume.

For DFSMS/MVS components, this support is provided as a Small Programming Enhancement (SPE) on OS/390 Release 10 (HDZ11F0) and integrated into z/OS Release 1.3 (HDZ11G0). You need to install large volume support on your systems before defining large volumes on the IBM TotalStorage ESS. Large volume support needs to be installed on all systems in a sysplex prior to sharing data sets on large volumes; shared system and application data sets cannot be placed on large volumes until all system images in a sysplex have the large volume support installed. Installation of PTFs on some components will require a system IPL to activate. Check PSP bucket information for required PTFs, and check with your OEM software product vendors for changes to their products which may be required in support of large volumes.
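As a rough check of that figure, using the standard 3390 device geometry quoted elsewhere in this book (56,664 bytes per track and 15 tracks per cylinder): 32,760 cylinders x 15 tracks x 56,664 bytes per track gives roughly 27.8 billion bytes, which is where the approximately 27.8 GB figure comes from.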

3.1.1 Large volume design considerations


Here are some considerations for large volume design:

- The S/390 I/O processor has an architectural device size limit of 32765 (x'7FFD') cylinders.
- The current software limit is 10017 cylinders.
- Records in sequential data sets are located using two-byte relative track addresses (TTR), which imposes a limit of 64K tracks per data set.
- Control blocks such as the Data Extent Block (DEB) and channel commands such as SEEK use two bytes for cylinder and head addressing (CCHH), which imposes a limit of 64K cylinders and tracks.


3.1.2 3390-9 overview


The 3390-9, with 10017 cylinders, gave a threefold increase in capacity over the 3390-3 and had the following advantages:

- More DASD space could be made available to configurations that were approaching the limit of 64K devices that can be defined for a z/OS image.
- Storage management costs were reduced by having smaller configurations to define and manage.
- Infrequently used data could be kept online.

3.1.3 Limitations of the 3390-9 solution


These are some limitations of the 3390-9 solution:

- With the original 3390-9, there were performance problems due to head movement; this was addressed, to a degree, by large control unit caches.
- The large capacity also meant the possibility of a large number of concurrent requests, which were single threaded due to the design of OS/390.

Therefore these volumes were of limited use if performance was a primary concern. This is also the case with the newer Redundant Array of Independent Disks (RAID) based subsystems, which emulate the 3390 device geometry.

3.1.4 Coexistence support


For DFSMS/MVS 1.4 and 1.5, a coexistence PTF is provided that allows these system levels to coexist in the same sysplex with LVS systems. You must install this PTF in order to prevent the unpredictable results that may arise from systems without large volume support accessing volumes that have more than 10017 cylinders. The coexistence PTF will:

- Prevent a device with more than 10017 cylinders from being varied online to the system.
- Prevent a device from coming online during an IPL if it is configured with more than 10017 cylinders.

Coexistence PTFs will also be required by DFSMShsm on all releases prior to OS/390 R10 because of the updates being made to the record format in the DFSMShsm control data sets. Coexistence support will not be available for DFSMS 2.10 and higher; install the large volume support prior to using data on large volumes at these release levels. No coexistence support will be available for DFSMS/MVS 1.3 (an unsupported release) or earlier.


3.1.5 EXCP considerations


Prior to this release, the flag used to indicate that a data set cannot be processed by EXCP (for example, a PDSE or an extended format data set) was the high-order bit of field DEBSTRCC; this flag has been moved to the high-order bit of DEBSTRHH+1. This information was not documented in any of the DFSMS publications available at the time of writing this book.

3.1.6 Interfaces and vendor code


All vendors should be contacted to determine if any code changes are required to support these volumes, particularly Sort and Database suppliers. The change in the DEB, described in the previous section, is most likely to affect SORTs, but locally written programs should also be checked to see if this change is relevant. Products which use TTR notation as an absolute value and not relative to the start of the data set, most likely to be database related, cannot have an ending cylinder address higher than 4369. This is not new, as it was a problem for 3390-9 users. However, JES2 is an example of a subsystem which has changed its code to use a relative displacement; APAR OW49373 gives the details.

3.1.7 Performance
We did not have the opportunity to do any performance testing, but the use of PAVs will be essential if performance is a consideration for the data on large volumes. If you are running on an IBM D/T2064 processor, or equivalent, the Workload Manager (WLM) controlled Dynamic CHPID management should be evaluated as it could improve overall DASD subsystem throughput.

3.1.8 Implementation considerations


All systems sharing DASD should be capable of recognizing large volumes before they are configured on the ESS. The LVS support will be rolled back to OS/390 V2R10; earlier systems will require PTFs to prevent large volumes from coming online during IPL or VARY online processing. As always, disaster recovery (DR) must be considered. If volume backups are being taken for this purpose, then the DR site must be capable of providing large volumes.


Do not simply run your standard volume initialization job. With potentially over 32000 cylinders available for allocation, some thought needs to be put into sizing the VTOC, indexed VTOC, and VVDS.
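A minimal ICKDSF sketch of initializing a large volume with an enlarged VTOC and VTOC index follows. The unit address, volume serial, and extent sizes are purely illustrative assumptions; size the VTOC and index for the number of data sets you expect to place on a volume of this capacity, and remember that the VVDS also needs appropriate sizing when the volume is first used for VSAM or SMS-managed data.

//INITLV   EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INIT UNITADDRESS(8300) VERIFY(LV8300) VOLID(LV8300) -
       STORAGEGROUP -
       VTOC(0,1,899) INDEX(60,0,30)
/*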

3.1.9 Required support


The PTFs required for support or tolerance of large volumes by earlier releases are listed in z/OS V1R3.0 DFSMS Migration, GC26-7398. Full support for large volumes is provided in OS/390 2.10 and all levels of z/OS. For other supported levels of OS/390, toleration support is provided. This support prevents you from varying large volumes online to these systems.

3.2 IDCAMS
Enhancements to GDG base processing and LISTCAT command output for both GDG bases and symbolic resolution provide you with information to better manage your storage environment.

3.2.1 Changes to GDG base processing


There are two related changes made to the processing of GDG bases: the removal of support for the specification of an expiration date on the base itself, and the addition of a last-altered date.

GDG alter date


A LISTCAT of a GDG base has a new field labelled LAST ALTER which displays the last date that a GDS was added to or removed from the GDG base. The alteration only applies to a change in status of a GDS; it is not an indication of changes to the GDG base itself. We show a section of the output from a LISTCAT in Figure 3-1. The date altered will be available via a new field, GDGALTDT, to the Catalog Search Interface (CSI). It will only be valid when requested for a GDG base. More information on the CSI is available in the manual z/OS V1R3.0 DFSMS Managing Catalogs, SC26-7409.


LISTC ENT(MHLRES4.TEST.GDG) ALL
GDG BASE ------ MHLRES4.TEST.GDG
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER----MHLRES3     CREATION--------2002.090
       RELEASE----------------2     LAST ALTER------2002.099
     ATTRIBUTES
       LIMIT------------------5     SCRATCH     NOEMPTY
     ASSOCIATIONS
       NONVSAM--MHLRES4.TEST.GDG.G0001V00
NONVSAM ---- MHLRES4.TEST.GDG.G0001V00

Figure 3-1 LISTCAT output of GDG base

GDG expiration date


From z/OS V1R3 DFSMS, it is no longer possible to specify or alter an expiration date or retention period for a GDG base. It is still possible to specify and manipulate expiration dates or retention periods for individual GDSs. The expiration date that used to be shown on a LISTCAT of the GDG base is no longer displayed. In Figure 3-2 we show the output of a LISTCAT issued against a GDG on an OS/390 2.10 system.

LISTC ENT(MHLRES4.TEST.GDG) ALL
GDG BASE ------ MHLRES4.TEST.GDG
     IN-CAT --- CATALOG.CS
     HISTORY
       DATASET-OWNER----MHLRES4     CREATION--------1996.246
       RELEASE----------------2     EXPIRATION------0000.000
     ATTRIBUTES
       LIMIT------------------8     SCRATCH     NOEMPTY
     ASSOCIATIONS

Figure 3-2 Output from LISTC on pre z/OS V1R3 DFSMS system

When you migrate to z/OS V1R3 DFSMS, any expiration dates applied to GDG bases will no longer be effective; it will be possible to delete any GDG base without specifying the PURGE operand. There will be no way of determining whether an expiration date had been assigned to a GDG base, and if so, what its value was.


In Figure 3-3 we show the output from an attempt to alter the expiration date of a GDG on a z/OS V1R3 DFSMS system.

ALTER MHLRES4.TEST.GDG TO(99365)
IDC3019I INVALID ENTRY TYPE FOR REQUESTED ACTION
IDC3009I ** VSAM CATALOG RETURN CODE IS 60 - REASON CODE IS IGG0CLE8-30
IDC0532I **ENTRY MHLRES3.TEST.GDG NOT ALTERED

Figure 3-3 Output from attempt to alter expiration information for GDG base

If you have any processing based on the value found in the GDG expiration date field you will need to review this before upgrading your first system to z/OS V1R3 DFSMS.
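Expiration can still be controlled for the individual generation data sets. As a minimal sketch, using the generation from the earlier example (the date shown is arbitrary):

   ALTER MHLRES4.TEST.GDG.G0001V00 TO(2005365)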
Note: Please ensure that you install the fix for HIPER APAR OW53804 when you install z/OS V1R3 DFSMS. We have included the text for this APAR in Maintenance information on page 171. The PTFs for this APAR should be applied to all down level systems that will share catalogs with a system running z/OS V1R3 DFSMS.

3.2.2 Extended alias support


Extended alias support was introduced in DFSMS 1.5 and allowed data set names to be indirectly accessed by the use of system symbolics. For testing we defined the alias in Figure 3-4.
DEFINE ALIAS(NAME('MHLRES4.TEST.ALIAS') -
       SYMBOLICRELATE('MHLRES4.&SYSNAME..ALIASTST'))

Figure 3-4 Defining a symbolic alias

The variable &SYSNAME resolves to SC64 on the system where this was tested; therefore references to MHLRES4.TEST.ALIAS should access data set MHLRES4.SC64.ALIASTST. There are several potential causes for the association between alias and data set not being successfully resolved, such as these:

- The symbol is not in use yet, because the system has not been IPLed.
- The data set has been given the wrong name.
- The symbol is entered incorrectly in the IDCAMS DEFINE.
- The symbol is entered incorrectly in the PARMLIB member.


Prior to this release, listing the catalog would show the output in Figure 3-5, which tells you that the symbolic was entered correctly, but nothing more.

ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SYSNAME..ALIASTST

Figure 3-5 Output from a LISTC of a symbolic alias prior to z/OS V1R3 DFSMS

In this release, there are two possible results, shown in Figure 3-6 and Figure 3-7.

ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SUSNAME..ALIASTST
       RESOLVED-MHLRES4.&SUSNAME..ALIASTST

Figure 3-6 LISTC of an unresolved symbolic alias

This tells you that the symbolic SUSNAME could not be resolved, which in this case would enable you to solve the problem, as it should have been SYSNAME.

ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SYSNAME..ALIASTST
       RESOLVED-MHLRES4.SC64.ALIASTST

Figure 3-7 LISTC of a successfully resolved symbolic alias

In Figure 3-7, you can see that the symbolic has been successfully resolved, so you will need to start looking to see if the data set is correctly defined.


3.3 Catalog management


In this section we describe some of the changes that have been made to catalog management.

3.3.1 Defining catalogs


The RECORDSIZE parameter will be ignored when defining catalogs, and a value of (4084,32400) will be used. This will give better performance, as smaller values force the creation of extension records.

3.3.2 Data set name validity checking


Neither normal nor dynamic allocation allows data sets specified in quotes to be cataloged, because they might have names that break the syntax rules. However, as catalog management did minimal checking, products which used the SVC26 CATALOG interface could catalog data sets with invalid names. This could cause difficulties with ISPF and when trying to uncatalog them. With the new checking, the same rules as used by allocation will be applied, and if the rules are broken the catalog request will be rejected with a return code X'1C'; it will not cause an abend. A description of the characters which should not be used can be found in the z/OS MVS JCL Reference SA22-7597-02 in the DSNAME section. These rules are not changed by z/OS V1R3 DFSMS. The checking is on by default. If it is known or suspected that it could be an issue, it can be turned off by issuing the MODIFY CATALOG command shown in Figure 3-8.

MODIFY CATALOG,DISABLE(DSNCHECK)
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED

Figure 3-8 Disabling data set name checking

You can use the MODIFY CATALOG,REPORT command to check the status of data set name checking. MODIFY CATALOG,ENABLE(DSNCHECK) can be used to reinstate data set name checking.
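For example, assuming the usual abbreviation of F for MODIFY, checking could be reinstated from the console with:

F CATALOG,ENABLE(DSNCHECK)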


F CATALOG,REPORT
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC359I CATALOG REPORT OUTPUT 431
*CAS**************************************
* CATALOG COMPONENT LEVEL = HDZ11G0 *
* CATALOG ADDRESS SPACE ASN = 002F *
* SERVICE TASK UPPER LIMIT = 180 *
* SERVICE TASK LOWER LIMIT = 60 *
* HIGHEST # SERVICE TASKS = 5 *
* CURRENT # SERVICE TASKS = 5 *
* MAXIMUM # OPEN CATALOGS = 1,024 *
* ALIAS TABLE AVAILABLE = YES *
* ALIAS LEVELS SPECIFIED = 1 *
* SYS% TO SYS1 CONVERSION = OFF *
* CAS MOTHER TASK = 007AF898 *
* CAS MODIFY TASK = 007AF608 *
* CAS ANALYSIS TASK = 007A0E88 *
* CAS ALLOCATION TASK = 007AF2E0 *
* VOLCAT HI-LEVEL QUALIFIER = SYS1 *
* DELETE UCAT/VVDS WARNING = ON *
* DATA SET SYNTAX CHECKING = ENABLED *
*CAS**************************************
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED

Figure 3-9 Output from MODIFY CATALOG REPORT

3.3.3 Performance, diagnostic, and nice-to-have


The next several enhancements are designed to reduce the effort and time spent diagnosing problems that involve the catalog address space.

Dumping CAS with user jobs


Catalog management is taking advantage of the option to have an exit get control during SVC dump processing. If an address space being dumped has an active catalog request, then the catalog address space will also be dumped. This facility is not enabled by default; to enable it, you must issue the following command:
CHNGDUMP SET,SDUMP=(SERVERS),ADD


This could be done by automation or by the use of the COMMNDxx member in PARMLIB. Other components, for example JES2, are already making use of the SERVERS function. We highly recommend this as it can save having to reproduce a problem. Enabling this may add a small additional overhead to dump processing but this should only be seen where multiple address spaces are being dumped. You may need to increase the size of your dump data sets to accommodate the additional address spaces being included.
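For example, to enable this automatically at every IPL, an entry along the following lines could be added to a COMMNDxx member of PARMLIB (the member suffix and placement are your choice):

COM='CHNGDUMP SET,SDUMP=(SERVERS),ADD'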

Catalog performance statistics


Catalog management is often implicated in performance issues and it is worth considering using automation to issue the following command on a regular basis:
MODIFY CATALOG,REPORT,PERFORMANCE

For example, this could be done every shift change. This will give a rolling history which can be used for diagnostic purposes. To make this more useful, there is a minor change to the output of the MODIFY CATALOG REPORT,PERFORMANCE command, which now has a heading line showing the start time of the period that the statistics apply to, an example of which is shown in Figure 3-10. A new operand, RESET, has been added to the MODIFY CATALOG PERFORMANCE command; this allows you to reset the statistics gathered. We show this command in Figure 3-11. We recommend automating the regular issuing of the command MODIFY CATALOG,REPORT,PERFORMANCE followed by the command MODIFY CATALOG,REPORT,PERFORMANCE(RESET), so that over time, you can get a feel for the normal catalog workload on your system. This will enable you to recognize changes in catalog behavior more easily.
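As an illustration, an automation product could issue this pair of commands at each shift change, first capturing the current statistics and then resetting them; the schedule itself is only a suggestion:

F CATALOG,REPORT,PERFORMANCE
F CATALOG,REPORT,PERFORMANCE(RESET)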


F CATALOG,REPORT,PERFORMANCE IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE IEC359I CATALOG PERFORMANCE REPORT 442 *CAS*************************************************** * Statistics since 22:59:05.28 on 04/08/2002 * * -----CATALOG EVENT-----COUNT-- ---AVERAGE--- * * Entries to Catalog 7,602 23.171 MSEC * * BCS ENQ Shr Sys 8,285 0.279 MSEC * * BCS ENQ Excl Sys 20 0.796 MSEC * * BCS DEQ 15,024 0.253 MSEC * * VVDS RESERVE CI 2,360 1.463 MSEC * * VVDS DEQ CI 2,360 0.234 MSEC * * VVDS RESERVE Shr 15,708 0.266 MSEC * * VVDS RESERVE Excl 22 0.401 MSEC * * VVDS DEQ 15,730 0.276 MSEC * * SPHERE ENQ Excl Sys 2 0.261 MSEC * * SPHERE DEQ 2 0.205 MSEC * * CAXWA ENQ Shr 2 0.072 MSEC * * CAXWA DEQ 2 0.069 MSEC * * VDSPM ENQ 8,305 0.152 MSEC * * VDSPM DEQ 8,305 0.158 MSEC * * BCS Get 7,190 0.351 MSEC * * VVDS I/O 18,101 5.755 MSEC * * VLF Define Major 9 0.005 MSEC * * VLF Identify 10,989 0.000 MSEC * * BCS Allocate 4 0.342 MSEC * * SMF Write 1,098 0.053 MSEC * * IXLCONN 2 1.351 SEC * * IXLCACHE Read 6 0.062 MSEC * * MVS Allocate 10 110.765 MSEC * * Capture UCB 11 0.012 MSEC * * Uncapture UCB 64 0.010 MSEC * * SMS Active Config 2 0.590 MSEC * * RACROUTE Auth 162 0.216 MSEC * * ENQ SYSZPCCB 584 0.024 MSEC * * DEQ SYSZPCCB 584 0.016 MSEC * *CAS*************************************************** IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED

Figure 3-10 Output from the MODIFY CATALOG PERFORMANCE command

In Figure 3-11 we show the output from the command:


MODIFY CATALOG,REPORT,PERFORMANCE(RESET)


F CATALOG,REPORT,PERFORMANCE(RESET)
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED

Figure 3-11 Output from MODIFY CATALOG PERFORMANCE(RESET) command

Obtaining dumps of catalog


The MODIFY command is used to force a dump when a particular event occurs. MODIFY CATALOG,DUMPON(rc,rsn,mm,cnt) used to require that the rc, rsn and cnt fields be three characters with leading zeroes if necessary. This has been relaxed, and they can now be one to three characters.

3.4 CONFIGHFS
The CONFIGHFS command is used to display usage statistics for the HFS data set that contains the path name specified. Figure 3-12 shows the results of a CONFIGHFS command.

3.4.1 How it works today


Today, you can only issue the CONFIGHFS command for a path name on the system which owns the HFS.

3.4.2 How it works with this release


The enhancement to CONFIGHFS allows the command to be issued for any path name from any system in the sysplex, assuming that the system on which the HFS is mounted is also running z/OS V1R3 DFSMS. The system which owns the HFS is considered to be the server for any CONFIGHFS command requesting information for a path name residing in that HFS. When issuing the CONFIGHFS command on a system for a path name which resides in an HFS owned by a different system, the system from which the command was issued acts as a client and routes the command to the owning system, or server. When issuing the CONFIGHFS command on a system for a path name which resides in an HFS owned by the same system, the command is processed locally, as it is with previous releases.


The output from the command when issued for a HFS mounted on a different system as opposed to a HFS mounted on the local system is exactly the same. See Figure 3-12 for an example.

/>confighfs /etc/httpd.conf Statistics for file system HFS.SC64.ETC ( 03/27/02 12:39pm ) File system size:_____14220 (pages) ________55.547(MB) Used pages: _____12954 (pages) ________50.602(MB) Attribute pages: ________42 (pages) _________0.164(MB) Cached pages: ________32 (pages) _________0.125(MB) Seq I/O reqs: ___________________0 Random I/O reqs: __________________26 Lookup hit: _________________115 Lookup miss: _________________181 1st page hit: _________________334 1st page miss: __________________30 Index new tops: ___________________0 Index splits: ___________________0 Index joins: ___________________0 Index read hit: _________________480 Index read miss: ___________________5 Index write hit: __________________44 Index write miss:___________________0 RFS flags __________________43(HEX) RFS error flags: ___________________0(HEX) High foramt RFN: ________________3310(HEX) Member count: _________________388 Sync interval: __________________60(seconds)

Figure 3-12 CONFIGHFS command output

3.4.3 Other enhancements


ISPF options 3.2 and 3.4 have been enhanced to correctly return full details of HFS data sets owned or mounted on different systems in the sysplex. On previous releases you received an IGWFAMS failed RC=12 message when requesting information via ISPF 3.2 or 3.4.I, indicating that the HFS was not mounted on the system where the command was issued from. This error is no longer generated and full usage information is returned.


There is also a new symbolic link for confighfs:


/usr/sbin/confighfs -> /usr/lpp/dfsms/bin/confighfs

This removes the requirement to add the install path for confighfs to your path statement (if /usr/sbin is already in your path) and provides a common location for commands.
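For example, from the z/OS UNIX shell you can now invoke the command directly, without adding the DFSMS install directory to your PATH (the path name used here is only an illustration):

/usr/sbin/confighfs /u/mhlres4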

3.5 VSAM
There are no programming interface changes in this release, but there are changes to keyword support and performance enhancements.

3.5.1 VSAM parameter definition support removed


The KEYRANGE parameter will be ignored in this release. This is a continuation of the policy, started in DFSMS 2.10, under which IMBED, REPLICATE, and ORDERED were removed. The original use of these parameters was to position data such that head movement was reduced, which improved performance on the older DASD subsystems. With the current RAID-based DASD subsystems this is no longer a requirement. Clusters that include KEYRANGE on the define statement will be defined as NONKEYRANGE; no messages are issued during the define. Existing data sets with these attributes can still be processed. We recommend that such data sets be redefined where possible. Error messages are not issued when these data sets are created, but are issued whenever a keyrange data set is opened for output. You will receive the errors seen in Figure 3-13 when attempting to open an existing keyrange data set for output.

IEC161I 254(002)-002,HSM,HSM,MIGCAT,,,DFHSM.CS01.MCDS1
IEC161I 254(002)-002,HSM,HSM,MIGCAT2,,,DFHSM.CS01.MCDS2

Figure 3-13 Open error for VSAM cluster defined with keyrange

For further details, refer to informational APARs II12431 and II12896. The text for these is included in Maintenance information on page 171. For additional information about the impact of the removal of this support, we recommend that you review Flash10072, which can be found at the Web site:
http://www-1.ibm.com/support/techdocs/atsmastr.nsf/PubAllNum/Flash10072
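If you decide to redefine a keyrange data set, one common approach is to define a new cluster without KEYRANGE and copy the records into it with REPRO. The IDCAMS sketch below is illustrative only; the data set names and attributes are hypothetical and should be taken from a LISTCAT of the existing cluster:

DEFINE CLUSTER (NAME(MHLRES4.NEW.KSDS) -
       INDEXED KEYS(8 0) RECORDSIZE(80 80) -
       CYLINDERS(10 5))
REPRO INDATASET(MHLRES4.OLD.KSDS) OUTDATASET(MHLRES4.NEW.KSDS)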


3.5.2 System managed buffering


System managed buffering (SMB) was introduced in DFSMS 1.4. Its function, for VSAM extended format data sets, is to decide on the optimum number of I/O buffers based on the type of access. I/O buffers are used by VSAM to read and write control intervals from DASD to virtual storage. For a key-sequenced data set or variable-length RRDS, VSAM requires a minimum of three buffers, two for data control intervals and one for an index control interval. (One of the data buffers is used only for formatting control areas and splitting control intervals and control areas.) The VSAM default is enough space for these three buffers. Only data buffers are needed for entry-sequenced data sets, fixed-length RRDSs, or linear data sets. To increase performance, there are parameters to override the VSAM default values. There are five places where these parameters can be specified (a JCL example follows this list):
- BUFFERSPACE, specified in the access method services DEFINE command. This is the least amount of storage ever provided for I/O buffers.
- BUFSP, BUFNI, and BUFND, specified in the VSAM ACB macro. This is the maximum amount of storage to be used for a data set's I/O buffers. If the value specified in the ACB macro is greater than the value specified in DEFINE, the ACB value overrides the DEFINE value.
- BUFSP, BUFNI, and BUFND, specified in the JCL DD AMP parameter. This is the maximum amount of storage to be used for a data set's I/O buffers. A value specified in JCL overrides DEFINE and ACB values if it is greater than the value specified in DEFINE.
- ACCBIAS, specified in the JCL DD AMP parameter. Record Access Bias has six specifications:
  - SYSTEM: Force system-managed buffering and let the system determine the buffering technique based on the ACB MACRF and storage-class specification.
  - USER: Bypass system-managed buffering.
  - SO: System-managed buffering with sequential optimization.
  - SW: System-managed buffering weighted for sequential processing.
  - DO: System-managed buffering with direct optimization.
  - DW: System-managed buffering weighted for direct optimization.
- Data class Record Access Bias = SYSTEM or USER.
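As a simple illustration of the JCL options above, the DD statements below request direct optimization through ACCBIAS, or explicit buffer counts through BUFND and BUFNI; the data set name and values are hypothetical:

//INPUT    DD DISP=SHR,DSN=MHLRES4.TEST.KSDS,
//            AMP=('ACCBIAS=DO')
//INPUT2   DD DISP=SHR,DSN=MHLRES4.TEST.KSDS,
//            AMP=('BUFND=20','BUFNI=10')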


In addition to the foregoing, there are two internal access techniques that are used during load-mode processing and data set creation. These cannot be specified by the user and will be invoked internally if the data set is in load mode (HURBA=0) and the keyword SYSTEM is specified for Record Access Bias in the data class or ACCBIAS in the JCL. These two techniques are:
- CO: System-managed buffering with Create Optimization. This is used if SPEED is specified at data set creation.
- CR: System-managed buffering with Create Recovery optimization. This is used if RECOVERY is specified at data set creation.

Whats new
There are two enhancements to SMB in this release: Retry capability for DO access bias AIX support

Retry capability
Currently, SMB defaults to using the Direct Weighted (DW) access bias when a failure occurs building a shared resource pool (LSR) for the buffers and I/O related control blocks during Direct Optimize (DO). With this release, SMB will make two attempts with fewer resources than are considered optimum before resorting to using DW. There will be two attempts to reduce the buffers for the data pool and one attempt for the index pool. An optimum data pool size, including buffer resources, is 20% of the data set allocation.

How it works
If an attempt to build an optimum data pool for DO processing results in a failure due to insufficient virtual storage, an attempt will then be made to build a pool with reduced resources equal to one-half of the optimum amount. Two additional checks are made before this second attempt:
- If the optimum amount was already below the minimum pool size for DO, then the DW access bias will be used immediately.
- If one-half of the optimum amount is less than or equal to the minimum, this attempt at reducing buffer requirements will be skipped and an allocation of the minimum pool size will be attempted.


The retry process works as follows:
1. If the first attempt fails, retry with the number of buffers for the data component reduced to 50% of the optimum.
2. If this fails, retry with the number of buffers for the data component reduced to the minimum.
3. If this fails, retry with the number of buffers for the index component reduced to the minimum.
4. If this fails, revert to Direct Weighted (DW).
Note: The minimum data pool size is one megabyte (1 MB), and the minimum index pool size will include enough resources to contain the entire index set plus 20% of the sequence set records.

AIX support
SMB has been enhanced to support the Direct Optimize access bias for VSAM data sets with associated AIXs. VSAM Open processing passes information to SMB for a single sphere and all related components; this information now includes the intent of the open of the cluster/AIX. The intent is either general purpose or upgrade only. The number of data buffers is based on the attribute relating to the open intent. The number of index buffers follows the same calculation used in the current implementation. All related components are defined as the base of the sphere and all associated AIXs. For more detail on SMB, there is a section in the redbook VSAM Demystified, SG24-6105. There is an SMF record change associated with this support, which is described in Appendix A, Record changes in z/OS V1R3 DFSMS on page 165.

3.6 Large real storage


Buffers for VSAM data sets will be GETMAINed with the option of backing them in 64-bit real storage frames. At the present time there are a few exploiters, other than z/OS, of 64-bit storage. Therefore, this change should improve overall real storage use and, as page stealing is less likely, improve VSAM performance. This is part of the support provided by the Media Manager, which is capable of I/O operations to 64-bit storage.

3.6.1 Media Manager


In this release, Media Manager will be the I/O manager for all VSAM requests, apart from Improved Control Interval Processing (ICIP). The Media Manager is a high performance I/O processor and has been used for many years by performance orientated callers, such as DB2 and IMS. As well as its software performance, callers of Media Manager get the advantages of new hardware function such as the ESS Read Track. It currently provides the support for functions such as striping and extended format sequential.

3.7 REUSE for striped data sets


This support, which has been made available to earlier releases via APAR OW50528, allows the specification of the REUSE parameter with striped data sets. This could decrease the elapsed times of batch jobs which process large DB2 tablespaces. APAR OW54128 is required for data sets that have a large primary allocation.

3.8 Expiration date and retention period


Prior to z/OS V1R3 DFSMS, it was not possible to change the retention period or expiration date of a data set just by specifying a new date on the JCL that referenced the data set. This is now possible. The following rules are imposed by this support:
- The data set that has the new date specified must be opened in the step that uses the new date.
- The newly derived date must be within the boundaries allowed by the management class assigned to the data set. If the final expiration date is beyond that allowed by the management class, then an expiration date equal to the maximum imposed by the management class is set.
- If a retention period is set, the start date is calculated from the data set creation date specified in the data set's FM1 VTOC entry rather than from the current date.


Figure 3-14 shows an example of this process; the JCL and output have been edited to remove some information. In this case the management class specified NOLIMIT for both EXPIRE DAYS/DATE and RETENTION PERIOD.

//S1       DD DISP=(,CATLG),DSN=MHLRES4.TEST1.EXP4,
//            SPACE=(TRK,(1,1)),LRECL=80,RECFM=FB,DSORG=PS,
//            STORCLAS=STANDARD,MGMTCLAS=MC365,RETPD=5
//*
//S1       EXEC PGM=IKJEFT01
//S1       DD DISP=OLD,DSN=MHLRES4.TEST1.EXP4,RETPD=800
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 LISTC ENT('MHLRES4.TEST1.EXP4') ALL
 REPRO IDS('MHLRES4.INPUT') OFILE(S1)
 LISTC ENT('MHLRES4.TEST1.EXP4') ALL

LISTC ENT('MHLRES4.TEST1.EXP4') ALL
NONVSAM ------- MHLRES4.TEST1.EXP4
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER-----(NULL)   CREATION--------2002.085
       RELEASE----------------2   EXPIRATION------2002.090
       ACCOUNT-INFO-----------------------------------(NULL)
READY
REPRO IDS('MHLRES4.INPUT') OFILE(S1)
NUMBER OF RECORDS PROCESSED WAS 9116
READY
LISTC ENT('MHLRES4.TEST1.EXP4') ALL
NONVSAM ------- MHLRES4.TEST1.EXP4
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER-----(NULL)   CREATION--------2002.085
       RELEASE----------------2   EXPIRATION------2004.155
       ACCOUNT-INFO-----------------------------------(NULL)
READY

Figure 3-14 Changing the expiration date of an existing data set

Please note that if the data set you are altering already has an expiration date or retention period set, then you will receive message IEC507D at the console when you attempt to access the data set. We show this in Figure 3-15.
*IEC507D E 37E4,MHLS2A,JMEEXPD3,S1,MHLRES4.TEST1.EXP4
*193 IEC507D REPLY 'U'-USE OR 'M'-UNLOAD

Figure 3-15 Attempting to overwrite data set with an existing expiration date


Your operational procedures should ensure that this message is responded to appropriately. It is possible that you will see new occurrences of this message, as JCL that is running today may be specifying a retention period or expiration date, and this is being ignored.

3.9 Record level sharing


Enhancements to record level sharing (RLS) include caching of data CIs larger than 4K and an enhanced user-managed rebuild of the lock structure.

3.9.1 Coupling facility structures


Record level sharing (RLS) uses two coupling facility (CF) structures, one cache and one lock. The cache structure is used to keep copies of control intervals (CIs) from VSAM data sets. The lock structure provides the mechanism to control the sharing of data at a record level.

3.9.2 Caching CIs larger than 4K


Prior to this release, CIs with a size larger than 4K were not kept in the cache structure because of performance issues with large records in the earlier CF models, and because retrieval from cached DASD was more effective. The current CF models have increased performance and can support the larger record sizes. Therefore, in this release, it is optional whether CIs greater than 4K are cached. There are two parameters which control this. A new data class parameter, as shown in Figure 3-16, allows you to choose to cache all CIs, none at all, or only those updated.

RLS CF Cache Value

. . . . . . N

(A=ALL, N=NONE, U=UPDATESONLY)

Figure 3-16 RLS coupling facility cache data class parameter

There is also a new parameter in the IGDSMSxx PARMLIB member, RLS_MAXCFFEATURELEVEL, which can have a value of A or Z (the default):
- If Z is specified, or allowed to default, then caching of CIs greater than 4K is not allowed, even if it is specified in the data class.
- If A is specified, then caching of CIs greater than 4K is permitted, if specified in the data class.
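For example, to permit caching of the larger CIs you might code the keyword on the SMS statement of your IGDSMSxx member; the ACDS and COMMDS data set names shown are placeholders for your own values:

SMS ACDS(SYS1.SMS.ACDS) COMMDS(SYS1.SMS.COMMDS) RLS_MAXCFFEATURELEVEL(A)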


To determine the support on a z/OS V1R3 DFSMS system issue the command:
D SMS,SMSVSAM

The reply to this will include the output shown in Figure 3-17.

DISPLAY SMS,SMSVSAM - GLOBAL CACHE FEATURE PARMLIB VALUES
MAXIMUM CF CACHE FEATURE LEVEL = Z
DISPLAY SMS,SMSVSAM - CACHE FEATURE CODE LEVEL VALUES
SYSNAME: SC63   CACHE FEATURE CODE LEVEL = A
CACHE FEATURE LEVEL DESCRIPTION:
Z = No Caching Advanced functions are available
A = Greater than 4K Caching code is available

Figure 3-17 DISPLAY SMS,SMSVSAM command output

This shows that, on this particular system, the code is installed to support greater than 4K caching, but it has not been enabled in the PARMLIB member.
Note: The support for CIs which are 4K or less in size is unchanged.

3.9.3 Lock structures


All lock structures contain lock table entries, which are used by the structure owners to indicate the use of resources. The lock table entries are allocated through the use of a hash value, and the original resource name cannot be derived from them. There is also the option to have record data entries, which are 64 bytes long and contain ownership information about a resource. These can be used for recovery purposes in the event of the failure of a sharing system.

RLS lock structure


The RLS lock structure contains both types of entries. The resources represented by the lock table entries are records from VSAM data sets. The record data entries contain information required to perform recovery and maintain data integrity in the event of a sharing system failing.


Rebuild and alter


The size of the lock structure can be changed as part of the user-managed rebuild or altered by command or system action. There is currently no check for the possibility of running out of record data space in the new structure. If the structure is too small, the eventual result will either be a 0F4 abend or a record management error.

Structure change validation and message automation


A validation routine has been added which will get control during the user-managed rebuild of the lock structure. If the space is insufficient, the rebuild is rejected and message IGW322I will be issued. Message IGW322I is also issued if the structure size is altered and the new size is likely to be too small for future processing. In this case, the alter cannot be rejected and processing continues. In both of these cases, manual action is required to allocate the structure, and automation should be used to alert the operations staff.

3.10 OAM enhancements


Object Access Method (OAM) is an access method that provides storage, retrieval, and storage hierarchy management for objects. It also provides storage and retrieval management for tape volumes contained in and outside of system-managed libraries. In this section we describe the OAM enhancements in z/OS V1R3 DFSMS. There are two major enhancements with z/OS 1.3 DFSMS OAM:
- Multiple object backup support
- Improved reliability and usability

3.10.1 Multiple object backup support


The multiple object backup support allows you to make a second backup copy of objects based on the object storage group to which the object belongs. You can direct Object Access Method (OAM) to create up to two backup copies of objects using current fields in the SMS management class construct, and by specifying a first and a second Object Backup Storage Group (OBSG) for your object storage groups in the CBROAMxx member of PARMLIB. Additionally, you can direct OAM to write the first and second backup copies on the same removable media type or on different removable media types.


Migration
There are some migration tasks which are required when moving to the z/OS V1R3 release of OAM if you use object backup support. The CBRSMR13 SAMPLIB job must be run if you are migrating from DFSMS/MVS 1.5.0 or OS/390 V2R10 to the current release of DFSMS. The CBRSMR13 SAMPLIB member contains two jobs, SMR13A and SMR13B, that modify the object directories. They require customization before running:
- SMR13A performs the migration from any DFSMS/MVS 1.5.0 or OS/390 V2R10 system level of the optical configuration database to the z/OS V1R3 version that supports the multiple OBSGs and the second object backup function in the OAM storage management component (OSMC). This job adds a new column BKTYPE to the existing VOLUME table. It also adds a new column BKTYPE to the existing TAPEVOL table. For recovery purposes, we recommend that you create a DB2 image copy of the existing VOLUME and TAPEVOL tables prior to executing this migration job.
- SMR13B performs the migration from the base version of the OAM object directory tables to the z/OS V1R3 version, which supports second backup copies of objects. This job adds new columns ODBK2LOC and ODBK2SEC to the existing object directory tables. For recovery purposes, we recommend that you create a DB2 image copy of the existing object directory tables prior to running this migration job.
Note: After running the CBRSMR13 migration job you may need to run a DB2 reorganization after performing an ALTER to the table.

There are also several optional migration tasks. These tasks need only be performed if you intend to exploit the multiple object backup support:
- Update automatic class selection (ACS) routines to accommodate the new tape data set name of OAM.BACKUP2.DATA. OAM.BACKUP2.DATA is a new tape data set that will be created on OAM tapes that belong to the OBSGs and contain the second backup copies of objects. You will need to perform this step:
  - If you are implementing multiple object backup support and storing second backup copies on tape media
  - If the tape is to be SMS managed.
- Update the SMS management class definition construct to indicate Autobackup is allowed.


- Update the SMS management class definition construct to indicate the number of backup versions of objects that you want. The field that displays the number of backup versions now displays the number of backup versions to be maintained for objects. The default value for this field is two, and any value greater than one is treated as two.
- Define multiple OBSGs using ISMF.
- Add the SETOSMC statement to the CBROAMxx member of PARMLIB. Associate at least one OBSG with a SECONDBACKUPGROUP keyword in a SETOSMC statement in order to write a second backup copy of an object. The SETOSMC statement and its associated keywords determine which OBSGs contain the first and second backup copies of the objects that are associated with an object storage group. If SETOSMC statements are not provided, OAM will not process second backup copies of objects.
For detailed migration information refer to z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide Object Support, SC35-0426.

Co-existence
Toleration/coexistence APAR OW47941 is required on all systems which will be running with z/OS 1.3 OAM in the same OAMplex. This APAR introduces several changes:
- It enables previous releases of OAM to coexist in an OAMplex with the z/OS V1R3 level.
- It enables previous releases of OAM to fall back to the original version (down to DFSMS 1.4.0).
- It introduces modified control blocks in OAMplex XCF messages to accommodate different versions of OAM control blocks in the OAMplex.
- There are OAM messages issued for toleration support on lower level DFSMS systems.
Lower level systems will only use a single OBSG but can share the SCDS in an OAMplex with a z/OS V1R3 system that has multiple OBSGs defined. When OAM encounters multiple OBSGs defined to a lower level system, the last one defined in the SCDS will be selected for use. OAM will issue message CBR0230D. You can choose to use the last OBSG to contain all backup copies of objects, or specify that another is to be used. If another OBSG is to be used, message CBR0231A will be issued, which allows another OBSG name to be used for writing backup copies of objects. If multiple OBSGs are not defined and no SETOSMC statements are specified in the CBROAMxx PARMLIB member, OAM will continue to function as it did prior to z/OS V1R3 DFSMS.


If multiple OBSGs are defined but no SETOSMC statements are specified in the CBROAMxx PARMLIB member, OAM will issue CBR0231A to verify which object storage group should be used for backup processing. Second backup copies of objects will not be written.

Changed operator commands


There have been changes made to OAM operator commands in support of the multiple object backup support function in OAM.
D SMS,OAM

CBR1100I has been modified to display which backup copy, if any, is being used for Access Backup processing.
D SMS,OSMC,TASK(name)

CBR9370I has been modified to show statistics for the number of internal work items queued on the work and wait queues and the number of internal work items completed by the write first and write second backup service during OSMC processing.
D SMS,STORGRP(group_name),DETAIL

CBR1130I has been modified to include the names of the first and second backup storage groups associated with this object storage group.
D SMS,STORGRP(ALL),DETAIL

CBR1130I has been modified slightly for readability.


D SMS,VOLUME(volser)

CBR1140I has been modified for readability and contains a new field to indicate if the volume is used to write first or second backup copies of objects: BACKUP TYPE: (BACKUP1|BACKUP2).
F OAM,START,AB,reason[,BACKUP1|BACKUP2]

Syntax modified to allow specification of BACKUP1 or BACKUP2.


F OAM,START,RECOVERY,volser[,BACKUP1|,BACKUP2]

Syntax changed to allow optional specification of BACKUP1 or BACKUP2.


F OAM,START,OBJRECV,col,obj[,BACKUP1|,BACKUP2]

Syntax changed to allow optional specification of BACKUP1 or BACKUP2.

New OAM commands


There is a new operator command to display the current SETOSMC settings, both globally and by storage group. The backup storage group can be specified at both the global and the storage group level:
F OAM,DISPLAY,SETOSMC,ALL


CBR1075I GLOBAL VALUE FOR BACKUP1 IS backup1
CBR1075I GLOBAL VALUE FOR BACKUP2 IS backup2
F OAM,DISPLAY,SETOSMC,group-name

CBR1075I group_name VALUE FOR BACKUP1 IS backup1
CBR1075I group_name VALUE FOR BACKUP2 IS backup2

Changed CBROAMxx PARMLIB options


New parameters have been added to the SETOSMC and the SETOSMC STORAGEGROUP statements. This is the SETOSMC syntax:
SETOSMC FIRSTBACKUPGROUP(global_1st_bu_group)
SETOSMC SECONDBACKUPGROUP(global_2nd_bu_group)

These keywords specify the default first and second backup storage groups at the global level. They will be used as the default OBSGs when both of the following are true:
- The object storage group to which the object belongs does not have FIRSTBACKUPGROUP and SECONDBACKUPGROUP parameters specified on its own SETOSMC STORAGEGROUP statement.
- The management class that is assigned to the object specifies that a first or second backup copy can be written.
SETOSMC STORAGEGROUP(obj_storage_group FIRSTBACKUPGROUP(1st_bu_group))
SETOSMC STORAGEGROUP(obj_storage_group SECONDBACKUPGROUP(2nd_bu_group))

If no second backup group is defined, either globally or on a specific object storage group, then no second backup copy will be taken regardless of the settings in the management class.
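Putting these statements together, a CBROAMxx member that provides global defaults plus an override for one object storage group might contain entries like the following; all of the group names are hypothetical:

SETOSMC FIRSTBACKUPGROUP(OBKUP1)
SETOSMC SECONDBACKUPGROUP(OBKUP2)
SETOSMC STORAGEGROUP(GROUPA SECONDBACKUPGROUP(OBKUP2X))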

Multiple object backup scenarios


Here we present three scenarios which explain what backups are taken, based on the management class setting and the SETOSMC statements. Keep in mind that an object storage group can be made up of objects that have different management classes, and therefore can have different numbers of backup versions assigned to them. This will affect how they are treated during OSMC processing. For the sake of simplicity, in the following scenarios all objects in an object storage group have the same management class assigned.

Scenario one
Management class settings:


Auto Backup = Y
Number of Backup Versions = 2

CBROAMxx PARMLIB settings:


SETOSMC STORAGEGROUP(GROUP22 FIRSTBACKUPGROUP(BACK00A) SECONDBACKUPGROUP(BACK00B))
SETOSMC STORAGEGROUP(GROUP44 FIRSTBACKUPGROUP(BACK00A))

Results of running the OAM space management cycle (OSMC):
- GROUP22 object: two backup copies successfully written
- GROUP44 object: fails (X'0472') attempting to write the second backup
Note: In this scenario, because one of the storage groups has a second backup group defined and the number of backup versions is set to two, two backups will be attempted for all storage groups but will only succeed for storage groups with the SECONDBACKUPGROUP parameter specified.

Scenario two
Management class settings:
Auto Backup = Y
Number of Backup Versions = 2

CBROAMxx PARMLIB settings:


SETOSMC STORAGEGROUP(GROUP22 FIRSTBACKUPGROUP(BACK00A))
SETOSMC STORAGEGROUP(GROUP44 FIRSTBACKUPGROUP(BACK00A))

Results of running the OSMC cycle: one backup copy written for GROUP22 and GROUP44 objects
Note: In this scenario, there is only one backup copy taken, even though the number of backup versions is set to two in the management class. No storage group has a SECONDBACKUPGROUP parameter specified in the SETOSMC statement.

Scenario three
Management class settings:
Auto Backup = 'Y'
Number of Backup Versions = 1

CBROAMxx PARMLIB settings:


SETOSMC STORAGEGROUP(GROUP22 FIRSTBACKUPGROUP(BACK00A) SECONDBACKUPGROUP(BACK00B))


SETOSMC STORAGEGROUP(GROUP44 FIRSTBACKUPGROUP(BACK00A) SECONDBACKUPGROUP(BACK00B))

Results of running the OSMC cycle: one backup copy written for GROUP22 and GROUP44 objects
Note: In this scenario, even though one storage group has a SECONDBACKUPGROUP parameter specified in the SETOSMC statement, there is still only one backup copy taken since the number of backup versions is set to one in the management class.

As you can see from these scenarios, if you have SECONDBACKUPGROUP parameters specified on the SETOSMC statement in the CBROAMxx PARMLIB member, OSMC will attempt to create a second backup copy for any objects that have a management class with Autobackup and a Number of Backup Versions greater than 1 (the default is 2). If you do not want a second backup copy attempted, then you need to change Number of Backup Versions in your existing management class constructs.

Application programmer interface (API) changes


The OAM API has been updated to include support for the multiple object backup feature. The OAM API is referred to as OSREQ:
- OSREQ RETRIEVE has been modified to allow direct retrieval of the second backup copy using VIEW=BACKUP2.
- The OSREQ TSO command interface has been modified to allow direct retrieval of the second backup copy using the VIEW(BACKUP2) parameter.
- OSREQ QUERY has been modified to add a Backup2 retrieval order key. This field can be used to sort objects prior to issuing retrievals for the most efficient mount and media positioning time.

3.10.2 Improved reliability and usability


The volume recovery process has been rewritten to provide improved reliability and usability. The key improvements are as follows:
- The limit of 70,144 for the maximum number of objects that could be recovered in a single invocation has been removed.
- A full list of the volumes required to accomplish a recovery beyond the old 70,144-object limit is now provided.


- A complete list of TAPE volumes required for recovery is listed in message CBR9827I. If the volumes are available, recovery will proceed when you reply GO to message CBR9810D. If the volumes are not available, recovery can be stopped by replying QUIT to message CBR9810D. If some of the volumes are available and others are not, and you reply GO to message CBR9810D, recovery will be performed for objects from the volumes that are available.
- A complete list of OPTICAL volumes required for recovery is listed in message CBR9824I. If the volumes are available, recovery will proceed when you reply GO to message CBR9810D. If the volumes are not available, recovery can be stopped by replying QUIT to message CBR9810D. If some of the volumes are available and others are not, and you reply GO to message CBR9810D, recovery will be performed for objects from the volumes that are available.
- Prior to z/OS V1R3 DFSMS, OAM returned the list of volumes in sets of 100, then a response to CBR9820D was required. Once replied to, the next set of 100 volumes was displayed, and so on until all volumes required had been listed.
- Informational message CBR9863I is issued at completion of volume recovery stating the number of objects attempted and the number successfully recovered.
- The volume being recovered is automatically marked non-writable and then restored to original status. This prevents OAM from selecting the volume for a write request during recovery. If the volume is to remain non-writable after recovery, you must manually change its status.
- The ISMF line operator, RECOVER, has not been modified to support recovery from the second backup copy. To exploit recovery from the second backup copy, the MODIFY OAM command must be used.


Chapter 4. DFSMShsm enhancements

In this chapter we focus on the major new enhancement to DFSMShsm, the common recall queue (CRQ). The CRQ enables you to have all DFSMShsm address spaces in an HSMplex process recall requests from a single common queue. We discuss these topics:
- The function that has been provided
- The environments required to take advantage of this function
- A description of the steps required to enable this function
- Some samples showing the function in use
We also briefly discuss the other new functions introduced with this level of z/OS and their impact on DFSMShsm.


4.1 The common recall queue


Traditionally, each z/OS image has had one DFSMShsm address space, and this address space has managed its own recall queue. Even with the support for Multiple Address Spaces for DFSMShsm (MASH), which allows more than one DFSMShsm address space to be active on a single system, only the MAIN DFSMShsm host could process implicit recall requests. This processing limitation can result in both performance bottlenecks and availability problems:
- The loss of a single DFSMShsm results in the loss of all recall requests queued to it.
- There is no ability to balance recall workloads across differing DFSMShsm address spaces. One may be idle while another has many recall requests queued to it.
- Only MAIN DFSMShsm address spaces can process implicit recall requests.
- There is no way to ensure that the recall requests with the highest priority in the HSMplex are processed first. Each MAIN address space only sees its own recall queue.
- All DFSMShsm address spaces require access to tape drives to recall data from ML2. There is no way to co-ordinate the use of a single ML2 volume across multiple hosts; each host with recall requests for data sets on the ML2 volume must mount this volume in turn.
The DFSMShsm common recall queue overcomes these limitations by enabling all members of an HSMplex to share a single recall queue. An HSMplex is one or more DFSMShsm hosts that share a set of control data sets. In turn, each of these hosts should have the same setting for the PLEXNAME value. Figure 4-1 contrasts today's processing with the changes in z/OS 1.3. This new function allows flexibility in designing DFSMShsm configurations and assigning resources within them. You can designate a single host to process all recall requests within an HSMplex, allow any host to process recall requests from ML1 but only specific hosts to process requests from ML2, or allow all hosts to process all types of request. You can control the ability of an individual host to place requests on or remove requests from the CRQ. Hosts that are not normally able to process recall requests may now be able to select them from the CRQ.


(Graphic: two configurations, "Today" and "z/OS 1.3 DFSMS", each showing recall requests being satisfied from ML1 DASD and ML2 tape.)

Figure 4-1 DFSMShsm processing today and using a common recall queue

Using a common queue, requests from DFSMShsm on one system can be processed by DFSMShsm on another system. This new function also allows AUX hosts in a MASH complex to process recall requests from the common queue, as well as placing recall requests explicitly directed to them on the common queue. The algorithm that determines which DFSMShsm will process a request is discussed in Accessing the CRQ on page 78. Systems that connect to the CRQ still retain their own, local, recall queue. A new SETSYS COMMONQUEUE command has been introduced, and several other commands have been altered to support the CRQ. These changes to commands are discussed in Commands to manipulate the common recall queue on page 84.

4.1.1 Our test environment


To illustrate the functions of the CRQ, we enabled the DFSMShsm CRQ on a three system Parallel Sysplex. This sysplex contained a single HSMplex defined using SETSYS PLEXNAME=PLEX0. The sysplex was connected to two coupling facilities CF01 and CF02. Each system was running z/OS V1R3 DFSMS. We have summarized the environment in Table 4-1.


Table 4-1 Our test environment


System   JES    DFSMShsm Host   Hostmode
SC63     JES2   3               MAIN
SC64     JES2   4               MAIN
                A               AUX
SC65     JES3   5               MAIN

There were four DFSMShsm address spaces active. HOST 4 on SC64 was defined as the primary host.

4.2 Which environments can use a CRQ


To use the CRQ, all systems that are participating in the queue must access the same coupling facility structure. That is, they must either be members of the same Parallel Sysplex or be a monoplex, with PLEXCFG=MONOPLEX specified in PARMLIB. The CRQ is maintained as a persistent list structure in a coupling facility (CF) that all participating DFSMShsm address spaces connect to. If a DFSMShsm address space cannot connect to the structure it cannot be part of the CRQ. You must have CF level eight installed on the CF that will hold your CRQ structure. You can determine the CF level by using the D CF command. Not all members of an HSMplex need to access the CRQ. Systems can still manage their own local recall requests even if they are part of the same HSMplex as systems using a CRQ. It is also possible to have two or more groups of systems, each using their own CRQ (a CRQplex), operating in the same HSMplex. If you have a single z/OS V1R3 DFSMS image, running in monoplex mode, you may still benefit from using a CRQ. If you run a MAIN host and one or more AUX hosts, enabling the CRQ will allow the AUX hosts to process implicit recall requests. Also, the loss of a DFSMShsm address space will not result in the loss of queued nowait recall requests across recycles of the DFSMShsm address space. If you have only a single DFSMShsm address space, the benefits from a CRQ may be less, although you can still benefit from request persistence. If your DFSMShsm hosts are not running as part of a Parallel Sysplex or monoplex, they cannot take part in a CRQ.


4.3 How to enable this function


There are several steps required to enable this function:
- Determine which systems are to use the common recall queue: refer to Defining the members of the CRQ on page 73.
- Determine the expected maximum number of requests that may appear on the queue: refer to Sizing the CRQ on page 75.
- Update the CFRM policy to define a new structure and activate this policy: refer to Defining the CRQ structure on page 76.
- Issue the SETSYS commands required to activate the queue: refer to Accessing the CRQ on page 78.
- After activating the CRQ, review your recall settings to re-evaluate how many recall tasks are required and which systems they will run on.
- Determine the workload breakdown of the final configuration: how many recall tasks, and which systems they will run on.
We discuss these steps in the following sections.

4.3.1 Defining the members of the CRQ


Within a sysplex, there can be one or more HSMplexes. Your first task is to determine how many HSMplexes you currently have implemented. You must use the SETSYS keyword PLEXNAME in an ARCCMDxx member of PARMLIB on all DFSMShsm hosts in the HSMplex, specifying the HSMplex name. By default, if you have not explicitly issued a SETSYS PLEXNAME statement on any DFSMShsm hosts, then you have a single HSMplex with the default name of ARCPLEX0. A CRQplex cannot include DFSMShsm instances from more than one HSMplex. You can use the QUERY IMAGE command to help determine which DFSMShsm address spaces are active. We show the output from this command issued in our test environment in Figure 4-2.


IEE421I RO *ALL,F HSM,Q I SYSNAME RESPONSES ---------------------------------------------SC63 ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=3 ARC0250I HOST PROCNAME JOBID ASID MODE ARC0250I 3 HSM STC07035 0042 MAIN ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=3 SC64 ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=4 ARC0250I HOST PROCNAME JOBID ASID MODE ARC0250I 4 HSM STC07036 0048 MAIN ARC0250I A HSM2 STC07037 0047 AUX ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=4 SC65 ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=5 ARC0250I HOST PROCNAME JOBID ASID MODE ARC0250I 5 HSM HSM 0020 MAIN ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=5

Figure 4-2 Output from Q I command from all systems in test HSMplex
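If you need to assign the HSMplex name explicitly, a statement along the following lines in each host's ARCCMDxx member does it; PLEX0 is the value used in our test environment, and the keyword form shown is the usual SETSYS syntax:

SETSYS PLEXNAME(PLEX0)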

Your next decision is whether you want the CRQ to apply to all members of the HSMplex. Generally this is the preferred solution. A one-to-one relationship between CRQplex and HSMplex provides the most flexible allocation of resources. This one-to-one relationship also generally provides the best overall throughput and availability of the queue and is the simplest to manage. There is no requirement that all DFSMShsm hosts connect to the CRQ or those that do connect remain connected to the queue at all times. Connecting and disconnecting from the CRQ is not disruptive to the DFSMShsm address space or to recall requests that are currently being processed. If you have catalogs or data that are not shared with all members of the HSMplex, or you have a mixture of production and other systems in the HSMplex, you may decide to exclude some systems from a CRQ or to implement more than one CRQ. These are also supported configurations. They do require more resources to implement and maintain but may be the preferred solution for some environments. Systems running in monoplex mode with a single DFSMShsm address space can implement a CRQ. There may not be a justification for doing this. If you have implemented MASH in your HSMplex then AUX hosts are able to process recall requests and you could dedicate an AUX host as a recall server. Systems that are not members of a Parallel Sysplex or a monoplex are not able to participate in the CRQ.


4.3.2 Sizing the CRQ


Once you have determined how many DFSMShsm address spaces will use the CRQ and the number of CRQs you will implement, the next step is to determine the maximum number of concurrent recall requests expected. This allows you to size the CF structures that will hold the queue. The CRQ needs to be sized such that it can contain the maximum number of concurrent recalls that may occur. This number is the high water mark for recall request processing; generally you would not expect your CRQ to be full. If the structure allocated is too small, recall requests will be queued only on the local recall queue of the originating host. If the structure is too large, then coupling facility resources will be allocated, but not used. DFSMShsm does not provide a simple way to determine the maximum number of concurrently queued recall requests over a period. The DFSMShsm Implementation and Customization Guide SC35-0418 lists the values shown in Table 4-2 as estimates for given numbers of maximum concurrent recall requests and the size of the structure required to hold them.
Table 4-2 Estimated structure sizes
Max concurrent recalls   Structure size (KB)
1700                     2560
3900                     5120
8400                     10240
12900                    15360

As a starting point, we recommend that you allocate the recall queue structure with an initial size of 5120 KB and a maximum size of 10240 KB. If you have a high proportion of recall requests requiring unique tape mounts, you may not quite reach this capacity. This is more likely in environments using 3480 or 3490 capacity volumes. For environments with high numbers of recall requests satisfied from a single volume, you may exceed this capacity. If you wish to resize your CF structure at any time you can, see our discussion of this in Processing when CRQ is full on page 92.


4.3.3 Defining the CRQ structure


The CRQ structure needs to be defined to your Sysplex. This is done by updating a Coupling Facility Resource Management (CFRM) policy and starting the updated policy. For information about managing CFRM policies refer to z/OS V1R3.0 MVS Setting Up a Sysplex SA22-7625. For this discussion, we assume that you are already running a Parallel Sysplex or Monoplex. The name of the CRQ structure must be SYSARC_basename_RCL, where basename is the value you will use to name the structure to DFSMShsm. Your basename value must be exactly five characters in length. To define the CRQ structure you need to run the XCF IXCMIAPU utility. We used the JCL in Figure 4-3 to define our CRQ structure. You must ensure that modifications that you make to the CFRM policy will not impact existing definitions. The sample below is not sufficient to create an entire CFRM policy.

//STEP20   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(HSM1 )
    STRUCTURE NAME(SYSARC_PLEX0_RCL)
              SIZE(10240)
              INITSIZE(5120)
              PREFLIST(cfname1,cfname2)

Figure 4-3 JCL to define the CF structure for the common recall queue

Most of the options for the structure are defined by DFSMShsm when an address space connects to the structure for the first time. You must define:
Name             The name of the structure. This is fixed except for the base name. The basename must be exactly five characters in length.
Size             The maximum size of the structure in kilobytes. We recommend 10240.
Preference list  The names of the coupling facilities you want to place the structure into. This may be a list of CFs. If you have two or more suitable CFs, we recommend that you specify at least two in the preference list.

The entries in the preference list are tested in turn. The first CF that meets the structure's requirements will be selected.


You may also define:


Initsize         An initial size value for the structure in kilobytes. This value is used rather than SIZE for the initial allocation of the structure. We recommend that you specify 5120.
Exclusion list   Specifies which coupling facilities the structure is not to be placed in. If you have any CFs that are not at levels that support the CRQ, they should be placed here.
Threshold        A percentage used by XCF for monitoring structure utilization. We recommend that you leave this at the default value of 80 percent.
Rebuildpercent   If not specified, this defaults to one (1). z/OS uses this value to help determine when to attempt to rebuild a structure after a connectivity failure, based on weighting values in your sysplex failure management (SFM) policy. If you do not have an active SFM policy, z/OS will attempt to rebuild the structure anyway.

We recommend that the CRQ structure be failure isolated from the members of the sysplex. Failure isolation means that the CF that contains the CRQ structure is not located on the same physical processor as any of the z/OS LPARs connecting to the structure. Once the structure is defined, you need to activate the CFRM policy that you have just updated. This is done by starting your new CFRM policy. Figure 4-4 shows the starting of a new CFRM policy. Sysplex structures and policies are managed by a component called Cross System Coupling Facility (XCF).

SETXCF START,POLICY,TYPE=CFRM,POLNAME=HSM1
IXC511I START ADMINISTRATIVE POLICY HSM1 FOR CFRM ACCEPTED
IXC513I COMPLETED POLICY CHANGE FOR CFRM. HSM1 POLICY IS ACTIVE.

Figure 4-4 Starting a new CFRM policy

Once the CFRM policy with the CRQ structure defined is active, you are ready to enable DFSMShsm's use of it.

CFSIZER
You can use the Web based S/390 Coupling Facility Structure Sizer Tool (CFSIZER) to estimate the required size and generate the code to define your CRQ structure. The tool is available from:
http://www.ibm.com/servers/eserver/zseries/cfsizer/


4.3.4 Accessing the CRQ


In this section we describe what happens when your DFSMShsm connects to the CRQ or disconnects from it. We also discuss the scope of commands and how they impact the CRQ.

Command scope
Commands that you issue to manipulate XCF structures have a sysplex scope; that is, the command needs to be issued only once for the sysplex. This is not true for most of the commands used by DFSMShsm. Unlike the XCF commands, the scope of DFSMShsm commands that interact with the CRQ is generally limited to the system that they were issued from. We discuss the scope of each of the DFSMShsm commands you will be using to manipulate the CRQ in Commands to manipulate the common recall queue on page 84. You can use the sysplex console routing functions to direct commands to the DFSMShsms active on other systems in the sysplex. If you are using the same STC name prefix in a MASH environment, you can use command masking to route a command to all DFSMShsm hosts in your sysplex. We illustrate this in Figure 4-5, where one command was propagated to four hosts on three systems.

IEE421I RO *ALL,F HS*,Q REQ SYSNAME RESPONSES -------------------------------------------SC63 ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=3 ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=3 SC64 ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4 ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=4 ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=A ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=A SC65 ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=5 ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=5

Figure 4-5 Using command masking and sysplex command routing


Starting to use the CRQ


There are two methods available to connect your DFSMShsm hosts to the CRQ. Both use the new SETSYS COMMONQUEUE command. This command can either be included in your DFSMShsm startup parameters, ARCCMDxx, or be explicitly issued once your DFSMShsm address space is active (we include a sample ARCCMDxx fragment after Figure 4-6). We discuss the syntax of this command in SETSYS command on page 89. The SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command needs to be issued on each DFSMShsm that will connect to the CRQ. If the CRQ structure is defined in the active CFRM policy, DFSMShsm should connect successfully to the structure. Figure 4-6 shows the output from a successful connection to the structure that was defined previously in Figure 4-3.
F HSM,SETSYS CQ(RECALL(CONNECT(PLEX0)) ARC0100I SETSYS COMMAND COMPLETED IXL014I IXLCONN REQUEST FOR STRUCTURE SYSARC_PLEX0_RCL WAS SUCCESSFUL. JOBNAME: HSM ASID: CONNECTOR NAME: HOSTCONNECTION1 CFNAME: CF02 ARC1501I CONNECTION TO STRUCTURE SYSARC_PLEX0_RCL WAS ARC1501I (CONT.) SUCCESSFUL, RC=00, REASON=00000000 IXL015I STRUCTURE ALLOCATION INFORMATION FOR STRUCTURE SYSARC_PLEX0_RCL, CONNECTOR NAME HOSTCONNECTION1 CFNAME ALLOCATION STATUS/FAILURE REASON -------- --------------------------------CF02 STRUCTURE ALLOCATED CF01 PREFERRED CF ALREADY SELECTED

Figure 4-6 Connecting to the CRQ structure
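If you prefer the startup parameter approach rather than the console command shown in Figure 4-6, a minimal ARCCMDxx fragment could look like the following sketch; PLEX0 is the base name used in our examples, and the rest of your existing ARCCMDxx content is assumed to remain unchanged.

/* Connect this host to the common recall queue at startup */
SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX0)))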

When DFSMShsm connects to the CRQ structure, messages are received from both DFSMShsm and XCF about the status of the structure and DFSMShsm's connection to it. The XCF status of the structure can also be displayed using the DISPLAY XCF,STR command; we provide an example in Figure 4-7. This display shows structure SYSARC_PLEX0_RCL, which has been connected to by the four DFSMShsm address spaces running on the systems in our sysplex. A structure that has been defined but not yet connected to will not have the ACTIVE STRUCTURE or CONNECTION NAME sections in the display output.


D XCF,STR,STRNM=SYSARC* IXC360I 19.23.04 DISPLAY XCF STRNAME: SYSARC_PLEX0_RCL STATUS: ALLOCATED POLICY INFORMATION: POLICY SIZE : 10240 K POLICY INITSIZE: 5120 K POLICY MINSIZE : 0 K FULLTHRESHOLD : 80 ALLOWAUTOALT : NO REBUILD PERCENT: N/A PREFERENCE LIST: CF02 CF01 ENFORCEORDER : NO EXCLUSION LIST IS EMPTY ACTIVE STRUCTURE ---------------ALLOCATION TIME: 03/07/2002 15:13:11 CFNAME : CF02 COUPLING FACILITY: 002064.IBM.02.000000010ECB PARTITION: D CPCID: 00 ACTUAL SIZE : 5120 K STORAGE INCREMENT SIZE: 256 K PHYSICAL VERSION: B74AF405 F5E4EF45 LOGICAL VERSION: B74AF405 F5E4EF45 SYSTEM-MANAGED PROCESS LEVEL: 8 XCF GRPNAME : IXCLO02F DISPOSITION : KEEP ACCESS TIME : 0 MAX CONNECTIONS: 32 # CONNECTIONS : 4 CONNECTION NAME ---------------HOSTCONNECTIONA HOSTCONNECTION3 HOSTCONNECTION4 HOSTCONNECTION5 ID -04 03 01 02 VERSION -------00040001 00030005 00010019 0002000B SYSNAME -------SC64 SC63 SC64 SC65 JOBNAME -------HSM2 HSM HSM HSM ASID ---004C 0045 0049 0060 STATE ------ACTIVE ACTIVE ACTIVE ACTIVE

Figure 4-7 XCF view of the CRQ structure

When the first DFSMShsm connects to your new structure, a new XCF group will be created. If you are explicitly assigning transport classes to groups, you will need to assign one to this group as well.


If SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) is specified in the DFSMShsm startup parameters and the CF structure does not exist when DFSMShsm is started, the initial connection attempt fails. DFSMShsm then invokes an ENF listen so that it is automatically notified when the structure becomes available, at which point it attempts to connect again. Once DFSMShsm has successfully connected to the CRQ structure, existing recall requests are placed on the CRQ and new requests are passed to the CRQ. These recall requests are eligible for processing by any DFSMShsm connected to the CRQ.

Stopping access to the CRQ


You can disconnect DFSMShsm from the CRQ structure either by using the STOP command or by using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command. We show the syntax of the DISCONNECT command in Figure 4-8. During normal operations, there should be no reason to use the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command.
F HSM,SETSYS CQ(R(D)) ARC0100I SETSYS COMMAND COMPLETED ARC1502I DISCONNECTION FROM STRUCTURE SYSARC_PLEX1_RCL ARC1502I (CONT.) WAS SUCCESSFUL, RC=00, REASON=00000000

Figure 4-8 Disconnecting from the CRQ structure

There is no requirement that you explicitly disconnect DFSMShsm from the CRQ structure when you shut down DFSMShsm. As part of its normal shutdown process, DFSMShsm disconnects from the queue. We discuss the implications of using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command further in Disconnecting from the common recall queue on page 96. If the DFSMShsm address space is shutting down in response to a STOP command, any outstanding non-batch WAIT recall request originating from this host that is on the CRQ is failed, but a new NOWAIT request is created to replace it, so that the recall request is preserved. The new request has the same priority as the original request, but is a NOWAIT request instead of a WAIT request. NOWAIT recall requests remain available on the CRQ and are eligible for processing by any eligible host still connected to the CRQ. If the recall request is a batch WAIT request, it is not converted to a NOWAIT request; it remains on the CRQ and in CSA so that the request can time out if it is not processed in a timely manner. Batch WAIT requests are eligible for processing by any eligible host still connected to the CRQ.


Errors connecting to the CRQ


If your structure has not been defined, or if it has been defined with a name other than the one you expected, DFSMShsm will not be able to connect to the structure. We illustrate an example of this in Figure 4-9; in this example there was no structure named SYSARC_PLEXX_RCL defined in the CFRM policy.
F HSM2,SETSYS CQ(R(C(PLEXX)) ARC0100I SETSYS COMMAND COMPLETED ARC1501I CONNECTION TO STRUCTURE SYSARC_PLEXX_RCL WAS ARC1501I (CONT.) UNSUCCESSFUL, RC=0C, REASON=02010C05 IXL013I IXLCONN REQUEST FOR STRUCTURE SYSARC_PLEXX_RCL FAILED. JOBNAME: HSM ASID: 0059 CONNECTOR NAME: HOSTCONNECTIONA IXLCONN RETURN CODE: 0000000C, REASON CODE: 02010C05 CONADIAG0: 00000002 CONADIAG1: 00000008 CONADIAG2: 00000C05

Figure 4-9 Failing to connect to a structure

If you receive RC=12 from IXLCONN, DFSMShsm does not revert to running without a CRQ. Rather, it waits for the structure to be allocated. IXLCONN return and reason codes are documented in MVS Programming: Authorized Assembler Services Reference, Volume 2, SA22-7610-02. If DFSMShsm's connection to the CRQ is not successful, this does not stop DFSMShsm from processing recall requests placed on its local recall queue, as long as there are sufficient resources available for the recalls to be processed. Figure 4-10 shows a QUERY ACTIVE command issued just after the command in Figure 4-9. The COMMONQUEUE CONNECTION STATUS is RETRY.

ARC1540I ARC1540I ARC1540I ARC1540I ARC1541I ARC1541I ARC1541I ARC0101I

COMMON RECALL QUEUE PLACEMENT FACTORS: (CONT.) CONNECTION STATUS=RETRY,CRQPLEX HOLD STATUS=***,HOST (CONT.) COMMONQUEUE HOLD STATUS=NONE,STRUCTURE ENTRIES=***% (CONT.) FULL,STRUCTURE ELEMENTS=***% FULL COMMON RECALL QUEUE SELECTION FACTORS: (CONT.) CONNECTION STATUS=RETRY,HOST RECALL HOLD (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD STATUS=NONE QUERY ACTIVE COMMAND COMPLETED ON HOST=A

Figure 4-10 COMMONQUEUE CONNECTION in RETRY status


If the structure name was specified in error, you must first disconnect by issuing the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command and then specify a new SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command. If you received the error because the structure is not yet defined in the active CFRM policy, DFSMShsm will connect to the structure when the CFRM policy that defines it is started. If you are running with multiple HSMplexes and multiple CRQs, you need to take care that you connect the right DFSMShsm to the right structure. DFSMShsm uses the value you pass to it. For the first connector to a structure, DFSMShsm does no checking other than verifying that the structure exists. For subsequent connectors, the PLEXNAME value of the connector is checked to ensure that it matches that of the systems currently sharing the CRQ. In Figure 4-11 we show what happened when we attempted to connect a host whose SETSYS PLEXNAME value was different from the SETSYS PLEXNAME of the currently or previously connected hosts.

ARC0008I DFSMSHSM INITIALIZATION SUCCESSFUL ARC1501I CONNECTION TO STRUCTURE SYSARC_PLEX0_RCL WAS ARC1501I (CONT.) SUCCESSFUL, RC=00, REASON=00000000 IXL014I IXLCONN REQUEST FOR STRUCTURE SYSARC_PLEX0_RCL WAS SUCCESSFUL. JOBNAME: HSM ASID: 0059 CONNECTOR NAME: HOSTCONNECTION1 CFNAME: CF02 ARC1506E ARC1506E ARC1506E ARC1502I ARC1502I AN INVOCATION OF THE COUPLING FACILITY LIST (CONT.) STRUCTURE IXLLSTE MACRO COMPLETED UNSUCCESSFULLY, (CONT.) RC=08, REASON=0C1C0859 DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL (CONT.) WAS SUCCESSFUL, RC=00, REASON=00000000

Figure 4-11 Connecting to a structure with the wrong PLEXNAME value

The connection to the CRQ was not successful. In this case DFSMShsm fully disconnected from the CRQ structure, and this DFSMShsm will continue to process local recall requests only. There is no requirement that the values specified for PLEXNAME and COMMONQUEUE be the same. However, maintaining a one-to-one relationship between recall queues and HSMplexes makes operations simpler. If you implement multiple CRQs in a single HSMplex, you will not be able to maintain this one-to-one relationship.


4.4 Commands to manipulate the common recall queue


A number of commands have been enhanced to allow you to both determine the status of the CRQ and to manipulate it.

4.4.1 CANCEL command


You can use the CANCEL command to cancel outstanding recall requests that have been placed on the CRQ. The CANCEL command must be issued on the host that placed the request on the CRQ. There is no change to the syntax of the command or the messages received as a result of the CANCEL command. You can only CANCEL a request from the CRQ before it has been selected for processing by DFSMShsm. This restriction also exists for recall requests on the local recall queue.

4.4.2 DELETE command


There is no change to DELETE command processing. All DELETE commands are processed on the local recall queue and are never passed to the CRQ. Issuing a HOLD RECALL still prevents DELETE requests from being processed; this differs from what happens with RECALL processing.

4.4.3 HOLD AND RELEASE commands


We discuss the HOLD and RELEASE commands together because one is the inverse of the other. Both commands impact processing only on the DFSMShsm address space against which they are issued. However, issuing HOLD or RELEASE commands may influence the amount of work performed by other hosts in your HSMplex.

COMMONQUEUE
There is a new operand, COMMONQUEUE, on both the HOLD and RELEASE commands. There are three ways that the COMMONQUEUE operand can be used:
COMMONQUEUE(RECALL)

Impacts all RECALL requests to the CRQ, influencing both the addition of requests to and the removal of requests from the CRQ.
COMMONQUEUE(RECALL(SELECTION))

Has no impact on how requests are placed on the CRQ, but influences whether requests can be removed from the CRQ for processing.


COMMONQUEUE(RECALL(PLACEMENT))

Has no impact on DFSMShsm's selection of work from the CRQ, but determines whether requests can be added to the CRQ for processing. HOLD or RELEASE COMMONQUEUE(RECALL) commands override any setting that specified SELECTION or PLACEMENT. For example, a RELEASE COMMONQUEUE(RECALL) command overrides a HOLD COMMONQUEUE(RECALL(SELECTION)) command. The inverse is not true; a HOLD COMMONQUEUE(RECALL) command can only be reversed by a RELEASE COMMONQUEUE(RECALL) command.
Note: There is also a HOLD/RELEASE COMMONQUEUE command. For z/OS V1R3 DFSMS this command is equivalent to the CQ(R) command except when issuing the RELEASE command. If HOLD CQ has been issued, RELEASE CQ(R) is not sufficient to release the common queue. The result is:
ARC0111I SUBFUNCTION COMMONQUEUE(RECALL) CANNOT BE ARC0111I (CONT.) RELEASED WHILE MAIN FUNCTION COMMONQUEUE IS HELD

You can determine whether placement to and selection from the CRQ are HELD or RELEASED using the QUERY ACTIVE command.

ALL and RECALL


HOLD ALL and RELEASE ALL have no impact on the CRQ itself. The only mechanism for changing the status of the CRQ is the HOLD/RELEASE COMMONQUEUE command. HOLD ALL or HOLD RECALL have no impact on the placement of recall requests on the CRQ; they only prevent a host from, or enable a host to, select recall requests from the CRQ. HOLD RECALL prevents a DFSMShsm host from selecting requests from the CRQ; this is similar to using the HOLD COMMONQUEUE(RECALL(SELECTION)) command. Prior to this support, once a HOLD RECALL command had been issued, a host was not able to process recall requests: NOWAIT requests would be queued and WAIT requests would be failed. Once you implement the CRQ, recall requests originating on hosts that have recall processing held are placed on the CRQ and are processed by another host. If both a HOLD RECALL and a HOLD COMMONQUEUE(RECALL(PLACEMENT)) command are issued on the host originating the request, then NOWAIT requests are added to the local recall queue and WAIT requests are failed.


4.4.4 RECALL
Although there are no changes to the syntax of the RECALL command, there are changes to its effect. The scope of the RECALL command has changed: if a CRQ is in place, there is no way to direct a specific host to process a particular request without changing the HOLD status of systems, whereas previously, RECALLs were always processed on the DFSMShsm host that received them.

4.4.5 QUERY command


There is both a new operand for the QUERY command and changes to the results of several of the query commands to support the CRQ.

QUERY COMMONQUEUE
This is a new command that returns the status of the CRQ. We show a sample output in Figure 4-12.

RO SC63,F HSM,Q CQ(R) ARC1545I COMMON QUEUE STRUCTURE FULLNESS: COMMON ARC1545I (CONT.) RECALL QUEUE:STRUCTURE ENTRIES=004% FULL, STRUCTURE ARC1545I (CONT.) ELEMENTS=004% FULL ARC0162I RECALLING DATA SET MHLRES4.TEST5.A1 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00000053 ON HOST 4 ARC0162I RECALLING DATA SET MHLRES4.TEST1.A3 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00000017 ON HOST 4 ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A4, FOR ARC1543I (CONT.) USER MHLRES3, REQUEST 00000018, WAITING TO BE ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF ARC1543I (CONT.) THIS ONE ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A5, FOR ARC1543I (CONT.) USER MHLRES3, REQUEST 00000019, WAITING TO BE ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000001 MWES AHEAD OF ARC1543I (CONT.) THIS ONE

Figure 4-12 Output from a QUERY COMMONQUEUE command

The QUERY COMMONQUEUE(RECALL) command returns information about the CRQ and the requests on it. The results should be the same no matter which host the command is issued from. This command may return large amounts of data if there are many recall requests on the CRQ. You can also issue a QUERY COMMONQUEUE command with no RECALL operand; with this form of the command, DFSMShsm returns only message ARC1545I.


QUERY ACTIVE
The QUERY ACTIVE command has been enhanced to show the status of the CRQ. We show partial output from a QUERY ACTIVE command in Figure 4-13.

F HSM,Q AC ARC0101I QUERY ACTIVE COMMAND STARTING ON HOST=3 ... ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS: ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=NONE, ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=NONE,STRUCTURE ARC1540I (CONT.) ENTRIES=004% FULL,STRUCTURE ELEMENTS=004% FULL ARC1541I COMMON RECALL QUEUE SELECTION FACTORS: ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD ARC1541I (CONT.) STATUS=CQ(RECALL(SELECTION)) ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=3

Figure 4-13 Partial output from a QUERY ACTIVE command

QUERY ACTIVE only produces information about the current status of the host it was directed to. If you are trying to determine the cause of a problem, you may need to issue this command to all DFSMShsm hosts that are connected to the common queue.

QUERY DATASETNAME
There are no changes to the QUERY DATASETNAME command. It only returns information about recall requests that originated on the host that executes the command.

QUERY REQUEST
The QUERY REQUEST command has been enhanced to return the location of outstanding recall requests. It now distinguishes whether a request is to be found on a common recall queue or the local recall queue. We show the output from a QUERY REQUEST command in Figure 4-14. A new message ARC1543I is issued for requests that are on the CRQ. The QUERY REQUEST command only returns information about recall requests that originated on the host that executes the command. If the recall is currently being processed by another host, this information is available from the response to the QUERY REQUEST command. For example, in Figure 4-14, the RECALL and QUERY commands were issued from HOST=3, but the recall request is being processed by HOST=4.


F HS*,Q REQ ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=3 ARC0162I RECALLING DATA SET MHLRES4.TEST1.A3 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00000017 ON HOST 4 ARC0162I RECALLING DATA SET MHLRES4.TEST5.A1 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00000053 ON HOST 4 ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A4, FOR ARC1543I (CONT.) USER MHLRES3, REQUEST 00000018, WAITING TO BE ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF ARC1543I (CONT.) THIS ONE ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A5, FOR ARC1543I (CONT.) USER MHLRES3, REQUEST 00000019, WAITING TO BE ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000001 MWES AHEAD OF ARC1543I (CONT.) THIS ONE ARC0167I RECALL MWE FOR DATA SET MHLRES4.TEST5.A10 FOR ARC0167I (CONT.) USER MHLRES3, REQUEST 00000020, WAITING TO BE ARC0167I (CONT.) PROCESSED, 00003 MWE(S) AHEAD OF THIS ONE

Figure 4-14 Output from a QUERY REQUEST command

QUERY WAITING
The QUERY WAITING command also returns information about requests that are currently on the CRQ. We illustrate this in Figure 4-15.

F HSM,Q W ARC0101I QUERY WAITING COMMAND STARTING ON HOST=3 ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL ARC1542I (CONT.) QUEUE=00000210,TOTAL=00000210 ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000, ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000, ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000, ARC0168I (CONT.) TOTAL=000000 ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=3

Figure 4-15 Output from a QUERY WAITING command

The QUERY WAITING command shows the number of requests currently on the CRQ in message ARC1542I. However, it only shows recall requests that are on its own local queue in message ARC0168I.


4.4.6 SETSYS command


One new SETSYS command is introduced: SETSYS COMMONQUEUE. This command can be used either to connect a DFSMShsm system to the CRQ or to disconnect a system from it. The syntax of the command is:
SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) SETSYS COMMONQUEUE(RECALL(DISCONNECT))

The CONNECT form can be issued either during execution of your ARCCMDxx member or as an explicit command once your DFSMShsm address space has initialized. We showed an example of this command in Figure 4-6 on page 79. The SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command should not be issued during normal DFSMShsm operations; DFSMShsm does not require that you manually disconnect it from the CRQ during shutdown processing. Please read the discussion in Disconnecting from the common recall queue on page 96 before using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command. There is also a change in processing for SETSYS EMERGENCY. Hosts running in EMERGENCY mode are able to place recall requests on the CRQ, but are not able to select requests from it. If you wish to prevent hosts in EMERGENCY mode from placing recall requests on the CRQ, you need to issue a HOLD COMMONQUEUE(RECALL(PLACEMENT)) command on that host.

4.4.7 STOP command


There are no changes to the syntax or scope of the STOP command when a CRQ is implemented, but there are changes to the persistence of requests. Without a CRQ, issuing the STOP command causes all WAIT recalls to be failed with message ARC1192I, and the requests are purged from CSA. After implementing a CRQ, a WAIT RECALL request queued when the STOP command is issued is failed with ARC1165I, but a new NOWAIT RECALL request of the same priority is created to replace the failed request. These NOWAIT RECALL requests are eligible to be processed by other active hosts still connected to the CRQ while the originating host is not active. This is not applicable to batch WAIT requests, which are not failed; they remain on the CRQ and in CSA so that a request can time out if it is not processed in a timely manner. Batch WAIT requests are eligible to be processed by other active hosts still connected to the CRQ.


4.4.8 AUDIT Command


A new operand is added to the AUDIT command:
AUDIT COMMONQUEUE(RECALL)

This can be issued with either FIX or NOFIX. The command executes only on the host it is issued on, but it may impact all DFSMShsm hosts attached to the CRQ. We discuss using AUDIT COMMONQUEUE(RECALL) further in Auditing the CRQ on page 106. Message ARC1544I is the only output that AUDIT COMMONQUEUE(RECALL) returns; it does not return a specific message for each error. We show the output from an AUDIT COMMONQUEUE NOFIX command in Figure 4-16.

F HSM,AUDIT CQ NOFIX ARC1544I AUDIT COMMONQUEUE HAS COMPLETED, 0000 ERRORS ARC1544I (CONT.) WERE DETECTED FOR STRUCTURE SYSARC_PLEX0_RCL, RC=00

Figure 4-16 AUDIT COMMON QUEUE command

4.5 Using the CRQ


Once a DFSMShsm address space has connected to the CRQ, any recall requests that were queued locally on this host for processing are moved to the common queue. Any recall requests that this host is currently processing are completed by this host. All DELETE requests continue to be processed locally rather than being placed on the CRQ. If a HOLD RECALL command has been issued for this DFSMShsm host, recall requests are still accepted by this host and placed on the CRQ, as long as a HOLD COMMONQUEUE(RECALL(PLACEMENT)) command is not also in effect. A host that has had a HOLD RECALL issued will not process recall requests from either the CRQ or its local recall queue. If recall is held on a host, other DFSMShsm address spaces connected to the CRQ will still process requests placed on the queue by the held host. You should use the HOLD COMMONQUEUE command to help manage how hosts place requests on and remove requests from the common queue.


You can display the HOLD status of the CRQ by issuing the QUERY ACTIVE command; a portion of sample output from the QUERY ACTIVE command is included in Figure 4-17. Holding the CRQ will direct recall requests to the local recall queue. We discuss this in Impact of HOLD and RELEASE commands on page 98.

F HSM,Q AC ... ARC6019I AGGREGATE BACKUP = NOT HELD, AGGREGATE ARC6019I (CONT.) RECOVERY = NOT HELD ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS: ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD ARC1540I (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD ARC1540I (CONT.) STATUS=CQ(RECALL(PLACEMENT)),STRUCTURE ENTRIES=000% ARC1540I (CONT.) FULL,STRUCTURE ELEMENTS=000% FULL ARC1541I COMMON RECALL QUEUE SELECTION FACTORS: ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD ARC1541I (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD STATUS=NONE ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=3

Figure 4-17 Output from a QUERY ACTIVE command

In Figure 4-17, for the host queried, RECALL was active, but a HOLD RECALL(TAPE) command had been issued. This host was allowed to select work from the CRQ, but a HOLD CQ(R(PLACEMENT)) command had been issued, so this DFSMShsm was unable to place new work onto the queue. The impact of these commands is summarized in Impact of HOLD and RELEASE commands on page 98.

4.5.1 Placement of data on the queue


Once each DFSMShsm has connected to the CRQ, new recall requests are placed on the CRQ. Each DFSMShsm also maintains its local recall queue, and the migration work elements (MWE) associated with local recall requests are still retained in local storage. If the CRQ becomes more than 95 percent full, hosts are unable to move new requests to the CRQ; we discuss this in Processing when CRQ is full on page 92. Remember that DFSMShsm hosts that have a HOLD RECALL command or SETSYS EMERGENCY command active are still able to place recall requests on the CRQ, although they cannot select requests from the CRQ.


Each recall request placed on the CRQ is given a priority between zero and 100 by the DFSMShsm address space that issues the request. By default, NOWAIT recall requests receive a priority of 50 and WAIT requests a priority of 100. The DFSMShsm address space that places the request on the CRQ drives the recall priority exit, ARCRPEXT, for each request before it is placed on the queue. This allows the host originating the request to influence the relative placement of specific requests on the CRQ. If you currently use ARCRPEXT to change the priority of recall requests, you may still wish to do so. If you have some systems or data that you deem to be more important, you may wish to implement this function; for example, you may wish to prioritize recall requests for production data over those for test data. A sample ARCRPEXT is provided in SAMPLIB. A host attempting to place a WAIT request on the CRQ must determine that there is a host connected to the queue that is currently capable of processing the request. We discuss this further in WAIT versus NOWAIT recalls on page 99. NOWAIT recall requests placed on the CRQ are interleaved by userid, as are requests placed on the local recall queue. This should prevent recall requests from one user monopolizing the CRQ.

4.5.2 Processing when CRQ is full


If the CRQ structure is more than 95 percent full, requests are directed only to the local recall queue until the CRQ is less than 85 percent full. Once the CRQ is again less than 85 percent full, DFSMShsm address spaces begin placing requests on the CRQ. You can use either the QUERY COMMONQUEUE or the QUERY ACTIVE command to determine how full the CRQ is. We show what happens as the CRQ fills in Figure 4-18. There is no interface within DFSMShsm to change the limits it uses for determining when the CRQ is full. For this test, we generated large numbers of recall requests until the CRQ reached 95 percent utilization. Recall requests were then placed on the local recall queue until no more could fit within the CSALIMIT specified. To relieve the queue-full problem, the outstanding requests were cancelled.


Note: The IXC585E message shown in Figure 4-18 is an XCF message. XCF has a structure full monitoring threshold independent of that used by DFSMShsm. The default value for XCF monitoring is 80 percent. You can change this by changing the value specified for the threshold in the structure's definition in your CFRM policy.

You could use this difference in threshold monitoring to trigger automation to rebuild the CRQ structure with more space before the DFSMShsm limits are reached, or to take whatever other action you believe to be appropriate.

*IXC585E STRUCTURE SYSARC_PLEX0_RCL IN COUPLING FACILITY CF02, PHYSICAL STRUCTURE VERSION B753986F 50682D63, IS AT OR ABOVE STRUCTURE FULL MONITORING THRESHOLD OF 80%. F HSM,Q W ARC0101I ARC1542I ARC1542I ARC0168I ARC0168I ARC0168I ARC0168I ARC0101I

QUERY WAITING COMMAND STARTING ON HOST=4 WAITING MWES ON COMMON QUEUES: COMMON RECALL (CONT.) QUEUE=00004527,TOTAL=00004527 WAITING MWES: MIGRATE=00000, RECALL=00000, (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000, (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000, (CONT.) TOTAL=000000 QUERY WAITING COMMAND COMPLETED ON HOST=4

*ARC1505E THE ENTRIES FOR STRUCTURE SYSARC_PLEX0_RCL ARC1505E (CONT.) ARE MORE THAN 95% IN-USE. ALL NEW REQUESTS WILL BE ARC1505E (CONT.) DIRECTED TO THE LOCAL QUEUE. ARC0058I CSA USAGE BY DFSMSHSM HAS REACHED THE ACTIVE THRESHOLD OF 000090K BYTES, ALL BUT BATCH WAIT REQUESTS FAILED F HSM,CANCEL USER(MHLRES3) ARC0931I (H)CANCEL COMMAND COMPLETED, NUMBER OF ARC0931I (CONT.) REQUESTS CANCELLED=6739 IXC586I STRUCTURE SYSARC_PLEX0_RCL IN COUPLING FACILITY CF02, PHYSICAL STRUCTURE VERSION B753986F 50682D63, IS NOW BELOW STRUCTURE FULL MONITORING THRESHOLD.

Figure 4-18 CRQ structure full


Instead of cancelling the outstanding requests to relieve this problem, we could have increased the size of the CRQ structure. The maximum size of the CRQ structure is determined by the SIZE value specified in the active CFRM policy. You can increase the size of the CRQ structure up to the value specified in the SIZE parameter using the SETXCF START,ALTER command. We show an example of this in Figure 4-19.

SETXCF START,ALTER,STRNM=SYSARC_PLEX0_RCL,SIZE=10240 IXC530I SETXCF START ALTER REQUEST FOR STRUCTURE SYSARC_PLEX0_RCL ACCEPTED. IXC533I SETXCF REQUEST TO ALTER STRUCTURE SYSARC_PLEX0_RCL COMPLETED. TARGET ATTAINED. CURRENT SIZE: 10240 K TARGET: 10240 K IXC534I SETXCF REQUEST TO ALTER STRUCTURE SYSARC_PLEX0_RCL COMPLETED. TARGET ATTAINED. CURRENT SIZE: 10240 K TARGET: 10240 K CURRENT ENTRY COUNT: 13674 TARGET: 13674 CURRENT ELEMENT COUNT: 13672 TARGET: 13672 CURRENT EMC COUNT: 3360 TARGET: 3360

Figure 4-19 Altering the size of the CRQ structure

If you wish to increase the size of the CRQ structure beyond the value specified for SIZE, you must update the structure size value in the active CFRM policy. Effectively, you write a CFRM policy with the new SIZE value, activate it, and then rebuild the common queue structure, as sketched below.
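As a sketch, assuming the larger SIZE value has been written into an administrative CFRM policy named HSMBIG (a hypothetical name), the operator command sequence could be as follows; activating the new policy leaves a policy change pending for the allocated structure, and the rebuild applies it.

SETXCF START,POLICY,TYPE=CFRM,POLNAME=HSMBIG
SETXCF START,REBUILD,STRNM=SYSARC_PLEX0_RCL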

4.5.3 Selection of requests from the queue


Each host takes its turn to select requests from the common recall queue; only one DFSMShsm can select work from the queue at any time. DFSMShsm will select work from the queue when:
- DFSMShsm is successfully connected to the queue and is not in the process of shutting down or processing a SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command.
- This host does not have SETSYS EMERGENCY in effect.
- A HOLD RECALL has not been issued for this host. For recalls from ML2, a HOLD RECALL(TAPE) has not been issued.
- A HOLD COMMONQUEUE or HOLD COMMONQUEUE(SELECTION) command has not been issued for this host.
- There are no recall requests with the same or a higher priority to be processed on the host's local recall queue.


- There is an unused recall task available to process this request.
- There are recall requests to be processed on the CRQ.
- DFSMShsm is able to allocate the resources required to satisfy the request: the input volumes (ML2, or ML1 SDSP) are not already in use by another host, and recall is not held for this resource on this host.
Once one DFSMShsm host has selected a recall request to process, the request is copied to the local host and flagged as in process in the CRQ. DFSMShsm then begins to process the recall request. If the recall request requires an ML2 volume to be mounted, DFSMShsm checks the CRQ to determine whether there are any other recall requests that can be satisfied from this volume. All recalls for other data sets on that volume are processed by this host while the tape remains mounted. This can result in requests apparently jumping the queue, but it reduces contention for tape volumes and the total number of tape mounts that need to be processed. This processing replaces the function previously provided by recall tape takeaway from recall. If you have multiple CRQs, or hosts that do not connect to the CRQ within your HSMplex, you will lose this benefit and may still see recall tape takeaway from recall processing. Recalls are processed by the host that selected them as if they had originated on this host, except that:
- Messages are still returned to the originating user or job.
- For WAIT requests, the originating host POSTs the waiting task.
The processing host:
- Produces the Functional Statistics Records (FSR) for the recall. There are new fields in the FSRs that allow you to determine that a recall request was selected from the CRQ. We summarize the changes to FSRs in Table 7-1 on page 169. The FSRs also record the host ID of the host that originated the recall request.
- Produces the operator messages for the recall; these are logged only on the host that processes the recall.
- Drives any ARCRDEXT that may exist. Since ARCRDEXT can be used to determine device pooling for recall, we strongly recommend that the same ARCRDEXT be used on all hosts participating in the CRQ, because there is no way to predetermine which host will actually service a recall request.


Take care in JES3 environments: if recall is enabled, recalls are being selected from tape, and no tape drives are online, then even NOWAIT recall requests will be cancelled. You will receive the messages in Figure 4-20. If there are no online tape drives available to a JES3 system, we recommend that you issue a HOLD RECALL(TAPE) command on this host. DFSMShsm running on a JES3 system will select and fail recall requests from a common queue even if there are other systems connected to the CRQ that could process the request.

HRECALL MHLRES4.TEST5.A11 NOWAIT ARC0790E TAPE(S) ARE NOT AVAILABLE FOR USERID ARC0790E (CONT.) **OPER** RECALL REQUEST. ARC0790E (CONT.) DSN=MHLRES4.TEST5.A11, ARC0790E (CONT.) VOLSER(S)=TST010 ARC1001I MHLRES4.TEST5.A11 RECALL FAILED, RC=0081, ARC1001I (CONT.) REAS=0000 ARC1198E TAPE(S) CONTAINING NEEDED DATA NOT AVAILABLE ARC1181I RECALL FAILED - ERROR ALLOCATING TAPE VOLUME

Figure 4-20 JES3 recall failure

DFSMShsm AUX hosts are able to process recall requests even though implicit recall requests are not directed to them. We illustrate this in Figure 4-21, where an AUX host, HOST A, is shown processing a recall request that was generated from HOST 4.

F HSM,Q REQ ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4 ARC0162I RECALLING DATA SET MHLRES4.TEST1.A1 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00000047 ON HOST A ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=4

Figure 4-21 Recall processed on an AUX host

4.5.4 Disconnecting from the common recall queue


DFSMShsm will disconnect from the CRQ as a result of a shutdown request or when a SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command is issued. The disconnecting DFSMShsm:
- Sends all new recall requests to its local recall queue
- No longer selects new requests from the CRQ
- Completes any remote request that it had begun processing


In addition, when you disconnect DFSMShsm from the CRQ with the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command, all requests that this DFSMShsm had placed on the CRQ are moved back to this DFSMShsm's local recall queue. We show this in Figure 4-22. This does not happen if the disconnection is part of normal shutdown processing. During normal shutdowns, recall requests are left on the CRQ and remain available for processing by other DFSMShsm hosts still connected to the CRQ.

IEF244I HSM HSM - UNABLE TO ALLOCATE 1 UNIT(S)AT LEAST 1 OFFLINE UNIT(S) NEEDED. IEF877E HSM NEEDS 1 UNIT(S)FOR HSM RESIN1LIBRARY: LIB1 LIBRARY STATUS: ONLINE OFFLINE 0B90-0B93 IEF878I END OF IEF877E FOR HSM HSM RESIN1 *059 IEF238D HSM - REPLY DEVICE NAME OR 'CANCEL'. F HSM,Q W ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4 ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL ARC1542I (CONT.) QUEUE=00000009,TOTAL=00000009 ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000, ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000, ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000, ARC0168I (CONT.) TOTAL=000000 ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=4 F HSM,SETSYS CQ(R(D)) ARC0100I SETSYS COMMAND COMPLETED ARC1504I DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL MAY BE DELAYED F HSM,Q W ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4 ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00009, ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000, ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000, ARC0168I (CONT.) TOTAL=000009 ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=4 R 59,CANCEL IEE600I REPLY TO 059 IS;CANCEL *060 ARC0381A ALLOCATION REQUEST FAILED FOR TST011 FOR RECALL. REPLY WAIT OR CANCEL R 60,CANCEL IEE600I REPLY TO 060 IS;CANCEL ARC1502I DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL ARC1502I (CONT.) WAS SUCCESSFUL, RC=00, REASON=00000000

Figure 4-22 Delayed disconnection from the CRQ structure


Similarly, there is no need to disconnect DFSMShsm from the CRQ structure if you need to move it to another CF. If another CF is specified in the preference list for the CRQ structure, you can move the structure non-disruptively with the SETXCF START,REBUILD,STRNM=SYSARC_basename_RCL,LOC=xxxx command. When DFSMShsm is disconnecting from a CRQ structure, you may see a delay in the process if there are outstanding allocation requests; DFSMShsm will issue an ARC1504I message. We show this in Figure 4-22 (note that this output has been slightly reformatted). In Figure 4-22 we forced DFSMShsm to go through allocation recovery by requesting an offline tape drive; then, while DFSMShsm was still requesting the tape mount, we issued the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command. Before the command was issued there were nine recall requests on the CRQ; after the disconnect was issued there were nine recall requests on the local recall queue. DFSMShsm successfully disconnected from the recall queue once allocation recovery had completed.

4.5.5 Impact of HOLD and RELEASE commands


In this section we discuss the interactions between the HOLD RECALL and HOLD COMMONQUEUE commands and their counterparts, the RELEASE RECALL and RELEASE COMMONQUEUE commands. We discuss the following interactions:
- The interactions between the differing levels of each command. We only discuss the HOLD and RELEASE COMMONQUEUE commands in this section; for information about the interactions between levels of other HOLD and RELEASE commands, please refer to z/OS V1R3.0 DFSMShsm Storage Administration Reference, SC35-0422-01.
- Differences in processing for WAIT and NOWAIT RECALL requests.
- The interactions between the HOLD and RELEASE COMMONQUEUE and the HOLD and RELEASE RECALL commands.

Interactions between levels of HOLD and RELEASE


When you use the HOLD and RELEASE COMMONQUEUE commands, you can alter the scope of the command. HOLD or RELEASE COMMONQUEUE issued with no operands is the most powerful level of these commands. However, you can also use both HOLD and RELEASE CQ with the SELECTION or PLACEMENT options.


The HOLD ALL and RELEASE ALL commands have no impact on the CRQ. The only HOLD or RELEASE commands that impact the CRQ are the HOLD and RELEASE COMMONQUEUE commands. If a HOLD COMMONQUEUE(RECALL) command has been issued, it is not possible to partially release the HOLD. We illustrate this in Figure 4-23, where a RELEASE COMMONQUEUE(RECALL(SELECTION)) command was issued unsuccessfully after a HOLD COMMONQUEUE(RECALL) command.

F HSM,RELEASE CQ(R(S)) ARC0111I SUBFUNCTION COMMONQUEUE(RECALL(SELECTION)) ARC0111I (CONT.) CANNOT BE RELEASED WHILE MAIN FUNCTION ARC0111I (CONT.) COMMONQUEUE(RECALL) IS HELD ARC0100I RELEASE COMMAND COMPLETED

Figure 4-23 RELEASE failure

To allow this host to select recall requests from the CRQ, you need to issue the command RELEASE COMMONQUEUE(RECALL). Then, if you wish this host to only select requests from the CRQ, issue the command HOLD COMMONQUEUE(RECALL(PLACEMENT)). A possible command sequence is shown below.
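For example, assuming the DFSMShsm started task name HSM used in the earlier figures, the operator command sequence would be:

F HSM,RELEASE COMMONQUEUE(RECALL)
F HSM,HOLD COMMONQUEUE(RECALL(PLACEMENT))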

WAIT versus NOWAIT recalls


The host that places the request on the CRQ must ensure that WAIT recall requests are not stranded on the CRQ. A WAIT recall request is stranded if it has been placed on the CRQ but there is currently no DFSMShsm host connected to the queue that is eligible to process it. There must be at least one host connected to the CRQ that is eligible to process a WAIT-type request; otherwise the request must be failed. Before placing a RECALL request on the CRQ, the DFSMShsm placing the request determines which of the hosts connected to the CRQ is the most eligible host and what level of recall the most eligible host is capable of processing. The most eligible host in the CRQplex is the DFSMShsm host that has an equal or greater ability to process recalls than any other host. For example, a host that has HOLD RECALL(TAPE) in effect is more eligible to process requests than a host with HOLD RECALL in effect. Before placing WAIT recall requests onto the CRQ, the originating host examines the status of the most eligible host. If there is no host eligible to process the request, the request is failed. If the status of the most eligible host decreases, each host scans its WAIT-type requests and fails those for which there is no eligible host. Table 4-3 summarizes host eligibility.


Table 4-3 Determining the most eligible host connected to a CRQ


Hold level                      Eligibility
None                            Highest eligibility
HOLD RECALL(TAPE(TSO))          Less eligible
HOLD RECALL(TAPE)               Least eligible
HOLD RECALL                     Not eligible
HOLD COMMONQUEUE(R)             Not eligible
HOLD CQ(R(S))                   Not eligible

Interaction between HOLD RECALL and HOLD CQ commands


The effects of HOLD RECALL and HOLD COMMONQUEUE commands can be complex. In Table 4-4 we summarize some of the interactions between these commands.
Table 4-4 Interaction between HOLD commands
HOLD RECALL status   HOLD CQ status   Use local recall queue   Select from CRQ   Place on CRQ
No HOLD              NO HOLD          N/A                      YES               YES
No HOLD              HOLD CQ(R(P))    YES                      YES               NO
No HOLD              HOLD CQ(R(S))    YES                      NO                YES
No HOLD              HOLD CQ(R)       YES                      NO                NO
HOLD RECALL          NO HOLD          NO                       NO                YES
HOLD RECALL          HOLD CQ(R(P))    NO                       NO                NO
HOLD RECALL          HOLD CQ(R(S))    NO                       NO                YES
HOLD RECALL          HOLD CQ          NO                       NO                NO

Using the information from Table 4-3 and Table 4-4, it is possible to see how a DFSMShsm host could be configured to place RECALL requests on the CRQ but process RECALL requests only from ML1; a HOLD RECALL(TAPE) command would allow this. If you wanted to prevent a DFSMShsm host from performing any recalls, you could choose either a HOLD RECALL or a HOLD COMMONQUEUE(RECALL(SELECTION)) command. The HOLD RECALL command would impact DELETE requests, which the HOLD COMMONQUEUE command would not. We show a possible command sequence below.
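For example, to configure a host that continues to place requests on the CRQ but recalls only from ML1 (again assuming the started task name HSM used elsewhere in this chapter), you could issue:

F HSM,HOLD RECALL(TAPE)

To prevent the host from selecting any requests from the CRQ while still allowing it to place requests, F HSM,HOLD COMMONQUEUE(RECALL(SELECTION)) is the alternative described above.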


Figure 4-24 shows recall requests being processed on the local queue after a HOLD CQ(R(P)) command has been issued.

F HSM,Q AC ARC0101I QUERY ACTIVE COMMAND STARTING ON HOST=4 ARC0160I RECALL=NOT HELD, TAPERECALL=NOT HELD, DATA SET RECALL=ACTIVE ARC0162I RECALLING DATA SET MHLRES4.TEST3.A1 FOR USER ... ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS: ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=NONE, ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=CQ(RECALL(PLACEMENT)), ARC1540I (CONT.) STRUCTURE ENTRIES=000% FULL,STRUCTURE ELEMENTS=000% ARC1540I (CONT.) FULL ARC1541I COMMON RECALL QUEUE SELECTION FACTORS: ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD STATUS=NONE ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=4 F HSM,Q W ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4 ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL ARC1542I (CONT.) QUEUE=00000000,TOTAL=00000000 ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00003, ... F HSM,Q REQ ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4 ARC0162I RECALLING DATA SET MHLRES4.TEST3.A1 FOR USER ARC0162I (CONT.) MHLRES3, REQUEST 00004422 ON HOST 4 ARC0167I RECALL MWE FOR DATA SET MHLRES4.TEST3.A10 FOR ARC0167I (CONT.) USER MHLRES3, REQUEST 00004423, WAITING TO BE ARC0167I (CONT.) PROCESSED, 00000 MWE(S) AHEAD OF THIS ONE

Figure 4-24 RECALL processing on the local queue

4.6 Recovering from errors


In this section we discuss the impact of, and suggested recovery actions for, some of the errors that could occur in your HSMplex after you implement a CRQ. We first discuss errors due to a lost component, for example:
- Loss of a DFSMShsm address space
- Loss of a z/OS LPAR
- Loss of a CF
- Loss of connectivity to a CF


We also discuss errors that may not have such an obvious cause but rather involve a loss of function, for example:
- Recall requests not processing
- Recall requests failing
- Recall requests being processed by the wrong DFSMShsm
We recommend that you also review the manual DFSMShsm Data Recovery Scenarios, GC35-0419.

4.6.1 Loss of a DFSMShsm or LPAR


If either a DFSMShsm host or a z/OS LPAR fails, the remaining DFSMShsm hosts connected to the CRQ perform the following recovery actions:
- One of the remaining DFSMShsm hosts drives the cleanup of references to the failed host in the CRQ.
- Any RECALL requests that had been selected from the CRQ for processing by the failed host are replaced on the CRQ and are eligible to be processed by any of the remaining DFSMShsm hosts.
- If the DFSMShsm host that failed was the most eligible host, the remaining hosts determine the new most eligible host. Potentially this may cause some WAIT RECALL requests to be failed if the new most eligible host is unable to process them.
- All RECALL requests placed on the CRQ by the failed DFSMShsm remain on the CRQ and will be processed by the remaining DFSMShsm hosts.
When the failed DFSMShsm host reconnects to the CRQ, it does the following:
- Negotiates with the other hosts to determine which is the most eligible host.
- Checks whether any WAIT requests that it retains are still on the CRQ and being processed; if so, these requests are allowed to complete on the remote host.
- Moves RECALL requests from its local recall queue to the CRQ.
- Begins to select work from the CRQ if there are no local requests of the same or higher priority than the first request for processing on the CRQ.

4.6.2 Loss of a CF or connectivity to a CF


The results of losing either a CF or a system's connectivity to a CF vary depending on your configuration. We discuss three possible configurations.


CF rebuild
If a second CF is defined in the CRQ structure definition's preference list in your CFRM policy, loss of a CF will cause Cross-system Extended Services (XES) to attempt to rebuild the structure on the other CF. Loss of a system's connection to the CF may also drive a rebuild, depending on:
- The value specified for REBUILDPERCENT in the structure definition
- Whether an active SFM policy exists
If both of these conditions apply, loss of connectivity from a single z/OS system may cause the structure to be rebuilt. For more information about SFM and REBUILDPERCENT, please refer to z/OS V1R3.0 MVS Setting Up a Sysplex, SA22-7625-03. If your structure is rebuilt on another CF, all systems should retain access to the CRQ and continue using it for recall processing. It is possible that some systems may lose access to the CRQ during this time; if that occurs, processing continues as discussed in No CF available below.

No CF available
If there is no alternate CF available for the CRQ structure, then each DFSMShsm:
- Continues to process any active recall requests.
- Falls back to using the local recall queue for processing. This may cause problems if a HOLD RECALL has been issued.
- Moves all requests that it has retrieved from the CRQ to the local recall queue.
- May experience some recall failures for data sets that were queued for processing on a remote host; this is not a major problem, since the recall request will be processed.
- Listens for connectivity being restored to the CRQ structure and reconnects to it. After DFSMShsm reconnects to the CRQ structure, it moves requests from its local recall queue to the CRQ.
This process should be transparent to users of DFSMShsm. Each DFSMShsm still retains information about all the RECALL requests it had placed on the CRQ. If you are using recall servers or limiting tape recalls to certain hosts, you may need either to issue RELEASE commands on some hosts or to accept delays in recall processing until DFSMShsm can reconnect to the CRQ.


4.6.3 Recall request processing


If DFSMShsm is not successfully processing recall requests, you may see one of the following symptoms:
- Recall requests are not being processed; requests may be accumulating on the CRQ or are being directed to the local recall queue.
- WAIT recall requests are being cancelled because no DFSMShsm host is able to process them.
- Recall requests are being failed; you may see message ARC1506E.
If errors are detected within the CRQ structure, DFSMShsm attempts to correct the error automatically. DFSMShsm may issue an ARC1506E message stating that a recall failed due to an error with the CRQ. If recalls are not processing, you should issue the following commands on each system that is participating in the CRQ:
- QUERY WAITING
- QUERY COMMONQUEUE
- QUERY REQUEST
- QUERY ACTIVE
The QUERY REQUEST command may produce large amounts of output if there are many requests queued. We have included samples of the outputs received from these commands in Figure 4-25. Note that the output has been edited to reduce the amount of data shown.


F HSM,Q W ARC0101I QUERY WAITING COMMAND STARTING ON HOST=1 ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL ARC1542I (CONT.) QUEUE=00000199,TOTAL=00000199 ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000, ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000, ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000, ARC0168I (CONT.) TOTAL=000000 ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=1 F HSM,Q REQ ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=1 ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST3.A1, FOR ARC1543I (CONT.) USER MHLRES3, REQUEST 00004446, WAITING TO BE ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF ARC1543I (CONT.) THIS ONE F HSM,Q AC ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS: ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=ALL, ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=CQ(RECALL),STRUCTURE ARC1540I (CONT.) ENTRIES=004% FULL,STRUCTURE ELEMENTS=003% FULL ARC1541I COMMON RECALL QUEUE SELECTION FACTORS: ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD STATUS=CQ(RECALL) ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=1

Figure 4-25 Commands for determining the status of the common queue

The key items to check are these:
- Check whether any DFSMShsm is currently processing any recall request. This is shown in the Q AC output and needs to be checked for each member of the HSMplex.
- Use the D XCF,STR,STRNM=SYSARC* command to check that the CRQ structure is allocated and that all address spaces that should be connected to the structure are connected.
- Issue the Q CQ command and check the ARC1545I message to verify that the CRQ is not full. The results of this command should be identical on all systems connected to the CRQ.
- Check that DFSMShsm sees all hosts successfully connected to the CRQ.


- Check whether there are any queued requests to be processed. The Q W command provides this information; you can also determine whether the requests are on the common or the local queue from this output. This needs to be issued on all participating systems.
- Use the output from the Q REQ or the Q AC command to determine whether data sets to be recalled are currently located on ML1 or ML2.
- Use the Q AC output to check whether RECALL has been held at any level.
- Use the Q AC output to check whether any HOLD CQ command has been issued and, if so, what form of the command was used. For example, in Figure 4-25 a HOLD CQ(R) command has been issued.
- Check whether any outstanding RECALL requests are waiting for a common resource, for example, an ML2 volume. If all requests are waiting for a single volume, check whether this volume is currently mounted and whether it is being used for input or output. If the volume is currently mounted, one of the TAPETAKEAWAY functions should release it to allow the recall requests to process. If the volume is not mounted, check that the volume's information is correctly recorded by DFSMShsm. The quickest check is to issue the LIST TTOC(volser) NODSI command; if the LIST returns message ARC0378I, you will need to perform the standard recovery for this message. ARC0378I is only issued after the DFSMShsm address space that had been using the volume is recycled, so you also need to use the LIST VOLUME(volser) command and check whether the in-use flag is on. If a system has not reset this flag when it completed using the volume, no system will be able to allocate this volume for recall. This limitation exists whether or not the systems are taking part in a CRQ. We show a sample of these checks below.
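As a sketch of the volume checks, using the volser TST010 from Figure 4-20 purely as an example and the started task name HSM:

F HSM,LIST TTOC(TST010) NODSI
F HSM,LIST VOLUME(TST010)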

4.6.4 Auditing the CRQ


It is possible that problems may be encountered with the contents of the CRQ structure. DFSMShsm initially attempts to correct any problems with the contents of the CRQ structure that it encounters. If DFSMShsm is not successful at correcting the structure's contents, you may be able to do so using the AUDIT command. The DFSMShsm for z/OS 1.3 Storage Administration Reference, SC35-0422-01, recommends using the AUDIT command:
- After receiving an ARC1506E message
- After receiving an ARC1187E message
- When recall requests are unexpectedly not being selected for processing


The structure of the common queues is not externalized. It is possible that losses of connectivity, abends, or internal errors may cause the CRQ to become corrupted. Just like the DFSMShsm control data sets, the CRQ has interrelated entries that can get out of sync. You can use the AUDIT COMMONQUEUE(RECALL) command to analyze the entire structure and report or fix the errors that it finds. It issues ARC1544I to report the number of errors. It cannot find or correct all errors. It creates PDA entries to record what it found and fixed. AUDIT COMMONQUEUE(RECALL) does not return FIXCDS commands; because there are no control data set records for the CRQ, these are not applicable. Also, AUDIT COMMONQUEUE(RECALL) does not create an output data set. You may see differences in the number of errors found between AUDITs specified with FIX and NOFIX. CRQ errors are usually interrelated, so fixing one error can fix many errors reported by NOFIX.

4.6.5 Rebuilding the CRQ


If errors persist after an AUDIT FIX has been run, the next step is to disconnect all connected DFSMShsm address spaces from the CRQ and then delete the structure. The CRQ structure is defined with a structure disposition of KEEP, so you need to use the SETXCF FORCE command to delete it. Like any other FORCE command, you need to ensure that this is issued with care. If DFSMShsm address spaces do not disconnect cleanly from the CRQ structure, you will need to use XCF to force the existing connections. Please refer to the z/OS V1R3.0 MVS System Commands, SA22-7627-02, manual for additional information on using the SETXCF FORCE commands. The response from a SETXCF FORCE command is shown in Figure 4-26.


SETXCF FORCE,STR,STRNM=SYSARC_PLEX0_RCL IXC579I NORMAL DEALLOCATION FOR STRUCTURE SYSARC_PLEX0_RCL IN 704 COUPLING FACILITY 002064.IBM.02.000000010ECB PARTITION: D CPCID: 00 HAS BEEN COMPLETED. PHYSICAL STRUCTURE VERSION: B753986F 50682D63 INFO116: 132C2000 01 2800 00000003 TRACE THREAD: 00002A1B. IXC353I THE SETXCF FORCE REQUEST FOR STRUCTURE SYSARC_PLEX0_RCL WAS COMPLETED: STRUCTURE WAS DELETED D XCF,STR,STRNM=SYSARC_PLEX0_RCL IXC360I 13.56.22 DISPLAY XCF 707 STRNAME: SYSARC_PLEX0_RCL STATUS: NOT ALLOCATED POLICY INFORMATION: POLICY SIZE : 10240 K POLICY INITSIZE: 5120 K POLICY MINSIZE : 0 K FULLTHRESHOLD : 80 ALLOWAUTOALT : NO REBUILD PERCENT: N/A PREFERENCE LIST: CF02 CF01 ENFORCEORDER : NO EXCLUSION LIST IS EMPTY

Figure 4-26 SETXCF Force issued against a CRQ structure

The SETXCF FORCE command will not complete if any connections still exist to the structure. You can use the D XCF,STR,STRNM=SYSARC_basename_RCL command to list the connections, and the SETXCF FORCE,CON command to delete any failed-persistent connections; you cannot use this command to delete a connection that is in active status. Once the structure has been deleted, you should be able to reconnect your DFSMShsm address spaces to the CRQ using the SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command. This sequence effectively deletes the contents of the CRQ structure, and the contents will be rebuilt as each DFSMShsm reconnects to the structure.


4.6.6 Additional diagnostic data collection


As a general rule, if you are using the CRQ or any other CF structure, you should check that your systems' dump options include the COUPLE and XESDATA options. You can check this using the D D,O command. When issuing an explicit command to cause an SVC dump of the DFSMShsm address space, you may want to consider adding a STRLIST operand to the dump command:
DUMP COMM=(CRQ LIST STRUCTURE)

Then respond to the outstanding WTOR:


STRLIST=(STRNAME=SYSARC_basename_RCL, (LNUM=ALL,ADJ=CAP,EDATA=UNSER), LOCKE,(EMC=ALL),ACC=NOLIM),END

You should also ensure that you collect all the usual diagnostic information required by your IBM support center; this may include LOGREC, JOBLOGs, SYSLOGs, and DFSMShsm PDA trace information.

4.7 Other new enhancements


This level of DFSMShsm provides support for the other functions discussed elsewhere in this book. This includes support for:
- VSAM large real storage
- Large volume support
- DFSMSdss HFS copy support
- Overflow and extend storage groups
- Dynamic volume count
- Data set separation

4.7.1 Keyrange data sets


If you are using keyrange data sets for your DFSMShsm control data sets, we strongly recommend that you convert them to non-keyrange data sets before upgrading your system to z/OS V1R3 DFSMS. Please read informational APAR II12896 for a detailed description of why this is necessary. We have included the responder section of this APAR in Maintenance information on page 171.


4.7.2 DFSMShsm large volume support


Also in z/OS 1.3, DFSMShsm has added support for large volumes. We discuss this support in Large volume support on page 40. These volumes can be used by DFSMShsm for the following functions:
- SMS level zero (primary) volumes
- Non-SMS level zero (primary) volumes
- Migration level one volumes
- Migration level two volumes
- Backup volumes

Support for large volumes has been retrofitted to OS/390 2.10 by APAR OW49147. Toleration support for large volumes for OS/390 levels prior to 2.10 is provided by APAR OW49148; toleration support prevents a large volume from being varied online to an earlier level system. We do not recommend defining large volumes that will be accessed by DFSMShsm until all systems in the HSMplex have the enabling maintenance installed.

There is no change in the number of ML1 volumes required for best performance; this is still n+1, where n is the maximum number of migration tasks in the HSMplex. You may still only allocate one SDSP per ML1 volume.

The following DFSMShsm control data sets can be allocated on large volumes:
- MCDS, BCDS, OCDS
- JRNL
- PDA
- Logs

The FREEVOL command supports the movement of data from previous volumes to large volumes.


Chapter 5. DFSMSdss enhancements

In this chapter we describe the changes introduced with the DFSMSdss component:
- HFS logical copy support
- Enhanced dump conditioning
- Large volume support


5.1 HFS logical copy support


This release introduces the ability for DFSMSdss to perform a logical data set copy of hierarchical file system (HFS) data sets. This eliminates the previous requirement of either copying the entire volume or performing a DUMP/RESTORE to process an HFS data set. The logical copy of individual files within the HFS is not supported.

5.1.1 z/OS view of an HFS


z/OS sees an HFS data set as having a partitioned organization made up of pages that are 4K (4096 bytes). An HFS data set can be SMS managed or non-SMS managed. If the HFS is SMS managed, then the data set must be cataloged and it can be multi-volume. If the HFS is non-SMS managed, then it can be uncataloged and must be on a single volume. However, only cataloged HFS data sets can be mounted.

5.1.2 z/OS UNIX view of an HFS


z/OS UNIX sees an HFS as a set of files and attributes. In order for z/OS UNIX to process an HFS data set, the data set must first be mounted using the MOUNT command or by the BPXPRMxx member of PARMLIB during an IPL. The BPXPRMxx member contains the parameters that control the z/OS UNIX environment and the file systems. Mounting the HFS relates the physical HFS data set to the UNIX path name. Only cataloged HFS data sets can be mounted. DFSMSdss can make logical copies of both mounted and unmounted data sets.

5.1.3 How HFS logical copy works


When a job is submitted to perform a logical copy of an HFS data set, DFSMSdss requests a SYSZDSN enqueue of the HFS data set resource. If the user specified DELETE, the request is for an exclusive enqueue. Otherwise, the request is for a shared enqueue. If the HFS data set is not currently mounted, or if it is mounted for read-only and the user did not specify DELETE, then the enqueue will succeed and the copy will proceed as requested.


If the HFS data set is mounted and the user specified DELETE, then the enqueue will fail and DFSMSdss will fail the request with ADR410E. If the enqueue fails and the data set is uncataloged, then DFSMSdss will fail the request with ADR412E. If the HFS data set is currently mounted for read-write, then a quiesce is performed against the HFS. This quiesce marks the HFS and the files in it as unusable, even for read processing, until the HFS is unquiesced. Users accessing the files in a quiesced HFS are suspended until the unquiesce is processed. If the quiesce fails, the copy fails and ADR960E is issued. If the quiesce succeeds, then the copy proceeds.
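As an illustration, a logical copy of a single HFS data set might be coded as shown below. This is a minimal sketch only; the data set names, storage class, and the decision to rename the copy are assumptions, not requirements of the support.

//HFSCOPY  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(OMVS.TEST.HFS))                         -
       RENAMEUNCONDITIONAL((OMVS.TEST.HFS,OMVS.TEST.HFS.COPY)) -
       STORCLAS(SCSTD)                                         -
       ALLDATA(*)
/*

ALLDATA(*) requests that space for the target be based on the high allocated page rather than the high used page, as described in the next section.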

5.1.4 Target HFS space allocation


HFS logical copy can use a preallocated target HFS or allocate space for a new target HFS. The DFSMSdss keyword ALLDATA is honored for the target HFS. If ALLDATA is specified, then the required space is determined from the High Allocation Relative Page Number (HARPN). If ALLDATA is not specified, then required space is determined from the High Used Relative Page Number (HURPN). A pre-allocated target HFS must have the same name as the source data set. Either the source or the target must be non-SMS managed (and not cataloged). A pre-allocated target data set must have sufficient space for the copy to be performed. If there is not sufficient space, then the copy fails and ADR439E is issued with reason code 60.

5.1.5 Restrictions
There are some restrictions and other points to note about the HFS logical copy support:
- The following DFSMSdss COPY keywords are ignored for the source HFS data set:
  - DYNALLOC
  - TOLERATE(ENQFAILURE)
- You cannot perform a logical copy of an uncataloged HFS data set if there is a cataloged HFS with the same name which is mounted read-write. If you attempt to do so, the copy will fail with ADR412E.


The DELETE keyword is only honored if the SYSZDSN enqueue is obtained EXCLUSIVE on the source HFS. DELETE is possible only if the HFS is unmounted. If you code the DELETE keyword and the HFS is mounted as either Read-Only or Read-Write, then the copy will fail with message ADR410E.

5.1.6 Usage considerations


Prior to z/OS V1R3 DFSMS, your DFSMSdss copy jobs could have had INCLUDE statements which inadvertently selected HFS data sets. In that case, DFSMSdss would de-select them and would not perform the HFS logical copy. With z/OS V1R3 DFSMS, DFSMSdss will attempt to copy any included HFS data sets, so you should review your DFSMSdss jobs to ensure that they actually copy what you think they do.

5.1.7 Performance
Some existing DFSMSdss logical copy jobs may now run longer if HFS data sets which you never intended to be copied are now being copied. For DFSMSdss HFS logical copy between like devices, DFSMSdss will attempt to use fast replication techniques (for example, SnapShot) to perform the copy. If fast replication is not possible, then one of the following techniques is used:
- Concurrent copy
- EXCP

For logical copies between unlike devices, DFSMSdss performs I/O by the track packing method. Regardless of the copy method selected, this new single-step copy process should be faster than the previous requirement to use the two-step DUMP/RESTORE process.

5.1.8 Coexistence
There are no coexistence issues, as this function will not be made available in previous releases. However, in a mixed level sysplex you will have to ensure that any HFS logical copies required are performed on a z/OS V1R3 DFSMS system. Any logical copy attempted on a system at a prior release will still exclude HFS data sets, the same as it does today.


5.2 Enhanced dump conditioning


This section describes DFSMSdss dump conditioning, introduced with APAR OW45674, and the enhancements to the dump conditioning functions of DFSMSdss since OS/390 R10.

5.2.1 Overview
Prior to the introduction of dump conditioning, when performing a full volume copy of an SMS-managed volume, DFSMSdss required that the COPYVOLID keyword be specified. This resulted in the target volume being varied offline automatically due to a duplicate volser. Dump conditioning was added as a means to perform a full volume COPY operation that allows both the source and target volumes to remain online, so that full volume DUMP operations can be performed against the source data at an intermediate target location. The DFSMSdss COPY command DUMPCONDITIONING keyword specifies that you want to create a copy of the source volume for backup purposes, not for the purpose of using the target volume for general applications. The two DFSMSdss COPY command keywords, DUMPCONDITIONING and COPYVOLID, are mutually exclusive.

5.2.2 DUMPCONDITIONING phase I with OW45674


When DUMPCONDITIONING is specified, the result of a full volume copy is a target volume with a VTOC index and VVDS name identical to the source volume. However the target volume serial remains unchanged. The target volume of a full volume copy operation with DUMPCONDITIONING specified is referred to as a conditioned volume. The resulting dump data set will have all of the data from the original source volume. The dump data set will, however, have the volser from the conditioned volume (target volume) in the dump tape header. If a full volume RESTORE is performed from the conditioned volume (target volume) dump data set to an SMS volume, the COPYVOLID keyword must be specified at restore time and the volser must be clipped back to the original source volser.
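As an illustration only, a conditioned full volume copy might look like the following step; the source and target volume serials are hypothetical.

//CONDCOPY EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL               -
       INDYNAM(VOL001)    -
       OUTDYNAM(VOL002)   -
       DUMPCONDITIONING
/*

After this step, VOL002 remains online under its own volser but carries a VTOC index and VVDS named after VOL001, ready to be used as the source of a full volume dump.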


Note: A volume with a volser which does not match the VVDS and VTOC index name may have data accessibility problems. The target volume from a COPY with DUMPCONDITIONING should only be used as a source for full volume DUMP operations.

Physical data set RESTORE from a dump of a conditioned volume is not supported.

5.2.3 DUMPCONDITIONING phase II with OW48234


The enhancements introduced with APAR OW48234, and included in z/OS V1R3 DFSMS, improve the usability of dump conditioning in two areas:
- Full volume restore enhancement
- Allow physical data set restore

Full volume restore enhancement


A full volume dump of a conditioned volume will look the same as it did previously. That is, the resulting dump data set will have all of the data from the original source volume, and the dump data set will have the volser from the conditioned volume (target volume) in the dump tape header. A full volume copy of a source volume (VOL001, for example) to a target volume (VOL002, for example) specifying the DUMPCONDITIONING keyword, followed by a full volume dump of the target volume (VOL002), results in a dump data set that looks as if it was created by dumping the source volume (VOL001). The dump data set will appear as if it was created from a volume where the VVDS name matched the volume's volser. This eliminates the requirement to run another job to clip the volser of the target volume following a subsequent restore. If DUMPCONDITIONING is not specified on a full volume copy of a conditioned volume, the copy operation fails with message ADR814E.

Allow physical data set restore


You can now use a full volume dump of a dump conditioned volume as input to a physical data set restore. The restore works exactly the same as if restoring from any other full volume dump; no indication is given that you are restoring from a conditioned dump. A new message, ADR808I, is issued during a full restore of a conditioned volume.


5.2.4 Restrictions
There are a few restrictions when using DUMPCONDITIONING:
- If you attempt to perform a full volume copy of a dump conditioned volume, you must specify the DUMPCONDITIONING keyword. If this is not specified, the copy will fail with new message ADR814E.
- The DUMPCONDITIONING keyword can be used with COPY TRACK operations, but only those that include the VTOC tracks.
- For a conditioned volume, the VVDS name will not match the volume serial number. This makes the VVDS invisible to the system. One side effect of this is that a full volume copy or full volume restore operation to a conditioned volume may fail when DFSMSdss attempts to do expiration date checking, because the VVDS cannot be located. To avoid this problem, specify the PURGE keyword during full volume copy and restore operations if the target volume is a previously used conditioned volume, or if the target volume was copied with neither COPYVOLID nor DUMPCONDITIONING specified.

5.2.5 Performance
You can combine physical volume copy using DUMPCONDITIONING with Snapshot or Flashcopy to reduce the amount of time that your data is unavailable when you back it up. A full volume copy in conjunction with Snapshot or Flashcopy can produce a copy of a volume in seconds. Then, the copy can be dumped to tape while your applications are accessing the data on the original volume.
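A sketch of that backup flow follows; the volume serials, tape data set name, and unit name are illustrative only, and whether the copy uses SnapShot or FlashCopy depends on your hardware.

//*--------------------------------------------------------------*
//* Step 1: quick full volume copy to a conditioned volume        *
//*--------------------------------------------------------------*
//COPY     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL INDYNAM(PROD01) OUTDYNAM(BKUP01) DUMPCONDITIONING
/*
//*--------------------------------------------------------------*
//* Step 2: dump the conditioned volume to tape while the         *
//* applications continue to use PROD01                           *
//*--------------------------------------------------------------*
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPE     DD DSN=BACKUP.PROD01.DUMP,DISP=(NEW,CATLG),
//            UNIT=TAPE,LABEL=(1,SL)
//SYSIN    DD *
  DUMP FULL INDYNAM(BKUP01) OUTDDNAME(TAPE)
/*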

5.3 Large volume support


z/OS V1R3 DFSMS has introduced large volume support (LVS). Before LVS the largest volume available was a 3390-9, introduced in 1993. This volume had 10,017 cylinders (150,255 tracks). A large volume is any DASD volume with between 10,018 and 32,760 cylinders (491,400 tracks). All DFSMSdss functions now fully support devices defined as having up to 32,760 cylinders.


Coexistence
Full LVS is available for OS/390 Release 10 and higher with APAR OW50405. Toleration support is also provided on releases prior to OS/390 Release 10 by APAR OW49148. This toleration support prevents a large volume from being varied online to an earlier level system. See 3.1, Large volume support on page 40 for further information.


Chapter 6. DFSMSrmm enhancements

In this chapter we describe the changes introduced with the DFSMSrmm component. We also describe the new functions introduced since DFSMSrmm Release 10.

These are the changes introduced with DFSMSrmm in z/OS V1R3:
- Special character support
- Changed messages for improved diagnostics
- HELP moved from SYS1.SEDGHLP1 to SYS1.HELP

These new functions have been introduced since DFSMSrmm R10:
- Software MTL support
- Multi-volume alert in RMM dialog
- Updated conversion tools
- VSAM extended function support for DFSMSrmm CDS
- RMM application programming interface
- PARMLIB options SMSACS and PREACS
- Storage location as home location
- Enhanced bin management
- DSTORE by location
- Extended extract file
- Report generator
- Buffered tape marks support for A60 controller


6.1 Changes introduced with z/OS V1R3 DFSMS


This section describes the changes introduced in this release of DFSMSrmm.

6.1.1 Special character support


Special characters are now supported for the following DFSMSrmm entry types:
- VOLSER
- RACK
- POOL

They are also supported on the REJECT parameter. There is no support for special characters for the BIN entry type. The special characters now supported are:
, . / ( ) * & + - = blank

Special considerations
These are some considerations regarding special character support:
- For volumes that reside in IBM automated tape libraries, only alphanumeric characters are supported (with barcode labels) unless the unlabeled tape facility is used. When the unlabeled tape facility is used, the characters + (plus), - (hyphen), # (number or pound sign), & (ampersand), $ (dollar sign), and @ (at sign) are also supported.
- Leading blanks are not allowed. DFSMSrmm does not support a leading blank on the volser.
- Asterisks are allowed anywhere in a volser.
AA001*, *A0001 and A*0001 are all valid volsers

Using an asterisk in a volser could lead to unexpected results when using the RMM dialog. The RMM dialog treats the asterisk as a wildcard character and will therefore return the actual volser requested and any others that match the mask specified. The ADDVOLUME volser COUNT(n) subcommand supports fewer than 6 characters in the VOLSER field. The following would add volumes A0001 through A0015:
ADDVOLUME A0001 COUNT(15)

RACKs can now be less than 6 characters. Use the TPRACF(N) option for special character volsers, and protect them outside DFSMSrmm with a generic RACF TAPEVOL profile if required. This is because RACF does not support special characters in tape volume volsers for the TAPEVOL class.


6.1.2 Changed messages for improved diagnostics


There are two changed messages to improve error diagnostics. Catalog name is included in CATSYNCH VERIFY messages:
EDG2236I DATA SET DQQ.SAMMEL.ESA.TREND.G0001V00 VOLUME B51987 DSSEQ 1 IN CATALOG UCAT.X IS NOT DEFINED TO DFSMSrmm

Pool information is added to tape reject messages:


EDG4021I VOLUME volser REJECTED, IT IS NOT IN AN ACCEPTABLE SCRATCH POOL.
         rtype rvalue REQUESTED. mtype mvalue MOUNTED

type  = SG, PREFIX
value = SG name, pool prefix value

This change to message EDG4021I is also available through APAR OW48865 for OS/390 R10.

6.1.3 HELP moved from SYS1.SEDGHLP1 to SYS1.HELP


You no longer need to copy DFSMSrmm TSO/E help into SYS1.HELP because DFSMSrmm TSO/E help is now included in SYS1.HELP along with the rest of DFSMS. Once all systems in your sysplex are on z/OS V1R3 DFSMS, then SYS1.SEDGHLP1 can be removed from your concatenations and deleted from the systems.

6.1.4 OAM multiple object backup


Object access method (OAM) supports multiple object backup, which allows a system within an SMS complex to have more than one object backup storage group. Up to two object backup storage groups can be associated with each object storage group. Separate backup copies of objects may be specified, based on the object storage group to which the object belongs. This allows you to direct backup copies to different media types (optical or tape). Using DFSMSrmm, you could create policies to move a set of tape backups offsite. Refer to 3.10, OAM enhancements on page 61 for more information.


6.2 New functions introduced since DFSMSrmm R10


In this section we discuss the enhancements to DFSMSrmm that were introduced after DFSMSrmm R10 through APARs, which are included in z/OS V1R3 DFSMS.

6.2.1 Software MTL support


This support, introduced with APAR OW45271, provides full system-managed tape functionality for non-SMS devices and libraries. A manual tape library (MTL) is an installation-defined set of tape drives and the associated set of tape volumes that can be mounted on those tape drives. Manual tape library management provides the advantages of system-managed tape in the stand-alone environment. A stand-alone drive, in this context, is a tape drive that is independent of robotics. The RMM CHANGEVOLUME subcommand can be used to add volumes into the tape configuration database (TCDB), for example:
RMM CV volser LOCATION(mtlname) STORGRP(sg) MEDIATYPE(type) ...

No JCL changes are required to use an MTL. SMS storage groups and ACS routines can be updated to determine the placement of new tape data sets to an MTL. For details on defining tape libraries to SMS, both MTL and ATL, refer to DFSMS Object Access Method Planning, Installation, and Storage Administration Guide for Tape Libraries, SC35-0427.

6.2.2 Multi-volume alert in DFSMSrmm dialog


DFSMSrmm dialog multi-volume alert was introduced with APAR OW46143. When displaying a volume that is part of a multi-volume set or displaying a data set that spans multiple volumes in the DFSMSrmm dialog, you receive a warning message indicating that this volume or data set is part of a multi-volume set.


When displaying a data set:


EDGT062 Multi volume
EDGT062 Data set is part of a multi-volume set

When displaying a volume:


EDGT063 Multi volume
EDGT063 Volume is part of a multi-volume set

6.2.3 Updated conversion tools


IBM provides the tools and documentation needed for converting from other tape management systems to DFSMSrmm. You can find the latest information about conversion tools and samples in SYS1.SAMPLIB members EDGDOC, EDGDOCS and EDGCMM01. Three APARs since OS/390 R10 (OW44917, OW45557 and OW46387) have introduced various changes to the DFSMSrmm conversion tools which are shipped in SYS1.SAMPLIB. The following enhancements have been made to the conversion tools:
- No requirement for RACK numbers
- Improved slot to bin conversion
- TLMS version 5.5 conversion
- Tools to convert based on catalogs
- TAPE2000 conversion tools
- Parallel running exits provided
- 3590 media support
- Storage group assignment based on volume range

The conversion process implies a test system and a production system. The conversion process itself is done in the test system and finally is moved to the production system. The overall conversion process is shown in Figure 6-1.


Figure 6-1 Conversion process flow


The real conversion process involves a number of activities that read the current tape management related data and translate it into a format that DFSMSrmm is able to use. This process is shown in Figure 6-2.

Figure 6-2 Conversion process overview (diagram: an extraction step reads the old TMS database, retention rules, and policies and produces L-, D-, O-, and K-records plus a report; the conversion step builds ADDVRS commands, VRS, OWNER, BIN, DSN, and VOLUME records and, from macros, the UXTABLE; the loading step then loads these, together with additional tape management policies, into the DFSMSrmm CDS)

There are three major phases in this process:
1. Extraction phase, in which the information that DFSMSrmm needs is derived from your current tape management system database.
2. Conversion phase, in which data from the previous phase is converted to a format that, after being loaded into a VSAM KSDS data set, is usable by DFSMSrmm.
3. Post-processing phase, in which the VSAM KSDS data set is loaded, the DFSMSrmm CDS control record is created, and a comparison utility is run to verify the completeness and accuracy.

For more detailed information about the conversion process, refer to the most current DFSMSrmm conversion redbook applicable to your installation.

6.2.4 VSAM extended function support for DFSMSrmm CDS


The ability to use VSAM extended functions for the DFSMSrmm control data set (CDS) was introduced with APAR OW47639. You can define the DFSMSrmm CDS as either an extended format (EF) or a non-extended format VSAM data set. If you do not use an extended format data set for the DFSMSrmm CDS, the CDS size is limited to a maximum of 4GB. Using an EF data set enables you to use VSAM functions such as multi-volume allocation, compression, or striping. Extended format also enables you to define a CDS that uses VSAM extended addressability (EA) to enable the CDS to grow above 4GB. To define an EF or EA DFSMSrmm CDS, you must include the DATACLASS keyword on the IDCAMS DEFINE command and reference a correctly defined data class which has these attributes specified.
Note: This APAR states that you can change an existing EF CDS to an EA CDS by issuing an IDCAMS ALTER with the EXTENDEDADDR keyword. This is incorrect; the EXTENDEDADDR keyword is invalid and the only way to create an EA CDS is to allocate a new data set and REPRO the contents of the old CDS into it.
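As an illustration only, an EF and EA CDS definition might look like the following job. The data set name, data class name, and space values are examples, and the cluster attributes shown (key length, record sizes, share options) should be taken from the DFSMSrmm-supplied sample allocation job and the DFSMSrmm implementation documentation rather than from this sketch.

//DEFCDS   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* DCRMMEA is assumed to be a data class that specifies      */
  /* extended format and extended addressability               */
  DEFINE CLUSTER(                               -
         NAME(RMM.CONTROL.DSET)                 -
         DATACLASS(DCRMMEA)                     -
         KEYS(56 0)                             -
         RECORDSIZE(512 9216)                   -
         SHAREOPTIONS(3 3)                      -
         CYLINDERS(150 50))                     -
         DATA(NAME(RMM.CONTROL.DSET.DATA))      -
         INDEX(NAME(RMM.CONTROL.DSET.INDEX))
/*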


6.2.5 DFSMSrmm application programming interface


DFSMSrmm provides an application programming interface (API) that allows customers to write programs to obtain services from DFSMSrmm. Figure 6-3 shows the relationship between DFSMSrmm API and other software components.

Figure 6-3 DFSMSrmm API (diagram: in user space, an assembler program uses the EDGXCI macro and structured field introducers to call the EDG TSO RMM API, which drives the RMM command in the DFSMSrmm address space; alternatively, a REXX program issues RMM commands through TSO and receives the results in REXX variables)

Using the API, you can issue any of the RMM TSO subcommands from an assembler program. You can use the API to obtain information about DFSMSrmm resources and then use the data to create reports or to implement automation. The sample installation exit EDGUX100 shows how to use the API call to list a volume record from the DFSMSrmm Control Data Set. The use of the API call is optional, and you must have High Level Assembler installed on your system in order to code the assembler language programs. For further details, refer to z/OS V1R3.0 DFSMSrmm Application Programming Interface, SC26-7403.
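The assembler API itself is driven through the EDGXCI macro and is beyond the scope of this overview. For comparison, the simpler route shown in Figure 6-3, issuing RMM TSO subcommands directly from REXX, can be scripted as in the following fragment; this is a hedged sketch and the volume serial is purely illustrative.

/* REXX - issue an RMM TSO subcommand and test the return code */
address TSO
"RMM LISTVOLUME A00001"            /* illustrative volser       */
if rc = 0 then
  say 'Volume A00001 is defined to DFSMSrmm'
else
  say 'RMM LISTVOLUME ended with return code' rc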


6.2.6 PARMLIB options SMSACS and PREACS


DFSMSrmm automatic class selection (ACS) pooling control enhancements have been made so that you can enable or disable the ACS processing introduced with OS/390 V2R10. This enhancement was introduced with APAR OW48865. Use this enhancement to control DFSMSrmm support for pre-ACS processing and to control assignment of storage group and management class. You can set DFSMSrmm to call SMS ACS processing to use your ACS routine management class and storage group values instead of vital record specification management values and scratch pools. There are two new PARMLIB options, PREACS and SMSACS. Both of these options default to NO if they are not specified in the EDGRMMxx member of PARMLIB. These defaults prevent DFSMSrmm from incorrectly using a management class or storage group name returned by the ACS routines.
PREACS(NO|YES)

Specify this operand to control whether DFSMSrmm-supplied and EDGUX100 installation exit-supplied values are input to SMS Pre-ACS processing.
NO    Specify NO to avoid DFSMSrmm PreACS processing using the DFSMSrmm EDGUX100 installation exit.
YES   Specify YES to enable DFSMSrmm PreACS processing using the DFSMSrmm EDGUX100 installation exit.

Default: NO.
SMSACS(NO|YES)

Specify this operand to control whether DFSMSrmm calls SMS ACS processing to enable use of storage group and management class values with DFSMSrmm.
NO    Specify NO to prevent DFSMSrmm from calling SMS ACS processing to obtain management class and storage group names. DFSMSrmm system-based scratch pooling, and scratch pooling and VRS management values based on the EDGUX100 installation exit, are used.
YES   Specify YES to enable DFSMSrmm calls to SMS ACS processing to obtain management class and storage group names. If values are returned by the SMS ACS routines, the values are used instead of the DFSMSrmm and EDGUX100 decisions.

Default: NO.


Implementation tasks
Update the EDGRMMxx PARMLIB member with PREACS and SMSACS options to enable the function. Before you enable SMS ACS support in your installation, at a minimum, you must make a preventative change in your SMS ACS routines to keep DFSMSrmm from incorrectly processing tape volumes. Update your ACS management class (MC) and storage group (SG) routines to check whether the ACS environment variable (&ACSENVIR) is set to either RMMPOOL or RMMVRS, and if it is, avoid setting an MC or SG name. You can add statements such as the following to your MC and SG routines:
WHEN (&ACSENVIR = 'RMMPOOL' | &ACSENVIR = 'RMMVRS')
  DO
    EXIT
  END

If you do not make this change, it is possible for DFSMSrmm to use a management class name or a storage group name that was incorrectly returned by the ACS routines.
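For the PARMLIB side of the implementation, the two operands are simply added to the OPTION command in your EDGRMMxx member. The fragment below is a sketch that shows only the new operands; in practice you add them to your existing OPTION command and keep all of its other operands as already coded.

/* EDGRMMxx: enable DFSMSrmm calls to pre-ACS and SMS ACS processing */
OPTION SMSACS(YES) PREACS(YES)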

6.2.7 Storage location as home location


In this section we provide an overview of storage locations and then give you two situations where you could benefit from using storage locations as home locations.

Storage locations
Storage locations are those places outside the removable media library where you send removable media. Storage locations are not part of the removable media library, because the volumes are not generally available for immediate use and it is not possible to return to scratch when in these locations. Storage locations are typically used to store removable media that are kept for disaster recovery or vital records. DFSMSrmm manages two types of storage locations: installation-defined storage locations and DFSMSrmm built-in storage locations. DFSMSrmm provides shelf-management of storage locations by assigning bin numbers to shelf locations within a storage location. DFSMSrmm automatically provides shelf-management for the built-in locations; this means that DFSMSrmm assigns bin numbers to each volume in a built-in storage location.


You can define an unlimited number of installation-defined storage locations, using any 8-character name for each storage location. Within the installation-defined storage location, you can define the type or shape of the media in the location. You can also define the bin numbers that DFSMSrmm assigns to the shelf locations in the storage location. You can request DFSMSrmm shelf management when you want DFSMSrmm to assign a specific shelf location to a volume in the location. We recommend the use of installation-defined storage locations. All that you can do with built-in storage locations, you can do with installation-defined storage locations, and more. In Table 6-1 you can see an overview of the differences between built-in and installation-defined storage locations.
Table 6-1 Differences between built-in and installation-defined storage locations

                 Built-in storage location            Installation-defined storage location
Number           3 predefined                         Unlimited
Name             LOCAL, DISTANT, REMOTE               Any 1 to 8 characters
Bin numbers      From 1 to 999999                     Any 6 character value
Priority         Default priority between locations   Defined by you in the LOCDEF parameter
Segregate        You cannot                           You can segregate by specifying media names
Shelf-managed    Automatically                        You can decide

Installation-defined storage locations can be subdivided based on the media that resides in the location. For example, you can identify part of a storage location for cartridges and another part for reels. To do this you should provide a media name when you add bin numbers so the volumes are sent to the correct part of the storage location.

Defining a storage location as a home location


The ability to define a storage location as a home location was introduced with APAR OW48921. You can define storage locations and use them as home locations for volumes. This enables DFSMSrmm to return volumes for release processing to this location and allows volumes to be returned to scratch while in this location, as long as the location is their home location. This support allows you to add volumes to DFSMSrmm using this location name as the volume home location.


You can decide how volumes are shelf managed. Shelf management would be required if you want volumes stored in a specific slot such as a rack number or bin number. A shelf location would not be required if the volume is stored in a robotic tape library. Depending on how you define the locations, you will use one way or the other: To not use shelf locations, define the storage location using:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(NOBINS)

(And do not define rack numbers that match to the volume serial number.) To use the rack number as the shelf location, define the storage location in this way:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(NOBINS)

(And define rack numbers that match to the volume serial numbers, or use the POOL operand when adding or moving volumes.) To use the bin numbers as shelf locations, define the storage location using:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(BINS)

(And do not define rack numbers that match to the volume serial numbers.) To assign or to change the assignment of shelf locations for volumes in the SHELF location or in a system-managed library, use the RMM ADDVOLUME and CHANGEVOLUME subcommands with the RACK or POOL operands. DFSMSrmm does not automatically initiate assignment of rack numbers as the shelf location for these volumes. When you specify the LOCATION operand on CHANGEVOLUME and specify a storage location, it does not set the home location; to set the home location, you must specify the HOME operand. When implementing this function, follow the steps that describe your current situation.

If you are converting to DFSMSrmm


These are the steps you would follow:
1. Specify LOCDEF store BINS HOME in the EDGCNVT SYSIN statements.
2. Specify IF VOLRANGE with LOCATION to assign location names based on volume ranges.
3. Ensure that LOCDEF commands in the EDGRMMxx PARMLIB member include TYPE(STORAGE,HOME) only for those storage locations that are also to be home locations.


If you are already using DFSMSrmm


These are the steps you would follow:
1. Update LOCDEF commands in the EDGRMMxx PARMLIB member to include TYPE(STORAGE,HOME) only for those storage locations that are also to be home locations.
2. Refresh the DFSMSrmm parameters:
F DFRMM,M=xx

3. Use DFSMSrmm CHANGEVOLUME subcommand to set the home location of volumes to be assigned to a specific storage home location:
RMM CV volser HOME(storname)

4. Use DFSMSrmm CHANGEVOLUME subcommand to set the current location of volumes already at the storage home location:
RMM CV volser LOCATION(storname)

If the storage location is not shelf managed, you can include the CMOVE operand to confirm the move is completed. If the storage location is shelf managed, you must run EDGHSKP DSTORE processing to assign shelf locations to the volumes. If you want specific shelf locations assigned, you can include the BIN operand on the CV subcommand.

Suggested uses of home storage locations


In this section we include two situations in which you can benefit from the use of storage locations as home locations:
1. In this situation, you have a robot library in your data center. If the robot is full of tapes, you need to move volumes in and out to keep the active volumes inside the library. With the previous implementation, all the volumes were in the SHELF location, which was confusing. With the new support, you can identify the robot library as a location known to DFSMSrmm. Give your robot library a name, such as ACS0, and define it to DFSMSrmm using the LOCDEF command:
LOCDEF LOCATION(ACS0) TYPE(STORAGE,HOME) MEDIANAME(CARTS) MANAGEMENTTYPE(NOBINS)

When you enter volumes into the library, issue the following command:
RMM CV volser LOCATION(ACS0) HOME(ACS0) CMOVE

When a volume is ejected from the library, ensure that DFSMSrmm knows where the volume is going. If it is not being moved by DSTORE processing, tell DFSMSrmm where it is being stored. To move it, issue the following command:
RMM CV volser LOCATION(SHELF) POOL(T*) CMOVE


In this way you can easily identify whether a volume is inside or outside the robot library. You can use the TSO RMM subcommands via the DFSMSrmm API, perhaps from an exit provided by the robot library vendor, such as SLSUX06.
2. In this situation, your installation is split between two separate data centers, with active volumes in both. With the previous implementation, all volumes were located in SHELF. Now you can give your second data center a storage location name and manage it as a data center, so you can easily identify where your volumes are. Give your second data center a location name, such as LIB, and define it to DFSMSrmm using the LOCDEF command:
LOCDEF LOCATION(LIB) TYPE(STORAGE,HOME) MEDIANAME(REELS,CARTS,*) MANAGEMENTTYPE(NOBINS)

When you define volumes for use in this second data center, issue the following command:
RMM AV volser LOCATION(LIB) STATUS(SCRATCH) MEDIANAME(CARTS)

6.2.8 Enhanced bin management


DFSMSrmm bin management has been enhanced in five areas with the introduction of APAR OW49863.

Extended bin management


DFSMSrmm now maintains additional information in the volume and bin record to reflect when a move of a volume to or from a bin managed storage location has been started or has been confirmed.

Bin reuse
A bin can become reusable as soon as a move of a volume out of this bin has been started. Use the REUSEBIN operand to control how DFSMSrmm reuses bins when a volume is moving from a bin. There are two options:
- CONFIRMMOVE: When a volume moves out of a bin, DFSMSrmm does not reuse this bin until the volume move has been confirmed.
- STARTMOVE: A bin can be reused as soon as a volume starts moving out of a bin.

CONFIRMMOVE is the default.


Assign bins in sequence


Inventory management (DSTORE) processing can now assign volumes to available bins in volume sequence and bin sequence order. INSEQUENCE only applies to those volumes that are required to move from a non-bin-managed location to a bin-managed location. Storage location management processing assigns volumes to available bins in volume sequence and bin sequence, starting with the lowest volume serial number and the lowest bin number. All bins that become available in a single run of storage location management processing can be reused for other volumes. Bins can become available during storage location management processing under these conditions:
- Global confirm move processing.
- Volumes starting a move out of the bin with PARMLIB option REUSEBIN(STARTMOVE) specified.

If REASSIGN is also specified, the volumes that restart their move are merged in sequence with those volumes that have just started their move. Freed bins are merged with empty bins. Bins are best utilized if INSEQUENCE and REASSIGN are specified and PARMLIB option REUSEBIN(STARTMOVE) is also specified; fewer empty bins then need to be defined.

Reassign processing
Inventory management (DSTORE) processing can now reassign bins to moving volumes. REASSIGN only applies to those volumes that are already moving from other than a bin-managed storage location and where the required location is either a bin-managed storage location or is different from the destination. When you specify REASSIGN, you are canceling the move for these volumes and requesting that the move be restarted so that DFSMSrmm can assign these volumes to other locations or bins. A volume is reassigned when DSTORE(LOCATION(...)) is specified if at least one of the LOCATION subparameter pairs matches the volume's current location and destination.


Command enhancements
The following commands have been changed to allow for the enhanced bin management functions:
- LISTCONTROL: Output reflects extended bin management support.
- LISTVOLUME: Output reflects extended bin management support.
- LISTBIN: Output reflects extended bin management support.
- SEARCHBIN: Output reflects extended bin management support. Also, the default of INUSE has been removed. If neither INUSE nor EMPTY is specified in the search request, DFSMSrmm lists all bins regardless of their status.
- SEARCHRACK: Output reflects extended bin management support. Also, the default of INUSE has been removed. If neither INUSE nor EMPTY is specified in the search request, DFSMSrmm lists all racks regardless of their status.

Performance
Inventory management DSTORE run time could slightly increase if extended bin support is enabled, or if DSTORE(INSEQUENCE) is used. The amount of the increase depends on the relative number of volumes that move to and from bin managed storage locations.

Usage considerations
These are some important considerations regarding usage:
- Since the default of INUSE has been removed from the SEARCHBIN and SEARCHRACK commands, you only need to invoke the command once to obtain a complete list of bins or racks. You can change any existing user-written programs and REXX programs that request two or more searches to retrieve all rack or bin numbers to make only one search request.
- Extended bin management is now the default for all new conversions from other tape library management systems to DFSMSrmm.
- In your RMMplex, your system-managed libraries have to be connected to at least one system which has enhanced bin management installed and enabled.

Coexistence
To avoid corruption of the DFSMSrmm control data set, before you enable extended bin support, ensure that for all DFSMSrmm systems sharing a control data set in an RMMplex, you have installed coexistence APAR OW49863 or APAR OW47947 or are running all systems on z/OS V1R3 or higher.


If you do not enable extended bin support, then there is no requirement to install these APARs.

Migration tasks
To implement these enhancements, consider each task listed in the following sections. The task list is split into required and optional tasks:
- Required tasks apply to all DFSMSrmm installations enabling the function.
- Optional tasks apply to any DFSMSrmm installation which is going to implement some or all of the enhancements.

Required migration tasks


These are the required migration tasks:
- Ensure that for all DFSMSrmm systems sharing a control data set in an RMMplex, you have installed coexistence APAR OW49863 or APAR OW47947, or are running all systems on z/OS V1R3 DFSMS or higher. Failure to do so could result in corruption of the DFSMSrmm control data set.
- Update any existing reports to include the additional information provided. See z/OS V1R3.0 DFSMSrmm Reporting, SC26-7406.
- Determine whether you want to enable extended bin support. This function cannot be disabled once you have enabled it.
- Ensure that all volume moves from bin-managed storage locations and to bin-managed storage locations have completed. If you use the global confirm move command to complete volume moves, run inventory management vital record processing or expiration processing to process the confirm moves for the volume records. DO NOT run storage location management (DSTORE) processing once all volume moves have been confirmed, as this could start new volume moves.
- Specify CONTROL EXTENDEDBIN in the SYSIN of your EDGUTIL JCL if you intend to enable extended bin support (a sketch of such a job follows this list).
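The last task might look like the following job. This is a hedged sketch only; the CDS data set name is illustrative, and the exact EDGUTIL parameter, DD names, and CONTROL statement operands should be checked against your existing EDGUTIL jobs and the DFSMSrmm documentation.

//EDGUTIL  EXEC PGM=EDGUTIL,PARM='UPDATE'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=RMM.CONTROL.DSET
//SYSIN    DD *
  CONTROL EXTENDEDBIN(YES)
/*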

Optional migration tasks


These are the optional migration tasks:
- Consider increasing the size of your CDS. Refer to DFSMSrmm control data set growth on page 137 for details.
- Consider increasing the size of your journal data set. Refer to DFSMSrmm journal growth on page 137 for details.
- Specify the OPTION REUSEBIN operand in your EDGRMMxx PARMLIB member to reuse an available bin in a storage location as soon as the volume move begins.


- Move volumes to selected storage locations by running storage location management (DSTORE) with a location specified. Refer to DSTORE by location on page 138 for details.
- Redirect and reassign a moving volume by running the EDGHSKP utility with the DSTORE(REASSIGN) parameter.
- Run the EDGHSKP utility with the DSTORE(INSEQUENCE) parameter to assign volumes to storage locations in volume serial number and bin number sequence.

Some or all of these optional migration tasks may be appropriate to your installation.

DFSMSrmm control data set growth


After you have enabled extended bin support, the space required for your CDS will grow over time. When a move is started for a volume, the associated CDS records are migrated to the new record level dynamically. The size of the volume record will increase by 64 bytes. The size of the bin record(s) will increase by 24 bytes if the move is to and/or from a bin-managed storage location. To determine how much additional space you may need for the CDS, you can use this calculation:

   Additional space required = (volumes * 64) + (bins * 24)

This additional space may already be available as imbedded free space in your CDS. Check whether VSAM free space or secondary space covers the additional bytes needed, and resize your CDS if required. Even if you do not reallocate your CDS, running EDGBKUP with BACKUP(REORG) to reorganize the CDS before you enable extended bin support could reduce the amount of VSAM CI/CA splits during inventory management as the volume records increase over time.
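As a worked example (the record counts are purely illustrative), an installation with 100,000 volume records and 20,000 bin records would need at most (100,000 x 64) + (20,000 x 24) = 6,400,000 + 480,000 = 6,880,000 bytes, or roughly 6.6 MB of additional CDS space.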

DFSMSrmm journal growth


With extended bin support enabled, larger CDS records for volumes and bins will also result in larger journal records. In addition to the growth due to the increased CDS records, DFSMSrmm writes an additional 0.3 KB to the journal for each confirm of a move into, and for each start of a move from, a bin-managed storage location. When running inventory management with the DSTORE(INSEQUENCE) parameter, DFSMSrmm could also write an additional 1.6 KB to the journal for each start of a move into a bin-managed storage location.


The journal data set MUST be large enough that it does not fill up during inventory management processing. A summary of journal record growth is listed here:
- Each confirm move into a bin requires an additional 0.3 KB.
- Each start move from a bin requires an additional 0.3 KB.

Also, during DSTORE(INSEQUENCE):
- Each start move into a bin requires an additional 1.6 KB.

6.2.9 DSTORE by location


The ability to perform storage location management (DSTORE) by location was introduced with APAR OW47993.

Storage location management


Storage location management is part of the inventory management functions. It is used to set the destination for the volume and optionally assigns the exact shelf location to be used. Storage location management processing sets required volume destinations based on the results of other inventory management functions. You must have volume destinations determined before running storage location management. To determine the destinations, you can run vital record processing (VRSEL) or use the RMM CHANGEVOLUME subcommand for manual movement. You can override automatic vital record specification location selection by using the RMM CHANGEVOLUME subcommand. When running storage location management, a shelf location, known as the bin number, is optionally set if volumes are moving to and among shelf-managed storage locations. DFSMSrmm does not assign a bin number for volumes moving to a system-managed tape library destination. DFSMSrmm uses the in-transit status to indicate that a volume does not reside in a system-managed library; volumes in system-managed libraries are marked as being in-transit only after you eject them following storage location management processing. We recommend that, after successful storage location processing, you schedule a job or job step to eject all volumes.

Running DSTORE by location


Storage location management can be performed for all the locations at the same time or for specific locations. This function, known as DSTORE by location, is useful when you would like to physically move volumes to and from different locations on different days.


You can select to perform DSTORE by location, specifying:


DSTORE(LOCATION(from_location:to_location,...))

In this statement, from_location:to_location is a pair of location names separated by a colon. The from_location is the current location a volume should move from. The to_location is the name of the required location where a volume should move to. If you omit to_location, DFSMSrmm uses * as the default. You can specify 1 to 8 pairs of from_location:to_location names. DSTORE processing will then be performed for a volume if at least one of the location pairs matches the volume's current location and destination. The from_location and to_location names can be specified in one of the following ways:
Specific:

A specific location name is 1 to 8 characters. The location names you specify are not validated against the DFSMSrmm LOCDEF entries or the names of SMS libraries.
Example 1: SHELF. This means just this one location is included.
Generic:

The location names can be specified in one of the following ways:
- All location names can be specified using a single asterisk (*).
  Example 2: *. This means all locations are included.
- Use an asterisk to specify all locations that begin or end with specific characters.
  Example 3: ATL*. This means all location names starting with the characters ATL are included.
- Use the percent sign (%) in the location name to replace a single character. Up to eight percent signs can be used in a location name mask.
  Example 4: ABC%%%. This means all locations starting with ABC and with any other characters, up to a total of a 6 character name.
- Use a combination of the asterisk and the percent sign.
  Example 5: *AB%

Sample jobs for performing inventory management functions are provided in SAMPLIB members EDGJDHKP and EDGJWHKP.
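As an illustration, a DSTORE-by-location run of EDGHSKP might be coded as follows. The location pair and data set name are examples only, and the SAMPLIB jobs EDGJDHKP and EDGJWHKP mentioned above should be used as the real starting point for the full set of DD statements.

//DSTORE   EXEC PGM=EDGHSKP,
//         PARM='DSTORE(LOCATION(SHELF:VAULT1))'
//MESSAGE  DD DISP=SHR,DSN=RMM.HSKP.MESSAGES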


6.2.10 Extended extract file


The extended extract file was introduced with APAR OW47651. This extended extract file is used by the new report generator function; for details, see Section 6.2.11, Report generator. In order to produce the extended extract file, you must change your EDGHSKP job to specify a DD name of XREPTEXT. You can still produce the old extract file by leaving the DD name as REPTEXT. The difference between the extract files is that the extended extract file contains extended records in addition to all the other records. The extended record combines data set and volume information into a single record. The report extract process now uses internal sorts to simplify the report JCL EDGJRPT. This JCL is now a single-step job with reports enabled via DD statements; it also uses internal sorts via the EDGRRPTE REXX EXEC. Once you apply this APAR, you must use the new EDGJRPT JCL shipped in SAMPLIB, or modify existing JCL to perform a similar function. The updated EDGRRPTE can process existing extract files, or the new extract file with the extended records. The exec has a new method of requesting reports and specifying options. The previous version of EDGRRPTE cannot process the new parameters passed to it and should no longer be used. Report extract processing can now occur while other inventory management functions are in progress. A SYSTEMS ENQ is done on the EXTRACT and MESSAGE data set names. You can serialize reporting jobs using the same extract data set name by using a dummy data set with DISP=OLD. A new RACF FACILITY class profile, STGADMIN.EDG.HOUSEKEEP.RPTEXT, is used to protect report extract processing separately from other inventory functions.
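For example, the change can be as small as the following DD statement in the report extract step of your EDGHSKP job; the data set name and space values are illustrative only.

//XREPTEXT DD DSN=RMM.HSKP.XREPTEXT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)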

Migration considerations
Once installed, you must use the sample JCL that was shipped in SAMPLIB as EDGJRPT, or update any existing JCL that calls the REXX EXEC EDGRRPTE. When running in a sysplex with mixed levels, you should ensure that the housekeeping functions, including report generation, occur on the system with the highest level of DFSMSrmm.

6.2.11 Report generator


The DFSMSrmm report generator is a new ISPF application introduced with APAR OW47967, that you can use to create reports.


The report generator allows you to create reports using sequential data sets as input. You must specify mappings of the records in the input data set for the report generator to be able to pick out information in the input data set. You can create a DFSMSrmm extract data set using the EDGHSKP utility to create an input file for the report generator. If you use the extract data set as input, DFSMSrmm provides mapping macros for the records in the extract data set. DFSMSrmm provides report types that associate the input file with the needed mapping macros. The default version of the report generator uses DFSORT ICETOOL as the reporting tool to create the reports. A sample reporting tool using SYNCSORT is also provided. If you want to use other types of input data sets or other reporting tools, you can modify the report generator to accomplish this. You must also provide mapping of the input data set records. There are no special requirements for using the new report generator, only to have DFSORT installed if you want to use the default reporting tool and input data set.

Setting up the report generator


You need to define up to three report definition libraries:
- The product library. This is the library that ships with the product. The default is SYS1.SAMPLIB.
- The installation library. This is a library the system programmer or storage administrator provides for the whole installation. When the system programmer or librarian creates reports for an installation, they need to specify the installation library name as their user library, since updates are only made available in the library defined as the user library. Once the report types, definitions, and tools have been customized as required in your installation, the system programmer uses this user library as the installation library. The default is empty.
- The user library. This library can be specified by the user for his own reports. Updates are only performed in the user library. The default is userid.REPORT.LIB.


In addition, a JCL library to save the generated report JCL can be specified. The default library name is userid.REPORT.JCL. The default libraries are defined in the EDGRMAIN exec; refer to Implementation steps on page 143 for more information. All four libraries must be partitioned data sets with fixed 80 byte records. If the two user libraries are not predefined, DFSMSrmm will allocate them automatically with a primary and secondary space of 10 tracks and 50 directory blocks. All data sets need to be specified in normal ISPF convention, fully qualified with single quotes, or without quotes. If the data set name is specified without quotes, then the data set name is automatically expanded to the fully qualified name using the TSO PREFIX value and enclosed within single quotes. If the user has specified NOPREFIX in the ISPF profile, then the RACF user ID will be used as the high-level qualifier. You can select from predefined report types and report definitions or create your own report types and definitions. To specify the four libraries needed for the report generator, you can use the ISPF panels. The library names are initially set to the values defined in the EDGRMAIN exec. You can change the names if you want. If you select option 0 (OPTIONS) in the Removable Media Manager primary menu, you receive the Dialog Options Menu panel shown in Figure 6-4.

Panel  Help
------------------------------------------------------------------------------
                        DFSMSrmm Dialog Options Menu
Option ===>

1  USER    - Specify processing options
2  SORT    - Specify list sort options
3  REPORT  - Specify report options

Enter selected option or END command. For more info., enter HELP or PF1.

5694-A01 (C) COPYRIGHT 1993,2002 IBM CORPORATION

Figure 6-4 Dialog Options Menu panel


From this panel, select option 3 (REPORT), and you get the Report Options panel, Figure 6-5. In this panel you enter the name of the four libraries required.

Panel  Help
------------------------------------------------------------------------------
                           DFSMSrmm Report Options
Command ===>

Report definition libraries:
  User . . . . . . . . . . . 'MHLRES2.REPORT.LIB'
  Installation . . . . . . . 'RMM.REPORT.LIB'
  Product  . . . . . . . . . 'SYS1.SAMPLIB'

User report JCL library  . . 'MHLRES2.REPORT.JCL'

DFSMSrmm allocates user libraries if they do not exist.

Figure 6-5 Report Options panel

Implementation steps
There are a number of steps to follow before being able to use the report generator, and these steps will vary depending on the type of user.

Storage administrator
These steps are for the storage administrator:
1. In the Report Options panel, specify the installation library to be used by your installation as your user library, and allocate it manually.
2. Specify the JCL library and the product library. The product library by default is SYS1.SAMPLIB.
3. Give the necessary users READ authority to the installation and product libraries.
4. Select the Report Types panel and add or change the report types shipped with the product, and set them up for your users.
5. Select the Report Definition panel and add or change the reports shipped with the product, and set them up for your users.
6. Define the installation library name, as well as the product library name if it is not SYS1.SAMPLIB, in EXEC EDGRMAIN.


   You may want to consider leaving the Product library name blank, because the RMM dialog searches the entire library when searching for reports. If you do this, you need to copy the IBM-provided samples from SAMPLIB to your installation library (a sample IEBCOPY job follows this list).
7. Update the default naming conventions for the user library name and the JCL library name in EXEC EDGRMAIN, if necessary.
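The IBM-supplied members can be copied from SYS1.SAMPLIB into the installation library with IEBCOPY. This is a minimal sketch; the installation library name and the list of sample members are illustrative, so select the EDGG* members that apply to your installation.

//COPYRPT  JOB (ACCT),'COPY RMM SAMPLES',CLASS=A,MSGCLASS=X
//COPY     EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SAMPLIB  DD DISP=SHR,DSN=SYS1.SAMPLIB
//INSTLIB  DD DISP=SHR,DSN=RMM.REPORT.LIB
//SYSIN    DD *
  COPY OUTDD=INSTLIB,INDD=SAMPLIB
  SELECT MEMBER=(EDGGRTD,EDGGTOOL,EDGGR01,EDGGR02,EDGGR03)
/*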

Librarian and general user


These steps are for the librarian and general user:
1. Verify the user library name in the Report Options panel and allocate the library manually, or let it be allocated dynamically.
2. Verify the JCL library and specify the product and installation libraries. The names may already be filled in by the storage administrator.
3. Select the Report Types panel and add or change report types shipped with the product or provided by installation support, if needed.
4. Select the Report Definition panel and add or change reports shipped with the product or provided by installation support, if needed.
5. Fill out the job card in the DFSMSrmm Options panel.
6. Create the report JCL to be submitted, submit it, and get the report.

The report type support


The report type contains the basic information you need for identifying the records for reporting. The report type definitions for the DFSMSrmm-specific data sets are provided with the product in SYS1.SAMPLIB(EDGGRTD) and can be copied and changed by the appropriate person. The member name EDGGRTD is a reserved name and is used in all three libraries (product, installation, and user) to store report types. Updates to the EDGGRTD member are only performed in the user library.

The report generator application reads the information from all three libraries, if defined, starting with the user library, then the installation library, and then the product library. The information is presented to the user in an ISPF table; DFSMSrmm does not display duplicate entries. DFSMSrmm stores added and changed entries in the EDGGRTD member of the user library. Changes to entries that come from the installation and product libraries are only performed in the ISPF table, not in those libraries themselves. On exit from the ISPF table, all entries that were added, changed, or read from the user library are written to the user library member.


Report types are only used when a new report definition is created. Once the report definition has been created, all report type information is copied to the report definition; changes in the report types are not reflected in existing report definitions. The report type contains the following information:

The report type name: This is the unique identifier of all the report types.

The report type description: This is a required field.

The name of the associated macros: If a record is mapped by the concatenation of more than one macro (for example, DFSMSrmm SMF records have two macros: EDGSMFAR for the SMF header and EDGSVREC for the DFSMSrmm SMF volume record), then you can specify up to five macros. The macro structures are concatenated in the sequence they are specified. Macros must be defined in ASSEMBLER. One macro name is required.

The name of the macro library: All macros need to be defined in the same library. This is a required field.

The report input data set: This is the data set containing the input records for the report, mapped by the macro definition. If you are using GDGs or always use the same data set name, the data set name can be specified here. If you plan to use different input data set names, you should leave this field empty, since you will be prompted for the name in any case before the report JCL is created. The input data set must be a sequential file. This is an optional field.

The report creation information: This is the creation timestamp and user ID. This information is created automatically.

The report last change information: This is the last change timestamp and user ID. This information is created automatically.


The record selection criteria: Record selection criteria specify the criteria for selecting records from the input data set in case different record types exist. As with selection of records in the report definition dialog, you can specify criteria for the different fields. Refer to The report definition support on page 147 for more details about the selection criteria. These criteria are not displayed in the ISPF table. You have to select the entry and then you can specify the criteria. The selection criteria are passed to the report definition and can be changed there as well.

The reporting tool support


The report generator allows the use of different reporting tools. The selection of the fields and criteria in the report definition is independent of the reporting tool used. The intention of the report generator is to create JCL that can be executed in batch; no support is provided for reports running in the foreground.

The creation of the report JCL requires an EXEC member containing the code to read the report definition member and to write the report JCL. Optionally, a skeleton library can be used. With this approach you can create your own report JCL for any type of reporting tool (for example, SAS or SYNCTOOL).

The list of reporting tools is provided with the product in SYS1.SAMPLIB(EDGGTOOL). The reporting tools list can be modified and stored in the installation library for the entire installation, or stored in a user library for private use. The member EDGGTOOL is a reserved name and is used in all three libraries, similar to the report type member EDGGRTD. The reporting tool definitions are presented to the user in an ISPF table; duplicate entries are ignored. Added or changed entries are stored in the EDGGTOOL member of the user library.

The report generator copies information about the contents of the product, installation, and user libraries into an ISPF table that remains for a session. When the user requests that a library member be deleted, the report generator deletes the entry from the ISPF table, not from the installation or product library. The changes that the user makes to the ISPF table are reflected in the user library member.

The report generator provides support for DFSORT ICETOOL. The reporting tool EXEC needs to be in a library defined in SYSPROC or SYSEXEC.
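To give an idea of the kind of batch job a reporting tool ultimately runs, here is a minimal DFSORT ICETOOL sketch. It is not the JCL that the report generator actually writes; the input data set name, the field positions, and the headers are illustrative only and would come from your report definition and record mappings.

//RMMRPT   JOB (ACCT),'RMM REPORT',CLASS=A,MSGCLASS=X
//REPORT   EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//EXTRACT  DD DISP=SHR,DSN=RMM.EXTRACT         hypothetical extract data set
//RPTOUT   DD SYSOUT=*
//TOOLIN   DD *
* List two (made-up) character fields from each extract record
  DISPLAY FROM(EXTRACT) LIST(RPTOUT) -
          TITLE('Scratch volumes') DATE -
          HEADER('Volume') ON(1,6,CH) -
          HEADER('Location') ON(60,8,CH)
/*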


The reporting tool definition contains the following information:

The reporting tool EXEC name: The reporting tool EXEC is the name of the member that creates the reporting JCL out of the report definition. This is the unique identifier of all reporting tools.

The reporting tool description: The reporting tool description is the synonym of the reporting tool (for example, ICETOOL, SYNCTOOL, SAS). This is a required field.

The column space: This is the number of spaces between the report columns. The value depends on the reporting tool being used. It is only used for the calculation of the report width. This is a required field.

The report creation information: This is the creation timestamp and user ID, and is generated automatically.

The report last change information: This is the last change timestamp and user ID, and is generated automatically.

The report definition support


The report definition allows the definition of individual reports: the report header and footer, the fields, the criteria, the sort order, and the grouping of records. Each report definition is stored in a separate member in one of the three report libraries (user, installation, product); therefore, the report definition name can have at most eight characters. The names EDGGRTD and EDGGTOOL are reserved and cannot be used for report definitions.

To create a report definition, a report type and a reporting tool are required. The information copied from the report type (the macro used, the macro library, and any record selection criteria specified) is visible to the user and can be changed. The reporting tool can be changed at any time.

Using the report generator


The ISPF dialog has been changed to include the new report generator options. The new REPORT option is included in the following panels:

Dialog Options Menu panel (Figure 6-4 on page 142).


Command Menu panel (Figure 6-6).

 Panel  Help
 ------------------------------------------------------------------------------
                       DFSMSrmm Command Menu - z/OS V1R3
 Option ===>

 0  OPTIONS   Specify dialog options and defaults
 1  VOLUME    Volume commands
 2  RACK      Rack and bin commands
 3  DATA SET  Data set commands
 4  OWNER     Owner commands
 5  PRODUCT   Product commands
 6  VRS       Vital record specifications
 7  CONTROL   Display system control information
 R  REPORT    Report generator

Enter selected option or END command. For more info., enter HELP or PF1.

5694-A01 (C) COPYRIGHT 1993,2002 IBM CORPORATION

Figure 6-6 Command Menu panel

User Menu panel (Figure 6-7).

 Panel  Help
 ------------------------------------------------------------------------------
                         DFSMSrmm User Menu - z/OS V1R3
 Option ===>

 0  OPTIONS   Specify dialog options and defaults
 1  VOLUME    Display list of volumes
 2  DATA SET  Display list of data sets
 3  PRODUCTS  Display list of products
 4  OWNER     Display or change owner information
 5  REQUEST   Request a new volume
 6  RELEASE   Release an owned volume
 R  REPORT    Work with reports

Enter selected option or END command. For more info., enter HELP or PF1.

5694-A01 (C) COPYRIGHT 1993,2002 IBM CORPORATION

Figure 6-7 User Menu panel


Librarian Menu panel (Figure 6-8).

 Panel  Help
 ------------------------------------------------------------------------------
                       DFSMSrmm Librarian Menu - z/OS V1R3
 Option ===>

 0  OPTIONS   Specify dialog options and defaults
 1  VOLUME    Display or change volume information
 2  ADDPP     Add a product
 3  PRODUCTS  Search for products
 4  OWNER     Display or change owner information
 5  RACKS     Manipulate library racks and storage location bins
 6  ADDVOL    Add a volume to the library
 7  SCRATCH   Add SCRATCH volumes to the library
 8  RELEASE   Release volumes
 9  CONFIRM   Confirm librarian/operator actions
 A  REQUEST   Assign a user volume to an owner
 B  STACKED   Add stacked volumes
 R  REPORT    Work with reports

 Enter selected option or END command. For more info., enter HELP or PF1.

5694-A01 (C) COPYRIGHT 1993,2002 IBM CORPORATION

Figure 6-8 Librarian Menu panel

From the Command Menu panel, or by using the REPORT primary command within the RMM dialog, you can reach the Report Generator panel (Figure 6-9).

 Panel  Help
 ------------------------------------------------------------------------------
                           DFSMSrmm Report Generator
 Option ===>

 0  OPTIONS         Specify dialog options and defaults
 1  REPORT          Work with reports
 2  REPORT TYPE     Work with report types
 3  REPORTING TOOL  Work with reporting tools

Enter selected option or END command. For more info., enter HELP or PF1.

5694-A01 (C) COPYRIGHT 1993,2002 IBM CORPORATION

Figure 6-9 Report Generator panel


From the Dialog Options Menu panel (Figure 6-4 on page 142), selecting option 3 (REPORT), or from the Report Generator panel (Figure 6-9), selecting option 0 (OPTIONS), you can reach the Report Options panel, Figure 6-5 on page 143. In this panel you can change the names of the different libraries needed for the report generator.

Working with report definitions


In the Report Generator panel (Figure 6-9), select option 1 (REPORT); or from the User Menu panel (Figure 6-7 on page 148) or the Librarian Menu panel (Figure 6-8 on page 149), select option R (REPORT). You then reach the Report Definition Search panel (Figure 6-10).

 Panel  Help
 ------------------------------------------------------------------------------
                       DFSMSrmm Report Definition Search
 Command ===>

 Report name . .                May be generic. Leave blank for all reports.
 User id . . . .                Leave blank for all user ids.

 Libraries (enter S):           Select one or more library.
   User    Installation    Product     Default are all defined libraries.

 The following line commands will be available when the list is displayed:
  A - Add a new report definition     D - Delete a report definition
  G - Generate and save the JCL       J - Edit and manually submit the JCL
  N - Copy a report definition        S - Display or change the report definition
  T - Select a reporting tool

Figure 6-10 Report Definition Search panel

From this panel you can enter a Report name to search for report definitions by name. The report definition name is the name of a member in one of the report definition libraries. You can also use the User ID fields to search for report definitions updated by a specific user. Finally, you can select one or more of the user, installation or product libraries among the three specified.


You can also leave these fields blank to get a list of all available report definitions. In Figure 6-11 you can see an example Report Definitions panel with a list of all the IBM supplied reports.

 Panel  Help
 ------------------------------------------------------------------------------
 DFSMSrmm Report Definitions                                Row 1 to 18 of 18
 Command ===>                                               Scroll ===> PAGE

 The following line commands are valid: A,D,G,J,N,S, and T

 S Name      Report title                    Report type                User id
 - --------  ------------------------------  -------------------------  -------
 S EDGGAUD1  SMF Audit of Volumes by Volser  SMF Records for Volumes    RMM
   EDGGAUD2  SMF Audit of Volume by Rack     SMF Records for Volumes    RMM
   EDGGR01   Scratch tapes by volume serial  Extended Extract Records   RMM
   EDGGR02   List of SCRATCH Volumes by Dat  Extended Extract Records   RMM
   EDGGR03   Inventory List by Volume Seria  Extended Extract Records   RMM
   EDGGR04   Inventory List by Dataset Name  Extended Extract Records   RMM
   EDGGR06   Inventory of Volumes by Locati  Extended Extract Records   RMM
   EDGGR07   Inventory of Dataset by Locati  Extended Extract Records   RMM
   EDGGR08   Inventory of Bin by Location    Extended Extract Records   RMM
   EDGGR09   Datasets in Loan Location       Extended Extract Records   RMM
   EDGGR10   Volumes in Loan Location        Extended Extract Records   RMM
   EDGGR11   List MultiVolume and MultiFile  Extended Extract Records   RMM
   EDGGR12   Movement Report by Dataset      Extended Extract Records   RMM
   EDGGR13   Movement Report by Bin          Extended Extract Records   RMM
   EDGGR14   Movement Report by Volume Seri  Extended Extract Records   RMM
   EDGGR15   Volume Inventory Including Vol  Extended Extract Records   RMM
   EDGGSEC1  Report of Accesses to Secure V  SMF Security Records       RMM

Figure 6-11 Report Definitions panel

These are the available line commands:


A  Add a new report definition
D  Delete a report definition
G  Generate and save a report JCL
J  Edit and manually submit a report JCL
N  Copy a report definition
S  Display or change a report definition
T  Select a reporting tool

You can create a new report definition by selecting option A in the Report Definitions panel. You can also create a new report definition by copying an existing one, by selecting option N in the Report Definitions panel. In both cases, you receive a screen with the following prompt for a new report name.
Enter the report name . . . .


When you select option A, then you must select the report type (Figure 6-12) and the reporting tool to use in the new report definition (Figure 6-13).

 Panel  Help
 ----------------------------------------------------------
                Select Report Type              Row 1 to 12 of 17
 Command ===>                                   Scroll ===> PAGE

 S  Report type                     Name
 -  ------------------------------  --------
    Extended Extract Records        EDGRXEXT
    Extract Records for Bins        EDGRSEXT
    Extract Records for Data Sets   EDGRDEXT
    Extract Records for Owners      EDGROEXT
    Extract Records for Products    EDGRPEXT
    Extract Records for Racks       EDGRREXT
    Extract Records for Volumes     EDGRVEXT
    Extract Records for VRSs        EDGRKEXT
    HSKP ACTIVITY file records      EDGACTRC
    SMF Records for Bins            EDGSSREC
    SMF Records for Data Sets       EDGSDREC
    SMF Records for Owners          EDGSOREC

Figure 6-12 Select Report Type panel

 Panel  Help
 ----------------------------------------------------------
                Select Reporting Tool            Row 1 to 2 of 2
 Command ===>                                    Scroll ===> PAGE

 S  Reporting tool
 -  ------------------------------
    ICETOOL
    SYNCTOOL
 ********************* Bottom of data **********************

Figure 6-13 Select Report Tool panel

The next step is to select the criteria fields for creating the report. In the Report Definition panel, Figure 6-14, you define the header, the footer, the reporting tool, and the fields to be reported.


 Panel  Help
 ------------------------------------------------------------------------------
 DFSMSrmm Report Definition - SCRATCH                      Row 1 to 18 of 171
 Command ===>                                              Scroll ===> PAGE

 Report title . . . Scratch volumes
 Report footer . .  ITSO
 Reporting tool . : ICETOOL                                Report width: 168

 Use END to save changes, NOSAVE to ignore
 Select a field name with S to specify a field selection criterion

 S CO SO Field name            Column header text                 CW Len Typ
 - -- -- --------------------  ---------------------------------- -- --- ---
   1  1A XVVOLSER              Volume serial number               20   6  C
   2     XVMDMVID              Multi-dataset multi-volume id      29   8  C
   3     XVUSE                 Volume use count                   16   4  C
   4     XVSTORID              Current location name              21   8  C
   5     XVOWNID               Volume owner userid                19   8  C
   6     XVLRDDAT              Date volume last read              21  10  C
   7     XVLWTDAT              Date volume last written           24  10  C
         RXTYPE                Record type - C'X'                 18   1  C
         XVPVOL                Previous volume in sequence        27   6  C
         XVNVOL                Next volume in sequence            23   6  C
         XVCRDATE              Create date of volume record       28  10  C
         XVCRTIME              Create time volume record (hhmms   32   6  C
         XVCRSID               Create system id of volume recor   32   8  C
         XVLCDATE              Last change date of volume recor   32  10  C
         XVLCTIME              Last change time of volume recor   32   6  C
         XVLCUID               Last change user id of volume      29   8  C
         XVLCSID               Last change system id of volume    31   8  C
         XVEXPDTO              Expiration date - original         27  10  C

Figure 6-14 Report Definition panel with criteria fields

The Report Definition panel displays a list of the fields in the record and any report criteria specified previously or with the report type. Use the S line command to select the fields for which you would like to view or change the selection criteria. Update the column header text as required for your reporting columns. You can also enter data in the panel to specify which fields are to be included in the report, sorted on, and used to group records onto a new page. The grouped fields are used as the primary sort key and to separate the records into pages. Once you have selected the criteria fields for your report, you get the Report Criteria panel, Figure 6-15, in which you can define the criteria for selecting records in your report.


 Panel  Help
 ------------------------------------------------------------------------------
 DFSMSrmm Report Criteria - SCRATCH                          Row 1 to 2 of 2
 Command ===>                                                Scroll ===> PAGE

 Report title : Scratch volumes
 Use END to save changes, NOSAVE to ignore
 The following line commands are valid: B,D,I,N,P,R, and T
 Comparison operators: EQ =, NE <>, GT >, GE >=, LT <, LE <=, IN, BW
 Conjunction: AND, OR, AND(, )AND

 S  Field name            Op  Compare value(s)                 Conj  Len  Typ
 -  --------------------  --  -------------------------------  ----  ---  ---
    RXTYPE                EQ  X                                         1   C
    XVVOLSER              EQ  8*                                        6   C
 ******************************* Bottom of data ********************************

Figure 6-15 Report Criteria panel

The available line commands are:


B  Bottom    Move this entry to the bottom
D  Delete    Delete this entry
I  Detail    Add or change information details
N  Next      Move this entry down by one
P  Previous  Move this entry up by one
R  Repeat    Repeat this entry
T  Top       Move this entry to the top

When you select option I in the previous panel, you get the Report Criteria Details panel, showing the details of the field you selected; see Figure 6-16.

                 DFSMSrmm Report Criteria Details - SCRATCH

 Field name . . . . . :  XVVOLSER
 Operation  . . . . . .  EQ
 Compare value(s)  . .   8*
 Compare value(s)  . .
 Conjunction  . . . . .
 Length . . . . . . . :  6
 Type . . . . . . . . :  C

Figure 6-16 Report Criteria Details panel


When creating a new report by copying an existing one, all the values from the old definition remain the same for the new one.

Generating reporting JCL


Once you have created your report definition, you need to generate the JCL for the report. When you select G from the Report Definitions panel, you receive the following screen prompting for the name of the data set containing the records to be reported.

 Input data set . . . 'RMM.EXTRACT'

 New data set name to be stored in the report definition . . . . .  N  (Y/N)

Figure 6-17 Report Definition panel prompt for input records

The input data set name should normally be the same as the data set specified on the XREPTEXT DD in the EDGHSKP JCL, or the data set containing the SMF extract records from EDGJSMFP. Once the JCL has been generated, it is saved into the user report JCL library specified on the Report Options panel (Figure 6-5 on page 143), and you can edit and submit it by selecting J from the Report Definitions panel. For further information, refer to z/OS V1R3.0 DFSMSrmm Reporting, SC26-7406.
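For reference, the extract data set that feeds these reports is produced by DFSMSrmm inventory management. The following is a minimal sketch only: the RPTEXT parameter and the MESSAGE DD name are assumptions about a typical EDGHSKP housekeeping run, and the data set names and space values are illustrative; verify the required parameters and DD statements against your DFSMSrmm documentation and existing housekeeping jobs.

//RMMXTR   JOB (ACCT),'RMM EXTRACT',CLASS=A,MSGCLASS=X
//* Create the DFSMSrmm extended report extract (assumed options)
//HSKP     EXEC PGM=EDGHSKP,PARM='RPTEXT'
//MESSAGE  DD DISP=OLD,DSN=RMM.HSKP.MESSAGES
//XREPTEXT DD DSN=RMM.EXTRACT,DISP=(NEW,CATLG),
//            SPACE=(CYL,(50,10)),UNIT=SYSALLDA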

6.2.12 Buffered tape mark support for A60 controller


Buffered tape mark support was introduced in OS/390 DFSMSrmm R10 with APAR OW49833. It improves applications' use of the Magstar A60 controller when writing multiple files on a tape, by removing the need to write a second tape mark at the end of a file when the next operation to the tape volume is a write and occurs within a short period of time.

DFSMSrmm only gets involved with buffered tape mark support if there is an error writing to a tape volume. DFSMSrmm is told by CLOSE processing that a file, or a number of files, could not be written completely. DFSMSrmm marks each of the affected files as closed by abend so that they avoid being retained by the normal retention and movement policies.


Buffered tape marks are requested by the application through the assembler DCBE macro option SYNC=NONE. This results in no synchronization of the controller buffer to the tape media when tape marks are written. Currently this option only has an effect on IBM 3590 MAGSTAR Tape Subsystems.

Buffered tape mark support allows multiple files to be written at streaming speed when the volume disposition leaves the tape positioned at the end of the file just created. If the device supports buffered tape marks, the OPEN, EOV, and CLOSE functions take advantage of it when writing; this can save several seconds of real time. If the device does not support buffered tape marks, this option has no effect. Similarly, this option has no effect on older level systems.

The system does not ensure that user data, tape marks, and data set labels are actually written to the tape media when transitioning to a new volume or when closing the data set. However, if the application ensures that the CLOSE for the last file written is issued with the REWIND or REREAD option, a synchronize failure results in the job abending and a message externalizing the number of lost blocks (the number of blocks written to the buffer but not to the tape media).


Chapter 7.

Advanced copy services enhancements


In this chapter we describe the changes introduced to the Advanced Copy Services component of z/OS V1R3 DFSMS, specifically the enhancements to extended remote copy (XRC).


7.1 Extended remote copy


In this section we provide a brief overview of Extended Remote Copy (XRC) and then discuss the enhancements made with the introduction of Multiple XRC and Coupled XRC.

7.1.1 XRC overview


Extended remote copy is a combined hardware and software solution to the problem of accurate and rapid disaster recovery. XRC can also be used to provide a DASD and workload migration solution, but here we discuss only the disaster recovery solution. XRC is designed for sites that match the following criteria:

They must maintain the highest levels of performance on their primary system.
They support extended distances between volume copies.
They can accept a gap of a few seconds between writes on the primary system and the subsequent write updates on the recovery system.

Maintaining data integrity becomes especially critical when a volume is updated by multiple applications, or when a data set exists on multiple volumes spread across multiple storage controllers. XRC's design strategy ensures that secondary updates are applied on a consistent basis across multiple storage controls. This update sequencing is necessary in order to avoid data integrity problems and potential data loss.

How extended remote copy works


XRC is implemented between cached storage subsystems and DFSMS/MVS host system software. With XRC, copies of updated data are automatically sent to the recovery system. This is done asynchronously to data updates on the primary system by the S/390 System Data Mover (SDM), with a minimal increase to DASD write response time at the application.

Two address spaces comprise the SDM: ANTAS000, which is the primary address space and is started at IPL time, and ANTAS001, which is started when the XSTART command is issued to start an XRC session. ANTAS000 handles all TSO and API calls for an XRC session.


A single XRC session can manage somewhere between 1000 and 3000 3390-3 volumes, depending on the workload characteristics: workloads with a high write rate are at the low end of the scale, and those with a high read-to-write ratio and a low write rate are at the higher end. For most workloads, a single XRC session can manage approximately 1500 3390-3 volumes. The limiting factors for a single XRC session are the single central processor (CP) speed and the amount of available storage. As processor speeds and storage capabilities increase, the number of volumes capable of being managed per XRC session also increases. While an XRC session could potentially support up to 3000 primary volumes, this limit may not be practical in your environment.

Two enhancements have been made to XRC to cater for larger environments, where more than 2000-3000 volumes, a large number of storage controllers, or both need to be copied using XRC for disaster recovery purposes. The CPU and the amount of storage available are still limiting factors, as are the workload characteristics. The two enhancements are:

Multiple XRC
Coupled XRC

Coupled XRC and Multiple XRC are standard features of z/OS V1R3 DFSMS. Both of these enhancements are available for earlier releases back to DFSMS 1.4 via APAR OW43316.

7.1.2 Multiple XRC


Multiple XRC (MXRC) is an enhancement that allows up to five XRC sessions per LPAR; previously you could only run one XRC session per LPAR. Given this enhancement, a single LPAR could potentially support a total of around 15000 primary volumes across its XRC sessions. Again, this limit may not be practical for your environment, and careful planning is required; see z/OS DFSMS Advanced Copy Services, SC35-0428-01, for planning details. These multiple sessions cannot be recovered to a consistent point in time.

How it works
Multiple XRC is implemented by running up to 6 address spaces for the SDM, ANTAS000 through ANTAS005. ANTAS000 is the primary address space and handles TSO commands and API requests that control XRC. This address space is started at IPL time.


ANTAS001 through ANTAS005 may or may not be present depending on the number of XRC sessions you have started using the XSTART command.

7.1.3 Configuring XRC or MXRC


There are three types of data sets needed in order to use XRC or MXRC. All three types are required when using XRC for disaster recovery purposes.

Journal data sets, hlq.XCOPY.session_id.JRNL01: The journal data set contains temporary user data that is created when records on the primary volume are changed or updated. Each journal data set must be a fixed-block SAM data set with the following attributes:
DCB=(RECFM=FB,LRECL=7680,BLKSIZE=7680,DSORG=PS)

A minimum of two journal data sets are required and you can allocate up to 16. Refer to z/OS DFSMS Advanced Copy Services, SC35-0428-01, for sizing information.

Control data set, hlq.XCOPY.session_id.CONTROL: The control data set contains consistent group information on the secondary volumes and the journal data set. It contains information necessary for recovery operations. The control data set acts as the table of contents for the session.
Example: The control data set keeps track of data written to secondary volumes, the location of unwritten data in the journal set, and which group to start recovery with.

The control data set must be a sequential data set. We recommend the following allocation:
DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS)

Refer to z/OS DFSMS Advanced Copy Services, SC35-0428-01, for sizing information.

State data set, hlq.XCOPY.session_id.STATE: The state data set contains the status of the XRC session and of the associated volumes that XRC is managing. The state data set is updated if an XADDPAIR, XDELPAIR, XSET, XSUSPEND, XRECOVER, or XEND command is issued, or whenever a volume state changes. Allocate the state data set on disk as an SMS-managed partitioned data set extended (PDSE) data set with the following attributes:
DCB=(RECFM=FB,LRECL=4096,BLKSIZE=4096,DSORG=PO),DSNTYPE=LIBRARY


Allocate ten tracks per storage control session and one track for each volume pair in the storage control session. Try to plan for expected future growth when you initially allocate the state data set. There is no harm in over-allocating this data set but it is inconvenient if you under-allocate and then have to re-size it once XRC has been implemented. The default for the HLQ is SYS1. If you choose to use a HLQ other than SYS1, then you must specify the HLQ keyword on the XSTART command. The session_id specified in the data set names must match the session_id keyword specified on the XSTART command.
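As an illustration, the three data set types for a session named SC63 could be pre-allocated with a job like the one below. This is a sketch only: the high-level qualifier, unit, storage class, and space figures are examples (the real sizes must come from the sizing guidance in z/OS DFSMS Advanced Copy Services), while the DCB attributes are the ones listed above.

//XRCALLOC JOB (ACCT),'ALLOC XRC DS',CLASS=A,MSGCLASS=X
//ALLOC    EXEC PGM=IEFBR14
//* Two journal data sets (fixed-block SAM)
//JRNL01   DD DSN=MHLRES2.XCOPY.SC63.JRNL01,DISP=(NEW,CATLG),
//            SPACE=(CYL,(100)),UNIT=SYSALLDA,
//            DCB=(RECFM=FB,LRECL=7680,BLKSIZE=7680,DSORG=PS)
//JRNL02   DD DSN=MHLRES2.XCOPY.SC63.JRNL02,DISP=(NEW,CATLG),
//            SPACE=(CYL,(100)),UNIT=SYSALLDA,
//            DCB=(RECFM=FB,LRECL=7680,BLKSIZE=7680,DSORG=PS)
//* Control data set
//CONTROL  DD DSN=MHLRES2.XCOPY.SC63.CONTROL,DISP=(NEW,CATLG),
//            SPACE=(CYL,(10)),UNIT=SYSALLDA,
//            DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS)
//* State data set (SMS-managed PDSE; storage class name is made up)
//STATE    DD DSN=MHLRES2.XCOPY.SC63.STATE,DISP=(NEW,CATLG),
//            SPACE=(TRK,(30,10)),DSNTYPE=LIBRARY,STORCLAS=SCXRC,
//            DCB=(RECFM=FB,LRECL=4096,BLKSIZE=4096,DSORG=PO)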

7.1.4 Coupling XRC


Coupled XRC (CXRC) allows multiple XRC sessions to be coupled together, both within and across LPARs. A maximum of 14 XRC sessions can be coupled together, and all volumes in a coupled session can be recovered to the same consistent time. As a result there is practically no limit to the number of DASD volumes or storage controllers that can be under CXRC control, which provides a hugely scalable disaster recovery solution.
Note: There is still the limit of five XRC sessions per LPAR.

Configuring CXRC
To configure CXRC, a master data set is required in addition to the three data set types required for each XRC session.

Master data set, hlq.XCOPY.msession.MASTER: The master data set ensures recoverable consistency among all XRC sessions contained within the coupled XRC system. The HLQ of the data set can be changed on the XCOUPLE ADD command using the MHLQ keyword; this HLQ does not have to be the same as that for the individual XRC sessions. The default is SYS1. The msession specified in the data set name must match the msession keyword specified on the XCOUPLE ADD command.

When XRC sessions are added to a CXRC configuration, the state data set for each XRC session contains additional information that identifies the master session it is coupled to, and is updated when an XCOUPLE command is issued.


Considerations for the master data set


All of the XRC sessions coupled in the master session continuously write to and read from the master data set. This access allows communication between hosts as data is copied from the primary site to the recovery site. The following points need to be considered when allocating your master data set:

Make the master data set cataloged and accessible to each host system that processes a coupled session, as well as to the system that processes the XRECOVER command.
Note: The master data set can reside on the same volume as the state and control data sets, but not on the same volume that contains the journal data sets.

Allocate the master data set as a physical sequential, non-striped data set, without defining secondary extents.
Note: The required size for the master data set is fixed at one cylinder. This size allows for 14 XRC sessions to be coupled.

Allocate the master data set with one cylinder primary space and zero cylinders secondary space as follows:
DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS),SPACE=(CYL,(1,0))

Allocate the master data set on a single disk device and pre-allocate the master data set size before use of the XCOUPLE command. Only the space that is allocated at the time the XCOUPLE ADD command is issued, will be available for XRC use. It is recommended that you place the master data set in a user catalog that contains only entries for the master data set.

Adding XRC sessions to CXRC


We have two systems, SC63 and SC64, in our sysplex. Each system has one active XRC session, called SC63 and SC64 respectively. Data set names for the SC63 session are MHLRES2.XCOPY.SC63.llq, where llq is JRNL01, JRNL02, STATE, or CONTROL. Data set names for the SC64 session are MHLRES2.XCOPY.SC64.llq, with the same low-level qualifiers.


Create the master data set


In order to create a CXRC session called COUPLED, we need to allocate and catalog the master data set on all systems involved, in this case SC63 and SC64. The master data set name is MHLRES2.XCOPY.COUPLED.MASTER. As noted above, this data set should be allocated using the following attributes and space requirements.
DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS),SPACE=(CYL,(1,0))
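A simple pre-allocation job for the master data set could look like the following sketch; the job card, unit, and volume serial are illustrative, while the DCB and SPACE values are the ones given above.

//MSTRALOC JOB (ACCT),'ALLOC CXRC MASTER',CLASS=A,MSGCLASS=X
//ALLOC    EXEC PGM=IEFBR14
//MASTER   DD DSN=MHLRES2.XCOPY.COUPLED.MASTER,DISP=(NEW,CATLG),
//            UNIT=SYSALLDA,VOL=SER=XRC001,
//            SPACE=(CYL,(1,0)),
//            DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS)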

Add the XRC sessions


Issue the XCOUPLE command for the SC63 XRC session from the SC63 system.
XCOUPLE SC63 ADD MSESSION(COUPLED) MHLQ(MHLRES2)

Issue the XCOUPLE command for the SC64 XRC session from the SC64 system.
XCOUPLE SC64 ADD MSESSION(COUPLED) MHLQ(MHLRES2)

Message ANTC8400I is displayed to indicate that an XRC session was added to a coupled XRC session.
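The XCOUPLE commands shown above are TSO commands; if you prefer to drive them from batch, a terminal monitor program job can be used. This is a sketch only; the job must run on the system that owns the session (here SC63, forced with a JES2 system affinity statement, which is an assumption about your JES setup), and the job card is illustrative.

//XCPLSC63 JOB (ACCT),'COUPLE SC63',CLASS=A,MSGCLASS=X
/*JOBPARM SYSAFF=SC63
//TSO      EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  XCOUPLE SC63 ADD MSESSION(COUPLED) MHLQ(MHLRES2)
/*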

Displaying CXRC session


The XQUERY command has the additional keywords MASTER and MHLQ. Only MASTER is required to display a CXRC session; MHLQ must be specified if you are not using the default MHLQ of SYS1. The results of an XQUERY MASTER command are shown in Figure 7-1. In this example, there are no volume pairs being managed by XRC session SC64.

ANTL8800I XQUERY COUPLED MASTER MHLQ(MHLRES2)
ANTQ8300I XQUERY STARTED FOR MSESSION(COUPLED) MHLQ(MHLRES2) 361
ANTQ8202I XQUERY MASTER REPORT - 002
ANTQ8302I SESSION  STA VOL INT CMD  JOURNAL DELTA      RCV/ADV DELTA
ANTQ8303I -------------------------------------------------------------
ANTQ8304I SC63     ACT     Y        =00:00:00.000000   =00:00:00.000000
ANTQ8304I SC64     ACT NOV N
ANTQ8305I TOTAL=2 ACT=2 SUS=0 END=0 ARV=0 RCV=0 UNK=0
ANTQ8308I MSESSION RECOVERABLE TIME(2002.094 19:59:58.857134)
ANTQ8309I INTERLOCKED=1 NON-INTERLOCKED=1
ANTQ8301I XQUERY MASTER REPORT COMPLETE FOR MSESSION(COUPLED)

Figure 7-1 XQUERY MASTER command results.


7.1.5 QUICKCOPY
XRC has also introduced a new keyword on the XADDPAIR function, which is used to define primary and secondary volume pairs. When QUICKCOPY is specified, XRC copies only the space actually allocated on the volume. The default on the XADDPAIR command is FULLCOPY, which ensures that the existing behavior is maintained. You can change the default by using the XSET COPY command.
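As a sketch, adding a volume pair with the new keyword to the SC63 session might look like the following, issued from a TSO session or a batch TMP job. The VOLUME(primary,secondary) form and the volume serials are assumptions for illustration only; check the XADDPAIR syntax in z/OS DFSMS Advanced Copy Services before using it.

XADDPAIR SC63 VOLUME(PRM001,SEC001) QUICKCOPY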

Use the QUICKCOPY option with caution


As a part of QUICKCOPY initial processing, XRC must issue a reserve for the primary volume. After the tracks that contain the allocated space for the volume have been identified, XRC issues a release during the initial synchronization phase. If access to the primary volume is through a channel extender and the connection to the primary volume is lost while XRC has the reserve, XRC will not be able to release the volume. Applications will not be able to access the primary volume should this occur.


Appendix A.

Record changes in z/OS V1R3 DFSMS


New functionality in z/OS V1R3 DFSMS introduces changes to SMF and DFSMShsm records in order to provide detailed information on your storage subsystem. In this appendix we provide information on the changes in this release that will enhance your reporting capability.


A.1 VSAM RLS SMF record changes


There are two changes to the SMF type 42 record in support of RLS coupling facility caching enhancements.

Subtype 15, the storage class summary record, has a new field added to provide the following information:
SMF42F00 - indicates if DFSMS greater than 4K CF caching is active

Subtype 16, the data set summary record, has new fields added to provide the following information:
SMF42GAJ - indicates DFSMS greater than 4K CF caching status
SMF42GAP - SMS data class value

A.2 System managed buffering SMF record changes


There are changes to the SMF type 64 record in support of SMB. Bits 5-7 of SMF64RSC, which were previously reserved, are used to give more information about Direct Optimization (DO).

A.3 Open/Close/EOV SMF record changes


A new SMF subtype is now produced: SMF type 42 subtype 10 records are written at the time of a volume selection failure because of insufficient space when allocating a data set. We have included the format of these records in Table A-1. This format is also included in MACLIB(IGWSMF).


Table A-1 Format of SMF type 42 subtype 10 records

Name

Type
Character Character Character Character Character

Length
8 8 8 8 44

Description
Jobname Program name Step name DD name Data set name

SMF42JBN SMF42PGN
SMF42STN SMF42DDN SMF42DSN SMF42SPQ SMF42RSP SMF42UNT SMF42VDC SMF42DCL SMF42DCN SMF42VMC SMF42MCL SMF42MCN SMF42VSC SMF42SCL SMF42SCN SMF42SGS SMF42SGL SMF42SGN

Fixed Character

31 2

Requested space quantity Unit of space quantity

Fixed Character

16 30

Length of data class name Data class name

Fixed Character

16 30

Length of management class name Management class name

Fixed Character

16 30

Length of storage class name Storage class name

Fixed Character

15 30

Length of storage group name Storage group name


A.4 OAM SMF record changes


OAM SMF record type 85 has been enhanced to show statistics regarding the second backup copy of objects.
Subtype 3: OSREQ RETRIEVE:

New processing flag to indicate VIEW=BACKUP2 specified.


Subtype 32: OSMC Storage Group Processing:

Number of second backup objects written to optical
Number of bytes of second backup object data written to optical
Number of second backup objects read from optical
Number of kbytes of second backup object data read from optical
Number of second backup objects deleted from optical
Number of kbytes of second backup object data deleted from optical
Number of second backup objects written to tape
Number of kbytes of second backup object data written to tape
Number of second backup objects read from tape
Number of kbytes of second backup object data read from tape
Number of second backup objects deleted from tape
Number of kbytes of second backup object data deleted from tape
Subtype 34: Volume Recovery Utility:

Number of second backup objects written to optical
Number of kbytes of second backup object data written to optical
Number of second backup objects read from optical
Number of kbytes of second backup object data read from optical
Number of second backup objects deleted from optical
Number of kbytes of second backup object data deleted from optical
Number of second backup objects written to tape
Number of kbytes of second backup object data written to tape
Number of second backup objects read from tape
Number of kbytes of second backup object data read from tape
Number of second backup objects deleted from tape
Number of kbytes of second backup object data deleted from tape
Subtype 36: Single Object Recovery:

New processing flags to indicate BACKUP1, BACKUP2 or neither was specified.


A.5 HSM FSR record changes


HSM functional statistics records (FSR) have been updated to reflect the implementation of the CRQ. The FSR records do not have a fixed SMF number; the record number is the value specified for SETSYS SMF plus one. If no value is specified, then by default FSR records are type 241. Only the added or changed values are included in Table 7-1. The full structure of the FSR records is available in DFSMShsm Implementation and Customization Guide, SC35-0418.
Table 7-1 Changes to the FSR records

Offsets    Type       Len  Name                        Description
131 (83)   Bitstring  1    FSRMFLGS                    Flags from MWE
           1... ....       FSRFRTRY                    When set to 1, the backup copy was made
                                                       during a retry, after the first try failed
                                                       because the data set was in use
           .1.. ....       FSRF_REMOTE                 When set to 1, this request completed
                                                       successfully on a remote system
           ..1. ....       FSRFPIGB                    When set to 1, request was completed by a
                                                       tape already mounted
           ...1 ....       FSRF_REMOTE_HOST_PROCESSED  When set to 1, remote host processed
           .... xxxx       *                           Reserved
290 (122)  Character  2    FSR_ORGNL_HID               Host ID that generated the request. Only
                                                       valid for recall requests.



Appendix B.

Maintenance information
Throughout this book there is recommended maintenance for the various components of DFSMS. This appendix contains the responder text for several of the APARs in order to provide you with the information you need to successfully implement the function in z/OS V1R3 DFSMS.


B.1 APAR II12431


This APAR documents defining data and index components for data sets after conversion to R1E0.

B.1.1 Error description


This APAR is intended to provide information to the user about the removal of IMBED and REPLICATE in DFSMS R1E0. This is a supplement to, but not a replacement for, the AMS Reference for Integrated Catalog Facility Guide (SC26-4906-00), the Managing Catalogs Guide (SC26-4914-00), and/or the Enhanced Catalog Sharing and Management redbook (SG24-5594). Please refer to these manuals first.

IMBED and REPLICATE are ignored on a DEFINE of a new data set under DFSMS R1E0 and above. This was done because, with the faster DASD and the improved cache controllers and procedures, IMBED no longer provides value for performance. Also, IMBED and REPLICATE are not supported for RLS or for extended format or compressed data sets. For compatibility reasons, IMBED and REPLICATE are still supported for the IMPORT and DFSMSdss RESTORE functions.

When Catalog ignores the IMBED parameter, the allocation size for the index component now has to accommodate both the sequence set index records and the high level records. Before, with IMBED, the sequence set was stored with the data, and the index allocation only needed to contain the high level index records, which are relatively few. For this reason, data sets with a small index allocation size, or ones with a zero secondary space, can have space problems once IMBED is removed. For these data sets, the user will need to make provisions to increase the index size.

There are no good formulas to calculate the amount of space required for the index component, because the number of index records depends on the size of the data set, the size of the data CA, and the size of the index records themselves. The AMS Reference for Integrated Catalog Facility states that space can be allocated at the cluster or data component level and Catalog will figure out what the index component will need:
1. If space is provided for the cluster, the system will divide the space between the data and index components.
2. If space is provided only for the data component, the system will determine how much more space is needed for the index component.


One recommendation is to make the primary space allocation of the index component (in tracks) equal to the number of cylinders the data set uses. This means that, for a data set using 200 cylinders, the index component should have a primary allocation of 200 tracks. Also, secondary space should always be defined. Large data sets could be defined with an index component space allocation of 2 cylinders with a smaller secondary space allocation.

PLEASE NOTE: REPLICATE is not ignored in the base code of DFSMS R150. OW44442 corrects this omission and should be applied as soon as possible so that IMBED and REPLICATE are treated together. Without OW44442, the IMBED/REPLICATE option will be treated as NOIMBED/REPLICATE, which requires more index space than either IMBED/REPLICATE or NOIMBED/NOREPLICATE.

For R1F0, after applying the PTF of APAR OW41955, the new return code IEC161I RC254 will be generated when OPEN for OUTPUT is done against a VSAM IMBED/REPLICATE/KEYRANGE data set. The associated sfi code means IMBED (001), KEYRANGE (002), and REPLICATE (003). The new RC254 with SFI 1, 2 and 3 is informational only and does not require any immediate action. It is intended as an aid to customers to identify data sets currently defined with IMBED, REPLICATE or KEYRANGE. If the message IEC161I rc254 is created, 00000000 will be returned in REG15, and no ACBERFLG value will be returned.

PLEASE NOTE: After converting (migrating) to R1F0, when a DEFINE CLUSTER contains the IMBED and/or REPLICATE keywords they will be ignored and the CLUSTER will be defined as NOIMBED and/or NOREPLICATE. Thus, message IEC161I rc254 will no longer be generated after the CLUSTER is re-DEFINEd under R1F0.

PLEASE NOTE: For further information regarding KEYRANGE data sets please see II12896, and WSC Flash10072, which may be found at:
http://www-1.ibm.com/servlet/support/manager?rs=0&rt=0&org=ats&doc=F9AD8FC0B58E4A6A852569F5004ADC21

You may also find this Web site by:
1. Go to http://www.ibm.com
2. Select Support & downloads
3. Type "Flash10072" (without the quotes) in the field Search for technical support by keyword(s):
4. Double-click on the title KEYRANGE Specification to be Ignored in Future Release of DFSMS


B.2 APAR II12896


This APAR documents DFSMShsm support for KEYRANGE data sets.

B.2.1 Problem conclusion


The value of keyranges as a feature of VSAM key-sequenced data sets has diminished significantly over the last few years with the introduction of new DASD cached controllers, improved SMS DASD performance parameters, and VSAM data striping. Because of this, it is IBM's intent to ignore the KEYRANGE specification on the IDCAMS DEFINE and IMPORT commands for any new data sets beginning in the z/OS V1R3 release of DFSMS. In accordance with this, DFSMShsm customers should be aware of the following:

DFSMShsm control data sets


For those customers that currently use keyrange data sets for their DFSMShsm control data sets (CDSes), it is strongly recommended that they begin planning to convert their CDSes to be multi-cluster non-keyrange. Chapter 3 of the DFSMShsm Implementation and Customization Guide, SH21-1078, contains information on multi-cluster non-keyrange control data sets.

A good time to convert the CDSes to be non-keyrange is during a planned reorganization of the CDSes. To do the conversion, simply remove the KEYRANGE keyword from the IDCAMS DEFINE statements used to define the multi-cluster CDSes, then follow your normal reorganization process. When DFSMShsm is restarted, since the VSAM keyranges are not present, DFSMShsm will dynamically calculate the key boundaries of each cluster. Use the QUERY CONTROLDATASETS command to view both the low and high keys that DFSMShsm calculated for each cluster.

If the keyranged multi-cluster CDSes are not converted to be non-keyranged before migrating to the z/OS V1R3 release of DFSMS, then UPDATEC must be used in CDS recovery situations. The Enhanced CDS Recovery procedure will not perform properly in z/OS V1R3 DFSMS if the CDSes still contain VSAM keyranges.

Datamover selection
DFSMShsm can use either DFSMSdss or DFSMShsm as the datamover when performing either space or availability management. DFSMSdss has been the default data mover since the introduction of DFSMS/MVS V1R1 and is being used in most DFSMShsm installations.


When DFSMShsm is the datamover, IDCAMS is invoked to manage VSAM data sets. Because of this, beginning with the next release of DFSMS, VSAM data sets with keyranges will have their keyranges removed during Recall and Recovery operations. To verify that your installation is not using DFSMShsm as the datamover, examine your DFSMShsm startup parmlib member for a patch to the Datamover Selection Table (DMVST). If this patch is being used, then DFSMShsm is being selected as the datamover. IBM recommends that only DFSMSdss be used as the datamover.

Migration
Attempts to migrate VSAM keyrange data sets with the DFSMShsm datamover will be failed. VSAM keyrange data sets must be migrated using the DFSMSdss datamover.

Recall
VSAM keyrange data sets that were previously migrated with the DFSMShsm datamover will be recalled, but the keyranges will be removed. A warning message will be issued to indicate this. If possible, the data set should be recalled on a lower level system. VSAM keyrange data sets that were previously migrated with the DFSMSdss datamover will be recalled with the keyranges intact.

Backup
Attempts to backup VSAM keyrange data sets with the DFSMShsm datamover will be allowed, but a warning message will be issued. The warning message will indicate that the keyranges will be removed if the data set is recovered.

Recovery
VSAM keyrange data sets that were previously backed up with the DFSMShsm datamover will be recovered, but the keyranges will be removed. A warning message will be issued to indicate this. If possible, the data set should be recovered on a lower level system.


VSAM keyrange data sets that were previously backed up with the DFSMSdss datamover will be recovered with the keyranges intact. Other DFSMShsm functions remain unchanged.

B.3 APAR OW53834


This APAR documents that update under HDZ11G0 of a GDG created prior to JDP2230 or OZ97150 can corrupt the catalog GDG base record.

B.3.1 Error description


GDG bases defined prior to the installation of JDP2230 or OZ97150 (Y2K support) did not account for a century indicator in the base record. If a GDG base of this type is altered by adding a GDS or rolling off a GDS under HDZ11G0 the catalog record for the GDG base will become corrupted. A DIAGNOSE of a catalog will indicate this corruption by the following messages:
IDC21363I THE FOLLOWING ENTRIES HAD ERRORS:
  gdgbasename (B) - REASON CODE: 2
  gdgbasename.G0001V00 (H) - REASON CODE: 28

The errors seen during processing of this GDG base are unpredictable, but may include MSGIGD07001I with RC14 RSN0 module IGG0CLED, or MSGIDC3009I RC24 RSN12. There is a method to help detect whether or not you have any GDGs susceptible to this problem, and a step you can take to correct those old-format GDGs to avoid the problem entirely:
1. If you know the date when JDP2230 or OZ97150 was installed on your system, you may run a LISTCAT CREATE(xxxx) GDG ALL, where 'xxxx' is the number of days that have passed since that date. This LISTCAT will only show those GDG bases that were defined prior to that date. If you have none, then this problem should not occur in your environment.
2. If you issue an IDCAMS ALTER of the expiration date of:
   a. All of your GDG bases, or
   b. Only those that are listed in step 1 above,
   the old record format will automatically be upgraded to the new format, and you will not be susceptible to this problem. The IDCAMS ALTER must be successful for this to occur. You may alter the expiration date to its current value; it is not necessary to alter it to a new date for this to correct the down-level record.


Note: You cannot alter GDG expiration dates in HDZ11G0, as that support has been removed. You must do the ALTER from a prior level system.

Problem summary
USERS AFFECTED: All releases using GDGs

Problem description:
Use of a GDG under HDZ11G0 may break the GDG catalog record, if the GDG base was created prior to installation of Y2K support (product FMID JDP2230 or APAR OZ97150 added support to Catalog, approximate date of ship was 5/86).

Recommendation
GDG catalog records can be corrupted if they were created prior to installation of JDP2230 or OZ97150 on the system, and those GDGs are accessed by a HDZ11G0 system. Also, changes are necessary on lower-level releases to ensure that expiration dates are no longer accepted or modified for GDG bases (HDZ11G0 removed support for expiration dates for GDGs, and this APAR includes toleration support for that change for other releases). Note that failure to install the PTFs on lower-level releases will not result in any catalog corruption, however the dates shown under HDZ11G0 for a GDG last alter date will be incorrect. This data is not currently used by any catalog functions but is made available for customer use. Incorrect dates in this field will not affect catalog operation, but any new user programs that extract and take action based on last alter dates of the GDG may make incorrect decisions if lower-level releases do not have this fix installed and they are used to alter the expiration dates of GDGs.

Problem conclusion
For the HDZ11G0 APAR, if a down-level (e.g. pre-Y2K) format GDG base record is encountered when adding or deleting a GDS, the old format cell will be upgraded to the new format as part of the update. This will prevent any breakage of the catalog record. For the other releases, this APAR fix prevents users from altering or setting the expiration of GDG bases, which is incompatible with the change in HDZ11G0. Beginning with HDZ11G0, expiration dates are no longer supported for GDG bases. Without this fix for the releases before HDZ11G0, users can alter the expiration date of a GDG, and it will incorrectly show up as the last alteration date if the GDG base is listed under HDZ11G0. If you attempt to alter the expiration date of a GDG base under HDZ11G0, it will fail with MSGIDC3009I RC60 RSN30. After installing this fix on releases prior to HDZ11G0, an attempt to alter the expiration date of a GDG base will fail for the same reason.



Glossary

A

ABARS. Aggregate backup and recovery support.

ABR. Aggregate backup and recovery record.

access method services. A multifunction service program that manages VSAM and non-VSAM data sets, as well as integrated catalog facility (ICF) catalogs. Access method services provides the following functions:
- Defines and allocates space for data sets and catalogs
- Converts indexed-sequential data sets to key-sequenced data sets
- Modifies data set attributes in the catalog
- Reorganizes data sets
- Facilitates data portability among operating systems
- Creates backup copies of data sets
- Assists in making inaccessible data sets accessible
- Lists the records of data sets and catalogs
- Defines and builds alternate indexes
- Converts CVOLs to ICF catalogs

accompany data set. In aggregate backup and recovery processing, a data set that is physically transported from the backup site to the recovery site instead of being copied to the aggregate data tape. It is cataloged during recovery.

accompany list. An optional list in the selection data set that identifies the accompany data sets.

ACDS. Active control data set.

ACS. Automatic class selection.

active control data set (ACDS). A VSAM linear data set that contains an SCDS that has been activated to control the storage management policy for the installation. When activating an SCDS, you determine which ACDS will hold the active configuration (if you have defined more than one ACDS). The ACDS is shared by each system that is using the same SMS configuration to manage storage.

active data. Data that is frequently accessed by users and that resides on level 0 volumes.

activity log. In DFSMShsm, a SYSOUT or data set on disk used to record activity and errors that occurred during DFSMShsm processing.

AG. Aggregate group.

aggregate backup. The process of copying the data sets and control information of a user-defined group of data sets so that they may be recovered later as an entity by an aggregate recovery process.

aggregate data sets. In aggregate backup and recovery processing, data sets that have been defined in an aggregate group as being related.


aggregate group. A Storage Management Subsystem construct that defines control information and identifies the data sets to be backed up by a specific aggregate backup.
aggregate recovery. The process of recovering a user-defined group of data sets that were backed up by aggregate backup.
ATL. Automated tape library.
audit. A DFSMShsm process that detects discrepancies between data set information in the VTOCs, the computing system catalog, the MCDS, BCDS, and OCDS.
authorized user. In DFSMShsm, the person or persons who are authorized through the DFSMShsm AUTH command to issue DFSMShsm system programmer, storage administrator, and operator commands.
automated tape library. A device consisting of robotic components, cartridge storage frames, tape subsystems, and controlling hardware and software, together with the set of volumes which reside in the library and may be mounted on the library tape drives.
automatic backup. In DFSMShsm, the process of automatically copying eligible data sets from DFSMShsm-managed volumes or migration volumes to backup volumes during a specified backup cycle.
automatic class selection (ACS) routine. A procedural set of ACS language statements. Based on a set of input variables, the ACS language statements generate the name of a predefined SMS class, or a list of names of predefined storage groups, for a data set.
automatic class selection (ACS). A mechanism for assigning SMS classes and storage groups.

automatic dump. In DFSMShsm, the process of using DFSMSdss to automatically do a full volume dump of all allocated space on DFSMShsm-managed volumes to designated tape dump volumes.
automatic interval migration. In DFSMShsm, automatic migration that occurs periodically when a threshold level of occupancy is reached or exceeded on a DFSMShsm-managed volume during a specified time interval. Data sets are moved from the volume, largest eligible data set first, until the low threshold of occupancy is reached.
automatic primary space management. In DFSMShsm, the process of automatically deleting expired data sets, deleting temporary data sets, releasing unused overallocated space, and migrating data sets from DFSMShsm-managed volumes.
automatic secondary space management. In DFSMShsm, the process of automatically deleting expired migrated data sets from the migration volumes, deleting expired records from the migration control data set, and migrating eligible data sets from level 1 volumes to level 2 volumes.
automatic space management. In DFSMShsm, includes automatic volume space management, automatic secondary space management, and automatic recall.
automatic volume space management. In DFSMShsm, includes automatic primary space management and automatic interval migration.
availability management. In DFSMShsm, the process of ensuring that a current version (backup copy) of the installation's data sets resides on tape or disk.


B
backup control data set (BCDS). A VSAM, key-sequenced data set that contains information about backup versions of data sets, backup volumes, dump volumes, and volumes under control of the backup and dump functions of DFSMShsm.
backup copy. In DFSMShsm, a copy of a data set that is kept for reference in case the original data set is destroyed.
backup cycle. In DFSMShsm, a period of days for which a pattern is used to specify the days in the cycle on which automatic backup is scheduled to take place.
backup frequency. In DFSMShsm, the number of days that must elapse since the last backup version of a data set was made until a changed data set is again eligible for backup.
backup version. Synonym for backup copy.
backup volume. A volume managed by DFSMShsm to which backup versions of data sets are written.
backup. In DFSMShsm, the process of copying a data set residing on a level 0 volume, a level 1 volume, or a volume not managed by DFSMShsm to a backup volume.
base configuration. The part of an SMS configuration that contains general storage management attributes, such as the default management class, default unit, and default device geometry. It also identifies the systems or system groups that an SMS configuration manages.

base sysplex. A base (or basic) sysplex is the set of one or more MVS systems that is given a cross-system coupling facility (XCF) name and in which the authorized programs can then use XCF coupling services. A base sysplex does not include a coupling facility.
basic catalog structure (BCS). The name of the catalog structure in the integrated catalog facility environment.
BCDS. Backup control data set.
BCS. Basic catalog structure.
C
CDS. Control data set.
CF. Coupling facility.
COMMDS. Communications data set.
communications data set (COMMDS). The primary means of communications among systems governed by a single SMS configuration. The COMMDS is a VSAM linear data set that contains the name of the ACDS and current utilization statistics for each system-managed volume, which helps balance space among systems running SMS.
compaction. In DFSMShsm, a method of compressing and encoding data that is migrated or backed up.
compress. To reduce the amount of storage required for a given data set by having the system replace identical words or phrases with a shorter token associated with the word or phrase.


compressed format. A particular type of extended-format data set specified with the COMPACTION parameter of data class. VSAM can compress individual records in a compressed-format data set. SAM can compress individual blocks in a compressed-format data set.
concurrent copy. A function to increase the accessibility of data by enabling you to make a consistent backup or copy of data concurrent with the usual application program processing.
construct. One of the following: data class, storage class, management class, storage group, aggregate group, base configuration.
control data set. (1) In DFSMShsm, one of three data sets (BCDS, MCDS, and OCDS) that contain records used in DFSMShsm processing.
coupling facility (CF). The hardware that provides high-speed caching, list processing, and locking functions in a Parallel Sysplex.
D
data class. A collection of allocation and space attributes, defined by the storage administrator, that are used to create a data set.
Data Facility Sort (DFSORT). An IBM licensed program that is a high-speed data processing utility. DFSORT provides an efficient and flexible way to handle sorting, merging, and copy operations, as well as providing versatile data manipulation at the record, field, and bit level.

Data Facility Storage Management Subsystem (DFSMS). An operating environment that helps automate and centralize the management of storage. To manage storage, SMS provides the storage administrator with control over data class, storage class, management class, storage group, and automatic class selection routine definitions.
device category. A storage device classification used by SMS. The device categories are as follows: SMS-managed disk, SMS-managed tape, non-SMS-managed disk, non-SMS-managed tape.
DFSMS. Data Facility Storage Management Subsystem.
DFSMSdfp. A DFSMS functional component or base element of z/OS, that provides functions for storage management, data management, program management, device management, and distributed data management.
DFSMSdss. A DFSMS functional component or base element of z/OS, used to copy, move, dump, and restore data sets or volumes.
DFSMShsm. A DFSMS functional component or base element of z/OS, used for backing up and recovering data, and managing space on volumes in the storage hierarchy.
DFSMShsm-managed volume. (1) A primary storage volume, which is defined to DFSMShsm but which does not belong to a storage group. (2) A volume in a storage group, which is using DFSMShsm automatic dump, migration, or backup services.


DFSMShsm-owned volume. A storage volume on which DFSMShsm stores backup versions, dump copies, or migrated data sets.
DFSMSrmm. A DFSMS functional component or base element of z/OS, that manages removable media.
disaster backup. A means to protect a computing system complex against data loss in the event of a disaster.
disaster recovery. A procedure for copying and storing an installation's essential business data in a secure location, and for recovering that data in the event of a catastrophic problem.
dummy storage group. A type of storage group that contains the serial numbers of volumes no longer connected to a system. Dummy storage groups allow existing JCL to function without having to be changed.
dump class. A set of characteristics that describes how volume dumps are managed by DFSMShsm.
duplexing. The process of writing two sets of identical records in order to create a second copy of data.
E
EA. Extended addressability.
esoteric unit name. A name used to define a group of devices having similar hardware characteristics, such as TAPE or SYSDA.

expiration. The process by which data sets or objects are identified for deletion because their expiration date or retention period has passed. On disk, data sets and objects are deleted. On tape, when all data sets have reached their expiration date, the tape volume is available for reuse.
extended addressability. The ability to create and access a VSAM data set that is greater than 4 GB in size. Extended addressability data sets must be allocated with DSNTYPE=EXT and EXTENDED ADDRESSABILITY=Y.
extended format. The format of a data set that has a data set name type of EXTENDED. The data set is structured logically the same as a data set that is not in extended format but the physical format is different.
extent reduction. In DFSMShsm, the releasing of unused space, reducing the number of extents, and compressing partitioned data sets.
F
filtering. The process of selecting data sets based on specified criteria. These criteria consist of fully or partially-qualified data set names or of certain data set characteristics.
FSR. Functional statistics record.
functional statistics record (FSR). A record that is created each time a DFSMShsm function is processed. It contains a log of system activity and is written to the system management facilities (SMF) data set.
G
GB. Gigabyte.


GDG. Generation data group.
GDS. Generation data set.
generation data group (GDG). A collection of data sets with the same base name, such as PAYROLL, that are kept in chronological order. Each data set is called a generation data set (GDS).
generic unit name. A name assigned to a class of devices with the same geometry (such as 3390).
global resource serialization (GRS). A component of z/OS used for serializing use of system resources and for converting hardware reserves on disk volumes to data set enqueues.
global scratch pool. A group of empty tapes that do not have unique serial numbers and are not known individually to DFSMShsm. The tapes are not associated with a specific device.
GRS. Global resource serialization.
H
hierarchical file system (HFS) data set. A data set that contains a POSIX-compliant file system, which is a collection of files and directories organized in a hierarchical structure, that can be accessed using z/OS UNIX System Services.
HMT. HSM Monitor/Tuner.
HSM complex (HSMplex). One or more z/OS images running DFSMShsm that share a common set of control data sets (MCDS, BCDS, OCDS, and Journal).

I
inactive data. Copies of active or low-activity data that reside on DFSMShsm-owned dump and incremental backup volumes.
incremental backup. In DFSMShsm, the process of copying a data set that has been opened for other than read-only access since the last backup version was created, and that has met the backup frequency criteria.
inline backup. The process of copying a specific data set to a migration level 1 volume from a batch environment. This process allows you to back up data sets in the middle of a job.
in-place conversion. The process of bringing a volume and the data sets it contains under the control of SMS without data movement, using DFSMSdss.
Interactive Storage Management Facility (ISMF). The interactive interface of DFSMS that allows users and storage administrators access to the storage management functions.
interval migration. In DFSMShsm, automatic migration that occurs when a threshold level of occupancy is reached or exceeded on a DFSMShsm-managed volume, during a specified time interval. Data sets are moved from the volume, largest eligible data set first, until the low threshold of occupancy is reached.
journal data set. In DFSMShsm, a sequential data set used by DFSMShsm for recovery of the MCDS, BCDS, and OCDS. The journal contains a duplicate of each record in the control data sets that has changed since the MCDS, BCDS, and OCDS were last backed up.


K
KB. Kilobyte; 1024 bytes.
level 0 volume. A volume that contains data sets directly accessible by the user. The volume may be either DFSMShsm-managed or non-DFSMShsm-managed.
level 1 volume. A volume owned by DFSMShsm containing data sets migrated from a level 0 volume.
level 2 volume. A volume under control of DFSMShsm containing data sets that migrated from a level 0 volume, from a level 1 volume, or from a volume not managed by DFSMShsm.
M
management class. A named collection of management attributes describing the retention, backup, and class transition characteristics for a group of objects in an object storage hierarchy.
manual tape library. Installation-defined set of tape drives defined as a logical unit together with the set of system-managed volumes which can be mounted on the drives. The IBM implementation includes one or more 3490 subsystems, each connected by a Library Attachment Facility to a processor running the Library Manager application, and a set of volumes, defined by the installation as part of the library, which resides in shelf storage located near the 3490 subsystems.
MB. Megabyte; 1,048,576 bytes.
MCB. BCDS data set record.
MCC. Backup version record.
MCD. MCDS data set record.

MCDS. Migration control data set.
MCT. Backup volume record.
MCV. Primary and migration volume record.
MEDIA2. Enhanced Capacity Cartridge System Tape.
MEDIA3. High Performance Cartridge Tape.
MEDIA4. Extended High Performance Cartridge Tape.
migration control data set (MCDS). In DFSMShsm, a VSAM key-sequenced data set that contains statistics records, control records, user records, records for data sets that have migrated, and records for volumes under migration control of DFSMShsm.
migration level 1. DFSMShsm-owned disk volumes that contain data sets migrated from primary storage volumes. The data can be compressed.
migration level 2. DFSMShsm-owned tape or disk volumes that contain data sets migrated from primary storage volumes or from migration level 1 volumes. The data can be compressed.
migration. The process of moving unused data to lower cost storage in order to make space for high-availability data. If you wish to use the data set, it must be recalled.
ML1. Migration level 1.
ML2. Migration level 2.
MTL. Manual tape library.


N
NaviQuest. A component of DFSMSdfp for implementing, verifying, and maintaining your SMS environment in batch mode. It provides batch testing and reporting capabilities that can be used to automatically create test cases in bulk, run many other storage management tasks in batch mode, and use supplied ACS code fragments as models when creating your own ACS routines.
non-DFSMShsm-managed volume. A volume not defined to DFSMShsm containing data sets that are directly accessible to users.
O
OAM. Object access method.
object access method (OAM). An access method that provides storage, retrieval, and storage hierarchy management for objects and provides storage and retrieval management for tape volumes contained in system-managed libraries.
object. A named byte stream having no specific format or record orientation.
OCDS. Offline control data set.

offline control data set (OCDS). In DFSMShsm, a VSAM, key-sequenced data set that contains information about tape backup volumes and tape migration level 2 volumes.
P
parallel sysplex. A sysplex with one or more coupling facilities, and defined by the COUPLExx members of SYS1.PARMLIB as being a parallel sysplex.
partitioned data set (PDS). A data set on direct access storage that is divided into partitions, called members, each of which can contain a program, part of a program, or data.
partitioned data set extended (PDSE). A system-managed data set that contains an indexed directory and members that are similar to the directory and members of partitioned data sets. A PDSE can be used instead of a partitioned data set.
PDS. Partitioned data set.
PDSE. Partitioned data set extended.
pool storage group. A type of storage group that contains system-managed disk volumes. Pool storage groups allow groups of volumes to be managed as a single entity.
primary space allocation. Amount of space requested by a user for a data set when it is created.
primary storage. A disk volume available to users for data allocation. The volumes in primary storage are called primary volumes.
R
RACF. Resource Access Control Facility.
recall. The process of moving a migrated data set from a level 1 or level 2 volume to a DFSMShsm-managed volume or to a volume not managed by DFSMShsm.
record-level sharing (RLS). An extension to VSAM that provides direct shared access to a VSAM data set from multiple systems using cross-system locking.


recovery. The process of rebuilding data after it has been damaged or destroyed, often by using a backup copy of the data or by reapplying transactions recorded in a log.
relative track address (TTR). Relative track and record address on a direct-access device, where TT represents two bytes specifying the track relative to the beginning of the data set, and R is one byte specifying the record on that track.
Resource Access Control Facility (RACF). An IBM licensed program that provides access control by identifying users to the system; verifying users of the system; authorizing access to protected resources; logging detected, unauthorized attempts to enter the system; and logging detected accesses to protected resources. RACF is included in z/OS Security Server and is also available as a separate program for the MVS and VM environments.
Resource Measurement Facility (RMF). An IBM licensed program or optional element of z/OS, that measures selected areas of system activity and presents the data collected in the format of printed reports, system management facilities (SMF) records, or display reports. Use RMF to evaluate system performance and identify reasons for performance problems.
restore. In DFSMShsm, the process of invoking DFSMSdss to perform the program's recover function. In general, it is to return to an original value or image, for example, to restore data in main storage from auxiliary storage.
RLS. Record-level sharing.
RMF. Resource Measurement Facility.

S
SCDS. Source control data set.
SDSP. Small data set packing.
secondary space allocation. Amount of additional space requested by the user for a data set when primary space is full.
service-level agreement. (1) An agreement between the storage administration group and a user group defining what service-levels the former will provide to ensure that users receive the space, availability, performance, and security they need. (2) An agreement between the storage administration group and operations defining what service-level operations will provide to ensure that storage management jobs required by the storage administration group are completed.
shelf location. A single space on a shelf for storage of removable media.
shelf. A place for storing removable media, such as tape and optical volumes, when they are not being written to or read.
small data set packing (SDSP). In DFSMShsm, the process used to migrate data sets that contain equal to or less than a specified amount of actual data. The data sets are written as one or more records into a VSAM data set on a migration level 1 volume.
small-data-set-packing data set. In DFSMShsm, a VSAM key-sequenced data set allocated on a migration level 1 volume and containing small data sets that have migrated.
SMF. System management facilities.


SMS complex. A collection of systems or system groups that share a common configuration. All systems in an SMS complex share a common active control data set (ACDS) and a communications data set (COMMDS). The systems or system groups that share the configuration are defined to SMS in the SMS base configuration.
SMS configuration. A configuration base, Storage Management Subsystem class, group, library, and drive definitions, and ACS routines that the Storage Management Subsystem uses to manage storage.
SMS control data set. A VSAM linear data set containing configurational, operational, or communications information that guides the execution of the Storage Management Subsystem.
SMS. Storage Management Subsystem.
source control data set (SCDS). A VSAM linear data set containing an SMS configuration. The SMS configuration in an SCDS can be changed and validated using ISMF.
space management. In DFSMShsm, the process of managing aged data sets on DFSMShsm-managed and migration volumes. The three types of space management are: migration, deletion, and retirement.
specific scratch pool. A group of empty tapes with unique serial numbers that are known to DFSMShsm as a result of being defined to DFSMShsm with the ADDVOL command.
spill storage group. An SMS storage group used to satisfy allocations which do not fit into the primary storage group.

storage administrator. A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies.
storage class. A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements.
storage control. The component in a storage subsystem that handles interaction between processor channel and storage devices, runs channel commands, and controls storage devices.
storage group. A collection of storage volumes and attributes, defined by the storage administrator. The collections can be a group of disk volumes, or a group of disk, optical, or tape volumes treated as a single object storage hierarchy.
storage hierarchy. An arrangement of storage devices with different speeds and capacities. The levels of the storage hierarchy include main storage (memory, disk cache), primary storage (disk containing uncompressed data), migration level 1 (disk containing data in a space-saving format), and migration level 2 (tape cartridges containing data in a space-saving format).
storage location. A location physically separate from the removable media library where volumes are stored for disaster recovery, backup, and vital records management.


Storage Management Subsystem (SMS). A DFSMS facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements to the system through data class, storage class, management class, storage group, and ACS routine definitions.
storage management. The activities of data set allocation, placement, monitoring, migration, backup, recall, recovery, and deletion. These can be done either manually or by using automated processes. The Storage Management Subsystem automates these processes for you, while optimizing storage resources.
striping. A software implementation of a disk array that distributes a data set across multiple volumes to improve performance.
sysplex. A set of MVS or z/OS systems communicating and cooperating with each other through certain multi-system hardware components and software services to process customer workloads.
system management facilities (SMF). A component of z/OS that collects input/output (I/O) statistics, provided at the data set and storage class levels, which helps you monitor the performance of the direct access storage subsystem.
system-managed data set. A data set that has been assigned a storage class.
system-managed storage. An approach to storage management in which the system determines data placement and an automatic data manager handles data backup, movement, space, and security.

system-managed tape library. A collection of tape volumes and tape devices, defined in the tape configuration database. A system-managed tape library can be automated or manual.
system-managed volume. A disk, optical, or tape volume that belongs to a storage group.
T
tape configuration database (TCDB). One or more volume catalogs used to maintain records of system-managed tape libraries and tape volumes.
Tape Library Dataserver. A hardware device that maintains the tape inventory associated with a set of tape drives. An automated tape library dataserver also manages the mounting, removal, and storage of tapes.
tape library. A set of equipment and facilities that support an installation's tape environment. This can include tape storage racks, a set of tape drives, and a set of related tape volumes mounted on those drives.
tape mount management. The methodology used to optimize tape subsystem operation and use, consisting of hardware and software facilities used to manage tape data efficiently.
tape storage group. A type of storage group that contains system-managed private tape volumes. The tape storage group definition specifies the system-managed tape libraries that can contain tape volumes.


tape subsystem. A magnetic tape subsystem consisting of a controller and devices, which allows for the storage of user data on tape cartridges. Examples of tape subsystems include the IBM 3490 and 3490E Magnetic Tape Subsystems.
TB. Terabyte.
TTOC. Tape table of contents record.
TTR. Relative track address.
U
unit affinity. Requests that the system allocate different data sets residing on different removable volumes to the same device during execution of the step to reduce the total number of tape drives required to execute the step. Explicit unit affinity is specified by coding the UNIT=AFF JCL keyword on a DD statement. Implicit unit affinity exists when a DD statement requests more volumes than devices.
use attribute. (1) The attribute assigned to a disk volume that controls when the volume can be used to allocate new data sets; use attributes are public, private, and storage. (2) For system-managed tape volumes, use attributes are scratch and private.
V
virtual storage access method (VSAM). An access method for direct or sequential processing of fixed and variable-length records on direct access devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number.

vital records. A data set or volume maintained for meeting an externally-imposed retention requirement, such as a legal requirement.
volume. The storage space on disk, tape, or optical devices, which is identified by a volume label.
volume pool. In DFSMShsm, a set of related primary volumes. When a data set is recalled, if the original volume that it was on is in a defined volume pool, the data set can be recalled to one of the volumes in the pool.
volume status. In the Storage Management Subsystem, indicates whether the volume is fully available for system management:
Initial indicates that the volume is not ready for system management because it contains data sets that are ineligible for system management.
Converted indicates that all of the data sets on a volume have an associated storage class and are cataloged in an integrated catalog facility catalog.
Non-system-managed indicates that the volume does not contain any system-managed data sets and has not been initialized as system-managed.
VTOC. Volume table of contents.


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 193.

DFSMS Release 10 Technical Update, SG24-6120
Hierarchical File System Usage Guide, SG24-5482
DFSMSrmm Primer, SG24-5983
VSAM Demystified, SG24-6105

Other resources
These publications are also relevant as further information sources:

z/OS V1R1.0-V1R3.0 DFSMS DFM Guide and Reference, SC26-7395
z/OS V1R1.0-V1R3.0 DFSMS Using the Volume Mount Analyzer, SC26-7413
z/OS V1R1.0-V1R3.0 DFSMSdfp Checkpoint/Restart, SC26-7401
z/OS V1R3.0 DFSMS Access Method Services for Catalogs, SC26-7394
z/OS V1R3.0 DFSMS Installation Exits, SC26-7396
z/OS V1R3.0 DFSMS Introduction, SC26-7397
z/OS V1R3.0 DFSMS Macro Instructions for Data Sets, SC26-7408
z/OS V1R3.0 DFSMS Migration, GC26-7398
z/OS V1R3.0 DFSMS Implementing System Managed Storage, SC26-7407
z/OS V1R3.0 DFSMS Managing Catalogs, SC26-7409
z/OS V1R3.0 DFSMS Using Data Sets, SC26-7410
z/OS V1R3.0 DFSMS Using Magnetic Tapes, SC26-7412
z/OS V1R3.0 DFSMS Using the Interactive Storage Management Facility, SC26-7411


z/OS V1R3.0 DFSMSdfp Advanced Services, SC26-7400
z/OS V1R3.0 DFSMSdfp Diagnosis Guide, GY27-7617
z/OS V1R3.0 DFSMSdfp Diagnosis Reference, GY27-7618
z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402
z/OS V1R3.0 DFSMSdfp Utilities, SC26-7414
z/OS V1R3.0 DFSMSrmm Application Programming Interface, SC26-7403
z/OS V1R3.0 DFSMSrmm Diagnosis Guide, GY27-7619
z/OS V1R3.0 DFSMSrmm Guide and Reference, SC26-7404
z/OS V1R3.0 DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS V1R3.0 DFSMSrmm Reporting, SC26-7406
z/OS V1R3.0 DFSMShsm Storage Administration Reference, SC35-0422
z/OS V1R3.0 DFSMS Advanced Copy Services, SC35-0428
z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide Object Support, SC35-0426
z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide for Tape Library, SC35-0427
z/OS V1R3.0 DFSMS Object Access Method Application Programmer's Reference, SC35-0425
z/OS V1R3.0 DFSMSdss Storage Administration Guide, SC35-0423
z/OS V1R3.0 DFSMSdss Storage Administration Reference, SC35-0424
z/OS V1R3.0 DFSMShsm Data Recovery Scenarios, GC35-0419
z/OS V1R3.0 DFSMShsm Implementation and Customization Guide, SC35-0418
z/OS V1R3.0 DFSMShsm Managing Your Own Data, SC35-0420
z/OS V1R3.0 DFSMShsm Storage Administration Guide, SC35-0421
z/OS V1R3.0 MVS System Management Facilities, SA22-7630


Referenced Web sites


These Web sites are also relevant as further information sources:

IBM TotalStorage Web site
http://www.storage.ibm.com/

z/OS Internet Library
http://www-1.ibm.com/servers/eserver/zseries/zos/bkserv/

S/390 coupling facility structure sizer tool
http://www.ibm.com/servers/eserver/zseries/cfsizer

How to get IBM Redbooks


You can order hardcopy Redbooks, as well as view, download, or search for Redbooks at the following Web site:
ibm.com/redbooks

You can also download additional materials (code samples or diskette/CD-ROM images) from that site.

IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web site for information about all the CD-ROMs offered, as well as updates and formats.


Index
Numerics
3390-9 overview 41 automatic class selection 62, 128

C A
ACS See automatic class selection advanced copy services 157 APARs II12431 172 II12896 174 OW43316 159 OW44442 173 OW44917 123 OW45271 122 OW45557 123 OW45674 115 OW46143 122 OW46387 123 OW47639 126 OW47651 140 OW47947 135 OW47967 140 OW47993 138 OW48234 116 OW48865 121, 128 OW48921 130 OW49148 118 OW49379 42 OW49491 63 OW49833 155 OW49863 133 OW50405 118 OW50528 57 OW53521 16 OW53804 45 OW53834 176 OW54128 57 API See Application Programming Interface Application Programming Interface DFSMSrmm 127 OAM 67 XRC 158 candidate volumes 11 catalog management 47 catalog address space 4 coupling facility caching 4 data set name validity checking 47 data set naming rules 3 defining catalogs 47 dumping CAS 48 expiration date 4 KEYRANGE 4 large real storage 4 MODIFY command 4 performance statistics 49 record size 3 REUSE attribute 4 system managed buffering 4 catalog search interface 43 common recall queue 5, 70 ARCRDEXT exit 95 ARCRPEXT exit 92 auditing the CRQ 106 CF loss 103 CF structure sizing 75 CRQ structure definition 76 diagnostic data collection 109 enabling 73 error recovery 101 full queue 92 HSMplex 70 JES3 environments 96 processing recall requests 104 processing requests 91 rebuilding the CRQ 107 selecting requests 94 usage considerations 72, 90 concurrent copy 114 CONFIGHFS command 51 configure MXRC 160 XRC 160


coupling XRC 161 CRQ see common recall queue 5 CSI See catalog search interface CXRC See coupled XRC

D
data set separation 3, 30 profile 30 usage considerations 33 DATACLASS 126 DFSMSdfp 39 caching CIs greater than 4K 59 catalog management 47 catalog performance statistics 49 CONFIGHFS command 51 data set name validity checking 47 dumping CAS 48 EXCP considerations 42 expiration date 57 extended alias support 45 GDG alter date 43 GDG base processing 43 GDG expiration date 44 HFS data sets 52 KEYRANGE parameter 53 large real storage 56 large volume support 40 OAM multiple object backup 61 OAM operator commands 64 OAM volume recovery 67 object access method record level sharing RECORDSIZE parameter 47 retention period 57 RLS lock structures 60 striped data sets 57 system managed buffering z/OS V1R3 enhancements 3 DFSMSdfp enhancements 3 DFSMSdss dump conditioning 115 full volume restore 116 HFS logical copy 114 large volume support 117 DFSMSdss keyword

ALLDATA 113 COPY 113, 115 COPY TRACK 117 COPYVOLID 115 DELETE 114 DUMP 115 DUMPCONDITIONING 117 DYNALLOC 113 PURGE 117 RESTORE 116 TOLERATE(ENQFAILURE) 113 DFSMShsm ARCRDEXT exit 95 ARCRPEXT exit 92 AUDIT FIX 107 auditing the CRQ 106 common recall queue 5 See common recall queue keyrange data sets 109 QUERY IMAGE command 73 DFSMSrmm API 127 bin management 133 control data set 126 extended addressability 126 extended format 126 control data set growth 137 conversion process 123 conversion tools 123 dialog multi-volume alert 122 DSTORE by location 138 EDGGTOOL 146 EDGRMMxx 128 EDGUX100 exit 127 error diagnostic messages 121 extended extract file 140 generating report JCL 155 Home location 130 journal data set growth 137 management class 128 object access method 121 RACF FACILITY class profile 140 reassign processing 134 report definition 147 report generator 140 report type support 144 reporting tool 146 special character support 120 storage group 128


storage location management 138 storage locations 129 TSO/E help 121 DO See system managed buffering dump conditioning 5, 115 DVC See dynamic volume count dynamic volume count 2, 8 advantages 16 candidate volumes 12 changing DVC value 15 enabling 10 extend storage group 20 LISTCAT 13 maintenance 16 space constraint relief 17 supported data set types 9 TIOT size 14 value considerations 14 volume selection 10

XDELPAIR 160 XEND 160 XQUERY 163 XRECOVER 160, 162 XSET 160, 164 XSTART 158, 161 XSUSPEND 160 configure CXRC 161 control data set 160 coupled XRC 161 displaying CXRC session 163 journal data set 160 master data set 161 multiple XRC 159 overview 158 state data set 160

F
Flashcopy 117

E
EA See extended addressability EDGHSKP utility 141 EDGRMAIN exec 142 EDGRRPTE exec 140 ESS See IBM TotalStorage ESS EXCP 42 exits EDGUX100 127128 SLSUX06 133 expiration date 57 extend storage group 2, 17 defining 18 DFSMShsm 22 DVC processing 20 star configuration 21 usage considerations 20 extended addressability 126 extended alias support 45 extended format 42, 126 extended remote copy 158 command XADDPAIR 160, 164 XCOUPLE 161

G
GDG base processing 3, 43 alter date 43 expiration date 44 guaranteed space 20, 26

H
HFS See hierarchical file system hierarchical file system 51, 112 logical copy 112 HSMplex 70

I
IBM TotalStorage ESS 40 ICETOOL 141 IDCAMS 126

K
KEYRANGE parameter 53

L
large real storage 56 large volume support 40, 117 coexistence support 41


design considerations 40 EXCP considerations 42 implementation 42 large volumes 3 LOCATION operand 131 LOCDEF command 131 LOCATION 132 MANAGEMENTTYPE 131 MEDIANAME 132 TYPE 131

See multiple XRC

O
OAM API changes 67 CBROAMxx keyword FIRSTBACKUPGROUP 65 SECONDBACKUPGROUP 63, 65 STORAGEGROUP 65 CBROAMxx statement SETOSMC 63, 65 DB2 tables ODBK2LOC 62 ODBK2SEC 62 display commands 64 ISMF 63 ISMF line operator RECOVER 68 management class 62 autobackup 62 modify commands 65 multiple object backup migration 62 multiple object backup scenarios 65 multiple object backup support 61 object backup storage group 63 object storage group 61 operator commands 64 SETOSMC STORAGEGROUP command FIRSTBACKUPGROUP 65 SECONDBACKUPGROUP 65 tape data set name 62 volume recovery process 67 OAM space management cycle 66 object access method See OAM OBSG See OAM OSMC See OAM space management cycle overflow storage group 2, 23 defining 24 guaranteed space 26 usage considerations 25

M
Magstar A60 155 maintenance information 171 management class 128 manual tape library 122 media manager 57 message routing 27 messages ADR410E 113114 ADR412E 113 ADR439E 113 ADR808I 116 ADR814E 117 ADR960E 113 CBR0230D 63 CBR0231A 6364 CBR1075I 65 CBR1100I 64 CBR1130I 64 CBR1140I 64 CBR9370I 64 CBR9820D 68 CBR9824I 68 CBR9863I 68 EDG2236I 121 EDG4021I 121 EDGT062 123 EDGT063 123 IEC507D 58 IGW322I 61 IGWFAMS 52 MODIFY CATALOG command 49 MTL See manual tape library multiple object backup support 61 multiple XRC 159 MXRC

P
parallel access volumes 40 PARMLIB members BPXPRMxx 112 CBROAMxx 61, 63, 6567


COMMNDxx 49 EDGRMMxx 128129, 131132, 136 IGDSMSxx 59 partitioned data set extended 160 PAV See parallel access volumes PDSE See partitioned data set extended POOL operand 131 primary volumes 11

Q
QUICKCOPY 164

R
RACF STGADMIN.EDG.HOUSEKEEP.RPTEXT 140 RACK operand 131 record level sharing 59 lock structures 60 rebuild or alter structures 61 records greater than 4K 59 Redbooks Web site 193 Contact us xiii report generator installation library 141 product library 141 report definition 147 user library 141 retention period 57 REUSE parameter 4, 57 RLS See record level sharing RMM ADDVOLUME subcommand 131 RMM CHANGEVOLUME subcommand 131 RMM OPTION command PREACS 128 REUSEBIN 133, 136 REUSEBIN(STARTMOVE) 134 SMSACS 128 TPRACF(N) 120

EDGDOCS 123 EDGGRTD 144 EDGGTOOL 146 EDGJDHKP 139 EDGJRPT 140 EDGJWHKP 139 EDGUX100 127 SCR See space constraint relief SDM See system data mover SMB See system managed buffering SMF 29 SMS 2 SMS enhancements 2 Snapshot 117 space constraint relief 8, 10, 17 spill storage group 23 storage group 128 storage locations built-in 129 DSTORE by location 138 home locations use 132 installation-defined 129130 SVC99 8 symbolic alias support 3 system data mover 158 ANTAS000 158 ANTAS001 158 API 158 system managed buffering 54 AIX support 56 create optimization 55 create recovery 55 direct optimize 55 direct weighted 55 retry capability 55 system managed tape 122 system symbolics 45

T
task input/output table 11 content 11 DVC value 12 size 14 TIOT See task input/output table

S
SAMPLIB members CBRSMR13 62 EDGCMM01 123 EDGDOC 123


V
vital record specification 128 volume selection 10 VRS See vital record specification VSAM 9 KEYRANGE parameter 53 large real storage 56 media manager 57

W
WLM See workload manager workload manager 42

X
XRC See extended remote copy

Z
z/OS UNIX 112


Back cover

z/OS V1R3 DFSMS


Technical Guide
Learn the new features and functions in z/OS V1R3 DFSMS
Improve your business continuance and efficiency
Enhance data access and storage management
Each release of DFSMS builds upon the previous version to provide enhanced storage management, data access, device support, program management, and distributed data access for the z/OS platform in a system-managed storage environment. This IBM Redbook provides a technical overview of the functions and enhancements in z/OS V1R3 DFSMS. It provides you with the information you need to understand and evaluate the content of this DFSMS release, along with practical implementation hints and tips. Also included are enhancements that were made available prior to this release through an enabling PTF that have been integrated into this release. This book is written for storage professionals and system programmers who have experience with the components of DFSMS. It provides sufficient information so you can start prioritizing the implementation of new functions and evaluating their applicability in your DFSMS environment.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-6569-00 ISBN 073842532X
