
Tivoli Storage Manager


Version 5.5

Performance Tuning Guide

SC32-0141-01
Note!
Before using this information and the product it supports, read the general information in the "Notices" appendix.

Edition notice
This edition applies to Version 5 Release 5 of IBM Tivoli Storage Manager Performance Tuning Guide (program
numbers 5608-ACS, 5608-APD, 5608-APE, 5608-APR, 5608-ARM, 5608-CSS, 5608-HSM, 5608-ISM, 5608-ISX,
5608-SAN, 5608-SPM, 5698-USS) and to any subsequent releases until otherwise indicated in new editions.
Changes since the previous edition are marked with a vertical bar (|) in the left margin. Ensure that you are using
the correct edition for the level of the product.
© Copyright International Business Machines Corporation 1993, 2007. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Preface
  Who should read this guide
  Publications
    Tivoli Storage Manager publications
    Related hardware publications
    Related software publications
  Support information
    Getting technical training
    Searching knowledge bases
    Contacting IBM Software Support
  Accessibility features

Chapter 1. Overview of IBM Tivoli Storage Manager tuning

Chapter 2. IBM Tivoli Storage Manager server performance tuning
  BUFPOOLSIZE
  EXPINTERVAL
  LOGPOOLSIZE
  MAXNUMMP
  MAXSESSIONS
  MOVEBATCHSIZE and MOVESIZETHRESH
  RESTOREINTERVAL
  SELFTUNEBUFPOOLSIZE
  TCPWINDOWSIZE
  TXNGROUPMAX
| Performance recommendations for all server platforms
  Database and recovery log performance
  Backup performance
  Disaster recovery performance
  Cached disk storage pools
|   Clearing cached files
  Tuning storage pool migration
    Tuning migration processes
    Tuning migration thresholds
|   Collocation by group
  Searching the server activity log
  Scheduling sessions and processes
  LAN-free backup
  Storage agent tuning
  AIX: vmo and ioo commands
  UNIX file systems and raw logical volumes
  Performance recommendations by server platform
    AIX server
    HP-UX server
    Linux server
    Sun Solaris server
    Windows server
    z/OS server
  Estimating throughput
    Estimating throughput rate for average workloads
    Estimating throughput in other environments
  Tuning tape drive performance
    Using collocation with tape drives
  IBM LTO Ultrium tape drives
    IBM LTO Ultrium streaming rate performance
    IBM LTO Ultrium performance recommendations
  Tuning disk performance
    Busses

Chapter 3. IBM Tivoli Storage Manager client performance tuning
  COMPRESSION
  COMPRESSALWAYS
  COMMRESTARTDURATION and COMMRESTARTINTERVAL
  QUIET
  DISKBUFFSIZE
  PROCESSORUTILIZATION
  Multiple session backup and restore
  RESOURCEUTILIZATION
  TAPEPROMPT
  TCPBUFFSIZE
  TCPNODELAY
  TCPWINDOWSIZE
  TXNBYTELIMIT
  Client command line options
  Performance recommendations by client platform
    Macintosh client
    Windows client
  Client performance considerations
  Hierarchical Storage Manager tuning
  Data Protection for Domino for z/OS
| Client incremental backup memory requirements

Chapter 4. Network protocol tuning
  Networks
  Limiting network traffic
  TCP/IP communication concepts and tuning
    Sliding window
  Platform-specific network recommendations
    AIX network settings
    NetWare client cache tuning
    Sun Solaris network settings
    z/OS network settings

Chapter 5. Archive function
  Using the archive function
  Identifying and correcting archive-related problems

Appendix. Notices
  Trademarks

Glossary

Index



Preface
This publication helps you tune the performance of the servers and clients in your
IBM® Tivoli® Storage Manager environment.

Before using this publication, you should be familiar with the following areas:
v The operating systems on which your IBM Tivoli Storage Manager servers and
clients reside
v The communication protocols installed on your client and server machines

Who should read this guide


The audience for this publication is anyone who wants to improve the
performance of the Tivoli Storage Manager server, client, network, or attached
hardware.

Publications
Tivoli Storage Manager publications and other related publications are available
online.

You can search all the Tivoli Storage Manager publications in the Information
Center: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp

You can download PDF versions of publications from the IBM Publications Center:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/
pbi.cgi

You can also order some related publications from this Web site. The Web site
provides information for ordering publications from countries other than the
United States. In the United States, you can order publications by calling
800-879-2755.

Tivoli Storage Manager publications


Tivoli Storage Manager server publications
Publication Title Order Number
IBM Tivoli Storage Manager Messages SC32-0140
IBM Tivoli Storage Manager Performance Tuning Guide SC32-0141
IBM Tivoli Storage Manager Problem Determination Guide SC32-0142
IBM Tivoli Storage Manager for AIX Installation Guide GC23-5969
IBM Tivoli Storage Manager for AIX Administrator’s Guide SC32-0117
IBM Tivoli Storage Manager for AIX Administrator’s Reference SC32-0123
IBM Tivoli Storage Manager for HP-UX Installation Guide GC23-5970
IBM Tivoli Storage Manager for HP-UX Administrator’s Guide SC32-0118
IBM Tivoli Storage Manager for HP-UX Administrator’s Reference SC32-0124
IBM Tivoli Storage Manager for Linux Installation Guide GC23-5971

IBM Tivoli Storage Manager for Linux Administrator’s Guide SC32-0119
IBM Tivoli Storage Manager for Linux Administrator’s Reference SC32-0125
IBM Tivoli Storage Manager for Sun Solaris Installation Guide GC23-5972
IBM Tivoli Storage Manager for Sun Solaris Administrator’s Guide SC32-0120
IBM Tivoli Storage Manager for Sun Solaris Administrator’s Reference SC32-0126
IBM Tivoli Storage Manager for Windows Installation Guide GC23-5973
IBM Tivoli Storage Manager for Windows Administrator’s Guide SC32-0121
IBM Tivoli Storage Manager for Windows Administrator’s Reference SC32-0127
IBM Tivoli Storage Manager for z/OS Installation Guide GC23-5974
IBM Tivoli Storage Manager for z/OS Administrator’s Guide SC32-0122
IBM Tivoli Storage Manager for z/OS Administrator’s Reference SC32-0128
IBM Tivoli Storage Manager for System Backup and Recovery Installation and User’s Guide SC32-6543

Tivoli Storage Manager storage agent publications

Publication Title Order Number
IBM Tivoli Storage Manager for SAN for AIX Storage Agent User’s Guide SC32-0129
IBM Tivoli Storage Manager for SAN for HP-UX Storage Agent User’s Guide SC32-0130
IBM Tivoli Storage Manager for SAN for Linux Storage Agent User’s Guide SC32-0131
IBM Tivoli Storage Manager for SAN for Sun Solaris Storage Agent User’s Guide SC32-0132
IBM Tivoli Storage Manager for SAN for Windows Storage Agent User’s Guide SC32-0133

Tivoli Storage Manager client publications

Publication Title Order Number
IBM Tivoli Storage Manager for Macintosh: Backup-Archive Clients Installation and User’s Guide SC32-0143
IBM Tivoli Storage Manager for NetWare: Backup-Archive Clients Installation and User’s Guide SC32-0144
IBM Tivoli Storage Manager for UNIX and Linux: Backup-Archive Clients Installation and User’s Guide SC32-0145
IBM Tivoli Storage Manager for Windows: Backup-Archive Clients Installation and User’s Guide SC32-0146
IBM Tivoli Storage Manager for Space Management for UNIX and Linux: User’s Guide SC32-0148
IBM Tivoli Storage Manager for HSM for Windows Administration Guide SC32-1773
IBM Tivoli Storage Manager Using the Application Program Interface SC32-0147



Tivoli Storage Manager Data Protection publications

Publication Title Order Number
IBM Tivoli Storage Manager for Advanced Copy Services: Data Protection for Snapshot Devices for SAP Installation and User’s Guide for Oracle SC33-8207
IBM Tivoli Storage Manager for Advanced Copy Services: Data Protection for Snapshot Devices for DB2 Installation and User’s Guide SC33-8330
IBM Tivoli Storage Manager for Advanced Copy Services: Data Protection for WebSphere Application Server Installation and User’s Guide SC32-9075
IBM Tivoli Storage Manager for Databases: Data Protection for Informix® Installation and User’s Guide SH26-4095
IBM Tivoli Storage Manager for Databases: Data Protection for Microsoft SQL Server Installation and User’s Guide SC32-9059
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for UNIX and Linux Installation and User’s Guide SC32-9064
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for Windows Installation and User’s Guide SC32-9065
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Installation and User’s Guide for DB2 SC33-6341
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Installation and User’s Guide for Oracle SC33-6340
IBM Tivoli Storage Manager for Hardware: Data Protection for Enterprise Storage Server® for DB2 UDB Installation and User’s Guide SC32-9060
IBM Tivoli Storage Manager for Hardware: Data Protection for Snapshot Devices for Oracle Installation and User’s Guide GC32-1772
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for UNIX, Linux, and OS/400® Installation and User’s Guide SC32-9056
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for Windows Installation and User’s Guide SC32-9057
IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User’s Guide SC32-9058

Related hardware publications

Title Order Number
IBM 3490E Model E01 and E11 User’s Guide GA32-0298
IBM Magstar® 3494 Tape Library Introduction and Planning Guide GA32-0279
IBM Magstar 3494 Tape Library Dataserver Operator Guide GA32-0280
IBM 3495 Tape Library Dataserver Models L20, L30, L40, and L50 Operator’s Guide GA32-0235
IBM Magstar MP 3570 Tape Subsystem Operator’s Guide GA32-0345
IBM 3584 Ultrascalable Tape Library Planning and Operator Guide GA32-0408
IBM TotalStorage® Tape Device Drivers Installation and User’s Guide GC35-0154
IBM TotalStorage Enterprise Tape System 3590 Operator Guide GA32-0330
IBM TotalStorage Enterprise Tape System 3592 Operator Guide GA32-0465
IBM LTO Ultrium 3580 Tape Drive Setup, Operator, and Service Guide GA32-0415
IBM LTO Ultrium 3581 Tape Autoloader Setup, Operator, and Service Guide GA32-0412
IBM LTO Ultrium 3583 Tape Library Setup and Operator Guide GA32-0411
StorageSmart by IBM TX200 Ultrium External Tape Drive M/T 3585 Setup, Operator, and Service Guide GA32-0421
StorageSmart by IBM SL7 Ultrium Tape Autoloader M/T 3586 Setup, Operator, and Service Guide GA32-0423

Related software publications

Publication Title Order Number
DFSMS/MVS DFSMSrmm Guide and Reference SC26-4931
DFSMS/MVS DFSMSrmm Implementation and Customization Guide SC26-4932
OS/390® Security Server (RACF) Security Administrator’s Guide SC28-1915
z/OS SecureWay Security Server RACF Security Administrator’s Guide SA22-7683

Support information
You can find support information for IBM products from a number of different
sources:
v “Getting technical training”
v “Searching knowledge bases”
v “Contacting IBM Software Support” on page x

Getting technical training


Information about Tivoli technical training courses is available online.

http://www.ibm.com/software/tivoli/education/

Searching knowledge bases


If you have a problem with Tivoli Storage Manager, there are a variety of
knowledge bases that you can search.

You can begin with the Information Center, from which you can search all the
Tivoli Storage Manager publications: http://publib.boulder.ibm.com/infocenter/
tivihelp/v1r1/index.jsp

Searching the Internet


If you cannot find an answer to your question in the information center, search the
Internet for the latest, most complete information that might help you resolve your
problem.

To search multiple Internet resources, go to the support Web site for Tivoli Storage
Manager: http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html. From this Web site, you can search a variety of
resources, including:
v IBM technotes
v IBM downloads



v IBM Redbooks™
If you still cannot find the solution to your problem, you can search forums and
newsgroups on the Internet for the latest information that might help you resolve
your problem.

Using IBM Support Assistant


The IBM Support Assistant is a free, stand-alone application that you can install on
any workstation. You can then enhance the application by installing
product-specific plug-in modules for the IBM products you use.

The IBM Support Assistant helps you gather support information when you need
to open a problem management record (PMR), which you can then use to track the
problem. The product-specific plug-in modules provide you with the following
resources:
v Support links
v Education links
v Ability to submit problem management reports

For more information, see the IBM Support Assistant Web site at
http://www.ibm.com/software/support/isa/.

Finding product fixes


A product fix might be available to resolve your problem. You can determine what
fixes are available by checking the product support Web site.
1. Go to the IBM Software Support Web site: http://www.ibm.com/software/
tivoli/products/storage-mgr/product-links.html
2. Click the Support Pages link for your Tivoli Storage Manager product.
3. Click Fixes for a list of fixes for your product.
4. Click the name of a fix to read the description and optionally download the fix.

Getting E-mail notification of product fixes


You can sign up to receive weekly E-mail notifications about fixes and other news
about IBM products.
1. From the support page for any IBM product, click My support in the
upper-right corner of the page.
2. If you have already registered, skip to the next step. If you have not registered,
click register in the upper-right corner of the support page to establish your
user ID and password.
3. Sign in to My support.
4. On the My support page, click Edit profiles in the left navigation pane, and
scroll to Select Mail Preferences. Select a product family and check the
appropriate boxes for the type of information you want.
5. Click Submit.
6. For E-mail notification for other products, repeat steps 4 and 5.

Contacting IBM Software Support
Before you contact IBM Software Support, you must have an active IBM software
maintenance contract, and you must be authorized to submit problems to IBM. The
type of software maintenance contract that you need depends on the type of
product you have.
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus®, and Rational® products, as well as DB2® and WebSphere® products that
run on Windows® or UNIX® operating systems), enroll in Passport Advantage®
in one of the following ways:
Online
Go to the Passport Advantage Web page (http://www.ibm.com/
software/sw-lotus/services/cwepassport.nsf/wdocs/passporthome) and
click How to Enroll.
By phone
For the phone number to call in your country, go to the IBM Contacts
Web page (http://techsupport.services.ibm.com/guides/contacts.html)
and click the name of your geographic region.
v For IBM eServer™ software products (including, but not limited to, DB2 and
WebSphere products that run in zSeries®, pSeries®, and iSeries™ environments),
you can purchase a software maintenance agreement by working directly with
an IBM sales representative or an IBM Business Partner. For more information
about support for eServer software products, go to the IBM Technical Support
Advantage Web page: http://www.ibm.com/servers/eserver/techsupport.html.

If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States. For a list of telephone
numbers of people who provide support for your location, go to the IBM Contacts
Web page, http://techsupport.services.ibm.com/guides/contacts.html, and click
the name of your geographic region.

Perform these actions to contact IBM Software Support:


1. Determine the business impact of your problem.
2. Describe your problem and gather background information.
3. Submit your problem to IBM Software Support.

Determine the business impact


When you report a problem to IBM, you are asked to supply a severity level.
Therefore, you need to understand and assess the business impact of the problem
you are reporting.

Severity 1
Critical business impact: You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.

Severity 2
Significant business impact: The program is usable but is severely limited.

Severity 3
Some business impact: The program is usable with less significant features (not critical to operations) unavailable.

Severity 4
Minimal business impact: The problem causes little impact on operations, or a reasonable circumvention to the problem has been implemented.



Describe your problem and gather background information
When explaining a problem to IBM, be as specific as possible. Include all relevant
background information so that IBM Software Support specialists can help you
solve the problem efficiently.

To save time, know the answers to these questions:


v What software versions were you running when the problem occurred?
v Do you have logs, traces, and messages that are related to the problem
symptoms? IBM Software Support is likely to ask for this information.
v Can the problem be re-created? If so, what steps led to the failure?
v Have any changes been made to the system? For example, hardware, operating
system, networking software, and so on.
v Are you currently using a workaround for this problem? If so, be prepared to
explain it when you report the problem.

Submit your problem to IBM Software Support


You can submit your problem to IBM Software Support online or by phone.
Online
Go to the "Submit and track problems" page on the IBM Software Support
site: http://www.ibm.com/software/support/probsub.html. Enter your
information into the appropriate problem submission tool.
By phone
For the phone number to call in your country, go to the contacts page of
the IBM Software Support Handbook on the Web and click the name of
your geographic region.

If the problem you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround for you to implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
IBM product support Web pages daily, so that other users who experience the
same problem can benefit from the same resolutions.

Accessibility features
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products successfully. The major
accessibility features of Tivoli Storage Manager are described in this topic.
v Server and client command-line interfaces provide comprehensive control of
Tivoli Storage Manager using a keyboard.
v The Windows client graphical interface can be navigated and operated using a
keyboard.
v The Web backup-archive client interface is HTML 4.0 compliant, and accessibility
is limited only by the choice of Internet browser.
v All user documentation is provided in HTML and PDF format. Descriptive text
is provided for all documentation images.
v The Tivoli Storage Manager for Windows Console follows Microsoft®
conventions for all keyboard navigation and access. Drag and Drop support is
handled using the Microsoft Windows Accessibility option known as
MouseKeys. For more information about MouseKeys and other Windows
accessibility options, refer to the Windows Online Help (keyword:
MouseKeys).



Chapter 1. Overview of IBM Tivoli Storage Manager tuning
Tivoli Storage Manager performance can be influenced by various factors. Tuning
for optimal performance requires care and expertise.

Tuning Tivoli Storage Manager can be complex because of the many operating
systems, network configurations, and storage devices that Tivoli Storage Manager
supports. Even tuning a single function on a single platform can be complex.
Factors that can significantly affect performance include:
v Average client file size
v Percentage of files changed since last incremental backup
v Percentage of bytes changed since last incremental backup
v Client hardware (CPUs, RAM, disk drives, network adapters)
v Client operating system
v Client activity (non-Tivoli Storage Manager workload)
v Server hardware (CPUs, RAM, disk drives, network adapters)
v Server storage pool devices (disk, tape, optical)
v Server operating system
v Server activity (non-Tivoli Storage Manager workload)
v Network hardware and configuration
v Network utilization
v Network reliability
v Communication protocol
v Communication protocol tuning
v Final output repository type (disk, tape, optical)

It is not practical to cover all possible combinations of these factors here. The
topics are limited to those that you can control to some degree without replacing
hardware.



Chapter 2. IBM Tivoli Storage Manager server performance
tuning
You can tune the performance of Tivoli Storage Manager servers through server
options, command parameters, and other configuration settings.

The options are tunable on most Tivoli Storage Manager servers. See the
Administrator’s Reference to determine the options available for your platform. You
can change any option setting in the server options file (dsmserv.opt). If you
change the server options file, you must stop and restart the server for the changes
to take effect. You can change some settings with the server SETOPT command.

BUFPOOLSIZE
The database buffer pool provides cache storage, which allows database pages to
remain in memory for a longer period of time and provide faster access. Use the
BUFPOOLSIZE server option to set the optimal size for the database buffer pool.

When database pages remain in cache, the server can make continuous updates to
the pages without requiring I/O operations to external storage. While a larger
database buffer pool can improve server performance, it also requires more
memory.

An optimal setting for the database buffer pool is one that results in a cache hit
percentage greater than or equal to 99%. Performance decreases drastically if the
buffer pool cache hit ratio drops below 99%. To check the cache hit percentage,
issue the QUERY DB FORMAT=DETAIL command.

Increasing the BUFPOOLSIZE parameter can improve the performance of many
Tivoli Storage Manager server functions such as multi-client backup, storage pool
migration, storage pool backup, expiration processing, and move data. If the cache
hit percentage is lower than 99%, increase the size of the BUFPOOLSIZE parameter
in the dsmserv.opt file, or use the SETOPT command. If you change the option in
the dsmserv.opt file, you must stop and restart the server before the change takes
effect.

The default value is 32768, which is a reasonable starting value that equals 8192
database pages. Based on your cache hit rate, increase BUFPOOLSIZE in
increments. A cache hit percentage greater than 99% is an indication that the
proper BUFPOOLSIZE has been reached. However, raising the BUFPOOLSIZE
beyond that level can be very helpful. While increasing BUFPOOLSIZE, take care
not to cause paging in the virtual memory system. Monitor system memory usage
to check for any increased paging after the BUFPOOLSIZE change. Use the RESET
BUFPOOL command to reset the cache hit statistics. If you are paging, buffer pool
cache hit statistics are misleading because the database pages come from the
paging file. Additionally, the buffer pool can be so large that the cost of managing
it and the cost of locating pages within it can outweigh the cache benefits. In most
cases, the benefits of buffering can be achieved with a pool size of one gigabyte or
less.
v The optimal buffer pool size is from 1/8 to 1/4 of real memory up to 1 GB.
Buffer pools much larger than 1 GB could actually lower performance in some
environments.

© Copyright IBM Corp. 1993, 2007 3


v For AIX®, to avoid paging, see “AIX: vmo and ioo commands” on page 15.
v See the SELFTUNEBUFPOOL server option regarding automatic tuning of the
BUFPOOLSIZE

Recommended starting points

System Memory (MB)   Recommended Buffer Pool Size (KB)   Recommended Buffer Pool Size (MB)
256                  32768                               32
512                  65536                               64
1024                 131072                              128
2048                 262144                              256
3072                 393216                              384
4096                 524288                              512
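As a rough illustration of the sizing guidance above (1/8 of real memory as a starting point, with little benefit beyond 1 GB), the table's starting values can be computed as follows. This is an illustrative sketch, not part of Tivoli Storage Manager:

```python
def bufpoolsize_starting_point(system_memory_mb):
    """Suggest a starting BUFPOOLSIZE value, in KB, using 1/8 of real
    memory as the starting point (the table's values), capped at 1 GB."""
    # BUFPOOLSIZE is specified in KB; convert MB of memory to KB first.
    size_kb = (system_memory_mb * 1024) // 8
    # Buffer pools much larger than 1 GB can actually lower performance.
    return min(size_kb, 1024 * 1024)

def cache_hit_ok(cache_hit_pct):
    """A cache hit percentage of 99% or better (from QUERY DB
    FORMAT=DETAIL) indicates that BUFPOOLSIZE is adequate."""
    return cache_hit_pct >= 99.0
```

For example, a 4096 MB server yields a starting point of 524288 KB (512 MB), matching the table's last row.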

EXPINTERVAL
Inventory expiration removes client backup and archive file copies from the server.
EXPINTERVAL specifies the interval, in hours, between automatic inventory
expirations run by the Tivoli Storage Manager server. The default is 24.

Backup and archive copy groups can specify the criteria that make copies of files
eligible for deletion from data storage. However, even when a file becomes eligible
for deletion, the file is not deleted until expiration processing occurs. If expiration
processing does not occur periodically, storage pool space is not reclaimed from
expired client files, and the Tivoli Storage Manager server requires increased disk
storage space.

Expiration processing is CPU and I/O intensive. If possible, it should be run when
other Tivoli Storage Manager processes are not occurring. To enable this, either
schedule expiration once per day, or set EXPINTERVAL to 0 and manually start the
process with the EXPIRE INVENTORY server command. Expiration processing can
also be scheduled in an administrative schedule.

When using the DURATION parameter on an administrative schedule, periodically
check that expiration is actually completing within the specified time.

Recommendation
EXPINTERVAL 0

Specifies no expiration processing. Use an administrative schedule to run
expiration at an appropriate time each day.
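This recommendation can be implemented with a server option and an administrative schedule. The following sketch assumes expiration should run daily at 03:00; the schedule name and start time are illustrative:

```
* In dsmserv.opt: disable automatic expiration
EXPINTERVAL 0

* From an administrative client, schedule expiration once per day:
DEFINE SCHEDULE DAILY_EXPIRATION TYPE=ADMINISTRATIVE CMD="EXPIRE INVENTORY" ACTIVE=YES STARTTIME=03:00 PERIOD=1 PERUNITS=DAYS
```

Afterward, check the server activity log to verify that expiration completes each day.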



LOGPOOLSIZE
The LOGPOOLSIZE option specifies the size of the recovery log buffer pool. The
recovery log buffer pool is used to hold new transaction records until they can be
written to the recovery log.

A large recovery log buffer pool might increase the rate at which recovery log
transactions are committed to the database, but it also requires more memory. The
size of the recovery log buffer pool can affect the frequency in which the server
forces records to the recovery log. To determine if LOGPOOLSIZE should be
increased, run the QUERY LOG FORMAT=DETAIL command and check the value
of LogPool Percentage Wait. If the value is greater than zero, increase the value of
LOGPOOLSIZE. As the size of the recovery log buffer pool is increased, remember
to monitor system memory usage. Using sizes much larger than the recommended
values might result in lower performance.

Recommendation
LOGPOOLSIZE 2048 - 8192 KB
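The tuning rule described above can be sketched as a small helper: if LogPool Percentage Wait is greater than zero, enlarge the pool, staying within the recommended range. The step size and ceiling here are illustrative assumptions, not product defaults:

```python
def next_logpoolsize(current_kb, logpool_pct_wait, step_kb=1024, max_kb=8192):
    """Suggest a new LOGPOOLSIZE based on QUERY LOG FORMAT=DETAIL output.
    A LogPool Percentage Wait greater than zero means transactions are
    waiting for recovery log buffers, so the pool should be enlarged.
    Sizes much larger than the recommended range can lower performance,
    so growth stops at max_kb."""
    if logpool_pct_wait > 0 and current_kb < max_kb:
        return min(current_kb + step_kb, max_kb)
    return current_kb
```

For example, a 2048 KB pool showing any wait percentage would be stepped up to 3072 KB, then rechecked after the change.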

MAXNUMMP
The MAXNUMMP server option specifies the maximum number of mount points a
node is allowed to use on the server.

The MAXNUMMP option can be set to an integer from 0 to 999. Zero specifies that
the node cannot acquire any mount point for a backup or archive operation.
However, the server still allows the node to use a mount point for a restore or
retrieve operation. If the client stores its data in a storage pool that has copy
storage pools defined for simultaneous backup, the client might require additional
mount points.

As a general rule, assign one mount point for each copy storage pool of sequential
device type. If the primary storage pool is of sequential device type, assign a
mount point for the primary storage pool also.
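The rule of thumb above can be expressed as a short sketch; the helper name is hypothetical, and the clamping to the option's 0 to 999 range is an assumption for illustration:

```python
def recommended_maxnummp(num_sequential_copy_pools, primary_is_sequential):
    """One mount point per sequential-device copy storage pool used for
    simultaneous backup, plus one for the primary storage pool if it is
    also on a sequential device."""
    mount_points = num_sequential_copy_pools
    if primary_is_sequential:
        mount_points += 1
    # MAXNUMMP accepts values from 0 to 999; clamp to that range.
    return max(0, min(mount_points, 999))
```

For example, a node writing to a sequential primary pool with two sequential copy pools would need MAXNUMMP of at least 3.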

MAXSESSIONS
The MAXSESSIONS option specifies the maximum number of simultaneous client
sessions that can connect with the Tivoli Storage Manager server.

The default value is 25 client sessions; the minimum is 2. The maximum is limited
only by available virtual memory or communication resources. Limiting the
number of client sessions can improve server performance, but it reduces the
availability of Tivoli Storage Manager services to the clients.



MOVEBATCHSIZE and MOVESIZETHRESH
The MOVEBATCHSIZE and MOVESIZETHRESH options tune the performance of
server processes that involve the movement of data between storage media. These
processes include storage pool backup and restore, migration, reclamation, and
move data operations.

MOVEBATCHSIZE specifies the number of files to be moved and grouped
together in a batch, within the same server transaction. The default value for
MOVEBATCHSIZE is 40, and the maximum value is 1000. The
MOVESIZETHRESH option specifies, in megabytes, a threshold for the amount of
data moved as a batch, within the same server transaction. When this threshold is
reached, no more files are added to the current batch, and a new transaction is
started after the current batch is moved. The default value for MOVESIZETHRESH
is 500, and the maximum value is 2048.

The number of client files moved for each server database transaction during a
server storage pool backup/restore, migration, reclamation or move data operation
are determined by the number and size of the files in the batch. If the number of
files in the batch equals the MOVEBATCHSIZE before the cumulative size of the
files becomes greater than the MOVESIZETHRESH, then the MOVEBATCHSIZE is
used to determine the number of files moved or copied in the transaction. If the
cumulative size of files being gathered for a move or copy operation exceeds the
MOVESIZETHRESH value before the number of files becomes equivalent to the
MOVEBATCHSIZE, then the MOVESIZETHRESH value is used to determine the
number of files moved or copied in the transaction.
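The two thresholds above can be illustrated with a short sketch. This is a minimal illustration of the batching rule, not Tivoli Storage Manager code; the function name and its inputs are hypothetical:

```python
def batch_files(file_sizes_mb, move_batch_size=40, move_size_thresh=500):
    """Group files into server transactions the way MOVEBATCHSIZE and
    MOVESIZETHRESH interact: a batch is closed as soon as it reaches
    MOVEBATCHSIZE files, or its cumulative size reaches MOVESIZETHRESH
    megabytes, whichever happens first."""
    batches, current, current_mb = [], [], 0.0
    for size in file_sizes_mb:
        current.append(size)
        current_mb += size
        if len(current) >= move_batch_size or current_mb >= move_size_thresh:
            batches.append(current)
            current, current_mb = [], 0.0
    if current:
        # Remaining files form a final, smaller batch.
        batches.append(current)
    return batches
```

For example, 100 one-megabyte files with the default settings close on the file count (batches of 40, 40, and 20), while a few very large files close on the size threshold instead.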

When the MOVEBATCHSIZE or MOVESIZETHRESH parameters are increased
from their default values, the server requires more space in the recovery log. The
recovery log might require an allocation of space two or more times larger than a
recovery log size which uses the defaults. In addition, the server requires a longer
initialization time at startup. The impact of a larger recovery log size is felt while
running the server with the log mode set to NORMAL (the default value). If you
choose to increase these values for performance reasons, be sure to monitor
recovery log usage during the first few storage pool backup/restore, migration,
reclamation, or move data executions to ensure sufficient recovery log space is
available.

The DEFINE LOGVOLUME and EXTEND LOG commands are used to add space
to the server recovery log. Also, ensure additional volumes are available (formatted
with the DSMFMT or ANRFMT program) for extending the recovery log.

Recommendation
MOVEBATCHSIZE 1000
MOVESIZETHRESH 2048

6 IBM Tivoli Storage Manager: Performance Tuning Guide


RESTOREINTERVAL
The RESTOREINTERVAL option specifies how long, in minutes, a restartable
restore session can remain in the database before it is eligible to be expired.
Restartable restores allow restores to continue after an interruption without
starting at the beginning.

Restartable restores reduce duplicate effort or manual determination of where a
restore process was terminated. The RESTOREINTERVAL option defines the
amount of time an interrupted restore can remain in the restartable state.

The minimum value is 0. The maximum is 10080 (one week). The default is 1440
(24 hours). If the value is set to 0 and the restore is interrupted or fails, the restore
is still put in the restartable state. However, it is immediately eligible to be expired.
Restartable restore sessions consume resources on the Tivoli Storage Manager
server. You should not keep these sessions any longer than they are needed.

Recommendation
RESTOREINTERVAL Tune to your environment.

SELFTUNEBUFPOOLSIZE
The SELFTUNEBUFPOOLSIZE server option controls the automatic adjustment of
the buffer pool size.

The SELFTUNEBUFPOOLSIZE option can be set to YES or NO. The default is NO.
Specifying YES causes the database cache hit ratio statistics to be reset before
starting expiration processing and to be examined after expiration processing
completes. The buffer pool size is adjusted if cache hit ratio is less than 98%. The
percentage increase in buffer pool size is half the difference between the 98% target
and the actual buffer pool cache hit ratio. This increase is not done if a
platform-specific check fails:
v UNIX: Buffer pool size may not exceed 10% of physical memory.
v Windows: The same as UNIX, with an additional check that memory load does
not exceed 80%.
v z/OS®: Buffer pool size may not exceed 50% of region size.

Be careful when using SELFTUNEBUFPOOLSIZE on UNIX and Windows
systems. Because the upper limit is small (10% of real memory), the self-tuning
algorithm might not achieve the optimal size. Monitor the buffer pool hit ratio and
tune manually if necessary.
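The adjustment rule described above can be sketched as follows. This is a simplified model for illustration only, assuming the UNIX cap of 10% of physical memory; the function name and units are hypothetical:

```python
def self_tune_bufpool(current_kb, hit_ratio_pct, phys_mem_kb,
                      target_pct=98.0, cap_fraction=0.10):
    """If the cache hit ratio is below the 98% target, grow the buffer
    pool by half the shortfall (in percentage points). Skip the increase
    entirely if the result would exceed 10% of physical memory."""
    if hit_ratio_pct >= target_pct:
        return current_kb
    increase_pct = (target_pct - hit_ratio_pct) / 2.0
    proposed = int(current_kb * (1 + increase_pct / 100.0))
    cap = int(phys_mem_kb * cap_fraction)
    # Platform check: no adjustment if the proposed size exceeds the cap.
    return current_kb if proposed > cap else proposed
```

For example, a 100 MB pool with a 90% hit ratio grows by 4% (half of the 8-point shortfall), unless that would exceed the memory cap.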

| You must have sufficient physical memory. Lack of sufficient physical memory
| means that increased paging occurs. The self-tuning process gradually raises the
| buffer pool size to an appropriate value. Set the initial size according to the
| guidelines in “BUFPOOLSIZE” on page 3, and ensure that the buffer pool size is
| appropriate each time the server starts.
| Recommendation
| SELFTUNEBUFPOOLSIZE No
|

TCPWINDOWSIZE
| The TCPWINDOWSIZE server option specifies the amount of receive data in
| kilobytes that can be in-transit at one time on a TCP/IP connection. The
| TCPWINDOWSIZE server option applies to backups and archives. The
| TCPWINDOWSIZE client option applies to restores and retrieves.

To improve backup performance, increase the TCP receive window on the Tivoli
Storage Manager server. To improve restore performance, increase the TCP receive
window on the client.

The sending host cannot send more data until an acknowledgement and TCP
receive window update are received. Each TCP packet contains the advertised TCP
receive window on the connection. A larger window allows the sender to continue
sending data, and might improve communication performance, especially on fast
networks with high latency.

The TCPWINDOWSIZE option overrides the operating system's TCP send and
receive spaces. In AIX, for instance, these parameters are tcp_sendspace and
tcp_recvspace, and are set as options of the no command. For Tivoli Storage
Manager, the default is 63 KB, and the maximum is 2048 KB. Specifying
TCPWINDOWSIZE 0 causes Tivoli Storage Manager to use the operating system
default. This is not recommended because the optimal setting for Tivoli Storage
Manager might not be the same as the optimal setting for other applications.

The TCPWINDOWSIZE option specifies the size of the TCP sliding window for all
clients and all servers. On the server, this applies to all sessions. Therefore, raising
TCPWINDOWSIZE can increase memory usage significantly when there are
multiple concurrent sessions. A larger window size can improve communication
performance, but uses more memory. It enables multiple frames to be sent before
an acknowledgment is received from the receiver. If long transmission delays are
being observed, increasing the TCPWINDOWSIZE might improve throughput.
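The reasoning above is the bandwidth-delay product: to keep a link busy, the window must cover the data that can be in flight during one round trip. A minimal sketch (the function name is hypothetical; Tivoli Storage Manager performs no such calculation itself):

```python
def min_window_kb(bandwidth_mbit_per_s, rtt_ms):
    """Bandwidth-delay product: the smallest TCP window, in KB, that
    keeps a link of the given speed busy at the given round-trip time.
    Windows above 64 KB-1 additionally require RFC 1323 window scaling."""
    bits_in_flight = bandwidth_mbit_per_s * 1_000_000 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1024
```

For example, a Gigabit Ethernet link with a 1 ms round trip needs roughly a 122 KB window, well above the 63 KB default, while a 100 Mbit link at 5 ms fits within the default.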

For all platforms, rfc1323 must be set to have window sizes larger than 64 KB-1:
v AIX: Use no -o rfc1323=1
v HP-UX: Using a window size greater than 64 KB-1 automatically enables large
window support.
v Sun Solaris 10: Use ndd -set /dev/tcp tcp_wscale_always 1. This should be
enabled by default.
v Linux®: Should be on by default for recent kernel levels. Check with cat
/proc/sys/net/ipv4/tcp_window_scaling. Recent Linux kernels use autotuning,
and changing TCP values might have a negative effect on autotuning. Make
changes with caution.
v Windows XP and 2003: Add or modify, with regedit, the following registry
name/value pair under HKEY_LOCAL_MACHINE\SYSTEM\
CurrentControlSet\Services\Tcpip\Parameters: Tcp1323Opts, REG_DWORD, 3

Attention: Mistakes with regedit can have very serious consequences that are
difficult to correct. You are strongly encouraged to back up the entire registry
before you start.

Recommendations
TCPWINDOWSIZE 63
TCPWINDOWSIZE 128 (Gigabit Ethernet with Jumbo Frames –
9000 MTU)

| Note: This option is also valid on the Tivoli Storage Manager client.

TXNGROUPMAX
| The TXNGROUPMAX server option specifies the number of files transferred as a
| group between commit points. This parameter is used in conjunction with the
| TXNBYTELIMIT client option.

| It is possible to affect the performance of client backup, archive, restore, and
| retrieve operations by using a larger value for this option:
1. If you increase the value of the TXNGROUPMAX option by a large amount,
watch for possible effects on the recovery log. A larger value for the
TXNGROUPMAX option can result in increased utilization of the recovery log,
as well as an increased length of time for a transaction to commit. If the effects
are severe enough, they can lead to problems with operation of the server. For
more information on managing the recovery log, see the Administrator’s Guide.
2. Increasing the value of the TXNGROUPMAX option can improve throughput
for operations storing data directly to tape, especially when storing a large
number of objects. However, a larger value of the TXNGROUPMAX option can
also increase the number of objects that must be resent in the case where the
transaction is aborted because an input file changed during backup, or because
a new storage volume was required. The larger the value of the
TXNGROUPMAX option, the more data must be resent.
3. Increasing the TXNGROUPMAX value affects the responsiveness of stopping
the operation, and the client might have to wait longer for the transaction to
complete.

| You can override the value of this option for individual client nodes. See the
| TXNGROUPMAX parameter in the REGISTER NODE and UPDATE NODE
| commands.

| This option is related to the TXNBYTELIMIT option in the client options file.
| TXNBYTELIMIT controls the number of bytes, as opposed to the number of
| objects, that are transferred between transaction commit points. At the completion
| of transferring an object, the client commits the transaction if the number of bytes
| transferred during the transaction reaches or exceeds the value of TXNBYTELIMIT,
| regardless of the number of objects transferred.
| v Check the recovery log percent utilization to ensure there is enough space. Issue
| the query log format=detailed command, and check the Pct Util column. It is
| best to ensure that this value rarely exceeds 80%. If the recovery log becomes
| full, session or process failures might occur.
| v Increasing the number of files per transaction might result in more data being
| retransmitted if a retry occurs. This might negatively affect performance.
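The interplay between the two limits can be sketched as follows. This is an illustrative model only, not client code; the function name is hypothetical, and sizes are in kilobytes to match TXNBYTELIMIT:

```python
def commit_points(object_sizes_kb, txn_group_max=256, txn_byte_limit_kb=25600):
    """Count transaction commits for a stream of stored objects: the
    client commits when the transaction reaches TXNGROUPMAX objects or
    TXNBYTELIMIT kilobytes, whichever comes first. The byte check is
    made at the completion of each object."""
    commits, objects, kb = 0, 0, 0
    for size in object_sizes_kb:
        objects += 1
        kb += size
        if objects >= txn_group_max or kb >= txn_byte_limit_kb:
            commits += 1
            objects, kb = 0, 0
    if objects:
        commits += 1  # final partial transaction
    return commits
```

With the recommended settings, 512 one-kilobyte files commit twice (on the object count), while three 10 MB files commit once (on the byte limit). Fewer commits mean fewer tape buffer flushes when backing up directly to tape.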

Lab tests have shown that settings higher than 4096 give no benefit. Set
TXNGROUPMAX to 256 in your server options file. If some clients have small files
and go straight to a tape storage pool, then consider raising TXNGROUPMAX to
higher values (up to 4096) for those clients only.

Recommendations
TXNGROUPMAX 256 in server options files, higher values
(max 4096) for only those clients with small
files and direct to tape backup

| Performance recommendations for all server platforms


| There are some actions that you can take on most or all platforms to improve
| server performance.

| The following recommendations can help you optimize Tivoli Storage Manager
| server performance for your environment.
| v Formatting disk storage pool volumes that are placed in the same file system
| with parallel running dsmfmt processes can cause the storage pool volumes to
| be highly fragmented. For disk storage pool volumes in the same file system, do
| not issue multiple dsmfmt or DEFINE VOLUME commands simultaneously.
| Format disk storage pool volumes sequentially, one at a time, if they are placed
| into the same file system. This should create files that have only a few gaps and
| will improve sequential read/write performance.
| v To avoid disk I/O contention between database, recovery log and disk storage
| pool volumes, use separate physical disks for database, recovery log, and disk
| storage pool.
| v Place the Tivoli Storage Manager server database volumes on the fastest disks. If
| write cache exists for the disk adapter and that adapter has attached disks that
| contain only database volumes (no storage pool volumes), then enable the write
| cache for the best database performance.
| v Set the TXNGROUPMAX server option to 256 and the TXNBYTELIMIT client
| option to 25600 to maximize the transaction size. A larger transaction size
| increases the size of the server file aggregates. File aggregation provides
| throughput improvements for many server data movement and inventory
| functions, such as storage pool migration, storage pool backup, and inventory
| expiration. A larger transaction size when using backup directly to tape reduces
| the number of tape buffer flushes and therefore can improve throughput.

Database and recovery log performance


The Tivoli Storage Manager database, recovery log, and storage pools have
different I/O behaviors and must be considered separately for tuning.

| The access pattern for the database is random for most operations. Because of this,
| it is best to use the fastest available disk to support the database operations. Also
| ensure that write caching is enabled on the disk volumes that the database and log
| reside on, but only if the cache is non-volatile and can survive unexpected power
| outages and other failure modes.

During BACKUP DB, the access pattern for the database is sequential, with the
database volumes being read one at a time, from beginning to end. The volumes
are read in the order they are defined to the Tivoli Storage Manager server.

During normal operations, when a database volume is first used by the server, it is
filled to end-of-volume before going on to the next volume. For a newly deployed
server, the I/O activity tends to be localized within the database and usually a
small number of volumes. Over time, the I/O activity tends to spread across the
database and across more volumes. Database reorganization moves the data onto
fewer volumes. However, database reorganization does not always lead to better
performance.

It is useful to gauge the performance of a database volume by measuring the I/O
per second (IOPS) rate. This works best if only one database volume is allocated
per physical volume. A typical Fibre Channel attached disk can handle roughly 150
IOPS before queuing occurs. Some disks, such as SATA attached disks, handle
fewer IOPS, and some disks might be able to handle more. If a physical disk
approaches the 150 IOPS mark, it is time to add additional physical disks (and
additional database volumes) to the database configuration. Operating system tools
(such as iostat or filemon) are available to measure IOPS for physical volumes. The
Tivoli Storage Manager server instrumentation can be used to measure IOPS for a
database volume.
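The check described above is simple enough to automate. A trivial sketch, assuming IOPS measurements have already been collected with iostat, filemon, or server instrumentation (the function name and volume names are hypothetical, and the 150 IOPS threshold is a rule of thumb, not a hardware constant):

```python
def needs_more_disks(iops_by_physical_volume, threshold=150):
    """Return the physical volumes whose measured IOPS meet or exceed
    the approximate queuing point, flagging candidates for spreading
    database volumes over additional disks."""
    return [vol for vol, iops in iops_by_physical_volume.items()
            if iops >= threshold]
```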

| The Tivoli Storage Manager server issues I/Os to the database volumes in batches
| of up to 64 4 KB blocks for all operations except BACKUP DB. For database
| backup operations, it issues 256 KB block reads. On Windows, direct I/O is always
| used for database operations. On UNIX, direct I/O is never used for database
| operations.

It is best to use multiple volumes for the database. It is best to place each database
volume on a separate physical volume. For RAID configurations, it is best to limit
the number of database volumes to the number of physical volumes in the RAID
array. The Tivoli Storage Manager server is able to spread I/O workload over
several volumes in parallel, which increases read and write performance.
Consequently, a larger number of smaller volumes is better than a smaller number
of larger disks with the same rotation speed. It is best to define the database with
4 to 16 volumes.

| The access pattern for the recovery log is always sequential and almost always
| write. The block size is always 4 KB. Tivoli Storage Manager on Windows always
| uses direct I/O for recovery log operations. On UNIX, direct I/O is never used for
| recovery log operations. The log is a good candidate for RAID0 with Tivoli Storage
| Manager mirroring or RAID1 without Tivoli Storage Manager mirroring. Physical
| placement on the underlying disk is very important. It is best to isolate the log
| from the database and from the disk storage pools. If this cannot be done, then
| place the log with storage pools and not with the database. The number of log
| volumes is not important but having two volumes might be desirable for ease of
| maintenance.

Database and log mirroring provides higher reliability, but comes at a cost in
performance (especially with sequential mirroring). To minimize the impact of
database write activity, use disk subsystems with non-volatile write caching ability.
Use of disk adaptors with write cache can nearly double (or more) the
performance of database write activity (when parallel mirroring). This is true even
if there appears to be plenty of bandwidth left on the database disks. Set the
MIRRORWRITE server option to DB PARALLEL, and use database page
shadowing to reduce overhead of mirrored writes.

Backup performance
When possible, limit the number of versions of any backup file to the minimum
required.

File backup performance is degraded when there are many versions of an object.
Use the DEFINE COPYGROUP command and modify the VEREXISTS parameter
to control the number of versions, or use the UPDATE COPYGROUP command.
The default number of backup versions is 2.

| If the retention requirements in your environment differ among client machines,
| use different copy groups rather than taking the lowest common denominator. For
| example, if your accounting machines require records to be kept for seven years,
| but other machines need data kept for only two years, do not specify seven for all.
| Instead, create two separate copy groups. Not only are backups potentially faster,
| but you also consume less storage because you are not keeping data that you do
| not need.

Disaster recovery performance


Using export and import operations for disaster recovery is not recommended.

Tivoli Storage Manager provides procedures for backing up and restoring the
storage pools for disaster recovery. The performance of storage pool backup and
recovery for disaster recovery is superior to the performance of export and import.

Cached disk storage pools


| Using cached disk storage pools can increase restore performance by avoiding tape
| mounts. Cached disk storage pool does not refer to cache in the disk hardware, or
| file system cache in the operating system.

The benefit of using cached disk storage pools is seen for restoring files that were
recently backed up. If the disk pool is large enough to hold a day’s worth of data,
then caching is a good option. Use the DEFINE STGPOOL or UPDATE STGPOOL
command with the CACHE=YES parameter to enable caching. However, when the
storage pool is large and CACHE is set to YES, the storage pool might become
fragmented and response will suffer. If this condition is suspected, our
recommendation is to turn disk storage pool caching off. Also, disk caching can
affect backup throughput because database updates are required to delete cached
files in order to create space for the backup files.

| Clearing cached files


| To improve backup performance, turn disk storage pool caching off.

| To clear your cached files, perform this procedure:


| 1. Issue the QUERY STGPOOL command with the FORMAT parameter set to
| DETAILED.
| 2. Verify that the Percent Migratable entry is 0.
| 3. Issue the UPDATE STGPOOL command to turn off caching.
| 4. Issue the MOVE DATA command without the STGPOOL parameter. Files are
| moved to other volumes within the same storage pool, and the cached files are
| deleted.

Tuning storage pool migration
You can improve performance by tuning storage pool migration processes and
thresholds.

Tuning migration processes


Use the DEFINE STGPOOL command and modify the MIGPROCESS parameter to
control the number of migration processes that are used for migrating files from a
disk storage pool to a tape storage pool.

When data is migrated from disk to tape, multiple processes can be used if
multiple tape drives are available. In some cases, this can improve the time to
empty the disk storage volumes, since each migration process works on data for
different client nodes. For the MIGPROCESS option, you can specify an integer
from 1-999, inclusive, but it should not exceed the number of tape drives available.
The default value is 1. You can also use the UPDATE STGPOOL command to
modify the number of migration processes.

Tuning migration thresholds


Use the DEFINE STGPOOL command and modify the HIGHMIG and LOWMIG
parameters to tune migration thresholds. If the thresholds are set too high,
migration is delayed.

This can cause the disk storage pool to fill, and when a client attempts to send
data to the disk storage pool, it sees the full condition and attempts to go to the
volume indicated at the next level in the storage hierarchy. If this is a tape pool,
then all drives might be in use by a migration process, in which case the client
session waits on the tape media to be freed by the migration process. The client
then sits idle. In this case, the migration thresholds should be lowered so migration
starts earlier, or more disk space should be allocated to the disk storage pool. You
can also use the UPDATE STGPOOL command to modify the migration thresholds.
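The threshold behavior is a hysteresis: migration begins when pool utilization reaches HIGHMIG and continues until it drops to LOWMIG. A minimal sketch for illustration (the function name is hypothetical; 90 and 70 are the server defaults for HIGHMIG and LOWMIG):

```python
def migration_active(pct_utilization, migrating, highmig=90, lowmig=70):
    """Hysteresis model of the HIGHMIG/LOWMIG thresholds: migration
    starts when pool utilization reaches HIGHMIG and, once started,
    continues until utilization falls to LOWMIG."""
    if not migrating:
        return pct_utilization >= highmig
    return pct_utilization > lowmig
```

Lowering HIGHMIG makes migration start earlier, which reduces the chance that a full disk pool forces client sessions to wait on tape drives already in use by migration.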

| Collocation by group
| If you are using collocation by node or filespace on the target storage pools of a
| migration, consider using collocate by group instead.

| Collocating by group can greatly reduce the number of tape mounts during
| migration. Instead of one tape mount per node or filespace there is just one mount
| for an entire group, unless the amount of data to be migrated for that group fills
| the volume.

Searching the server activity log


When performance problems occur, an abnormal system condition might be the
cause. You can often determine the cause of these problems by examining server
activity logs, client error files, or the appropriate system logs for your operating
system.

Scheduling sessions and processes
If possible, schedule server processes such as expiration, storage pool backup,
move data, export and import operations, and so on, when client backups are not
active.

| Tivoli Storage Manager throughput can degrade if all client backups are started
| simultaneously. It is better to spread out backup session start-ups over time.
| Create several client schedules with staggered start times and assign nodes to
| those schedules appropriately. For nodes that use the client polling method of
| scheduling, use the SET RANDOMIZE command to spread out the node startup
| times.

LAN-free backup
| Using LAN-free backup can improve performance. To do so requires the Tivoli
| Storage Manager storage agent on the client for LAN-free backups to SAN
| attached tape, and Tivoli SANergy® if backups are to be sent to FILE volumes on
| SAN-attached disk.
v Back up and restore to tape or disk using the SAN. The advantages are:
| – Metadata is sent to the server using the LAN while client data is sent over
| the SAN.
– Frees the Tivoli Storage Manager server from handling data leading to better
scalability.
– Potentially faster than LAN backup and restore.
– Better for large file workloads, databases (Data Protection).
– Small file workloads have bottlenecks other than data movement.
v Ensure that there are sufficient data paths to tape drives.
v Do not use LAN-free backup if you bundle more than 20 separate dsmc
commands in a script.
– dsmc start/stop overhead is higher due to tape mounts.
– Use the new file list feature to back up a list of files.

Storage agent tuning


There are steps you can take to get the best performance from your Tivoli Storage
Manager storage agents.

To get the best performance consider the following items when you set options for
storage agents:
v The storage agent has its own configuration file, dsmsta.opt, containing many of
the same options as the server dsmserv.opt. In general, use the same settings as
recommended for the server.
v You can use the DSMSTA SETSTORAGESERVER command for some options.
| v Ensure TCPNODELAY is set to YES (the default) on both the storage agent and
| server. Use the option LANFREECOMMMETHOD SHAREDMEM in the client
| options file on client platforms that support it to obtain the lowest CPU usage.

AIX: vmo and ioo commands
You can use the vmo and ioo commands to improve the performance of an AIX
server.

The AIX Virtual Address space is managed by the Virtual Memory Manager
(VMM). VMM is administered by the vmo AIX command; I/O that can be tuned is
controlled by the ioo command. In AIX 5.3 and later, the vmo and ioo commands
replace vmtune.
v The vmo command is used to tune the AIX virtual memory system.
v The vmo and ioo options can be displayed using the vmo -a and ioo -a
commands.
v You can change options by specifying the appropriate option and value.
v Changes to vmo parameters do not survive reboot unless the -p option is
specified.

Read ahead (ioo maxpgahead)


v When AIX detects sequential file reading is occurring, it can read ahead even
though the application has not yet requested the data.
v Read ahead improves sequential read performance on JFS and JFS2 file systems.
v Tivoli Storage Manager client - Improves large file backup throughput.
v Tivoli Storage Manager server - Improves storage pool migration throughput on
JFS volumes only (does not apply to JFS2 or raw logical volumes).
v The recommended setting of maxpgahead is 256 for both JFS and JFS2:
ioo -p -o maxpgahead=256 -o j2_maxPageReadAhead=256
v When altering the read ahead parameter, you must also alter the maxfree
parameter so that there is enough free memory to store the read ahead data.
v The following equation must be true:
minfree + maxpgahead ≤ maxfree

To calculate minfree and maxfree use these formulae:


– minfree = 120 multiplied by the number of processors (or default if larger)
– maxfree = 120 + maxpgahead (or j2_maxPageReadAhead) multiplied by the
number of processors (or the default if larger)
This does not improve read performance on raw logical volumes or JFS2
volumes on the Tivoli Storage Manager server. The server uses direct I/O on
JFS2 file systems.
v Using raw logical volumes for the server can cut CPU consumption, but doing
so might be slower during storage pool migration due to lack of read ahead.
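The minfree/maxfree formulas above can be checked arithmetically. A minimal sketch (the function name is hypothetical, and the "or default if larger" clause is omitted for simplicity):

```python
def read_ahead_tuning(num_cpus, maxpgahead=256):
    """Apply the guideline formulas quoted above:
    minfree = 120 * number of processors and
    maxfree = (120 + maxpgahead) * number of processors.
    The resulting pair always satisfies minfree + maxpgahead <= maxfree."""
    minfree = 120 * num_cpus
    maxfree = (120 + maxpgahead) * num_cpus
    assert minfree + maxpgahead <= maxfree
    return minfree, maxfree
```

For a 4-processor system with maxpgahead=256, this yields minfree=480 and maxfree=1504, leaving enough free memory to hold the read-ahead data.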

Recommendation
maxpgahead 256 (maximum)

AIX file cache (vmo minperm/maxperm)


v Determines how much memory AIX sets aside for file system cache.
v AIX can page out application memory (for example the Tivoli Storage Manager
application) in favor of caching file system data. This can cause paging of the
database buffer pool leading to slow database performance.
v Paging of the database buffer pool can cause database cache hit statistics to be
overly optimistic.

v Tivoli Storage Manager server does not benefit greatly from file system cache.
v Lowering maxperm causes AIX to retain more application memory.
v Stop Tivoli Storage Manager server virtual memory paging by modifying the
minperm/maxperm parameters. There are two exceptions: RAM-constrained
systems and when the database buffer pool size is too large. See
“BUFPOOLSIZE” on page 3 for the proper settings.
v A good starting point is setting aside a maximum of 50% (vmo -p -o
maxperm%=50) for file system caching, instead of the default of 80%. Lower file
system caching further if 50% is not effective; this change can be made in real
time. As maxperm approaches minperm, consider lowering minperm as well.
Watch vmstat for progress: if pageouts go to zero, pageins eventually lower as
well.

UNIX file systems and raw logical volumes


| When raw logical volumes are used, I/O bypasses the virtual memory (file
| system cache) layer of the operating system. This can result in lower CPU
| utilization for each disk I/O. On the other hand, when reading from server disk
| volumes, raw logical volumes do not use the read ahead mechanism of the
| operating system. This can result in poorer performance on restores and server
| move operations from disk to tape.

In general, there is a performance improvement when the recovery log or
database volumes are put on raw logical volumes. If you choose to use raw logical
volumes instead of file systems, consider the following warnings:
v Do not use operating system facilities to mirror raw logical volumes that are
used by Tivoli Storage Manager. Instead, use the Tivoli Storage Manager
mirroring facilities. The operating system puts information in a control block on
disk that can overwrite Tivoli Storage Manager control information.
v Do not change the size of raw logical volumes except by using Tivoli Storage
Manager facilities. Refer to the Tivoli Storage Manager Administrator's Reference
for details about the EXTEND DB and EXTEND LOG commands.
v Be careful not to start multiple instances of the server that might use the same
raw logical volumes. Tivoli Storage Manager implements a locking mechanism
that is designed to prevent the overwrite of volumes by another server instance.
However, it is best to name raw logical volumes in order to prevent overwrite.

Solaris direct I/O should be used if not using raw logical volumes. For AIX, if you
choose not to use raw logical volumes, direct I/O should always be used for JFS2
file systems. For JFS, direct I/O might cause degradation, especially with
large-file-enabled file systems.

Performance recommendations by server platform


The recommended values for some server options vary depending on the platform.

AIX server
There are a number of actions that can improve performance for a Tivoli Storage
Manager server running in an AIX environment.
v Use raw partitions for server database, recovery log, and disk storage pool
volumes on the AIX platform. Customer experience and measurements in the lab
show that raw logical volumes offer better client backup/restore throughput and
server administrative process performance.
v If you choose not to use raw logical volumes, direct I/O should always be used
for JFS2 file systems. For JFS file systems, direct I/O might cause degradation,
especially with large-file enabled file systems. JFS2 file systems generally
provide better performance than JFS file systems.
v If you have new generation SP™ nodes, set the TCPWINDOWSIZE to 128 for
both the SP client and the SP AIX server. This is especially true if you have an
ATM card in the SP machine. On the newer and faster SP machines (and faster
HPS), TCPWINDOWSIZE 128 seems to work well.

HP-UX server
Use raw partitions for disk storage pools on an HP-UX Tivoli Storage Manager
server.

Raw partitions are recommended because measurements in the lab show that raw
partition volumes offer better backup/restore throughput than VXFS volumes on
HP-UX. However, for data integrity and recoverability, the database and recovery
log volumes should be allocated in file systems.

Linux server
Disable any unneeded daemons (services).

Most enterprise distributions come with a great many features. However, most of
the time only a small subset of these features is used. For example, TCP/IP data
movement can be blocked or slowed down significantly by the internal firewall in
SUSE 9 x86_64. It can be stopped with /etc/init.d/SuSEfirewall2_setup stop.

Sun Solaris server


There are a number of actions that can improve performance for a Tivoli Storage
Manager server running in a Sun Solaris environment.
v Use raw partitions for server database, recovery log, and disk storage pool
volumes on the Solaris platform. Raw logical volumes offer better client backup
and restore throughput and server administrative process performance than UFS
or Veritas file system volumes.
v The VxFS file system with direct I/O offers I/O characteristics similar to raw
I/O on raw partitions. By switching to raw or direct I/O and by giving enough
memory, a much larger working set of data can be cached, and a much higher
cache hit rate can be sustained with obvious performance benefits. When
physical I/O is required, the CPU cost of performing that I/O is reduced
because the data is not first copied to file system buffers. We therefore
recommend using VxFS file systems mounted with the direct I/O option
(convosync=direct).
v When UFS file system volumes are used, mount these file systems using the
forcedirectio flag. If the file system is mounted using forcedirectio, then data is
transferred directly between user address space and the disk. If the file system
is mounted using noforcedirectio, then data is buffered in kernel address space
when data is transferred between user address space and the disk. Forcedirectio
is a performance option that benefits only large sequential data transfers. The
default behavior is noforcedirectio.

Chapter 2. IBM Tivoli Storage Manager server performance tuning 17

Windows server
There are a number of actions that can improve performance for a Tivoli Storage
Manager server running in a Windows environment.
v Using the NTFS file system on the disks used by the Tivoli Storage Manager
server is most often the best choice. These disks include the recovery log,
database, and storage pool volumes. Although there is generally no performance
difference for Tivoli Storage Manager functions between using NTFS and FAT on
these disks, NTFS has the following advantages:
– Support for larger disk partitions
– Better data recovery
– Better file security
– Formatting storage pool volumes on NTFS is much faster
v NTFS file compression should not be used on disk volumes that are used by the
Tivoli Storage Manager server, because of the potential for performance
degradation.
v For optimal Tivoli Storage Manager for Windows server performance with
respect to Windows real memory usage, use the server property setting for
Maximize Throughput for Network Applications. This setting gives priority to
application requests for memory over requests from the Cache Manager for file
system cache. This setting will make the most difference in performance on
systems that are memory constrained.
v Use the server and client option TCPWINDOWSIZE to improve network
throughput during backup and restore and archive and retrieve operations. For
Windows 2000 and XP servers that are communicating exclusively with other
Windows 2000 and XP or UNIX systems, a TCPWINDOWSIZE greater than 63
might be useful.
v For optimal backup and restore performance when using a local client on a
Windows system, use the shared memory communication method. This is done
by including the COMMMETHOD SHAREDMEM option in the server options
file and in the client options file.
v Miscellaneous Tivoli Storage Manager client and server issues:
– Anti-virus software can negatively impact backup performance.
– Disable or do not install unused services.
– Disable or do not install unused network protocols.
– Give preference to background application performance.
– Avoid screen savers.
– Make sure the paging file is not fragmented.
– Keep device drivers updated, especially for new hardware.
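As an illustration of the shared memory method described above, the same option is added to both options files (a sketch; file names and locations vary by installation):

```
* dsmserv.opt (server options file)
COMMMETHOD SHAREDMEM

* dsm.opt (client options file)
COMMMETHOD SHAREDMEM
```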

z/OS server
There are a number of actions that can improve performance for a Tivoli Storage
Manager server running in a z/OS environment.
v Place the server database volumes on MVS™ volumes with the fastest I/O
response times. Generally, this means avoiding volumes with other datasets that
might have significant I/O activity. Remember that performance and space
utilization involve trade-offs, and it might be advisable to leave a volume with a
Tivoli Storage Manager database free of other datasets.
v Place the server recovery log volumes on volumes with similar I/O response
times as the database volumes. If this is not possible, then use slower volumes
for the recovery log datasets.
v Place the server disk storage pool volumes on MVS volumes with the fastest I/O
throughput. If necessary, use the VSAM extended data set striping capabilities to
spread a single storage pool volume over multiple (2 to 4) volumes. Using
volumes on different disk storage subsystem RAID ranks will help achieve
optimal throughput.
v Avoid allocating more than one database, recovery log, or storage pool volume
on a single volume. If this must be done, try to provide enough PAVs to avoid
volume access serialization.
v The Tivoli Storage Manager server provides the capability to use VSAM volumes
larger than 4 GB. Utilizing this capability might provide better performance by
reducing the number of server database, recovery log, and disk storage pool
volumes.
v A region size of 512 MB (REGION=512M) or larger is recommended for starting
the Tivoli Storage Manager server. However, this value can be decreased or
increased based on server workload. To determine the optimum value for your
installation, monitor server operation and set the region size according to the
results.
v A region size of 0M (REGION=0M) is not recommended. Specifying 0M will
result in poor server performance.
v If the specified region is too small, server performance can be significantly
impacted, especially during periods of high server activity. For example, the
operating system GETMAIN and FREEMAIN processing can have a major
impact on the performance of transaction-oriented applications such as Tivoli
Storage Manager. To eliminate or minimize GETMAIN and FREEMAIN calls, the
server uses its own method for satisfying storage requests. The ability of the
server to avoid calls to the GETMAIN and FREEMAIN procedures is highly
dependent on adequate region size for the workload.
v When increasing the region size, use 128 MB increments until the desired
performance is achieved, or until it is clear that the additional storage is not
yielding improved performance. Once this occurs, you can reduce the region size
in small increments (for example, 32 MB), to determine the optimum region size
for your workload. It is important that performance be monitored for a period of
time and over periods of high activity.
v VSAM I/O pages can be fixed to allow a faster throughput of data for
operations involving the database, the recovery log, and all storage pool
volumes. Specifying a size for the VSAM pages can significantly reduce the
locking and unlocking of VSAM pages while operations are occurring to and
from disk storage pool volumes.

Estimating throughput
There are procedures for estimating the throughput rate of Tivoli Storage Manager
in both tested and untested environments.

Estimating throughput rate for average workloads


You can estimate the Tivoli Storage Manager throughput rate for workloads with
average file sizes based on tested server and client environments.

The Tivoli Storage Manager performance lab has conducted performance
evaluations for some combinations of server and client platforms and with various
processor, disk, and tape combinations. The results of these evaluations are not
generally available to the public. However, results can be provided for specific
situations. Contact your IBM representative if you are interested in specific results.

It is easy to estimate the throughput for workloads with average file sizes different
from those that were tested in the performance lab. However, the overall Tivoli
Storage Manager environment must conform to one of the environments in one of
our evaluation reports.
1. Find the table in an evaluation report for the Tivoli Storage Manager function
and environment that matches your specific requirements.
2. Determine the average file size of the client workload for which the estimate is
to be made. The following formulas call this value EstFileSize and expect this
value in KB. Then apply one of the following rules:
v If the average file size is greater than 256 MB, use the throughput in
KB/second or GB/hour for the 256 MB workload.
v If the average file size is less than 1 KB, use the throughput in KB/second
for the 1 KB workload. Throughput is effectively limited by the number of
files that can be processed in a given amount of time. Calculate the
throughput for the estimated file size in KB per second using the following
formula:

Throughput(KB/sec) = 1KBThroughput(KB/sec) * EstFileSize(KB)

Convert KB per second to GB per hour using the following formula:

Throughput(GB/hr) = Throughput(KB/sec) * 3600 / 1048576


3. If the average file size is greater than 1 KB and less than 256 MB, then calculate
the throughput using the two known measurement points in the table that
most closely bound the estimate point.
a. Obtain the following values from the table for the function and
environment of interest:

LowerFileSize - average file size in KB at the lower measurement point


LowerThroughput - throughput in KB/s at the lower measurement point
UpperFileSize - average file size in KB at the upper measurement point
UpperThroughput - throughput in KB/s at the upper measurement point
b. Calculate the throughput in KB per second using the following formulas:

A = log(UpperThroughput(KB/sec)/LowerThroughput(KB/sec))
B = log(EstFileSize(KB)/LowerFileSize(KB))
C = log(UpperFileSize(KB)/LowerFileSize(KB))
D = log(LowerThroughput(KB/sec))

Throughput(KB/sec)= 10 ** ((A * B / C)+ D)

All references to log in the above formulas imply log base 10.
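The estimation procedure above can be sketched in Python (an illustration, not part of the product; function names and the sample measurement points are made up):

```python
import math

def estimate_throughput_kb_sec(est_file_size_kb,
                               lower_file_size_kb, lower_throughput_kb_sec,
                               upper_file_size_kb, upper_throughput_kb_sec):
    """Step 3: log-log interpolation between the two bounding points."""
    a = math.log10(upper_throughput_kb_sec / lower_throughput_kb_sec)
    b = math.log10(est_file_size_kb / lower_file_size_kb)
    c = math.log10(upper_file_size_kb / lower_file_size_kb)
    d = math.log10(lower_throughput_kb_sec)
    return 10 ** (a * b / c + d)

def small_file_throughput_kb_sec(one_kb_throughput_kb_sec, est_file_size_kb):
    """Rule for average file sizes under 1 KB: throughput scales with file size."""
    return one_kb_throughput_kb_sec * est_file_size_kb

def kb_sec_to_gb_hr(kb_sec):
    """Convert KB per second to GB per hour (1 GB = 1048576 KB)."""
    return kb_sec * 3600 / 1048576
```

For example, with hypothetical measured points of 200 KB/s at 4 KB and 2000 KB/s at 1024 KB, a 64 KB average file size interpolates to about 632 KB per second.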

Estimating throughput in other environments


You can estimate the Tivoli Storage Manager throughput rate for untested
environments.

Estimating throughput for environments that have not been directly tested can be
difficult. However, the following important observations can be made:
v Throughput over a network can be expected to reach saturation at around 80
percent of its rated capacity. Efficiency indicates the percentage of maximum
throughput rate that can realistically be achieved. This leads to the following
maximum throughputs that can be obtained for given networks:

Network             Rated speed    MB per second   GB per hour   % Efficiency
| Ethernet 10       10 Mb          1.0             5.7           90
Token ring 16       16 Mb          1.6             5.7           80
| Ethernet 100      100 Mb         10              17.6          90
FDDI                100 Mb         10              35.2          80
ATM 155             155 Mb         15.5            34.1          50
SP Switch           120                            265           50
T3                  45 Mb          4.48            15.8          80
T1                  1.54 Mb        0.16            0.56          80
| Gigabit Ethernet  1 Gb           100             tbd           80

v Throughput for backup and restore of small file workloads is basically
independent of network type, as long as the network remains unsaturated, and
propagation delays are not excessive due to intervening routers or switches.
| v Gigabit Ethernet performance is highly dependent on the quality of the Ethernet
| chipset and the type of bus used. In addition, taking advantage of certain
| chipset features, such as jumbo frames and other TCP offload features, can have
| a large impact on performance. Therefore, performance can vary widely. On
| some chipsets and machines only 25% efficiency may be possible while on
| others 90% is easily reached.
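The arithmetic behind most rows of the table above can be sketched as follows (an illustration; some rows, such as ATM and the Ethernet entries, reflect measured values and do not fit this formula exactly):

```python
def max_sustained_mb_sec(rated_megabits_sec, efficiency_pct):
    """Rated capacity in megabits per second (8 bits per byte), scaled by
    the efficiency percentage, gives the sustainable rate in MB per second."""
    return rated_megabits_sec / 8 * efficiency_pct / 100

def mb_sec_to_gb_hr(mb_sec):
    """Convert MB per second to GB per hour (1 GB = 1024 MB)."""
    return mb_sec * 3600 / 1024
```

For the FDDI row, 100 Mb at 80 percent efficiency gives 10 MB per second, or about 35.2 GB per hour.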

Tuning tape drive performance


There are a few basic procedures for maintaining the performance of your tape
drives.

Configuring enough tape drives

You should configure enough tape drives to support:
v The maximum number of Tivoli Storage Manager client sessions backing up
directly to tape at any time during the peak backup window.
v Additional tape drives for other functions that run during the backup window,
for example, storage pool migration, storage pool backup, and reclamation.

Cleaning tape drives

Cleaning the tape drive according to the manufacturer’s specifications is very
important to ensure maximum tape drive performance. Failure to clean tape drives
can cause read and write errors, drive failures, and generally poor performance.

Enabling tape compression

In many cases, enabling compression at the tape drive improves Tivoli Storage
Manager throughput. You can use the FORMAT option of the DEFINE DEVCLASS
command to specify the appropriate recording format to be used when writing
data to sequential access media. The default is DRIVE, which specifies that Tivoli
Storage Manager selects the highest format that can be supported by the sequential
access drive on which a volume is mounted. This setting usually allows the tape
control unit to perform compression.

Attention: Avoid specifying the DRIVE value when a mixture of devices are used
in the same library. For example, if you have drives that support recording formats
superior to other drives in a library, do not specify the FORMAT=DRIVE option.
Refer to the appropriate Tivoli Storage Manager Administrator’s Guide for more
information.

If you do not use compression at the client and your data is compressible, you
should achieve higher system throughput if you use compression at the tape
control unit. Refer to the appropriate Tivoli Storage Manager Administrator’s Guide
for more information concerning your specific tape drive. If you compress the data
at the client, we recommend that you not use compression at the tape drive;
otherwise, you might lose up to 10-12% of the tape media capacity.

Using collocation with tape drives


Using collocation can significantly improve the performance of restores for large
amounts of data with many files, because fewer tapes are searched for the
necessary files. Collocation also decreases the chance for media contention with
other clients. The trade-off is that more tapes are needed. Use the COLLOCATE
option on the DEFINE STGPOOL command or UPDATE STGPOOL command to
enable collocation.

| The default is collocation by group. Until node groups are defined, however, no
| collocation occurs. When node groups are defined, the server can collocate
| data based on these groups. Collocation by group can yield the following
| performance benefits:
| v Reduce unused tape capacity by allowing more collocated data on individual
| tapes
| v Minimize mounts of target volumes
| v Minimize database scanning and reduce tape passes for sequential-to-sequential
| transfer
| Collocation by group gives the best balance of restore performance versus tape
| volume efficiency. For those nodes where collocation is needed to improve restore
| performance, use collocation by group. Manage the number of nodes in the groups
| so that backup data for the entire group is spread over a manageable number of
| volumes. Where practical, a node can be moved from one collocation group to
| another by first changing the group affinity with the DEFINE
| NODEGROUPMEMBER command then using the MOVE NODEDATA command.
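For example, the sequence described above might look like the following administrative commands (a sketch; the pool, group, and node names are illustrative):

```
update stgpool tapepool collocate=group
define nodegroup payroll
define nodegroupmember payroll node1
move nodedata node1 fromstgpool=tapepool
```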

IBM LTO Ultrium tape drives
There are many factors that affect the sustained transfer rate of IBM LTO Ultrium
tape drives. The sustained transfer rate takes into account the net effect of all these
factors.

These factors include:


v Native transfer rate
v Compression ratio
v File size
v Server attachment
v Server attachment HBA type
v Disk transfer rate
v Network bandwidth
v Server utilization
v Start/stop performance
v Application control file activity
v Bus bandwidth
v Quality of the media

IBM LTO Ultrium streaming rate performance


Streaming rate is the rate at which a tape drive can read and write, not including
any start and stop operations. Most uses of tape do include some start and stop
operations, which slow down the sustained rate at which the drive operates.

The IBM LTO Ultrium tape drive has a native streaming data rate of up to 15 MB
per second, and up to 30 MB per second with 2:1 compression. For example, when
writing to a tape drive, normally the drive returns control to the application when
the data is in the tape drive buffer but before the data has been written to tape.
This mode of operation provides all tape drives a significant performance
improvement. However, the drive’s buffer is volatile. For the application to ensure
that the write makes it to tape, the application must flush the buffer. Flushing the
buffer causes the tape drive to back hitch (start/stop). The Tivoli Storage Manager
parameters TXNBYTELIMIT and TXNGROUPMAX control how frequently Tivoli
Storage Manager issues this buffer flush command.

When writing to a tape drive, you must also consider network bandwidth. For
example, a 100BaseT Ethernet LAN can sustain 5 to 6 MB per second. Therefore,
you cannot back up to LTO or any other tape drive faster than that.
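The point above amounts to a simple minimum, sketched here with illustrative function names (not part of the product):

```python
def effective_backup_mb_sec(drive_native_mb_sec, network_mb_sec):
    """The sustained backup rate to tape over a LAN is bounded by the
    slower of the drive's native rate and the network's sustainable rate."""
    return min(drive_native_mb_sec, network_mb_sec)

def hours_to_back_up(gigabytes, mb_sec):
    """Elapsed hours to move a given number of GB at a sustained MB/s rate."""
    return gigabytes * 1024 / mb_sec / 3600
```

At the 5.5 MB per second a 100BaseT LAN can sustain, a 15 MB per second drive is network-bound, and 100 GB takes roughly five hours.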

IBM LTO Ultrium performance recommendations


When you use LTO Ultrium drives with Tivoli Storage Manager, it is important to
use the appropriate server and client options to enhance performance.

Server option recommendations


TXNGROUPMAX 256
MOVESIZETHRESH 2048
MOVEBATCHSIZE 1000
Client option recommendation
TXNBYTELIMIT 2097152
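Assuming the usual option-file placement (server options in dsmserv.opt, client options in dsm.opt or dsm.sys), the recommendations above would be entered as:

```
* dsmserv.opt (server options file)
TXNGROUPMAX    256
MOVESIZETHRESH 2048
MOVEBATCHSIZE  1000

* dsm.opt or dsm.sys (client options file)
TXNBYTELIMIT   2097152
```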

If on average Tivoli Storage Manager clients have files smaller than 100 KB, it is
recommended that these clients back up to a disk storage pool for later migration
to tape. This allows more efficient data movement to tape.

When IBM 3580 LTO drives are used, ensure that the drive microcode is at the
most current level. Instructions for verifying the current LTO drive microcode
release and how to install the new release can be found in the IBM Ultrium Device
Drivers Installation and User’s Guide. Go to this Web site to find the manual:
ftp://ftp.software.ibm.com/storage/devdrvr/

Tuning disk performance


You can configure Tivoli Storage Manager disk storage to optimize performance.
v Server configuration – disk: Use as many independent physical disks as you
can afford to minimize I/O contention. Configure one Tivoli Storage Manager
volume per physical disk, or at most two. Separate the recovery log, database,
and disk storage pool volumes. Place Tivoli Storage Manager volumes at the
outside diameter of the physical disk. This gives better sequential throughput
and faster seek time. Using raw logical volumes on UNIX systems can cut CPU
consumption but might be slower during storage pool migrations due to lack of
read-ahead.
v Server configuration - disk write cache: Use disk subsystem/adapter write
cache for all RAID 5 arrays and physical disks with Tivoli Storage Manager
database volumes (random I/O).
v Choosing physical disks (JBOD) or RAID arrays: RAID requires more physical
disks for equivalent performance. Be sure to consider the write penalty of
RAID5 arrays. Write throughput is important during backup and archive. Tivoli
Storage Manager recovery log and database mirroring provides better
recoverability than hardware redundancy.

Busses
If your machine has multiple PCI busses, spread out high-throughput adaptors
among the different busses.

| For example, if you are going to do a lot of backups to disk, you probably do not
| want your network card and disk adaptor on the same PCI bus. Theoretical limits
| of busses are just that, theoretical, though you should be able to get close in most
| cases. As a general rule it is best to have only one or two tape drives per SCSI bus
| and one to four tape drives per fibre HBA.

| Note: Even if a given tape drive is slower than the fibre channel SAN being used,
| tape drive performance is usually better on the faster interfaces. This is
| because the individual blocks are transferred with lower latency, allowing
| Tivoli Storage Manager and the operating system to send the next block
| quicker. For example, an LTO 4 drive will perform better on a 4 Gbit SAN
| than a 2 Gbit SAN even though the drive is only capable of speeds under 2
| Gbit.

Chapter 3. IBM Tivoli Storage Manager client performance
tuning
You can tune the performance of Tivoli Storage Manager clients.

Some Tivoli Storage Manager client options can be tuned to improve performance.
Tivoli Storage Manager client options are specified in either the dsm.sys file or the
dsm.opt file.

COMPRESSION
The COMPRESSION client option specifies if compression is enabled on the Tivoli
Storage Manager client. For optimal backup and restore performance with a large
number of clients, consider using client compression.

Compressing the data on the client reduces demand on the network and the Tivoli
Storage Manager server. The reduced amount of data on the server continues to
provide performance benefits whenever this data is moved, such as for storage
pool migration and storage pool backup. However, client compression significantly
reduces the performance of each client, and the reduction is more pronounced on
the slowest client systems.

For optimal backup and restore performance when using fast clients and heavily
loaded network or server, use client compression. For optimal backup and restore
performance when using a slow client, or a lightly loaded network or server, do
not use compression. However, be sure to consider the trade-off of greater storage
requirements on the server when not using client compression. The default for the
COMPRESSION option is NO.

For maximum performance with a single fast client, fast network, and fast server,
turn compression off.

Two alternatives exist to using compression:


v If you are backing up to tape, and the tape drive supports its own compression,
use the tape drive compression. See “Tuning tape drive performance” for more
information.
v Do not use compression if a client has built-in file compression support.
Compression on these clients does reduce the amount of data backed up to the
server. NetWare and Windows have optional built-in file compression.

Compression can cause severe performance degradation when there are many
retries due to failed compression attempts. Compression fails when the compressed
file is larger than the original. The client detects this and will stop the compression,
fail the transaction and resend the entire transaction uncompressed. This occurs
because the file type is not suitable for compression or the file is already
compressed (zip files, tar files, and so on). Short of turning off compression, there
are two options you can use to reduce or eliminate retries due to compression:
v Use the COMPRESSALWAYS option. This option eliminates retries due to
compression.
v Use the EXCLUDE.COMPRESSION option in the client options file. This option
disables compression for specific files or sets of files (for example, zip files or jpg
files). Look in the client output (dsmsched.log) for files that are causing
compression retries and then filter those file types.

Recommendations
COMPRESSION    NO (single fast client, fast network, fast server)
               YES (multiple clients, slow network, slow server)

COMPRESSALWAYS
The COMPRESSALWAYS option specifies whether to continue compressing an
object if it grows during compression, or resend the object, uncompressed. This
option is used with the compression option.

The COMPRESSALWAYS option is used with the archive, incremental, and
selective commands. This option can also be defined on the server. If
COMPRESSALWAYS YES (the default) is specified, files continue compression even
if the file size increases. To stop compression if the file size grows, and resend the
file uncompressed, specify COMPRESSALWAYS NO. This option controls
compression only if your administrator specifies that your client node determines
the selection. To reduce the impact of retries, use COMPRESSALWAYS YES.

| It is better to identify common types of files that do not compress well and list
| these on one or more client option EXCLUDE.COMPRESSION statements. Files
| that contain large amounts of graphics, audio, or video files and files that are
| already encrypted do not compress well. Even files that seem to be mostly text
| data (for example, Microsoft Word documents) can contain a significant amount of
| graphic data that might cause the files to not compress well.

| Using Tivoli Storage Manager client compression and encryption for the same files
| is perfectly valid. The client first compresses the file data and then encrypts it, so
| that there is no loss in compression effectiveness due to the encryption, and
| encryption is faster if there is less data to encrypt. For example, to exclude objects
| that are already compressed or encrypted, enter the following statements:
| exclude.compression ?:\...\*.gif
| exclude.compression ?:\...\*.jpg
| exclude.compression ?:\...\*.zip
| exclude.compression ?:\...\*.mp3
| exclude.compression ?:\...\*.cab
| Recommendations
COMPRESSALWAYS Yes

COMMRESTARTDURATION and COMMRESTARTINTERVAL
The COMMRESTARTDURATION and COMMRESTARTINTERVAL options control
the restart window of time and interval between restarts.

To make clients more tolerant of network connectivity interruptions, use the
options COMMRESTARTDURATION and COMMRESTARTINTERVAL to control
the restart window of time and interval between restarts. These options assist in
environments which are subject to heavy network congestion or frequent
interruptions, and ease the manageability of large numbers of clients by reducing
intervention on error conditions. In a sense, performance improves when you
account for the time needed to detect, correct, and restart after an error condition.

Note: A scheduled event continues if the client reconnects with the server before
the COMMRESTARTDURATION value elapses, even if the event’s startup
window has elapsed.

Syntax
COMMRESTARTDURATION minutes
The maximum number of minutes you want the client to attempt to reconnect
with a server after a communication failure occurs. The range of values is
zero through 9999; the default is 60.
COMMRESTARTINTERVAL seconds
The number of seconds you want the client to wait between attempts to
reconnect with a server after a communication failure occurs. The range of
values is zero through 65535; the default is 15.

Recommendation
COMMRESTARTDURATION Tune to your environment.
COMMRESTARTINTERVAL Tune to your environment.
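For example, a client on a congestion-prone network might extend the defaults as follows (the values here are purely illustrative, not recommendations):

```
* dsm.opt (client options file)
COMMRESTARTDURATION 120
COMMRESTARTINTERVAL 30
```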

QUIET
The QUIET client option can prevent messages from being written to the screen
during Tivoli Storage Manager backups.

The default is the VERBOSE option, and Tivoli Storage Manager displays
information about each file it backs up. To prevent this, use the QUIET option.
However, messages and summary information are still written to the log files.
There are two main benefits to using the QUIET option:
v For tape backup, the first transaction group of data is always resent. To avoid
this, use the QUIET option to reduce retransmissions at the client.
v If you are using the client scheduler to schedule backups, using the QUIET
option dramatically reduces disk I/O overhead to the schedule log and
improves throughput.

Recommendation
QUIET

DISKBUFFSIZE
The DISKBUFFSIZE client option specifies the maximum disk I/O buffer size (in
kilobytes) that the client may use when reading files.

Optimal backup, archive, or HSM migration client performance may be achieved if
the value for this option is equal to or smaller than the amount of file read ahead
provided by the client file system. A larger buffer will require more memory and
might not improve performance.

| The default value is 32 for all clients except AIX. For AIX, the default value is 256
| except when ENABLELANFREE YES is specified. When ENABLELANFREE YES is
| specified on AIX, the default value is 32. API client applications have a default
| value of 1023, except for Windows API client applications (version 5.3.7 and later),
| which have a default value of 32.

Recommendation
DISKBUFFSIZE Use the default value.

PROCESSORUTILIZATION
The PROCESSORUTILIZATION option (only on the Novell client) specifies, in
hundredths of a second, the length of time Tivoli Storage Manager controls the
CPU. Because this option can affect other applications on your client node, use it
only when speed is a high priority.

The default is 1. The recommended values are from 1 to 20. If set to less than 1,
this parameter could have a negative impact on performance. Increasing this value
increases the priority of Tivoli Storage Manager access to the CPU, lessening the
priority of other processes. Setting PROCESSORUTILIZATION greater than 20
might prevent other scheduled processes or NetWare requestors from accessing the
file server.

Multiple session backup and restore


Multiple session restore allows the backup-archive clients to perform multiple
restore sessions for no-query restore operations, thus increasing the speed of
restores. Multiple session restore is similar to multiple session backup support.

Multiple session restores can be used under the following conditions:


v The data to be restored resides on several tapes.
v There are sufficient mount points available.
v The restore is done using the no-query restore protocol. For details about
no-query restore, refer to the Backup-Archive Clients Installation and User’s Guide.

For backup or archive operations, the MAXNUMMP parameter on the UPDATE
NODE or REGISTER NODE command controls the number of mount points
allowed to a client. The RESOURCEUTILIZATION option affects how many
sessions the client can use. Set RESOURCEUTILIZATION to one greater than the
number of required sessions (equal to the number of drives that the client will
use). Issue the restore command so that it results in a no-query-restore process.
For backup or archive operations, if the MAXNUMMP setting is too low and if
there are not enough mount points for each of the sessions, it might not be
possible to use the number of sessions allowed by the RESOURCEUTILIZATION
option.

The number of sessions used depends on the following settings:


v The number of mount points available to the client. This number is controlled by
the MOUNTLIMIT setting in the DEFINE DEVCLASS and UPDATE DEVCLASS
commands and by the MAXNUMMP setting in REGISTER NODE and UPDATE
NODE server commands. The MAXNUMMP setting is not enforced for restore
or retrieve operations. See “MAXNUMMP” on page 5 for details.
v The RESOURCEUTILIZATION client option setting. Because the number of
sessions increases for a multiple session restore, set the MAXSESSIONS server
option accordingly. See “RESOURCEUTILIZATION” on page 30 and
“MAXSESSIONS” on page 5 for details.

| If all the files are on random disk, only one session is used. There is no
| multi-session restore for a random disk-only storage pool restore. However, if you
| are performing a restore in which the files reside on four tapes or four sequential
| disk volumes and some on random disk, you could use up to five sessions during
| the restore. You can use the MAXNUMMP parameter to set the maximum number
| of mount points a node can use on the server. If the RESOURCEUTILIZATION
| option value exceeds the MAXNUMMP value on the server for a node, you are
| limited to the number of sessions specified in MAXNUMMP.

| If the data you want to restore is on five different tape volumes, the maximum
| number of mount points is 5 for your node, and RESOURCEUTILIZATION is set
| to three, then three sessions are used for the restore. If you increase the
| RESOURCEUTILIZATION setting to 5, then 5 sessions are used for the restore.
| There is a one-to-one relationship between the RESOURCEUTILIZATION setting
| and the number of restore sessions allowed. Multiple restore sessions are only
| allowed for no-query restore operations.
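The session arithmetic described above can be modeled as a minimum over the three limits (a simplified sketch that ignores the extra session a random-disk volume can add; the function name is illustrative):

```python
def restore_sessions(sequential_volumes, maxnummp, resourceutilization):
    """Sessions a no-query restore can use from sequential media: one per
    volume, capped by both MAXNUMMP and RESOURCEUTILIZATION."""
    return min(sequential_volumes, maxnummp, resourceutilization)
```

This mirrors the example above: with data on five tape volumes and MAXNUMMP set to 5, a RESOURCEUTILIZATION of 3 yields three sessions, and raising it to 5 yields five.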

| The server sends the MAXNUMMP value to the client during sign on. During a
| no-query restore, if the client receives a notification from the server that another
| volume has been found and another session can be started to restore the data, the
| client checks the MAXNUMMP value. If another session would exceed that value,
| the client will not start the session.

Some backup considerations:


v Only one session per file system compares attributes for incremental backup.
Incremental backup throughput does not improve for a single file system with a
small amount of changed data.
v Data transfer sessions do not have file system affinity; each session could send
files from multiple file systems. This is good for workload balancing. This is not
so good if you are backing up directly to a tape storage pool collocated by
filespace. Do not use multiple sessions to back up directly to a storage pool
collocated by filespace. Use multiple commands, one per filespace.
v Multiple sessions might not start if there are not enough entries on the
transaction queue.
| v For backup operations directly to tape, you can prevent multiple sessions so that
| data is not spread across multiple volumes by setting RESOURCEUTILIZATION
| to 2.
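For the filespace-collocated case above, the guidance is one command per filespace. On a UNIX client, that might look like this (file system names are illustrative):

```
dsmc incremental /home
dsmc incremental /data
dsmc incremental /projects
```

Each command backs up a single filespace, so data for one filespace is not spread across multiple tape volumes.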

Chapter 3. IBM Tivoli Storage Manager client performance tuning 29


Some restore considerations:
v Only one session is used when restoring from random access disk storage pools.
v Only one file system can be restored at a time with the command line; multiple
sessions may still be used for a single file system.
v Even small clients can gain throughput for restores requiring many tape mounts
or locates.
v Tape cartridge contention might occur, especially when restoring from a
collocated node.

RESOURCEUTILIZATION
The RESOURCEUTILIZATION client option regulates the number of concurrent
sessions that the Tivoli Storage Manager client and server can use during
processing. Multiple sessions can be initiated automatically through a Tivoli
Storage Manager backup, restore, archive, or retrieve command. Although the
multiple session function is transparent to the user, there are parameters that
enable the user to customize it.

The RESOURCEUTILIZATION option increases or decreases the ability of the
client to create multiple sessions. For backup or archive, the value of
RESOURCEUTILIZATION does not directly specify the number of sessions created
by the client. However, the setting does specify the level of resources the server
and client can use during backup or archive processing. The higher the value, the
more sessions the client can start.

The range for the parameter is from 1 to 10. If the option is not set, by default only
two sessions can be started, one for querying the server and one for sending file
data. A setting of 5 permits up to four sessions: two for queries and two for
sending data. A setting of 10 permits up to eight sessions: four for queries and four
for sending data. The relationship between RESOURCEUTILIZATION and the
maximum number of sessions created is part of an internalized algorithm and, as
such, is subject to change. This table lists the relationships between
RESOURCEUTILIZATION values and the maximum sessions created. Producer
sessions scan the client system for eligible files. The remaining sessions are
consumer sessions and are used for data transfer. The threshold value affects how
quickly new sessions are created.

RESOURCEUTILIZATION value   Maximum number   Unique number of    Threshold
                            of sessions      producer sessions   (seconds)
0 (default)                 2                1                   30
1                           1                0                   45
2                           2                1                   45
3                           3                1                   45
4                           3                1                   30
5                           4                2                   30
6                           4                2                   20
7                           5                2                   20
8                           6                2                   20
9                           7                3                   20
10                          8                4                   10

Backup throughput improvements that can be achieved by increasing the
RESOURCEUTILIZATION level vary from client node to client node. Factors that
affect throughputs of multiple sessions include the configuration of the client
storage subsystem (the layout of file systems on physical disks), the client’s ability
to drive multiple sessions (sufficient CPU, memory), the server’s ability to handle
multiple client sessions (CPU, memory, number of storage pool volumes), and
sufficient bandwidth in the network to handle the increased traffic.

The MAXSESSIONS parameter controls the maximum number of simultaneous
client sessions with the Tivoli Storage Manager server. The total number of parallel
sessions for a client is counted for the maximum number of sessions allowed with
the server. You need to decide whether to increase the value of the MAXSESSIONS
parameter in the server option file.

When using the RESOURCEUTILIZATION option to enable multiple client/server
sessions for backup direct to tape, the client node maximum mount points allowed
parameter, MAXNUMMP, must also be updated at the server (using the UPDATE
NODE command).

If the client file system is spread across multiple disks (RAID 0 or RAID 5), or
multiple large file systems, the recommended RESOURCEUTILIZATION setting is
a value of 5 or 10. This enables multiple sessions with the server during backup or
archive and can result in substantial throughput improvements in some cases. It is
not likely to improve incremental backup of a single large file system with a small
percentage of changed data. If a backup is direct to tape, the client node maximum
mount points allowed parameter, MAXNUMMP, must also be updated at the
server using the update node command.

RESOURCEUTILIZATION can be set to a value other than the default if a client backup
involves many files and they span or reside on multiple physical disks. A setting of
5 or greater is recommended. However, for optimal utilization of the Tivoli Storage
Manager environment, you need to evaluate the load of server, network
bandwidth, client CPU and I/O configuration and take that into consideration
before changing the option.

When a restore is requested, the default is to use a maximum of two sessions,
based on how many tapes the requested data is stored on, how many tape drives
are available, and the maximum number of mount points allowed for the node.

The default value for the RESOURCEUTILIZATION option is 1, and the maximum
value is 10. For example, if the data to be restored are on five different tape
volumes, and the maximum number of mount points for the node requesting the
restore is five, and RESOURCEUTILIZATION is set to 3, then three sessions are
used for the restore. If the RESOURCEUTILIZATION setting is increased to 5, then
five sessions are used for the restore. There is a one-to-one relationship between
the RESOURCEUTILIZATION setting and the number of restore sessions allowed.



RESOURCEUTILIZATION values:

Recommendations
RESOURCEUTILIZATION   1 for workstations
                      5 for a small server
                      10 for a large server

Note: Non-root UNIX is limited to one session.

TAPEPROMPT
The TAPEPROMPT client option specifies whether to prompt the user for tape
mounts.

The TAPEPROMPT option specifies if you want Tivoli Storage Manager to wait for
a tape to be mounted for a backup, archive, restore or retrieve operation, or to
prompt you for your choice.

Recommendations
TAPEPROMPT No

TCPBUFFSIZE
The TCPBUFFSIZE option specifies the size of the internal TCP communication
buffer that is used to transfer data between the client node and the server. A large
buffer can improve communication performance, but requires more memory.

The default is 32 KB, and the maximum is 512 KB.

Recommendation
TCPBUFFSIZE 32

TCPNODELAY
Use the TCPNODELAY option to disable the TCP/IP Nagle algorithm, which
allows data packets of less than the Maximum Transmission Unit (MTU) size to be
sent out immediately.

The default is YES. This generally results in better performance for Tivoli Storage
Manager client/server communications.

Note: TCPNODELAY is also available as a server option.

Recommendations
TCPNODELAY YES



TCPWINDOWSIZE
| The TCPWINDOWSIZE client option specifies the size of the TCP sliding window
| in kilobytes. The TCPWINDOWSIZE server option applies to backups and
| archives. The TCPWINDOWSIZE client option applies to restores and retrieves.

The TCPWINDOWSIZE option overrides the operating system’s TCP send and
receive spaces. In AIX, for instance, these parameters are set with the “no” command
as the tcp_sendspace and tcp_recvspace options. Specifying TCPWINDOWSIZE 0 causes Tivoli
Storage Manager to use the operating system default. This is not recommended
because the optimal setting for Tivoli Storage Manager might not be same as the
optimal setting for other applications. The default is 63 KB, and the maximum is
2048 KB. The TCPWINDOWSIZE option specifies the size of the TCP sliding window
for all clients and for all servers except MVS. A larger window size can improve
communication performance, but uses more memory. It enables multiple frames to
be sent before an acknowledgment is obtained from the receiver. If long
transmission delays are being observed, increasing the TCPWINDOWSIZE might
improve throughput.

| Note: For the Sun Solaris client, the TCP buffers, tcp_xmit_hiwat and
| tcp_recv_hiwat, must match the TCPWINDOWSIZE.
| Recommendations
| TCPWINDOWSIZE 63
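Putting the communication options together, a client options file that reflects the recommendations in this chapter might contain the following (an illustrative sketch, not universal optima):

```
* dsm.opt / dsm.sys communication tuning
TCPBUFFSIZE    32
TCPNODELAY     YES
TCPWINDOWSIZE  63
```

Remember that the TCPWINDOWSIZE client option governs restores and retrieves; the server option of the same name governs backups and archives.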

TXNBYTELIMIT
The TXNBYTELIMIT client option specifies the maximum transaction size in
kilobytes for data transferred between the client and server.

The range of values is 300 KB through 2097152 KB (2 GB); the default is 25600. A
transaction is the unit of work exchanged between the client and server. Because
the client program can transfer more than one file or directory between the client
and server before it commits the data to server storage, a transaction can contain
more than one file or directory. This is called a transaction group. This option
permits you to control the amount of data sent between the client and server
before the server commits the data and changes to the server database, thus
affecting the speed with which the client performs work. The amount of data sent
applies when files are batched together during backup or when receiving files from
the server during a restore procedure. The server administrator can limit the
number of files or directories contained within a group transaction using the
TXNGROUPMAX option. The actual size of a transaction can be less than your
limit. Once the TXNGROUPMAX number of files or directories is reached, the client
sends the files to the server even if the transaction byte limit is not reached.

There are several items to consider when setting this parameter:


v Increasing the amount of data per transaction increases recovery log
requirements on the server. Check log and log pool space to ensure there is
enough space. Also note that a larger log might result in longer server start-up
times.
v Increasing the amount of data per transaction might result in more data being
retransmitted if a retry occurs. This might negatively affect performance.



v The benefits of changing this parameter are dependent on configuration and
workload characteristics. In particular, this parameter benefits tape storage pool
backup more so than disk storage pool backup, especially if many small files are
in the workload.
When setting the size of transactions, consider a smaller size if you experience
many resends caused by files changing during backup when you use the static,
shared static, or shared dynamic serialization settings. This applies to static as
well as shared serialization because, when the client detects that a file has
changed during a backup and decides not to send it, the client must still re-send
the other files in that transaction. To enhance performance, set TXNBYTELIMIT to
the maximum of 2097152 and, on the server, raise TXNGROUPMAX to 256. Additionally,
for small file workloads, first stage the backups to a disk storage pool and then
migrate to tape.

Recommendations
TXNBYTELIMIT 25600
TXNBYTELIMIT 2097152 (for backup direct to tape)
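For backup direct to tape, the client and server settings described above might be sketched as follows (illustrative values taken from the guidance in this section):

```
* Client options file (dsm.opt or dsm.sys):
TXNBYTELIMIT 2097152

* Server options file:
TXNGROUPMAX 256
```

Larger transactions reduce commit overhead on tape, at the cost of higher recovery log use and larger retransmissions on retry.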

Client command line options


Two options can be used only on the command line and only with specific
commands. When specifying options with a command, always precede the option
with a dash (-). In general, the command line interface is faster than the GUI and
requires less overhead.

Two command line options that might improve Tivoli Storage Manager
performance are:
IFNEWER
This option is used in conjunction with the restore command and restores
files only if the server date is newer than the date of the local file. This
option might result in lower network utilization if less data travels across
the network.
INCRBYDATE
In a regular incremental backup, the server reads the attributes of all the
files in the file system and passes this information to the client. The client
then compares the server list to a list of its current file system. This
comparison can be very time-consuming, especially for clients on NetWare,
Macintosh, and Windows. These clients sometimes have a limited amount
of memory.
With an incremental-by-date backup, the server only passes the date of the
last successful backup. It is not necessary to query every active file on the
Tivoli Storage Manager server. The time savings are significant. However,
regular, periodic incremental backups are still needed to back up files that
have only had their attributes changed. For example, if a new file in your
file system has a creation date earlier than the last successful backup date,
future incremental-by-date backups will not back up this file. This is
because the client sees it as already backed up. Also, files that have been
deleted are not detected by an incremental-by-date backup. These deleted
files will be restored if you perform a full system restore.
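For example, the two options above might be used as follows (paths are illustrative):

```
* Restore only files that are newer on the server than on disk:
dsmc restore -ifnewer "/home/user/*" -subdir=yes

* Run an incremental-by-date backup of one file system:
dsmc incremental -incrbydate /home
```

Remember to run regular full incremental backups periodically so that attribute-only changes and deletions are still captured.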



Performance recommendations by client platform
Some client performance recommendations vary by platform.

Macintosh client
Try to limit the use of Extended Attributes. If Extended Attributes are used, try to
limit their length. Anti-virus software can negatively affect backup and restore
performance.

Windows client
Performance recommendations for Windows clients.
v For optimal backup and restore performance when using a local client on a
Windows system, use the shared memory communication method. Specify
COMMMETHOD SHAREDMEM in both the server options file and the client
options file.
| v Anti-virus products and backup and restore products can use significant
| amounts of system resources and therefore impact application and file system
| performance. They may also interact with each other to seriously degrade the
| performance of either product. For optimal performance of backup and restore:
| – Schedule anti-virus file system scans and incremental backups for
| non-overlapping times.
| – If the anti-virus program allows, change the anti-virus program properties so
| that files are not scanned when opened by the client processes. Some
| anti-virus products can automatically recognize file reads by backup products
| and do not need to be configured. Check the IBM support site for additional
| details.

Windows journal-based backup

Instead of cross referencing the current state of files with the Tivoli Storage
Manager database, you can back up those files indicated as changed in the change
journal. Journal-based backup uses a real-time determination of changed files and
directories and avoids file system scan and attribute comparison. Here are some
advantages of journal-based backup:
v Much faster than classic incremental, but improvement depends on the amount
of changed data.
v Requires less memory usage and less disk usage.
v Good for large file systems with many files that do not change often.

Journal-based backup requires the installation of the Tivoli Journal Engine Service,
which monitors file system activity for file changes. The service impacts performance
of the file system slightly (approximately 5% during a Netbench test run). Journal
options are specified in tsmjbbd.ini. The defaults work well; you just add the file
systems to be monitored.
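A minimal tsmjbbd.ini sketch, assuming the default settings are acceptable (the drive letters are illustrative; see the Backup-Archive Clients Installation and User's Guide for the full set of journal options):

```
[JournalSettings]
Errorlog=jbberror.log

[JournaledFileSystemSettings]
JournaledFileSystems=C: D:
```

Only the listed file systems are monitored; a full incremental backup of each must complete before journal-based backup takes effect.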



Client performance considerations
There are a number of actions you can take to improve client performance.

| Run concurrent sessions on a single client

| Running two or more client program instances at the same time on the same
| system might provide better overall throughput, depending on the available
| resources. Scheduling backups for multiple file systems concurrently on one Tivoli
| Storage Manager client system can be done with any of the following methods:
| v Using one node name, running one client scheduler, and setting the
| RESOURCEUTILIZATION client option to 5 or greater with multiple file
| systems included in the schedule or domain specification.
| v Using one node name, running one client scheduler, and scheduling a command
| that runs a script on the client system that includes multiple command line
| client statements (using dsmc).
| v Using multiple node names and running one client scheduler for each node
| name, in which each scheduler uses a unique client options file, and so on.
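For the second method, the scheduled command might run a script like the following on a UNIX client (file system names are illustrative); each dsmc invocation backs up one file system, and starting them in the background runs them concurrently:

```
dsmc incremental /home &
dsmc incremental /data &
wait
```

Whether this improves throughput depends on the client's CPU, memory, and disk layout, as noted above.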

Reduce data flow from the client

Using INCLUDE/EXCLUDE options appropriately can reduce the amount of data
that is backed up and therefore reduce the overall backup and restore elapsed time
and system resource utilization.

Minimize client processing overhead


Using the -FROMNODE option creates additional overhead on all clients. Consider
using the VIRTUALNODENAME option instead of the -FROMNODE option.

Hierarchical Storage Manager tuning


If you have to migrate a group of small files to the server, you get better
performance if you go to disk rather than tape. After the files are migrated to disk,
you can use storage pool migration to move the files to tape.

Performance of Hierarchical Storage Manager (HSM) migration is poor for very
small files that are grouped together, for example, by a wildcard invocation of the
DSMMIGRATE command. HSM works on one file at a time, unlike
archive, retrieve, restore and backup which group files at a transaction boundary.
There is one transaction for each file when you use HSM migration and recall. For
a group of small files, it is better to use archive or backup to store them to the
server.

Data Protection for Domino for z/OS


DFSMS™ HFS HIPER APARs OW51210 and OW51732, and their resolving PTFs,
have identified and fixed a performance problem with TDP for Domino®, Domino
for z/OS, and HFS. Ensure that these PTFs are installed on your system.



| Client incremental backup memory requirements
| By default, the Tivoli Storage Manager backup-archive client uses a lot of memory
| during an incremental backup in order to determine which files are new or
| changed and need to be backed up, and which files are deleted and need to be
| expired on the server.

| In some situations this memory demand causes the backup to fail, and the client
| issues the following message:
| ANS1030E The operating system refused a TSM request for memory allocation.

| This section describes how much memory is required and how to reduce this
| memory requirement.

| For incremental backups the client uses an average of about 300 bytes of memory
| for each file system object (a file or directory). These 300 bytes include the path
| and name of the object and attributes. The average memory requirement increases
| if longer file or directory names or more deeply nested directories are used. For a
| file system with 1 million files and directories, for example, the client would
| require about 300 MB of virtual memory.
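The estimate above can be sketched as a quick calculation (the 300-bytes-per-object figure is the guide's stated average, not a fixed constant; real usage grows with path length and nesting depth):

```python
# Rough estimate of backup-archive client virtual memory use during a
# classic incremental backup, using ~300 bytes per file system object
# (file or directory), per the guide.

BYTES_PER_OBJECT = 300  # average; grows with longer names and deeper nesting

def incremental_backup_memory(num_objects: int) -> int:
    """Approximate virtual memory in bytes needed to hold attribute data
    for num_objects files and directories."""
    return num_objects * BYTES_PER_OBJECT

# A file system with 1 million objects needs roughly 300 MB:
mb = incremental_backup_memory(1_000_000) / (1024 * 1024)
print(f"{mb:.0f} MB")  # prints 286 MB (the guide rounds to 300 MB)
```

The same arithmetic gives the roughly 7 million object ceiling for a 32-bit process with a 2 GB virtual memory limit.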

| You can reduce the virtual memory requirement by any of the following methods:
| v Use the client option MEMORYEFFICIENTBACKUP DISKCACHEMETHOD.
| This option is only available on the Version 5.4 client, or later. The maximum
| virtual memory used by the client process in this case is usually less than 20
| MB. A significant amount of client disk space might be required. See the
| Backup-Archive Clients Installation and User’s Guide for details.
| v Use the client option MEMORYEFFICIENTBACKUP YES. The maximum virtual
| memory used by the client process in this case becomes 300 bytes times the
| maximum number of files in any one directory.
| v Use the client option VIRTUALMOUNTPOINT (UNIX only) to define multiple
| virtual mount points within the single file system, each of which can be backed
| up independently by Tivoli Storage Manager.
| v If the client option RESOURCEUTILIZATION is set to a value greater than 3 and
| there are multiple file systems being backed up (or could be backed up such as
| in the case of failover in an active-active cluster), then reducing the value of
| RESOURCEUTILIZATION to 3 or below, or using testflag maxproducers:1 limits
| the process to incremental backup of a single file system at a time, resulting in a
| reduction in virtual memory requirements. If backing up of multiple file systems
| in parallel is required for performance reasons, and the combined virtual
| memory requirements exceed the process limits, then multiple backup processes
| should be used in parallel.
| v Use the client option INCRBYDATE, which executes an incremental-by-date
| backup.
| v Use the client include/exclude options to back up only what is necessary.
| v Use the Tivoli Storage Manager journal-based incremental backup function
| (Windows client, and AIX with a Version 5.3.2 client, or later). A full incremental
| backup must be completed before a journal-based backup is possible.
| v Use the Tivoli Storage Manager image backup function to back up the entire
| volume. This might require less time and resources than using incremental
| backups on some file systems with a very large number of very small files.
| v Spread the data across multiple file systems.
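For instance, the first and third methods in the list above might appear in the client options file as follows (the mount point paths are illustrative):

```
* Cache attribute data on disk instead of in memory (V5.4 and later):
MEMORYEFFICIENTBACKUP DISKCACHEMETHOD

* UNIX only: split one large file system into separately
* backed-up virtual mount points:
VIRTUALMOUNTPOINT /bigfs/projects
VIRTUALMOUNTPOINT /bigfs/users
```

Each virtual mount point is treated as its own filespace, so its attribute data is held (or cached) independently during backup.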



| The size of a file system in gigabytes is not an important factor in determining the
| backup-archive client virtual memory requirements, except as it relates to the
| number of files and directories in that file system.

| Virtual memory refers to the memory that is addressable by an application process.


| The virtual memory size can be greater or smaller than the computer’s real
| (physical) memory size. For example, a 32-bit application can address 4 GB of
| virtual memory. The application might be running on a workstation that has only
| 512 MB of real memory, or it might be running on a server that has 8 GB of real
| memory.

| Only part of an application’s virtual memory might be used for allocating data
| structures. The maximum virtual memory available for allocation by a 32-bit
| process depends on the operating system. For Windows, this is 2 GB for most
| processes. The Version 5.3.2 (and later) Windows client can use up to 3 GB for data
| structures if the client system has been booted using the /3gb boot.ini flag. See
| Microsoft Knowledge Base article 291988 (http://support.microsoft.com/kb/291988).
| For AIX 5L™, this is nine memory segments, or 2.25 GB. Other operating
| systems might have different limits. Operating system quotas or limits might need
| to be set to allow a process to use the maximum virtual memory. Refer to the
| ulimit command on UNIX systems.

| The maximum number of files and directories that Tivoli Storage Manager can
| back up on a 32-bit system (2 GB of process memory) using a single incremental
| backup process and the default option MEMORYEFFICIENTBACKUP NO is about
| 7 million ((2 * 1024 * 1024 * 1024 bytes) / (300 bytes per object), about 7.2
| million objects). If the client process tries to
| exceed the operating system process virtual memory limit, the backup fails. If the
| client process memory requirements exceed the amount of real memory on the
| system but do not exceed the virtual memory limit, then the backup might succeed
| but is likely to exhibit poor performance due to paging.

| Using a 64-bit client, if available for the platform in question, essentially eliminates
| any concern about the virtual memory requirements, but might still require
| significant real memory for optimal performance.



Chapter 4. Network protocol tuning
Tuning network protocols can improve the performance of IBM Tivoli Storage
Manager operations.

Networks
There is a variety of actions you can take to tune your networks.
v Use dedicated networks for backup (LAN or SAN).
v Keep device drivers updated.
v Using Ethernet adapter auto detect to set the speed and duplex generally works
well with newer adapters and switches. If your network hardware is more than
three years old and backup and restore network performance is not as expected,
set the speed and duplex to explicit values (for example, 100 Mbps full-duplex or
100 Mbps half-duplex). Make sure that all connections to the same switch
are set to the same values.
v Gb Ethernet jumbo frames (9000 bytes) - Jumbo frames can give improved
throughput and lower host CPU usage especially for larger files. Jumbo frames
are only available if they are supported on client, server, and switch. Not all Gb
Ethernet hardware supports jumbo frames.
| v In networks with mixed frame-size capabilities (for example, standard Ethernet
| frames of 1500 bytes and jumbo Ethernet frames of 9000 bytes) it can be
| advantageous to enable Path Maximum Transmission Unit (PMTU) discovery on
| the systems. Doing so means that each system segments the data sent into
| frames appropriate to the session partners. Those that are fully capable of jumbo
| frames use jumbo frames. Those that have lower capabilities automatically use
| the largest frames that do not cause frame fragmentation and re-assembly
| somewhere in the network path. Avoiding fragmentation is important in
| optimizing the network.
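On AIX, for example, path MTU discovery can be enabled with the no command (a sketch; the setting is already on by default on many AIX levels, and other platforms use different mechanisms):

```
no -o tcp_pmtu_discover=1
no -o udp_pmtu_discover=1
```

With discovery enabled, each TCP session settles on the largest frame size the path supports without fragmentation.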

Limiting network traffic


There are several Tivoli Storage Manager server SET commands that can limit the
amount of network traffic due to client sessions.

The SET commands are:


v SET QUERYSCHEDPERIOD sets the frequency that a client can contact the
server to obtain scheduled work (polling mode). This overrides the client setting.
A shorter frequency means more network traffic due to polling. Use longer
settings (6 to 12 hours) to reduce network traffic. Alternately, use Server
Prompted schedule mode to eliminate network traffic due to polling.
v SET MAXCMDRETRIES sets a global limit on number of times a client
scheduled command retries. This overrides the client setting. A smaller number
reduces network traffic due to retries.
v SET RETRYPERIOD specifies the number of minutes between a retry of a
scheduler after a failed attempt to contact the server. This overrides the client
setting. A larger value will reduce network traffic due to retries and will make
successful retry more likely. Be sure to consider your schedule start-up windows
when setting the MAXCMDRETRIES and RETRYPERIOD. If a retry is attempted
outside of the start-up window, it fails.
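A sketch of these server commands with illustrative values, issued from an administrative client:

```
SET QUERYSCHEDPERIOD 12
SET MAXCMDRETRIES 2
SET RETRYPERIOD 15
```

This polls for scheduled work every 12 hours, retries a failed scheduled command at most twice, and waits 15 minutes between retries.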

© Copyright IBM Corp. 1993, 2007 39


TCP/IP communication concepts and tuning
To tune TCP/IP, the most significant performance improvements are found by
modifying parameters that affect the data transfer block size, window values, and
connection availability.

The following tasks require system resources:


v Keeping communication connections available
v Keeping user data until it is acknowledged (on the transmit side)
v Managing the communications layers

These resources include memory, CPU, communications adapters, link utilizations,


and involve the limitations of various communication layer implementations. Data
sizes and flow control are the two main factors that cause resource
over-commitment, which results in system performance degradation.

TCP/IP protocols and functions

The TCP/IP protocols and functions can be categorized by their functional groups:
the network layer, internetwork layer, transport layer, and application layer. Table 2
shows the functional groups and their related protocols.

Table 2. TCP/IP protocols and functions by group

Group                 Protocols and functions
Network layer         Token-Ring, Ethernet, others
Internetwork layer    Internet Protocol (IP), Internet Control Message
                      Protocol (ICMP), Address Resolution Protocol (ARP)
Transport layer       Transmission Control Protocol (TCP), User Datagram
                      Protocol (UDP)
Application layer     Telnet, File Transfer Protocol (FTP), Remote Procedure
                      Call (RPC), socket interfaces, others

Protocol functions

The protocol functions can be categorized as the following:


v Reliable delivery
v Packet assembly and disassembly
v Connection control
v Flow control
v Error control

Reliable Delivery

Reliable delivery services guarantee to deliver a stream of data sent from one
machine to another without duplication or loss of data. The reliable protocols use a
technique called acknowledgment with retransmission, which requires the recipient
to communicate with the source, sending back an acknowledgment after it receives
data.

Packet assembly and disassembly

Each layer of a communications protocol can potentially perform some sort of
assembly or disassembly function. If the source and destination nodes do not lie on
the same physical network, then the TCP/IP software has to fragment the packets
traveling from one network to another if the Maximum Transmission Units (MTUs)
on the networks do not match. The TCP/IP software at the receiving station then
reassembles the fragments.

There are advantages to assembly and disassembly:


v A communications network may only accept data blocks up to a certain size,
hence requiring that larger blocks be broken down. For example, an Ethernet
LAN has an MTU size of 1500 bytes, whereas a Token-Ring LAN has an MTU
size of up to 16000 bytes.
v Error control might be more efficient for smaller blocks.
v More equitable access, with shorter delay, may be provided to shared
transmission facilities. For example, if the line is slow, allowing too big a block
to be transmitted could cause a monopolization of the line.

There are also disadvantages to assembly and disassembly:


v Each transmitted unit of data requires some fixed amount of overhead. Hence
the smaller the block, the larger the percentage of overhead.
v More blocks have to be processed for both sending and receiving sides in order
to transmit equal amounts of user data, which can take more time.

Flow control and error control


v Flow control is a function provided by a receiving system that limits the amount
or rate of data that is sent by a transmitting system. The aim is to regulate traffic
to avoid exceeding the receiver's system resources.
v Error control is needed to guard against loss or damage of data and control
information. Most techniques involve error detection and retransmission.

Sliding window
| The sliding window allows TCP/IP to use communication channels efficiently, in
| terms of both flow control and error control. The sliding window is controlled in
| Tivoli Storage Manager through the TCPWINDOWSIZE option.

To achieve reliability of communication, the sender sends a packet and waits until
it receives an acknowledgment before transmitting another. The sliding window
protocol enables the sender to transmit multiple packets before waiting for an
acknowledgment. The advantages are:
v Simultaneous communication in both directions.
v Better utilization of network bandwidth, especially if there are large transmission
delays.
v Traffic flow with reverse traffic data, known as piggybacking. This reverse traffic
might or might not have anything to do with the acknowledgment that is riding on
it.
v Variable window size over time. Each acknowledgment specifies how many
octets have been received and contains a window advertisement that specifies



how many additional octets of data the receiver is prepared to accept, that is, the
receiver’s current buffer size. In response to decreasing window size, the sender
decreases the size of its window. Advantages of using variable window sizes are
flow control and reliable transfers.

A client continually shrinking its window size is an indication that the client
cannot handle the load, and thus increasing the window size does not improve
performance.

Platform-specific network recommendations


Some network tuning recommendations are platform-specific.

AIX network settings


It is important to minimize all performance constraints on AIX to achieve
maximum throughput on the server. This is accomplished by tuning the network
option parameters on AIX.

Tivoli Storage Manager uses the TCP/IP communication protocol over the network. It
is important to tune the TCP protocols to obtain maximum throughput. This
requires changing the network parameters that control the behavior of the TCP/IP
protocols and the system in general.

In AIX, an application using the TCP/IP communication protocol opens a TCP
socket and writes data to this socket. The data is copied from the user space into
the socket send buffer, called tcp_sendspace, in kernel space. The receive buffers
are called tcp_recvspace. The send and receive buffers are made up of smaller
buffers called mbufs.

An mbuf is a kernel buffer that uses pinned memory and comes in two sizes: 256
bytes (mbufs) and 4096 bytes (mbuf clusters, or simply clusters). The maximum socket
buffer size limit is determined by the sb_max kernel variable. Because mbufs are
primarily used to store data for incoming and outgoing network traffic, they must
be configured to have a positive effect on network performance. To enable efficient
mbuf allocation at all times, a minimum number of mbuf buffers are always kept
in the free buffer pool. The minimum number of mbufs is determined by lowmbuf,
whereas the minimum number of clusters is determined by the lowclust option.
The mb_cl_hiwat option controls the maximum number of free buffers the cluster
pool can contain.

The thewall network option controls the maximum RAM that can be allocated
from the Virtual Memory Manager (VMM) to the mbuf management routines. The
netstat -m command can be used to obtain detailed information on the mbufs. The
netstat -I interface-id command can be used to determine if there are errors in
packet transmissions.

If the error count reported is greater than 0, overflows have occurred. At the device driver
layer, the mbuf chain containing the data is put on the transmit queue, and the
adapter is signaled to start the transmission operation. On the receive side, packets
are received by the adapter and then are queued on the driver-managed receive
queue. The adapter transmit and receive queue sizes can be configured using the
System Management Interface Tool (SMIT).

At the device driver layer, both the transmit and receive queues are configurable. It
is possible to overrun these queues. To determine whether this has occurred, use the
netstat -v command, which shows Max Transmits Queued and Max Receives Queued.

42 IBM Tivoli Storage Manager: Performance Tuning Guide

MTU and MSS settings


The Maximum Transmission Unit (MTU) and Maximum Segment Size (MSS)
settings are important factors in tuning AIX for throughput.

For best throughput for systems on the same type of network, it is advisable to use
a large MTU. In multi-network environments, if data travels from a network with a
large MTU to a smaller MTU, the IP layer has to fragment the packet into smaller
packets (to facilitate transmission on a smaller MTU network), which costs the
receiving system CPU time to reassemble the fragmented packets. When the data
travels to a remote network, TCP in AIX defaults to a Maximum Segment Size
(MSS) of 512 bytes. This conservative value is based on a requirement that all IP
routers support an MTU of at least 576 bytes.

Network type    MTU     MSS (RFC1323 0)    MSS (RFC1323 1)
FDDI            4352    4312               4300
Token ring      4096    4056               4044
Ethernet        1500    1460               1448

| Note: Jumbo frames can be enabled on Gigabit Ethernet and 10 Gigabit Ethernet
| adapters. Doing so raises the MTU to 9000 bytes. Because there is less
| overhead per packet, jumbo frames typically provide better throughput, lower
| CPU consumption, or both. Consider jumbo frames especially if you have a
| network dedicated to backup tasks. Jumbo frames should only be
| considered if all equipment between most of your Tivoli Storage Manager
| clients and server supports jumbo frames, including routers and switches.
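The MSS values in the table follow directly from the MTU: the IPv4 and TCP headers consume 40 bytes, and enabling RFC 1323 adds 12 bytes of TCP timestamp options per segment. A small sketch of the arithmetic:

```python
def mss(mtu, rfc1323=False):
    """Maximum segment size for a given MTU: subtract the 20-byte IPv4
    header and the 20-byte TCP header, plus 12 more bytes of TCP options
    when RFC 1323 timestamps are enabled."""
    return mtu - 40 - (12 if rfc1323 else 0)

# Reproduces the rows of the table above:
print(mss(4352), mss(4352, rfc1323=True))   # FDDI
print(mss(1500), mss(1500, rfc1323=True))   # Ethernet
```

The same subtraction explains the jumbo-frame case: a 9000-byte MTU yields an MSS of 8960 (or 8948 with RFC 1323).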

The default MSS can be overridden in the following three ways:


1. Specify a static route to a specific remote network and use the -mtu option of
the route command to specify the MTU to that network. Disadvantages of this
approach are:
v It does not work with dynamic routing.
v It is impractical when the number of remote networks increases.
v Routes must be set at both ends to negotiate a value larger than a default
MSS.
2. Use the tcp_mssdflt option of the no command to change the default value of
MSS. This is a system-wide change. In a multi-network environment with
multiple MTUs, the value specified to override the MSS default should be the
minimum MTU value (of all specified MTUs) less 40. In an environment with a
large default MTU, this approach has the advantage that MSS does not need to
be set on a per-network basis. The disadvantages are:
v Increasing the default can lead to IP router fragmentation if the destination is
on a remote network and the MTUs of intervening networks are not known.
v The tcp_mssdflt parameter must be set to the same value on the destination
host.
3. Subnet and set the subnetsarelocal option of the no command. Several physical
networks can be made to share the same network number by subnetting. The
subnetsarelocal option specifies, on a system-wide basis, whether subnets are to
be considered local or remote networks. With subnetsarelocal=1 (the default),
Host A on subnet 1 considers Host B on subnet 2 to be on the same physical
network. The consequence of this is that when Host A and Host B establish a
connection, they negotiate the MSS assuming they are on the same network.
This approach has the following advantages:
v It does not require any static bindings; the MSS is automatically negotiated.
v It does not disable or override the TCP MSS negotiation so that small
differences in the MTU between adjacent subnets can be handled
appropriately.
The disadvantages are:
v Potential IP router fragmentation when two high-MTU networks are linked
through a lower-MTU network.
v Source and destination networks must both consider subnets to be local.
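For option 2 above, the value to supply to tcp_mssdflt in a multi-MTU environment is simply the smallest MTU of all the networks involved, less the 40 bytes of TCP/IP header overhead. A hypothetical helper (the function name is an illustration, not part of any product):

```python
def tcp_mssdflt_value(mtus):
    """Hypothetical helper: the system-wide MSS override to pass to
    `no -o tcp_mssdflt=...`, computed as the minimum MTU across all
    attached networks minus the 40-byte TCP/IPv4 header overhead."""
    return min(mtus) - 40

# Ethernet (1500) and FDDI (4352) attached: the Ethernet MTU governs.
print(tcp_mssdflt_value([1500, 4352]))
```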

Refer to the AIX Performance Tuning Guide for details.

Recommendation
In an SP2 environment with a high-speed switch, use an MTU of 64 KB.

AIX - no (network options)

The network option parameters can be configured using the no command.


v Use no -a to view the current settings.
v When using TCP window sizes of 64 KB or larger, set rfc1323 to 1.
v If you see non-zero "No mbuf errors" in entstat, fddistat, or atmstat, raise
thewall.
v Set thewall to at least 131072 and sb_max to at least 1310720.
Newer versions of AIX have larger defaults.
v no settings do not survive a reboot, so use the -p option to make them persistent.
v Recommended change: no -o rfc1323=1

The following table shows the recommended values for the parameters described
in this section.

Note: With the exception of rfc1323, use the current values if higher.
Table 1. Network options
lowclust = 200
lowmbuf = 400
thewall = 131072
mb_cl_hiwat = 1200
sb_max = 1310720
rfc1323 = 1

Note: The lowmbuf, lowclust and mb_cl_hiwat options are applicable only for AIX
V3.2.x and not for AIX V4.1.x. In AIX V4.1.x, if setting sb_max to 1310720
results in an error message when running TCP/IP applications, then set the
sb_max value to 757760.
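The recommendations in Table 1 can be applied with a short series of no commands. The helper below is a sketch only; it formats the command lines but does not run them. The -p flag, available on newer AIX levels, makes the settings persist across reboots, and option names should be verified against your AIX version:

```python
RECOMMENDED_NO_OPTIONS = {
    "thewall": 131072,
    "sb_max": 1310720,
    "rfc1323": 1,
}

def no_commands(options, persistent=True):
    """Format `no` command lines for the given AIX network options.
    Sketch only: verify option names and the -p flag on your AIX level."""
    flag = "-p -o" if persistent else "-o"
    return ["no {} {}={}".format(flag, name, value)
            for name, value in sorted(options.items())]

for cmd in no_commands(RECOMMENDED_NO_OPTIONS):
    print(cmd)
```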



NetWare client cache tuning
Tuning the NetWare client cache can achieve very good performance.

The following table provides recommendations for tunable parameters in NetWare
5.1 and 6.5 (set through the SERVMAN utility).

NetWare Parameter                           Default   Range      Recommendation
MAXIMUM NUMBER OF INTERNAL                  100       40-1000    1000
DIRECTORY HANDLES
DIRECTORY CACHE ALLOCATION WAIT TIME        2.2       0.5-120    0.5
MAXIMUM PHYSICAL RECEIVE PACKET SIZE (TR)   tba       tba        4202
MAXIMUM PHYSICAL RECEIVE PACKET SIZE (EN)   tba       tba        1442
MINIMUM DIRECTORY CACHE BUFFERS             tba       tba        2000
MAXIMUM DIRECTORY CACHE BUFFERS             100       20-2000    4000
MAXIMUM CONCURRENT DISK CACHE WRITES        tba       tba        1000
MAXIMUM CONCURRENT DIRECTORY CACHE WRITES   tba       tba        25
MINIMUM PACKET RECEIVE BUFFERS              tba       tba        100

Maximum Transmission Unit settings


The Maximum Transmission Unit (MTU) in a NetWare environment must be tuned
to the appropriate size for the best performance.

In multi-network environments, when data travels from a network with a larger MTU
to one with a smaller MTU, the IP layer must fragment the packet into smaller packets.
This costs the receiving system CPU time to reassemble the fragmented packets. The MTU
size is configured by editing the STARTUP.NCF (NetWare V3.x) or AUTOEXEC.NCF
(NetWare V4.x) file as shown below.

TcpMSSinternetlimit

When data travels to a remote network or a different subnet, the TCPIP.NLM sets
the MTU size to the default Maximum Segment Size (MSS) value of 536 bytes. The
TcpMSSinternetlimit parameter can be used to override the default MSS value and
to set a larger MTU. For NetWare v4.x with TCP/IP v3.0, setting
TcpMSSinternetlimit off in SYS:\ETC\TCPIP.CFG causes the TCPIP.NLM to
use the MTU value specified in the STARTUP.NCF file (maximum physical receive
packet size). Note that the TcpMSSinternetlimit parameter is case sensitive. If
this parameter is not specified correctly, it is dropped automatically from the
TCPIP.CFG file by NetWare.

TCPIP.CFG
TcpMSSinternetlimit off

For NetWare v3.x, Novell patch TCP31A.EXE (for TCP/IP v2.75) can provide the
same option.
Sun Solaris network settings
Tuning TCP/IP settings for Sun Solaris servers and clients can improve
performance.
v TCPWINDOWSIZE 32K, which is set in the client's dsm.sys file, is recommended for
the Solaris client in FDDI and Fast (100 Mb) Ethernet network environments.
v TCPWINDOWSIZE 63K or higher is recommended for Gigabit Ethernet network
environments. One good way to find the optimal TCPWINDOWSIZE value in
your specific network environment is to run the TTCP program multiple times,
with a different TCPWINDOWSIZE set for each run. The raw network throughput
number reported by TTCP can be used as a guide for selecting the best
TCPWINDOWSIZE for your Tivoli Storage Manager server and client. TTCP is
freeware that can be downloaded from many Sun freeware web sites. The
default values for the TCP transmit and receive buffers are only 8 KB on Solaris. The
default values for tcp_xmit_hiwat and tcp_recv_hiwat must be changed to the
value of TCPWINDOWSIZE to avoid TCP buffer overrun problems. You can
use the Solaris "ndd -set" command to change the value of these two TCP
buffers.
v On SunOS, the TCP/IP software parameters can be changed by editing the
netinet/in_proto.c file in the release kernel build directory (usually /usr/sys).
After changing the parameters, you must rebuild the kernel. The parameters
that can affect performance are:
tcp_default_mss
Specifies the default Maximum Segment Size (MSS) for TCP in bytes.
The MSS is based on the Maximum Transmission Unit (MTU) size of the
network if the destination is on the same network. To avoid
fragmentation, the conservative value of 512 is used. For improved
performance on Ethernet or Token-Ring, larger MSS values are
recommended (for example, settings of 1024, 1500, 2048, or 4096 can be
used). On Ethernet LANs, the largest MTU value is 1500.
tcp_sendspace
Specifies the number of bytes that the user can send to a TCP socket
buffer before being blocked. The default values can be changed on a
given socket with the SO_SNDBUF ioctl. The default value is 4096.
Recommendation: Set the tcp_sendspace parameter to 16 KB or 32 KB.
tcp_recvspace
Specifies the number of bytes that the remote TCP can send to a TCP
socket buffer before being blocked. The default values can be changed
on a given socket with the SO_RCVBUF ioctl. The default value is 4096.
Recommendation: Set the tcp_recvspace parameter to 16 KB or 32 KB.
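The TTCP approach described above, sending a fixed amount of data and timing each run with a different window size, can be sketched in miniature over the loopback interface. This is an illustrative stand-in, not TTCP itself, and real measurements must cross the actual network between client and server:

```python
import socket
import threading
import time

def ttcp_probe(window, total=1 << 20):
    """Send `total` bytes over loopback with the given socket buffer size
    and return (elapsed_seconds, bytes_received). Illustrative stand-in
    for a TTCP run; loopback timings say nothing about the real network."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, window)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    received = [0]

    def reader():
        conn, _ = srv.accept()
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            received[0] += len(chunk)
        conn.close()

    t = threading.Thread(target=reader)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, window)
    cli.connect(srv.getsockname())
    start = time.time()
    cli.sendall(b"\0" * total)
    cli.close()
    t.join()
    srv.close()
    return time.time() - start, received[0]

# Try candidate window sizes, as you would with repeated TTCP runs:
for win in (32 * 1024, 63 * 1024):
    elapsed, nbytes = ttcp_probe(win)
    print(win, nbytes, round(elapsed, 3))
```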

z/OS network settings


You can configure the TCP/IP address space for IBM TCP/IP for z/OS and tune
TCP/IP and z/OS UNIX system services.



z/OS server with IBM TCP/IP for z/OS
You can configure the TCP/IP address space for IBM TCP/IP for z/OS.

During initialization of the TCP/IP address space, system operation and
configuration parameters are read from a configuration data set. The program
searches for the data set job_name.node_name.TCPIP, where node_name is the node
name of the system as specified on the VMCF initialization record. VMCF is a
subsystem defined by a line in the IEFSSNxx member that causes the VMCF
address space to be created and initialized. If this data set is not found, the
program uses the first of the following data sets it finds:
v tcpip.node_name.TCPIP
v job_name.PROFILE.TCPIP
v tcpip.PROFILE.TCPIP

This section discusses only the configuration parameters that affect overall system
performance. The various free pool sizes can be configured depending on the user
environment and are discussed below. In our lab environment, default values were
used, except as noted. These settings serve as our recommended values. However,
these values might need to be altered depending on system capacity requirements.

TCPIP.DATA

TCPIP.DATA contains hostname, domainorigin, nsinteraddr (name server), and so
on. The content of TCPIP.DATA is the same as for previous releases of TCP/IP for
z/OS. For a sample TCPIP.DATA, see the IP Configuration manual or see the
sample shipped with the product. One important recommendation is to keep the
"TRACE RESOLVER" statement commented out to avoid complete tracing of all
name server queries. This trace should be used for debugging purposes only.

PROFILE.TCPIP

During initialization of the TCPIP stack, configuration parameters for the stack are
read from the PROFILE.TCPIP configuration data set. Reference the z/OS IP
Configuration manual for additional information on the parameters that are used in
this file.

The PROFILE.TCPIP contains TCP buffer sizes, LAN controller definitions, server
ports, home IP addresses, gateway statements, VTAM® LUs for Telnet use, and so
on.

| The TCPWINDOWSIZE server option allows you to set the TCP/IP send and
| receive buffers independently of the TCP/IP profile settings. The default size is 63 KB. Therefore,
| you only need to set the TCP/IP profile TCPMAXRCVBUFRSIZE parameter to a
| value equal to or larger than the value you want for the server TCPWINDOWSIZE
| option. You can set the TCPSENDBFRSIZE and TCPRCVBUFRSIZE parameters to
| values appropriate for the non-Tivoli Storage Manager network workloads on the
| system, because these parameters are overridden by the server TCPWINDOWSIZE
| option. When send/recv buffer sizes are not specified in the PROFILE, a default
| size of 16 KB is used for send/recv buffers.
| IPCONFIG PATHMTUDISCOVERY
| TCPCONFIG TCPMAXRCVBUFRSIZE 524288
| TCPSENDBFRSIZE 65535
| TCPRCVBUFRSIZE 65535



| Note: The FTP server and client application override the default settings and use
64 KB-1 as the TCP window size and 180 KB for send/receive buffers.
Therefore, no change is required in the TCPCONFIG statement for the FTP
server and client.

| BEGINROUTES-ENDROUTES

| Use the BEGINROUTES statement to add static routes to the IP route table. The
| BEGINROUTES statement allows a BSD-style syntax to be specified for the destination IP
| address and address mask. The destination IP address can be an IPv4 or IPv6
| address and does not need to be a fully qualified address. BEGINROUTES is the
| recommended method for defining static routes.

| Specify the MTU for the OSA-Express or OSA-Express2 link on the ROUTE
| statement within a BEGINROUTES/ENDROUTES block. The maximum frame size
| supported on OSA-Express or OSA-Express2 adapters is 8992 bytes. Use
| NODELAYACKS on the PORT statement for the TSM server or on a ROUTE
| statement for a link dedicated for backup activity. Immediately acknowledging
| packets received by the server can improve backup throughput significantly.
| BEGINROUTES
| ;
| ; Destination Subnet Mask First Hop Link Name Packet Size Options
| ;
| ROUTE 10.11.214.0 255.255.252.0 = ET0 MTU 1500
| ROUTE 10.10.48.0 255.255.248.0 = GIGA1 MTU 8992 NODELAYACKS
| ROUTE 10.10.56.0 255.255.248.0 = GIGA2 MTU 8992 NODELAYACKS
| ROUTE DEFAULT 10.11.214.1 ET0 MTU 1500
| ;
| ENDROUTES

| The IP static routes can be modified in the following ways:
| v The routing table can be replaced by using the VARY TCPIP,,OBEYFILE command.
| v Incoming ICMP Redirect packets can replace IPv4 static routes, and also add
| routes to the routing table.
| v Incoming ICMPv6 Redirect packets can replace IPv6 static routes, and also add
| routes to the routing table.
| v Dynamic routing daemons (for example, OMPROUTE) can replace IPv4 or IPv6
| replaceable static routes, as well as add dynamic routes to the routing table.
| v Router advertisements can update IPv6 replaceable static routes, as well as add
| dynamic routes to the routing table.

| The first BEGINROUTES statement of each configuration data set that is issued
| replaces all static routes in the existing routing table with the new route
| information. All static routes are deleted, along with all routes learned by way of
| ICMP or ICMPv6 redirects. Routes created by OMPROUTE and router
| advertisements are not deleted. Subsequent BEGINROUTES statements in the same
| data set add entries to the routing table.

| Static routes defined by the BEGINROUTES-ENDROUTES block cannot be


| replaced by OMPROUTE or router advertisements unless the static routes are
| defined as replaceable. If you want OMPROUTE or router advertisements to begin
| managing all routes, an empty BEGINROUTES-ENDROUTES block can be used in
| a VARY TCPIP,,OBEYFILE command data set to eliminate the existing static routes.
| ROUTE entries within a BEGINROUTES-ENDROUTES block can be coded only for
| LINK names or INTERFACE names that exist when the entry is processed.



| When an incorrect ROUTE entry statement is encountered, the ROUTE entry is
| rejected with an error message, but the rest of the ROUTE entries in that
| BEGINROUTES-ENDROUTES block are still processed. Subsequent
| BEGINROUTES-ENDROUTES blocks in the same initial profile or VARY
| TCPIP,,OBEYFILE command data set are also processed.

| Route precedence is as follows:


| v If a route exists to the destination address (a host route), it is chosen first.
| v For IPv4, if subnet, network, or supernetwork routes exist to the destination, the
| route with the most specific network mask (the mask with the most bits on) is
| chosen second.
| v For IPv6, if prefix routes exist to the destination, the route with the most specific
| prefix is chosen second.
| v If multicast default routes exist (only valid for IPv4), the one with the most
| specific multicast address is chosen third.
| v Default routes are chosen when no other route exists to a destination.
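For IPv4, the precedence rules above amount to longest-prefix matching with the default route as the fallback: a host route is simply the most specific prefix possible. An illustrative sketch (not z/OS code; the route table here is hypothetical):

```python
import ipaddress

def choose_route(dest, routes):
    """Pick the route for dest by the precedence described above: a host
    route (/32) is the most specific match of all, otherwise the route
    with the longest matching prefix wins, and the default route
    (0.0.0.0/0) applies only when nothing else matches."""
    addr = ipaddress.ip_address(dest)
    matches = [r for r in routes if addr in ipaddress.ip_network(r)]
    if not matches:
        return None
    return max(matches, key=lambda r: ipaddress.ip_network(r).prefixlen)

# Hypothetical table echoing the BEGINROUTES example above:
table = ["10.10.48.0/21", "10.10.0.0/16", "0.0.0.0/0"]
print(choose_route("10.10.50.5", table))   # most specific subnet wins
print(choose_route("9.1.1.1", table))      # falls through to the default
```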

TCP/IP and z/OS UNIX system services performance tuning


You can tune TCP/IP and z/OS UNIX system services.
v Set the client/server TCP window size to the allowed maximum.

Tip: Set the TCP window size on z/OS to the allowed maximum by setting
TCPRCVBUFRSIZE to 32K or larger. If the client workstation permits, set the
client window size to 65535. However, if the installation is storage
constrained, use the default TCPRCVBUFRSIZE of 16K.
v Ensure that the client and server MTU/packet sizes are equal. Follow the
recommendations given in the PROFILE.TCPIP section.
v Ensure that TCP/IP and all other traces are turned off for optimal performance.
Trace activity creates extra processing overhead.
v Follow the z/OS UNIX System Services performance tuning guidelines in the
z/OS UNIX System Services Planning manual or at this URL:
http://www.ibm.com/servers/eserver/zseries/zos/unix/bpxa1tun.html
v Region sizes and dispatching priority: It is highly recommended to set the region
size to 0K or 0M for the TCPIP stack address space and for started tasks such as
the FTP server, the SMTP/NJE server, the Web server, the Tivoli Storage
Manager server, and so on.
v If your environment permits, set the dispatching priorities for TCPIP and VTAM
to equivalent values and keep servers slightly lower than TCPIP and VTAM. For other
started tasks, such as FTP, keep the priority slightly lower than the TCPIP task.
v If you are using Work Load Manager, follow the above recommendations when
your installation defines performance goals in a service policy. Service policies
are defined through an ISPF application and they set goals for all types of z/OS
managed work.
v If you are using TCP/IP V3R2, refer to the MVS TCP/IP V3R2 Performance Tuning
Guide. This tuning guide also includes a step-by-step process for tuning other
TCP/IP platforms such as AIX, OS/2®, DOS, and VM.
v Estimate how many z/OS UNIX System Services users, processes, ptys, sockets,
and threads are needed for your z/OS UNIX installation. Update your
BPXPRMxx member in SYS1.PARMLIB accordingly.
v Spread z/OS UNIX user HFS data sets over multiple DASD volumes for optimal
performance.



v Monitor your z/OS UNIX resources with RMF™ or system commands (DISPLAY
ACTIVE, DISPLAY OMVS, and so on).



Chapter 5. Archive function
The archive function is a powerful way to store inactive data with a fixed retention
time. Some Tivoli Storage Manager users have been frustrated with the archive
function because of inadequate performance or a perceived lack of functionality. A
review of the changes that the archive function has undergone and some
implementation advice might overcome that frustration.

The Tivoli Storage Manager client provides these basic functions:


v Backup and restore, which provides protection against data loss. This function
includes versioning and management of the active set of data for full system
restore.
v Archive and retrieve, which stores inactive data for a fixed retention period. This
data can be individual files or groups of files. You can also archive an entire
system.

Since the first version of Tivoli Storage Manager, AdStar Distributed Storage
Manager (ADSM), the archiving function has undergone major changes. Today,
database performance, function, and ease of use have improved significantly.

Using the archive function


Before using the archive function, you should determine if archiving is the proper
function for your specific goal.

Consider using backup sets as an alternative to archives if you want to archive an
entire filespace at a point in time. Backup sets have a much smaller impact on the
database: each backup set creates only one entry in the database. If backup sets are
not appropriate, another option might be to aggregate the files and directories
before archiving them (for example, with PKZIP, tar, and so on).
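Aggregating many small files into a single object before archiving keeps the database to one entry per package instead of one per file and directory. A minimal sketch using Python's tarfile module (the paths shown are hypothetical):

```python
import pathlib
import tarfile

def bundle_for_archive(src_dir, out_tar):
    """Pack a directory tree into one compressed tar file so that the
    subsequent archive operation stores a single object rather than an
    entry per file and directory."""
    with tarfile.open(out_tar, "w:gz") as tar:
        tar.add(src_dir, arcname=pathlib.Path(src_dir).name)
    return out_tar

# Hypothetical usage: bundle first, then archive only the single result,
# for example with the backup-archive client command line:
#   dsmc archive /tmp/project-bundle.tar.gz
```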

Archiving a set of related files, even on a repeated basis, is appropriate. Also,
archiving to prepare a snapshot of a set of files or directories before making a
significant change is appropriate.

Some archive practices that should be avoided are:


v As a point-in-time recovery strategy: This usually means frequent, large-scale
archives. These kinds of archives tend to exacerbate the problems described
above by flooding the database with many directory and file entries. Often the
requirement can be satisfied with client backups or with server-generated
backup sets.
v As a tape rotation strategy: In some cases there is a need to implement a tape
management strategy that regularly and predictably sends specific tapes off-site
and then, at a predefined time, returns those tapes on-site. This is not the
normal mode of operation of Tivoli Storage Manager, but it can be accomplished in
various ways. Using archiving to accomplish tape rotation is likely to create
problems with excessive directory and file entries in the database. A more
efficient way to do this is to use the client backup function and a series of
storage pools. Backups are directed to the storage pools in a round-robin
fashion. When it is time to recover the tape volumes from the off-site location,
all the volumes are deleted from that storage pool.

© Copyright IBM Corp. 1993, 2007 51


Identifying and correcting archive-related problems
You can recognize some symptoms of possible archive-related problems and take
corrective actions.

Here are some of the symptoms that might indicate a problem with archiving:
v Slow archive or retrieve throughput: Archive performance degrades as more
and more duplicate directory entries are archived. Throughput can degrade
over a period of days or months, and it might also gradually worsen as a
single archive operation progresses. Throughput can even become so
slow that it appears that the archive has stopped. Examine all other tuning
variables first to be sure the problem is not one that can be fixed by parameter
changes.
v Long inventory expiration duration: Inventory expiration might take longer and
longer because it must process an increasing load of duplicate archived
directories. Also, for each expired directory archive, inventory expiration checks
for dependent files. This tends to compound the problem when there are many
duplicate archived directories.
v Excessive growth of the database: The database grows because each archived
directory requires an entry. First consider other possible causes of database
growth: new clients, changes in management class retention or versioning, or
heavy growth from existing clients.

Corrective actions

If you are experiencing any of the symptoms described above and you think it
might be related to excessive archived directories, the best action to take is to
contact the IBM Software Support Center. They can help you use the service
utilities to examine your system and to take corrective action if it is appropriate.
You might be asked to run these commands:
v QUERY OCCUPANCY TYPE=ARCHIVE
v UPDATE ARCHIVE SHOWSTAT

| The duplicates value indicated in the UPDATE ARCHIVE output is based on
| duplicates by path and directory name only. What you are looking for in this
| output is a large number of duplicates, hundreds of thousands at least, or a high
| ratio of directories to files. These might indicate a problem.

| Attention: Do not use the UPDATE ARCHIVE command unless directed to do so
| by an IBM representative.

| If it is determined that you are experiencing a problem with excessive directory
| archives, you might be directed to one or more of these solutions:
| v Use the UPDATE ARCHIVE command to eliminate some or most of the unnecessary
| directory archives. This utility is not effective in all cases, depending on when
| and how you arrived at the current condition.
| v Use the UNDO ARCHCONVERSION command to remove the archive package
| performance enhancements for a node. Then issue the CONVERT ARCHIVE
| command. This forces the node to use the new table structure in the database.
| This new structure is more efficient.
| v Use the -V2ARCHIVE option when you create the archives. This option reverts
| the archive process to Version 2 functionality. This means that directories are not



| archived. On retrieve, Tivoli Storage Manager, if necessary, creates directories
| with default security permissions. Use this option if you can tolerate archive of
| files only.
| v Use a blank character string (not null) for the archive descriptions. This prevents
| excessive unique archives of the same path and directory name.



Appendix. Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation


Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan

The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.

This information could include technical inaccuracies or typographical errors.


Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.

Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.

Such information may be available, subject to appropriate terms and conditions,


including in some cases, payment of a fee.

The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.


Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

56 IBM Tivoli Storage Manager: Performance Tuning Guide


Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:

AIX
DB2
DFSMSrmm
Domino
Enterprise Storage Server
eServer
IBM
Informix
iSeries
Lotus
Magstar
MVS
OS/390
Passport Advantage
pSeries
RACF
Redbooks
SANergy
SecureWay
SP
Tivoli
TotalStorage
VTAM
WebSphere
z/OS
zSeries

Microsoft, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or
both.

UNIX is a registered trademark of The Open Group in the United States and other
countries.

Other company, product, or service names may be trademarks or service marks of
others.

Appendix. Notices 57
Glossary
The terms in this glossary are defined as they pertain to the IBM Tivoli Storage
Manager library. If you do not find the term you need, refer to the IBM Software
Glossary on the Web at this address: http://www.ibm.com/ibm/terminology/.

This glossary may include terms and definitions from:

- The American National Standard Dictionary for Information Systems, ANSI
X3.172-1990, copyright (ANSI). Copies may be purchased from the American
National Standards Institute, 11 West 42nd Street, New York 10036.
- The Information Technology Vocabulary, developed by Subcommittee 1, Joint
Technical Committee 1, of the International Organization for Standardization and
the International Electrotechnical Commission (ISO/IEC JTC1/SC1).
absolute mode
A backup copy group mode that specifies that a file is considered for
incremental backup even if the file has not changed since the last backup.
See also mode. Contrast with modified mode.
access mode
An attribute of a storage pool or a storage volume that specifies whether
the server can write to or read from the storage pool or storage volume.
The access mode can be read/write, read-only, or unavailable. Volumes in
primary storage pools can also have an access mode of destroyed. Volumes
in copy storage pools can also have an access mode of offsite.
acknowledgment
A message sent from one machine to another confirming receipt of data.
activate
To validate the contents of a policy set and then make it the active policy
set.
active-data pool
A named set of storage pool volumes that contains only active versions of
client backup data.
active policy set
The activated policy set that contains the policy rules currently in use by
all client nodes assigned to the policy domain. See also policy domain and
policy set.
active version
The most recent backup copy of a file stored by IBM Tivoli Storage
Manager. The active version of a file cannot be deleted until a backup
process detects that the user has either replaced the file with a newer
version or has deleted the file from the workstation. Contrast with inactive
version.
activity log
A log that records normal activity messages generated by the server. These
messages include information about server and client operations, such as
the start time of sessions or device I/O errors. Each message includes a
message ID, date and time stamp, and a text description. The number of
days to retain messages in the activity log can be specified.



administrative client
A program that runs on a file server, workstation, or mainframe that
administrators use to control and monitor the IBM Tivoli Storage Manager
server. Contrast with backup-archive client.
administrative command schedule
A database record that describes the planned processing of an
administrative command during a specific time period. See also client
schedule.
administrative privilege class
See privilege class.
administrative session
A period of time in which an administrator user ID communicates with a
server to perform administrative tasks. Contrast with client node session.
administrator
A user who has been registered to the server. Administrators can be
authorized to one or more of the following administrative privilege classes:
system, policy, storage, operator, or analyst. Administrators can use the
administrative commands and queries allowed by their privileges.
agent node
A client node that has been granted proxy authority to perform operations
on behalf of another client node, which is the target node.
aggregate
An object, stored in one or more storage pools, consisting of a group of
logical files packaged together. See logical file and physical file.
analyst privilege class
A privilege class that allows an administrator to reset statistics. See also
privilege class.
application client
One of the IBM Tivoli Storage Manager data protection programs installed
on a system to protect an application. The IBM Tivoli Storage Manager
server provides backup services to an application client.
archive
To copy one or more files to a storage pool for long-term storage. Archived
files can include descriptive information and can be retrieved by archive
date, by file name, or by description. Contrast with retrieve.
archive copy
A file that was archived to server storage.
archive copy group
A policy object containing attributes that control the generation,
destination, and expiration of archived files. An archive copy group
belongs to a management class.
archive retention grace period
The number of days that IBM Tivoli Storage Manager retains an archived
file when the server is unable to rebind the file to an appropriate
management class.
assigned capacity
The portion of available space that can be used to store database or
recovery log information. See also available space.



association
The defined relationship between a client node and a client schedule. An
association identifies the name of a schedule, the name of the policy
domain to which the schedule belongs, and the name of a client node that
performs scheduled operations.
On a configuration manager, the defined relationship between a profile and
an object such as a policy domain. Profile associations define the
configuration information that will be distributed to a managed server
when it subscribes to the profile.
audit To check for logical inconsistencies between information that the server has
and the actual condition of the system. IBM Tivoli Storage Manager can
audit volumes, the database, libraries, and licenses. For example, when
IBM Tivoli Storage Manager audits a volume, the server checks for
inconsistencies between information about backed-up or archived files
stored in the database and the actual data associated with each backup
version or archive copy in server storage.
authentication
The process of checking a user’s password before allowing that user access
to the server. Authentication can be turned on or off by an administrator
with system privilege.
authority
The right granted to a user to perform tasks with IBM Tivoli Storage
Manager servers and clients. See also privilege class.
autochanger
A small, multi-slot tape device that automatically puts tape cartridges into
tape drives. See also library.
available space
The amount of space, in megabytes, that is available to the database or the
recovery log. This space can be used to extend the capacity of the database
or the recovery log, or to provide sufficient free space before a volume is
deleted from the database or the recovery log.
back up
To copy information to another location to ensure against loss of data. In
IBM Tivoli Storage Manager, you can back up user files, the IBM Tivoli
Storage Manager database, and storage pools. Contrast with restore. See
also database backup series and incremental backup.
backup-archive client
A program that runs on a workstation or file server and provides a means
for users to back up, archive, restore, and retrieve files. Contrast with
administrative client.
backup copy group
A policy object containing attributes that control the generation,
destination, and expiration of backup versions of files. A backup copy
group belongs to a management class.
backup retention grace period
The number of days that IBM Tivoli Storage Manager retains a backup
version after the server is unable to rebind the file to an appropriate
management class.

Glossary 61
backup set
A portable, consolidated group of active backup versions of files, generated
for a backup-archive client.
backup version
A file that a user backed up to server storage. More than one backup
version can exist in server storage, but only one backup version is the
active version. See also active version and inactive version.
binding
The process of associating a file with a management class name. See
rebinding.
buffer pool
Temporary space used by the server to hold database or recovery log
pages. See database buffer pool and recovery log buffer pool.
cache The process of leaving a duplicate copy on random access media when the
server migrates a file to another storage pool in the hierarchy.
central scheduler
A function that allows an administrator to schedule client operations and
administrative commands. The operations can be scheduled to occur
periodically or on a specific date. See client schedule and administrative
command schedule.
client A program running on a PC, workstation, file server, LAN server, or
mainframe that requests services of another program, called the server. The
following types of clients can obtain services from an IBM Tivoli Storage
Manager server: administrative client, application client, API client,
backup-archive client, and HSM client (also known as IBM Tivoli Storage
Manager).
client domain
The set of drives, file systems, or volumes that the user selects to back up
or archive using the backup-archive client.
client migration
The process of copying a file from a client node to server storage and
replacing the file with a stub file on the client node. The space
management attributes in the management class control this migration. See
also space management.
client node
A file server or workstation on which the backup-archive client program
has been installed, and which has been registered to the server.
client node session
A period of time in which a client node communicates with a server to
perform backup, restore, archive, retrieve, migrate, or recall requests.
Contrast with administrative session.
client options file
A file that a client can change, containing a set of processing options that
identify the server, communication method, and options for backup,
archive, hierarchical storage management, and scheduling. Also called the
dsm.opt file.
client-polling scheduling mode
A client/server communication technique where the client queries the
server for work. Contrast with server-prompted scheduling mode.



client schedule
A database record that describes the planned processing of a client
operation during a specific time period. The client operation can be a
backup, archive, restore, or retrieve operation, a client operating system
command, or a macro. See also administrative command schedule.
client system options file
A file, used on UNIX clients, containing a set of processing options that
identify the IBM Tivoli Storage Manager servers to be contacted for
services. This file also specifies communication methods and options for
backup, archive, hierarchical storage management, and scheduling. Also
called the dsm.sys file. See also client user options file.
client (TCP/IP)
A client is a computer or process that requests services on the network. A
server is a computer or process that responds to a request for service from
a client.
client user options file
A user-created file, used on UNIX clients, containing a set of processing
options that identify the server, communication method, backup and
archive options, space management options, and scheduling options. Also
called the dsm.opt file. See also client system options file.
closed registration
A registration process in which only an administrator can register
workstations as client nodes with the server. Contrast with open registration.
collocation
The process of keeping all data belonging to a single client file space, a
single client node, or a group of client nodes on a minimal number of
sequential-access volumes within a storage pool. Collocation can reduce
the number of volumes that must be accessed when a large amount of data
must be restored.
collocation group
A user-defined group of client nodes whose data is stored on a minimal
number of volumes through the process of collocation.
compression
The process of saving storage space by eliminating empty fields or
unnecessary data in a file. In IBM Tivoli Storage Manager, compression can
occur at a workstation before files are backed up or archived to server
storage. On some types of tape drives, hardware compression can be used.
configuration manager
One IBM Tivoli Storage Manager server that distributes configuration
information to other IBM Tivoli Storage Manager servers (called managed
servers) via profiles. Configuration information can include policy and
schedules. See managed server and profile.
conversation (APPC)
The actual exchange of information between two TPs.
copy group
A policy object whose attributes control how backup versions or archive
copies are generated, where backup versions or archive copies are initially
located, and when backup versions or archive copies expire. A copy group
belongs to a management class. See also archive copy group, backup copy
group, backup version, and management class.

copy storage pool
A named set of volumes that contains copies of files that reside in primary
storage pools. Copy storage pools are used only to back up the data stored
in primary storage pools. A copy storage pool cannot be a destination for a
backup copy group, an archive copy group, or a management class (for
space-managed files). See primary storage pool and destination.
CPU utilization
CPU utilization is computed as total system CPU time divided by the
elapsed time.
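As a minimal sketch of this calculation (the function and variable names are illustrative, not part of the product):

```python
def cpu_utilization_percent(total_cpu_seconds: float, elapsed_seconds: float) -> float:
    """Percentage of elapsed wall-clock time the system spent doing CPU work."""
    return 100.0 * total_cpu_seconds / elapsed_seconds

# 45 CPU-seconds consumed during a 300-second operation:
print(cpu_utilization_percent(45.0, 300.0))  # 15.0
```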
damaged file
A physical file for which IBM Tivoli Storage Manager has detected read
errors.
Data Link Control (DLC) (APPC)
The communications link protocol used to transmit data between two
physically linked machines. User data is transmitted inside DLC frames.
Token-ring, Ethernet, and SDLC protocols are examples of commonly used
DLCs on SNA networks today.
database
A collection of information about all objects managed by the server,
including policy management objects, users and administrators, and client
nodes.
database backup series
One full backup of the database, plus up to 32 incremental backups made
since that full backup. Each full backup that is run starts a new database
backup series. A number identifies each backup series.
database backup trigger
A set of criteria that defines when and how database backups are run
automatically. The criteria determine how often the backup is run, whether
the backup is a full or incremental backup, and where the backup is stored.
database buffer pool
Storage that is used as a cache to allow database pages to remain in
memory for long periods of time, so that the server can make continuous
updates to pages without requiring input or output (I/O) operations from
external storage.
database snapshot
A complete backup of the entire IBM Tivoli Storage Manager database to
media that can be taken off-site. When a database snapshot is created, the
current database backup series is not interrupted. A database snapshot
cannot have incremental database backups associated with it. See also
database backup series. Contrast with full backup.
data mover
A device, defined to IBM Tivoli Storage Manager, that moves data on
behalf of the server. A NAS file server can be a data mover.
default management class
A management class assigned to a policy set that the server uses to
manage backed-up or archived files when a user does not specify a
management class for a file.



desktop client
The group of backup-archive clients supported by IBM Tivoli Storage
Manager that includes clients on Windows, Apple, and Novell NetWare
operating systems.
destination
A copy group or management class attribute that specifies the primary
storage pool to which a client file will be backed up, archived, or migrated.
device class
A named set of characteristics applied to a group of storage devices. Each
device class has a unique name and represents a device type of disk, file,
optical disk, or tape.
device configuration file
A file that contains information about defined device classes, and, on some
IBM Tivoli Storage Manager servers, defined libraries and drives. The file
can be created by using an IBM Tivoli Storage Manager administrative
command or by using an option in the server options file. The information
is a copy of the device configuration information in the IBM Tivoli Storage
Manager database.
disaster recovery manager (DRM)
A function in IBM Tivoli Storage Manager that assists in preparing and
later using a disaster recovery plan file for the IBM Tivoli Storage Manager
server.
disaster recovery plan
A file created by disaster recovery manager (DRM) that contains
information about how to recover computer systems if a disaster occurs
and scripts that can be run to perform some recovery tasks. The file
includes information about the software and hardware used by the IBM
Tivoli Storage Manager server, and the location of recovery media.
domain
See policy domain or client domain.
DRM A short name for disaster recovery manager.
dsm.opt file
See client options file and client user options file.
dsmserv.opt
See server options file.
dsm.sys file
See client system options file.
dynamic
A value for serialization that specifies that IBM Tivoli Storage Manager
accepts the first attempt to back up or archive a file regardless of whether
the file is modified during the backup or archive process. See also
serialization. Contrast with shared dynamic, shared static, and static.
elapsed time
All elapsed time is calculated by extracting the time from the Tivoli
Storage Manager activity log for the function being performed. Therefore,
the connect time for client backup and restore is not included.
enterprise configuration
A method of setting up IBM Tivoli Storage Manager servers so that the
administrator can distribute the configuration of one of the servers to the
other servers, using server-to-server communication. See configuration manager,
manager, managed server, profile, and subscription.
enterprise logging
The sending of events from IBM Tivoli Storage Manager servers to a
designated event server. The event server routes the events to designated
receivers, such as to a user exit. See also event.
estimated capacity
The available space, in megabytes, of a storage pool.
event An administrative command or a client operation that is scheduled to be
run using IBM Tivoli Storage Manager scheduling.
A message that an IBM Tivoli Storage Manager server or client issues.
Messages can be logged using IBM Tivoli Storage Manager event logging.
event record
A database record that describes actual status and results for events.
event server
A server to which other servers can send events for logging. The event
server routes the events to any receivers that are enabled for the sending
server’s events.
exclude
To identify files that you do not want to include in a specific client
operation, such as backup or archive. You identify the files in an
include-exclude list.
exclude-include list
See include-exclude list.
expiration
The process by which files are identified for deletion because their
expiration date or retention period has passed. Backed-up or archived files
are marked expired by IBM Tivoli Storage Manager based on the criteria
defined in the backup or archive copy group.
expiration date
On some IBM Tivoli Storage Manager servers, a device class attribute used
to notify tape management systems of the date when IBM Tivoli Storage
Manager no longer needs a tape volume. The date is placed in the tape
label so that the tape management system does not overwrite the
information on the tape volume before the expiration date.
export To copy administrator definitions, client node definitions, policy
definitions, server control information, or file data to external media, or
directly to another server. Used to move or copy information between
servers.
extend
To increase the portion of available space that can be used to store
database or recovery log information. Contrast with reduce.
file space
A logical space in IBM Tivoli Storage Manager server storage that contains
a group of files backed up or archived by a client. For clients on Windows
systems, a file space contains files from a logical partition that is identified
by a volume label. For clients on UNIX systems, a file space contains files
from the same file system, or the part of a file system that stems from a
virtual mount point. Clients can restore, retrieve, or delete their file spaces
from IBM Tivoli Storage Manager server storage. IBM Tivoli Storage
Manager does not necessarily store all the files belonging to a single file
space together, but can identify all the files in server storage that belong to
a single file space.
file space ID (FSID)
A unique numeric identifier that the server assigns to a file space when it
is stored in server storage.
frequency
A copy group attribute that specifies the minimum interval, in days,
between incremental backups.
FSID See file space ID.
full backup
The process of backing up the entire server database. A full backup begins
a new database backup series. See also database backup series and incremental
backup. Contrast with database snapshot.
fuzzy copy
A backup version or archive copy of a file that might not accurately reflect
the original content of the file because IBM Tivoli Storage Manager backed
up or archived the file while the file was being modified. See also backup
version and archive copy.
Gigabyte (GB)
1,073,741,824 bytes (two to the thirtieth power) when used in this
publication.
hierarchical storage management (HSM) client
The Tivoli Storage Manager for Space Management program that runs on
workstations to allow users to maintain free space on their workstations by
migrating and recalling files to and from IBM Tivoli Storage Manager
storage. Synonymous with space manager client.
high migration threshold
A percentage of the storage pool capacity that identifies when the server
can start migrating files to the next available storage pool in the hierarchy.
Contrast with low migration threshold. See server migration.
HSM client
Hierarchical storage management client. Also known as the space manager
client.
IBM Tivoli Storage Manager command script
A sequence of IBM Tivoli Storage Manager administrative commands that
are stored in the database of the IBM Tivoli Storage Manager server. You
can run the script from any interface to the server. The script can include
substitution for command parameters and conditional logic.
image backup
A backup of a full file system or raw logical volume as a single object.
import
The process of copying exported administrator definitions, client node
definitions, policy definitions, server control information or file data from
external media to a target server. A subset of information can be imported
to a target server from the external media. Used to move or copy
information between servers. See export.

inactive version
A backup version of a file that is either not the most recent backup version,
or that is a backup version of a file that no longer exists on the client
system. Inactive backup versions are eligible for expiration processing
according to the management class assigned to the file. Contrast with active
version.
include-exclude file
A file containing statements that IBM Tivoli Storage Manager uses to
determine whether to include certain files in specific client operations, and
to determine the associated management classes to use for backup, archive,
and space management. See include-exclude list.
include-exclude list
A group of include and exclude option statements that IBM Tivoli Storage
Manager uses. The exclude options identify files that are not to be included
in specific client operations such as backup or space management. The
include options identify files that are exempt from the exclusion rules. The
include options can also assign a management class to a file or group of
files for backup, archive, or space management services. The
include-exclude list for a client may include option statements from the
client options file, from separate include-exclude files, and from a client
option set on the server.
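For illustration only, a hypothetical fragment of such a list in the client option syntax (the management class name MCSPECIAL and the paths are invented; the client evaluates these statements from the bottom of the list up, using the first match):

```
* Skip temporary files and core dumps
exclude /tmp/.../*
exclude /.../core
* Bind database files to a special management class (name is hypothetical)
include /home/.../*.db MCSPECIAL
```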
incremental backup
The process of backing up files or directories that are new or have changed
since the last incremental backup. See also selective backup.
The process of copying only the pages in the database that are new or
changed since the last full or incremental backup of the database. Contrast
with full backup. See also database backup series.
Internet address (TCP/IP)
An Internet address is a unique 32-bit address identifying each node in an
internet. An internet address consists of a network number and a local
address. Internet addresses are written in dotted-decimal notation
and are used to route packets through the network.
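The dotted-decimal form maps directly onto the 32-bit value; a small sketch (not part of the product) showing the packing:

```python
def dotted_to_int(addr: str) -> int:
    """Pack a dotted-decimal IPv4 address into its 32-bit integer form."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)  # shift in one octet at a time
    return value

print(dotted_to_int("9.0.0.1"))          # 150994945
print(dotted_to_int("255.255.255.255"))  # 4294967295
```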
ITR The Internal Throughput Rate, measured in units of work (for example, files
processed) per unit of CPU time.
Kilobyte (KB)
1024 bytes (two to the tenth power) when used in this publication.
KB per CPU second
The number compares how effectively a single CPU transfers KB of data
per one CPU busy second. It is calculated as follows:
KB per CPU second = (Throughput (KB/sec) x 100) / (number of CPUs x % server CPU utilization)
A larger number means the CPU is more efficient at transferring the data. This
number can be used to compare the effectiveness of CPUs across different
workloads, or to compare different CPU types for a given performance
evaluation. For SMP systems it is important to understand that this metric
applies to the efficiency of a single CPU. The total effectiveness of
multiple CPUs in an SMP system working together can be estimated by
multiplying "KB per CPU second" by the number of available CPUs.
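The formula above can be checked with a short sketch (function and variable names are illustrative):

```python
def kb_per_cpu_second(throughput_kb_sec: float, num_cpus: int, cpu_util_percent: float) -> float:
    """KB transferred per second of busy time on a single CPU."""
    return (throughput_kb_sec * 100.0) / (num_cpus * cpu_util_percent)

# 10,000 KB/sec moved by a 4-CPU server running at 25% server CPU utilization:
single_cpu = kb_per_cpu_second(10000, 4, 25)  # 10000.0 KB per CPU busy second
smp_total = single_cpu * 4                    # rough estimate for all 4 CPUs: 40000.0
```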
LAN-free data movement
The direct movement of client data between a client machine and a storage
device on a SAN, rather than on the LAN.



library
A repository for demountable recorded media, such as magnetic tapes.
For IBM Tivoli Storage Manager, a collection of one or more drives, and
possibly robotic devices (depending on the library type), which can be
used to access storage volumes.
In the AS/400 system, a system object that serves as a directory to other
objects. A library groups related objects, and allows the user to find objects
by name.
library client
An IBM Tivoli Storage Manager server that uses server-to-server
communication to access a library that is managed by another IBM Tivoli
Storage Manager server. See also library manager.
library manager
An IBM Tivoli Storage Manager server that controls device operations
when multiple IBM Tivoli Storage Manager servers share a storage device.
The device operations include mount, dismount, volume ownership, and
library inventory. See also library client.
logical file
A file stored in one or more server storage pools, either by itself or as part
of an aggregate. See also aggregate and physical file.
logical occupancy
The space used by logical files in a storage pool. Because logical occupancy
does not include the unused space created when logical files are deleted
from aggregates, it may be less than physical occupancy. See also physical file
and logical file.
logical volume
A portion of a physical volume that contains a file system.
For the IBM Tivoli Storage Manager server, the combined space on all
volumes for either the database or the recovery log. The database is one
logical volume, and the recovery log is one logical volume.
Logical Unit (LU) (APPC)
The "socket" used by a TP to obtain access to an SNA network. The LU is
the SNA software that accepts and executes the verbs from your transaction
programs (TPs). An LU manages the network on behalf of TPs and is
responsible for data routing. A TP gains access to SNA via an LU.
low migration threshold
A percentage of the storage pool capacity that specifies when the server
can stop the migration of files to the next storage pool. Contrast with high
migration threshold. See server migration.
macro file
A file that contains one or more IBM Tivoli Storage Manager
administrative commands, which can be run only from an administrative
client by using the MACRO command. Contrast with IBM Tivoli Storage
Manager command script.
managed object
A definition in the database of a managed server that was distributed to
the managed server by a configuration manager. When a managed server
subscribes to a profile, all objects associated with that profile become
managed objects in the database of the managed server. In general, a
managed object cannot be modified locally on the managed server. Objects
can include policy, schedules, client option sets, server scripts,
administrator registrations, and server and server group definitions.
managed server
An IBM Tivoli Storage Manager server that receives configuration
information from a configuration manager via subscription to one or more
profiles. Configuration information can include definitions of objects such
as policy and schedules. See configuration manager, subscription, and profile.
managed system
A client or server that requests services from the IBM Tivoli Storage
Manager server.
management class
A policy object that users can bind to each file to specify how the server
manages the file. The management class can contain a backup copy group,
an archive copy group, and space management attributes. The copy groups
determine how the server manages backup versions or archive copies of
the file. The space management attributes determine whether the file is
eligible to be migrated by the space manager client to server storage and
under what conditions the file is migrated. See also copy group, space
manager client, binding, and rebinding.
MAXDATARCV (NetBIOS)
The maximum receive data size, in bytes. This is the maximum size of the
user data in any frame that this node will receive on a session.
maximum extension
Specifies the maximum amount of storage space, in megabytes, by which
you can extend the database or the recovery log.
maximum reduction
Specifies the maximum amount of storage space, in megabytes, by which
you can reduce the database or the recovery log.
Maximum Transmission Unit (TCP/IP)
The size, in bytes, of the largest packet that a given layer of a
communications protocol can pass onwards.
maximum utilization
The highest percentage of assigned capacity used by the database or the
recovery log.
MAXIN (NetBIOS)
The number of NetBIOS message packets received before sending an
acknowledgment.
MAXOUT (NetBIOS)
The number of NetBIOS message packets to send before expecting an
acknowledgment.
Megabyte (MB)
1,048,576 bytes (two to the twentieth power) when used in this publication.
migrate
To move data from one storage location to another. See also client migration
and server migration.
mirroring
The process of writing the same data to multiple disks at the same time.
The mirroring of data protects against data loss within the database or
within the recovery log.

mode
A copy group attribute that specifies whether to back up a file that has not
been modified since the last time the file was backed up. See modified and
absolute.
modified mode
A backup copy group mode that specifies that a file is considered for
incremental backup only if it has changed since the last backup. A file is
considered changed if the date, size, owner, or permissions of the file have
changed. See also mode. Contrast with absolute mode.
mount
To place a data medium (such as a tape cartridge) on a drive in a position
to operate.
mount limit
A device class attribute that specifies the maximum number of volumes
that can be simultaneously accessed from the same device class. The mount
limit determines the maximum number of mount points. See mount point.
mount point
A logical drive through which volumes in a sequential access device class
are accessed. For removable media device types, such as tape, a mount
point is a logical drive associated with a physical drive. For the file device
type, a mount point is a logical drive associated with an I/O stream. The
number of mount points for a device class is defined by the value of the
mount limit attribute for that device class. See mount limit.
mount retention period
A device class attribute that specifies the maximum number of minutes
that the server retains a mounted sequential access media volume that is
not being used before it dismounts the sequential access media volume.
mount wait period
A device class attribute that specifies the maximum number of minutes
that the server waits for a sequential access volume mount request to be
satisfied before canceling the request.
MTU (TCP/IP)
See Maximum Transmission Unit.
NAS
Network-attached storage.
NAS node
A client node that is a network-attached storage (NAS) file server. Data for
the NAS node is transferred by the NAS file server, which is controlled by
the IBM Tivoli Storage Manager server. The Tivoli Storage Manager server
uses the network data management protocol (NDMP) to direct the NAS
file server in its backup and restore operations. A NAS node is also called
a NAS file server node.
native format
A format of data that is written to a storage pool directly by the IBM Tivoli
Storage Manager server. Contrast with non-native data format.
NDMP
Network Data Management Protocol.
NETBIOSBUFFERSIZE (NetBIOS)
The size of the NetBIOS communications buffer in kilobytes. It is found in
the dsm.opt server options file.

NETBIOSTIMEOUT (NetBIOS)
The number of seconds that must elapse before a time out occurs for a
NetBIOS send or receive. It is found in the dsm.opt server options file.
network-attached storage (NAS) file server
A dedicated storage device with an operating system that is optimized for
file-serving functions. In IBM Tivoli Storage Manager, a NAS file server
can have the characteristics of both a node and a data mover. See also data
mover and NAS node.
Network Data Management Protocol (NDMP)
An industry-standard protocol that allows a network storage-management
application (such as IBM Tivoli Storage Manager) to control the backup
and recovery of an NDMP-compliant file server, without installing
third-party software on that file server.
node
A workstation or file server that is registered with an IBM Tivoli Storage
Manager server to receive its services. See also client node and NAS node.
In a Microsoft cluster configuration, one of the computer systems that
make up the cluster.
node privilege class
A privilege class that allows an administrator to remotely access
backup-archive clients for a specific client node or for all clients in a policy
domain. See also privilege class.
non-native data format
A format of data written to a storage pool that is different from the format
that the server uses for basic LAN-based operations. The data is written by
a data mover instead of the server. Storage pools with data written in a
non-native format may not support some server operations, such as audit
of a volume. The NETAPPDUMP data format for NAS node backups is an
example of a non-native data format.
open registration
A registration process in which any users can register their own
workstations as client nodes with the server. Contrast with closed
registration.
operator privilege class
A privilege class that allows an administrator to issue commands that
disable or halt the server, enable the server, cancel server processes, and
manage removable media. See also privilege class.
Pacing (APPC)
A mechanism used to control the flow of data in SNA. Pacing allows large,
fast machines to communicate with smaller, less capable machines. Pacing
is unidirectional, which means that a machine can have a different send
window than its receive window.
Packet (TCP/IP)
A packet is the unit or block of data of one transaction between a
host and its network. A packet usually contains a network header, at least
one higher-level protocol header, and data blocks. Packets are the exchange
medium used at the Internetwork layer to send and receive data.
PACKETS (NetBIOS)
The number of I-frame packet descriptors that the NetBIOS protocol can
use to build DLC frames from NetBIOS messages.

page
A unit of space allocation within IBM Tivoli Storage Manager database
volumes.
path
An IBM Tivoli Storage Manager object that defines a one-to-one
relationship between a source and a destination. Using the path, the source
accesses the destination. Data may flow from the source to the destination,
and back. An example of a source is a data mover (such as a NAS file
server), and an example of a destination is a tape drive.
physical file
A file stored in one or more storage pools, consisting of either a single
logical file, or a group of logical files packaged together (an aggregate). See
also aggregate and logical file.
physical occupancy
The amount of space used by physical files in a storage pool. This space
includes the unused space created when logical files are deleted from
aggregates. See also physical file, logical file, and logical occupancy.
Physical Unit (PU) (APPC)
A program on an SNA node that manages physical network resources on
behalf of all LUs on that node. For example, a PU manages the connections
of the node to adjacent nodes. The PU manages the physical data links on
behalf of LUs, which in turn manage the sessions, or logical links, between
nodes.
PIGGYBACKACKS (NetBIOS)
Specifies whether NetBIOS will send and receive acknowledgments
piggybacked with incoming data.
policy domain
A grouping of policy users with one or more policy sets, which manage
data or storage resources for the users. In IBM Tivoli Storage Manager, the
users are client nodes. See policy set, management class, and copy group.
policy privilege class
A privilege class that allows an administrator to manage policy objects,
register client nodes, and schedule client operations for client nodes.
Authority can be restricted to certain policy domains. See also privilege
class.
policy set
A policy object that contains a group of management classes that exist for a
policy domain. Several policy sets can exist within a policy domain but
only one policy set is active at one time. See management class and active
policy set.
Port (TCP/IP)
A port is an end point for communication between applications, generally
referring to a logical connection. TCP/IP uses protocol port numbers to
identify the ultimate destination within a machine.
premigration
For a space manager client, the process of copying files that are eligible for
migration to server storage, while leaving the original file intact on the
local system.
primary storage pool
A named set of volumes that the server uses to store backup versions of
files, archive copies of files, and files migrated from HSM client nodes. You
can back up a primary storage pool to a copy storage pool. See destination
and copy storage pool.
privilege class
A level of authority granted to an administrator. The privilege class
determines which administrative tasks the administrator can perform. For
example, an administrator with system privilege class can perform any
administrative task. Also called administrative privilege class. See also
system privilege class, policy privilege class, storage privilege class, operator
privilege class, analyst privilege class, and node privilege class.
profile
A named group of configuration information that can be distributed from a
configuration manager when a managed server subscribes. Configuration
information can include registered administrators, policy, client schedules,
client option sets, administrative schedules, IBM Tivoli Storage Manager
command scripts, server definitions, and server group definitions. See
configuration manager and managed server.
randomization
The process of distributing schedule start times for different clients within
a specified percentage of the schedule’s startup window.
rebinding
The process of associating a backed-up file with a new management class
name. For example, rebinding occurs when the management class
associated with a file is deleted. See binding.
recall
To access files that were migrated from workstations to server storage by
using the space manager client. Contrast with migrate.
receiver
A server repository that contains a log of server messages and client
messages as events. For example, a receiver can be a file exit, a user exit,
or the IBM Tivoli Storage Manager server console and activity log. See also
event.
reclamation
A process of consolidating the remaining data from many sequential access
volumes onto fewer new sequential access volumes.
reclamation threshold
The percentage of reclaimable space that a sequential access media volume
must have before the server can reclaim the volume. Space becomes
reclaimable when files are expired or are deleted. The percentage is set for
a storage pool.
recovery log
A log of updates that are about to be written to the database. The log can
be used to recover from system and media failures.
recovery log buffer pool
Storage that the server uses to hold new transaction records until they can
be written to the recovery log.
reduce
To free up enough space from the database or the recovery log, such that
you can delete a volume. Contrast with extend.

register
To define a client node or administrator who can access the server. See
registration.
To specify licenses that have been purchased for the server.
registration
The process of identifying a client node or administrator to the server.
Request/Response Unit (RU) (APPC)
A unit of an SNA packet that carries user data and network control data. It
is used to carry user application data from one machine to another and can
vary greatly in size.
restore
To copy information from its backup location to the active storage location
for use. In IBM Tivoli Storage Manager, you can restore the server
database, storage pools, storage pool volumes, and users’ backed-up files.
The backup version of a file in the storage pool is not affected by the
restore operation. Contrast with backup.
retention
The amount of time, in days, that inactive backed-up or archived files are
kept in the storage pool before they are deleted. Copy group attributes and
default retention grace periods for the domain define retention.
retention period
On an MVS server, a device class attribute that specifies how long files are
retained on sequential access media. When used, IBM Tivoli Storage
Manager passes this information to the MVS operating system to ensure
that other tape management systems do not overwrite tape volumes that
contain retained data.
retrieve
To copy archived information from the storage pool to the client node for
use. The retrieve operation does not affect the archive copy in the storage
pool. Contrast with archive.
rollback
To remove changes that were made to database files since the last commit
point.
schedule
A database record that describes scheduled client operations or
administrative commands. See administrative command schedule and client
schedule.
scheduling mode
The method of interaction between a server and a client for running
scheduled operations on the client. IBM Tivoli Storage Manager supports
two scheduling modes for client operations: client-polling and
server-prompted.
scratch volume
A labeled volume that is either blank or contains no valid data, that is not
currently defined to IBM Tivoli Storage Manager, and that is available for
use.
script
See IBM Tivoli Storage Manager command script.

Segment (TCP/IP)
TCP views the data stream as a sequence of octets or bytes that it divides
into segments for transmission. Each segment travels across an internet in
a single IP Datagram.
selective backup
The process of backing up selected files or directories from a client domain.
See also incremental backup.
serialization
The process of handling files that are modified during backup or archive
processing. See static, dynamic, shared static, and shared dynamic.
Server CPU Time
This is the total CPU time on the Tivoli Storage Manager server divided by
the total workload in logical MB and expressed as CPU seconds per MB.
For measurements with a local Tivoli Storage Manager client, this includes
both the client and server CPU time.
Server CPU Utilization
This is the total CPU time on the Tivoli Storage Manager server divided by
the elapsed time and expressed as a percentage. For measurements with a
local Tivoli Storage Manager client, this includes both the client and server
CPU time. On multiple processor machines, this is an average across all
processors.
Server Efficiency
This is the total client workload executed divided by the total CPU time on
the Tivoli Storage Manager server and expressed as KB per CPU second.
For measurements with a local Tivoli Storage Manager client, this includes
both the client and server CPU time. This value can be used to compare
the CPU efficiency of execution of different tests or workloads.
Server ITR
This is the internal throughput rate (ITR) in workload files processed
divided by the CPU time on the Tivoli Storage Manager server and
expressed as files per CPU second. For measurements with a local Tivoli
Storage Manager client, this includes both the client and server CPU time.
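The four server metrics above all derive from the same raw measurements. The following sketch computes each one from hypothetical values; none of these numbers come from an actual benchmark:

```python
# Hypothetical measurement values -- not from any actual benchmark run.
server_cpu_seconds = 120.0  # total CPU time on the TSM server
elapsed_seconds = 600.0     # wall-clock duration of the run
workload_mb = 2400.0        # total client workload, in logical MB
workload_files = 48000      # workload files processed
num_processors = 4          # processors on the server machine

# Server CPU Time: CPU seconds consumed per MB of workload.
cpu_time_per_mb = server_cpu_seconds / workload_mb

# Server CPU Utilization: CPU time over elapsed time, as a percentage,
# averaged across all processors on a multiprocessor machine.
cpu_utilization = 100.0 * server_cpu_seconds / (elapsed_seconds * num_processors)

# Server Efficiency: KB of workload per CPU second.
efficiency_kb_per_cpu_sec = (workload_mb * 1024) / server_cpu_seconds

# Server ITR: workload files processed per CPU second.
itr_files_per_cpu_sec = workload_files / server_cpu_seconds

print(cpu_time_per_mb)            # 0.05
print(cpu_utilization)            # 5.0
print(efficiency_kb_per_cpu_sec)  # 20480.0
print(itr_files_per_cpu_sec)      # 400.0
```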
server migration
The process of moving data from one storage pool to the next storage pool
defined in the hierarchy, based on the migration thresholds defined for the
storage pools. See also high migration threshold and low migration threshold.
server options file
A file that contains settings that control various server operations. These
settings, or options, affect such things as communications, devices, and
performance.
server-prompted scheduling mode
A client/server communication technique where the server contacts the
client when a scheduled operation needs to be done. Contrast with
client-polling scheduling mode.
server storage
The primary and copy storage pools used by the server to store users’ files:
backup versions, archive copies, and files migrated from space manager
client nodes (space-managed files). See primary storage pool, copy storage
pool, storage pool volume, and volume.

session resource usage
The amount of wait time, CPU time, and space used or retrieved during a
client session.
shared dynamic
A value for serialization that specifies that a file must not be backed up or
archived if it is being modified during the operation. IBM Tivoli Storage
Manager retries the backup or archive operation a number of times; if the
file is being modified during each attempt, IBM Tivoli Storage Manager
will back up or archive the file on its last try. See also serialization. Contrast
with dynamic, shared static, and static.
shared library
A library device that is shared among multiple IBM Tivoli Storage
Manager servers.
shared static
A value for serialization that specifies that a file must not be backed up or
archived if it is being modified during the operation. IBM Tivoli Storage
Manager retries the backup or archive operation a number of times; if the
file is being modified during each attempt, IBM Tivoli Storage Manager
will not back up or archive the file. See also serialization. Contrast with
dynamic, shared dynamic, and static.
SNA session (APPC)
A logical connection between two LU’s across an SNA network.
snapshot
See database snapshot.
source server
A server that can send data, in the form of virtual volumes, to another
server. Contrast with target server.
space-managed file
A file that is migrated from a client node by the space manager client
(HSM client). The space manager client recalls the file to the client node on
demand.
space management
The process of keeping sufficient free storage space available on a client
node by migrating files to server storage. The files are migrated based on
criteria defined in management classes to which the files are bound, and
the include-exclude list. Synonymous with hierarchical storage management.
See also migration.
space manager client
The Tivoli Storage Manager for Space Management program that enables
users to maintain free space on their workstations by migrating and
recalling files to and from server storage. Also called hierarchical storage
management (HSM) client.
startup window
A time period during which a schedule must be initiated.
static
A value for serialization that specifies that a file must not be backed up or
archived if it is being modified during the operation. IBM Tivoli Storage
Manager does not retry the operation. See also serialization. Contrast with
dynamic, shared dynamic, and shared static.

storage agent
A program that enables IBM Tivoli Storage Manager to back up and restore
client data directly to and from SAN-attached storage.
storage hierarchy
A logical ordering of primary storage pools, as defined by an
administrator. The ordering is usually based on the speed and capacity of
the devices that the storage pools use. In IBM Tivoli Storage Manager, the
storage hierarchy is defined by identifying the next storage pool in a
storage pool definition. See storage pool.
storage pool
A named set of storage volumes that is the destination that the IBM Tivoli
Storage Manager server uses to store client data. A storage pool stores
backup versions, archive copies, and files that are migrated from space
manager client nodes. You back up a primary storage pool to a copy
storage pool. See primary storage pool and copy storage pool.
storage pool volume
A volume that has been assigned to a storage pool. See volume, copy storage
pool, and primary storage pool.
storage privilege class
A privilege class that allows an administrator to control how storage
resources for the server are allocated and used, such as monitoring the
database, the recovery log, and server storage. Authority can be restricted
to certain storage pools. See also privilege class.
stub file
A file that replaces the original file on a client node when the file is
migrated from the client node to server storage by Tivoli Storage Manager
for Space Management.
subscription
In a Tivoli environment, the process of identifying the subscribers that the
profiles are distributed to. For IBM Tivoli Storage Manager, this is the
process by which a managed server requests that it receive configuration
information associated with a particular profile on a configuration
manager. See managed server, configuration manager, and profile.
system privilege class
A privilege class that allows an administrator to issue all server
commands. See also privilege class.
tape library
A term used to refer to a collection of drives and tape cartridges. The tape
library may be an automated device that performs tape cartridge mounts
and demounts without operator intervention.
tape volume prefix
A device class attribute that is the high-level-qualifier of the file name or
the data set name in the standard tape label.
target node
A client node for which other client nodes (called agent nodes) have been
granted proxy authority. The proxy authority allows the agent nodes to
perform operations such as backup and restore on behalf of the target
node, which owns the data being operated on.

target server
A server that can receive data sent from another server. Contrast with
source server. See also virtual volumes.
Throughput
Throughput, as used in this guide, is the total bytes in the workload backed
up or restored divided by the elapsed time. The total byte count does not
include communication protocol and Tivoli Storage Manager overhead. Note
that this differs from the throughput rate that appears on the client screens,
which is the instantaneous data transfer rate and does not include any
delays due to client or server processing or any wait time in
communications. Throughput is reported in KB per second, where KB is
1024 bytes. Note that network devices and protocols often quote Kbits or
Mbits per second (such as a 16 Mbit token ring). For example, 16 Mbits per
second is equivalent to 2 MB per second.
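As a worked example of this definition (the workload figures below are hypothetical, chosen only to make the arithmetic visible):

```python
# Hypothetical workload: 512 MB backed up in 256 seconds (illustrative only).
total_bytes = 512 * 1024 * 1024
elapsed_seconds = 256.0

# Throughput as defined here: workload bytes / elapsed time, in KB/s
# (KB = 1024 bytes); protocol overhead is excluded from total_bytes.
throughput_kb_per_sec = (total_bytes / 1024) / elapsed_seconds
print(throughput_kb_per_sec)  # 2048.0

# Network speeds are usually quoted in bits per second: divide by 8 for bytes.
# A 16 Mbit token ring therefore moves at most 16 / 8 = 2 MB per second.
token_ring_mbit = 16
print(token_ring_mbit / 8)  # 2.0
```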
Throughput Ratio
Compares the relative throughput between two performance
measurements. For a given description, the first row of each group in the
table is used as a reference. The throughput in the following rows is
compared to the reference using the following calculation: Throughput
Ratio = 100 x (Throughput measured / Reference Throughput)
The most prevalent example is how throughput scales when multiple
clients are added. A larger throughput ratio means better scaling;
conversely, a smaller number indicates poorer scaling, and a number close
to 100 indicates little change. This metric acts as a quick tool for the
comparison of different environments. This includes, but is not
limited to:
- Adding clients
- Adding CPUs (for SMP systems)
- Different levels of server code
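The calculation above can be sketched as follows; the measurement values are hypothetical:

```python
def throughput_ratio(measured_kb_per_sec, reference_kb_per_sec):
    """Relative throughput of a measurement against a reference run,
    as a percentage: 100 x (measured / reference)."""
    return 100.0 * measured_kb_per_sec / reference_kb_per_sec

# Hypothetical single-client reference run vs. a four-client run.
reference = 2000.0     # KB/s with one client (the reference row)
four_clients = 6800.0  # KB/s aggregate with four clients

ratio = throughput_ratio(four_clients, reference)
print(ratio)  # 340.0 -> throughput grew 3.4x; values above 100 mean better scaling
```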
Transaction Program (TP) (APPC)
The logical network name used by an application program to communicate
with another application program across an SNA network.
UCS-2
An ISO/IEC 10646 encoding form, Universal Character Set coded in 2
octets. The IBM Tivoli Storage Manager client on Windows uses the UCS-2
code page when the client is enabled for Unicode.
Unicode Standard
A universal character encoding standard that supports the interchange,
processing, and display of text that is written in any of the languages of
the modern world. It can also support many classical and historical texts
and is continually being expanded. The Unicode Standard is compatible
with ISO/IEC 10646.
UTF-8
Unicode Transformation Format, 8-bit. A byte-oriented encoding form
specified by the Unicode Standard.
validate
To check a policy set for conditions that can cause problems if that policy
set becomes the active policy set. For example, the validation process
checks whether the policy set contains a default management class.
version
A backup copy of a file stored in server storage. The most recent backup
copy of a file is the active version. Earlier copies of the same file are
inactive versions. The number of versions retained by the server is
determined by the copy group attributes in the management class.
virtual file space
A representation of a directory on a network-attached storage (NAS) file
system as a path to that directory. A virtual file space is used to back up
the directory as a file space in IBM Tivoli Storage Manager server storage.
virtual volume
An archive file on a target server that represents a sequential media volume
to a source server.
volume
The basic unit of storage for the IBM Tivoli Storage Manager database,
recovery log, and storage pools. A volume can be an LVM logical volume,
a standard file system file, a tape cartridge, or an optical cartridge. Each
volume is identified by a unique volume identifier. See database volume,
scratch volume, and storage pool volume.
volume history file
A file that contains information about: volumes used for database backups
and database dumps; volumes used for export of administrator, node,
policy, or server data; and sequential access storage pool volumes that have
been added, reused, or deleted. The information is a copy of the same
types of volume information in the IBM Tivoli Storage Manager database.
Window size (TCP/IP)
The number of packets that can be unacknowledged at any given time is
called the window size. For example, in a sliding window protocol with
window size 8, the sender is permitted to send 8 packets before it receives
an acknowledgment.
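The behavior described here can be sketched with a toy sender loop. This is a simplification: one cumulative acknowledgment arrives per iteration and no packets are lost, but it shows that the window size caps the number of unacknowledged packets in flight:

```python
# Toy sliding-window sender: with window size 8, up to 8 packets may be
# unacknowledged ("in flight") at once; each ACK slides the window forward.
WINDOW_SIZE = 8
total_packets = 20

next_to_send = 0    # next packet number to transmit
oldest_unacked = 0  # left edge of the window

in_flight_history = []
while oldest_unacked < total_packets:
    # Send as long as the window is not full and packets remain.
    while next_to_send < total_packets and next_to_send - oldest_unacked < WINDOW_SIZE:
        next_to_send += 1
    in_flight_history.append(next_to_send - oldest_unacked)
    # Receive one cumulative ACK, sliding the window right by one packet.
    oldest_unacked += 1

print(max(in_flight_history))  # 8 -- never more than the window size unacknowledged
```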

Index

Special characters
-V2ARCHIVE option 52

A
accessibility features xi
adapters per fiber HBA 24
AIX
   ioo command 15
   performance recommendations 17
   server and client TCP/IP tuning 42
   Virtual Address space 15
   vmo command 15
anti-virus software 35
archive
   performance 51
   problems with 52
   throughput 52
   when to 51

B
backup
   LAN-free 14
   operations 12
   performance 12
   throughput 30
BACKUP DB server command 10
BEGINROUTES/ENDROUTES block 47
buffer pool 3, 7
BUFPOOLSIZE server option 3
busses
   multiple PCI 24

C
cached disk storage pools 12
Cached files
   clearing 12
client
   incremental backup 37
   tuning options 25
client commands
   DSMMIGRATE 36
client options 35
   command line only
      IFNEWER 34
      INCRBYDATE 34
   COMMMETHOD SHAREDMEM 18, 35
   COMMRESTARTDURATION 27
   COMMRESTARTINTERVAL 27
   COMPRESSALWAYS 25, 26
   COMPRESSION 25
   DISKBUFFSIZE 28
   ENABLELANFREE 28
   PROCESSORUTILIZATION 28
   QUIET 27
   RESOURCEUTILIZATION 28, 30, 36
   TAPEPROMPT 32
   TCPBUFFSIZE 32
   TCPNODELAY 32
   TCPWINDOWSIZE 17, 18, 33, 46
   TXNBYTELIMIT 10, 23, 33
   VIRTUALNODENAME 36
   Windows 35
collocation 22
   by filespace 13
   by group 13
   by node 13
COMMMETHOD SHAREDMEM client option 18, 35
COMMMETHOD SHAREDMEM server option 18, 35
COMMRESTARTDURATION client option 27
COMMRESTARTINTERVAL client option 27
COMPRESSALWAYS client option 25, 26
compression
   enabling on tape drives 21
COMPRESSION client option 25
CONVERT ARCHIVE server command 52
customer support, contacting x

D
database
   mirroring 24
   performance 10
DEFINE COPYGROUP server command 12
DEFINE DEVCLASS server command 21
DEFINE LOGVOLUME command 6
DEFINE STGPOOL server command 12, 13, 22
DEFINE VOLUME server command 10
device drivers 39
direct I/O
   AIX 16
   Sun Solaris 16
disaster recovery 12
disk
   performance considerations 24
   write cache 24
DISKBUFFSIZE client option 28
dsm.opt file 25, 37
dsm.sys file 25, 37
DSMMIGRATE client command 36

E
education
   see Tivoli technical training viii
ENABLELANFREE client option 28
Ethernet adapters 39
EXPINTERVAL server option 4
export 12
EXTEND DB server command 16
EXTEND LOG command 6
EXTEND LOG server command 16

F
fixes, obtaining ix
FREEMAIN 19
FROMNODE option 36

G
Gb Ethernet jumbo frames 39
GETMAIN 19

H
Hierarchical Storage Manager migration 36

I
IBM LTO Ultrium tape drives
   streaming rate 23
   transfer rate 23
IBM Software Support
   submitting a problem xi
IBM Support Assistant ix
import 12
INCLUDE/EXCLUDE lists 36
incremental backup 37
Internet, search for problem resolution viii
Internet, searching for problem resolution ix
inventory expiration 52
ioo command 15

J
Journal File System 15, 17
journal-based backup
   Windows 35

K
knowledge bases, searching viii

L
LAN-free backup 14
Linux servers
   performance recommendations 17
LOGPOOLSIZE server option 5

M
Macintosh client
   anti-virus software 35
   Extended Attributes 35
Maximum Segment Size (MSS) 43
Maximum Transmission Unit (MTU) 32, 43
   NetWare 45
MAXNUMMP server option 5, 28, 30
MAXSESSIONS server option 5, 28, 30
migration
   Hierarchical Storage Manager 36
   processes 13
   thresholds 13
mirroring 16
   database 24
   recovery log 24
MIRRORWRITE server option 10
mount points, virtual 36
MOVE DATA command 6
MOVEBATCHSIZE server option 6, 23
MOVESIZETHRESH server option 6, 23
multi-client backups and restores 36
multiple client sessions 30
multiple session backup and restore 28

N
NetWare
   client cache tuning 45
networks
   dedicated 39
   for backup 39
   protocol tuning 39
   settings
      AIX 42
      Sun Solaris 46
      z/OS 46
   traffic 39
NTFS file compression 18
NTFS file system 18

P
problem determination
   describing problem for IBM Software Support xi
   determining business impact for IBM Software Support x
   submitting a problem to IBM Software xi
PROCESSORUTILIZATION client option 28
PROFILE.TCPIP configuration data set 47
publications
   download v
   order v
   related hardware vii
   related software viii
   search v
   Tivoli Storage Manager v
   z/OS viii

Q
QUERY OCCUPANCY server command 52
QUIET client option 27

R
RAID arrays 10, 24
raw logical volumes 15, 17
   advantages and disadvantages 16
   database 16
   mirroring 16
   recovery log 16
raw partitions 17
recommended values by platform 35
recovery log
   mirroring 24
   performance 10
REGISTER NODE server command 28
RESET BUFPOOL server command 3
RESOURCEUTILIZATION client option 28, 30, 36
RESTOREINTERVAL server option 7

S
scheduling
   processes 14
   sessions 14
SELFTUNEFBUFPOOLSIZE server option 7
server activity log
   searching 13
server commands
   BACKUP DB 10
   CONVERT ARCHIVE 52
   DEFINE COPYGROUP 12
   DEFINE DEVCLASS 21
   DEFINE STGPOOL 12, 13, 22
   DEFINE VOLUME 10
   EXTEND DB 16
   EXTEND LOG 16
   QUERY OCCUPANCY 52
   REGISTER NODE 28
   RESET BUFPOOL 3
   SET MAXCMDRETRIES 39
   SET QUERYSCHEDPERIOD 39
   SET RETRYPERIOD 39
   UNDO ARCHCONVERSION 52
   UPDATE ARCHIVE 52
   UPDATE COPYGROUP 12
   UPDATE NODE 28, 30
   UPDATE STGPOOL 13, 22
server options
   BUFPOOLSIZE 3
   COMMMETHOD SHAREDMEM 18, 35
   EXPINTERVAL 4
   LOGPOOLSIZE 5
   MAXNUMMP 5, 28, 30
   MAXSESSIONS 5, 28, 30
   MIRRORWRITE DB 10
   MOVEBATCHSIZE 6, 23
   MOVESIZETHRESH 6, 23
   recommended settings by platform 16
   RESTOREINTERVAL 7
   SELFTUNEFBUFPOOLSIZE 7
   TCPNODELAY 14
   TCPWINDOWSIZE 8, 17, 18
   TXNBYTELIMIT 23
   TXNGROUPMAX 9, 10, 23, 33
server tuning overview 1
SET MAXCMDRETRIES server command 39
SET QUERYSCHEDPERIOD server command 39
SET RETRYPERIOD server command 39
sliding window 33
Software Support
   contacting x
   describing problem for IBM Software Support xi
   determining business impact for IBM Software Support x
Storage Agent 14
storage pool
   backup and restore 12
   migrating files 13
   migration 13
storage pools
   cached disk 12
Sun Solaris
   server and client TCP/IP tuning 46
   server performance recommendations 17
   TCPWINDOWSIZE client option 46
support information viii

T
tape drives
   cleaning 21
   compression 21
   on a SCSI bus 24
   required number 21
TAPEPROMPT client option 32
TCP communication buffer 32
TCP/IP
   AIX server and client tuning 42
   concepts 40
   connection availability 40
   data transfer block size 40
   error control 41
   flow control 41
   functional groups
      application layer 40
      internetwork layer 40
      network layer 40
      transport layer 40
   HP-UX server and client tuning 17
   Maximum Segment Size (MSS) 43
   Maximum Transmission Unit (MTU) 43
   NetWare
      client cache tuning 45
      Maximum Transmission Unit (MTU) 45
   packet assembly and disassembly 41
   sliding window 33, 41
   Sun Solaris server and client tuning 46
   tuning 40
   window values 40
   z/OS server tuning 47
TCP/IP and z/OS UNIX system services
   performance tuning 49
TCPBUFFSIZE client option 32
TCPIP.DATA 47
TCPNODELAY client option 32
TCPNODELAY server option 14
TCPWINDOWSIZE client option 17, 18, 33, 46
TCPWINDOWSIZE server option 8, 17, 18
thresholds
   migration 13
throughput
   estimating 20
   for average workloads 20
   formula 20
   in tested environments 20
   in untested environments 21
Tivoli technical training viii
training, Tivoli technical viii
transaction size 33
TXNBYTELIMIT client option 10, 23, 33
TXNBYTELIMIT server option 23
TXNGROUPMAX server option 9, 10, 23, 33

U
UFS file system volumes 17
UNDO ARCHCONVERSION server command 52
UNIX file systems
   advantages and disadvantages 16
UPDATE ARCHIVE server command 52
UPDATE COPYGROUP server command 12
UPDATE NODE server command 28, 30
UPDATE STGPOOL server command 13, 22

V
Virtual Memory Manager 15, 42
VIRTUALNODENAME client option 36
vmo command 15
VSAM I/O pages 19
VxFS file system 17

W
Windows
   journal-based backup 35
   performance recommendations 18

Z
z/OS server
   performance recommendations 19
   server TCP/IP tuning 47




Program Number: 5608-ACS


5608-APD
5608-APE
5608-APR
5608-CSS
5608-E08
5608-HSM
5608-ISM
5608-ISX
5608-SAN
5608-SPM
5698-USS

Printed in USA

SC32-0141-01
