IBM Tivoli Storage Manager
Performance Tuning Guide
Version 5.5
SC32-0141-01
Note!
Before using this information and the product it supports, read the general information in the "Notices" appendix.
Edition notice
This edition applies to Version 5 Release 5 of IBM Tivoli Storage Manager Performance Tuning Guide (program
numbers 5608-ACS, 5608-APD, 5608-APE, 5608-APR, 5608-ARM, 5608-CSS, 5608-HSM, 5608-ISM, 5608-ISX,
5608-SAN, 5608-SPM, 5698-USS) and to any subsequent releases until otherwise indicated in new editions.
Changes since the previous edition are marked with a vertical bar (|) in the left margin. Ensure that you are using
the correct edition for the level of the product.
© Copyright International Business Machines Corporation 1993, 2007. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Preface . . . v
  Who should read this guide . . . v
  Publications . . . v
    Tivoli Storage Manager publications . . . v
    Related hardware publications . . . vii
    Related software publications . . . viii
  Support information . . . viii
    Getting technical training . . . viii
    Searching knowledge bases . . . viii
    Contacting IBM Software Support . . . x
  Accessibility features . . . xi

Chapter 1. Overview of IBM Tivoli Storage Manager tuning . . . 1

Chapter 2. IBM Tivoli Storage Manager server performance tuning . . . 3
  BUFPOOLSIZE . . . 3
  EXPINTERVAL . . . 4
  LOGPOOLSIZE . . . 5
  MAXNUMMP . . . 5
  MAXSESSIONS . . . 5
  MOVEBATCHSIZE and MOVESIZETHRESH . . . 6
  RESTOREINTERVAL . . . 7
  SELFTUNEBUFPOOLSIZE . . . 7
  TCPWINDOWSIZE . . . 8
  TXNGROUPMAX . . . 9
| Performance recommendations for all server platforms . . . 10
  Database and recovery log performance . . . 10
  Backup performance . . . 12
  Disaster recovery performance . . . 12
  Cached disk storage pools . . . 12
|   Clearing cached files . . . 12
  Tuning storage pool migration . . . 13
    Tuning migration processes . . . 13
    Tuning migration thresholds . . . 13
|   Collocation by group . . . 13
  Searching the server activity log . . . 13
  Scheduling sessions and processes . . . 14
  LAN-free backup . . . 14
  Storage agent tuning . . . 14
  AIX: vmo and ioo commands . . . 15
  UNIX file systems and raw logical volumes . . . 16
  Performance recommendations by server platform . . . 16
    AIX server . . . 17
    HP-UX server . . . 17
    Linux server . . . 17
    Sun Solaris server . . . 17
    Windows server . . . 18
    z/OS server . . . 19
  Estimating throughput . . . 20
    Estimating throughput rate for average workloads . . . 20
    Estimating throughput in other environments . . . 21
  Tuning tape drive performance . . . 21
    Using collocation with tape drives . . . 22
    IBM LTO Ultrium tape drives . . . 23
    IBM LTO Ultrium streaming rate performance . . . 23
    IBM LTO Ultrium performance recommendations . . . 23
  Tuning disk performance . . . 24
    Busses . . . 24

Chapter 3. IBM Tivoli Storage Manager client performance tuning . . . 25
  COMPRESSION . . . 25
  COMPRESSALWAYS . . . 26
  COMMRESTARTDURATION and COMMRESTARTINTERVAL . . . 27
  QUIET . . . 27
  DISKBUFFSIZE . . . 28
  PROCESSORUTILIZATION . . . 28
  Multiple session backup and restore . . . 28
  RESOURCEUTILIZATION . . . 30
  TAPEPROMPT . . . 32
  TCPBUFFSIZE . . . 32
  TCPNODELAY . . . 32
  TCPWINDOWSIZE . . . 33
  TXNBYTELIMIT . . . 33
  Client command line options . . . 34
  Performance recommendations by client platform . . . 35
    Macintosh client . . . 35
    Windows client . . . 35
  Client performance considerations . . . 36
  Hierarchical Storage Manager tuning . . . 36
  Data Protection for Domino for z/OS . . . 36
| Client incremental backup memory requirements . . . 37

Chapter 4. Network protocol tuning . . . 39
  Networks . . . 39
  Limiting network traffic . . . 39
  TCP/IP communication concepts and tuning . . . 40
    Sliding window . . . 41
  Platform-specific network recommendations . . . 42
    AIX network settings . . . 42
    NetWare client cache tuning . . . 45
    Sun Solaris network settings . . . 46
    z/OS network settings . . . 46

Chapter 5. Archive function . . . 51
  Using the archive function . . . 51
  Identifying and correcting archive-related problems . . . 52

Appendix. Notices . . . 55
  Trademarks . . . 57

Glossary . . . 59

Index . . . 81
Who should read this guide
Before using this publication, you should be familiar with the following areas:
- The operating systems on which your IBM Tivoli Storage Manager servers and clients reside
- The communication protocols installed on your client and server machines
Publications
Tivoli Storage Manager publications and other related publications are available
online.
You can search all the Tivoli Storage Manager publications in the Information Center: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp
You can download PDF versions of publications from the IBM Publications Center: http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi
You can also order some related publications from this Web site. The Web site
provides information for ordering publications from countries other than the
United States. In the United States, you can order publications by calling
800-879-2755.
Preface vii
Title                                                                         Order Number
IBM LTO Ultrium 3581 Tape Autoloader Setup, Operator, and Service Guide       GA32-0412
IBM LTO Ultrium 3583 Tape Library Setup and Operator Guide                    GA32-0411
StorageSmart™ by IBM TX200 Ultrium External Tape Drive M/T 3585 Setup,
  Operator, and Service Guide                                                 GA32-0421
StorageSmart by IBM SL7 Ultrium Tape Autoloader M/T 3586 Setup,
  Operator, and Service Guide                                                 GA32-0423
Support information
You can find support information for IBM products from a number of different sources:
- "Getting technical training"
- "Searching knowledge bases"
- "Contacting IBM Software Support" on page x

Getting technical training
For information about technical training, see the Tivoli software education Web site: http://www.ibm.com/software/tivoli/education/

Searching knowledge bases
You can begin with the Information Center, from which you can search all the Tivoli Storage Manager publications: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp
To search multiple Internet resources, go to the support Web site for Tivoli Storage Manager: http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html. From this site, you can search a variety of resources, including:
- IBM technotes
- IBM downloads
The IBM Support Assistant helps you gather support information when you need to open a problem management record (PMR), which you can then use to track the problem. The product-specific plug-in modules provide you with the following resources:
- Support links
- Education links
- Ability to submit problem management reports
For more information, see the IBM Support Assistant Web site at http://www.ibm.com/software/support/isa/
Contacting IBM Software Support
Before you contact IBM Software Support, you must have an active IBM software
maintenance contract, and you must be authorized to submit problems to IBM. The
type of software maintenance contract that you need depends on the type of
product you have.
- For IBM distributed software products (including, but not limited to, Tivoli, Lotus®, and Rational® products, as well as DB2® and WebSphere® products that run on Windows® or UNIX® operating systems), enroll in Passport Advantage® in one of the following ways:
  Online: Go to the Passport Advantage Web page (http://www.ibm.com/software/sw-lotus/services/cwepassport.nsf/wdocs/passporthome) and click How to Enroll.
  By phone: For the phone number to call in your country, go to the IBM Contacts Web page (http://techsupport.services.ibm.com/guides/contacts.html) and click the name of your geographic region.
- For IBM eServer™ software products (including, but not limited to, DB2 and WebSphere products that run in zSeries®, pSeries®, and iSeries™ environments), you can purchase a software maintenance agreement by working directly with an IBM sales representative or an IBM Business Partner. For more information about support for eServer software products, go to the IBM Technical Support Advantage Web page: http://www.ibm.com/servers/eserver/techsupport.html.
If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States. For a list of telephone
numbers of people who provide support for your location, go to the IBM Contacts
Web page, http://techsupport.services.ibm.com/guides/contacts.html, and click
the name of your geographic region.
Severity 1  Critical business impact: You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.
Severity 2  Significant business impact: The program is usable but is severely limited.
Severity 3  Some business impact: The program is usable with less significant features (not critical to operations) unavailable.
Severity 4  Minimal business impact: The problem causes little impact on operations, or a reasonable circumvention to the problem has been implemented.
If the problem you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround for you to implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
IBM product support Web pages daily, so that other users who experience the
same problem can benefit from the same resolutions.
Accessibility features
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products successfully. The major
accessibility features of Tivoli Storage Manager are described in this topic.
- Server and client command-line interfaces provide comprehensive control of Tivoli Storage Manager using a keyboard.
- The Windows client graphical interface can be navigated and operated using a keyboard.
- The Web backup-archive client interface is HTML 4.0 compliant, and accessibility is limited only by the choice of Internet browser.
- All user documentation is provided in HTML and PDF format. Descriptive text is provided for all documentation images.
- The Tivoli Storage Manager for Windows Console follows Microsoft® conventions for all keyboard navigation and access. Drag-and-drop support is handled using the Microsoft Windows Accessibility option known as MouseKeys. For more information about MouseKeys and other Windows accessibility options, see the Windows Online Help (keyword: MouseKeys).
Chapter 1. Overview of IBM Tivoli Storage Manager tuning
Tuning Tivoli Storage Manager can be complex because of the many operating systems, network configurations, and storage devices that Tivoli Storage Manager supports. Performance tuning is complex even for a single platform and function. Factors that can significantly affect performance include:
- Average client file size
- Percentage of files changed since the last incremental backup
- Percentage of bytes changed since the last incremental backup
- Client hardware (CPUs, RAM, disk drives, network adapters)
- Client operating system
- Client activity (non-Tivoli Storage Manager workload)
- Server hardware (CPUs, RAM, disk drives, network adapters)
- Server storage pool devices (disk, tape, optical)
- Server operating system
- Server activity (non-Tivoli Storage Manager workload)
- Network hardware and configuration
- Network utilization
- Network reliability
- Communication protocol
- Communication protocol tuning
- Final output repository type (disk, tape, optical)
It is not practical to cover all possible combinations of these functions here. The
topics are limited to those that you can control to some degree without replacing
hardware.
Chapter 2. IBM Tivoli Storage Manager server performance tuning
The following options are tunable on most Tivoli Storage Manager servers. See the Administrator's Reference to determine the options available for your platform. You can change any option setting in the server options file (dsmserv.opt). If you change the server options file, you must stop and restart the server for the changes to take effect. You can change some settings without a restart by using the server SETOPT command.
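For example, a server option can be changed by editing dsmserv.opt and restarting the server or, for options that SETOPT supports, from an administrative command line. The values shown here are illustrative only, not recommendations:

```
* dsmserv.opt -- takes effect after the server is restarted
BUFPOOLSIZE 32768
EXPINTERVAL 0

* from an administrative client, for options that SETOPT supports:
setopt bufpoolsize 32768
```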
BUFPOOLSIZE
The database buffer pool provides cache storage, which allows database pages to remain in memory for a longer period of time, providing faster access. Use the BUFPOOLSIZE server option to set the optimal size for the database buffer pool. When database pages remain in cache, the server can make continuous updates to the pages without requiring I/O operations to external storage. While a larger database buffer pool can improve server performance, it also requires more memory.
An optimal setting for the database buffer pool is one that results in a cache hit
percentage greater than or equal to 99%. Performance decreases drastically if the
buffer pool cache hit ratio drops below 99%. To check the cache hit percentage,
issue the QUERY DB FORMAT=DETAIL command.
The default value is 32768, which is a reasonable starting value that equals 8192
database pages. Based on your cache hit rate, increase BUFPOOLSIZE in
increments. A cache hit percentage greater than 99% is an indication that the
proper BUFPOOLSIZE has been reached. However, raising the BUFPOOLSIZE
beyond that level can be very helpful. While increasing BUFPOOLSIZE, take care
not to cause paging in the virtual memory system. Monitor system memory usage
to check for any increased paging after the BUFPOOLSIZE change. Use the RESET
BUFPOOL command to reset the cache hit statistics. If you are paging, buffer pool
cache hit statistics are misleading because the database pages come from the
paging file. Additionally, the buffer pool can be so large that the cost of managing
it and the cost of locating pages within it can outweigh the cache benefits. In most
cases, the benefits of buffering can be achieved with a pool size of one gigabyte or
less.
- The optimal buffer pool size is from 1/8 to 1/4 of real memory, up to 1 GB. Buffer pools much larger than 1 GB can actually lower performance in some environments.
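A possible tuning cycle, using the commands described in this section (the option value and the output field name are illustrative; verify them on your server):

```
query db format=detail     * check the cache hit percentage
reset bufpool              * reset statistics after each change

* dsmserv.opt -- raise in increments while the cache hit
* percentage is below 99%, watching for increased paging:
BUFPOOLSIZE 65536
```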
EXPINTERVAL
Inventory expiration removes client backup and archive file copies from the server.
EXPINTERVAL specifies the interval, in hours, between automatic inventory
expirations run by the Tivoli Storage Manager server. The default is 24.
Backup and archive copy groups can specify the criteria that make copies of files
eligible for deletion from data storage. However, even when a file becomes eligible
for deletion, the file is not deleted until expiration processing occurs. If expiration
processing does not occur periodically, storage pool space is not reclaimed from
expired client files, and the Tivoli Storage Manager server requires increased disk
storage space.
Expiration processing is CPU and I/O intensive. If possible, run it when other Tivoli Storage Manager processes are not occurring. To do this, either schedule expiration once per day, or set EXPINTERVAL to 0 and start the process manually with the EXPIRE INVENTORY server command. Expiration processing can also be run under an administrative schedule.
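For example, expiration could be driven by an administrative schedule rather than by EXPINTERVAL. The schedule name and start time are illustrative:

```
setopt expinterval 0
define schedule expire_daily type=administrative cmd="expire inventory" active=yes starttime=06:00 period=1 perunits=days
```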
Recommendation
EXPINTERVAL 0
LOGPOOLSIZE
A large recovery log buffer pool might increase the rate at which recovery log transactions are committed to the database, but it also requires more memory. The size of the recovery log buffer pool can affect the frequency with which the server forces records to the recovery log. To determine whether LOGPOOLSIZE should be increased, run the QUERY LOG FORMAT=DETAIL command and check the value of LogPool Percentage Wait. If the value is greater than zero, increase the value of LOGPOOLSIZE. As you increase the size of the recovery log buffer pool, monitor system memory usage. Using sizes much larger than the recommended values might result in lower performance.
Recommendation
LOGPOOLSIZE 2048 - 8192 KB
MAXNUMMP
The MAXNUMMP server option specifies the maximum number of mount points a
node is allowed to use on the server.
The MAXNUMMP option can be set to an integer from 0 to 999. Zero specifies that the node cannot acquire any mount point for a backup or archive operation. However, the server still allows the node to use a mount point for a restore or retrieve operation. If the client stores its data in a storage pool that has copy storage pools defined for simultaneous backup, the client might require additional mount points.
As a general rule, assign one mount point for each copy storage pool of sequential
device type. If the primary storage pool is of sequential device type, assign a
mount point for the primary storage pool also.
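For example, for a node that writes to a sequential-access primary pool with two copy storage pools defined for simultaneous backup, the rule above gives three mount points. The node name is illustrative:

```
update node node1 maxnummp=3
```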
MAXSESSIONS
The MAXSESSIONS option specifies the maximum number of simultaneous client
sessions that can connect with the Tivoli Storage Manager server.
The default value is 25 client sessions; the minimum is 2. The maximum is limited only by available virtual memory or communication resources. Limiting the number of client sessions can improve server performance, but it reduces the availability of Tivoli Storage Manager services to the clients.
MOVEBATCHSIZE and MOVESIZETHRESH
The number of client files moved for each server database transaction during a server storage pool backup, restore, migration, reclamation, or move data operation is determined by the number and size of the files in the batch. If the number of files in the batch reaches the MOVEBATCHSIZE before the cumulative size of the files exceeds the MOVESIZETHRESH, the MOVEBATCHSIZE determines the number of files moved or copied in the transaction. If the cumulative size of the files being gathered for a move or copy operation exceeds the MOVESIZETHRESH value before the number of files reaches the MOVEBATCHSIZE, the MOVESIZETHRESH value determines the number of files moved or copied in the transaction.
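A worked example with the recommended values, assuming MOVEBATCHSIZE 1000 (files) and MOVESIZETHRESH 2048 (megabytes); the file sizes are illustrative:

```
1,500 files of 1 MB each:  transaction ends at 1,000 files
                           (MOVEBATCHSIZE is reached first)
10 files of 500 MB each:   transaction ends at 5 files (2,500 MB)
                           (MOVESIZETHRESH is exceeded first)
```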
The DEFINE LOGVOLUME and EXTEND LOG commands are used to add space to the server recovery log. Ensure that additional volumes, formatted with the DSMFMT or ANRFMT program, are available for extending the recovery log.
Recommendation
MOVEBATCHSIZE 1000
MOVESIZETHRESH 2048
RESTOREINTERVAL
The minimum value is 0. The maximum is 10080 minutes (one week). The default is 1440 minutes (24 hours). If the value is set to 0 and the restore is interrupted or fails, the restore is still put in the restartable state, but it is immediately eligible to be expired. Restartable restore sessions consume resources on the Tivoli Storage Manager server, so do not keep these sessions any longer than they are needed.
Recommendation
RESTOREINTERVAL Tune to your environment.
SELFTUNEBUFPOOLSIZE
The SELFTUNEBUFPOOLSIZE server option controls the automatic adjustment of
the buffer pool size.
The SELFTUNEBUFPOOLSIZE option can be set to YES or NO. The default is NO. Specifying YES causes the database cache hit ratio statistics to be reset before expiration processing starts and examined after expiration processing completes. The buffer pool size is adjusted if the cache hit ratio is less than 98%. The percentage increase in buffer pool size is half the difference between the 98% target and the actual buffer pool cache hit ratio. The increase is not made if a platform-specific check fails:
UNIX: The buffer pool size may not exceed 10% of physical memory.
Windows: The same as UNIX, with an additional check that memory load does not exceed 80%.
z/OS®: The buffer pool size may not exceed 50% of the region size.
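For example, if the cache hit ratio measured over expiration processing is 94%, the adjustment described above works out to:

```
increase = (98% - 94%) / 2 = 2%
new size = current BUFPOOLSIZE x 1.02
(applied only if the platform-specific check passes)
```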
| You must have sufficient physical memory. Lack of sufficient physical memory
| means that increased paging occurs. The self-tuning process gradually raises the
| buffer pool size to an appropriate value. Set the initial size according to the
| guidelines in “BUFPOOLSIZE” on page 3, and ensure that the buffer pool size is
| appropriate each time the server starts.
| Recommendation
| SELFTUNEBUFPOOLSIZE No
TCPWINDOWSIZE
To improve backup performance, increase the TCP receive window on the Tivoli Storage Manager server. To improve restore performance, increase the TCP receive window on the client.
The sending host cannot send more data until an acknowledgement and TCP
receive window update are received. Each TCP packet contains the advertised TCP
receive window on the connection. A larger window allows the sender to continue
sending data, and might improve communication performance, especially on fast
networks with high latency.
The TCPWINDOWSIZE option overrides the operating system's TCP send and receive spaces. In AIX, for instance, these parameters are tcp_sendspace and tcp_recvspace, and they are set as options of the "no" command. For Tivoli Storage Manager, the default is 63 KB and the maximum is 2048 KB. Specifying TCPWINDOWSIZE 0 causes Tivoli Storage Manager to use the operating system default. This is not recommended, because the optimal setting for Tivoli Storage Manager might not be the same as the optimal setting for other applications.
The TCPWINDOWSIZE option specifies the size of the TCP sliding window for all clients and all servers. On the server, this applies to all sessions; therefore, raising TCPWINDOWSIZE can increase memory use significantly when there are multiple concurrent sessions. A larger window size can improve communication performance but uses more memory. It enables multiple frames to be sent before an acknowledgment is obtained from the receiver. If long transmission delays are being observed, increasing the TCPWINDOWSIZE might improve throughput.
For all platforms, RFC 1323 support must be enabled to use window sizes larger than 64 KB - 1:
- AIX: Use no -o rfc1323=1.
- HP-UX: Using a window size greater than 64 KB - 1 automatically enables large window support.
- Sun Solaris 10: Use "ndd -set /dev/tcp tcp_wscale_always 1". This should be enabled by default.
- Linux®: Window scaling should be on by default for recent kernel levels. Check with "cat /proc/sys/net/ipv4/tcp_window_scaling". Recent Linux kernels use autotuning, and changing TCP values might have a negative effect on autotuning, so make changes with caution.
- Windows XP and 2003: Using regedit, add or modify the registry value Tcp1323Opts (REG_DWORD) with data 3 under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters].
Attention: Mistakes with regedit can have very serious consequences that are difficult to correct. You are strongly encouraged to back up the entire registry before you start.
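For example, to use a 128 KB window on an AIX server, which requires RFC 1323 support as noted above (the value is illustrative):

```
# AIX: enable TCP window scaling (RFC 1323)
no -o rfc1323=1

* dsmserv.opt (value in KB):
TCPWINDOWSIZE 128
```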
| Note: This option is also valid on the Tivoli Storage Manager client.
TXNGROUPMAX
| The TXNGROUPMAX server option specifies the number of files transferred as a
| group between commit points. This parameter is used in conjunction with the
| TXNBYTELIMIT client option.
| You can override the value of this option for individual client nodes. See the
| TXNGROUPMAX parameter in the REGISTER NODE and UPDATE NODE
| commands.
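For example, to keep the server default while raising the limit for one node that backs up many small files directly to tape (the node name is illustrative):

```
update node smallfiles txngroupmax=4096
```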
| This option is related to the TXNBYTELIMIT option in the client options file.
| TXNBYTELIMIT controls the number of bytes, as opposed to the number of
| objects, that are transferred between transaction commit points. At the completion
| of transferring an object, the client commits the transaction if the number of bytes
| transferred during the transaction reaches or exceeds the value of TXNBYTELIMIT,
| regardless of the number of objects transferred.
| - Check the recovery log percent utilization to ensure there is enough space. Issue the QUERY LOG FORMAT=DETAIL command and check the Pct Util column. It is best to ensure that this value rarely exceeds 80%. If the recovery log becomes full, session or process failures might occur.
| - Increasing the number of files per transaction might result in more data being retransmitted if a retry occurs, which might negatively affect performance.
Lab tests have shown that settings higher than 4096 provide no additional benefit. Set TXNGROUPMAX to 256 in your server options file, and override it with higher values only for clients that have small files and back up directly to tape.
Recommendations
TXNGROUPMAX 256 in the server options file; higher values (maximum 4096) only for clients with small files that back up directly to tape
| The following recommendations can help you optimize Tivoli Storage Manager
| server performance for your environment.
| - Formatting disk storage pool volumes that are placed in the same file system with parallel dsmfmt processes can cause the storage pool volumes to be highly fragmented. For disk storage pool volumes in the same file system, do not issue multiple dsmfmt or DEFINE VOLUME commands simultaneously. Format disk storage pool volumes sequentially, one at a time, if they are placed in the same file system. This creates files that have only a few gaps and improves sequential read/write performance.
| - To avoid disk I/O contention between database, recovery log, and disk storage pool volumes, use separate physical disks for each.
| - Place the Tivoli Storage Manager server database volumes on the fastest disks. If write cache exists for the disk adapter and that adapter has attached disks that contain only database volumes (no storage pool volumes), enable the write cache for the best database performance.
| - Set the TXNGROUPMAX server option to 256 and the TXNBYTELIMIT client option to 25600 to maximize the transaction size. A larger transaction size increases the size of the server file aggregates. File aggregation provides throughput improvements for many server data movement and inventory functions, such as storage pool migration, storage pool backup, and inventory expiration. A larger transaction size when backing up directly to tape reduces the number of tape buffer flushes and therefore can improve throughput.
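For example, to create and format two volumes in the same file system sequentially, let each command complete before issuing the next. The pool name, paths, and sizes are illustrative; WAIT=YES makes each command run in the foreground:

```
define volume backuppool /tsm/stg/vol01.dsm formatsize=2048 wait=yes
define volume backuppool /tsm/stg/vol02.dsm formatsize=2048 wait=yes
```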
Database and recovery log performance
| The access pattern for the database is random for most operations. Because of this, it is best to use the fastest available disks to support database operations. Also ensure that write caching is enabled on the disk volumes where the database and log reside, but only if the cache is non-volatile and can survive unexpected power outages and other failure modes.
During BACKUP DB, the access pattern for the database is sequential: the database volumes are read one at a time, from beginning to end. The volumes are read in the order in which they are defined to the Tivoli Storage Manager server.
| The Tivoli Storage Manager server issues I/Os to the database volumes in batches of up to 64 4 KB blocks for all operations except BACKUP DB. For database backup operations, it issues 256 KB block reads. On Windows, direct I/O is always used for database operations. On UNIX, direct I/O is never used for database operations.
It is best to use multiple volumes for the database, and to place each database volume on a separate physical volume. For RAID configurations, it is best to limit the number of database volumes to the number of physical volumes in the RAID array. The Tivoli Storage Manager server can spread the I/O workload over several volumes in parallel, which increases read and write performance. Consequently, a larger number of smaller volumes is better than a smaller number of larger disks of the same rotation speed. It is best to define the database with 4 to 16 volumes.
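For example, a database defined with four volumes, each placed in a file system on a separate physical disk. The paths and sizes are illustrative, and the FORMATSIZE and WAIT parameters are assumed to be available on your server level:

```
define dbvolume /tsmdb1/db01.dsm formatsize=4096 wait=yes
define dbvolume /tsmdb2/db02.dsm formatsize=4096 wait=yes
define dbvolume /tsmdb3/db03.dsm formatsize=4096 wait=yes
define dbvolume /tsmdb4/db04.dsm formatsize=4096 wait=yes
```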
| The access pattern for the recovery log is always sequential and consists almost entirely of writes. The block size is always 4 KB. Tivoli Storage Manager on Windows always uses direct I/O for recovery log operations; on UNIX, direct I/O is never used for recovery log operations. The log is a good candidate for RAID 0 with Tivoli Storage Manager mirroring or RAID 1 without Tivoli Storage Manager mirroring. Physical placement on the underlying disk is very important: it is best to isolate the log from the database and from the disk storage pools. If this cannot be done, place the log with the storage pools rather than with the database. The number of log volumes is not important, but having two volumes might be desirable for ease of maintenance.
Database and log mirroring provides higher reliability, but at a cost in performance (especially with sequential mirroring). To minimize the impact of database write activity, use disk subsystems with non-volatile write caching. Disk adapters with write cache can nearly double (or more than double) the performance of database write activity when parallel mirroring is used. This is true even if there appears to be plenty of bandwidth left on the database disks. Set the MIRRORWRITE server option to DB PARALLEL, and use database page shadowing to reduce the overhead of mirrored writes.
Backup performance
File backup performance is degraded when there are many versions of an object. Use the DEFINE COPYGROUP or UPDATE COPYGROUP command and modify the VEREXISTS parameter to control the number of versions. The default number of backup versions is 2.
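For example, to limit files that still exist on the client to two backup versions. The domain, policy set, and management class names are illustrative; the change takes effect when the policy set is activated:

```
update copygroup mydomain myset mymc type=backup verexists=2
activate policyset mydomain myset
```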
Disaster recovery performance
Tivoli Storage Manager provides procedures for backing up and restoring the storage pools for disaster recovery. The performance of storage pool backup and recovery for disaster recovery is superior to the performance of export and import.
Cached disk storage pools
The benefit of using cached disk storage pools is seen when restoring files that were recently backed up. If the disk pool is large enough to hold a day's worth of data, then caching is a good option. Use the DEFINE STGPOOL or UPDATE STGPOOL command with the CACHE=YES parameter to enable caching. However, when the storage pool is large and CACHE is set to YES, the storage pool might become fragmented and response time will suffer. If you suspect this condition, turn disk storage pool caching off. Also, disk caching can affect backup throughput, because database updates are required to delete cached files in order to create space for the backup files.
Tuning storage pool migration
Tuning migration processes
When data is migrated from disk to tape, multiple processes can be used if multiple tape drives are available. In some cases, this can improve the time to empty the disk storage volumes, because each migration process works on data for different client nodes. For the MIGPROCESS option, you can specify an integer from 1 to 999, but it should not exceed the number of tape drives available. The default value is 1. You can also use the UPDATE STGPOOL command to modify the number of migration processes.
This can cause the disk storage pool to fill, and when a client attempts to send
data to the disk storage pool, it sees the full condition and attempts to go to the
volume indicated at the next level in the storage hierarchy. If this is a tape pool,
then all drives might be in use by a migration process, in which case the client
session waits on the tape media to be freed by the migration process. The client
then sits idle. In this case, the migration thresholds should be lowered so migration
starts earlier, or more disk space should be allocated to the disk storage pool. You
can also use the UPDATE STGPOOL command to modify the migration thresholds.
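For example, to start migration earlier by lowering the thresholds (BACKUPPOOL is a hypothetical pool name and the values are illustrative):

```
update stgpool backuppool highmig=70 lowmig=30
```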
| Collocation by group
| If you are using collocation by node or filespace on the target storage pools of a
| migration, consider using collocate by group instead.
| Collocating by group can greatly reduce the number of tape mounts during
| migration. Instead of one tape mount per node or filespace there is just one mount
| for an entire group, unless the amount of data to be migrated for that group fills
| the volume.
| Tivoli Storage Manager throughput can degrade if all client backups are started
| simultaneously. It is better to spread out backup session start-ups over time.
| Create several client schedules with staggered start times and assign nodes to
| those schedules appropriately. For nodes that use the client polling method of
| scheduling, use the SET RANDOMIZE command to spread out the nodes’ startup
| times.
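A minimal sketch of staggered schedules (the domain, schedule, and node names are hypothetical):

```
define schedule standard nightly_a action=incremental starttime=20:00
define schedule standard nightly_b action=incremental starttime=22:00
define association standard nightly_a node1,node2
define association standard nightly_b node3,node4
set randomize 25
```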
LAN-free backup
| Using LAN-free backup can improve performance. LAN-free backup to
| SAN-attached tape requires the Tivoli Storage Manager storage agent on the
| client, and backups sent to FILE volumes on SAN-attached disk also require
| Tivoli SANergy®.
v Back up and restore to tape or disk using the SAN. The advantages are:
| – Metadata is sent to the server using the LAN while client data is sent over
| the SAN.
– Frees the Tivoli Storage Manager server from handling data leading to better
scalability.
– Potentially faster than LAN backup and restore.
– Better suited for large-file workloads and databases (Data Protection).
– Small file workloads have bottlenecks other than data movement.
v Ensure that there are sufficient data paths to tape drives.
v Do not use LAN-free backup if you bundle more than 20 separate dsmc
commands in a script.
– dsmc start/stop overhead is higher due to tape mounts.
– Use the new file list feature to back up a list of files.
To get the best performance, consider the following items when you set options for
storage agents:
v The storage agent has its own configuration file, dsmsta.opt, containing many of
the same options as the server dsmserv.opt. In general, use the same settings as
recommended for the server.
v You can use the DSMSTA SETSTORAGESERVER command for some options.
| v Ensure TCPNODELAY is set to YES (the default) on both the storage agent and
| server. Use the option LANFREECOMMMETHOD SHAREDMEM in the client
| options file on client platforms that support it to obtain the lowest CPU usage.
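For example, on a client platform that supports shared memory, the client options file might contain:

```
lanfreecommmethod sharedmem
tcpnodelay      yes
```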
The AIX Virtual Address space is managed by the Virtual Memory Manager
(VMM). VMM is administered by the vmo AIX command; I/O that can be tuned is
controlled by the ioo command. In AIX 5.3 and later, the vmo and ioo commands
replace vmtune.
v The vmo command is used to tune the AIX virtual memory system.
v The vmo and ioo options can be displayed using the vmo -a and ioo -a
commands.
v You can change options by specifying the appropriate option and value.
v Changes to vmo parameters do not survive reboot unless the –p option is
specified.
Recommendation
256 (maximum)
On Solaris, use direct I/O if you are not using raw logical volumes. On AIX, if
you choose not to use raw logical volumes, always use direct I/O for JFS2 file
systems. For JFS, direct I/O might degrade performance, especially with
large-file-enabled file systems.
HP-UX server
Use raw partitions for disk storage pool volumes on an HP-UX Tivoli Storage
Manager server. Raw partitions are recommended because lab measurements show
that raw partition volumes offer better backup and restore throughput than VxFS
volumes on HP-UX. However, for data integrity and recoverability, allocate the
database and recovery log volumes in file systems.
Linux server
Disable any unneeded daemons (services).
Most enterprise distributions ship with a great many features, but typically only a
small subset of them is used. For example, TCP/IP data movement can be blocked
or slowed significantly by the internal firewall in SUSE 9 x86_64, which can be
stopped with /etc/init.d/SuSEfirewall2_setup stop.
Windows server
There are a number of actions that can improve performance for a Tivoli Storage
Manager server running in a Windows environment.
v Using the NTFS file system on the disks used by the Tivoli Storage Manager
server is most often the best choice. These disks include the recovery log,
database, and storage pool volumes. Although there is generally no performance
difference for Tivoli Storage Manager functions between using NTFS and FAT on
these disks, NTFS has the following advantages:
– Support for larger disk partitions
– Better data recovery
– Better file security
– Formatting storage pool volumes on NTFS is much faster
v NTFS file compression should not be used on disk volumes that are used by the
Tivoli Storage Manager server, because of the potential for performance
degradation.
v For optimal Tivoli Storage Manager for Windows server performance with
respect to Windows real memory usage, use the server property setting for
Maximize Throughput for Network Applications. This setting gives priority to
application requests for memory over requests from the Cache Manager for file
system cache. This setting will make the most difference in performance on
systems that are memory constrained.
v Use the server and client option TCPWINDOWSIZE to improve network
throughput during backup and restore and archive and retrieve operations. For
Windows 2000 and XP servers that are communicating exclusively with other
Windows 2000 and XP or UNIX systems, a TCPWINDOWSIZE greater than 63
might be useful.
v For optimal backup and restore performance when using a local client on a
Windows system, use the shared memory communication method. This is done
by including the COMMMETHOD SHAREDMEM option in the server options
file and in the client options file.
v Miscellaneous Tivoli Storage Manager client and server issues:
– Anti-virus software can negatively impact backup performance.
– Disable or do not install unused services.
– Disable or do not install unused network protocols.
– Give preference to background application performance.
– Avoid screen savers.
– Make sure the paging file is not fragmented.
– Keep device drivers updated, especially for new hardware.
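The shared memory method mentioned above is enabled in both options files; a minimal sketch:

```
* dsmserv.opt (server)
commmethod sharedmem

* dsm.opt (local client)
commmethod sharedmem
```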
It is easy to estimate the throughput for workloads with average file sizes different
from those that were tested in the performance lab. However, the overall Tivoli
Storage Manager environment must conform to one of the environments in one of
our evaluation reports.
1. Find the table in an evaluation report for the Tivoli Storage Manager function
and environment that matches your specific requirements.
2. Determine the average file size of the client workload for which the estimate is
to be made. The following formulas call this value EstFileSize and expect this
value in KB. Then apply one of the following rules:
v If the average file size is greater than 256 MB, use the throughput in
KB/second or GB/hour for the 256 MB workload.
v If the average file size is less than 1 KB, use the throughput in KB/second
for the 1 KB workload. Throughput is effectively limited by the number of
files that can be processed in a given amount of time.
v If the average file size falls between the tested workloads, calculate the
throughput for the estimated file size in KB per second using the following
formula:
A = log(UpperThroughput(KB/sec)/LowerThroughput(KB/sec))
B = log(EstFileSize(KB)/LowerFileSize(KB))
C = log(UpperFileSize(KB)/LowerFileSize(KB))
D = log(LowerThroughput(KB/sec))
EstThroughput(KB/sec) = 10^(A × B / C + D)
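As an illustration, the interpolation defined by A, B, C, and D can be evaluated with a short script. This sketch assumes the standard log-linear form EstThroughput = 10^(A×B/C + D), which reproduces the measured throughput exactly at both the lower and upper file sizes; the workload figures in the example are hypothetical, not lab measurements.

```python
import math

def estimate_throughput(est_file_size_kb,
                        lower_file_size_kb, lower_throughput_kb_s,
                        upper_file_size_kb, upper_throughput_kb_s):
    """Interpolate throughput on a log-log scale between two measured
    workloads from an evaluation report."""
    a = math.log10(upper_throughput_kb_s / lower_throughput_kb_s)
    b = math.log10(est_file_size_kb / lower_file_size_kb)
    c = math.log10(upper_file_size_kb / lower_file_size_kb)
    d = math.log10(lower_throughput_kb_s)
    return 10 ** (a * b / c + d)

# Hypothetical measurements: 100 KB/s at 1 KB files, 1000 KB/s at 1024 KB files
print(round(estimate_throughput(32, 1, 100, 1024, 1000), 1))  # prints 316.2
```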
Estimating throughput for environments that have not been directly tested can be
difficult. However, the following important observations can be made:
v Throughput over a network can be expected to reach saturation at around 80
percent of its rated capacity. Efficiency indicates the percentage of maximum
throughput rate that can realistically be achieved. This leads to the following
maximum throughputs that can be obtained for given networks:
In many cases, enabling compression at the tape drive improves Tivoli Storage
Manager throughput. You can use the FORMAT option of the DEFINE DEVCLASS
command to specify the appropriate recording format to be used when writing
data to sequential access media. The default is DRIVE, which specifies that Tivoli
Storage Manager selects the highest format that can be supported by the sequential
access drive on which a volume is mounted. This setting usually allows the tape
control unit to perform compression.
Attention: Avoid specifying the DRIVE value when a mixture of devices are used
in the same library. For example, if you have drives that support recording formats
superior to other drives in a library, do not specify the FORMAT=DRIVE option.
Refer to the appropriate Tivoli Storage Manager Administrator’s Guide for more
information.
If you do not use compression at the client and your data is compressible, you
should achieve higher system throughput if you use compression at the tape
control unit. Refer to the appropriate Tivoli Storage Manager Administrator’s Guide
for more information concerning your specific tape drive. If you compress the data
at the client, we recommend that you not use compression at the tape drive. In this
case, you might lose up to 10-12% of the capacity of the tape media.
| The default is collocation by group. Until node groups are defined, however, no
| collocation occurs. When nodes are assigned to node groups, the server can collocate
| data based on these groups. Collocation by group can yield the following
| performance benefits:
| v Reduce unused tape capacity by allowing more collocated data on individual
| tapes
| v Minimize mounts of target volumes
| v Minimize database scanning and reduce tape passes for sequential-to-sequential
| transfer
| Collocation by group gives the best balance of restore performance versus tape
| volume efficiency. For those nodes where collocation is needed to improve restore
| performance, use collocation by group. Manage the number of nodes in the groups
| so that backup data for the entire group is spread over a manageable number of
| volumes. Where practical, a node can be moved from one collocation group to
| another by first changing the group affinity with the DEFINE
| NODEGROUPMEMBER command then using the MOVE NODEDATA command.
The IBM LTO Ultrium tape drive has a native streaming data rate of up to 15 MB
per second, and up to 30 MB per second with 2:1 compression. When writing to a
tape drive, the drive normally returns control to the application when the data is
in the tape drive buffer but before the data has been written to tape.
This mode of operation provides all tape drives a significant performance
improvement. However, the drive’s buffer is volatile. For the application to ensure
that the write makes it to tape, the application must flush the buffer. Flushing the
buffer causes the tape drive to back hitch (start/stop). The Tivoli Storage Manager
parameters TXNBYTELIMIT and TXNGROUPMAX control how frequently Tivoli
Storage Manager issues this buffer flush command.
When writing to a tape drive, you must also consider network bandwidth. For
example, a 100BaseT Ethernet LAN can sustain 5 to 6 MB per second. Therefore,
you cannot back up to LTO or any other tape drive faster than that.
When IBM 3580 LTO drives are used, ensure that the drive microcode is at the
most current level. Instructions for verifying the current LTO drive microcode
release and how to install the new release can be found in the IBM Ultrium Device
Drivers Installation and User’s Guide. Go to this Web site to find the manual:
ftp://ftp.software.ibm.com/storage/devdrvr/
Busses
If your machine has multiple PCI busses, spread out high-throughput adaptors
among the different busses.
| For example, if you are going to do a lot of backups to disk, you probably do not
| want your network card and disk adaptor on the same PCI bus. Theoretical limits
| of busses are just that, theoretical, though you should be able to get close in most
| cases. As a general rule it is best to have only one or two tape drives per SCSI bus
| and one to four tape drives per fibre HBA.
| Note: Even if a given tape drive is slower than the fibre channel SAN being used,
| tape drive performance is usually better on the faster interfaces. This is
| because the individual blocks are transferred with lower latency, allowing
| Tivoli Storage Manager and the operating system to send the next block
| quicker. For example, an LTO 4 drive will perform better on a 4 Gbit SAN
| than a 2 Gbit SAN even though the drive is only capable of speeds under 2
| Gbit.
Some Tivoli Storage Manager client options can be tuned to improve performance.
Tivoli Storage Manager client options are specified in either the dsm.sys file or the
dsm.opt file.
COMPRESSION
The COMPRESSION client option specifies if compression is enabled on the Tivoli
Storage Manager client. For optimal backup and restore performance with a large
number of clients, consider using client compression.
Compressing the data on the client reduces demand on the network and the Tivoli
Storage Manager server. The reduced amount of data on the server continues to
provide performance benefits whenever this data is moved, such as for storage
pool migration and storage pool backup. However, client compression significantly
reduces the performance of each client, and the reduction is more pronounced on
the slowest client systems.
For optimal backup and restore performance when using fast clients and heavily
loaded network or server, use client compression. For optimal backup and restore
performance when using a slow client, or a lightly loaded network or server, do
not use compression. However, be sure to consider the trade-off of greater storage
requirements on the server when not using client compression. The default for the
COMPRESSION option is NO.
For maximum performance with a single fast client, fast network, and fast server,
turn compression off.
Compression can cause severe performance degradation when there are many
retries due to failed compression attempts. Compression fails when the compressed
file is larger than the original. The client detects this and will stop the compression,
fail the transaction and resend the entire transaction uncompressed. This occurs
because the file type is not suitable for compression or the file is already
compressed (zip files, tar files, and so on). Short of turning off compression, there
are two options you can use to reduce or eliminate retries due to compression:
v Use the COMPRESSALWAYS option. This option eliminates retries due to
compression.
v Use the EXCLUDE.COMPRESSION option in the client options file. This option
disables compression for specific files or sets of files (for example, zip or jpg files).
Recommendations
COMPRESSION NO (single fast client, fast network, fast server); YES (multiple
clients, slow network, slow server)
COMPRESSALWAYS
The COMPRESSALWAYS option specifies whether to continue compressing an
object if it grows during compression, or resend the object, uncompressed. This
option is used with the compression option.
| It is better to identify common types of files that do not compress well and list
| these on one or more client option EXCLUDE.COMPRESSION statements. Files
| that contain large amounts of graphics, audio, or video files and files that are
| already encrypted do not compress well. Even files that seem to be mostly text
| data (for example, Microsoft Word documents) can contain a significant amount of
| graphic data that might cause the files to not compress well.
| Using Tivoli Storage Manager client compression and encryption for the same files
| is perfectly valid. The client first compresses the file data and then encrypts it, so
| that there is no loss in compression effectiveness due to the encryption, and
| encryption is faster if there is less data to encrypt. For example, to exclude objects
| that are already compressed or encrypted, enter the following statements:
| exclude.compression ?:\...\*.gif
| exclude.compression ?:\...\*.jpg
| exclude.compression ?:\...\*.zip
| exclude.compression ?:\...\*.mp3
| exclude.compression ?:\...\*.cab
| Recommendations
COMPRESSALWAYS Yes
To make clients more tolerant of network connectivity interruptions, use the
options COMMRESTARTDURATION and COMMRESTARTINTERVAL to control
the restart window of time and the interval between restart attempts. These
options help in environments that are subject to heavy network congestion or
frequent interruptions, and they ease the management of large numbers of clients
by reducing intervention on error conditions. Overall, performance improves when
the time to detect, correct, and restart after an error condition is taken into
account.
Note: A scheduled event continues if the client reconnects with the server before
the COMMRESTARTDURATION value elapses, even if the event’s startup
window has elapsed.
Syntax
COMMRESTARTDURATION minutes
Recommendation
COMMRESTARTDURATION Tune to your environment.
COMMRESTARTINTERVAL Tune to your environment.
QUIET
The QUIET client option can prevent messages from being written to the screen
during Tivoli Storage Manager backups.
The default is the VERBOSE option, and Tivoli Storage Manager displays
information about each file it backs up. To prevent this, use the QUIET option.
However, messages and summary information are still written to the log files.
There are two main benefits to using the QUIET option:
v For tape backup, the first transaction group of data is always resent. To avoid
this, use the QUIET option to reduce retransmissions at the client.
v If you are using the client scheduler to schedule backups, using the QUIET
option dramatically reduces disk I/O overhead to the schedule log and
improves throughput.
DISKBUFFSIZE
The DISKBUFFSIZE client option specifies the maximum disk I/O buffer size (in
kilobytes) that the client may use when reading files.
| The default value is 32 for all clients except AIX. For AIX, the default value is 256
| except when ENABLELANFREE YES is specified. When ENABLELANFREE YES is
| specified on AIX, the default value is 32. API client applications have a default
| value of 1023, except for Windows API client applications (version 5.3.7 and later),
| which have a default value of 32.
Recommendation
DISKBUFFSIZE Use the default value.
PROCESSORUTILIZATION
The PROCESSORUTILIZATION option (Novell client only) specifies, in
hundredths of a second, the length of time that Tivoli Storage Manager controls
the CPU. Because this option can affect other applications on your client node, use
it only when speed is a high priority.
The default is 1. The recommended values are from 1 to 20. If set to less than 1,
this parameter can have a negative impact on performance. Increasing this value
raises the priority of Tivoli Storage Manager on the CPU, lowering the priority of
other processes. Setting PROCESSORUTILIZATION greater than 20 might prevent
other scheduled processes or NetWare requestors from accessing the file server.
| If all the files are on random disk, only one session is used. There is no
| multi-session restore for a random disk-only storage pool restore. However, if you
| are performing a restore in which the files reside on four tapes or four sequential
| disk volumes and some on random disk, you could use up to five sessions during
| the restore. You can use the MAXNUMMP parameter to set the maximum number
| of mount points a node can use on the server. If the RESOURCEUTILIZATION
| option value exceeds the value of the MAXNUMMP on the server for a node, you
| are limited to the number of sessions specified by MAXNUMMP.
| If the data you want to restore is on five different tape volumes, the maximum
| number of mount points for your node is 5, and RESOURCEUTILIZATION is set
| to 3, then three sessions are used for the restore. If you increase the
| RESOURCEUTILIZATION setting to 5, then five sessions are used. There is a
| one-to-one relationship between the RESOURCEUTILIZATION setting and the
| number of restore sessions allowed. Multiple restore sessions are allowed only
| for no-query restore operations.
| The server sends the MAXNUMMP value to the client during sign-on. During a
| no-query restore, if the client receives a notification from the server that another
| volume has been found and another session can be started to restore the data, the
| client checks the MAXNUMMP value. If another session would exceed that value,
| the client will not start the session.
RESOURCEUTILIZATION
The RESOURCEUTILIZATION client option regulates the number of concurrent
sessions that the Tivoli Storage Manager client and server can use during
processing. Multiple sessions can be initiated automatically through a Tivoli
Storage Manager backup, restore, archive, or retrieve command. Although the
multiple session function is transparent to the user, there are parameters that
enable the user to customize it.
The range for the parameter is from 1 to 10. If the option is not set, by default only
two sessions can be started, one for querying the server and one for sending file
data. A setting of 5 permits up to four sessions: two for queries and two for
sending data. A setting of 10 permits up to eight sessions: four for queries and four
for sending data. The relationship between RESOURCEUTILIZATION and the
maximum number of sessions created is part of an internalized algorithm and, as
such, is subject to change. This table lists the relationships between
RESOURCEUTILIZATION values and the maximum sessions created. Producer
sessions scan the client system for eligible files. The remaining sessions are
consumer sessions and are used for data transfer. The threshold value affects how
quickly new sessions are created.
RESOURCEUTILIZATION value   Maximum number   Unique number of    Threshold
                            of sessions      producer sessions   (seconds)
1                           1                0                   45
2                           2                1                   45
3                           3                1                   45
4                           3                1                   30
5                           4                2                   30
6                           4                2                   20
7                           5                2                   20
8                           6                2                   20
9                           7                3                   20
10                          8                4                   10
If the client file system is spread across multiple disks (RAID 0 or RAID 5), or
multiple large file systems, the recommended RESOURCEUTILIZATION setting is
a value of 5 or 10. This enables multiple sessions with the server during backup or
archive and can result in substantial throughput improvements in some cases. It is
not likely to improve incremental backup of a single large file system with a small
percentage of changed data. If a backup is direct to tape, the client node maximum
mount points allowed parameter, MAXNUMMP, must also be updated at the
server using the update node command.
The default value for the RESOURCEUTILIZATION option is 1, and the maximum
value is 10. For example, if the data to be restored are on five different tape
volumes, and the maximum number of mount points for the node requesting the
restore is five, and RESOURCEUTILIZATION is set to 3, then three sessions are
used for the restore. If the RESOURCEUTILIZATION setting is increased to 5, then
five sessions are used for the restore. There is a one-to-one relationship to the
number of restore sessions allowed and the RESOURCEUTILIZATION setting.
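The relationship described above can be sketched as a simple rule: the number of restore sessions is the number of volumes holding the data, capped by both RESOURCEUTILIZATION and the node's MAXNUMMP value. This is a simplified model inferred from the examples in this section, not the server's exact algorithm.

```python
def restore_sessions(resourceutilization, maxnummp, volumes):
    """Simplified model: one session per volume, capped by the
    RESOURCEUTILIZATION option and the node's MAXNUMMP limit."""
    return min(resourceutilization, maxnummp, volumes)

# Five tape volumes, MAXNUMMP=5: RESOURCEUTILIZATION 3 -> 3 sessions, 5 -> 5
print(restore_sessions(3, 5, 5), restore_sessions(5, 5, 5))  # prints 3 5
```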
Recommendations
RESOURCEUTILIZATION 1 for workstations; 5 for a small server; 10 for a large
server
TAPEPROMPT
The TAPEPROMPT client option specifies whether to prompt the user for tape
mounts.
The TAPEPROMPT option specifies if you want Tivoli Storage Manager to wait for
a tape to be mounted for a backup, archive, restore or retrieve operation, or to
prompt you for your choice.
Recommendations
TAPEPROMPT No
TCPBUFFSIZE
The TCPBUFFSIZE option specifies the size of the internal TCP communication
buffer that is used to transfer data between the client node and the server. A large
buffer can improve communication performance but requires more memory.
Recommendation
TCPBUFFSIZE 32
TCPNODELAY
Use the TCPNODELAY option to disable the TCP/IP Nagle algorithm, so that data
packets smaller than the Maximum Transmission Unit (MTU) size are sent out
immediately.
The default is YES. This generally results in better performance for Tivoli Storage
Manager client/server communications.
Recommendations
TCPNODELAY YES
The TCPWINDOWSIZE option specifies the size of the TCP sliding window for all
clients and for all servers except MVS. It overrides the operating system’s TCP
send and receive spaces; in AIX, for instance, these parameters are set as the “no”
command options tcp_sendspace and tcp_recvspace. Specifying TCPWINDOWSIZE
0 causes Tivoli Storage Manager to use the operating system default. This is not
recommended, because the optimal setting for Tivoli Storage Manager might not
be the same as the optimal setting for other applications. The default is 63 KB, and
the maximum is 2048 KB. A larger window size can improve communication
performance, but uses more memory. It enables multiple frames to be sent before
an acknowledgment is obtained from the receiver. If long transmission delays are
being observed, increasing the TCPWINDOWSIZE might improve throughput.
| Note: For the Sun Solaris client, the TCP buffers, tcp_xmit_hiwat and
| tcp_recv_hiwat, must match the TCPWINDOWSIZE.
| Recommendations
| TCPWINDOWSIZE 63
TXNBYTELIMIT
The TXNBYTELIMIT client option specifies the maximum transaction size in
kilobytes for data transferred between the client and server.
The range of values is 300 KB through 2097152 KB (2 GB); the default is 25600. A
transaction is the unit of work exchanged between the client and server. Because
the client program can transfer more than one file or directory between the client
and server before it commits the data to server storage, a transaction can contain
more than one file or directory. This is called a transaction group. This option
permits you to control the amount of data sent between the client and server
before the server commits the data and changes to the server database, thus
affecting the speed with which the client performs work. The amount of data sent
applies when files are batched together during backup or when receiving files from
the server during a restore procedure. The server administrator can limit the
number of files or directories contained within a group transaction using the
TXNGROUPMAX option. The actual size of a transaction can be less than your
limit: once the TXNGROUPMAX number of files is reached, the client sends the
files to the server even if the transaction byte limit is not reached.
Recommendations
TXNBYTELIMIT 25600; 2097152 (for backup direct to tape)
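For example, a client that backs up directly to tape might set the following in its options file, per the recommendation above:

```
txnbytelimit 2097152
```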
Two command line options that might improve Tivoli Storage Manager
performance are:
IFNEWER
This option is used in conjunction with the restore command and restores
files only if the server date is newer than the date of the local file. This
option might result in lower network utilization if less data travels across
the network.
INCRBYDATE
In a regular incremental backup, the server reads the attributes of all the
files in the file system and passes this information to the client. The client
then compares the server list to a list of its current file system. This
comparison can be very time-consuming, especially for clients on NetWare,
Macintosh, and Windows. These clients sometimes have a limited amount
of memory.
With an incremental-by-date backup, the server only passes the date of the
last successful backup. It is not necessary to query every active file on the
Tivoli Storage Manager server. The time savings are significant. However,
regular, periodic incremental backups are still needed to back up files that
have only had their attributes changed. For example, if a new file in your
file system has a creation date previous to the last successful backup date,
future incremental-by-date backups will not back up this file. This is
because the client sees it as already backed up. Also, files that have been
deleted are not detected by an incremental-by-date backup. These deleted
files will be restored if you perform a full system restore.
Macintosh client
Try to limit the use of Extended Attributes. If Extended Attributes are used, try to
limit their length. Anti-virus software can negatively affect backup and restore
performance.
Windows client
Performance recommendations for Windows clients.
v For optimal backup and restore performance when using a local client on a
Windows system, use the shared memory communication method. Specify
COMMMETHOD SHAREDMEM in both the server options file and the client
options file.
| v Anti-virus products and backup and restore products can use significant
| amounts of system resources and therefore impact application and file system
| performance. They may also interact with each other to seriously degrade the
| performance of either product. For optimal performance of backup and restore:
| – Schedule anti-virus file system scans and incremental backups for
| non-overlapping times.
| – If the anti-virus program allows, change the anti-virus program properties so
| that files are not scanned when opened by the client processes. Some
| anti-virus products can automatically recognize file reads by backup products
| and do not need to be configured. Check the IBM support site for additional
| details.
Instead of cross-referencing the current state of files with the Tivoli Storage
Manager database, you can back up the files indicated as changed in the change
journal. Journal-based backup uses a real-time determination of changed files and
directories and avoids the file system scan and attribute comparison. Here are
some advantages of journal-based backup:
v Much faster than classic incremental, but improvement depends on the amount
of changed data.
v Requires less memory and less disk space.
v Good for large file systems with many files that do not change often.
| Running two or more client program instances at the same time on the same
| system might provide better overall throughput, depending on the available
| resources. Scheduling backups for multiple file systems concurrently on one Tivoli
| Storage Manager client system can be done with any of the following methods:
| v Using one node name, running one client scheduler, and setting the
| RESOURCEUTILIZATION client option to 5 or greater with multiple file
| systems included in the schedule or domain specification.
| v Using one node name, running one client scheduler, and scheduling a command
| that runs a script on the client system that includes multiple command line
| client statements (using dsmc).
| v Using multiple node names and running one client scheduler for each node
| name, in which each scheduler uses a unique client options file, etc.
| In some situations the memory demand of incremental backup causes the backup
| to fail, and the client issues the following message:
| ANS1030E The operating system refused a TSM request for memory allocation.
| This section describes how much memory is required and how to reduce this
| memory requirement.
| For incremental backups the client uses an average of about 300 bytes of memory
| for each file system object (a file or directory). These 300 bytes include the path
| and name of the object and attributes. The average memory requirement increases
| if longer file or directory names or more deeply nested directories are used. For a
| file system with 1 million files and directories, for example, the client would
| require about 300 MB of virtual memory.
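The estimate above can be expressed as a quick calculation. The 300-byte figure is the average cited in this section; the real requirement varies with path depth and name length.

```python
def incremental_backup_memory_mb(num_objects, bytes_per_object=300):
    """Rough virtual-memory estimate (decimal MB) for a classic
    incremental backup at ~300 bytes per file or directory."""
    return num_objects * bytes_per_object / 1_000_000

# 1 million files and directories -> about 300 MB of virtual memory
print(incremental_backup_memory_mb(1_000_000))  # prints 300.0
```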
| You can reduce the virtual memory requirement by any of the following methods:
| v Use the client option MEMORYEFFICIENTBACKUP DISKCACHEMETHOD.
| This option is only available on the Version 5.4 client, or later. The maximum
| virtual memory used by the client process in this case is usually less than 20
| MB. A significant amount of client disk space might be required. See the
| Backup-Archive Clients Installation and User’s Guide for details.
| v Use the client option MEMORYEFFICIENTBACKUP YES. The maximum virtual
| memory used by the client process in this case becomes 300 bytes times the
| maximum number of files in any one directory.
| v Use the client option VIRTUALMOUNTPOINT (UNIX only) to define multiple
| virtual mount points within the single file system, each of which can be backed
| up independently by Tivoli Storage Manager.
| v If the client option RESOURCEUTILIZATION is set to a value greater than 3 and
| multiple file systems are being backed up (or could be backed up, as in the case
| of failover in an active-active cluster), reduce RESOURCEUTILIZATION to 3 or
| below, or use the test flag maxproducers:1. Either change limits the process to
| an incremental backup of a single file system at a time, which reduces the
| virtual memory requirement. If backing up multiple file systems in parallel is
| required for performance reasons, and the combined virtual memory
| requirements exceed the process limits, use multiple backup processes in
| parallel.
| v Use the client option INCRBYDATE, which executes an incremental-by-date
| backup.
| v Use the client include/exclude options to back up only what is necessary.
| v Use the Tivoli Storage Manager journal-based incremental backup function
| (Windows client, and AIX with a Version 5.3.2 client, or later). A full incremental
| backup must be completed before a journal-based backup is possible.
| v Use the Tivoli Storage Manager image backup function to back up the entire
| volume. This might require less time and resources than using incremental
| backups on some file systems with a very large number of very small files.
| v Spread the data across multiple file systems.
| Only part of an application’s virtual memory might be used for allocating data
| structures. The maximum virtual memory available for allocation by a 32-bit
| process depends on the operating system. For Windows, this is 2 GB for most
| processes. The Version 5.3.2 (and later) Windows client can use up to 3 GB for
| data structures if the client system has been booted with the /3GB switch in
| boot.ini. See Microsoft Knowledge Base article 291988 (http://
| support.microsoft.com/kb/291988). For AIX 5L™, this is nine memory segments,
| or 2.25 GB. Other operating
| systems might have different limits. Operating system quotas or limits might need
| to be set to allow a process to use the maximum virtual memory. Refer to the
| ulimit command on UNIX systems.
| The maximum number of files and directories that Tivoli Storage Manager can
| back up on a 32-bit system (2 GB of process memory) using a single incremental
| backup process and the default option MEMORYEFFICIENTBACKUP NO is about
| 7 million: (2 × 1024 × 1024 × 1024 bytes) ÷ 300 bytes per object ≈ 7.2 million
| objects. If the client process tries to
| exceed the operating system process virtual memory limit, the backup fails. If the
| client process memory requirements exceed the amount of real memory on the
| system but do not exceed the virtual memory limit, then the backup might succeed
| but is likely to exhibit poor performance due to paging.
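Under the same 300-bytes-per-object assumption, the 7 million figure follows directly:

```python
BYTES_PER_OBJECT = 300          # average memory per file-system object (per this guide)
PROCESS_LIMIT = 2 * 1024 ** 3   # 2 GB virtual memory limit of a typical 32-bit process

max_objects = PROCESS_LIMIT // BYTES_PER_OBJECT
print(f"about {max_objects / 1e6:.1f} million objects")  # about 7.2 million objects
```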
| Using a 64-bit client, if available for the platform in question, essentially eliminates
| any concern about the virtual memory requirements, but might still require
| significant real memory for optimal performance.
Networks
There are a variety of actions you can take to tune your networks.
v Use dedicated networks for backup (LAN or SAN).
v Keep device drivers updated.
v Ethernet adapter auto-negotiation of speed and duplex generally works well
with newer adapters and switches. If your network hardware is more than
three years old and backup and restore network performance is not as expected,
set the speed and duplex to explicit values (for example, 100 Mb/s full-duplex or
100 Mb/s half-duplex). Make sure that all connections to the same switch
are set to the same values.
v Gigabit Ethernet jumbo frames (9000 bytes): Jumbo frames can give improved
throughput and lower host CPU usage, especially for larger files. Jumbo frames
are available only if they are supported on the client, the server, and the switch;
not all Gigabit Ethernet hardware supports jumbo frames.
| v In networks with mixed frame-size capabilities (for example, standard Ethernet
| frames of 1500 bytes and jumbo Ethernet frames of 9000 bytes) it can be
| advantageous to enable Path Maximum Transmission Unit (PMTU) discovery on
| the systems. Doing so means that each system segments the data sent into
| frames appropriate to the session partners. Those that are fully capable of jumbo
| frames use jumbo frames. Those that have lower capabilities automatically use
| the largest frames that do not cause frame fragmentation and re-assembly
| somewhere in the network path. Avoiding fragmentation is important in
| optimizing the network.
The TCP/IP protocols and functions can be categorized by their functional groups:
the network layer, internetwork layer, transport layer, and application layer. Table 2
shows the functional groups and their related protocols.
Protocol functions
Reliable Delivery
Reliable delivery services guarantee to deliver a stream of data sent from one
machine to another without duplication or loss of data. The reliable protocols use a
technique called acknowledgment with retransmission, which requires the recipient
to communicate with the source, sending back an acknowledgment after it receives
data.
Sliding window
| The sliding window allows TCP/IP to use communication channels efficiently, in
| terms of both flow control and error control. The sliding window is controlled in
| Tivoli Storage Manager through the TCPWINDOWSIZE option.
To achieve reliability of communication, the sender sends a packet and waits until
it receives an acknowledgment before transmitting another. The sliding window
protocol enables the sender to transmit multiple packets before waiting for an
acknowledgment. The advantages are:
v Simultaneous communication in both directions.
v Better utilization of network bandwidth, especially if there are large transmission
delays.
v Traffic flow with reverse traffic data, known as piggybacking. This reverse traffic
might or might not have anything to do with the acknowledgment that is riding
on it.
v Variable window size over time. Each acknowledgment specifies how many
octets have been received and contains a window advertisement that specifies
how many additional octets the receiver is prepared to accept.
A client that continually shrinks its window size is an indication that the client
cannot handle the load; in that case, increasing the window size does not
improve performance.
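The practical effect of the window size can be approximated with the standard window-limited throughput bound: a single session can have at most one full window in flight per round trip, so throughput ≤ window size ÷ round-trip time. The RTT value below is illustrative:

```python
def max_tcp_throughput(window_bytes, rtt_seconds):
    """Window-limited throughput ceiling for one TCP session:
    at most one full window can be in flight per round trip."""
    return window_bytes / rtt_seconds

# A 63 KB window on a 5 ms round trip caps one session at about 12.9 MB/s,
# regardless of the link speed:
print(f"{max_tcp_throughput(63 * 1024, 0.005) / 1e6:.1f} MB/s")
```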
Tivoli Storage Manager uses the TCP/IP communication protocol over the network. It
is important to tune the TCP protocols to obtain maximum throughput. This
requires changing the network parameters that control the behavior of TCP/IP
protocols and the system in general.
An mbuf is a kernel buffer that uses pinned memory. Mbufs come in two sizes:
256 bytes, and 4096 bytes; the 4096-byte buffers are called mbuf clusters, or simply
clusters. The maximum socket buffer size limit is determined by the sb_max kernel
variable. Because mbufs are primarily used to store data for incoming and
outgoing network traffic, they must be configured to have a positive effect on
network performance. To enable efficient mbuf allocation at all times, a minimum
number of mbuf buffers is always kept in the free buffer pool. The minimum
number of mbufs is determined by the lowmbuf option, and the minimum number
of clusters by the lowclust option. The mb_cl_hiwat option controls the maximum
number of free buffers the cluster pool can contain.
The thewall network option controls the maximum RAM that can be allocated
from the Virtual Memory Manager (VMM) to the mbuf management routines. The
netstat -m command can be used to obtain detailed information about the mbufs.
The netstat -I interface-id command can be used to determine whether there are
errors in packet transmissions; if the reported error count is greater than 0,
overflows have occurred. At the device driver
layer, the mbuf chain containing the data is put on the transmit queue, and the
adapter is signaled to start the transmission operation. On the receive side, packets
are received by the adapter and then are queued on the driver-managed receive
queue. The adapter transmit and receive queue sizes can be configured using the
System Management Interface Tool (SMIT).
At the device driver layer, both the transmit and receive queues are configurable. It
is possible to overrun these queues; to determine whether this has happened, use
the netstat -v command.
For best throughput for systems on the same type of network, it is advisable to use
a large MTU. In multi-network environments, if data travels from a network with a
large MTU to a smaller MTU, the IP layer has to fragment the packet into smaller
packets (to facilitate transmission on a smaller MTU network), which costs the
receiving system CPU time to reassemble the fragmented packets. When the data
travels to a remote network, TCP in AIX defaults to a Maximum Segment Size
(MSS) of 512 bytes. This conservative value is based on a requirement that all IP
routers support an MTU of at least 576 bytes.
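The relationship between MTU and MSS can be sketched as follows, assuming IPv4 and TCP headers without options (20 bytes each):

```python
IP_HEADER = 20   # bytes, IPv4 header without options
TCP_HEADER = 20  # bytes, TCP header without options

def mss_for_mtu(mtu):
    """Largest TCP payload per frame that avoids IP fragmentation."""
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 (standard Ethernet)
print(mss_for_mtu(9000))  # 8960 (jumbo frames)
```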
| Note: Jumbo frames can be enabled on Gigabit Ethernet and 10 Gigabit Ethernet
| adapters. Doing so raises the MTU to 9000 bytes. Because there is less
| overhead per packet, jumbo frames typically provide better throughput, lower
| CPU consumption, or both. Consider jumbo frames especially if you have a
| network dedicated to backup tasks. Jumbo frames should only be
| considered if all equipment between most of your Tivoli Storage Manager
| clients and server supports jumbo frames, including routers and switches.
Recommendation
In an SP2 environment with a high-speed switch, use an MTU of 64 KB.
The following table shows the recommended values for the parameters described
in this section.
Note: With the exception of rfc1323, use the current values if they are higher.
Table 1. Network options
lowclust = 200
lowmbuf = 400
thewall = 131072
mb_cl_hiwat = 1200
sb_max = 1310720
rfc1323 = 1
Note: The lowmbuf, lowclust and mb_cl_hiwat options are applicable only for AIX
V3.2.x and not for AIX V4.1.x. In AIX V4.1.x, if setting sb_max to 1310720
results in an error message when running TCP/IP applications, then set the
sb_max value to 757760.
TcpMSSinternetlimit
When data travels to a remote network or a different subnet, the TCPIP.NLM sets
the MTU size to the default Maximum Segment Size (MSS) value of 536 bytes. The
TcpMSSinternetlimit parameter can be used to override the default MSS value and
to set a larger MTU. For NetWare v4.x with TCP/IP v3.0, setting
TcpMSSinternetlimit off in SYS:\ETC\TCPIP.CFG causes the TCPIP.NLM to
use the MTU value specified in the STARTUP.NCF file (maximum physical receive
packet size). Note that the TcpMSSinternetlimit parameter is case sensitive. If
this parameter is not specified correctly, NetWare automatically drops it from the
tcpip.cfg file.
TCPIP.CFG
TcpMSSinternetlimit off
For NetWare v3.x, Novell patch TCP31A.EXE (for TCP/IP v2.75) can provide the
same option.
Chapter 4. Network protocol tuning
Sun Solaris network settings
Tuning TCP/IP settings for Sun Solaris servers and clients can improve
performance.
v TCPWINDOWSIZE 32K, which is set in the client's dsm.sys file, is recommended
for the Solaris client in FDDI and fast (100 Mb) Ethernet network environments.
v TCPWINDOWSIZE 63K or higher is recommended for Gigabit Ethernet network
environments. One good way to find the optimal TCPWINDOWSIZE value in
your specific network environment is to run the TTCP program multiple times,
with a different TCPWINDOWSIZE set for each run. The raw network throughput
number reported by TTCP can be used as a guide for selecting the best
TCPWINDOWSIZE for your Tivoli Storage Manager server and client. TTCP is
freeware that can be downloaded from many Sun freeware web sites. The
default values for the TCP transmit and receive buffers are only 8 KB on Solaris.
The default values of tcp_xmit_hiwat and tcp_recv_hiwat must be changed to the
value of TCPWINDOWSIZE to avoid TCP buffer overrun problems. You can
use the Solaris "ndd -set" command to change the value of these two TCP
buffers.
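For example, the high-water marks can be raised to match a 63 KB TCPWINDOWSIZE (a sketch; verify the parameter names on your Solaris release, and note that ndd settings do not persist across reboots):

```
# Raise the Solaris TCP transmit and receive high-water marks
ndd -set /dev/tcp tcp_xmit_hiwat 65535
ndd -set /dev/tcp tcp_recv_hiwat 65535
# Display the current value to confirm the change
ndd /dev/tcp tcp_xmit_hiwat
```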
v On SunOS, the TCP/IP software parameters can be changed by editing the
netinet/in_proto.c file in the release kernel build directory (usually /usr/sys).
After changing the parameters, you need to rebuild the kernel. The parameters
that can affect performance are:
tcp_default_mss
Specifies the default Maximum Segment Size (MSS) for TCP in bytes.
The MSS is based on the Maximum Transmission Unit (MTU) size of the
network if the destination is on the same network. To avoid
fragmentation, the conservative value of 512 is used. For improved
performance on Ethernet or Token-Ring, larger MSS values are
recommended. (For example, settings of 1024, 1500, 2048, or 4096 can be
used.) On Ethernet LANs, the largest MTU value is 1500.
tcp_sendspace
Specifies the number of bytes that the user can send to a TCP socket
buffer before being blocked. The default values can be changed on a
given socket with the SO_SNDBUF ioctl. The default value is 4096.
Recommendation: Set the tcp_sendspace parameter to 16 KB or 32 KB.
tcp_recvspace
Specifies the number of bytes that the remote TCP can send to a TCP
socket buffer before being blocked. The default values can be changed
on a given socket with the SO_RCVBUF ioctl. The default value is 4096.
Recommendation: Set the tcp_recvspace parameter to 16 KB or 32 KB.
TCPIP.DATA
PROFILE.TCPIP
During initialization of the TCPIP stack, configuration parameters for the stack are
read from the PROFILE.TCPIP configuration data set. Reference the z/OS IP
Configuration manual for additional information on the parameters that are used in
this file.
The PROFILE.TCPIP contains TCP buffer sizes, LAN controller definitions, server
ports, home IP addresses, gateway statements, VTAM® LUs for Telnet use, and so
on.
| The TCPWINDOWSIZE server option allows you to set the TCP/IP send and
| receive buffers independently of the TCP/IP stack settings. The default size is 63
| KB. Therefore,
| you only need to set the TCP/IP profile TCPMAXRCVBUFRSIZE parameter to a
| value equal to or larger than the value you want for the server TCPWINDOWSIZE
| option. You can set the TCPSENDBFRSIZE and TCPRCVBUFRSIZE parameters to
| values appropriate for the non-Tivoli Storage Manager network workloads on the
| system, because these parameters are overridden by the server TCPWINDOWSIZE
| option. When send/recv buffer sizes are not specified in the PROFILE, a default
| size of 16 KB is used for send/recv buffers.
| IPCONFIG PATHMTUDISCOVERY
| TCPCONFIG TCPMAXRCVBUFRSIZE 524288
| TCPSENDBFRSIZE 65535
| TCPRCVBUFRSIZE 65535
| BEGINROUTES-ENDROUTES
| Use the BEGINROUTES statement to add static routes to the IP route table. The
| BEGINROUTES allows a BSD style syntax to be specified for the destination IP
| address and address mask. The destination IP address can be an IPv4 or IPv6
| address and does not need to be a fully qualified address. BEGINROUTES is the
| recommended method for defining static routes.
| Specify the MTU for the OSA-Express or OSA-Express2 link on the ROUTE
| statement within a BEGINROUTES/ENDROUTES block. The maximum frame size
| supported on OSA-Express or OSA-Express2 adapters is 8992 bytes. Use
| NODELAYACKS on the PORT statement for the TSM server or on a ROUTE
| statement for a link dedicated for backup activity. Immediately acknowledging
| packets received by the server can improve backup throughput significantly.
| BEGINROUTES
| ;
| ; Destination Subnet Mask First Hop Link Name Packet Size Options
| ;
| ROUTE 10.11.214.0 255.255.252.0 = ET0 MTU 1500
| ROUTE 10.10.48.0 255.255.248.0 = GIGA1 MTU 8992 NODELAYACKS
| ROUTE 10.10.56.0 255.255.248.0 = GIGA2 MTU 8992 NODELAYACKS
| ROUTE DEFAULT 10.11.214.1 ET0 MTU 1500
| ;
| ENDROUTES
| The first BEGINROUTES statement of each configuration data set that is issued
| replaces all static routes in the existing routing table with the new route
| information. All static routes are deleted, along with all routes learned by way of
| ICMP or ICMPv6 redirects. Routes created by OMPROUTE and router
| advertisements are not deleted. Subsequent BEGINROUTES statements in the same
| data set add entries to the routing table.
Tip: Set the TCP window size on z/OS to the allowed maximum by setting
TCPRCVBUFRSIZE to 32K or larger. If the client workstation permits, set the
client window size to 65535. However, if the installation is storage
constrained, use the default TCPRCVBUFRSIZE of 16K.
v Ensure that the client and server MTU/packet sizes are equal. Follow the
recommendations given in the PROFILE.TCPIP section.
v Ensure that TCP/IP and all other traces are turned off for optimal performance.
Trace activity does create extra processing overhead.
v Follow the z/OS UNIX System Services performance tuning guidelines in the
z/OS UNIX System Services Planning manual or at this URL http://
www.ibm.com/servers/eserver/zseries/zos/unix/bpxa1tun.html
v Region sizes and dispatching priority: It is highly recommended to set the region
size to 0K or 0M for the TCPIP stack address space and for started tasks such as
the FTP server, the SMTP/NJE server, the Web server, the Tivoli Storage
Manager server, and so on.
v If your environment permits, set the dispatching priorities for TCPIP and VTAM
to be equivalent, and keep servers slightly lower than TCPIP and VTAM. For
other started tasks, such as FTP, keep them slightly lower than the TCPIP task.
v If you are using Work Load Manager, follow the above recommendations when
your installation defines performance goals in a service policy. Service policies
are defined through an ISPF application and they set goals for all types of z/OS
managed work.
v If you are using TCP/IP V3R2, see the MVS TCP/IP V3R2 Performance Tuning
Guide. That tuning guide also includes step-by-step processes for tuning other
TCP/IP platforms, such as AIX, OS/2®, DOS, and VM.
v Estimate how many z/OS UNIX System Services users, processes, ptys, sockets,
and threads are needed for your z/OS UNIX installation, and update your
BPXPRMxx member in SYS1.PARMLIB accordingly.
v Spread z/OS UNIX user HFS datasets over more DASD volumes for optimal
performance.
The archiving function has undergone major changes since the first version of
Tivoli Storage Manager, AdStar Distributed Storage Manager (ADSM). Today,
database performance, function, and ease of use are significantly improved.
Consider using backup sets as an alternative for archives if you want to archive an
entire filespace at a point in time. Backup sets have a much smaller impact on the
database. Each backup set creates only one entry in the database. If backup sets are
not appropriate, another option might be to aggregate the files and directories
before archiving them (for example, PKZIP, tar, and so on).
Here are some of the symptoms that might indicate a problem with archiving:
v Slow archive or retrieve throughput: Archive performance degrades as more
and more duplicate directory entries are archived. Archive throughput can
degrade over a period of days or months and archive throughput might
gradually worsen as the archive progresses. Throughput can even become so
slow that it appears that the archive has stopped. Examine all other tuning
variables first to be sure the problem is not one that can be fixed by parameter
changes.
v Long inventory expiration duration: Inventory expiration might take longer and
longer because it must process an increasing load of duplicate archived
directories. Also, for each expired directory archive, inventory expiration checks
for dependent files. This tends to compound the problem when there are many
duplicate archived directories.
v Excessive growth of the database: The database grows because each archived
directory requires an entry. First consider other possible causes of database
growth: new clients, changes in management class retention or versioning, or
heavy growth from existing clients.
Corrective actions
If you are experiencing any of the symptoms described above and you think it
might be related to excessive archived directories, the best action to take is to
contact the IBM Software Support Center. They can help you use the service
utilities to examine your system and to take corrective action if it is appropriate.
You might be asked to run these commands:
v QUERY OCCUPANCY TYPE=ARCHIVE
v UPDATE ARCHIVE SHOWSTAT
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:
AIX, DB2, DFSMSrmm, Domino, Enterprise Storage Server, eServer, IBM, Informix,
iSeries, Lotus, Magstar, MVS, OS/390, Passport Advantage, pSeries, RACF,
Redbooks, SANergy, SecureWay, SP, Tivoli, TotalStorage, VTAM, WebSphere, z/OS,
zSeries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Glossary
The terms in this glossary are defined as they pertain to the IBM Tivoli Storage
Manager library. If you do not find the term you need, refer to the IBM Software
Glossary on the Web at this address: http://www.ibm.com/ibm/terminology/.
backup set
A portable, consolidated group of active backup versions of files, generated
for a backup-archive client.
backup version
A file that a user backed up to server storage. More than one backup
version can exist in server storage, but only one backup version is the
active version. See also active version and inactive version.
binding
The process of associating a file with a management class name. See
rebinding.
buffer pool
Temporary space used by the server to hold database or recovery log
pages. See database buffer pool and recovery log buffer pool.
cache The process of leaving a duplicate copy on random access media when the
server migrates a file to another storage pool in the hierarchy.
central scheduler
A function that allows an administrator to schedule client operations and
administrative commands. The operations can be scheduled to occur
periodically or on a specific date. See client schedule and administrative
command schedule.
client A program running on a PC, workstation, file server, LAN server, or
mainframe that requests services of another program, called the server. The
following types of clients can obtain services from an IBM Tivoli Storage
Manager server: administrative client, application client, API client,
backup-archive client, and HSM client (also known as IBM Tivoli Storage
Manager).
client domain
The set of drives, file systems, or volumes that the user selects to back up
or archive using the backup-archive client.
client migration
The process of copying a file from a client node to server storage and
replacing the file with a stub file on the client node. The space
management attributes in the management class control this migration. See
also space management.
client node
A file server or workstation on which the backup-archive client program
has been installed, and which has been registered to the server.
client node session
A period of time in which a client node communicates with a server to
perform backup, restore, archive, retrieve, migrate, or recall requests.
Contrast with administrative session.
client options file
A file that a client can change, containing a set of processing options that
identify the server, communication method, and options for backup,
archive, hierarchical storage management, and scheduling. Also called the
dsm.opt file.
client-polling scheduling mode
A client/server communication technique where the client queries the
server for work. Contrast with server-prompted scheduling mode.
copy storage pool
A named set of volumes that contains copies of files that reside in primary
storage pools. Copy storage pools are used only to back up the data stored
in primary storage pools. A copy storage pool cannot be a destination for a
backup copy group, an archive copy group, or a management class (for
space-managed files). See primary storage pool and destination.
CPU utilization
CPU utilization is computed as total system CPU time divided by the
elapsed time.
damaged file
A physical file for which IBM Tivoli Storage Manager has detected read
errors.
Data Link Control (DLC) (APPC)
The communications link protocol used to transmit data between two
physically linked machines. User data is transmitted inside DLC frames.
Token-ring, Ethernet, and SDLC protocols are examples of commonly used
DLCs on SNA networks today.
database
A collection of information about all objects managed by the server,
including policy management objects, users and administrators, and client
nodes.
database backup series
One full backup of the database, plus up to 32 incremental backups made
since that full backup. Each full backup that is run starts a new database
backup series. A number identifies each backup series.
database backup trigger
A set of criteria that defines when and how database backups are run
automatically. The criteria determine how often the backup is run, whether
the backup is a full or incremental backup, and where the backup is stored.
database buffer pool
Storage that is used as a cache to allow database pages to remain in
memory for long periods of time, so that the server can make continuous
updates to pages without requiring input or output (I/O) operations from
external storage.
database snapshot
A complete backup of the entire IBM Tivoli Storage Manager database to
media that can be taken off-site. When a database snapshot is created, the
current database backup series is not interrupted. A database snapshot
cannot have incremental database backups associated with it. See also
database backup series. Contrast with full backup.
data mover
A device, defined to IBM Tivoli Storage Manager, that moves data on
behalf of the server. A NAS file server can be a data mover.
default management class
A management class assigned to a policy set that the server uses to
manage backed-up or archived files when a user does not specify a
management class for a file.
other servers, using server-to-server communication. See configuration
manager, managed server, profile, and subscription.
enterprise logging
The sending of events from IBM Tivoli Storage Manager servers to a
designated event server. The event server routes the events to designated
receivers, such as to a user exit. See also event.
estimated capacity
The available space, in megabytes, of a storage pool.
event An administrative command or a client operation that is scheduled to be
run using IBM Tivoli Storage Manager scheduling.
A message that an IBM Tivoli Storage Manager server or client issues.
Messages can be logged using IBM Tivoli Storage Manager event logging.
event record
A database record that describes actual status and results for events.
event server
A server to which other servers can send events for logging. The event
server routes the events to any receivers that are enabled for the sending
server’s events.
exclude
To identify files that you do not want to include in a specific client
operation, such as backup or archive. You identify the files in an
include-exclude list.
exclude-include list
See include-exclude list.
expiration
The process by which files are identified for deletion because their
expiration date or retention period has passed. Backed-up or archived files
are marked expired by IBM Tivoli Storage Manager based on the criteria
defined in the backup or archive copy group.
expiration date
On some IBM Tivoli Storage Manager servers, a device class attribute used
to notify tape management systems of the date when IBM Tivoli Storage
Manager no longer needs a tape volume. The date is placed in the tape
label so that the tape management system does not overwrite the
information on the tape volume before the expiration date.
export To copy administrator definitions, client node definitions, policy
definitions, server control information, or file data to external media, or
directly to another server. Used to move or copy information between
servers.
extend
To increase the portion of available space that can be used to store
database or recovery log information. Contrast with reduce.
file space
A logical space in IBM Tivoli Storage Manager server storage that contains
a group of files backed up or archived by a client. For clients on Windows
systems, a file space contains files from a logical partition that is identified
by a volume label. For clients on UNIX systems, a file space contains files
from the same file system, or the part of a file system that stems from a
virtual mount point. Clients can restore, retrieve, or delete their file spaces
inactive version
A backup version of a file that is either not the most recent backup version,
or that is a backup version of a file that no longer exists on the client
system. Inactive backup versions are eligible for expiration processing
according to the management class assigned to the file. Contrast with active
version.
include-exclude file
A file containing statements that IBM Tivoli Storage Manager uses to
determine whether to include certain files in specific client operations, and
to determine the associated management classes to use for backup, archive,
and space management. See include-exclude list.
include-exclude list
A group of include and exclude option statements that IBM Tivoli Storage
Manager uses. The exclude options identify files that are not to be included
in specific client operations such as backup or space management. The
include options identify files that are exempt from the exclusion rules. The
include options can also assign a management class to a file or group of
files for backup, archive, or space management services. The
include-exclude list for a client may include option statements from the
client options file, from separate include-exclude files, and from a client
option set on the server.
incremental backup
The process of backing up files or directories that are new or have changed
since the last incremental backup. See also selective backup.
The process of copying only the pages in the database that are new or
changed since the last full or incremental backup of the database. Contrast
with full backup. See also database backup series.
Internet address (TCP/IP)
A unique 32-bit address identifying each node in an internet. An internet
address consists of a network number and a local address. Internet
addresses are represented in dotted-decimal notation and are used to
route packets through the network.
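The dotted-decimal notation described above can be sketched as follows (the address value is an arbitrary example):

```python
def to_dotted_decimal(addr32):
    """Render a 32-bit Internet address in dotted-decimal notation."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# Each of the four octets of the 32-bit value becomes one decimal field:
print(to_dotted_decimal(0xC0A80001))  # 192.168.0.1
```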
ITR The internal throughput rate, measured in units of work (for example, files
processed) per unit of CPU time.
Kilobyte (KB)
1,024 bytes (two to the tenth power) when used in this publication.
KB per CPU second
A measure of how effectively a single CPU transfers KB of data per CPU
busy second. The number is calculated as follows:

KB per CPU second = (throughput (KB/sec) x 100) / (number of CPUs x % server CPU utilization)

A larger number means the CPU is more efficient in transferring the data.
This number can be used to compare the effectiveness of CPUs across
different workloads or to compare different CPU types for a given
performance evaluation. For SMP systems it is important to understand
that this metric applies to the efficiency of a single CPU. The total
effectiveness of multiple CPUs in an SMP system working together can be
estimated by multiplying "KB per CPU second" by the number of available
CPUs.
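The calculation can be sketched as follows; the throughput and utilization figures are hypothetical:

```python
def kb_per_cpu_second(throughput_kb_sec, num_cpus, cpu_utilization_pct):
    """KB of data transferred per busy second of a single CPU."""
    return (throughput_kb_sec * 100) / (num_cpus * cpu_utilization_pct)

# Hypothetical measurement: 5000 KB/sec on a 4-CPU server at 25% utilization.
single_cpu = kb_per_cpu_second(5000, 4, 25)  # 5000.0 KB per CPU second
# Estimated total effectiveness of all 4 CPUs working together:
all_cpus = single_cpu * 4  # 20000.0
```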
LAN-free data movement
The direct movement of client data between a client machine and a storage
device on a SAN, rather than on the LAN.
can include policy, schedules, client options sets, server scripts,
administrator registrations, and server and server group definitions.
managed server
An IBM Tivoli Storage Manager server that receives configuration
information from a configuration manager via subscription to one or more
profiles. Configuration information can include definitions of objects such
as policy and schedules. See configuration manager, subscription, and profile.
managed system
A client or server that requests services from the IBM Tivoli Storage
Manager server.
management class
A policy object that users can bind to each file to specify how the server
manages the file. The management class can contain a backup copy group,
an archive copy group, and space management attributes. The copy groups
determine how the server manages backup versions or archive copies of
the file. The space management attributes determine whether the file is
eligible to be migrated by the space manager client to server storage and
under what conditions the file is migrated. See also copy group, space
manager client, binding, and rebinding.
MAXDATARCV (NetBIOS)
The maximum receive data size, in bytes. This is the maximum size of the
user data in any frame that this node will receive on a session.
maximum extension
Specifies the maximum amount of storage space, in megabytes, by which
you can extend the database or the recovery log.
maximum reduction
Specifies the maximum amount of storage space, in megabytes, by which
you can reduce the database or the recovery log.
Maximum Transmission Unit (TCP/IP)
The size, in bytes, of the largest packet that a given layer of a
communications protocol can pass onwards.
maximum utilization
The highest percentage of assigned capacity used by the database or the
recovery log.
MAXIN (NetBIOS)
The number of NetBIOS message packets received before sending an
acknowledgment.
MAXOUT (NetBIOS)
The number of NetBIOS message packets to send before expecting an
acknowledgment.
Megabyte (MB)
1,048,576 bytes (two to the twentieth power) when used in this publication.
migrate
To move data from one storage location to another. See also client migration
and server migration.
mirroring
The process of writing the same data to multiple disks at the same time.
The mirroring of data protects against data loss within the database or
within the recovery log.
NETBIOSTIMEOUT (NetBIOS)
The number of seconds that must elapse before a time out occurs for a
NetBIOS send or receive. It is found in the dsm.opt server options file.
network-attached storage (NAS) file server
A dedicated storage device with an operating system that is optimized for
file-serving functions. In IBM Tivoli Storage Manager, a NAS file server
can have the characteristics of both a node and a data mover. See also data
mover and NAS node.
Network Data Management Protocol (NDMP)
An industry-standard protocol that allows a network storage-management
application (such as IBM Tivoli Storage Manager) to control the backup
and recovery of an NDMP-compliant file server, without installing
third-party software on that file server.
node A workstation or file server that is registered with an IBM Tivoli Storage
Manager server to receive its services. See also client node and NAS node.
In a Microsoft cluster configuration, one of the computer systems that
make up the cluster.
node privilege class
A privilege class that allows an administrator to remotely access
backup-archive clients for a specific client node or for all clients in a policy
domain. See also privilege class.
non-native data format
A format of data written to a storage pool that is different from the format
that the server uses for basic LAN-based operations. The data is written by
a data mover instead of the server. Storage pools with data written in a
non-native format may not support some server operations, such as audit
of a volume. The NETAPPDUMP data format for NAS node backups is an
example of a non-native data format.
open registration
A registration process in which users can register their own workstations
as client nodes with the server. Contrast with closed registration.
operator privilege class
A privilege class that allows an administrator to issue commands that
disable or halt the server, enable the server, cancel server processes, and
manage removable media. See also privilege class.
Pacing (APPC)
A mechanism used to control the flow of data in SNA. Pacing allows large,
fast machines to communicate with smaller, less capable machines. Pacing
is unidirectional, which means that a machine can have a different send
window than its receive window.
Packet (TCP/IP)
The unit or block of data of one transaction between a host and its
network. A packet usually contains a network header, at least one
high-level protocol header, and data blocks. Packets are the exchange
medium used at the internetwork layer to send and receive data.
PACKETS (NetBIOS)
The number of I-frame packet descriptors that the NetBIOS protocol can
use to build DLC frames from NetBIOS messages.
files, archive copies of files, and files migrated from HSM client nodes. You
can back up a primary storage pool to a copy storage pool. See destination
and copy storage pool.
privilege class
A level of authority granted to an administrator. The privilege class
determines which administrative tasks the administrator can perform. For
example, an administrator with system privilege class can perform any
administrative task. Also called administrative privilege class. See also
system privilege class, policy privilege class, storage privilege class, operator
privilege class, analyst privilege class, and node privilege class.
profile
A named group of configuration information that can be distributed from a
configuration manager when a managed server subscribes. Configuration
information can include registered administrators, policy, client schedules,
client option sets, administrative schedules, IBM Tivoli Storage Manager
command scripts, server definitions, and server group definitions. See
configuration manager and managed server.
randomization
The process of distributing schedule start times for different clients within
a specified percentage of the schedule’s startup window.
rebinding
The process of associating a backed-up file with a new management class
name. For example, rebinding occurs when the management class
associated with a file is deleted. See binding.
recall To access files that were migrated from workstations to server storage by
using the space manager client. Contrast with migrate.
receiver
A server repository that contains a log of server messages and client
messages as events. For example, a receiver can be a file exit, a user exit,
or the IBM Tivoli Storage Manager server console and activity log. See also
event.
reclamation
A process of consolidating the remaining data from many sequential access
volumes onto fewer new sequential access volumes.
reclamation threshold
The percentage of reclaimable space that a sequential access media volume
must have before the server can reclaim the volume. Space becomes
reclaimable when files are expired or are deleted. The percentage is set for
a storage pool.
recovery log
A log of updates that are about to be written to the database. The log can
be used to recover from system and media failures.
recovery log buffer pool
Storage that the server uses to hold new transaction records until they can
be written to the recovery log.
reduce
To free enough space in the database or the recovery log so that you can
delete a volume. Contrast with extend.
Segment (TCP/IP)
TCP views the data stream as a sequence of octets or bytes that it divides
into segments for transmission. Each segment travels across an internet in
a single IP Datagram.
selective backup
The process of backing up selected files or directories from a client domain.
See also incremental backup.
serialization
The process of handling files that are modified during backup or archive
processing. See static, dynamic, shared static, and shared dynamic.
Server CPU Time
This is the total CPU time on the Tivoli Storage Manager server divided by
the total workload in logical MB and expressed as CPU seconds per MB.
For measurements with a local Tivoli Storage Manager client, this includes
both the client and server CPU time.
Server CPU Utilization
This is the total CPU time on the Tivoli Storage Manager server divided by
the elapsed time and expressed as a percentage. For measurements with a
local Tivoli Storage Manager client, this includes both the client and server
CPU time. On multiple processor machines, this is an average across all
processors.
Server Efficiency
This is the total client workload executed divided by the total CPU time on
the Tivoli Storage Manager server and expressed as KB per CPU second.
For measurements with a local Tivoli Storage Manager client, this includes
both the client and server CPU time. This value can be used to compare
the CPU efficiency of execution of different tests or workloads.
Server ITR
This is the internal throughput rate (ITR) in workload files processed
divided by the CPU time on the Tivoli Storage Manager server and
expressed as files per CPU second. For measurements with a local Tivoli
Storage Manager client, this includes both the client and server CPU time.
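The four server metrics defined above can all be derived from one set of raw measurements. A minimal sketch, using hypothetical measurement figures:

```python
def server_metrics(cpu_seconds, elapsed_seconds, num_cpus,
                   workload_mb, files_processed):
    """Derive the server performance metrics defined above from raw
    measurement data."""
    return {
        # Server CPU Time: CPU seconds consumed per MB of client workload
        "cpu_time_per_mb": cpu_seconds / workload_mb,
        # Server CPU Utilization: average busy percentage across all processors
        "cpu_utilization_pct": cpu_seconds / (elapsed_seconds * num_cpus) * 100,
        # Server Efficiency: KB of workload executed per CPU second
        "efficiency_kb_per_cpu_sec": workload_mb * 1024 / cpu_seconds,
        # Server ITR: workload files processed per CPU second
        "itr_files_per_cpu_sec": files_processed / cpu_seconds,
    }

# Hypothetical run: 200 CPU seconds over 1000 elapsed seconds on 2 CPUs,
# moving 4096 MB of client data in 50000 files.
metrics = server_metrics(200, 1000, 2, 4096, 50000)
```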
server migration
The process of moving data from one storage pool to the next storage pool
defined in the hierarchy, based on the migration thresholds defined for the
storage pools. See also high migration threshold and low migration threshold.
server options file
A file that contains settings that control various server operations. These
settings, or options, affect such things as communications, devices, and
performance.
server-prompted scheduling mode
A client/server communication technique where the server contacts the
client when a scheduled operation needs to be done. Contrast with
client-polling scheduling mode.
server storage
The primary and copy storage pools used by the server to store users’ files:
backup versions, archive copies, and files migrated from space manager
client nodes (space-managed files). See primary storage pool, copy storage
pool, storage pool volume, and volume.
storage agent
A program that enables IBM Tivoli Storage Manager to back up and restore
client data directly to and from SAN-attached storage.
storage hierarchy
A logical ordering of primary storage pools, as defined by an
administrator. The ordering is usually based on the speed and capacity of
the devices that the storage pools use. In IBM Tivoli Storage Manager, the
storage hierarchy is defined by identifying the next storage pool in a
storage pool definition. See storage pool.
storage pool
A named set of storage volumes that is the destination that the IBM Tivoli
Storage Manager server uses to store client data. A storage pool stores
backup versions, archive copies, and files that are migrated from space
manager client nodes. You back up a primary storage pool to a copy
storage pool. See primary storage pool and copy storage pool.
storage pool volume
A volume that has been assigned to a storage pool. See volume, copy storage
pool, and primary storage pool.
storage privilege class
A privilege class that allows an administrator to control how storage
resources for the server are allocated and used, such as monitoring the
database, the recovery log, and server storage. Authority can be restricted
to certain storage pools. See also privilege class.
stub file
A file that replaces the original file on a client node when the file is
migrated from the client node to server storage by Tivoli Storage Manager
for Space Management.
subscription
In a Tivoli environment, the process of identifying the subscribers that the
profiles are distributed to. For IBM Tivoli Storage Manager, this is the
process by which a managed server requests that it receive configuration
information associated with a particular profile on a configuration
manager. See managed server, configuration manager, and profile.
system privilege class
A privilege class that allows an administrator to issue all server
commands. See also privilege class.
tape library
A collection of drives and tape cartridges. The tape library may be an
automated device that performs tape cartridge mounts and demounts
without operator intervention.
tape volume prefix
A device class attribute that is the high-level-qualifier of the file name or
the data set name in the standard tape label.
target node
A client node for which other client nodes (called agent nodes) have been
granted proxy authority. The proxy authority allows the agent nodes to
perform operations such as backup and restore on behalf of the target
node, which owns the data being operated on.
inactive versions. The number of versions retained by the server is
determined by the copy group attributes in the management class.
virtual file space
A representation of a directory on a network-attached storage (NAS) file
system as a path to that directory. A virtual file space is used to back up
the directory as a file space in IBM Tivoli Storage Manager server storage.
virtual volume
An archive file on a target server that represents a sequential media volume
to a source server.
volume
The basic unit of storage for the IBM Tivoli Storage Manager database,
recovery log, and storage pools. A volume can be an LVM logical volume,
a standard file system file, a tape cartridge, or an optical cartridge. Each
volume is identified by a unique volume identifier. See database volume,
scratch volume, and storage pool volume.
volume history file
A file that contains information about: volumes used for database backups
and database dumps; volumes used for export of administrator, node,
policy, or server data; and sequential access storage pool volumes that have
been added, reused, or deleted. The information is a copy of the same
types of volume information in the IBM Tivoli Storage Manager database.
Window size (TCP/IP)
The number of packets that can be unacknowledged at any given time. For
example, in a sliding window protocol with a window size of 8, the sender
is permitted to send 8 packets before it receives an acknowledgment.
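The sliding-window accounting in the example above can be sketched as:

```python
def send_capacity(window_size, sent, acked):
    """Packets the sender may still transmit before it must wait for an
    acknowledgment, under simplified sliding-window accounting."""
    in_flight = sent - acked  # packets transmitted but not yet acknowledged
    return window_size - in_flight

# With a window size of 8 and nothing outstanding, all 8 packets may be sent.
print(send_capacity(8, sent=0, acked=0))  # 8
# After sending 8 packets and receiving 3 acknowledgments, 3 more may be sent.
print(send_capacity(8, sent=8, acked=3))  # 3
```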
B
backup
   LAN-free 14
   operations 12
   performance 12
   throughput 30
BACKUP DB server command 10
BEGINROUTES/ENDROUTES block 47
buffer pool 3, 7
BUFPOOLSIZE server option 3
busses
   multiple PCI 24

C
cached disk storage pools 12
Cached files
   clearing 12
client
   incremental backup 37
   tuning options 25
client commands
   DSMMIGRATE 36
client options 35
   command line only
      IFNEWER 34
      INCRBYDATE 34
   COMMMETHOD SHAREDMEM 18, 35
   COMMRESTARTDURATION 27
   COMMRESTARTINTERVAL 27
   COMPRESSALWAYS 25, 26
   COMPRESSION 25
   DISKBUFFSIZE 28
   ENABLELANFREE 28
   PROCESSORUTILIZATION 28
   QUIET 27
   RESOURCEUTILIZATION 28, 30, 36
   TAPEPROMPT 32

D
database
   mirroring 24
   performance 10
DEFINE COPYGROUP server command 12
DEFINE DEVCLASS server command 21
DEFINE LOGVOLUME command 6
DEFINE STGPOOL server command 12, 13, 22
DEFINE VOLUME server command 10
device drivers 39
direct I/O
   AIX 16
   Sun Solaris 16
disaster recovery 12
disk
   performance considerations 24
   write cache 24
DISKBUFFSIZE client option 28
dsm.opt file 25, 37
dsm.sys file 25, 37
DSMMIGRATE client command 36

E
education
   see Tivoli technical training viii
ENABLELANFREE client option 28
Ethernet adapters 39
EXPINTERVAL server option 4
export 12
EXTEND DB server command 16
EXTEND LOG command 6
EXTEND LOG server command 16

H
Hierarchical Storage Manager migration 36

I
IBM LTO Ultrium tape drives
   streaming rate 23
   transfer rate 23
IBM Software Support
   submitting a problem xi
IBM Support Assistant ix
import 12
INCLUDE/EXCLUDE lists 36
incremental backup 37
Internet, search for problem resolution viii
Internet, searching for problem resolution ix
inventory expiration 52
ioo command 15

J
Journal File System 15, 17
journal-based backup
   Windows 35

K
knowledge bases, searching viii

L
LAN-free backup 14
Linux servers
   performance recommendations 17
LOGPOOLSIZE server option 5

M
Macintosh client
   anti-virus software 35
   Extended Attributes 35
Maximum Segment Size (MSS) 43
Maximum Transmission Unit (MTU) 32, 43
   NetWare 45
MAXNUMMP server option 5, 28, 30
MAXSESSIONS server option 5, 28, 30
migration
   Hierarchical Storage Manager 36
   processes 13
   thresholds 13
mirroring 16
   database 24

N
NetWare
   client cache tuning 45
networks
   dedicated 39
   for backup 39
   protocol tuning 39
   settings
      AIX 42
      Sun Solaris 46
      z/OS 46
   traffic 39
NTFS file compression 18
NTFS file system 18

P
problem determination
   describing problem for IBM Software Support xi
   determining business impact for IBM Software Support x
   submitting a problem to IBM Software xi
PROCESSORUTILIZATION client option 28
PROFILE.TCPIP configuration data set 47
publications
   download v
   order v
   related hardware vii
   related software viii
   search v
   Tivoli Storage Manager v
   z/OS viii

Q
QUERY OCCUPANCY server command 52
QUIET client option 27

R
RAID arrays 10, 24
raw logical volumes 15, 17
   advantages and disadvantages 16
   database 16
   mirroring 16
   recovery log 16
raw partitions 17
recommended values by platform 35
recovery log
   mirroring 24
   performance 10
REGISTER NODE server command 28
RESET BUFPOOL server command 3
RESOURCEUTILIZATION client option 28, 30, 36

S
scheduling
   processes 14
   sessions 14
SELFTUNEFBUFPOOLSIZE server option 7
server activity log
   searching 13
server commands
   BACKUP DB 10
   CONVERT ARCHIVE 52
   DEFINE COPYGROUP 12
   DEFINE DEVCLASS 21
   DEFINE STGPOOL 12, 13, 22
   DEFINE VOLUME 10
   EXTEND DB 16
   EXTEND LOG 16
   QUERY OCCUPANCY 52
   REGISTER NODE 28
   RESET BUFPOOL 3
   SET MAXCMDRETRIES 39
   SET QUERYSCHEDPERIOD 39
   SET RETRYPERIOD 39
   UNDO ARCHCONVERSION 52
   UPDATE ARCHIVE 52
   UPDATE COPYGROUP 12
   UPDATE NODE 28, 30
   UPDATE STGPOOL 13, 22
server options
   BUFPOOLSIZE 3
   COMMMETHOD SHAREDMEM 18, 35
   EXPINTERVAL 4
   LOGPOOLSIZE 5
   MAXNUMMP 5, 28, 30
   MAXSESSIONS 5, 28, 30
   MIRRORWRITE DB 10
   MOVEBATCHSIZE 6, 23
   MOVESIZETHRESH 6, 23
   recommended settings by platform 16
   RESTOREINTERVAL 7
   SELFTUNEFBUFPOOLSIZE 7
   TCPNODELAY 14
   TCPWINDOWSIZE 8, 17, 18
   TXNBYTELIMIT 23
   TXNGROUPMAX 9, 10, 23, 33
server tuning overview 1
SET MAXCMDRETRIES server command 39
SET QUERYSCHEDPERIOD server command 39
SET RETRYPERIOD server command 39
sliding window 33
Software Support
   contacting x
   describing problem for IBM Software Support xi
   determining business impact for IBM Software Support x
Storage Agent 14
storage pool
   backup and restore 12
   migrating files 13
   migration 13
storage pools
   cached disk 12
Sun Solaris
   server and client TCP/IP tuning 46
   server performance recommendations 17
   TCPWINDOWSIZE client option 46

T
tape drives
   cleaning 21
   compression 21
   on a SCSI bus 24
   required number 21
TAPEPROMPT client option 32
TCP communication buffer 32
TCP/IP
   AIX server and client tuning 42
   concepts 40
   connection availability 40
   data transfer block size 40
   error control 41
   flow control 41
   functional groups
      application layer 40
      internetwork layer 40
      network layer 40
      transport layer 40
   HP-UX server and client tuning 17
   Maximum Segment Size (MSS) 43
   Maximum Transmission Unit (MTU) 43
   NetWare
      client cache tuning 45
      Maximum Transmission Unit (MTU) 45
   packet assembly and disassembly 41
   sliding window 33, 41
   Sun Solaris server and client tuning 46
   tuning 40
   window values 40
   z/OS server tuning 47
TCP/IP and z/OS UNIX system services
   performance tuning 49
TCPBUFFSIZE client option 32
TCPIP.DATA 47
TCPNODELAY client option 32
TCPNODELAY server option 14
TCPWINDOWSIZE client option 17, 18, 33, 46
TCPWINDOWSIZE server option 8, 17, 18
thresholds
   migration 13
throughput
   estimating 20
   for average workloads 20
   formula 20
   in tested environments 20
   in untested environments 21
Tivoli technical training viii
training, Tivoli technical viii
transaction size 33
TXNBYTELIMIT client option 10, 23, 33
TXNBYTELIMIT server option 23
TXNGROUPMAX server option 9, 10, 23, 33

U
UFS file system volumes 17
UNDO ARCHCONVERSION server command 52
UNIX file systems
   advantages and disadvantages 16
UPDATE ARCHIVE server command 52
UPDATE COPYGROUP server command 12
UPDATE NODE server command 28, 30
UPDATE STGPOOL server command 13, 22

V
Virtual Memory Manager 15, 42
VIRTUALNODENAME client option 36
vmo command 15
VSAM I/O pages 19
VxFS file system 17

W
Windows
   journal-based backup 35
   performance recommendations 18

Z
z/OS server
   performance recommendations 19
   server TCP/IP tuning 47
Printed in USA
SC32-0141-01