
IBM Systems Technical University & STG Technical Enablement Conference, April 2011, Prague

A practical Introduction to Disk Storage System Performance


Gero Schmidt, ATS System Storage, IBM European Storage Competence Center

© 2011 IBM Corporation


Disclaimers
IBM has not formally reviewed this document. While effort has been made to verify the information, this document may contain errors. IBM makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. IBM assumes no responsibility for any errors that may appear in this document. The information contained in this document is subject to change without notice. IBM reserves the right to make any such changes without obligation to notify any person of such revision or changes. IBM makes no commitment to keep the information contained herein up to date.

Note: This presentation is intended for IBMers and IBM BPs only. As IBM has not formally reviewed this document, it may be presented but should not be handed out to clients (especially not as a ppt version). A pdf version of the presentation without slides 33, 52 to 54 and 79 may be suitable as a hand-out for clients, at one's own responsibility and using one's own best judgement. If you have any suggestions or corrections, please send comments to: gerosch@de.ibm.com


Agenda

Disk Storage System Selection & Specs
Application I/O & Workload Characteristics
Hard Disk Drive (HDD) Basics: It's all mechanical
HDD Performance & Capacity Aspects (SATA vs FC/SAS)
RAID Level Considerations (RAID-5 / RAID-6 / RAID-10)
New Trends & Directions: 2.5" & Solid State Drive (SSD)
Basic Principles for Planning Logical Configurations
Performance Data Collection and Analysis




IBM System Storage Disk Subsystems: Making a Choice

DS3000 - Entry-level
DS5000 - Midrange
Storwize V7000
XIV - Enterprise
DS8000

Selecting a storage subsystem:
- entry-level, midrange or enterprise class
- support for host systems and interfaces
- overall capacity & growth considerations
- overall box performance
- advanced features and copy services
- price, costs / TCO, footprint, etc.
- needs to meet client & application requirements

Subsystem performance:
- overall I/O processing capability
- overall bandwidth
- choosing the right number and type of disk drives



Storage Subsystem Specs: Data Rate (MBps)


- max. throughput may be achieved with a relatively low no. of disk drives
- subsystem architecture: frontend / backend bandwidth capabilities are key
- SATA may be considered for applications requiring throughput

Note: Results as of 6-26-2006. Source of information from Engenio and not confirmed by IBM. Performance results achieved under ideal circumstances in a benchmark test environment. Actual customer results will vary based on configuration and infrastructure components. The number of drives used for MB/s performance does not reflect an optimized test config. The number of drives required could be lower/higher.


Storage Subsystem Specs: I/O Rate (IOps)


- max. IOps performance requires a high no. of fast FC/SAS disk drives
- subsystem architecture: I/O processing capability >> disk drives' IOps capability
- SATA is not a good fit for enterprise class applications requiring transaction performance

Note: Results as of 6-26-2006. Source of information from Engenio and not confirmed by IBM. Performance results achieved under ideal circumstances in a benchmark test environment. Actual customer results will vary based on configuration and infrastructure components. Drives were short-stroked to optimize for IOps performance. Real-life workloads may take more drives to achieve the numbers listed.


Storage Performance Council (SPC) - Benchmarks


The Storage Performance Council (SPC) is a vendor-neutral standards body focused on the storage industry. It has created the first industry-standard performance benchmark targeted at the needs and concerns of the storage industry. From component-level evaluation to the measurement of complete distributed storage systems, SPC benchmarks will provide a rigorous, audited and reliable measure of performance.

http://www.storageperformance.org



Application I/O: An Overview


Avg. access time for an I/O operation:
CPU cycle:  < 0.000001 ms
MEMORY:     < 0.001 ms
DISK (HDD): < 10 ms

Disk access is SLOW compared to CPU and MEMORY

Application I/O performance: Efficient memory usage is key! Access to memory is >10000 times faster than disk access!

[Diagram: the I/O path from application to disk: Application, File Systems, Volume Manager, Device Drivers (software) on the server with its memory; SAN connectivity (FC, iSCSI, IB, SAS, SATA) carrying the SCSI protocol; storage subsystem with cache and disks (hardware).]

Storage subsystem: cache hit < 1 ms; physical HDD ~5...15 ms. Storage I/O performance: proper data placement is key!


Application I/O: On a typical System Time Scale

CPU:    1 ns (1 GHz) = 0.000000001 s
MEMORY: 100 ns       = 0.000000100 s
DISK:   10 ms        = 0.010000000 s



Application I/O: On a human Time Scale

CPU:    1 cycle := 1 second
MEMORY: 1:40 minutes
DISK:   116 days (SLOW)
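The rescaling on this slide can be checked with a few lines of Python (a sketch; the 1 GHz CPU cycle and the 100 ns / 10 ms latencies are the figures from the previous slide):

```python
# Rescale latencies so that one CPU cycle (1 ns at 1 GHz) becomes 1 second.
cpu_cycle = 1e-9                      # seconds per CPU cycle
memory = 100e-9 / cpu_cycle           # memory access on the human scale, in "seconds"
disk = 10e-3 / cpu_cycle / 86400      # disk access on the human scale, in "days"

print(f"MEMORY: {memory:.0f} s, DISK: {disk:.0f} days")  # MEMORY: 100 s, DISK: 116 days
```

A memory access becomes 100 seconds (1:40 minutes) and a single disk I/O becomes roughly 116 days, which is exactly the comparison above.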



Application I/O: Where does it come from?


Transaction Processing
- A single end-user is capable of initiating only a moderate number of transactions with a limited amount of data changes per minute
- Thousands of end-users can already initiate thousands of transactions and generate high I/O rates with only low data rates
- End-users are directly affected by the application response time; people's work time is expensive
- Excellent overall response time of the application is business critical and requires low I/O response times at high I/O rates

Batch Jobs
- A single batch job can already generate a considerable amount of disk I/O operations in terms of I/O rate and data rate
- Multiple batch jobs can create a huge amount of disk activity
- Batch jobs should not interact with end-user transactions and are typically run outside end-user business hours
- Time frames for batch jobs, even during nights / weekends, are limited
- Overall job runtime is critical and mostly dependent on the achieved overall data rate

Application I/O: Workload Characteristics


I/O rate in IO/s (IOps):
- Time to data is critical
- Dependent on number and type of disk drives

Data rate in MB/s (MBps):
- Data transfer rate enables performance
- Dependent on internal controller bandwidth

Transaction processing workloads


- typical for transaction processing workloads with random, small-block I/O requests, e.g. OLTP (online transaction processing), databases, mail servers; the majority of enterprise applications
- avg. I/O response time is most important here (RT < 10 ms is a good initial choice)
- number and speed of disk drives is essential (e.g. 73GB15k FC drives as best choice)
- SATA disk drives not generally recommended; high speed FC/SAS/SCSI disk drives preferred
- balanced system configuration and volume layout is key to utilize all disk spindles

Throughput dependent workloads


- typical for throughput dependent workloads with sequential, large-block I/O requests, e.g. HPC, seismic processing, data mining, streaming video applications, large file access, backup/restore, batch jobs
- avg. I/O response time is less important (high overall throughput required)
- bandwidth requirements (no. of adapters and host ports, link speed) must be met
- not necessarily a high number of disk drives required; SATA disk drives may be a suitable choice
- balanced system configuration and volume layout is important to utilize full system bandwidth

Application I/O: Workload Performance Characteristics


Basic workload performance characteristics:
- I/O rate [IOps] (transactions) or data rate [MBps] (throughput)
- Random access or sequential access workload pattern
- Read:write ratio (percentage of read:write I/O requests, e.g. 70:30)
- Average I/O request size (average I/O transfer size or block size, e.g. 8 kB for Oracle DB, 64 kB or larger for streaming applications, 256 kB for TSM)

Additional workload performance characteristics / objectives:
- Read cache hit ratio (percentage of read cache hits)
- Average response time (RT) requirements (e.g. RT < 10 ms)


Hard Disk Drive (HDD) Basics: It's all mechanical...


- Read / write cache hits are in the range < 1 ms
- Physical disk I/O operations are in the range of > 5 ms because mechanical components such as head movements and spinning disks are involved
- Each hard disk drive (HDD) can only process a limited no. of random I/O operations per second, mainly determined by:
  - Average Seek Time [ms] (head movement to the required track)
  - Rotational Latency [ms] (disk platter spinning until the first sector addressed passes under the r/w heads; avg. time = half a rotation)
  - Transfer Time [ms] (read/write data sectors, 1 sector = 512 Byte)

[Timeline: Start -> Seek Time -> Rotational Latency -> Transfer Time]


Simple IOps Calculation per Hard Disk Drive (HDD)

Avg. Seek Time     = see manufacturer specs (typical: 4-10 ms)
Rotational Latency = (60000 / RPM) / 2 [ms] (avg. = half a rotation; typical: 2-4 ms)
Transfer Time      = 1000 x (no. of sectors x sector size) / avg. Transfer Rate [ms] (typically << 1 ms for small I/O request sizes <= 32 kB)

IOps per HDD ~ 1000 / (Avg. Seek Time + Rotational Latency + Transfer Time)
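As a sketch, the per-drive estimate can be written out in Python (drive figures match the example on the next slide; transfer time is neglected, as it is typically << 1 ms for small requests):

```python
def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    """Rough random-IOps estimate for a single HDD: one random I/O takes
    an average seek plus half a platter rotation (transfer time neglected)."""
    rotational_latency_ms = 60000 / rpm / 2
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(hdd_random_iops(4, 15000)))  # FC 146GB15k     -> 167
print(round(hdd_random_iops(5, 10000)))  # FC 146GB10k     -> 125
print(round(hdd_random_iops(9, 7200)))   # SATA2 500GB7.2k -> 76
```

These values reproduce the IOps column of the example calculation on the following slide.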


Manufacturer Specs for Hard Disk Drives

This is just an example for getting a view on typical disk drive characteristics. The chosen disk types above do not necessarily represent the characteristics of the disk drive modules used in IBM System Storage systems. Source: www.seagate.com (2008)


Example Random IOps Calculation per Hard Disk Drive

Disk Drive       Speed      Rotational Latency  Avg. Seek Time  IOps
FC 146GB15k      15000 rpm  2 ms                4 ms            167
FC 146GB10k      10000 rpm  3 ms                5 ms            125
SATA2 500GB7.2k  7200 rpm   4.2 ms              9 ms            76

Rules of Thumb - Random IOps/HDD (conservative estimate to start with):
FC 15k DDM:     ~160 IOps
FC 10k DDM:     ~120 IOps
SATA2 7.2k DDM: ~75 IOps

A single disk drive is only capable of processing a limited number of I/O operations per second!



Efforts to improve HDD Performance


Efforts to reduce HDD access times (mechanical delays):
- Disk Drive: introduce Command Queuing and re-ordering of I/Os
  - SATA: NCQ (Native Command Queuing)
  - SCSI: TCQ (Tagged Command Queuing)
- Disk Drive Usage: 'short stroking' of HDDs (seek latency optimization)
- Disk Subsystem: subsystem cache (caching / cache hits)

Intelligent Cache Page Replacement & Prefetching Algorithms:
- Standard: LRU (least recently used) / LFU (least frequently used)
- IBM System Storage DS8000 - Advanced Caching Algorithms:
  - 2004 ARC (Adaptive Replacement Cache)
  - 2007 AMP (Adaptive Multi-stream Prefetching)
  - 2009 IWC (Intelligent Write Caching)
IBM Almaden Research Center - Storage Systems Caching Technologies http://www.almaden.ibm.com/storagesystems/projects/arc/technologies/


Increase HDD Performance - Command Queuing

Tagged Command Queuing (TCQ, SCSI-2) and Native Command Queuing (NCQ, SATA2) further improve disk drive random access performance by re-ordering the I/O commands, so that workloads can experience seek times considerably below the nominal seek times. Queue depth: SATA2 (NCQ): 32 in-flight commands; SCSI-3 (TCQ): 2^64 in-flight commands.


Increase HDD Performance - Short Stroking


Short Stroking: an approach to achieve the maximum possible performance from an HDD by limiting the overall head movement and thus minimizing the average seek time.
Implementation:
- Use only a small portion of the overall capacity
- Use tracks on the outer edge with higher data density
Disadvantage:
- Typically a large number of HDDs involved
- Only a small portion of the storage capacity is used
Typical usage: applications with high access densities (IOps/GB) that require high random I/O rates at low response times but with only a comparatively small amount of data.


Short stroking: DS8800 Single Rank Performance

[Chart: DS8800 R6.0 single rank, 100% random read: IOps vs. response time when using 100%, 50%, and 25% of the rank capacity (short stroking).]


Increase HDD Performance - Subsystem Cache

Disk Subsystem Cache:
- Read cache hits
- Write cache hits / write-behind
- Sequential prefetch algorithms

Intelligent cache page replacement & prefetch algorithms:
- What data should be stored in cache, based upon the recent access and frequency needs of the hosts (LRU/LFU)?
- Determine what data in cache can be removed to accommodate newer data.
- Predictive algorithms to anticipate data prior to a host request, loading it into cache.



Sample Random IOps Calculation with reduced Seek Times


Disk Drive       Speed      Rotational Latency  Avg. Seek Time  Reduced Seek Time  IOps (red. seek)
FC 146GB15k      15000 rpm  2 ms                4 ms            4/3 ms             300
FC 146GB10k      10000 rpm  3 ms                5 ms            5/3 ms             214
SATA2 500GB7.2k  7200 rpm   4.2 ms              9 ms            9/3 ms             138

Even with reduced average seek times you cannot expect more than a few hundred random I/O operations per second from a single HDD. So a single HDD can only process a limited number of random IOps with average access times in the typical range of 5...15ms due to the mechanical delays associated with spinning disks (HDDs).
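The reduced-seek figures in the table can be reproduced with the same per-drive formula as before (a sketch; seek/3 here simply models the effect of command re-ordering shown in the table, not a guaranteed gain):

```python
def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    # one random I/O ~ average seek + half a rotation (transfer time neglected)
    return 1000 / (avg_seek_ms + 60000 / rpm / 2)

for name, seek_ms, rpm in [("FC 146GB15k", 4, 15000),
                           ("FC 146GB10k", 5, 10000),
                           ("SATA2 500GB7.2k", 9, 7200)]:
    # nominal seek vs. seek reduced to a third
    print(name, round(hdd_random_iops(seek_ms, rpm)),
          "->", round(hdd_random_iops(seek_ms / 3, rpm)))
```

The FC drives come out at 300 and 214 IOps as in the table; the SATA drive lands within one or two IOps of the table's 138, the small difference coming from rounding the rotational latency to 4.2 ms.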


Storage Disk Subsystem: Typical I/O Rate & Response Time Relation

[Chart: Response Time [ms] (0-30 ms) versus Total I/O Rate [IO/s] (0-11000): response time stays low at moderate I/O rates and rises steeply as the subsystem approaches saturation, where a +/-10% change in I/O rate already causes a large change in response time.]



Subsystem Sizing: Meeting Performance and Capacity Requirements

Capacity:


- number of disk drives to meet capacity requirements
- only a low no. of large-capacity disks required to meet capacity needs

Performance:
- number and speed of disk drives (spindles) to meet IOps requirements
- high no. of fast, low-capacity drives required to meet performance needs

Cost:
[Diagram: performance (IOps) scales with the number of drives, capacity (GB) scales with the drive capacity; many small, fast drives cost more than a few large-capacity drives.]

146GB15k drives are an excellent trade-off between performance and capacity needs.


Average Access Density over recent Years


Source: IBM data, other consultants
Access Density = IOps / GB [IOps/GB]

[Chart: access density over recent years, around 0.7 IOps/GB in 2005 and declining; hot data has a high access density, cold data a low one.]

Access Density is a measure of I/O throughput per unit of usable storage capacity (backstore). The primary use of access density is to identify a range on a response time curves to give the typical response time expected by the average customer, based on the amount of total usable storage in their environment. The average industry value for access density in the year 2005 is thought to be approximately 0.7 I/Os per second per GB. Year-to-year industry data is incomplete, but the value has been decreasing as companies acquire usable storage faster than they access it.


Subsystem Sizing: Meeting Performance and Capacity Requirements

Application requirements: Capacity 1000 GB; Performance 1000 IOps (1.0 IOps/GB)

Option                                  IOps       Capacity      Power    Access Density
7x 146GB15k FC (160 IOps/HDD; 15 W)     1120 IOps  1022 GB       105 W    1.1 IOps/GB
1x 1TB 7.2k SATA (75 IOps/HDD; 9.8 W)   75 IOps    1000 GB       9.8 W    0.075 IOps/GB
14x 1TB 7.2k SATA (75 IOps/HDD; 9.8 W)  1050 IOps  14000 GB (!)  137.2 W  0.075 IOps/GB
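A minimal sizing sketch in Python (the `drives_needed` helper is illustrative, not from the slides): the drive count must satisfy both the capacity requirement and the IOps requirement, whichever demands more drives.

```python
import math

def drives_needed(req_gb, req_iops, drive_gb, drive_iops):
    # take the larger of the capacity-driven and the performance-driven counts
    return max(math.ceil(req_gb / drive_gb), math.ceil(req_iops / drive_iops))

# Application from the slide: 1000 GB and 1000 IOps (1.0 IOps/GB)
print(drives_needed(1000, 1000, 146, 160))  # 146GB15k FC   -> 7
print(drives_needed(1000, 1000, 1000, 75))  # 1TB 7.2k SATA -> 14
```

For the FC drive both constraints land at 7 drives; for the SATA drive a single disk covers the capacity, but 14 are needed to reach 1000 IOps, reproducing the comparison above.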


DS4300: SATA vs. FC - Read IOps (RAID5)

[Chart: DS4300 read IOps (RAID5) for 1 drawer of FC disks, 2 drawers of FC disks, and SATA-1: IOps performance increases with the no. of disks.]


SATA vs FC - HDD Performance Positioning


Fibre Channel (FC) / Serial Attached SCSI (SAS) disk drives
- Offer the highest enterprise-class performance, reliability, and availability for business-critical applications requiring high I/O transaction performance (high access densities)

Serial Advanced Technology Attachment (SATA) disk drives
- Price-attractive alternative to the enterprise-class FC drives for near-line applications, with lower production costs and larger capacities but also lower specifications (e.g. rotational speeds, data rates, seek times)

SATA vs. FC Drive Positioning & Considerations
- Sequential workloads: SATA drives perform quite well, with only about a 20% reduction in throughput compared to FC drives.
- Random workloads: SATA drive transaction performance is considerably below FC drives (SATA 7.2k delivers around 45% of FC 15k), and their use in environments with critical online transaction workloads and lowest response times is not generally recommended!
- SATA drives typically are very well suited for various fixed content, data archival, reference data, and near-line applications that require large amounts of data at low cost, e.g. bandwidth / streaming applications, audio/video streaming, surveillance data, seismic data, medical imaging or secondary storage. They can also be a reasonable choice for business-critical applications in selected environments with less critical IOps performance requirements (e.g. low access densities).



RAID Level Comparison - RAID5 vs RAID10


RAID5
- cost-effective with regard to performance and usable capacity (87.5% usable capacity for 7+P)
- provides fault tolerance for one disk drive failure
- data is striped across all drives in the array, with the parity distributed across all the drives
- A single random small-block write operation typically causes a RAID5 write penalty, initiating four I/O operations to the disk back-end: reading the old data and the old parity block before finally writing the new data and the new parity block. (This is a worst-case scenario; it may take fewer operations when writing partial or even full stripes, dependent on the I/Os in cache.)
- On modern disk systems write operations are generally cached by the storage subsystem and handled asynchronously, so RAID5 write penalties are generally shielded from the users in terms of disk response time. However, with steady and heavy random write workloads, the cache destages to the back-end may still become a limiting factor, so that either more disks or a RAID10 configuration might be required to provide sufficient disk back-end write performance.

RAID10
- best choice for fault-tolerant, write-sensitive environments, at the cost of 50% usable capacity
- can tolerate at least one, and in most cases even multiple, disk failures
- data is striped across several disks and the first set of disk drives is mirrored to an identical set
- each write operation initiates two write operations at the disk back-end


RAID5: Writing a single data block


RAID5 - Read-Modify-Write: the RAID5 write penalty.
Worst-case scenario: one write operation requires four disk operations on the RAID5 (7+P) array:
(1) read old data
(2) read old parity
[MODIFY: perform the XOR calculation in cache]
(3) write new data
(4) write new parity



RAID5: Writing a full stripe


RAID5 - Full Stripe Write:
Especially with large I/O transfer sizes or sequential workloads, full stripe writes can be accomplished with RAID5, where the parity can be calculated on the fly without the need to read any old data from the RAID5 (7+P) array prior to the write operation.



RAID5 vs RAID10: Backend I/O rate calculation example


Example for a typical 70:30:50 random, small-block application workload (read:write ratio = 70:30; read cache hit ratio = 50%):
Sustained front-end I/O rate: 1000 IOps. Sustained back-end I/O rate: 1550 IOps (RAID5) vs. 950 IOps (RAID10).

RAID5: 1000 logical random IOps
- 700 reads x 50% cache hits = 350 back-end reads
- 300 writes x 4 (write penalty: read old data/parity, write new data/parity) = 1200 back-end reads & writes
- a total of 1550 physical IOps on the disks at the physical back-end

RAID10: 1000 logical random IOps
- 700 reads x 50% cache hits = 350 back-end reads
- 300 writes x 2 (two mirrored writes) = 600 back-end writes
- a total of 950 physical IOps on the disks at the physical back-end

RAID10 already outperforms RAID5 in a typical 70:30:50 workload.
Consider using RAID10 if the random write percentage is higher than 35%!
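The arithmetic above generalizes to any read:write mix and cache hit ratio; a small sketch (the RAID6 write penalty of 6 is taken from the RAID6 comparison later in the deck):

```python
def backend_iops(front_iops, read_pct, read_hit_pct, write_penalty):
    # only read cache misses and penalized writes reach the disk back-end
    reads = front_iops * read_pct / 100 * (100 - read_hit_pct) / 100
    writes = front_iops * (100 - read_pct) / 100 * write_penalty
    return reads + writes

print(backend_iops(1000, 70, 50, 4))  # RAID5  -> 1550.0
print(backend_iops(1000, 70, 50, 2))  # RAID10 -> 950.0
print(backend_iops(1000, 70, 50, 6))  # RAID6  -> 2150.0
```

The same 1000 front-end IOps turn into very different back-end loads depending on the RAID level's write penalty.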


RAID5 vs RAID10: Performance summary


RAID level  Random Read  Random Write  Sequential Read  Sequential Write  Capacity (8 DDMs)
RAID5       +            o             +                +                 87.5%
RAID10      +            +             +                o                 50.0%

RAID5 vs RAID10 - Performance
- RAID5 and RAID10 basically deliver a comparable performance for read operations.
- RAID5 tends to perform better than RAID10 for large-block sequential writes.
- RAID10 always performs better than RAID5 for small-block random writes.

RAID5 vs RAID10 - Selection
- RAID5 is a good choice for most environments requiring high availability and fewer writes than reads (e.g. multi-user environments with transaction database applications and a high read activity).
- RAID10 should be considered for fault-tolerant and performance-critical, write-sensitive transaction processing environments with a high random write percentage above 35%.


RAID6 - Overview
RAID6: dual-parity RAID
- DS8000: 5+P+Q+S or 6+P+Q arrays (using a modified EVENODD code)
- Survives 2 erasures: 2 drive failures, or 1 drive failure plus a medium error, such as during rebuild (especially with large-capacity drives)
- Like RAID5, parity is distributed in stripes, with the parity blocks in a different place in each stripe
- RAID6 has a higher performance penalty on write operations than RAID5 due to the additional parity calculations.

RAID Level Comparison:

RAID Level     Reliability (#Erasures)  Space efficiency  Write penalty (disk ops)
RAID-5, 7+P    1                        87.5%             4
RAID-10, 4+4   at least 1               50%               2
RAID-6, 6+P+Q  2                        75%               6


DS8000 - Single Rank RAID Performance (1/2)

[Chart: DS8000 R4.0 single rank RAID performance (no IWC), full stroke.]


DS8000 - Single Rank RAID Performance (2/2)

[Chart: DS8000 R4.0 single rank (no IWC), full stroke: RAID10, RAID5, and RAID6 curves show decreasing random write performance with increasing write penalty.]



New Trends: Small-form-factor (2.5") SAS disk drives


High-density small-form-factor (2.5") SAS HDDs are replacing LFF (3.5") FC/SAS HDDs, providing a high level of performance and energy efficiency on a smaller footprint:
- allowing higher packing densities with more disks in the same footprint
- consuming considerably less power than 3.5" 15k drives
- increasing system-level performance up to 115 percent over same-speed drives
- offering both high transactional performance and low power consumption, thus improving the IOPS/W ratio considerably over comparable 3.5" 15k drives

Example: DS8800 (2.5" 146GB 15k SAS HDDs) vs. DS8700 (3.5" 146GB 15k HDDs)
Fully configured with a base frame and two expansion frames, the new DS8800 can reduce floor space requirements by 40% and energy requirements by over 35%, all while supporting more drives than a five-frame DS8700 model (e.g. 1056x 2.5" disks in 3 frames in the DS8800 vs. 1024x 3.5" DDMs in 5 frames in the DS8700). The small-form-factor drives offer better performance at the same rotational speeds, as well as better energy usage per drive, at a lower cost per gigabyte than the large-form-factor enterprise Fibre Channel drives available on most high-end systems today.

Estimated storage enclosure power (takes into account controller card power, power efficiencies, power for cooling, and power for disks):

                        DS8700 (16x 3.5" HDDs)   DS8800 (24x 2.5" HDDs)
  Power per Enclosure   310 W                    245 W
  Power per Disk        19.4 W                   10.2 W
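The per-disk figures follow directly from the enclosure power divided by the number of disks per enclosure, as this small check shows:

```python
# Per-disk power from the enclosure figures in the table above
# (enclosure watts include controller cards, cooling, and disks).
enclosures = {
    "DS8700 (16x 3.5in HDDs)": (310.0, 16),
    "DS8800 (24x 2.5in HDDs)": (245.0, 24),
}

def watts_per_disk(total_watts, disks):
    return round(total_watts / disks, 1)

for name, (watts, disks) in enclosures.items():
    print(name, watts_per_disk(watts, disks), "W per disk")
```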

New Trends - Small-form-factor (2.5") SAS Disk Drive Specs


[Spec table: 2.5" 146GB 15k vs. 3.5" 146GB 15k disk drives]

This is just an example for getting a view on typical disk drive characteristics. The chosen disk types above do not necessarily represent the characteristics of the disk drive modules used in IBM System Storage systems. Source: www.seagate.com (2010)

DS8800 - Single Rank RAID5 performance: 2.5" vs. 3.5"

[Chart: single-rank RAID6 / RAID5 / RAID10 performance, 2.5" vs. 3.5" HDDs; DS8700 R5.0 / DS8800 R6.0, full stroke]


Processing Capabilities and Disk Performance over 50 years


[Chart: processor capabilities vs. disk performance over time, from 0.1 MHz to 4 GHz operations per second, showing a widening performance gap]

1956 IBM RAMAC (1st disk drive): 5 MB storage, 1200 RPM, data transfer rate 8800 characters per second
2010 Enterprise FC Hard Disk Drive (HDD): 600 GB storage capacity, 15000 RPM, data transfer rate 122 to 204 MB/s

Last 50 years of HDD technology: HDD RPM: 12.5x, HDD capacity: 120,000x
New: SSD drives (STEC-inc)
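The two growth factors quoted above come straight from the 1956 and 2010 drive specs:

```python
# Quick check of the growth factors stated above for ~50 years of HDDs:
# 1956 RAMAC (5 MB, 1200 RPM) vs. 2010 enterprise FC HDD (600 GB, 15000 RPM).
ramac_rpm, fc_rpm = 1200, 15000
ramac_mb, fc_mb = 5, 600_000  # 600 GB expressed in MB

rpm_factor = fc_rpm / ramac_rpm       # rotational speed growth: 12.5x
capacity_factor = fc_mb / ramac_mb    # capacity growth: 120,000x
print(rpm_factor, capacity_factor)
```

Capacity grew by five orders of magnitude while rotational speed (and hence random access performance) grew barely one order, which is the performance gap the chart illustrates.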


New Trends & Directions - Solid State Drives (SSD)


What are solid-state drives?
- Semiconductor (NAND flash, non-volatile)
- No mechanical read/write interface, no rotating parts: i.e. no seek time or rotational delays
- Electronically erasable medium, random access storage
- Capable of driving tens of thousands of IOps with response times less than 1 ms
- Absence of mechanical moving parts makes SSDs significantly more reliable than HDDs
- Wear issues are overcome through over-provisioning and intelligent controller algorithms (wear-levelling)

Application benefits
- Increased performance for transactional applications with high random I/O rates (IOps): online banking / ATM / currency trading, point-of-sale transactions / processing, real-time data mining
- Solid state disks in the DS8000 offer a new higher-performance option for enterprise applications
- Best suited for cache-unfriendly data with high access densities (IOps/GB) requiring low response times
- Additional benefit of lower energy consumption, cooling and space requirements (data center footprint)

Solid State Drives (SSD) - DS8300 R4.2 Single Rank Performance


[Charts: single RAID5 rank, random and sequential I/O, SSDs vs. HDDs]

Random I/O: SSDs >> HDDs, with exceptionally low response times; note the RAID5 write penalty (1:4 backend ops). Sequential I/O: SSDs ~ HDDs.

Source: IBM Whitepaper, IBM System Storage DS8000 with SSDs - An In-Depth Look at SSD Performance in the DS8000, http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101466

Solid State Drives (SSD) - DS8700 R5.1 Single Rank Performance


[Charts: 4 kB random I/O and 64 kB sequential I/O with the new 750GX Device Adapter]

Note: A single rank is only serviced by a single Device Adapter (DA) of a DA pair and managed by either CEC#0 (rank group 0) or CEC#1 (rank group 1) after assignment to an extent pool. Two or more ranks are required to be able to utilize the full I/O bandwidth of a DA pair, by assigning half of the ranks of each DA pair to even extent pools (P0, P2, P4, ..., managed by CEC#0 / rank group 0) and half of the ranks to odd extent pools (P1, P3, P5, ..., managed by CEC#1 / rank group 1).
Source: IBM Whitepaper, IBM System Storage DS8700 Performance Whitepaper, ftp://public.dhe.ibm.com/common/ssi/sa/wh/n/tsw03053usen/TSW03053USEN.PDF


Solid State Drives (SSD) - DS8700 Single Rank Performance

[Chart: single-rank SSD vs. HDD performance, IOps (12x) and response time (RT); DS8700 R5.1, full stroke]


Solid State Drives (SSD) - DS8700 Single Rank Performance

[Chart: single-rank SSD vs. HDD performance, IOps (>8x) and response time (RT); DS8700 R5.1, full stroke]


Solid State Drive (SSD) Latency and IOps vs I/O block size

Adding writes and/or increasing transfer size reduces SSD throughput and increases latency substantially
Source: Session sDS10, Storage Performance Made Easy with Easy Tier and SSDs, IBM STG Technical Conference, Lyon, 2010


Application I/O On a human Time Scale (SSDs vs. HDDs)

[Diagram: I/O latencies scaled so that 1 CPU cycle := 1 second]

CPU: 1 cycle := 1 second
Memory: 1:40 minutes
SSD: 11 days (>10x faster than HDD, <1 ms; >100x more IOps than HDD)
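The scaling trick above is easy to reproduce. The raw latencies below are illustrative assumptions (a 4 GHz CPU, ~25 ns memory, ~0.2 ms SSD, ~6 ms HDD), not figures taken from the slide, so the scaled results land in the same ballpark rather than matching exactly:

```python
# Scale latencies so one CPU cycle of a 4 GHz processor lasts one second.
# The raw latencies are illustrative assumptions, not slide data.
CYCLE_S = 1 / 4e9  # one cycle of a 4 GHz CPU, in real seconds

def human_seconds(latency_s):
    """Latency in 'human seconds' where 1 CPU cycle := 1 second."""
    return latency_s / CYCLE_S

print(human_seconds(25e-9))            # memory: 100 s (~1:40 minutes)
print(human_seconds(0.2e-3) / 86400)   # SSD: on the order of days
print(human_seconds(6e-3) / 86400)     # HDD: on the order of months
```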


Solid State Drive (SSD) - Tiered Storage Concepts


Tier 0 - Solid State Drives (SSD): highest performance and cost/GB (hot data)
Tier 1 - 15k RPM HDDs (FC/SAS): high performance, lower cost/GB
Tier 2 - 7200 RPM HDDs (SATA): lowest performance and cost/GB (cold data)

Solid State Drive technology remains more expensive than traditional spinning disks, so the two technologies will coexist in hybrid configurations for several years. Tiered storage is an approach of utilizing different types of storage throughout the storage infrastructure. Using the right mix of tier 0, 1, and 2 drives will provide optimal performance at the minimum cost, power, cooling and space usage.

Data placement is key! To maximize the benefit of SSDs it is important to analyze application workloads and only place data which requires high access densities (IOps/GB) and low response times on them.
SSD whitepapers:
IBM System Storage DS8000 with SSDs - An In-Depth Look at SSD Performance in the DS8000: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101466
Driving Business Value on Power Systems with Solid State Drives: ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03025usen/POW03025USEN.PDF

IBM DS8700 R5.1 Solid-State Storage Optimization with Easy Tier


Solid-state drives (SSDs) offer significantly improved performance compared to mechanical disk drives, but it takes more than just supporting SSDs in a disk subsystem for clients to achieve the full benefit.

Task: Optimizing data placement across tiers of drives with different price and performance attributes can help clients operate at peak price/performance. Implementing this type of optimization is a three-step process:
(1) Data performance information must be collected.
(2) Information must be analyzed to determine optimal data placement.
(3) Data must be relocated to the optimal tier.

Solution: With DS8700 R5.1 IBM introduced IBM System Storage Easy Tier, which automates data placement throughout the DS8700 disk pool (including multiple drive tiers) to intelligently align the system with current workload requirements. This includes the ability for the system to automatically and nondisruptively relocate sub-volume data (at the extent level) across drive tiers, and the ability to manually relocate full volumes or merge extent pools. Easy Tier enables smart data placement and optimizes SSD deployments with minimal costs. The additional Storage Tier Advisor Tool provides guidance for SSD capacity planning based on existing client workloads on the DS8700.
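The three-step cycle can be sketched conceptually. This is an illustration of the collect/analyze/relocate idea only, not the DS8700's actual implementation; all names and numbers are made up:

```python
# Conceptual sketch of the three-step cycle described above:
# (1) collect per-extent heat, (2) analyze/rank the extents,
# (3) decide which extents to relocate to the SSD tier.
# Illustrative only -- not the DS8700 Easy Tier implementation.
def plan_relocation(extent_heat, ssd_extents):
    """Return the set of hottest extent IDs that fit on the SSD tier."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)  # step (2)
    return set(ranked[:ssd_extents])                                 # step (3)

heat = {"e1": 900, "e2": 15, "e3": 340, "e4": 2, "e5": 710}  # step (1): IOps per extent
print(plan_relocation(heat, 2))  # -> {'e1', 'e5'}
```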
IBM System Storage DS8700 R5.1 Announcement Letter (Easy Tier) http://www.ibm.com/common/ssi/rep_ca/5/877/ENUSZG10-0125/ENUSZG10-0125.PDF IBM Redpaper: IBM System Storage DS8700 Easy Tier http://www.redbooks.ibm.com/abstracts/redp4667.html?Open

Easy Tier optimizes SSD deployments by balancing performance AND cost requirements
Easy Tier delivers the full promise of SSD performance while balancing the costs associated with over-provisioning this expensive resource

[Diagram: IBM Easy Tier LUN heatmap, from "slower, inexpensive" through "just right" to "fast, expensive"]


Smart data placement with Easy Tier: SPC-1 (SATA/SSD)


First ever Storage Performance Council (SPC-1) benchmark submission with SATA and SSD technology: an increase of over 3x in IOPS with Easy Tier.

[Chart: SPC-1 throughput (IO/s) over an 18-hour run, showing over 3x IOPS improvement as Easy Tier migrates data]

System configuration: 16x SSD + 96x 1TB SATA
Source: Storage Performance Council, April 2010: http://www.storageperformance.org/results/benchmark_results_spc1#a00092
IBM Whitepaper, May 2010: IBM System Storage DS8700 Performance with Easy Tier, http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101675


Smart data placement with Easy Tier: SPC-1 (SATA/SSD)


SSD + SATA + Easy Tier configuration vs. FC 15K HDD configuration (same capacity), DS8700 R5.1:
- 96 SATA HDD RAID10 plus 16 SSD RAID5 (single frame), with Easy Tier
- 96 SATA HDD RAID10 (no SSD)
- 192 FC HDD RAID5 (dual frames)

[Chart: response time (ms, 0 to 15) vs. throughput (IO/s, 0 to 60000); the SSD + SATA + Easy Tier configuration improves response time in the range of ordinary use]

Smart data placement with Easy Tier: SPC-1 Backend I/O Migration

[Chart: % capacity migrated (0 to 6) vs. % backend I/O migrated (0 to 90)]

With approximately 4.9% of the SPC-1 data being migrated to SSDs, about 76% of all backend I/Os were moved to SSDs!


Sizing for Easy Tier Skew Level


Disk Magic provides 3 predefined skew levels (heavily skewed, medium skewed, lightly skewed) to predict the amount of the I/O workload that can be serviced by Solid State Drives (SSDs).

[Chart: skew level curves from no skew (homogeneous access density, skew value 1.0) through 2.0, 3.5 and 7.0, showing the fraction of IOps captured by 20% capacity on SSD]

  Skew     Skew Value   Capacity on SSD   IOps on SSD
  Heavy    7.0          20%               80%
  Medium   3.5          20%               55%
  Light    2.0          20%               37%
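The table's values are consistent with a simple concentration curve of the form I(c) = 1 - (1 - c)^s, the fraction of IOps captured when the hottest fraction c of the capacity sits on SSD. This functional form is an illustrative assumption for reproducing the table, not Disk Magic's documented internal model; note that s = 1.0 gives the "no skew" diagonal (homogeneous access density):

```python
# Illustrative skew curve: fraction of IOps captured when the hottest
# fraction c of capacity is placed on SSD, for skew value s.
# Assumed functional form -- it reproduces the table above within rounding,
# but is not necessarily Disk Magic's internal model.
def iops_fraction(capacity_fraction, skew):
    return 1 - (1 - capacity_fraction) ** skew

for skew in (7.0, 3.5, 2.0, 1.0):
    print(f"skew {skew}: {iops_fraction(0.20, skew):.0%} of IOps on 20% of capacity")
```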


Disk Magic model with Easy Tier


Agenda

Disk Storage System Selection & Specs
Application I/O & Workload Characteristics
Hard Disk Drive (HDD) Basics - It's all mechanical
HDD Performance & Capacity Aspects (SATA vs FC/SAS)
RAID Level Considerations (RAID-5 / RAID-6 / RAID-10)
New Trends & Directions: Solid State Drive (SSD)
Basic Principles for Planning Logical Configurations
Performance Data Collection and Analysis


Logical Configuration - Basic Principles


Three major principles for the logical configuration to optimize storage subsystem performance:
(1) Workload isolation
(2) Workload resource-sharing
(3) Workload spreading

Workload isolation (e.g. on extent pool and array level)
- dedicate a subset of hardware resources to a high-priority workload in order to reduce impacts of less important workloads (protect the loved ones) and meet given service level agreements (SLAs)
- limit low-priority workloads which tend to fully utilize given resources to only a subset of hardware resources, in order to avoid impacting other, more important workloads (isolate the badly behaving ones)
- provides guaranteed availability of the dedicated hardware resources, but also limits the isolated workload to only a subset of the total subsystem resources and overall subsystem performance

Workload resource-sharing
- multiple workloads share a common set of subsystem hardware resources, such as arrays, adapters, ports
- single workloads now can utilize more subsystem resources and experience higher performance than with only a smaller subset of dedicated resources, if the workloads do not show contention with each other
- good approach when workload information is not available, with workloads that do not try to consume all the hardware resources available, or with workloads that show workload peaks at different times

Workload spreading
- most important principle of performance optimization; applies to both isolated workloads and resource-sharing workloads
- simply means using all available resources of the storage subsystem in a balanced manner, by spreading the workload evenly across all available resources that are dedicated to that workload, e.g. arrays, controllers, disk adapters, host adapters, host ports
- host-level striping and multi-pathing software may further help to spread workloads evenly

Logical Configuration DS8100/DS8300/DS8700 Examples

[Diagram: Extent Allocation Method - Rotate Volumes]

Logical Configuration DS8100/DS8300/DS8700 Examples

[Diagram: Extent Allocation Method - Rotate Extents (Storage Pool Striping)]

Logical Configuration - DS8800 new high-density enclosures


DS8700 Megapack: 3.5" (LFF), Fibre Channel, 2 Gbps FC, supports 16 disks per enclosure
DS8800 Gigapack: 2.5" (SFF), SAS, 6 Gbps SAS to disks, supports 24 disks per enclosure


Agenda

Disk Storage System Selection & Specs
Application I/O & Workload Characteristics
Hard Disk Drive (HDD) Basics - It's all mechanical
HDD Performance & Capacity Aspects (SATA vs FC/SAS)
RAID Level Considerations (RAID-5 / RAID-6 / RAID-10)
New Trends & Directions: Solid State Drive (SSD)
Basic Principles for Planning Logical Configurations
Performance Data Collection and Analysis


Analyzing Disk Subsystem I/O Performance


Questions to ask when performance problems occur:
- What exactly is considered to perform poorly? Which application, server, volumes?
- Is there a detailed description of the performance problem and environment available?
- What is the actual business impact of the performance problem?
- What was the first occurrence of the problem, and were there any changes in the environment?
- When does the problem typically occur, e.g. during daily business hours or nightly batch runs?
- What facts indicate that the performance problem is related to the storage subsystem?
- What would be the criteria for the problem to be considered as solved? Any expectations?

Data to collect and analyze:
- description & config of the architecture (application - server - SAN - storage)
- application characteristics, logical and physical volume layout (usage, mapping server/storage)
- I/O performance data collection during problem occurrence on server and storage subsystem:

(a) Server performance data collection:
    AIX:     # iostat [-sT|-sTD] [interval] [no. of intervals]
             # filemon -o fmon.log -O lv,pv; sleep 60; trcstop
    Linux:   # iostat -x [interval] [no. of intervals]
    Windows: # perfmon GUI, then select Physical Disk counters

(b) Storage subsystem performance data collection: DS3k/DS4k/DS5k (SMcli), XIV (XCLI), DS6k/DS8k and other (TPC for Disk)

DS3000/4000/5000 Performance Monitor

- only counters for quantity of processed I/Os up to the current point in time
- no counters for quality of processed I/Os such as, for example, I/O service times
- additional host system performance statistics required for I/O response times

DS3000/4000/5000 Performance Metrics


DS3000/DS4000/DS5000 performance metrics:
- Total IOs (total number of processed I/Os since start of data collection)
- Read Percentage (read percentage of all processed I/Os since start of data collection)
- Cache Hit Percentage (read cache hit percentage of all processed read I/Os)
- Current kB/second (average data rate in binary kB/s for the current measurement interval)
- Maximum kB/second (maximum data rate in binary kB/s since start)
- Current IO/second (average I/O rate for the current measurement interval)
- Maximum IO/second (maximum I/O rate since start)

Please note: The Read Percentage and the (read) Cache Hit Percentage provided by these native DS3000/DS4000/DS5000 performance statistics refer to the total number of I/Os (Total IOs) which have been processed during the whole measurement so far (i.e. from the start of the performance data collection up to the current measurement interval). They do not solely refer to the current measurement interval. The read percentage and read cache hit percentage for the I/O rate of the current measurement interval can be derived from these values. However, due to the limited decimals for these percentages, the calculation will lack accuracy with a growing number of Total IOs. If the change of Total IOs during a measurement interval becomes less than 0.1%, it is impossible to correctly calculate the read and read cache hit percentage for this interval anymore.
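The derivation described in the note can be sketched as follows; the two samples are hypothetical values, not from a real capture:

```python
# Deriving per-interval values from the cumulative counters described above:
# Total IOs and Read Percentage cover the whole collection so far, so the
# interval figures come from the delta between two consecutive samples.
def interval_stats(prev_total, prev_read_pct, cur_total, cur_read_pct):
    """Return (interval IOs, interval read %) from two cumulative samples."""
    ios = cur_total - prev_total
    reads = cur_total * cur_read_pct / 100.0 - prev_total * prev_read_pct / 100.0
    return ios, (100.0 * reads / ios) if ios else 0.0

# Hypothetical samples: 100000 IOs at 60.0% read, then 130000 IOs at 55.0%
print(interval_stats(100000, 60.0, 130000, 55.0))
```

With the limited decimals of the reported percentages, the subtraction above loses accuracy as Total IOs grows, exactly as the note warns.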

DS3000/4000/5000 Performance Data Collection


SMcli script for continuous performance data collection over a given time frame:

perfmon.scr:
  on error stop;
  set performanceMonitor interval=60 iterations=1440;
  upload storageSubsystem file="c:\perf01.txt" content=performanceStats;

>smcli [IP-Addr. Ctr.A] [IP-Addr. Ctr.B] -f perfmon.scr
Performing syntax check...
Syntax check complete.
Executing script...
Script execution complete.
SMcli completed successfully.

Always collect the performance statistics together with the latest subsystem profile, to document the actual subsystem configuration used during data collection.

DS3000/4000/5000 Performance Data Collection Example


Example of a performance statistics file collected on a DS4000 with firmware < v7.xx:

Performance Monitor Statistics for Storage Subsystem: 174290U-13F1217-AS_DC1
Date/Time: 4/13/05 7:20:12 AM - Polling interval in seconds: 60
Devices,Total,Read,Cache Hit,Current,Maximum,Current,Maximum
,IOs,Percentage,Percentage,KB/second,KB/second,IO/second,IO/second
Capture Iteration: 1
Date/Time: 4/13/05 7:20:13 AM
CONTROLLER IN SLOT A,593368.0,15.4,20.2,1516.6,1516.6,164.8,164.8,
Logical Drive AIX01_09,38.0,28.9,54.5,0.1,0.1,0.0,0.0,
Logical Drive AIX01_15,119.0,61.3,75.3,2.2,2.2,0.0,0.0,
Logical Drive AIX02_08,59.0,27.1,37.5,0.1,0.1,0.0,0.0,
[...]
CONTROLLER IN SLOT B,2347017.0,59.4,34.5,16469.9,16469.9,651.8,651.8,
Logical Drive AIX01_08,107.0,63.6,80.9,2.1,2.1,0.0,0.0,
Logical Drive AIX01_10,112.0,67.0,73.3,2.2,2.2,0.0,0.0,
Logical Drive AIX01_14,109.0,73.4,75.0,2.2,2.2,0.0,0.0,
[...]
STORAGE SUBSYSTEM TOTALS,2940385.0,50.5,33.6,17986.5,17986.5,816.5,816.5,
[...]

For more information about how to collect and process these DS4000 performance statistics please see: How to collect performance statistics on IBM DS3000 and DS4000 subsystems (on IBM Techdocs)
IBMers: http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103963
IBM BPs: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD103963

DS3000/4000/5000 Performance Data Collection Example


Example of a performance statistics file collected on a DS4000 with v7.xx firmware:

"Performance Monitor Statistics for Storage Subsystem: DS4700_PFE1 Date/Time: 12.02.08 10:29:13 - Polling interval in seconds: 20"
"Storage Subsystems ","Total IOs ","Read Percentage ","Cache Hit Percentage ","Current KB/second ","Maximum KB/second ","Current IO/second ","Maximum IO/second"
"Capture Iteration: 1","","","","","","",""
"Date/Time: 12.02.08 10:29:14","","","","","","",""
"CONTROLLER IN SLOT A","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
"Logical Drive Data_1","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
"Logical Drive Data_3","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
[...]
"CONTROLLER IN SLOT B","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
"Logical Drive Data_2","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
"Logical Drive Data_4","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
[...]
"STORAGE SUBSYSTEM TOTALS","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
[...]

(same format as DS3000/DS5000 performance statistics)


For more information about how to collect and process these DS4000 performance statistics please see: How to collect performance statistics on IBM DS3000 and DS4000 subsystems (on IBM Techdocs) IBMers http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103963 IBM BPs http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD103963

DS3000/4000/5000 Performance Data Analysis


- Subsystem total IOps / MBps (average / peak)
- Controller A and B total IOps / MBps
- Identify busiest volumes
- Identify busiest arrays
- Verify if array/volume configuration, RAID level and disk type are appropriate for the workload
- Verify if the workload distribution is balanced across all arrays and both controllers
- Evaluate response times with appropriate Disk Magic models
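Identifying the busiest volumes from a capture can be automated. A sketch against the file format shown in the examples above; the column positions are assumptions based on those examples, so adjust them to your firmware's output:

```python
# Sketch: rank the busiest logical drives in a DS3000/4000/5000
# performance statistics capture (format as in the examples above).
# Column layout is an assumption -- adjust to your firmware's output.
import csv

def busiest_volumes(path, top=5):
    """Return [(volume name, Maximum IO/second)] sorted descending."""
    rows = []
    with open(path, newline="") as f:
        for rec in csv.reader(f):
            # Devices, Total IOs, Read %, Cache Hit %, Cur kB/s, Max kB/s,
            # Cur IO/s, Max IO/s  -> Maximum IO/second is column index 7
            if rec and rec[0].strip().startswith("Logical Drive"):
                try:
                    rows.append((rec[0].strip(), float(rec[7])))
                except (IndexError, ValueError):
                    pass  # skip header and iteration-marker lines
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top]
```

Typical use: `busiest_volumes("perf01.txt", top=10)` after an SMcli capture.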


DS3/4/5000 Performance Analyzer Tool (IBM internal only)

MS Excel spreadsheet for a quick import and analysis of DS4000 performance statistics outputs and DS4000 profile, with an export feature for generating an HTML report

Excel-based DS4000 Performance Analyzer Tool

http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3088


DS3000/4000/5000 Performance Data Collection


Storage Explorer Lite (LSI):
- Free tool from LSI to collect subsystem performance statistics from a selection of DS3000/4000/5000 subsystems, providing historical performance information: IOps, MBps, read cache hit, read:write ratio, average I/O size
- Runs on Windows 32-bit platforms only, including Windows XP and Windows Vista
- The installation utility installs the Microsoft .NET 3.5 Framework and Microsoft SQL Server Express 2008 (requires SP2 for Windows XP) from an active internet connection
- Requires registration at: http://www.lsi.com/DistributionSystem/User/Login.aspx


XIV XIVGUI Performance Data Collection


XIV XCLI Performance Data Collection

XCLI (one command line):


>xcli -m IPADDR -u USER -p PASSWD -s -y statistics_get start=2009-10-07.11:00 count=300 interval=1 resolution_unit=minute > C:\xiv_20091007.csv

DS6000/DS8000 DSCLI Performance Metrics Examples


dscli> showfbvol -metrics 2000
Date/Time: 24. April 2007 14:32:15 CEST IBM DSCLI Version: 5.2.2.224 DS: IBM.2107-7503461
ID 2000, Date 04/24/2007 14:30:25 CEST
normrdrqts 17, normrdhits 5, normwritereq 121050, normwritehits 121050, seqreadreqs 0, seqreadhits 0, seqwritereq 151127, seqwritehits 151127, cachfwrreqs 0, cachfwrhits 0, cachefwreqs 0, cachfwhits 0, inbcachload 0, bypasscach 0, DASDtrans 29, seqDASDtrans 0, cachetrans 33315, NVSspadel 0, normwriteops 0, seqwriteops 0, reccachemis 2, qwriteprots 0, CKDirtrkac 0, CKDirtrkhits 0, cachspdelay 0, timelowifact 0, phread 25, phwrite 33420, phbyteread 5, phbytewrite 2082, recmoreads 2, sfiletrkreads 0, contamwrts 0, PPRCtrks 0, NVSspallo 272177, timephread 28, timephwrite 40138, byteread 0, bytewrit 8508, timeread 4, timewrite 4061

dscli> showrank -metrics r2
Date/Time: 24. April 2007 14:37:43 CEST IBM DSCLI Version: 5.2.2.224 DS: IBM.2107-7503461
ID R2, Date 04/24/2007 14:35:53 CEST
byteread 587183, bytewrit 287002, Reads 1176760, Writes 315629, timeread 2509716, timewrite 392892

dscli> showioport -metrics I001
Date/Time: 24. April 2007 14:41:47 CEST IBM DSCLI Version: 5.2.2.224 DS: IBM.2107-7503461
ID I0001, Date 04/24/2007 14:39:56 CEST
byteread (FICON/ESCON) 0, bytewrit (FICON/ESCON) 0, Reads (FICON/ESCON) 0, Writes (FICON/ESCON) 0, timeread (FICON/ESCON) 0, timewrite (FICON/ESCON) 0, bytewrit (PPRC) 0, byteread (PPRC) 0, Writes (PPRC) 0, Reads (PPRC) 0, timewrite (PPRC) 0, timeread (PPRC) 0, byteread (SCSI) 56586, bytewrit (SCSI) 454426, Reads (SCSI) 414404, Writes (SCSI) 4906333, timeread (SCSI) 2849, timewrite (SCSI) 111272
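These `-metrics` counters accumulate over time, so rates come from the deltas between two samples taken a known interval apart. A sketch for `showrank` output; it assumes `timeread` is a running total of read service time in the counter's native unit (check the DSCLI documentation for the exact unit on your release), and the sample values below are made up:

```python
# Rates from two cumulative showrank -metrics samples taken interval_s apart.
# Assumes timeread is a running total of read service time in the counter's
# native unit -- verify against the DSCLI documentation for your release.
def rank_rates(prev, cur, interval_s):
    """Per-second read/write rates plus average time per read operation."""
    d_reads = cur["Reads"] - prev["Reads"]
    d_writes = cur["Writes"] - prev["Writes"]
    d_timeread = cur["timeread"] - prev["timeread"]
    avg_read_time = d_timeread / d_reads if d_reads else 0.0
    return d_reads / interval_s, d_writes / interval_s, avg_read_time

# Hypothetical samples taken 60 s apart
sample0 = {"Reads": 1000, "Writes": 500, "timeread": 2000}
sample1 = {"Reads": 4000, "Writes": 1100, "timeread": 8000}
print(rank_rates(sample0, sample1, 60))  # -> (50.0, 10.0, 2.0)
```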


TPC for Disk Subsystem Performance Monitoring


- The IBM Tivoli Storage Productivity Center (TPC) is a suite of storage infrastructure management tools that centralizes, simplifies and automates storage tasks associated with storage systems, Storage Area Networks (SAN), replication services and capacity management.
- IBM Tivoli Storage Productivity Center for Disk (TPC for Disk) is an optional component of TPC, designed to manage multiple SAN storage devices and to monitor the performance of SMI-S compliant storage subsystems from a single user interface.
- IBM Tivoli Storage Productivity Center Standard Edition includes three components of the TPC suite as one bundle at a single price: TPC for Data, Fabric and Disk.
- New customers with the IBM System Storage Productivity Center (SSPC), which includes the preinstalled (but separately purchased) IBM Tivoli Storage Productivity Center Basic Edition, only need to purchase the additional TPC for Disk component to be able to collect performance statistics from their supported IBM storage subsystems.
- TPC for Disk is the official IBM product for clients requiring performance monitoring of their IBM storage subsystems (e.g. DS4k, DS5k, DS6k, DS8k, SVC, ESS, 3584 Tape, ...)
- TPC V4.x introduced Tivoli Common Reporting (TCR) for creating customized reports from the TPC database with BIRT (Business Intelligence Reporting Tools) & IBM Cognos 8 Business Intelligence (Version 8.4)


TPC for Disk Subsystem Performance Reports


[Screenshot: select to initiate the report creation]


TPC for Disk: Subsystem Performance Reports


Select to create a chart.

TPC for Disk: Export Subsystem Performance Reports


Select to export performance data as a CSV output file using the File > Export Data dialog.
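Once exported, such a CSV file can be post-processed outside of TPC, for example with a short Python script. This is a minimal sketch only: the column names "Volume" and "Overall Response Time (ms)" are assumptions for illustration and must be matched to the header line of the actual export.

```python
import csv
from collections import defaultdict

def avg_response_time_by_volume(csv_path):
    """Average the overall response time per volume from an exported report.

    Note: the column names "Volume" and "Overall Response Time (ms)" are
    assumptions for illustration only; check the header line of the actual
    CSV export and adjust accordingly.
    """
    totals = defaultdict(lambda: [0.0, 0])  # volume -> [rt sum, sample count]
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            entry = totals[row["Volume"]]
            entry[0] += float(row["Overall Response Time (ms)"])
            entry[1] += 1
    return {vol: rt_sum / n for vol, (rt_sum, n) in totals.items()}
```

The same approach works for the controller, port and array exports; only the key column and metric column change.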


TPC for Disk: Analyzing Reports in a Spreadsheet


TPC for Disk: Reports of Interest by Subsystem

ESS, DS6000 and DS8000:
- By Storage Subsystem
- By Controller
- By Array
- By Volume
- By Port

SAN Volume Controller:
- By Storage Subsystem
- By IO Group
- By Node
- By Managed Disk Group
- By Volume
- By Managed Disk
- By Port

DS4000 and other supported SMI-S compliant storage subsystems:
- By Storage Subsystem
- By Volume
- By Port

Some reports may give more or less data, depending on the exact level of SMI-S compliance of the vendor-supplied CIM agents.

Don't forget to export a complete set of reports for the subsystem of interest, e.g. for a DS8000: 20080131-75APNK1-subsystem.csv, 20080131-75APNK1-controller.csv, 20080131-75APNK1-ports.csv, 20080131-75APNK1-arrays.csv, 20080131-75APNK1-volumes.csv

Limit the reports to a representative time frame, as the amount of data, especially for the volume report, can be extremely large!

TPC for Disk: How to Start with Performance Monitoring


Simply start monitoring and thus understanding the current workload patterns (workload range and workload profile) developing over the day/week/month under normal operating conditions, where no end-user complaints are present. Develop an understanding of the expected behaviour.

I/O rates and response times may vary considerably from hour to hour or day to day simply due to varying application loads, business times and changes in the workload profile. You may even experience times with high I/O rates and extremely low response times (e.g. high cache hit ratios) as well as times with only moderate I/O rates but higher response times (e.g. lower cache hit ratios), still without being of any concern. Appropriate thresholds for I/O rates and response times can be derived from these statistics based on particular application and business requirements.

Regularly collect selected data sets for historical reference and do projections of workload trends. Evaluate trends in I/O rate and response time and plan for growth accordingly. Typically, response times increase with increasing I/O rates. Historical performance data is the best source for performance and capacity planning.

Watch for any imbalance of the overall workload distribution across the subsystem resources. Prevent single resources from becoming overloaded (hot spots). Redistribute workload if needed.

When end-user performance complaints arise, simply compare current and historical data and look for changes in the workload that may lead to performance impacts.

Additional performance metrics may help to better understand the workload profile behind the changes in I/O rates and response times, and are required for appropriate Disk Magic models and performance evaluations:
- Read:Write ratio
- Read Cache Hit Percentage [%]
- avg. Read/Write/Overall Transfer Size [kB] per I/O operation
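The trend evaluation suggested above can be sketched with a simple least-squares fit over historical I/O rate samples. This is a minimal illustration of projecting workload growth from collected history, not a TPC feature:

```python
def linear_trend(samples):
    """Least-squares slope of a series of (time_index, io_rate) samples.

    A positive slope of, say, 10 means the I/O rate grows by roughly
    10 IO/s per time unit (e.g. per day) and can be extrapolated for
    capacity planning. Input data is assumed to come from exported
    historical performance reports.
    """
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    # standard least-squares slope formula
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

For real planning work, a spreadsheet trend line or a proper statistics package gives the same slope plus confidence information; the point here is only that the calculation is straightforward once historical data is collected.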



TPC for Disk: Basic Performance Metrics


There are lots of performance metrics available. Which ones are best to start with?

The most important metrics for a storage subsystem are:
- I/O Rate: number of I/O operations per second [IOps or IO/s]
- Response Time (RT): average service time per I/O operation in milliseconds [ms]

These metrics are typically available for read operations, write operations and the total number of processed I/O operations on subsystem, controller, port, array, volume, I/O group, node, mdisk & mdisk group level.

Basic performance statistics to look at for storage subsystems are in principle:
- front-end I/O statistics on subsystem level for an overview of the system's overall workload
- front-end I/O statistics on volume level for selected critical applications / host systems
- back-end I/O statistics on array level (i.e. on the physical disk level / spindles)

General thresholds for front-end statistics are difficult to provide, because:
- I/O rate thresholds depend on workload profile and subsystem capabilities
- RT thresholds depend on application, customer requirements and business hours

An additional metric is the Data Rate: throughput in megabytes per second [MBps]:
- on subsystem level for an overview of overall throughput
- on port level, together with Port RT, for an overview of port and I/O adapter utilization
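As a sketch of how these basic metrics relate to each other, the I/O rate, average response time and data rate for a sample interval can be derived from cumulative raw counters taken at the start and end of the interval. The counter names below are illustrative only, not actual TPC counter names:

```python
def interval_metrics(prev, curr, interval_seconds):
    """Derive basic metrics from two cumulative counter samples taken
    interval_seconds apart.

    The dictionary keys (io_count, service_time_ms, bytes) are made up
    for this illustration and are not actual TPC counter names.
    """
    ios = curr["io_count"] - prev["io_count"]
    svc_ms = curr["service_time_ms"] - prev["service_time_ms"]
    mib = (curr["bytes"] - prev["bytes"]) / (1024 * 1024)
    return {
        "io_rate_iops": ios / interval_seconds,                # [IO/s]
        "avg_response_time_ms": svc_ms / ios if ios else 0.0,  # [ms]
        "data_rate_mbps": mib / interval_seconds,              # [MB/s]
    }
```

For example, 3000 I/Os with 15000 ms of accumulated service time over a 300-second interval yield an I/O rate of 10 IO/s and an average response time of 5 ms, which is exactly the kind of pairing the front-end reports show.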

A practical Introduction to Disk Storage System Performance

TPC for Disk: Basic Guidelines for DS8000


In general, there do not exist typical values or fixed thresholds for all performance metrics, as they typically depend strongly on the nature of the workload:

Online Transaction Processing (OLTP) workloads (e.g. database):
- small transfer sizes (4kB...16kB) with high I/O rates
- low front-end response times around 5ms commonly expected

Backup, batch or sequential-like workloads:
- large transfer sizes (32kB...256kB) with low I/O rates but high data rates
- high front-end response times even up to 30ms can still be acceptable

Subsystem level front-end metrics (subsystem total average):
- Overall Response Time < 10ms

Volume level front-end metrics (I/O performance as experienced by the host systems):
- Overall Response Time < 15ms (depends on application requirements and workload)
- Write-cache Delay Percentage < 3% (typically should be 0%)

Array level back-end metrics (physical disk access):
- Back-end Read Response Time < 25ms
- Disk Utilization Percentage << 80%
- I/O rate: depends on RAID level, workload profile, number and speed of DDMs; an array is considered very busy with I/O rates near or above 1000 I/Os (DS8000/DS6000)

Note: these values are just some suggestions as rules of thumb to start with. In general, appropriate thresholds need to be based on the particular environment and application requirements of the client.
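For illustration only, such rule-of-thumb checks could be encoded in a small helper. The metric names below are invented for this sketch (they are not TPC field names), and the limits should be adjusted to the particular environment and application requirements:

```python
# Rule-of-thumb limits as suggested above; metric names are made up for
# this illustration and are not actual TPC field names.
DS8000_RULES = {
    "volume_overall_rt_ms":  lambda v: v < 15,  # volume front-end RT
    "write_cache_delay_pct": lambda v: v < 3,   # typically should be 0%
    "backend_read_rt_ms":    lambda v: v < 25,  # array back-end read RT
    "disk_utilization_pct":  lambda v: v < 80,  # should stay well below
}

def check_thresholds(sample):
    """Return the metric names in `sample` that break a rule of thumb."""
    return [name for name, within_limit in DS8000_RULES.items()
            if name in sample and not within_limit(sample[name])]
```

A sample with a volume overall response time of 22 ms would be flagged, while values below the limits pass silently; the point is to use such checks as a starting point for investigation, not as hard alerts.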


ESCC: Storage is our profession!

Client training workshops, seminars

Lab validation, proof of concept

Client strategy workshops

End-to-end client support & services

Channel / skill enablement, certification

Showcases, remote demo, new products

Custom software & solutions

Storage technical assistance

Usergroups, Client councils

http://escc.mainz.de.ibm.com


Disclaimer
Copyright 2011 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service. The performance information contained in this document was derived under specific operating and environmental conditions. The results obtained by any party implementing the products and/or services described in this document will depend on a number of factors specific to such party's operating environment and may vary significantly. IBM makes no representation that these results can be expected in any implementation of such products and/or services. Accordingly, IBM does not provide any representations, assurances, guarantees, or warranties regarding performance. THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. 
IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY, 10504-1785, U.S.A.



Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
AS/400, e-business (logo), eServer, FICON, IBM, IBM (logo), iSeries, OS/390, pSeries, RS/6000, S/30, VM/ESA, VSE/ESA, WebSphere, xSeries, z/OS, zSeries, z/VM, System i, System i5, System p, System p5, System x, System z, System z9, BladeCenter, System Storage, System Storage DS, TotalStorage. For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml. Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

The following are trademarks or registered trademarks of other companies.


Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. LSI is a trademark or registered trademark of LSI Corporation. * All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. 
Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.

