
Hitachi AMS 2500 Using 200GB SSD Drives

Scalability Analysis
A Performance Brief

By Alan Benway (Performance Measurement Group, Technical Operations)

Confidential Hitachi Data Systems Internal and Channel Partner Use Only

August 2010

Executive Summary
The purpose of this testing was to establish a variety of performance comparisons of SSD and SAS drives on
the Hitachi Adaptable Modular Storage 2500 (AMS 2500) midrange storage array. Various tests used 5, 10, 15,
and 20 200GB SSD or SAS disks, each set configured as RAID-5 (4D+1P) groups. No other RAID type was tested, since
our field experience shows that almost all users deploy SSDs in RAID-5 configurations in order to maximize their
price/performance ratio. In some tests (random write, for instance), we know that more IOPS would probably
result if we ran RAID-10 configurations. Additionally, some SSD tests were also run with Hitachi
Dynamic Provisioning for comparison. The AMS 2500's Hardware Load Balancing feature was enabled.
There were no copy product license keys enabled, so the maximum amount of cache was available. There
were 11 categories of tests conducted in all.

The performance results are presented in the charts in the Test Results Summary section of this report. While
we attempt to profile a variety of application characteristics, no benchmark can replicate a real world
application as well as the actual applications themselves.
Shown below are result summaries from Test 1 (random) and Test 2 (sequential). These tables show the
measured SSD results and the interpolated 146GB 15K RPM SAS results from an AMS 2500, indicating the
number of SAS drives required to match each SSD result. Note that the number of host paths in use varied with
the number of LUNs tested. Up to four host paths were used for the SSD tests, and up to 16 were used for the
SAS tests. For example, in the SAS tests, 14 LUNs would have been mapped over 14 paths, while 72 LUNs
would have been mapped across 16 paths.
As a general rule of thumb, 30 SSDs on the AMS 2500 can replace 360 15K RPM SAS drives for meeting
random performance requirements. When reviewing past SAS test results, one can see that the system
scalability limit for certain RAID levels and workloads can occur well below 360 SAS disks. As such, one
cannot expect to use 30 SSD drives plus 120 SAS disks with heavy concurrent loads that have significant write
components. In the SAS sequential results below, when using RAID-5 (4D+1P) the system limit was at about
160 disks. As such, it is expected that heavy sequential use of 30 SSDs in RAID-5 (4D+1P) would consume all
of the array's internal resources.
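This drive-count equivalence can be sanity-checked from the per-drive rates in the comparison tables below (roughly 3,000 4KB random read IOPS per SSD versus about 169 per 15K RPM SAS drive). The short sketch below is ours, for illustration only; it simply scales the measured per-drive rates and ignores the system-level limits just described.

```python
import math

# Measured per-drive 4KB random read rates from the comparison tables below.
IOPS_PER_SSD = 3000   # ~3,004-3,043 IOPS per SSD
IOPS_PER_SAS = 169    # 15K RPM SAS, RAID-5 (4D+1P)

def sas_equivalent(ssd_count: int) -> int:
    """SAS drives needed to match ssd_count SSDs on random reads,
    rounded up to whole RAID-5 (4D+1P) groups of 5 drives."""
    drives = math.ceil(ssd_count * IOPS_PER_SSD / IOPS_PER_SAS)
    return math.ceil(drives / 5) * 5

print(sas_equivalent(20))  # 360, matching the table below
print(sas_equivalent(30))  # 535, well past the ~480-drive system limit
```

Note that the raw ratio alone would suggest about 18 SAS drives per SSD; the 12:1 rule of thumb in the tables reflects that real configurations hit system limits before the per-drive arithmetic does.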

100% Random Read Comparison

100% Random Read — SSD (1-4 paths, 8 threads/LUN, 4KB, RAID-5 4D+1P)

Drives   LUNs   Threads   IOPS     RT [msec]   IOPS/SSD
5        1      8         15,148   0.5         3,030
10       2      16        30,437   0.5         3,043
15       3      24        45,298   0.5         3,020
20       4      32        60,086   0.5         3,004

100% Random Read — SAS 15K (16 paths, 8 threads/LUN, 4KB, RAID-5 4D+1P)

Drives   LUNs   Threads   IOPS     RT [msec]   IOPS/HDD
90       18     144       15,200   9.5         169
180      36     288       30,400   9.5         169
270      54     432       45,600   9.5         169
360      72     576       60,800   9.5         169
480*     96     768       79,500   9.7         166

*System performance limit.


100% Random Write Comparison

100% Random Write — SSD (1-4 paths, 1 thread/LUN, 4KB, RAID-5 4D+1P)

Drives   LUNs   Threads   IOPS     RT [msec]   IOPS/SSD
5        1      1         4,498    0.2         900
10       2      2         10,198   0.2         1,020
15       3      3         12,767   0.2         851
20       4      4         16,687   0.2         834

100% Random Write — SAS 15K (14-16 paths, 1 thread/LUN, 4KB, RAID-5 4D+1P)

Drives   LUNs   Threads   IOPS     RT [msec]   IOPS/HDD
70       14     14        4,700    22.0        67
160      32     32        9,600    27.0        60
480*     96     96        15,000   48.0        33

*System performance limit.

100% Sequential Read Comparison

100% Sequential Read — SSD (1-4 paths, 1 thread/LUN, 256KB, RAID-5 4D+1P)

Drives   LUNs   Threads   MB/s      MB/s per SSD
5        1      1         321.2     64.2
10       2      2         623.5     62.4
15       3      3         910.7     60.7
20       4      4         1,224.6   61.2

100% Sequential Read — SAS 15K (1-16 paths, 1 thread/LUN, 256KB, RAID-5 4D+1P)

Drives   LUNs   Threads   MB/s      MB/s per HDD
5        1      1         295       59
10       2      2         590       59
45       9      9         900       20
60       12     12        1,200     20
160*     52     52        2,300     14.5

*System performance limit.


100% Sequential Write Comparison

100% Sequential Write — SSD (1-4 paths, 1 thread/LUN, 256KB, RAID-5 4D+1P)

Drives   LUNs   Threads   MB/s     MB/s per SSD
5        1      1         257.9    51.6
10       2      2         506.6    50.7
15       3      3         716.7    47.8
20       4      4         931.7    46.6

100% Sequential Write — SAS 15K (2-16 paths, 1 thread/LUN, 256KB, RAID-5 4D+1P)

Drives   LUNs   Threads   MB/s     MB/s per HDD
10       2      2         454.9    45.5
55       11     11        495      9
80       16     16        718      9
130      26     26        910      7
160*     52     52        1,095    6.8

*System performance limit.


Notices and Disclaimer


Copyright 2010 Hitachi Data Systems Corporation. All rights reserved.
The performance data contained herein was obtained in a controlled isolated environment. Actual results that
may be obtained in other operating environments may vary significantly. While Hitachi Data Systems
Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same
results can be obtained elsewhere.
All designs, specifications, statements, information and recommendations (collectively, "designs") in this
manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all
warranties, including without limitation, the warranty of merchantability, fitness for a particular purpose and
non-infringement or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data
Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages,
including without limitation, lost profit or loss or damage to data arising out of the use or inability to use the
designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such
damages.
Adaptable Modular Storage is a registered trademark of Hitachi Data Systems, Inc. in the United States, other
countries, or both.
Other company, product or service names may be trademarks or service marks of others.
This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems
Corporation may make improvements and/or changes in product and/or programs at any time without notice.
No part of this document may be reproduced or transmitted without written approval from Hitachi Data
Systems Corporation.

WARNING: This document is HDS internal documentation, for informational purposes only. It is not
meant to be disclosed to customers or discussed without a proper non-disclosure agreement (NDA).


Document Revision Level

Revision   Date        Description
1.0        July 2010   Initial Release
1.1        Aug 2010    Fixed typo in executive summary table (seq writes, SAS)

Reference
Hitachi AMS 2000 Architecture and Concepts Guide
Hitachi AMS 2500 Dynamic Provisioning Concepts, Performance, and Best Practices Guide

Contributors
The information included in this document represents the expertise, feedback, and suggestions of a number of
skilled practitioners. The author would like to recognize and thank the following contributors or reviewers of
this document:
Yusuke Nishihara, Engineer, Disk Array Software Development Dept. III, Storage Systems

Development, Disk Array Systems Division, Hitachi LTD


Ian Vogelesang, Performance Measurement Group - Technical Operations
Mel Tungate, Product Management, Midrange


Table of Contents
Executive Summary ............................................................................................................................................................. 2
Purpose of This Testing....................................................................................................................................................... 9
Workload Generator Information ........................................................................................................................................ 9
Test Configurations and Workloads ................................................................................................................................... 9

Configuration.................................................................................................................................................. 9
Test Methodologies...................................................................................................................................... 10
Tests 1 and 2: Uniform Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS ......... 11
Tests 3 and 4: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS .......... 11
Test 5: Single Workload, Single RAID Group, Thread Scalability, SSD and SAS .................................................. 12
Tests 6 to 9: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD, HDP and non-HDP ... 12
Tests 10 and 11: Mixed Workloads, 4 RAID Groups, Random, SSD and SAS, HDP and non-HDP ...................... 12
AMS 2500 Test Results Summary ..................................................................................................................................... 13

Test 1 Results .............................................................................................................................................. 13


Random Read Summary ........................................................................................................................................ 13
Random Write Summary ........................................................................................................................................ 14

Test 2 Results .............................................................................................................................................. 14


Sequential Read Summary .................................................................................................................................... 14
Sequential Write Summary..................................................................................................................................... 15

Test 3 Results .............................................................................................................................................. 16


Observations .......................................................................................................................................................... 16

Test 4 Results .............................................................................................................................................. 17


Observations .......................................................................................................................................................... 17

Test 5 Results .............................................................................................................................................. 21


Test 6 Results (non-HDP) ............................................................................................................................ 22
Test 7 Results (HDP) ................................................................................................................................... 22
Test 8 Results (non-HDP) ............................................................................................................................ 23
Test 9 Results (HDP) ................................................................................................................................... 23
Test 10 Results ............................................................................................................................................ 23
Test 11 Results (HDP) ................................................................................................................................. 25
Conclusions ........................................................................................................................................................................ 26
APPENDIX A. Test Configuration Details ......................................................................................................................... 28

Test information ........................................................................................................................................... 28


Host Configuration ....................................................................................................................................... 28
Storage Configuration .................................................................................................................................. 28
APPENDIX B. Test-1 Full Results ...................................................................................................................................... 29
APPENDIX C. Test-2 Full Results ...................................................................................................................................... 33
APPENDIX D. Test-3 Full Results ...................................................................................................................................... 37
Random Mixed Workloads ..................................................................................................................................... 37
APPENDIX E. Test-4 Full Results ...................................................................................................................................... 39
Sequential Workloads Using Default 256KB RAID Chunk ..................................................................................... 39
Sequential Workloads Using Optional 64KB RAID Chunk ..................................................................................... 41


Purpose of This Testing


The purpose of this testing was to establish a variety of performance comparisons of SSD and SAS
drives on the Hitachi Adaptable Modular Storage 2500 (AMS 2500) midrange storage array. Various tests
used 5, 10, 15, and 20 200GB SSD or 146GB 15K SAS disks in a RAID-5 (4D+1P) configuration.
Additionally, some SSD tests were also run with Hitachi Dynamic Provisioning for comparison.
The AMS 2500's Hardware Load Balancing feature was enabled. There were no copy product license
keys enabled, so the maximum amount of cache was available.
These results will help answer questions about the kind of performance capabilities to expect with various
workloads when using a 0% cache hit ratio. The performance results are presented in the charts in the
Test Results Summary section of this report. While we attempt to profile a variety of application
characteristics, no benchmark can replicate a real world application as well as the actual applications
themselves.

Workload Generator Information


Vdbench and IOmeter were used to generate a variety of I/O workloads against raw volumes (no file
systems, with their various overheads). Various workload parameters such as I/O rates, file sizes,
transfer sizes, thread counts, read/write ratios, and random versus sequential were controlled by
parameter files. By using raw volumes, the tests bypassed the host file system and its cache, thus more
accurately reflecting the I/O performance capabilities of the storage unit.
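As an illustration only, a Vdbench parameter file for one of these raw-volume workloads might look like the sketch below. The device name, run name, and durations here are assumptions; the actual parameter files used in these tests are not reproduced in this brief.

```
* Sketch of a Vdbench parameter file: 75% random reads, 8KB transfers,
* 8 threads against one raw Windows volume (illustrative values only).
sd=sd1,lun=\\.\PhysicalDrive1,threads=8
wd=wd1,sd=sd1,rdpct=75,seekpct=100,xfersize=8k
rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5
```

The three-line structure (storage definition, workload definition, run definition) is how Vdbench expresses the parameters listed above: transfer size, read/write ratio, random versus sequential, and thread count.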

Test Configurations and Workloads


Configuration
There was a single Hitachi AMS 2500 midrange storage system used for these tests. The AMS 2500 was
configured with 16GB of cache. A RAID-5 (4D+1P) configuration was used for both SSD and SAS
configurations. Various LUN sizes were used during these tests, with 8GB, 133GB, or 362GB LUNs
configured. On some tests there were two LUNs per RAID Group rather than one.
There was one HP DL585 G2 server used, with 4 x 3GHz Opteron dual-core processors, 16GB of RAM,
and four Qlogic QLE2462 PCIe 4Gb/sec Fibre Channel HBAs, with up to 8 4Gb/sec paths used for the
tests. The operating system used was Microsoft Windows Server 2003 with Service Pack 2.
Table 1 shows the general locations of the SAS and SSD drives for each RAID Group. Four disk trays
were used, with five empty drive slots per tray. From one to four host ports (1 or 2 per controller) were
used for these tests.


Table 1. AMS 2500 RAID Group Layout, RAID-5 (4D+1P)

Tray 3:   RG3 SAS   RG3 SSD
Tray 2:   RG2 SAS   RG2 SSD
Tray 1:   RG1 SAS   RG1 SSD
Tray 0:   RG0 SAS   RG0 SSD

(Slots 10-14 were the five empty drive slots in each tray.)

Test Methodologies
There were eleven types of tests performed on SSD drives, with six of these tests also run on SAS disks.
The details of these tests are shown below in Tables 2 and 3. Note that tests 6, 8 and 11 also used HDP.
While SAS results are shown later in this report, one needs to examine previous AMS 2500 SAS scalability test
results to see how many SAS disks are needed to achieve levels comparable with these SSD results.
Also note that these tests do not explore the use of 20 SSD drives along with a scaled number of SAS
disks to see where the internal bandwidth of the controllers is exhausted.
Table 2. Test Configuration Overview

Test Set#   Test Name           HDD/SSD    RAID-5 4D+1P Groups   LUN/RG   LU Size   HDP
1           Basic Performance   HDD, SSD   1, 2, 3, 4            1        8GB       no
2           Basic Performance   HDD, SSD   1, 2, 3, 4            1        8GB       no
3           Basic Performance   HDD, SSD   1, 2, 3, 4            1        8GB       no
4           Basic Performance   HDD, SSD   1, 2, 3, 4            1        8GB       no
5           Basic Performance   HDD, SSD   1                     1        362GB     no
6           HDP Performance     SSD        4 (4 RG/1 pool)       2        133GB     yes
7           HDP Performance     SSD        4                     2        133GB     no
8           HDP Performance     SSD        4 (4 RG/1 pool)       2        133GB     yes
9           HDP Performance     SSD        4                     2        133GB     no
10          Response Time       HDD, SSD   4                     2        133GB     no
11          Performance         SSD        4 (4 RG/1 pool)       2        133GB     yes

Table 3. Workload Details by Test

Test Set#   Threads/LU            Workload     Block Size (KB)            Read %                Tool
1           R: 8, 32 / W: 1, 8    Random       .5, 4, 16, 64, 256, 1024   0, 100%               IOmeter
2           R: 1, 8 / W: 1, 8     Sequential   .5, 4, 16, 64, 256, 1024   0, 100%               IOmeter
3           8                     Random       2, 4, 8, 16                0, 25, 50, 75, 100%   Vdbench
4           1                     Sequential   64, 128, 256, 512, 1024    0, 25, 50, 75, 100%   Vdbench
5           1-256                 Random       8                          75%                   Vdbench
6           16                    Random       8                          0, 25, 50, 75, 100%   Vdbench
7           16                    Random       8                          0, 25, 50, 75, 100%   Vdbench
8           16                    Sequential   1024                       0, 25, 50, 75, 100%   Vdbench
9           16                    Sequential   1024                       0, 25, 50, 75, 100%   Vdbench
10          8 (16, 32, 64, 128)   Random       4                          0, 70, 100%           Vdbench
11          8 (16, 32, 64, 128)   Random       4                          0, 70, 100%           Vdbench

Tests 1 and 2: Uniform Workloads, RAID Group and Block Size Scalability, Random and
Sequential, SSD and SAS
Test 1 measured the performance of 100% random reads and 100% random writes of 1-4 RAID Groups
of SSD and SAS disks using various block sizes and thread counts (per LUN). Test 2 was the same
except for sequential workloads.
The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create a
single 8GB LUN per RAID Group for both SSD and SAS disks. These 4 LUNs were evenly assigned to
the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load
Balancing enabled. LUNs were driven by workloads on the controllers that managed them.
For Random workloads, IOmeter was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100% Read and 100% Write, using block sizes of .5KB, 4KB,
16KB, 64KB, 256KB, and 1024KB. For reads, there were tests with 8 or 32 threads per LUN, and for
writes there were tests with 1 or 8 threads per LUN. Tests were run against 1, 2, 3, and 4 LUNs (or 5, 10,
15, and 20 disks) using 1, 2, 3, or 4 ports.
For Sequential workloads, IOmeter was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100% Read and 100% Write, using block sizes of .5KB, 4KB,
16KB, 64KB, 256KB, and 1024KB with 1 and 8 threads per LUN for both reads and writes. A special set
of tests was run using a 256KB block size and 1 thread per LUN for reads and writes. Tests were run
against 1, 2, 3, and 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.
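Counting the combinations described above gives a sense of the size of the Test 1 matrix. The enumeration below is merely illustrative bookkeeping on our part, not a tool used in the testing:

```python
from itertools import product

# Test 1 random-workload matrix as described above.
block_sizes_kb = [0.5, 4, 16, 64, 256, 1024]
lun_counts = [1, 2, 3, 4]                        # 5, 10, 15, 20 disks
threads_per_lun = {"read": [8, 32], "write": [1, 8]}

runs = [(op, bs, luns, t)
        for op in ("read", "write")
        for bs, luns in product(block_sizes_kb, lun_counts)
        for t in threads_per_lun[op]]
print(len(runs))  # 96 measurement points for Test 1 alone
```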

Tests 3 and 4: Mixed Workloads, RAID Group and Block Size Scalability, Random and
Sequential, SSD and SAS
Test 3 measured the performance of 1-4 RAID Groups of SSD and SAS disks using mixed random
workloads with several block sizes and 8 threads per LUN. Test 4 was the same except for
sequential workloads, larger block sizes, and only 1 thread per LUN.
The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create a
single 8GB LUN per RAID Group for both SSD and SAS disks. These 4 LUNs were evenly assigned to
the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load
Balancing enabled. LUNs were driven by workloads on the controllers that managed them.
For Random workloads, Vdbench was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using block
sizes of 2KB, 4KB, 8KB and 16KB. All tests used 8 threads per LUN. Tests were run against 1, 2, 3, or 4
LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.
For Sequential workloads, IOmeter was configured to drive the workloads on the HP server against the
raw volumes. The workload mixes included 100% Read and 100% Write, using block sizes of .5KB, 4KB,
16KB, 64KB, 256KB, and 1024KB with 8 threads per LUN for both reads and writes. A special set of tests
was run using a 64KB RAID chunk size and 1 thread per LUN for reads and writes. Tests were run
against 1, 2, 3, or 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.


Test 5: Single Workload, Single RAID Group, Thread Scalability, SSD and SAS
Test 5 measured the performance of one RAID Group of SSD or SAS drives with a single 362GB LUN,
using a 75% random read workload with an 8KB block size and a thread count scaling from 1 to
256 threads.
The initial step was to configure 1 RAID Group (5 disks) using RAID-5 (4D+1P) and then create a single
362GB LUN for both SSD and SAS disks. This one LUN was assigned to one AMS 2500 port (0A). The
AMS 2500 had its internal Hardware Load Balancing enabled.
For Random workloads, Vdbench was used to drive the workload on the HP server against one raw
volume. The workload was 75% random read using a block size of 8KB. The tests scaled from 1 to
256 threads on this LUN, using one port.

Tests 6 to 9: Mixed Workloads, RAID Group and Block Size Scalability, Random and
Sequential, SSD, HDP and non-HDP
Test 6 measured the non-HDP performance of 4 RAID Groups of SSD drives using mixed random
workloads with 8 133GB LUNs, an 8KB block size and 16 threads per LUN. Test 7 was the same except
for mixed sequential workloads and a 1024KB block size. Test 8 measured the HDP performance of 4
RAID Groups of SSD drives using mixed random workloads with 8 133GB LUNs, an 8KB block size and
16 threads per LUN. Test 9 was the same except for mixed sequential workloads and a 1024KB block
size.
The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create two
133GB LUNs per RAID Group for both SSD and SAS drives. These 8 LUNs were evenly assigned to the
four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load Balancing
enabled. LUNs were driven by workloads on the controllers that managed them.
For Random workloads, Vdbench was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using a block
size of 8KB. All tests used 16 threads per LUN. Tests were run against 8 LUNs (20 drives) on four ports.
For Sequential workloads, Vdbench was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using a block
size of 1024KB. All tests used 16 threads per LUN. Tests were run against 8 LUNs (20 drives) on four
ports.

Tests 10 and 11: Mixed Workloads, 4 RAID Groups, Random, SSD and SAS, HDP and non-HDP
Test 10 measured the performance of 4 RAID Groups of SSD and SAS disks using mixed random
workloads with a single 4KB block size and various thread counts per LUN. There were two 133GB
LUNs created per RAID Group, or eight in all. Test 11 was the same except for the use of all 4 RAID
Groups as Pool Volumes in an HDP configuration, with 8 DP-VOLs created from the Pool.
Unlike any of the previous test sets, a base set of tests within Tests 10 and 11 scaled the load on the
drives, where the aggregate percent busy rate for the RAID Groups was 10%, 50%, 70%, 80%, 90%, and
100%. All of these were run using 8 threads per LUN or DP-VOL. Another series of tests was also run
where the percent busy rate was held at 100% and the thread counts varied, using 16, 32, 64, and 128
threads per LUN or DP-VOL. This was to gauge the effects of load versus response time.
For Test 10, the initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then
create two 133GB LUNs per RAID Group for both SSD and SAS disks. These 8 LUNs were evenly
assigned to the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. In Test 11, the system was reconfigured to


have the four SSD RAID Groups used as an HDP Pool, with 8 DPVOLs of 133GB created from that Pool.
There were no similar tests performed on SAS disks.
The AMS 2500 had its internal Hardware Load Balancing enabled. LUNs were driven by workloads on the
controllers that managed them.
For these Random workloads, Vdbench was used to drive the workloads on the HP server against raw
volumes. The workload mixes included 100% Read, 70% Read, and 100% Write, using a block size of 4KB. All
tests used 8 threads per LUN. Tests were run against 8 LUNs (20 drives) or 8 DP-VOLs.

AMS 2500 Test Results Summary


Test 1 Results
Random Read Summary
Tables 4 and 5 are summaries of random read results with only the 4KB block size for either 8 or 32
threads per LUN. The tables for all read results are included in Appendix B. Again, these tests used block
sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% random read workload against one, two, three, or four
LUNs (one per RAID Group). Column 8 (SSD:SAS) shows the ratio of SSD to SAS performance.
Column 3 (Threads) shows the total threads in use during the test. Columns 4 and 5 show the SSD
values, and columns 6 and 7 show the matching SAS values. Note the large overall increase in IOPS (but
also in the response times) when increasing the workload from 8 to 32 threads per LUN. For these
workloads SSD drives are about 12x faster than SAS disks.
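The thread, IOPS, and response time columns in Tables 4 and 5 are tied together by Little's Law (outstanding I/Os = IOPS x response time), which makes a handy sanity check when reading these tables. A minimal check, using the 5-drive SSD rows as measured:

```python
# Little's Law: outstanding I/Os = IOPS * response time.
# Values below are the 5-drive SSD rows of Tables 4 and 5.
def outstanding_ios(iops: float, rt_msec: float) -> float:
    return iops * rt_msec / 1000.0

print(round(outstanding_ios(15_148, 0.53)))  # 8  -> matches 8 threads/LUN, 1 LUN
print(round(outstanding_ios(24_191, 1.32)))  # 32 -> matches 32 threads/LUN, 1 LUN
```

The same relationship explains why quadrupling the threads roughly tripled the response time while IOPS grew by only about 60%: the extra concurrency mostly queues once the drives approach saturation.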
Table 4. Random Read Results with 8 Threads per LUN (100% Random Read, 4KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD IOPS   SSD RT [msec]   SAS IOPS   SAS RT [msec]   SSD:SAS (X:1)
5        1      8         15,148     0.53            1,278      6.26            11.8
10       2      16        30,437     0.52            2,562      6.24            11.9
15       3      24        45,298     0.53            3,851      6.23            11.8
20       4      32        60,086     0.53            5,128      6.24            11.7

Table 5. Random Read Results with 32 Threads per LUN (100% Random Read, 4KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD IOPS   SSD RT [msec]   SAS IOPS   SAS RT [msec]   SSD:SAS (X:1)
5        1      32        24,191     1.32            1,969      16.25           12.3
10       2      64        48,501     1.32            3,952      16.19           12.3
15       3      96        72,262     1.33            5,931      16.18           12.2
20       4      128       96,125     1.33            7,893      16.21           12.2


Random Write Summary


Tables 6 and 7 are summaries of random write results with the 4KB block size for either 1 or 8 threads
per LUN. The tables for all write results are included in Appendix B. Again, these tests used block sizes of
.5, 4, 16, 64, 256 and 1024KB in a 100% random write workload against one, two, three, or four LUNs
(one per RAID Group). Column 8 (SSD:SAS) shows the ratio of SSD to SAS performance. Column 3
(Threads) shows the total threads in use during the test. Columns 4 and 5 show the SSD values, and
columns 6 and 7 show the matching SAS values. Note the large overall increase in IOPS (but also in the
response times) when increasing the workload from 1 to 8 threads per LUN. For these workloads SSD
drives are about 7x (1 thread per LUN) to 9x (8 threads per LUN) faster than SAS disks.
Table 6. Random Write Results with 1 Thread per LUN (100% Random Write, 4KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD IOPS   SSD RT [msec]   SAS IOPS   SAS RT [msec]   SSD:SAS (X:1)
5        1      1         4,498      0.20            623        1.60            7.2
10       2      2         10,198     0.21            1,253      1.60            8.1
15       3      3         12,767     0.22            1,903      1.58            6.7
20       4      4         16,687     0.22            2,533      1.58            6.6

Table 7. Random Write Results with 8 Threads per LUN (100% Random Write, 4KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD IOPS   SSD RT [msec]   SAS IOPS   SAS RT [msec]   SSD:SAS (X:1)
5        1      8         6,127      1.06            618        12.93           9.9
10       2      16        12,269     1.07            1,260      12.70           9.7
15       3      24        17,626     1.16            1,914      12.53           9.2
20       4      32        23,026     1.20            2,536      12.62           9.1

Test 2 Results
Sequential Read Summary
Tables 8 and 9 are summaries of sequential read results with only the 256KB block size for either 1 or 8
threads per LUN. The tables with all read results are included in Appendix C. Again, these tests used
block sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% sequential read workload against one, two, three,
or four LUNs (one per RAID Group).
These results show that the use of 8 threads per LUN instead of 1 thread per LUN slightly increased total
throughput, but at the cost of a large increase in response time. Note that response time is not usually a
consideration for sequential workloads, but it does illustrate the effect of overdriving the LUNs. Also note
the fairly small difference (5-15%) with 1 thread per LUN between the use of SSDs and SAS drives in this
workload. Column 8 (SSD:SAS) shows the ratio of the SSD result to the SAS result.


Table 8. Sequential Read Results with 1 Thread per LUN (100% Sequential Read, 256KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD MB/s   SSD RT [msec]   SAS MB/s   SAS RT [msec]   SSD:SAS (X:1)
5        1      1         321.2      0.8             278.7      0.9             1.2
10       2      2         623.5      0.8             576.6      0.9             1.1
15       3      3         910.7      0.8             868.0      0.9             1.0
20       4      4         1,224.6    0.8             1,171.6    0.9             1.0

Table 9. Sequential Read Results with 8 Threads per LUN (100% Sequential Read, 256KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD MB/s   SSD RT [msec]   SAS MB/s   SAS RT [msec]   SSD:SAS (X:1)
5        1      8         380.5      5.3             293.7      6.8             1.3
10       2      16        761.4      5.3             624.2      6.4             1.2
15       3      24        1,142.0    5.3             951.9      6.3             1.2
20       4      32        1,535.9    20.8            1,273.9    6.3             1.2

Sequential Write Summary


Tables 10 and 11 are summaries of sequential write results with only the 256KB block size for either 1 or
8 threads per LUN. The tables for all write results are included in Appendix C. Again, these tests used
block sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% sequential write workload against one, two, three,
or four LUNs (one per RAID Group).
These results show that the use of 8 threads per LUN instead of 1 thread per LUN slightly increased total
throughput, but at the cost of a large increase in response time. Note that response time is not usually a
consideration for sequential workloads, but it does illustrate the effect of overdriving the LUNs. Also note
the small difference (7-13%) between the use of SSDs and SAS drives in this workload. Column 8
(SSD:SAS) shows the ratio of the SSD result to the SAS result.
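The per-drive rates and the 7-13% SSD advantage quoted above fall straight out of the table values. A trivial check using the 1-thread rows of Table 10 (the row data is from the table; the script is ours, for illustration):

```python
# (drives, SSD MB/s, SAS MB/s) from Table 10: 1 thread/LUN, 256KB sequential writes.
rows = [(5, 257.9, 227.4), (10, 506.6, 454.9), (15, 716.7, 665.2), (20, 931.7, 865.8)]

for drives, ssd, sas in rows:
    advantage_pct = (ssd - sas) / sas * 100
    print(f"{drives} drives: {ssd / drives:.1f} MB/s per SSD, "
          f"SSD ahead by {advantage_pct:.0f}%")
```

This reproduces the roughly 46-52 MB/s per SSD seen in the executive summary tables and an SSD advantage shrinking from about 13% at 5 drives to about 8% at 20, as the workload becomes limited by the array rather than the drives.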
Table 10. Sequential Write Results with 1 Thread per LUN (100% Sequential Write, 256KB, RAID-5 4+1)

Drives   LUNs   Threads   SSD MB/s   SSD RT [msec]   SAS MB/s   SAS RT [msec]   SSD:SAS (X:1)
5        1      1         257.9      1.0             227.4      1.1             1.1
10       2      2         506.6      1.0             454.9      1.1             1.1
15       3      3         716.7      1.0             665.2      1.1             1.1
20       4      4         931.7      1.1             865.8      1.2             1.1


Table 11. Sequential Write Results with 8 Threads per LUN


Sequential Write, 256KB block, RAID-5 (4D+1P), 8 threads per LUN

Drives  LUNs  Threads  SSD MB/s  SSD RT[msec]  SAS MB/s  SAS RT[msec]  SSD:SAS (X:1)
  5      1      8        265.3       7.5         232.9       8.5           1.1
 10      2     16        528.7       7.5         468.2       8.4           1.1
 15      3     24        754.8       7.9         683.9       8.7           1.1
 20      4     32        965.9       8.2         889.7       8.9           1.1

Test 3 Results
Observations
There is a lot of detailed data presented in Appendix D. However, the following summary may
provide the best overall picture of small block random workloads on SSD. This summary shows the results of
the 5, 10, 15 and 20 drive tests using just the 8KB block size with the default RAID chunk size of 256KB.
8KB Block Size
As can be seen below, there is a linear increase in performance as the test scaled from 5 to 10, 15 and then
20 drives using RAID-5 (4D+1P). There was no real difference in the response times.
Chart 1. IOPS Results

[Chart: IOPS vs. random read percentage (100/75/50/25) for 5, 10, 15 and 20 SSDs at an 8KB block size]


Chart 2. Response Times

[Chart: response time (ms) vs. random read percentage (100/75/50/25) for 5, 10, 15 and 20 SSDs at an 8KB block size]

Test 4 Results
Observations
There is a lot of data presented in Appendix E for these tests. However, the following summary may
provide the best overall picture of sequential workloads on SSD and the effect of reducing the RAID chunk
size from the default 256KB down to 64KB. Normally, response time is not considered for
sequential workloads, but here it provides some interesting insight into the change in behavior between the
two RAID chunk sizes. These three summaries show the results of the 20-drive tests using block sizes of
64KB, 256KB, and 512KB with RAID chunk sizes of 64KB and 256KB (default).
64KB Block Size
As can be seen below, the different RAID chunk sizes have no effect on performance or response time.
Table 12. Sequential Results with 64KB Block Size
Sequential, R5 4D+1P, 20 SSD, 4 threads, 64KB block

Read%   256KB Chunk MB/s  RT[msec]   64KB Chunk MB/s  RT[msec]
100          971.9          0.3          972.9          0.3
 75          626.0          0.4          566.7          0.4
 50          503.9          0.5          484.1          0.5
 25          446.7          0.6          412.5          0.6
  0          714.9          0.3          755.5          0.3


Chart 3. Throughput by RAID Chunk Size

[Chart: MB/sec vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 64KB block size, 256KB vs. 64KB RAID chunk]

Chart 4. Response Times

[Chart: response time (ms) vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 64KB block size, 256KB vs. 64KB RAID chunk]

256KB Block Size


As can be seen below, the smaller 64KB RAID chunk size has a large advantage over the default 256KB
chunk except in the 100% read and 100% write cases, where the two are equal. Mixed read/write workloads
give a large performance and response time advantage to the smaller chunk size.
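The size of that advantage can be quantified directly from the Table 13 throughput figures; a minimal sketch:

```python
# Sketch: quantifying the 64KB-chunk advantage in Table 13 (256KB block size).
read_pct  = [100, 75, 50, 25, 0]
chunk_256 = [1218.8, 241.0, 117.9, 97.6, 938.9]    # MB/s, 256KB RAID chunk
chunk_64  = [1231.7, 789.4, 596.1, 486.3, 1040.0]  # MB/s, 64KB RAID chunk

for pct, c256, c64 in zip(read_pct, chunk_256, chunk_64):
    print(f"{pct:3d}% read: 64KB chunk delivers {c64 / c256:.1f}x the 256KB chunk")
```

The mixed workloads (75/50/25% read) show roughly 3x to 5x the throughput with the smaller chunk, while the pure read and pure write cases are within about 10% of each other.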


Table 13. Sequential Results with 256KB Block Size


Sequential, R5 4D+1P, 20 SSD, 4 threads, 256KB block

Read%   256KB Chunk MB/s  RT[msec]   64KB Chunk MB/s  RT[msec]
100        1,218.8          0.8        1,231.7          0.8
 75          241.0          4.2          789.4          1.3
 50          117.9          8.5          596.1          1.7
 25           97.6         10.4          486.3          2.1
  0          938.9          1.1        1,040.0          1.0

Chart 5. Throughput by RAID Chunk Size

[Chart: MB/sec vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 256KB block size, 256KB vs. 64KB RAID chunk]

Chart 6. Response Times

[Chart: response time (ms) vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 256KB block size, 256KB vs. 64KB RAID chunk]


512KB Block Size


As can be seen below, the smaller 64KB RAID chunk size again has a large advantage over the default except
in the 100% read and 100% write cases. Mixed workloads give a large performance and
response time advantage to the smaller chunk size.
Table 14. Sequential Results with 512KB Block Size
Sequential, R5 4D+1P, 20 SSD, 4 threads, 512KB block

Read%   256KB Chunk MB/s  RT[msec]   64KB Chunk MB/s  RT[msec]
100        1,257.4          1.6        1,282.2          1.6
 75          347.7          5.8          851.2          2.3
 50          189.7         10.6          631.2          3.2
 25          144.2         13.9          502.0          4.0
  0          960.3          2.1        1,032.0          1.9

Chart 7. Throughput by RAID Chunk Size

[Chart: MB/sec vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 512KB block size, 256KB vs. 64KB RAID chunk]


Chart 8. Response Times

[Chart: response time (ms) vs. sequential read percentage (100/75/50/25) for 20 SSDs at a 512KB block size, 256KB vs. 64KB RAID chunk]

Test 5 Results
These tests used a single RAID Group of either SSD or SAS drives with a single 362GB LUN. The
workload was 75% random read with an 8KB block size, with thread counts scaled from 1 to 256 as shown
below. The tables also include the percent busy rate for the single controller in use.
The SAS drive tests did not cause much CPU usage until a high thread count of 64 was reached for that
LUN. The 5-drive SSD tests showed heavy CPU usage from a thread count of 4 and up. Note that
for SSD, at the 16-thread level, the CPU was 68% busy. Yet in Test 6, the 75% read test with 16 threads
showed 58% CPU busy with 20 drives and three times the IOPS rate (28,191 versus 9,570). So it
appears that the CPU busy rates with SSDs do not track with the number of drives or the loads, and
should only be used as a rough guide relative to SAS drives.
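The thread-scaling knee is visible in the Table 15 data: IOPS rise with concurrency until a peak, then fall off as queuing overwhelms the drives. A minimal sketch that locates it from the SSD figures:

```python
# Sketch: locating the throughput knee in the Table 15 SSD thread-scaling data.
threads = [1, 2, 4, 8, 16, 32, 64, 128, 256]
iops    = [1555, 2723, 4420, 6739, 9570, 12230, 13753, 11405, 8874]

best = max(range(len(iops)), key=iops.__getitem__)
print(f"Peak: {iops[best]} IOPS at {threads[best]} threads per LUN")
# Beyond the knee, added concurrency only inflates response time.
```

For this configuration the peak falls at 64 threads per LUN; the 128- and 256-thread runs deliver less work at far higher response times.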
Table 15. SSD 5-Disk Thread Scaling
Random Read 75%, 8KB block, RAID-5 (4D+1P), 5 SSD, one 362GB LUN

Threads    IOPS    MB/s   RT[msec]  CPU Usage
   1      1,555    12.2     0.6        1%
   2      2,723    21.3     0.7       12%
   4      4,420    34.5     0.9       28%
   8      6,739    52.7     1.2       49%
  16      9,570    74.8     1.7       68%
  32     12,230    95.5     2.6       80%
  64     13,753   107.4     4.7       95%
 128     11,405    89.1    11.2      100%
 256      8,874    69.3    28.8       90%


Table 16. SAS 5-Disk Thread Scaling


Random Read 75%, 8KB block, RAID-5 (4D+1P), 5 SAS, one 362GB LUN

Threads    IOPS    MB/s   RT[msec]  CPU Usage
   1        165     1.3     6.1        1%
   2        272     2.1     7.4        1%
   4        400     3.1    10.0        1%
   8        534     4.2    15.0        1%
  16        661     5.2    24.2        1%
  32        778     6.1    41.1        3%
  64        861     6.7    74.4       82%
 128        976     7.6   131.1       94%
 256      1,008     7.9   254.0       95%

Test 6 Results (non-HDP)


Random Mixed Workloads Using 4 SSD RAID Groups, 8 133GB LUNs, and 16 Threads
Note the rapid drop in IOPS once a write element is introduced into the workload. Also note how well the
response time holds up at all write levels.
Table 17. SSD Mixed Random Workload Results with 20 Drives
Random 8KB, 20 SSD, non-HDP, R5 4D+1P, 8 x 133GB LUNs, 16 threads

Read%     IOPS    MB/s   RT[msec]  CPU Usage
100      44,674   349.0    0.7       55%
 75      28,191   220.2    1.1       58%
 50      25,714   200.9    1.2       73%
 25      26,376   206.1    1.2       92%
  0      22,452   175.4    1.4       98%

Test 7 Results (HDP)


Random Workloads Using HDP with 4 SSD RAID Groups, 8 133GB DPVOLs, and 16 Threads
These workloads used 8 DPVOLs instead of 8 LUNs as above.
Table 18. SSD Mixed Random Workload Results with 20 Drives and HDP
Random 8KB, 20 SSD, HDP, R5 4D+1P, 8 x 133GB DPVOLs, 16 threads

Read%     IOPS    MB/s   RT[msec]  CPU Usage
100      39,688   310.1    0.8       65%
 75      25,415   198.6    1.3       69%
 50      22,708   177.4    1.4       83%
 25      21,603   168.8    1.5       95%
  0      17,438   136.2    1.8       99%


Test 8 Results (non-HDP)


Sequential Workloads Using 4 SSD RAID Groups, 8 133GB LUNs, 16 Threads
The performance with 50% to 0% sequential reads stayed around 1 GB/s. The 100% read test shows
that at least a 57% cache hit rate was occurring in the server, as this result works out to 596 MB/s per path,
while 4Gbit/s FC paths top out at about 380 MB/s.
Table 19. SSD Mixed Sequential Workload Results with 20 Drives
Sequential 1024KB, 20 SSD, non-HDP, R5 4D+1P, 8 x 133GB LUNs, 16 threads

Read%     MB/s    RT[msec]  CPU Usage
100     2,386.7    13.4       33%
 75     1,470.3    21.8       46%
 50     1,088.2    29.4       57%
 25       957.7    33.4       69%
  0     1,084.2    29.5       98%

Test 9 Results (HDP)


Sequential Workloads Using HDP with 4 SSD RAID Groups, 8 133GB DPVOLs, and 16 Threads
These workloads used 8 DPVOLs instead of the 8 LUNs above. Also, the 100% read result indicates at
least a 10% cache hit rate in the server, given the average 415 MB/s per path result.
Table 20. SSD Mixed Sequential Workload Results with 20 Drives and HDP
Sequential 1024KB, 20 SSD, HDP, R5 4D+1P, 8 x 133GB DPVOLs, 16 threads

Read%     MB/s    RT[msec]  CPU Usage
100     1,661.2    19.3       24%
 75     1,129.7    28.4       38%
 50       883.3    36.3       54%
 25       836.5    38.3       71%
  0     1,096.2    29.2      100%

Test 10 Results
Random Workloads Using 4 RAID Groups, 4KB blocks and 8 133GB LUNs
This set of non-HDP tests has two parts, looking at load scaling as opposed to LUN scaling. The first part
uses up to 8 threads per LUN, but the workload throttles the overall number of dispatched
threads so as to produce a target aggregate SSD drive percent busy rate, stepping
through 10%, 50%, 70%, 80%, and 90%. The second part runs at a constant 100% load but
increases the threads per LUN from 8 through 16, 32, 64 and 128. One set of tests used 100%
random read as the workload, the second used 100% random write, and the last set used 70% random
read plus 30% sequential read as a mixed workload.
In the 100% random read tests, there was a steady performance gain as the workload increased until the
64 threads per LUN point (512 overall), when the aggregate controller busy rates likely hit 99% (this
data was not captured).
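The two-phase workload matrix described above can be sketched as a simple enumeration, with the throttled phase fixed at 8 threads per LUN and the saturation phase fixed at 100% load:

```python
# Sketch of the Test 10 workload matrix: a throttled phase stepping target
# disk-busy rates at 8 threads/LUN, then a saturation phase scaling threads/LUN
# at a constant 100% load. 8 LUNs are active in every step.
throttled = [(busy, 8) for busy in (10, 50, 70, 80, 90)]
saturated = [(100, tpl) for tpl in (8, 16, 32, 64, 128)]

for busy, tpl in throttled + saturated:
    print(f"target busy {busy:3d}% -> {tpl:3d} threads/LUN ({tpl * 8} threads total)")
```

Each of the ten steps corresponds to one row in Tables 21 through 23.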


Table 21. Random Read Results


100% Random Read 4KB, non-HDP, 20 SSD, R5 4D+1P, 8 x 133GB LUNs

Disk % Busy  Threads/LUN    IOPS     MB/s   RT[msec]
   10%           8          6,197    24.2     0.7
   50%           8         30,608   119.6     0.6
   70%           8         42,794   167.2     0.6
   80%           8         48,891   191.0     0.5
   90%           8         55,090   215.2     0.5
  100%           8         60,424   236.0     0.5
  100%          16         88,981   347.6     0.7
  100%          32        111,285   434.7     1.1
  100%          64        125,238   489.2     1.6
  100%         128        123,714   483.3     2.6

In the 100% random write tests, there was a steady performance gain as the workload increased until the
100% busy test using 8 threads per LUN (64 overall), at which point the ability of the SSD drives to
accept writes hit its limit.
Table 22. Random Write Results
100% Random Write 4KB, non-HDP, 20 SSD, R5 4D+1P, 8 x 133GB LUNs

Disk % Busy  Threads/LUN    IOPS     MB/s   RT[msec]
   10%           8          2,599    10.2     0.2
   50%           8         12,995    50.8     0.3
   70%           8         18,100    70.7     0.4
   80%           8         20,694    80.8     0.6
   90%           8         23,318    91.1     0.7
  100%           8         25,623   100.1     1.2
  100%          16         23,398    91.4     2.7
  100%          32         21,337    83.4     6.0
  100%          64         16,407    64.1    15.6
  100%         128         11,480    44.8    44.6

In the 70/30% tests, there was a steady performance gain as the workload increased until the 100% busy
test using 32 threads per LUN (256 overall), at which point the ability of the SSD drives to service the
combined random and sequential reads hit its limit.


Table 23. Mixed Random and Sequential Results


70% Random Read, 30% Sequential Read 4KB, non-HDP, 20 SSD, R5 4D+1P, 8 x 133GB LUNs

IOPS/Max IOPS  Threads/LUN    IOPS     MB/s   RT[msec]
    10%            8          2,900    11.3     0.5
    50%            8         14,508    56.7     0.7
    70%            8         20,321    79.4     0.9
    80%            8         23,227    90.7     1.0
    90%            8         26,108   102.0     1.0
   100%            8         28,523   111.4     1.1
   100%           16         40,252   157.2     1.6
   100%           32         51,715   202.0     2.5
   100%           64         32,438   126.7     7.9
   100%          128         26,495   103.5    19.3

Test 11 Results (HDP)


Random Workloads Using HDP with 4 RAID Groups, 4KB Blocks and 8 133GB DPVOLs
This set of tests is the same as above but uses 8 HDP DPVOLs rather than 8 standard LUNs (2 per RAID
Group). The four RAID Groups were placed in one HDP Pool.
In the 100% random read tests, there was a steady performance gain as the workload increased until the
64 threads per LUN point (512 overall), when the aggregate controller busy rates hit 99%.
Table 24. Random Read Results with HDP
100% Random Read 4KB, HDP, 20 SSD, R5 4D+1P, 8 x 133GB DPVOLs

Disk % Busy  Threads/LUN    IOPS     MB/s   RT[msec]  % CPU Usage
   10%           8          5,407    21.1     0.8         1
   50%           8         26,699   104.3     0.6        41
   70%           8         37,302   145.7     0.6        60
   80%           8         42,583   166.3     0.6        68
   90%           8         47,990   187.5     0.6        75
  100%           8         53,002   207.0     0.6        81
  100%          16         74,747   292.0     0.8        93
  100%          32         88,647   346.3     1.4        97
  100%          64         96,295   376.2     2.6        99
  100%         128         96,096   375.4     5.2        99

In the 100% random write tests, there was a steady performance gain as the workload increased until the
performance knee at the 100% busy test using 8 threads per LUN (64 overall), when the ability
of the SSD drives to accept writes hit its limit.


Table 25. Random Write Results with HDP


100% Random Write 4KB, HDP, 20 SSD, R5 4D+1P, 8 x 133GB DPVOLs

IOPS/Max IOPS  Threads/LUN    IOPS     MB/s   RT[msec]
    10%            8          1,894     7.4     0.6
    50%            8          9,094    35.5     0.6
    70%            8         12,822    50.1     0.7
    80%            8         14,635    57.2     0.8
    90%            8         16,437    64.2     0.9
   100%            8         18,256    71.3     1.1
   100%           16         18,648    72.8     3.4
   100%           32         15,130    59.1     8.5
   100%           64         18,868    73.7    13.6
   100%          128         17,179    67.1    29.9

In the 70/30% tests, there was a steady performance gain as the workload increased until the 100% busy
test using 32 threads per LUN (256 overall), at which point the ability of the SSD drives to service the
combined random and sequential reads hit its limit.
Table 26. Mixed Random and Sequential Results with HDP
70% Random Read, 30% Sequential Read 4KB, HDP, 20 SSD, R5 4D+1P, 8 x 133GB DPVOLs

IOPS/Max IOPS  Threads/LUN    IOPS     MB/s   RT[msec]
    10%            8          2,701    10.6     0.9
    50%            8         13,513    52.8     1.0
    70%            8         18,916    73.9     1.0
    80%            8         21,495    84.0     1.1
    90%            8         24,211    94.6     1.1
   100%            8         26,631   104.0     1.2
   100%           16         36,132   141.1     1.8
   100%           32         42,615   166.5     3.0
   100%           64         41,799   163.3     6.1
   100%          128         31,613   123.5    16.1

This table summarizes the performance knees for each test as a comparison between non-HDP and
HDP. In general there was a performance advantage of 19% to 30% for the non-HDP configuration.
However, these tests created uniform workloads on all 8 LUNs or DPVOLs, while HDP is primarily intended
to smooth out RAID Group hot spots caused by fairly skewed host loads, which are the usual case on
production systems.

Conclusions
In closing, a few general observations can be made when evaluating the performance of SSD drives on
the AMS 2500.
For random workloads, one can see that as the write component was introduced, there was a clear fall-off
in performance, due to the large disparity between the read and write performance of SSD technology in general. As
expected, the difference on random workloads between equal numbers of SSD drives and SAS drives
was also very large. However, the array can only support a small number of SSD drives, and hence a small
total usable capacity.
For large block sequential workloads, there was a fairly small advantage for the SSD drives. Given their
high cost and limited capacity, SSD drives should not be used instead of SAS for predominantly
sequential workloads.
Given the relatively small SSD capacity, the use of HDP on an SSD pool would likely not be the
preferred approach. Since the SSD capacity is small, the administrator will have to take steps to isolate
the most active workloads onto the smallest storage footprint. SSD also appears not to suffer from the
traditional hotspot degradation problem until it is at the outer edges of its performance capabilities.
Therefore, deploying SSD would seem unable to take advantage of HDP's benefit trifecta: space
savings, ease of provisioning, and avoidance of hot spots.
The other issue to consider is the rate at which SSDs consume the internal bandwidth of the array. A
good rule of thumb is that each SSD drive uses up array bandwidth at a ratio of 12-to-1 relative to
SAS drives in 4KB-block random read environments. For random writes, this ratio varies considerably with
block size, with a 4KB block showing about a 9-to-1 ratio of SSD drives to SAS drives. These results
suggest that 30 SSD drives displace 360 SAS drives in mostly random read environments, or 270
SAS drives for random writes. One cannot, for example, configure an array with 30 SSDs and 300 SAS
drives and expect both types to be driven hard simultaneously.
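The rule of thumb above reduces to a simple multiplication; a minimal sketch (the ratio table and function name are illustrative, not part of any Hitachi tooling):

```python
# Sketch of the closing rule of thumb: each SSD consumes array bandwidth
# equivalent to ~12 SAS drives for 4KB random reads and ~9 for 4KB random writes.
SSD_TO_SAS_BANDWIDTH = {"random_read_4k": 12, "random_write_4k": 9}

def displaced_sas(ssd_count: int, workload: str) -> int:
    """SAS drives whose array bandwidth `ssd_count` SSDs would consume."""
    return ssd_count * SSD_TO_SAS_BANDWIDTH[workload]

print(displaced_sas(30, "random_read_4k"))   # 360, as stated above
print(displaced_sas(30, "random_write_4k"))  # 270
```

The point of the closing caution follows directly: a 30-SSD configuration already consumes the internal bandwidth of a 360-drive SAS layout, leaving little headroom for additional SAS spindles driven at full load.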


APPENDIX A. Test Configuration Details


Test information
Table 1: Test Details

Test Period    Nov-Dec 2009
Report Date    February 2010
Location       RSD Japan
Tester         Yusuke Nishihara

Host Configuration
Table 2: Test Server Configuration

Server            HP DL585 G2
Operating System  Windows 2003 Server SP2
CPU               8 x 3GHz Opteron 8222se
Memory            16GB RAM
HBA               2 x QLE2462 HBAs

Storage Configuration
Table 3: AMS 2500 Configuration

Storage   Microcode  Host paths   Total Cache Size  License keys enabled?  Load Balancing Enabled
AMS 2500  0846/BH    14 4Gbit/s   16GB              No                     Yes

Type of Disk  RAID Level  RAID Configuration  Disk Size  RPM     # of Disks  RAID Groups  # of LUNs  LUN Sizes       Chunk Size  Spares
SAS           RAID-5      4D+1P               146GB      15,000  20          4            4, 8       8, 133, 362GB   256KB
SSD           RAID-5      4D+1P               200GB              20          4            4, 8       8, 133, 362GB   256KB


APPENDIX B. Test-1 Full Results


5 Drives

Random Read, SSD, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        22,840     11.2     0.35      1739%
4          15,148     59.2     0.53      1185%
16          8,512    133.0     0.94       728%
64          3,958    247.4     2.02       442%
256         1,307    326.7     6.12       266%
1024          375    375.2    21.32       213%

Random Read, SAS, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         1,313      0.6     6.09       5.7%
4           1,278      5.0     6.26       8.4%
16          1,169     18.3     6.84      13.7%
64            896     56.0     8.93      22.6%
256           490    122.6    16.31      37.5%
1024          176    176.4    45.34      47.0%

Random Write, SSD, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5         5,058      2.5     0.20       762%
4           4,498     17.6     0.22       722%
16          3,209     50.1     0.31       618%
64          1,174     73.4     0.85       383%
256           322     80.5     3.10       256%
1024           19     18.9    53.03        60%

Random Write, SAS, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5           664      0.3     1.51      13.1%
4             623      2.4     1.60      13.9%
16            519      8.1     1.92      16.2%
64            307     19.2     3.26      26.1%
256           126     31.5     7.93      39.1%
1024           32     31.5    31.73     167.1%

Random Read, SSD, 5 drives (R5 4+1 x1), 32 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        36,995     18.1     0.86      1806%
4          24,191     94.5     1.32      1228%
16         12,347    192.9     2.59       698%
64          4,757    297.3     6.73       369%
256         1,430    357.5    22.38       229%
1024          388    387.8    82.52       202%

Random Read, SAS, 5 drives (R5 4+1 x1), 32 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         2,048      1.0    15.62       5.5%
4           1,969      7.7    16.25       8.1%
16          1,769     27.6    18.09      14.3%
64          1,289     80.6    24.82      27.1%
256           626    156.4    51.13      43.8%
1024          192    192.2   166.49      49.6%

Random Write, SSD, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5         7,553      3.7     1.06      1137%
4           6,127     23.9     1.31       992%
16          3,397     53.1     2.35       652%
64          1,191     74.4     6.72       395%
256           297     74.3    26.92       239%
1024           99     98.8    80.95       154%

Random Write, SAS, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5           665      0.3    12.02       8.8%
4             618      2.4    12.93      10.1%
16            521      8.1    15.36      15.3%
64            301     18.8    26.54      25.3%
256           124     31.1    64.25      41.9%
1024           64     64.1   124.70      64.8%


10 Drives

Random Read, SSD, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        45,910     22.4     0.35      1739%
4          30,437    118.9     0.52      1188%
16         16,951    264.9     0.94       721%
64          7,922    495.1     2.02       439%
256         2,612    652.9     6.13       266%
1024          751    750.7    21.31       212%

Random Read, SAS, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         2,640      1.3     6.06       5.8%
4           2,562     10.0     6.24       8.4%
16          2,351     36.7     6.81      13.9%
64          1,803    112.7     8.87      22.8%
256           982    245.4    16.29      37.6%
1024          354    353.6    45.24      47.1%

Random Read, SSD, 10 drives (R5 4+1 x2), 64 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        74,256     36.3     0.86      1810%
4          48,501    189.5     1.32      1227%
16         24,732    386.4     2.59       695%
64          9,521    595.1     6.72       367%
256         2,858    714.6    22.39       227%
1024          774    774.3    82.65       201%

Random Read, SAS, 10 drives (R5 4+1 x2), 64 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         4,103      2.0    15.60       5.5%
4           3,952     15.4    16.19       8.1%
16          3,557     55.6    17.99      14.4%
64          2,591    161.9    24.70      27.2%
256         1,261    315.1    50.75      44.1%
1024          385    385.0   166.18      49.7%

Random Write, SSD, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        10,198      5.0     0.21       758%
4           9,098     35.5     0.22       726%
16          6,405    100.1     0.31       606%
64          2,358    147.4     0.85       390%
256           654    163.6     3.06       256%
1024           38     37.8    52.77        56%

Random Write, SAS, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         1,346      0.7     1.49      13.2%
4           1,253      4.9     1.60      13.8%
16          1,057     16.5     1.89      16.5%
64            604     37.8     3.31      25.6%
256           256     64.0     7.80      39.1%
1024           67     67.3    29.70     178.1%

Random Write, SSD, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        15,145      7.4     1.07      1115%
4          12,269     47.9     1.30       974%
16          6,802    106.3     2.35       642%
64          2,391    149.5     6.69       399%
256           600    150.1    26.64       243%
1024          200    199.8    80.07       163%

Random Write, SAS, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         1,358      0.7    11.78       9.0%
4           1,260      4.9    12.70      10.3%
16          1,059     16.5    15.12      15.6%
64            599     37.4    26.71      25.0%
256           247     61.8    64.70      41.2%
1024          122    122.4   130.56      61.3%

15 Drives

Random Read, SSD, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        67,592      33.0      0.35      1704%
4          45,298     176.9      0.53      1176%
16         25,293     395.2      0.95       716%
64         11,879     742.4      2.02       438%
256         3,903     975.7      6.15       264%
1024        1,123   1,122.5     21.38       211%

Random Read, SAS, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         3,968       1.9      6.05       5.9%
4           3,851      15.0      6.23       8.5%
16          3,532      55.2      6.79      14.0%
64          2,711     169.4      8.85      22.8%
256         1,476     369.0     16.26      37.8%
1024          532     532.4     45.07      47.4%

Random Read, SSD, 15 drives (R5 4+1 x3), 96 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5       108,783      53.1      0.88      1769%
4          72,262     282.3      1.33      1218%
16         37,095     579.6      2.59       695%
64         14,213     888.3      6.75       366%
256         4,249   1,062.2     22.59       225%
1024        1,161   1,161.4     82.66       201%

Random Read, SAS, 15 drives (R5 4+1 x3), 96 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         6,149       3.0     15.61       5.7%
4           5,931      23.2     16.18       8.2%
16          5,339      83.4     17.98      14.4%
64          3,884     242.8     24.71      27.3%
256         1,891     472.7     50.76      44.5%
1024          578     577.7    166.14      49.7%

Random Write, SSD, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        14,490       7.1      0.22       710%
4          12,767      49.9      0.23       671%
16          9,327     145.7      0.32       581%
64          3,470     216.9      0.86       381%
256           909     227.1      3.30       238%
1024           57      57.3     51.69        58%

Random Write, SAS, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         2,040       1.0      1.47      14.1%
4           1,903       7.4      1.58      14.9%
16          1,605      25.1      1.87      17.2%
64            910      56.9      3.30      26.2%
256           381      95.3      7.85      42.0%
1024           99      99.1     29.84     173.0%

Random Write, SSD, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        21,311      10.4      1.16      1041%
4          17,626      68.9      1.36       921%
16          9,612     150.2      2.49       586%
64          3,514     219.6      6.83       388%
256           863     215.6     27.80       229%
1024          293     293.5     81.76       151%

Random Write, SAS, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         2,046       1.0     11.72       9.6%
4           1,914       7.5     12.53      10.9%
16          1,640      25.6     14.62      17.1%
64            907      56.7     26.47      25.8%
256           376      94.0     63.80      43.6%
1024          194     193.9    123.62      66.1%

20 Drives

Random Read, SSD, 20 drives (R5 4+1 x4), 32 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        89,877      43.9      0.36      1700%
4          60,086     234.7      0.53      1172%
16         33,819     528.4      0.95       717%
64         15,854     990.9      2.02       439%
256         5,195   1,298.6      6.16       264%
1024        1,495   1,494.9     21.40       211%

Random Read, SAS, 20 drives (R5 4+1 x4), 32 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         5,286       2.6      6.05       5.9%
4           5,128      20.0      6.24       8.5%
16          4,714      73.7      6.79      13.9%
64          3,612     225.8      8.86      22.8%
256         1,966     491.6     16.27      37.9%
1024          709     709.4     45.10      47.5%

Random Read, SSD, 20 drives (R5 4+1 x4), 128 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5       143,314      70.0      1.00      1752%
4          96,125     375.5      1.33      1218%
16         49,479     773.1      2.59       696%
64         18,920   1,182.5      6.76       365%
256         5,644   1,411.0     22.68       224%
1024        1,548   1,548.5     82.66       201%

Random Read, SAS, 20 drives (R5 4+1 x4), 128 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         8,182       4.0     15.64       5.7%
4           7,893      30.8     16.21       8.2%
16          7,109     111.1     18.00      14.4%
64          5,185     324.0     24.69      27.4%
256         2,524     630.9     50.71      44.7%
1024          770     769.7    166.27      49.7%

Random Write, SSD, 20 drives (R5 4+1 x4), 4 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        18,833       9.2      0.22       691%
4          16,687      65.2      0.24       659%
16         12,209     190.8      0.33       573%
64          4,574     285.9      0.87       379%
256         1,187     296.6      3.37       234%
1024           74      73.9     54.07        59%

Random Write, SAS, 20 drives (R5 4+1 x4), 4 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         2,724       1.3      1.47      14.5%
4           2,533       9.9      1.58      15.2%
16          2,130      33.3      1.88      17.4%
64          1,208      75.5      3.31      26.4%
256           508     126.9      7.87      42.8%
1024          124     124.5     31.79     168.4%

Random Write, SSD, 20 drives (R5 4+1 x4), 32 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        27,573      13.5      1.20      1008%
4          23,026      89.9      1.39       908%
16         12,566     196.3      2.55       584%
64          4,643     290.2      6.89       388%
256         1,152     288.0     27.78       230%
1024          389     389.2     82.11       145%

Random Write, SAS, 20 drives (R5 4+1 x4), 32 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5         2,735       1.3     11.70       9.9%
4           2,536       9.9     12.62      11.0%
16          2,152      33.6     14.86      17.1%
64          1,197      74.8     26.73      25.8%
256           502     125.4     63.79      43.5%
1024          268     267.9    119.42      68.8%

APPENDIX C. Test-2 Full Results


5 Drives

Sequential Read, SSD, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        14,472      7.1     0.07      119%
4          11,276     44.0     0.09      103%
16          8,693    135.8     0.11      111%
64          3,820    238.7     0.26      108%
256         1,285    321.2     0.78      115%
1024          337    336.6     2.97      116%

Sequential Read, SAS, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        12,195      6.0     0.08      84.3%
4          10,979     42.9     0.09      97.4%
16          7,805    122.0     0.13      89.8%
64          3,551    221.9     0.28      93.0%
256         1,115    278.7     0.90      86.8%
1024          290    290.2     3.44      86.2%

Sequential Write, SSD, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5         9,899      4.8     0.10      109%
4           8,202     32.0     0.12       93%
16          6,293     98.3     0.16      105%
64          2,889    180.6     0.35       96%
256         1,032    257.9     0.97      113%
1024          266    266.2     3.76      113%

Sequential Write, SAS, 5 drives (R5 4+1 x1), 1 thread

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5         9,062      4.4     0.11      91.5%
4           8,788     34.3     0.11     107.1%
16          5,978     93.4     0.17      95.0%
64          3,006    187.9     0.33     104.0%
256           910    227.4     1.10      88.2%
1024          235    235.5     4.25      88.4%

Sequential Read, SSD, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        74,769     36.5     0.11      114%
4          50,824    198.5     0.16      108%
16         19,447    303.9     0.41      109%
64          5,612    350.7     1.43      119%
256         1,522    380.5     5.26      130%
1024          385    384.8    20.79      130%

Sequential Read, SAS, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        65,518     32.0     0.12      87.6%
4          47,094    184.0     0.17      92.7%
16         17,874    279.3     0.45      91.9%
64          4,709    294.3     1.70      83.9%
256         1,175    293.7     6.81      77.2%
1024          295    294.9    27.13      76.7%

Sequential Write, SSD, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        39,175     19.1     0.20      101%
4          31,220    122.0     0.26      100%
16         16,887    263.9     0.47      117%
64          4,252    265.7     1.88      113%
256         1,061    265.3     7.51      114%
1024          266    265.5    30.13      114%

Sequential Write, SAS, 5 drives (R5 4+1 x1), 8 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        38,688     18.9     0.21      98.8%
4          31,314    122.3     0.25     100.3%
16         14,426    225.4     0.55      85.4%
64          3,749    234.3     2.09      88.2%
256           931    232.9     8.45      87.8%
1024          233    233.0    34.31      87.8%

10 Drives

Sequential Read, SSD, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        25,882     12.6     0.08      105%
4          23,041     90.0     0.09      106%
16         16,213    253.3     0.12      105%
64          7,706    481.6     0.26      106%
256         2,494    623.5     0.80      108%
1024          678    677.6     2.95      110%

Sequential Read, SAS, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        24,719     12.1     0.08      95.5%
4          21,808     85.2     0.09      94.7%
16         15,441    241.3     0.13      95.2%
64          7,291    455.7     0.27      94.6%
256         2,306    576.6     0.87      92.5%
1024          618    617.6     3.24      91.1%

Sequential Write, SSD, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        18,634      9.1     0.11      104%
4          16,735     65.4     0.12      104%
16         12,060    188.4     0.17      103%
64          5,885    367.8     0.34      102%
256         2,026    506.6     0.99      111%
1024          530    529.9     3.77      112%

Sequential Write, SAS, 10 drives (R5 4+1 x2), 2 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        17,965      8.8     0.11      96.4%
4          16,153     63.1     0.12      96.5%
16         11,669    182.3     0.17      96.8%
64          5,752    359.5     0.35      97.7%
256         1,819    454.9     1.10      89.8%
1024          474    474.0     4.22      89.5%

Sequential Read, SSD, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5       137,228     67.0     0.12      115%
4         101,538    396.6     0.16      107%
16         38,907    607.9     0.41      103%
64         11,240    702.5     1.42      113%
256         3,046    761.4     5.25      122%
1024          770    770.1    20.78      123%

Sequential Read, SAS, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5       119,421     58.3     0.13      87.0%
4          94,643    369.7     0.17      93.2%
16         37,625    587.9     0.42      96.7%
64          9,986    624.1     1.60      88.8%
256         2,497    624.2     6.41      82.0%
1024          626    625.6    25.58      81.2%

Sequential Write, SSD, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SSD/SAS IOPS
0.5        77,896     38.0     0.20      102%
4          62,248    243.2     0.26      100%
16         33,896    529.6     0.47      117%
64          8,468    529.3     1.89      112%
256         2,115    528.7     7.55      113%
1024          529    529.2    30.23      113%

Sequential Write, SAS, 10 drives (R5 4+1 x2), 16 threads

Block(KB)    IOPS     MB/s   RT[msec]  SAS/SSD IOPS
0.5        76,372     37.3     0.21      98.0%
4          61,942    242.0     0.26      99.5%
16         28,970    452.7     0.55      85.5%
64          7,587    474.2     2.10      89.6%
256         1,873    468.2     8.38      88.5%
1024          468    467.7    34.20      88.4%

15 Drives

Sequential Read, SSD, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        36,057      17.6      0.08       98%
4          33,238     129.8      0.09      100%
16         23,063     360.4      0.13      100%
64         11,409     713.1      0.26      103%
256         3,643     910.7      0.82      105%
1024        1,010   1,010.1      2.97      107%

Sequential Read, SAS, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5        36,911      18.0      0.08     102.4%
4          33,280     130.0      0.09     100.1%
16         23,009     359.5      0.13      99.8%
64         11,069     691.8      0.27      97.0%
256         3,472     868.0      0.86      95.3%
1024          940     940.3      3.19      93.1%

Sequential Write, SSD, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5        26,409      12.9      0.11       99%
4          24,276      94.8      0.12      101%
16         17,075     266.8      0.18      100%
64          8,529     533.0      0.35      101%
256         2,867     716.7      1.05      108%
1024          757     756.5      3.96      109%

Sequential Write, SAS, 15 drives (R5 4+1 x3), 3 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5        26,613      13.0      0.11     100.8%
4          24,145      94.3      0.12      99.5%
16         17,005     265.7      0.18      99.6%
64          8,477     529.8      0.35      99.4%
256         2,661     665.2      1.13      92.8%
1024          694     693.8      4.32      91.7%

Sequential Read, SSD, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5       189,013      92.3      0.13      104%
4         149,562     584.2      0.16      109%
16         58,222     909.7      0.41      102%
64         16,856   1,053.5      1.42      111%
256         4,568   1,142.0      5.25      120%
1024        1,152   1,152.5     20.82      121%

Sequential Read, SAS, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5       182,233      89.0      0.13      96.4%
4         137,689     537.8      0.17      92.1%
16         57,147     892.9      0.42      98.2%
64         15,206     950.4      1.58      90.2%
256         3,808     951.9      6.30      83.4%
1024          952     951.8     25.21      82.6%

Sequential Write, SSD, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SSD/SAS IOPS
0.5       113,652      55.5      0.21       99%
4          90,075     351.9      0.27      101%
16         48,347     755.4      0.50      114%
64         12,125     757.8      1.97      110%
256         3,019     754.8      7.90      110%
1024          756     756.2     31.74      110%

Sequential Write, SAS, 15 drives (R5 4+1 x3), 24 threads

Block(KB)    IOPS      MB/s    RT[msec]  SAS/SSD IOPS
0.5       114,751      56.0      0.21     101.0%
4          88,794     346.8      0.27      98.6%
16         42,588     665.4      0.56      88.1%
64         11,041     690.1      2.16      91.1%
256         2,736     683.9      8.71      90.6%
1024          686     686.3     34.96      90.8%

20 Drives

Sequential Read, 20 SSD, RAID-5 (4D+1P) x4, 4 threads
Block(KB)   IOPS      MB/s      RT[msec]   SSD/SAS IOPS
0.5         49,664    24.2      0.08       95%
4           43,282    169.1     0.09       96%
16          31,090    485.8     0.13       98%
64          15,062    941.4     0.26       102%
256         4,898     1,224.6   0.82       105%
1024        1,337     1,336.9   2.99       106%

Sequential Read, 20 SAS, RAID-5 (4D+1P) x4, 4 threads
Block(KB)   IOPS      MB/s      RT[msec]   SAS/SSD IOPS
0.5         52,399    25.6      0.08       105.5%
4           45,233    176.7     0.09       104.5%
16          31,769    496.4     0.13       102.2%
64          14,758    922.4     0.27       98.0%
256         4,686     1,171.6   0.85       95.7%
1024        1,259     1,259.0   3.18       94.2%

Sequential Write, 20 SSD, RAID-5 (4D+1P) x4, 4 threads
Block(KB)   IOPS      MB/s      RT[msec]   SSD/SAS IOPS
0.5         35,735    17.4      0.11       102%
4           31,769    124.1     0.13       102%
16          22,720    355.0     0.18       101%
64          11,124    695.3     0.36       101%
256         3,727     931.7     1.07       108%
1024        968       968.2     4.13       107%

Sequential Write, 20 SAS, RAID-5 (4D+1P) x4, 4 threads
Block(KB)   IOPS      MB/s      RT[msec]   SAS/SSD IOPS
0.5         35,146    17.2      0.11       98.4%
4           31,126    121.6     0.13       98.0%
16          22,507    351.7     0.18       99.1%
64          11,013    688.3     0.36       99.0%
256         3,463     865.8     1.15       92.9%
1024        904       904.1     4.42       93.4%

Sequential Read, 20 SSD, RAID-5 (4D+1P) x4, 32 threads
Block(KB)   IOPS      MB/s      RT[msec]   SSD/SAS IOPS
0.5         243,842   119.1     0.13       116%
4           195,664   764.3     0.16       110%
16          77,671    1,213.6   0.41       102%
64          22,479    1,404.9   1.42       110%
256         6,094     1,523.5   5.25       120%
1024        1,536     1,535.9   20.83      120%

Sequential Read, 20 SAS, RAID-5 (4D+1P) x4, 32 threads
Block(KB)   IOPS      MB/s      RT[msec]   SAS/SSD IOPS
0.5         211,105   103.1     0.15       86.6%
4           178,339   696.6     0.18       91.1%
16          76,141    1,189.7   0.42       98.0%
64          20,363    1,272.7   1.57       90.6%
256         5,096     1,273.9   6.28       83.6%
1024        1,276     1,276.1   25.08      83.1%

Sequential Write, 20 SSD, RAID-5 (4D+1P) x4, 32 threads
Block(KB)   IOPS      MB/s      RT[msec]   SSD/SAS IOPS
0.5         147,353   71.9      0.22       100%
4           116,121   453.6     0.27       101%
16          62,085    970.1     0.51       112%
64          15,512    969.5     2.06       108%
256         3,863     965.9     8.23       109%
1024        969       969.1     33.02      108%

Sequential Write, 20 SAS, RAID-5 (4D+1P) x4, 32 threads
Block(KB)   IOPS      MB/s      RT[msec]   SAS/SSD IOPS
0.5         146,653   71.6      0.22       99.5%
4           115,288   450.3     0.28       99.3%
16          55,581    868.4     0.58       89.5%
64          14,380    898.7     2.22       92.7%
256         3,559     889.7     8.91       92.1%
1024        895       895.1     35.74      92.4%


APPENDIX D. Test-3 Full Results


Random Mixed Workloads
All of the detailed test results are shown below in four sets of tables, one set per block size: 2KB, 4KB, 8KB, and 16KB.
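The IOPS figures in these tables convert to throughput by simple arithmetic (MB/s = IOPS x block size in KB / 1024), and the SSD advantage is just the ratio of the two IOPS columns. A minimal Python sketch of that arithmetic, reusing two values from the Random 4KB tables below (5 SSD vs. 5 SAS at 100% read); nothing here is newly measured:

```python
# Helper arithmetic for reading these tables. The sample figures are
# copied from the Random 4KB results (5 SSD vs 5 SAS, 100% read).

def iops_to_mbps(iops, block_kb):
    """Convert an IOPS figure to MB/s for the given block size in KB."""
    return iops * block_kb / 1024.0

def ssd_sas_ratio(ssd_iops, sas_iops):
    """SSD IOPS expressed as a multiple of the SAS IOPS."""
    return ssd_iops / sas_iops

print(round(iops_to_mbps(14_510, 4), 1))       # 56.7 MB/s from 14,510 4KB IOPS
print(round(ssd_sas_ratio(14_510, 1_362), 1))  # roughly a 10.7x SSD advantage
```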
2KB Block

Random 2KB, RAID-5 (4D+1P)
        5 SSD, 8 threads     5 SAS, 8 threads     10 SSD, 16 threads   10 SAS, 16 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     18,776   0.4         1,323    6.0         37,670   0.4         2,649    6.0
75      8,214    1.0         1,000    8.0         16,504   1.0         2,001    8.0
50      7,136    1.1         907      8.8         14,318   1.1         1,816    8.8
25      7,191    1.1         874      9.2         14,483   1.1         1,759    9.1
0       7,277    1.1         702      11.4        14,600   1.1         1,410    11.3

Random 2KB, RAID-5 (4D+1P)
        15 SSD, 24 threads   15 SAS, 24 threads   20 SSD, 32 threads   20 SAS, 32 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     55,569   0.4         3,975    6.0         73,470   0.4         5,293    6.0
75      24,515   1.0         2,996    8.0         32,349   1.0         3,985    8.0
50      21,115   1.1         2,728    8.8         27,675   1.2         3,636    8.8
25      20,810   1.1         2,646    9.1         26,996   1.2         3,538    9.0
0       20,616   1.2         2,114    11.4        26,628   1.2         2,818    11.4

4KB Block

Random 4KB, RAID-5 (4D+1P)
        5 SSD, 8 threads     5 SAS, 8 threads     10 SSD, 16 threads   10 SAS, 16 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     14,510   0.5         1,362    5.9         29,143   0.5         2,728    5.9
75      7,670    1.0         1,014    7.9         15,397   1.0         2,021    7.9
50      6,777    1.2         915      8.7         13,655   1.2         1,833    8.7
25      6,860    1.2         887      9.0         13,826   1.2         1,794    8.9
0       6,786    1.2         708      11.3        13,681   1.2         1,421    11.3

Random 4KB, RAID-5 (4D+1P)
        15 SSD, 24 threads   15 SAS, 24 threads   20 SSD, 32 threads   20 SAS, 32 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     43,329   0.5         4,098    5.9         57,539   0.6         5,451    5.9
75      22,901   1.0         3,036    7.9         30,250   1.1         4,038    7.9
50      20,118   1.2         2,736    8.8         26,512   1.2         3,648    8.8
25      19,988   1.2         2,680    9.0         26,096   1.2         3,572    9.0
0       19,635   1.2         2,131    11.3        25,486   1.3         2,842    11.3


8KB Block

Random 8KB, RAID-5 (4D+1P)
        5 SSD, 8 threads     5 SAS, 8 threads     10 SSD, 16 threads   10 SAS, 16 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     9,628    0.8         1,397    5.7         19,358   0.8         2,791    5.7
75      6,447    1.2         1,025    7.8         12,957   1.2         2,047    7.8
50      5,936    1.3         914      8.8         11,906   1.3         1,820    8.8
25      6,007    1.3         892      9.0         12,103   1.3         1,795    8.9
0       5,535    1.4         714      11.2        11,163   1.4         1,437    11.1

Random 8KB, RAID-5 (4D+1P)
        15 SSD, 24 threads   15 SAS, 24 threads   20 SSD, 32 threads   20 SAS, 32 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     28,929   0.8         4,184    5.7         38,519   0.8         5,581    5.7
75      19,377   1.2         3,071    7.8         25,614   1.2         4,081    7.8
50      17,712   1.4         2,725    8.8         23,446   1.4         3,637    8.8
25      17,837   1.3         2,696    8.9         23,557   1.4         3,589    8.9
0       16,620   1.4         2,696    11.2        22,062   1.4         2,865    11.2

16KB Block

Random 16KB, RAID-5 (4D+1P)
        5 SSD, 8 threads     5 SAS, 8 threads     10 SSD, 16 threads   10 SAS, 16 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     6,679    1.2         1,338    6.0         13,372   1.2         2,677    6.0
75      4,298    1.9         991      8.1         8,611    1.9         1,982    8.1
50      3,910    2.0         881      9.1         7,887    2.0         1,758    9.1
25      4,139    1.9         912      8.8         8,317    1.9         1,830    8.7
0       4,028    2.0         746      10.7        8,114    2.0         1,494    10.7

Random 16KB, RAID-5 (4D+1P)
        15 SSD, 24 threads   15 SAS, 24 threads   20 SSD, 32 threads   20 SAS, 32 threads
Read%   IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]    IOPS     RT[msec]
100     20,038   1.2         4,005    6.0         26,672   1.2         5,334    6.0
75      12,905   1.9         2,979    8.1         17,205   1.9         3,956    8.1
50      11,766   2.0         2,639    9.1         15,643   2.0         3,517    9.1
25      12,386   1.9         2,744    8.7         16,458   1.9         3,653    8.8
0       12,098   2.0         2,240    10.7        16,130   2.0         2,986    10.7


APPENDIX E. Test-4 Full Results


Sequential Workloads Using Default 256KB RAID Chunk
These tests used mixed sequential workloads and block sizes of 64KB, 128KB, 256KB, 512KB, and
1024KB with the default RAID formatting chunk size of 256KB. Tests were run on 5, 10, 15 and 20 drives
using 1, 2, 3, or 4 LUNs. Both SSD and SAS drive results are listed.
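With one outstanding I/O per thread, the MB/sec and RT[msec] columns in these tables are tied together by Little's law: MB/s is roughly threads x block size / response time. A quick Python sanity check against one measured row (an approximation only, since the published RT values are rounded to 0.1 ms):

```python
# Throughput implied by thread count, block size, and response time.
# With each thread keeping one I/O outstanding, Little's law gives
# MB/s ~= threads * block_MB / RT_sec.

def implied_mbps(threads, block_kb, rt_msec):
    return threads * (block_kb / 1024.0) / (rt_msec / 1000.0)

# 15 SSD, 64KB block, 100% read, 3 threads, 0.3 ms RT: about 625 MB/s,
# in the same neighborhood as the measured 718.4 MB/s (RT is rounded).
print(round(implied_mbps(3, 64, 0.3), 1))  # 625.0
```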
64KB Block

Sequential 64KB, RAID-5 (4D+1P)
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     249.3    0.2         254.7    0.2         493.2    0.3         481.4    0.3
75      161.6    0.4         91.0     0.7         320.8    0.4         212.7    0.6
50      127.4    0.5         95.7     0.7         255.1    0.5         190.1    0.7
25      113.7    0.5         90.9     0.7         228.9    0.5         185.9    0.7
0       186.6    0.3         189.7    0.3         374.1    0.3         366.2    0.3

Sequential 64KB, RAID-5 (4D+1P)
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     718.4    0.3         698.2    0.3         971.9    0.3         946.2    0.3
75      477.8    0.4         335.8    0.6         626.0    0.4         421.1    0.6
50      378.4    0.5         284.8    0.7         503.9    0.5         381.6    0.7
25      338.6    0.6         274.1    0.7         446.7    0.6         361.9    0.7
0       539.9    0.3         527.5    0.4         714.9    0.3         697.4    0.4

128KB Block

Sequential 128KB, RAID-5 (4D+1P)
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     273.2    0.5         275.6    0.5         546.4    0.5         566.8    0.4
75      171.0    0.7         146.9    0.8         342.1    0.7         288.2    0.9
50      111.2    1.2         109.7    1.2         210.8    1.2         223.2    1.1
25      57.3     2.5         48.9     3.0         93.5     3.0         127.9    2.1
0       221.2    0.6         215.9    0.6         441.7    0.6         438.8    0.6

Sequential 128KB, RAID-5 (4D+1P)
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     787.1    0.5         803.1    0.5         1055.9   0.5         1061.3   0.5
75      491.4    0.8         423.0    0.9         652.7    0.8         565.4    0.9
50      325.0    1.2         322.6    1.2         399.0    1.3         426.6    1.2
25      173.8    2.3         179.6    2.1         183.9    2.8         212.0    2.5
0       635.6    0.6         625.0    0.6         823.6    0.6         807.8    0.6


256KB Block

Sequential 256KB, RAID-5 (4D+1P)
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     313.4    0.8         319.6    0.8         626.7    0.8         604.5    0.8
75      65.3     3.9         58.4     4.4         128.9    4.0         151.9    3.4
50      32.7     7.8         31.9     7.9         68.8     7.3         69.7     7.3
25      26.5     9.8         30.8     9.0         51.0     9.9         55.4     9.4
0       255.5    1.0         229.7    1.1         509.8    1.0         457.1    1.1

Sequential 256KB, RAID-5 (4D+1P)
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     891.7    0.8         865.6    0.9         1218.8   0.8         1179.4   0.8
75      180.2    4.2         187.0    4.1         241.0    4.2         238.7    4.2
50      91.5     8.2         93.5     8.1         117.9    8.5         119.8    8.4
25      69.8     10.8        74.9     10.3        97.6     10.4        102.6    10.0
0       716.6    1.0         665.5    1.1         938.9    1.1         866.2    1.2

512KB Block

Sequential 512KB, RAID-5 (4D+1P)
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     328.5    1.5         315.2    1.6         657.1    1.5         642.1    1.6
75      97.7     5.2         91.1     5.6         192.9    5.2         185.0    5.5
50      51.4     10.1        50.9     9.9         106.2    9.5         101.8    10.0
25      38.7     13.0        38.9     13.3        78.7     12.9        86.8     12.0
0       263.1    1.9         242.8    2.1         524.4    1.9         477.0    2.1

Sequential 512KB, RAID-5 (4D+1P)
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     946.9    1.6         907.9    1.6         1257.4   1.6         1210.6   1.6
75      269.4    5.6         255.7    5.9         347.7    5.8         343.8    5.9
50      149.3    10.1        140.8    10.7        189.7    10.6        183.0    11.0
25      110.3    13.6        110.1    13.7        144.2    13.9        143.7    14.0
0       747.6    2.0         684.7    2.2         960.3    2.1         887.1    2.3


1024KB Block

Sequential 1024KB, RAID-5 (4D+1P)
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     334.7    3.0         328.3    3.0         667.8    3.0         647.8    3.1
75      241.2    4.1         198.8    5.0         481.4    4.2         404.2    4.9
50      182.9    5.5         151.2    6.6         367.2    5.4         292.5    6.8
25      149.2    6.7         120.1    8.3         299.5    6.7         243.7    8.2
0       266.1    3.8         236.7    4.2         529.8    3.8         473.9    4.2

Sequential 1024KB, RAID-5 (4D+1P)
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     942.4    3.2         916.0    3.3         1285.8   3.1         1244.8   3.2
75      706.7    4.2         596.7    5.0         938.7    4.3         782.6    5.1
50      539.8    5.6         433.4    6.9         720.9    5.5         583.0    6.9
25      443.8    6.8         361.7    8.3         586.2    6.8         478.4    8.4
0       756.4    4.0         690.8    4.3         967.5    4.1         898.6    4.4

Sequential Workloads Using Optional 64KB RAID Chunk


These tests used mixed sequential workloads and block sizes of 64KB, 128KB, 256KB, 512KB, and
1024KB with the optional RAID formatting chunk size of 64KB. Tests were run on 5, 10, 15 and 20 drives
using 1, 2, 3, or 4 LUNs. Both SSD and SAS drive results are listed.
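One way to reason about the chunk-size choice (an illustration only, not something measured in this report): in a 4D+1P group, a single sequential host I/O spans roughly ceil(block size / chunk size) chunks, so the optional 64KB chunk spreads a large block across more data drives per I/O than the default 256KB chunk does. A small Python sketch of that estimate, where the helper name and the simplified model are assumptions made for illustration:

```python
import math

# Illustrative estimate of how many RAID chunks one sequential host I/O
# touches for a given chunk size (assumed model, not a measured result).

def chunks_touched(block_kb, chunk_kb):
    return math.ceil(block_kb / chunk_kb)

print(chunks_touched(256, 256))  # 1 chunk per 256KB I/O with the default 256KB chunk
print(chunks_touched(256, 64))   # 4 chunks per 256KB I/O with the optional 64KB chunk
```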
64KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives
Sequential 64KB, RAID-5 (4D+1P), 64KB chunk
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     251.5    0.2         245.9    0.3         504.3    0.2         467.0    0.3
75      142.7    0.4         106.3    0.6         285.4    0.4         200.2    0.6
50      120.9    0.5         79.4     0.8         243.6    0.5         163.3    0.8
25      107.3    0.6         66.0     1.0         211.1    0.6         142.9    0.9
0       196.3    0.3         200.3    0.3         394.8    0.3         392.8    0.3

Sequential 64KB, RAID-5 (4D+1P), 64KB chunk
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     723.8    0.3         690.4    0.3         972.9    0.3         940.0    0.3
75      420.2    0.4         305.4    0.6         566.7    0.4         400.9    0.6
50      361.9    0.5         232.0    0.8         484.1    0.5         324.9    0.8
25      319.4    0.6         206.5    0.9         412.5    0.6         281.3    0.9
0       561.1    0.3         551.0    0.3         755.5    0.3         751.2    0.3

128KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives


Sequential 128KB, RAID-5 (4D+1P), 64KB chunk
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     286.1    0.4         281.8    0.4         569.0    0.4         521.2    0.5
75      166.5    0.7         117.2    1.1         334.8    0.7         234.0    1.1
50      136.6    0.9         82.9     1.5         265.0    0.9         167.2    1.5
25      110.0    1.1         69.4     1.8         219.5    1.1         150.0    1.7
0       239.3    0.5         248.9    0.5         473.9    0.5         434.0    0.6

Sequential 128KB, RAID-5 (4D+1P), 64KB chunk
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     824.5    0.5         792.0    0.5         1102.0   0.5         1059.7   0.5
75      495.1    0.8         332.9    1.1         663.2    0.8         451.3    1.1
50      385.0    1.0         254.4    1.5         526.2    0.9         326.7    1.5
25      325.4    1.2         222.7    1.7         437.6    1.1         298.7    1.7
0       674.2    0.6         651.8    0.6         887.0    0.6         829.7    0.6

256KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives


Sequential 256KB, RAID-5 (4D+1P), 64KB chunk
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     317.9    0.8         303.6    0.8         646.5    0.8         572.9    0.9
75      197.7    1.3         129.6    1.9         399.5    1.2         262.3    1.9
50      150.5    1.7         88.1     2.8         306.1    1.6         183.9    2.7
25      122.1    2.0         69.3     3.7         247.5    2.0         151.3    3.3
0       269.1    0.9         216.1    1.2         536.2    0.9         429.3    1.2

Sequential 256KB, RAID-5 (4D+1P), 64KB chunk
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     908.2    0.8         839.4    0.9         1231.7   0.8         1160.4   0.9
75      590.5    1.3         379.3    2.0         789.4    1.3         525.5    1.9
50      445.7    1.7         272.2    2.8         596.1    1.7         366.8    2.7
25      362.2    2.1         237.7    3.2         486.3    2.1         299.5    3.3
0       770.5    1.0         575.4    1.3         1040.0   1.0         954.9    1.0

512KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives


Sequential 512KB, RAID-5 (4D+1P), 64KB chunk
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     335.1    1.5         327.4    1.5         668.9    1.5         600.8    1.7
75      213.8    2.3         150.3    3.3         431.6    2.3         303.5    3.3
50      155.0    3.2         120.3    4.2         312.5    3.2         204.7    4.9
25      126.9    3.9         81.1     6.2         251.1    4.0         167.3    6.0
0       271.9    1.8         237.4    2.2         537.2    1.9         298.9    3.4

Sequential 512KB, RAID-5 (4D+1P), 64KB chunk
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     954.4    1.6         908.4    1.6         1282.2   1.6         1232.1   1.6
75      641.0    2.3         420.5    3.6         851.2    2.3         594.3    3.4
50      463.5    3.2         330.1    4.6         631.2    3.2         446.3    4.5
25      372.2    4.0         269.6    5.6         502.0    4.0         347.6    5.8
0       787.1    1.9         542.8    2.9         1032.0   1.9         831.8    2.4

1024KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives


Sequential 1024KB, RAID-5 (4D+1P), 64KB chunk
        5 SSD, 1 thread      5 SAS, 1 thread      10 SSD, 2 threads    10 SAS, 2 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     335.3    3.0         325.4    3.1         673.8    3.0         647.6    3.1
75      209.8    4.8         158.7    6.3         419.6    4.8         317.5    6.3
50      146.9    6.8         95.9     10.5        295.3    6.8         193.8    10.4
25      126.9    7.9         75.1     13.4        251.4    8.0         159.7    12.6
0       270.2    3.7         206.3    5.3         536.0    3.7         420.6    4.9

Sequential 1024KB, RAID-5 (4D+1P), 64KB chunk
        15 SSD, 3 threads    15 SAS, 3 threads    20 SSD, 4 threads    20 SAS, 4 threads
Read%   MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]    MB/sec   RT[msec]
100     972.9    3.1         937.1    3.2         1310.7   n/a         1247.1   3.2
75      617.7    4.9         493.6    6.1         840.6    n/a         630.7    6.4
50      433.2    6.9         292.6    10.3        582.9    n/a         410.0    9.8
25      375.0    8.0         248.1    12.1        504.1    n/a         328.3    12.2
0       815.4    3.7         547.0    5.6         1048.8   n/a         650.8    6.2

Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Asia Pacific and Americas 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Europe Headquarters Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom
Contact Information: + 44 (0) 1753 618000 www.hds.com / info.uk@hds.com
Hitachi is a registered trademark of Hitachi, Ltd., and/or its affiliates in the United States and other countries. Hitachi Data Systems is a registered trademark and
service mark of Hitachi, Ltd., in the United States and other countries.
Microsoft is a registered trademark of Microsoft Corporation.
Hitachi Data Systems has achieved Microsoft Competency in Advanced Infrastructure Solutions.
All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, express or limited, concerning any equipment or service offered or
to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems
being in effect, and that may be configuration-dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for
information on feature and product availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited warranties. To see a copy of these terms and
conditions prior to purchase or license, please go to http://www.hds.com/corporate/legal/index.html or call your local sales representatives to obtain a printed copy.
If you purchase or license the product, you are deemed to have accepted the terms and conditions.
© Hitachi Data Systems Corporation 2008. All Rights Reserved.
WHP-###-## July 2010
