Introduction

[Diagram: IBM server and storage integration — PowerHA, DB2, AIX / IBM i, multipathing / Easy Tier client, and POWER hardware on one side; GDPS, Media Manager, zHPF, and System z hardware on the other; all integrating with the DS8870 and Easy Tier.]

Integration highlights:
- Application-driven tier management: the application informs Easy Tier of the appropriate tier
- DB2 end-to-end I/O priorities and cooperative caching for Power AIX
- Integrated performance monitoring tools between Power i and DS8000
- Quality of Service (QoS) provided by the I/O Priority Manager (IOPM)
- Availability: HyperSwap, GDPS, Metro Mirror
- Simplified management
Ordering Information

Feature codes:
#5618: 600 GB 15,000 RPM FDE-capable HDD drive set (16 drives)
#5619: 600 GB 15,000 RPM FDE-capable CoD HDD drive set (16 drives)
#0605: Multi-thread Performance Accelerator
#0745: Multi-Target PPRC (MT-PPRC) indicator
#1835, #6358, #7025: additional R7.4 feature indicators

Note: R7.4 is not available for older DS8000 models, including the DS8100, DS8300, DS8700, and DS8800. The Release 7 code stream is for DS8870 models only.
Client benefits:
- Better performance and space savings
- Non-disruptive upgrade
- New features on this page require R7.4 microcode

FC#0605 is the Multi-thread Performance Accelerator indicator for the model 961. If ordered at the factory, the feature is activated there; if ordered via MES, it is activated by an SSR/PFE.
Solid State
- 200 GB SSD, FDE capable
- 400 GB SSD, FDE capable
- 800 GB SSD, FDE capable

Enterprise
- 146 GB / 15,000 RPM, FDE capable
- 300 GB / 15,000 RPM, FDE capable
- 600 GB / 10,000 RPM, FDE capable
- 1.2 TB / 10,000 RPM, FDE capable

Nearline
- 4 TB / 7,200 RPM, FDE capable
Feature Codes
FC#0745: 242x-961 indicator for MT-PPRC

[Diagram: Multi-Target PPRC — application I/O to H1 at Site 1, with Metro Mirror relationships from H1 to both H2 and H3 at Site 2.]

Easy Tier Generation 7
[Diagram: DB2 re-org with and without an Easy Tier assignment hint — without the hint, re-organized data can fall from SSD to HDD; with the hint passed from DB2 to storage, the data keeps its SSD placement.]
Implementation
New interface between z/OS (Media Manager) and the DS8870 Easy Tier API.
Introduces three new Media Manager functions, used by DB2:
- one returns a token representing the average heat and tier allocation for the volume track range(s)
- another has Easy Tier set the heat of the volume track range(s) to the calculated average heat
Media Manager uses these commands to query and then set the desired tier allocation.
The assignment is not permanent: normal heat assignment takes over after the lease expires, and Easy Tier manages the extents as usual from then on.
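The query/set/lease flow above can be sketched as a toy model. All names here are hypothetical illustrations: the real interface is the z/OS Media Manager / Easy Tier API inside the microcode, not a Python library.

```python
import time

class EasyTierHints:
    """Toy model of the application-hint flow: query a range's heat,
    pin it for a lease period, and let normal management resume after."""

    def __init__(self):
        self.heat = {}    # track range -> heat value
        self.leases = {}  # track range -> lease expiry (epoch seconds)

    def query_heat(self, track_range):
        # Stands in for the function returning a token that represents
        # the average heat and tier allocation for the range.
        return self.heat.get(track_range, 0.0)

    def set_heat(self, track_range, avg_heat, lease_seconds):
        # Pin the range's heat (and thus its tier placement) until the
        # lease expires; Easy Tier resumes normal management afterwards.
        self.heat[track_range] = avg_heat
        self.leases[track_range] = time.time() + lease_seconds

    def is_pinned(self, track_range):
        return time.time() < self.leases.get(track_range, 0)

# Usage: before a DB2 re-org, pin the target ranges to the source's heat
# so the re-organized data lands on the same tier.
api = EasyTierHints()
api.set_heat("vol01:0-999", avg_heat=0.8, lease_seconds=3600)
assert api.is_pinned("vol01:0-999")
```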
Managing Easy Tier: pause and resume migration
MANAGEEXTPOOL Command
-action etmonpause: Easy Tier monitoring of this storage pool is paused. All current Easy Tier migration plans are unaffected, but no new migration plans are formed.
-action etmonresume: Easy Tier monitoring of this storage pool is resumed. Any current Easy Tier migration plans are unaffected.
-action etmonreset: All Easy Tier monitoring data (history), including migration plans, is erased. All new plans are based on new monitoring data.
-action etmigpause: Easy Tier migrations of this storage pool are paused, including migrations that are required to relieve rank bandwidth performance issues. Easy Tier monitoring is unaffected by this action.
-action etmigresume: Easy Tier migrations of this storage pool are resumed. Easy Tier monitoring is unaffected by this action.
-duration time: Duration of the pause; for example, 4H would pause for 4 hours. Maximum duration is 168 hours (one week). Valid only for etmonpause and etmigpause.
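A pause/resume cycle might be driven from the DS CLI. The command name, actions, and -duration flag come from the option list above; the -dev flag, storage image ID, and pool ID formats below are assumptions following common DS CLI conventions, so check the exact syntax against the DS8870 CLI documentation.

```
dscli> manageextpool -dev IBM.2107-75XY123 -action etmigpause -duration 4H P0
dscli> manageextpool -dev IBM.2107-75XY123 -action etmigresume P0
```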
Use case: pause Easy Tier learning to avoid an Easy Tier-unfriendly workload, then resume learning later.
[Diagram: Flash, Enterprise, and Nearline drive tiers.]
[Diagram: z/OS Global Mirror (XRC) — at the primary site, the application server writes to the primary volumes (P) over local FICON channels; the System Data Mover at the secondary site reads updates over extended FICON channels and applies them to the secondary volumes (S), maintaining journal, control, and state data sets.]
Thresholds and maximum delays are set by the user for each specified volume.
Different volumes may have different pacing values.
Users must manage which workload resides on which volume.
[Diagram: typical z/OS data sets and workloads spread across volumes — system, spool, paging, catalog, DB2, TSO, logs, MQ, VSAM, batch, CICS, IMS.]
WLM importance levels:
1 highest
2 high
3 medium
4 low
5 lowest
6 discretionary (or default, when not part of a service class)

Importance Level   Pacing Level   Workload Pacing Delay   Volume Pacing Delay
1 (high)                          0.04 ms                 0.2 ms
3 (medium)                        0.2 ms                  0.2 ms
5 (low)            12             1.0 ms                  0.2 ms

In this example, without XRC Workload-Based Write Pacing all I/O to this volume would receive a 0.2 ms pacing delay. With Workload-Based Write Pacing, each I/O is paced based on WLM's assignment of the I/O's importance.
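The example above can be reduced to a small lookup. This is a toy model using the delays from the table; the mechanism is simplified, and in reality the pacing is applied by the System Data Mover based on the WLM importance assigned to each I/O.

```python
# Toy model of XRC Workload-Based Write Pacing.
WORKLOAD_DELAY_MS = {1: 0.04, 3: 0.2, 5: 1.0}  # delay per WLM importance
VOLUME_DELAY_MS = 0.2                          # flat per-volume delay

def pacing_delay_ms(importance, workload_based=True):
    """Pacing delay injected for one write I/O, in milliseconds."""
    if workload_based:
        return WORKLOAD_DELAY_MS[importance]
    return VOLUME_DELAY_MS

# A high-importance write is delayed far less under workload-based pacing.
assert pacing_delay_ms(1) < pacing_delay_ms(1, workload_based=False)
```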
IBM zHyperWrite
DB2 Log Write Acceleration

zHyperWrite is a new function for DB2, z/OS, and the DS8870, used with GDPS or TPC-R HyperSwap. It leverages the synergy of z/OS and DS8870 replication technologies to accelerate DB2 log writes. In IBM laboratory testing, zHyperWrite reduced write response times by up to 40%.
- Better able to handle workload spikes
- Improved DB2 transactional latency
- Log throughput improvement

Software requirements:
- z/OS 2.1 with the zHyperWrite function (APARs OA45662, OA45125, and OA44973)
- DB2 version 10, or DB2 version 11 with the SPE
- IBM DS8870 with R7.4
- z/OS and DB2 support planned for year-end 2014
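A back-of-envelope model shows why zHyperWrite helps: with plain synchronous Metro Mirror the primary write completes only after the data has been replicated, while zHyperWrite lets DB2 write both log copies in parallel and finish when the slower one completes. The timings below are illustrative placeholders, not measured DS8870 numbers.

```python
def metro_mirror_write_us(t_primary_us, t_replicate_us):
    # Serialized: write to primary, then replicate, then acknowledge.
    return t_primary_us + t_replicate_us

def zhyperwrite_us(t_primary_us, t_secondary_us):
    # Parallel writes to both log UCBs; done when the slower completes.
    return max(t_primary_us, t_secondary_us)

t_p, t_s = 300, 400  # microseconds, illustrative only
assert zhyperwrite_us(t_p, t_s) < metro_mirror_write_us(t_p, t_s)
```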
[Diagrams: zHyperWrite data flow — DB2 holds UCBs for both the primary and secondary log volumes and writes the log data to both in parallel, receiving an ACK from each. Data volumes continue to be replicated by Metro Mirror.]
Multi-Target PPRC

A single volume is the source for two separate relationships, for example Metro Mirror from H1 to H2 and from H1 to H3. HyperSwap capability is maintained.

After a HyperSwap to H2, data can be resynchronized with H3 through internal incremental-resync (IR) pairs: change recording on the internal IR pair lets the new relationship start without a full copy.

[Diagrams: H1 mirroring to both H2 and H3; HyperSwap to H2 followed by a resync with H3 over the internal IR pairs.]
Remote Pair FlashCopy

A new command sets or resets the use of Remote Pair FlashCopy for a PPRC pair.

[Diagram: a FlashCopy from source (S) to target (T) on H1 is mirrored so that the same source/target relationship exists on H2 and H3, linked by Metro Mirror / Global Copy.]
Asynchronous replication in Multi-Target configurations:

[Diagrams: Metro Mirror from H1 to H2 combined with Global Mirror from H1 to H3 (journal J3); cascaded Global Mirror / Global Copy to H3; and a four-site configuration adding Metro Mirror to H4.]
[Diagram: Multi-Target PPRC combined with z/OS Global Mirror — H1 with Metro Mirror relationships to H2 and H3, plus z/Global Mirror through the System Data Mover (SDM).]
Miscellaneous R7.4 functional enhancements
Global Copy can experience its own variation of a collision, even when it is used by Global Mirror to transmit data. Global Copy now uses its own implementation of a side file to reduce the impact of Global Copy collisions.
Speed: a responsive GUI is a requirement, not something that is nice to have.
Simplicity: a simplified and intuitive design can drastically reduce total cost of ownership.
Commonality: common graphics, widgets, terminology, and metaphors make managing multiple IBM storage products and software much easier to learn.
User testing
User studies and presentations were conducted with participants from the following companies:
DS8870 UX Roadmap

Release 7.0 and 7.1 (2012); Release 7.2 and 7.3 (2013); Release 7.4 (2014); Release 7.5 and a Release 7.6/8.0 preview (2015).

Milestones along the roadmap include performance monitoring, a YouTube video, settings/takeaway, a demo, logical configuration, and custom notifications.
Useful Links
DS8000 Design Wiki
Holds approved design documents, schedule and design team contact
information.
DS8000 UX Roadmap
Shows detailed roadmap going into the future. Feel free to comment!
New URLs
Storage management GUI: https://<hmc ip>
Service interface: https://<hmc ip>/service
Terminology today (many names for the same things):
- Rack / Frame
- DC-UPS, BSM, RPC
- Node / Cluster / Server / CEC / LPAR
- HMC
- I/O Bay / I/O Enclosure, I/O Enclosure Power Supply, PCIe/SPCN Card
- DA Card
- HA Card, PCIe Pass-Through Card
- Storage Enclosure, Storage Enclosure PSU, SSD Enclosure, SSD Controller Card, High Performance Flash Enclosure, FCIC Card
- DDM / Drive / SSD Module, CEC HDD

Cleaning it up, the simplified terms are:
- Frame
- UPS
- HMC
- Node
- I/O Enclosure
- DA Card
- HA Card
- Storage Enclosure
- Drive
The simplified component set: Nodes, HMC, UPSs, Storage Enclosures, I/O Enclosures, Drives, Host Adapters, and Device Adapters.
Event Reporting

[Screenshot: event log — array offline and volume inaccessible events (3/31/13), an array overdriven event (3/12/13), volume created events (3/11/13), and drive replaced / drive normal events (2/22/13).]
The event log shows a full list of events in the system, filterable by category: Logical Configuration, Authentication, Hardware, Encryption, and Easy Tier.

The audit log captures everything that can change within the system and can be offloaded. The event log is supplemental and surfaces internal changes, like hardware failure or inaccessible data, in addition to changes occurring from user actions.

Example events:
- Array MA14 I/O utilization exceeded threshold.
- Drive 1200234 state is service required. Drive 1200233 state is service required.
- Volume myvol_0000 was created. Volume myvol_0001 was created. Volume myvol_0002 was created.
- Drive 1200234 state is normal. Drive 1200234 replaced with serial number 1300861.
- Drive 1200233 state is normal. Drive 1200233 replaced with serial number 1300860.
- Array MA14 assigned to pool fb_0 (2/9/13 3:36 PM). Array MA15 assigned to pool fb_1 (2/9/13 3:35 PM). Pools created (2/9/13 3:34 PM).
Configuration Overview for DS8000

The DS8000 configuration hierarchy builds up in layers:

Array Sites → Arrays (RAID) → Ranks (FB or CKD) → Pools → LSSs / LCUs → Volumes → Volume Groups → Host Connections

[Diagrams: the hierarchy assembled one layer at a time, from array sites through arrays, ranks, pools, and LSSs to volumes, volume groups, and host connections.]

2014 IBM Corporation
The simplified GUI model reduces this to:

Arrays → Pools → LSSs → Volumes → Host Ports → Hosts
Pool Creation for DS8000

[Screenshots: the Creating Pools workflow in the new GUI.]
Migration Secondary
1. Start with an existing H1 to H2 Metro Mirror pair.
2. Using Multi-Target PPRC, establish a second Metro Mirror relationship from H1 to the new H2.
3. Remove the original H2.
Migration Primary
1. Start with an existing H1 to H2 Metro Mirror pair.
2. Install the new H1 and establish Metro Mirror to it.
3. HyperSwap to the new H1.
4. Resume H1 to H2, using Incremental Resync.
5. Terminate the relationships on the old H1 and remove it.
Global Mirror

Asynchronous replication providing out-of-region disaster recovery capability.

[Diagram: Global Mirror from H1 to H2 with journal J2.]
Multi-Target Metro Mirror and Global Mirror scenarios:

- Combined topology: H1 replicates synchronously to H2 with Metro Mirror and asynchronously to H3 with Global Mirror (journal J3).
- Planned swap: HyperSwap to H2 and move I/O there; when H1 is recovered, failback H2 to H1, HyperSwap back to H1, and failback H1 to H2.
- Failure at H1: HyperSwap to H2, Incremental Resync H2 to H3 as Global Copy, and start Global Mirror from H2. When H1 is recovered, failback H2 to H1; the Multi-Target configuration is restored.
- Recovery of H3: resume H1 to H3 as Global Copy, then convert the H1 to H3 Global Copy to Global Mirror.
- Recovery of H2: resume H1 to H2 as Global Copy, then failback as needed.

[Diagrams: each scenario step, showing the Metro Mirror, Global Mirror, and Global Copy relationships among H1, H2, H3, and journals J2/J3.]
The R7.4 performance enhancements fall into two categories, distinguished by whether or not they use the Multi-thread Performance Accelerator:
- Distribute work to increase parallelism and limit the effect of synchronization on shared data structures.
- Algorithms that cache updates to LRU lists in CPU caches without affecting the LRU algorithm.
- Multi-threading background processes to keep up with foreground I/O.
- Splitting hot locks into multiple locks to reduce lock contention.
These enhancements allow both higher overall system IOPS and reduced response times at high IOPS rates.
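The hot-lock splitting technique above can be sketched in a few lines: instead of one global lock serializing every update to a shared structure, entries are hashed across several lock stripes so unrelated updates rarely contend. This is illustrative only; the DS8870 enhancement is implemented in microcode, not Python.

```python
import threading

N_STRIPES = 8

class StripedCounter:
    """Shared counter whose lock is split into N_STRIPES stripes."""

    def __init__(self):
        self.locks = [threading.Lock() for _ in range(N_STRIPES)]
        self.counts = [0] * N_STRIPES

    def add(self, key, n=1):
        stripe = hash(key) % N_STRIPES
        with self.locks[stripe]:   # contention limited to one stripe
            self.counts[stripe] += n

    def total(self):
        return sum(self.counts)

c = StripedCounter()
for k in range(1000):
    c.add(k)
assert c.total() == 1000
```

Two updates to keys in different stripes can proceed concurrently, which is the whole point of splitting the hot lock.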
R7.4 Performance
All results are from a DS8870 Model 961 High Performance All-Flash configuration with 16 POWER7+ CPU cores, unless stated otherwise.
Performance Benchmarks

Database benchmarks simulate the I/O done by OLTP applications (similar to SPC-1): a mix of random reads and writes with cache hits and cache misses.
- DBO (Database Open): 70% read / 30% write, 4 KB I/Os, 50% read cache hit
- DB z/OS (Database System z): 75% read / 25% write, 4 KB I/Os, 72% read cache hit
- Cache Hostile: 72% read / 28% write, 4 KB I/Os, 40% read cache hit

Sequential I/O: large-block sequential reads or writes to the storage server's drives (similar to SPC-2).

Corner benchmarks perform just one type of I/O pattern; most applications combine a mix of different I/O patterns.
- Cache Hits: 4 KB random reads or writes to the storage server's cache
- Cache Read Misses: 4 KB random reads to the storage server's drives
- Write Misses: 4 KB random writes to the storage server's drives
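The cache-hit ratios above translate directly into a back-of-envelope expected read response time: weight the hit and miss latencies by the mix. The hit and miss times below are illustrative placeholders, not measured DS8870 values.

```python
def expected_read_ms(hit_ratio, t_hit_ms, t_miss_ms):
    """Expected read time for a given cache hit ratio."""
    return hit_ratio * t_hit_ms + (1.0 - hit_ratio) * t_miss_ms

# DBO has a 50% read cache hit rate; DB z/OS has 72%. A higher hit rate
# lowers the expected read time for the same hit/miss latencies.
dbo = expected_read_ms(0.50, t_hit_ms=0.2, t_miss_ms=1.0)
dbzos = expected_read_ms(0.72, t_hit_ms=0.2, t_miss_ms=1.0)
assert dbzos < dbo
```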
[Chart: DBO(70/30/50) with 8 HPFEs — response time (ms, 0 to 2) versus throughput (0 to 1000 K IOPS), comparing R7.3 against R7.4 with MTPA off and on.]

The DBO benchmark does 70% reads / 30% writes with 50% read cache hits.
MTPA = Multi-thread Performance Accelerator
[Chart: DBO(70/30/50) — response time (ms, 0 to 2) versus throughput (0 to 1000 K IOPS), R7.3 versus R7.4 with MTPA on, for 1, 2, 4, and 8 HPFEs.]

The DBO benchmark does 70% reads / 30% writes with 50% read cache hits.
MTPA = Multi-thread Performance Accelerator
[Chart: DB z/OS — response time (ms) versus throughput (0 to 800 K IOPS), including R7.4 with MTPA on.]

The DB z/OS benchmark does 75% reads / 25% writes with 72% read cache hits.
MTPA = Multi-thread Performance Accelerator
[Chart: Cache Hostile — response time (ms, 0 to 1.4) versus throughput (0 to 800 K IOPS), including R7.4 with MTPA on.]

The Cache Hostile benchmark does 72% reads / 28% writes with 40% read cache hits.
MTPA = Multi-thread Performance Accelerator
Multi-Target PPRC
Multi-Target PPRC Configuration
- 1 PPRC primary, 2 PPRC secondaries
- DS8870: primary 16-core, secondaries 8-core
- 8 host connections to 4 HAs on the primary
- 4 PPRC paths to each secondary, sharing 4 HAs
- Primary: 8 DA pairs, mix of SSDs and 10K RPM drives
- 1st secondary: 4 DA pairs, 15K RPM drives
- 2nd secondary: 2 DA pairs, 1.2 TB 10K RPM drives
[Table: workload characteristics for the online and batch workloads — read hit 92% / 92%; read/write ratio 3:1 / 2.4:1; % sequential read 23; destage rate 8.4% / 16.5%; IOPS curve 45K / 80K IOPS; transfer sizes 50 KB, 40 KB, 27 KB; throughputs 560, 480.]
New Drives
- 600 GB 15K RPM 2.5-inch drives
- 1.6 TB 2.5-inch SSDs

Thank You
2014 IBM Corporation