October 2010
Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Confidential
Course Objectives
At the end of this presentation, the student should be able to:
F-Class
T-Class
[Diagram: InForm OS autonomic policy management. Quadrants: Self-Configuring, Self-Healing, Self-Optimizing, Self-Monitoring. Supporting features: fine-grained InForm OS, Gen3 ASIC zero detection, mesh-active design, fast RAID 5/6, utilization, manageability, performance; self-configuring provisioning replaces labor-intensive planning.]
Each InServ physical disk is initialized with data chunklets and spare chunklets.
[Diagram: a physical disk divided into data chunklets (C) and spare chunklets (SC).]
The solid-state disk chunklet group is smaller and reserved for high-value I/O.
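Chunklet usage can be inspected from the InForm CLI. A minimal sketch, assuming a CLI session on the InServ (showpd is the physical-disk listing command; the -c option summarizes chunklet usage):

    # List physical disks with their chunklet usage (used, spare, free)
    showpd -c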
Virtual Volumes
Created and exported in two commands, in 15 seconds, with no pre-planning (see the sketch below).
Sizes from 256 MB to 16 TB; RAID 0, RAID 10, RAID 50 (1:2-8), RAID 60 (2:6, 2:14); allocation warning and limit thresholds.
[Diagram: a virtual volume autonomically mapped to logical disks (LD) striped across the drive chassis; medium performance, highest resiliency, lower cost.]
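A minimal sketch of the two-command flow, assuming the InForm CLI of this era and hypothetical names (the CPG cpg_r5, volume vv01, and host esx01 are placeholders):

    # Create a 100 GB virtual volume from an existing CPG
    createvv cpg_r5 vv01 100g
    # Export it to a host as LUN 0
    createvlun vv01 0 esx01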
[Diagram: traditional monolithic array architecture, with separate channel directors, cache boards, and disk directors.]
Each physical drive is divided into chunklets, each 256 MB in size.
Drive magazines hold 4 of the same drives (FC 15K, SATA, SSD, etc.); drive magazines can be mixed inside the same drive chassis.
Drive chassis are point-to-point connected to controller nodes in the T-Series InServs, enabling chassis-level availability by default: the ability to withstand the loss of an entire drive enclosure without losing access to your data.
Nodes are added in pairs for cache redundancy. An InServ with 4 or more nodes also supports Cache Persistence, which enables maintenance windows and upgrades without performance penalties.
RAID sets are formed from chunklets on separate spindles of the same type (Fibre Channel, SATA, etc.); a 3+1 R5 set, for example, stripes the 4 members of the RAID set onto separate chunklets.
RAID sets are bound together to form logical disks, and logical disks are bound to, and serviced by, node pairs. All LDs are bound together to form a single volume, load balanced transparently across all pairs, and presented to the host across all ports and nodes. Each VV is automatically widely striped across all disk spindles, enabling a TRUE active/active configuration on a per-LUN basis, not the (default) active/passive per-LUN behavior of other architectures, and creating a massively parallel system.
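The wide striping described above can be observed from the InForm CLI. A minimal sketch, assuming a CLI session on the InServ; both commands are shown without options:

    # List logical disks, their RAID type, and owning nodes
    showld
    # List the virtual volumes built on those LDs
    showvv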
[Diagram: four drive chassis front views; each logical disk (LD) is striped across drives in all four chassis.]
Logical Disks (LDs):
- Mapped to volumes via 128 MB regions
- Mapped to drives via raidlets (sets of chunklets with a given RAID and media type)
- A mapped region can move to another LD per Adaptive Optimization policy
[Diagram: regions mapped to the raidlets of the logical disk; 2 raidlets (RAID 1) and 4 raidlets (RAID 5, 3+1), on SATA drives.]
They are the policies by which free chunklets are assembled into logical disks. They are a container for existing volumes and are used for reporting. They are the basis for service levels and our optimization products.
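If these policies refer to Common Provisioning Groups (CPGs), an assumption based on the description above, a minimal CLI sketch for creating one looks like this (the name cpg_fc_r5 is a placeholder):

    # CPG that assembles free FC chunklets into RAID 5 (3+1) logical disks
    createcpg -t r5 -ssz 4 -p -devtype FC cpg_fc_r5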
Logical Disks: intelligent combinations of chunklets for tailored cost, performance, and availability.
Physical Disks: broken into chunklets (256 MB each).
RAID 60 at 6+2 or 14+2 protects against double disk failure with the same capacity tradeoff as RAID 50.
[Diagram: a RAID set of chunklets, data (C) and parity (p).]
Set size = 6 (5+1)
Usable space = 1280 MB (5 × 256 MB)
The default R6 -ha cage set size 8 requires 4 cages per node-pair, just like the default R5 set size 4. If 8 cages are available, the layout will use one chunklet per cage.
The same rules apply to -ha mag: up to two chunklets are allowed per mag, but the system will place only one chunklet per mag if possible.
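A minimal CLI sketch of a CPG requesting cage-level availability, assuming the createcpg options discussed above (the name cpg_r6_cage is a placeholder):

    # RAID 6 (6+2) CPG that spreads each set across cages
    createcpg -t r6 -ssz 8 -ha cage cpg_r6_cage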
Step sizes: the R10 default is 256 KB, the R50 default is 128 KB, and the R60 default varies by set size.
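A sketch overriding the default step size, assuming createcpg accepts a -ss step-size option in KB (an assumption; verify against the CLI reference for the installed InForm release):

    # RAID 5 (3+1) CPG with an explicit 128 KB step size
    createcpg -t r5 -ssz 4 -ss 128 cpg_r5_ss128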
[Diagram: distributed sparing; spare chunklets are spread across the disk drives under a controller node rather than held on a dedicated spare drive.]
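The spare chunklets set aside by the system can be listed from the CLI. A minimal sketch (showspare is the standard listing command and needs no arguments):

    # List chunklets reserved as spares and any currently in use
    showspare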
Optimize Cost/Performance: sub-volume, bi-directional data optimization; application-specific vs. global thresholds; support for both Thin and Fat volumes.
Control Timing, QoS: scheduled movement; vary usage limits and tier definitions by application.
Minimize Technology Risk: existing sub-volume data movement engine.
Prevent Data Thrashing: performance data collected after cache; configurable analysis period.
No impact to users.
[Diagram: a virtual volume with sub-volume regions placed across tiers.]
One cluster of 5 hosts and 10 volumes requires 50 provisioning actions on most traditional arrays (see the sketch below)!
Error-prone
VMware clusters are dynamic resources subject to growth and frequent change
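With autonomic groups, those 50 actions collapse to a handful. A minimal sketch, assuming the host-set and volume-set CLI commands are available on the installed InForm OS release, with hypothetical names (hosts esx01..esx05, volumes vv01..vv10):

    # Group the cluster's hosts and volumes into sets
    createhostset esxcluster esx01 esx02 esx03 esx04 esx05
    createvvset vmfsvols vv01 vv02 vv03 vv04 vv05 vv06 vv07 vv08 vv09 vv10
    # One export covers every host/volume pairing
    createvlun set:vmfsvols 0 set:esxcluster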