
Student Guide for

Introducing Hitachi Storage Architecture

TCC2640

Courseware Version 1.0


This training course is based on microcode 80-02-XX on VSP G1000 and prerelease T-Code on
VSP G200-G800.

Corporate Headquarters: 2825 Lafayette Street, Santa Clara, California 95050-2639 USA, www.HDS.com

Regional Contact Information:
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com

© Hitachi Data Systems Corporation 2015. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or
registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks, and company names are properties of their respective owners.

Contents
Introduction ................................................................................................................1
Welcome and Introductions ....................................................................................................................... 1
Course Description ................................................................................................................................... 2
Course Objectives .................................................................................................................................... 3
Course Topics .......................................................................................................................................... 4
Learning Paths ......................................................................................................................................... 5
Resources: Product Documents ................................................................................................................. 6
Collaborate and Share .............................................................................................................................. 7
Social Networking Academy's Twitter Site ............................................... 8

1. HDS Storage Architecture ................................................................................. 1-1


Module Objectives ................................................................................................................................. 1-1
Module Topics ....................................................................................................................................... 1-2
Overview .............................................................................................................................................. 1-3
HDS Storage Portfolio ................................................................................................................... 1-3
Available HDS VSP Block Storage Family Solutions and Accompanying Solutions ................................ 1-3
VSP Midrange Family Architecture ........................................................................................................... 1-4
Mid-Range Architecture Terminology .............................................................................................. 1-4
Modular Models: Each Product Name Translates Into a Parts Name .................................................. 1-4
Foundation for VSP Midrange Family .............................................................................................. 1-5
Block Model ................................................................................................................................. 1-5
VSP G400, G600, G800 DKC (CTL1 and CTL2) .............................................................................. 1-6
VSP G200 - DKC (CTL1 and CTL2) ................................................................................................. 1-6
VSP Midrange Family Physical Specifications ................................................................................... 1-7
DIMM Configurations .................................................................................................................... 1-8
Memory Content .......................................................................................................................... 1-8
Data Protection ............................................................................................................................ 1-9
VSP Midrange Family Back-End ............................................................................................................. 1-10
DB Types I ................................................................................................................................ 1-10
DB Types II ............................................................................................................................... 1-10
Drive Box Remarks ..................................................................................................................... 1-11
Drive Box ENC (Enclosure) Components ....................................................................................... 1-11
Back-End Example for VSP G400 / VSP G600 ................................................................................ 1-12
VSP G1000 Architecture ........................................................................................................................... 1-13

Enterprise Components Names and Abbreviations ............................................................................. 1-13
VSP G1000 Overview .................................................................................................................. 1-13
VSP G1000 Logic Box (DKC-0) ..................................................................................................... 1-14
VSP G1000 Specifications ............................................................................................................ 1-14
VSP G1000 Memory .................................................................................................................... 1-15
Distributed Shared DKC-Resources............................................................................................... 1-15
Memory Structure on Cache Section............................................................................................. 1-16
Other Memory Locations ............................................................................................................. 1-17
Data Saved to BKM for Shutdown ................................................................................................ 1-17
VSP G1000 Back-End ........................................................................................................................... 1-18
Racks and DKUs ......................................................................................................................... 1-18
DKU Boxes ................................................................................................................................ 1-19
DB Types I ................................................................................................................................ 1-19
SAS Switches SSWs ................................................................................................................. 1-20
DKU Overview SBX ..................................................................................................................... 1-20
Outline of SSW for SBX/UBX ........................................................................................................ 1-21
Drive Box Remarks ..................................................................................................................... 1-21
Back-End Cabling for SBX/UBX .................................................................................................... 1-22
High Performance Back-End Cabling for SBX/UBX.......................................................................... 1-23
Conceptual and Specifications Comparisons ........................................................................................... 1-24
Concept Differences ................................................................................................................... 1-24
Comparison: VSP Midrange Family to VSP G1000 .......................................................................... 1-24
Comparison: VSP Midrange to HUS 100 Family.............................................................................. 1-25
SVOS Storage Virtualization Operating System ....................................................................................... 1-26
SVOS VSP Midrange Family ......................................................................................................... 1-26
SVOS VSP G1000 ....................................................................................................................... 1-27
Software Packaging for SVOS and Other Features ......................................................................... 1-28
SVOS Packaging for Open Systems .............................................................................................. 1-28
Module Summary ................................................................................................................................ 1-29

2. Disks, Volumes and Provisioning ...................................................................... 2-1


Module Objectives ................................................................................................................................. 2-1
Supported RAID Structures and Sparing Behavior ..................................................................................... 2-2
Hitachi Supported RAID Configurations .......................................................................................... 2-2
Spare Drives ................................................................................................................................ 2-2
Sparing Behaviors ........................................................................................................................ 2-3
Logical Devices and Addressing .............................................................................................................. 2-4

Review: Modular Storage Architecture and Terms ........................................................................... 2-4
VSP Midrange and Enterprise Storage Architecture and Terms .......................................................... 2-4
Mainframe Storage Device Architecture: A Storage History Lesson .................................................... 2-5
Components of the LDEV ID .......................................................................................................... 2-5
What is an LDEV? ......................................................................................................................... 2-6
How to Use LDEV Types Basic and External .................................................................................... 2-7
How to Use LDEV Type DP ............................................................................................................ 2-8
How to Use LDEV Type Snapshot................................................................................................... 2-8
LDEV Uses by LDEV Type .............................................................................................................. 2-9
LDEV List View HUS VM Block Element Manager Example ............................................................. 2-9
LDEV List View From an HUS VM System ...................................................................................... 2-10
LDEV Ownership ................................................................................................................................. 2-11
LDEV Ownership in VSP Midrange and Enterprise .......................................................................... 2-11
LDEV Ownership on VSP G200 G800 ......................................................................................... 2-12
LDEV Virtualization .............................................................................................................................. 2-13
Types of Virtual LDEVs................................................................................................................ 2-13
Hitachi Dynamic Provisioning ....................................................................................................... 2-13
Dynamic Provisioning Pool Structure ............................................................................................ 2-14
LDEV Virtualization ..................................................................................................................... 2-15
Hitachi Dynamic Tiering .............................................................................................................. 2-16
Create Pool HUS VM Example ................................................................................................... 2-16
Volume Mapping ................................................................................................................................. 2-17
Host Group ................................................................................................................................ 2-17
LDEV Mapping ........................................................................................................................... 2-18
Volume Mapping Task Flow ...................................................................................................... 2-19
Volume Mapping Task Flow 1 ................................................................................................... 2-19
Volume Mapping Task Flow 2 ................................................................................................... 2-20
Volume Mapping Task Flow 3 ................................................................................................... 2-20
Volume Mapping Task Flow 3 continued .................................................................................... 2-21
Host Mode Options ..................................................................................................................... 2-21
Host Group HUS VM Example ................................................................................................... 2-22
Multipathing Support Hitachi Dynamic Link Manager ................................................................... 2-23
Module Summary ................................................................................................................................ 2-24

3. Storage Management Tools .............................................................................. 3-1


Module Objectives ................................................................................................................................. 3-1
Hitachi Storage Maintenance Tools ......................................................................................................... 3-2

Software Tools for Configuring Storage .......................................................................................... 3-2
Web Console/SVP Application (VSP G1000)..................................................................................... 3-3
BEM/MPC/Maintenance Utility (VSP G200 - G800) ........................................................................... 3-4
Maintenance Interfaces ................................................................................................................. 3-5
Maintenance Access ..................................................................................................................... 3-6
Hitachi Storage Management Tools ......................................................................................................... 3-8
Management Interfaces ................................................................................................................ 3-8
Hitachi Storage Navigator/BEM ...................................................................................................... 3-9
Command Line Interface (CLI/RAIDCOM) ..................................................................................... 3-10
Hitachi Command Suite Overview ......................................................................................................... 3-11
Hitachi Command Suite v8.X ....................................................................................................... 3-11
Hitachi Command Suite - Unified Management .............................................................................. 3-13
Hitachi Device Manager (HDvM) .................................................................................................. 3-15
Hitachi Device Manager - Functionality ......................................................................................... 3-16
Hitachi Tiered Storage Manager (HTSM) ....................................................................................... 3-17
Hitachi Tiered Storage Manager Overview .................................................................................... 3-18
Benefits of Tiered Storage Manager ............................................................................................. 3-19
Hitachi Replication Manager (HRpM) ............................................................................................ 3-20
Centralized Replication Management ............................................................................................ 3-21
Hitachi Performance Monitoring and Reporting Products ................................................................ 3-22
Product Positioning ..................................................................................................................... 3-23
Hitachi Tuning Manager .............................................................................................................. 3-24
Hitachi Tuning Manager Overview ................................................................................................ 3-25
Hitachi Dynamic Link Manager (HDLM) Advanced ......................................................................... 3-27
Hitachi Command Director - Central HCS Reporting and Operations ................................................ 3-27
Hitachi Command Director .......................................................................................................... 3-28
Hitachi Command Director Overview ............................................................................................ 3-29
Hitachi Command Director (HCD)................................................................................................. 3-31
Hitachi Command Director - Addresses the Following Challenges .................................................... 3-32
Hitachi Compute Systems Manager (HCSM) .................................................................................. 3-33
Hitachi Infrastructure Director .............................................................................................................. 3-34
Hitachi Infrastructure Director (HID) ............................................................................................ 3-34
Hitachi Infrastructure Director ..................................................................................................... 3-35
Hitachi Infrastructure Director GUI and Command Interfaces ...................................................... 3-36
HCS and HID Coexistence ........................................................................................................... 3-37
HCS and HID Feature-Function Matrix .......................................................................................... 3-38
Hi-Track Remote Monitoring System ..................................................................................................... 3-39

Hi-Track Overview ...................................................................................................................... 3-39
Hi-Track View Example ............................................................................................................... 3-40
Hi-Track Overview: Hi-Track Monitor Agent - Mobile App ............................................................... 3-41
Module Summary ................................................................................................................................ 3-42

4. Storage Virtualization ....................................................................................... 4-1


Module Objectives ................................................................................................................................. 4-1
Hitachi Universal Volume Manager .......................................................................................................... 4-2
Components of Virtualization of External Storage ............................................................................ 4-2
Virtualization of External Volumes (Example) .................................................................................. 4-3
Supported Storage Systems for UVM .............................................................................................. 4-3
Virtual Storage Machine ......................................................................................................................... 4-4
Virtual Storage Machine Essentials ................................................................................................. 4-4
Components of a Virtual Storage Machine....................................................................................... 4-4
Adding Resources to Virtual Storage Machines ................................................................................ 4-5
Virtual Storage Machines in HDvM ................................................................. 4-5
Use Cases for Virtual Storage Machine ........................................................................................... 4-6
Nondisruptive Migration ......................................................................................................................... 4-7
Nondisruptive Migration Use Case Preparation ................................................................................ 4-7
Nondisruptive Use Case Migration .................................................................................................. 4-8
Supported Cache Modes ............................................................................................................. 4-10
Global-Active Device ............................................................................................................................ 4-11
Purpose of Global-Active Device................................................................................................... 4-11
Components of Global-Active Device ............................................................................................ 4-11
Global-Active Device ................................................................................................................... 4-12
Differences Between VSP G1000 Global-Active Device and VSP High Availability Manager ................. 4-13
Module Summary ................................................................................................................................ 4-14

5. Replication ........................................................................................................ 5-1


Module Objectives ................................................................................................................................. 5-1
Hitachi Replication Products ................................................................................................................... 5-2
Hitachi Replication Portfolio Overview ............................................................................................ 5-2
Hitachi ShadowImage Replication ........................................................................................................... 5-3
Hitachi Thin Image ....................................................................................................................... 5-4
Hitachi TrueCopy Remote Replication ............................................................................................. 5-5
Hitachi Universal Replicator ........................................................................................................... 5-6
Hitachi Replication Manager .......................................................................................................... 5-7
Tools Used For Setting Up Replication ............................................................................................ 5-8

Tools Used For Setting Up Replication - more ................................................................................. 5-9
Requirements For All Replication Products .................................................................................... 5-10
Replication Status Flow ............................................................................................................... 5-11
Thin Provisioning Awareness..................................................................................................... 5-13
Hitachi ShadowImage Replication ................................................................................................ 5-14
Hitachi ShadowImage Replication Overview .................................................................................. 5-14
Hitachi ShadowImage Replication RAID-Protected Clones .............................................................. 5-15
Applications for ShadowImage In-System Replication .................................................................... 5-16
ShadowImage Replication Consistency Groups .............................................................................. 5-17
Internal ShadowImage Asynchronous Operation ........................................................................... 5-17
Pair Status Over Time ................................................................................................................. 5-18
Hitachi Thin Image .............................................................................................................................. 5-19
What is Hitachi Thin Image?........................................................................................................ 5-19
Hitachi Thin Image Technical Details ............................................................................................ 5-20
Hitachi Thin Image Components .................................................................................................. 5-21
Operations Flow Copy-on-Write Snapshot .................................................................................. 5-22
Operations Flow Copy-After-Write ............................................................................................. 5-23
Thin Image Copy-After-Write or Copy-on-Write Mode .................................................................... 5-24
Hitachi ShadowImage Replication Clones vs. Hitachi Thin Image Snapshots .................................... 5-25
Applications: Hitachi ShadowImage Clones vs. Hitachi Thin Image Snapshots .................................. 5-26
Hitachi TrueCopy Remote Replication .................................................................................................... 5-27
Hitachi TrueCopy Overview ......................................................................................................... 5-27
Basic Hitachi TrueCopy Replication Operation ............................................................................... 5-28
Hitachi TrueCopy Remote Replication (Synchronous) ..................................................................... 5-30
Hitachi Universal Replicator (Asynchronous)........................................................................................... 5-31
Hitachi Universal Replicator Overview ........................................................................................... 5-31
Hitachi Universal Replicator Benefits ............................................................................................ 5-31
Hitachi Universal Replicator Functions .......................................................................................... 5-32
Three-Data-Center Cascade Replication ........................................................................................ 5-32
Three-Data-Center Multi-Target Replication .................................................................................. 5-33
Four-Data-Center Multi-Target Replication .................................................................................... 5-33
Module Summary ................................................................................................................................ 5-34
Additional Training offerings from HDS .................................................................................................. 5-34

Glossary .................................................................................................................. G-1

Evaluate This Course ............................................................................................... E-1

Introduction
Welcome and Introductions

Student Introductions
Name
Position
Experience
Your expectations

Page 1

Course Description

This web-based course provides an overview of Hitachi block-oriented storage systems. The course introduces the architecture of Hitachi Virtual Storage Platform (VSP) G1000, the enterprise model, and VSP G200, G400, G600 and G800, the midrange models.

Page 2

Course Objectives

Upon completion of this course, you should be able to:


Describe Hitachi Virtual Storage Platform (VSP) G200, G400, G600 and
G800 hardware architecture
Describe the VSP G1000 hardware architecture
Discuss the licensing model for VSP enterprise and midrange program
products
Distinguish the functions and use of RAID groups, Hitachi Dynamic
Provisioning (HDP) and Hitachi Dynamic Tiering (HDT) volumes
Describe the LDEV unit of storage management
Describe principles of logical device (LDEV) ownership and how to assign
and move them

Upon completion of this course, you should be able to (continued):


Describe volume virtualization layers and provisioning mechanisms
Explain how to access management and maintenance tools
Distinguish between Hitachi Command Suite (HCS), Hitachi Infrastructure
Director (HID), Hitachi Replication Manager (HRpM) and Hitachi Tuning
Manager (HTnM)
Describe virtualization of external storage
Describe the virtual storage machine (VSM), global-active device (GAD) and
nondisruptive migration (NDM) features
Explain the differences between Hitachi replication products (Hitachi
TrueCopy, Hitachi Universal Replicator, Hitachi ShadowImage Replication,
Hitachi Thin Image)

Page 3

Course Topics

Modules

1. Hitachi Storage Architecture

2. Disks, Volumes and Provisioning

3. Storage Management Tools

4. Storage Virtualization Features

5. Replication

Page 4

Learning Paths

Are a path to professional certification

Enable career advancement

Available on:
HDS.com (for customers)
Partner Xchange (for partners)
theLoop (for employees)

Customers

Customer Learning Path (North America, Latin America, and APAC):


http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf

Customer Learning Path (EMEA): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-training.pdf

Partners

https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu

Employees

http://loop.hds.com/community/hds_academy

Please contact your local training administrator if you have any questions regarding
Learning Paths or visit your applicable website.

Page 5

Resources: Product Documents

Product documentation that provides detailed product information and future updates is now posted on hds.com in addition to the Support Portal (set the filter to Technical Resources)

There are 2 paths to these documents:
hds.com: Home > Corporate > Resource Library
Google Search

Resource Library

http://www.hds.com/corporate/resources/?WT.ac=us_inside_rm_reslib

Google Search

Two ways to do a Google search for Hitachi product documentation:

Document name

Any key words about the product you are looking for

o If the key words are covered in the product documents, Google will find the
resource

For example, if you search Google for "System Modes Options for VSP G1000",
the topic is covered in the user guide, so the document will come up on Google

Page 6

Collaborate and Share

Hitachi Data Systems Community - Academy in theLoop

Learn best practices to optimize your IT environment
Share your expertise with colleagues facing real challenges
Shorten your time to mastery
Connect and collaborate with experts from peer companies and HDS
Learn what's new in the Academy
Ask the Academy a question
Discover and share expertise
Give your feedback
Participate in forums

For Customers, Partners, Employees Hitachi Data Systems Community:

https://community.hds.com/welcome

For Employees theLoop:

http://loop.hds.com/community/hds_academy?view=overview

Page 7

Social Networking Academy's Twitter Site

Twitter site
Site URL: http://www.twitter.com/HDSAcademy

Hitachi Data Systems Academy link to Twitter:

http://www.twitter.com/HDSAcademy

Page 8
1. HDS Storage Architecture
Module Objectives

Upon completion of this module, you will be able to:


Compare HDS midrange and enterprise storage
Describe:
Hitachi Virtual Storage Platform (VSP) G200, G400, G600 and G800
architecture (midrange)
Hitachi Virtual Storage Platform G1000 architecture (enterprise)
Hitachi Storage Virtualization Operating System (SVOS)

Page 1-1

Module Topics

Overview
VSP Midrange Family Architecture
VSP G400, VSP G600, VSP G800
VSP G200
VSP Midrange Family Specification
VSP Midrange Family Memory
VSP Midrange Family Back-End
VSP G1000 Architecture
VSP G1000
VSP G1000 Specification
VSP G1000 Memory
VSP G1000 Back-End
Specifications Comparisons
Hitachi Storage Virtualization Operating System (SVOS)

Page 1-2

Overview

HDS Storage Portfolio

Figure: HDS storage portfolio chart positioning the 4040, 4060, 4080 and 4100 models and the VSP G200 through VSP G1000 by performance versus functionality/scalability, all under Hitachi Command Suite management. Focus today: VSP G200 through VSP G1000.

Available HDS VSP Block Storage Family Solutions and Accompanying Solutions

Figure: the VSP block family (G200, G400, G600, G800 and G1000) shares a common operating system (Hitachi SVOS) with a common feature set across all models and common software and management, and is fully supported in Hitachi Command Suite.

Page 1-3

VSP Midrange Family Architecture

Mid-Range Architecture Terminology

Blades
CHB - Channel Blade
DKB - Disk Blade
Memory (M)
CM - Cache Memory
LM - Local Memory
PM - Package Memory
SM - Shared Memory (control memory)
CFM - Cache Flash Memory (SSD for CM/SM-Backup)
Trays
CB - Controller Box
DB - Drive Box
HDU - Hard Disk Unit (DB logical name)
ENC - Enclosure Controller

Modular Models: Each Product Name Translates Into a Parts Name

Product Name   Parts Name   Height   Remark
VSP G200       HM800S       2U       ---------
VSP G400       HM800M2      4U       Upgradable to VSP G600 by expanding drives, cache, performance scalability
VSP G600       HM800M3      4U       ---------
VSP G800       HM800H       4U       ---------

Page 1-4

Foundation for VSP Midrange Family

Combines block storage services with thin provisioning virtualization, external storage virtualization and controller-based replication functionality
Key data
4- or 8-core Intel CPUs
Max. 512GB cache
6/12 Gb/sec SAS back-end
8/16 Gb/sec Fibre Channel front-end
10 Gb/sec iSCSI front-end
FMD, SSD, SAS, NL-SAS drives (max. 1440)

Block Model

Figure: block model data flow from front-end through back-end to the drives.

Front-end: Fibre Channel 8/16 Gb/s, iSCSI (SFP) 10 Gb/s, iSCSI (10Base-T) 1/10 Gb/s
Back-end: SAS 6/12 Gb/s
Drive boxes: 24 x SFF (SAS SFF, SSD SFF), 12 x LFF (NL-SAS LFF), 60 x SFF/LFF (SAS SFF*, SSD SFF*), 2U DBF with 12 x FMD

*With converting adaptor 2.5" to 3.5"

Page 1-5

VSP G400, G600, G800 DKC (CTL1 and CTL2)

VSP G400, VSP G600, VSP G800:
Height 4U
2 CPUs/CTL
8 DIMM sockets/CTL
8 slots for FE/BE per CTL
2 LAN ports/CTL (public port and maintenance port)
CFM for backup
NiMH batteries for backup
12V power supply units

Figure: logic box showing the CFM, fan, battery, CPU, DIMM and CTL locations (chassis dimensions shown: 865 mm, 446.3 mm, 175 mm).

VSP G200 - DKC (CTL1 and CTL2)


Detailed in the following

CTL2
Drive Box (DB0)
VSP G200
CTL1
Height 2U
1 CPU/CTL
2 DIMM sockets/CTL
2 Slots for FE/CTL
CPU 1 embedded BE port/CTL
CPU 2 LAN ports/CTL
Battery
Public port
Fan Maintenance port
CFM for backup
NiMH batteries for backup
12V Power supplies units
PSU
12 HDD slots 3.5 (CBSL)
24 HDD slots 2.5 (CBSL)
Logicbox

Page 1-6

VSP Midrange Family Physical Specifications

Item VSP G200 VSP G400 VSP G600 VSP G800


CPU/CTL 1 (4 cores) 2 (4 cores) 2 (8 cores)
RAM Slot/CTL 2 4 4/8
Supported DIMM 8/16 GB 8/16/32 GB
RAM (max)/System 64 GB 128 GB 256 GB 512 GB
CHB (max)/System 4 8/10* 12/16*
DKB (max)/System Embedded 4 8/16
BE bandwidth 6/12 Gb/s
LAN Ports/CTL 2
DB (max) 7+1 embedded 16 24 24/48
Drive (max) 264 480 720 1440
PS/DKC 2
Volumes 2048 4096 16384
LUN Size (max) 60TB

*Diskless configuration

HM800S (VSP G200) has 2 BE ports

1 external port attached to additional drive boxes

1 internal port attached to the embedded drive box (DB0)

Page 1-7

DIMM Configurations

Pos.   Model      DIMM capacity type   Installable slot number      Configurable unit   Max capacity/CTL
1      VSP G800   8GB, 16GB, 32GB      8 slots/CTL (CMG0, CMG1)     4 DIMMs             256 GB
2      VSP G600   8GB, 16GB            8 slots/CTL (CMG0, CMG1)     4 DIMMs             128 GB
3      VSP G400   8GB, 16GB            4 slots/CTL (CMG0)           4 DIMMs             64 GB
4      VSP G200   8GB, 16GB            2 slots/CTL (CMG0)           2 DIMMs             32 GB

Because memory is striped across all DIMMs in the same CTL, a failure of one DIMM blocks the entire CTL board
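
As a quick sanity check of the Max capacity/CTL column, the following Python sketch (our own illustration, not an HDS tool) multiplies the installable slots by the largest supported DIMM size for each model:

    # Sketch: derive the "Max capacity/CTL" column from the DIMM table above.
    # Assumes all installable slots are populated with the largest supported DIMM.
    dimm_table = {
        "VSP G800": (8, 32),   # (slots per CTL, largest DIMM in GB)
        "VSP G600": (8, 16),
        "VSP G400": (4, 16),
        "VSP G200": (2, 16),
    }
    for model, (slots, dimm_gb) in dimm_table.items():
        print(model, slots * dimm_gb, "GB per CTL")   # 256, 128, 64, 32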

Memory Content

LM (Local Memory): RAM for the cores
PM (Package Memory): RAM for the MPU (ownership information, bitmaps)
DXBF (Data Transfer Buffer): buffers I/Os for transfer
SM (Shared Memory): configuration, control units, DMT (HDP/HDT), bitmaps, queues, cache directory; mirrored; size depends on features
CM (Cache Memory): write pendings (duplicated), reads (no copy)

Figure: DIMM contents of CTL1 and CTL2; write-pending data (WP) is held on both controllers, read data (R) on one controller only.
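
The cache behavior in the legend (write pendings duplicated across controllers, reads kept once) can be pictured with a small toy model. The Python sketch below is purely illustrative; the class and method names are our own assumptions, not HDS software:

    # Toy illustration of the CM behavior described above: write-pending data
    # is duplicated in both controllers' cache, read data is cached once only.
    class CacheModel:
        def __init__(self):
            self.cm = {"CTL1": {}, "CTL2": {}}

        def host_write(self, block, data):
            # Write pendings are duplicated so a controller failure loses no data.
            self.cm["CTL1"][block] = data
            self.cm["CTL2"][block] = data

        def host_read(self, block, data_from_disk, owner="CTL1"):
            # Reads are staged into the owning controller's cache only (no copy).
            self.cm[owner][block] = data_from_disk
            return data_from_disk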

Page 1-8

Data Protection

In case of a power outage: (1) the AC supply stops, (2) the array shuts down, (3) cache is backed up to CFM.

The array is powerless; data is written from the DIMMs to the CFM (flash memory) and will be restored after restart. The batteries need to keep power for the cache backup process from DIMM to CFM; 30% battery charge is required. Max. 6 batteries per CTL. The NiMH batteries buffer CFM and RAM (block only); lifetime is 3 years.

Figure: HM800 with AC cable, UPS, batteries, DIMM/CFM pairs and drives.

#   Model      DIMM max capacity/CTL   Battery (necessary number)
1   VSP G800   256 GB                  3 or 6 pack/CTL
2   VSP G600   128 GB                  3 or 6 pack/CTL
3   VSP G400   64 GB                   3 pack/CTL
4   VSP G200   16/32 GB                1 pack/CTL

Page 1-9

VSP Midrange Family Back-End

DB Types I

DBS Drive Box Small


24 x SFF drives 2.5
2 U height
2 x ENC, 2 x PS

DBL Drive Box Large


12 x LFF drives 3.5
2 U height
2 x ENC, 2 x PS

DBF Drive Box Flash


12 x FMD (Flash Module Drive)
2 U height
2 x ENC, 2 x PS

DB Types II

DB60 Drive Box 60:
60 x LFF drives 3.5"
4 U height
2 x ENC, 2 x PS
Max. installation height 26 U

The DB60 drive box slides out of the rack toward the front to provide access to the installed HDDs, which are installed from the top into the HDD slots.

Figure: top view of the DB60 with front and rear orientation.

Page 1-10

Drive Box Remarks

Enclosure chassis are the same as in the Hitachi Unified Storage (HUS) 110 family and HUS VM

ENCs are different because of the 12Gb option; therefore, these drive boxes are only suitable for the VSP midrange family

DB60 has two ENCs and counts as one DB

Drive Box ENC (Enclosure) Components

Located on the rear of the storage system

Figures: ENC and drive box power supply component layout for the DBS, DBL and DBF drive tray types, and for the DB60 drive tray

Page 1-11

Back-End Example for VSP G400 / VSP G600

Configuration has 1 DKB per CTL
Two ports per CTL are connected to the IN ports of DB-0 and DB-1
OUT ports of ENCx-y are attached to IN ports of ENCx+2-y (for example, OUT of ENC01-1 is connected to IN of ENC03-1 in DB-03); see the sketch below
Up to 24 DBs (M2 = 16) can be attached
Max. 576 drives SFF or 12 x DB60
Max. 288 drives per port
Max. 72 drives per WL

Figure: logical view (simplified cabling diagram) of DKB-1H and DKB-2H ports (1H-0, 1H-1, 2H-0, 2H-1) in CTL1 and CTL2 daisy-chained through the ENCs of DB00-DB03 and continuing to the IN port of ENC05-1 in DB05.
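
The ENCx-y to ENCx+2-y rule above can be expressed in a few lines of Python. This is our own illustration of the daisy-chain pattern, not an HDS tool; the port naming simply follows the figure:

    # Sketch: generate the daisy-chain cabling implied by the rule above
    # (OUT of ENCxx-y goes to IN of ENC(xx+2)-y). Purely illustrative.
    def cabling(num_dbs=4):
        cables = []
        for db in range(num_dbs - 2):           # the last two DBs keep open OUT ports
            for enc in (1, 2):                  # two ENCs per drive box
                cables.append((f"ENC{db:02d}-{enc} OUT",
                               f"ENC{db + 2:02d}-{enc} IN"))
        return cables

    for out_port, in_port in cabling():
        print(out_port, "->", in_port)
    # ENC00-1 OUT -> ENC02-1 IN, ENC01-1 OUT -> ENC03-1 IN, and so on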

Page 1-12

VSP G1000 Architecture

Enterprise Components Names and Abbreviations

CBX Controller Box (part-name)


DKC Disk Controller (logical name)
CHA Channel Adapter
DKA Disk Adapter
SSW SAS Switch (modular: ENC)
CM Cache Memory
LM Local Memory
PM Package Memory
SM Shared Memory (control memory)
CFM Cache Flash Memory (SSD for CM/SM-backup)
DKU Disk Unit (unit of eight HDUs)
DB Drive Box (physical name)
HDU Hard Disk Unit (logical name)

VSP G1000 Overview

1 or 2 DKCs
Max. 6 racks
Max. 16 x 8-core Intel CPUs
Max. 2 TB RAM
Max. 32 x 6 Gb/sec SAS back-end
8/16 Gb/sec Fibre Channel front-end
8 Gb/sec FICON front-end
10 Gb/sec FCoE front-end
FMD, SSD, SAS, NL-SAS drives (max. 2048)

Figure: rack layout with LFF/SFF drive chassis and flash module drive chassis; DKC-0 is the primary controller and DKC-1 the secondary controller.

Page 1-13

VSP G1000 Logic Box (DKC-0)

Figure: front and rear views of the DKC-0 logic box showing the BKM, PSU, CM, SVP, MPB, CHA/CHB and DKB locations in CTL1 and CTL2.

BKM = backup module with 2 x NiMH batteries and CFM (SSD)

VSP G1000 Specifications

Item                                Two Modules                               One Module
Max Capacity - Internal             4.5PB                                     2.3PB
Max Capacity - External             247PB                                     247PB
Max Volumes                         64k                                       64k
Drive Type - 3.5"                   3TB/4TB 7200 RPM, 600GB 10K, 400GB SSD
Drive Type - 2.5" SAS               600/900/1200GB 10K RPM, 300/450GB 15K RPM
Drive Type - 2.5" SSD (MLC)         400/800/1600GB
Drive Type - FMD                    1.6/3.2 TB
Number of Drives - 3.5"             1152                                      576
Number of Drives - 2.5"             2304                                      1152
Number of Drives - 2.5" SSD         384                                       192
Number of Drives - FMD              192                                       96
Cache Capacity                      1024/2048GB                               512/1024GB
Max Ports - FC 2/4/8Gb              176/192 (16 Port Opt.)                    80/96 (16 Port Opt.)
Max Ports - FC 4/8/16Gb             176/192 (16 Port Opt.)                    80/96 (16 Port Opt.)
Max Ports - FICON 2/4/8Gb           176 (16 Port Opt.)                        80 (16 Port Opt.)
Max Ports - FCoE 10Gb               176/88 (16/8 Port)                        80/40
Backend Paths - SAS 6Gb             128 (4WL x 32)                            64 (4WL x 16)
Size - Full Configuration (mm)      3610 x 1100 x 2006                        1810 x 1100 x 2006
Power Spec                          AC 200V Single Phase / 400V Three Phase

Page 1-14

VSP G1000 Memory

Distributed Shared DKC-Resources

Figure: CL1 and CL2 MPBs, each with RAM and an ASIC, connected over a PCIe Gen3 backplane.

All MPBs share the whole RAM of the R800 system (CL1 and CL2)
All CHAs/DKAs are connected to the same internal PCIe network

Page 1-15

Memory Structure on Cache Section

Control Info (Shared Memory) contains:


Configuration Information
Control Units (CU)
DMT (Dynamic Mapping Tables; pointers for HDP/HDT)
RTOS queues
Bitmap track tables for replication
Parity Information
Size of SM depends on activated SW features,
numbers of pairs and CUs

SM is only located on the first cache feature of


DKC-0 (module #0)

Cache directory:
Contains cache directory Information for CM
section on same board and GRPP of PK
Size of Cache DIR/PK depends on the number
of installed CMGs

DIR size for CMG0 = 512MB/PK

DIR size for CMG(n) = 128MB/PK

Example:

128GB CACHE per PK with 32GB installed (4 DIMMS, 1 DIMM for each CMG)

DIR/PK = 512MB + 3 x 128MB = 896 MB/PK

Maximum DIR size for a fully populated PK is 1408MB
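
The directory sizing rule above (512 MB for CMG0 plus 128 MB for each further CMG) can be captured in a few lines. The sketch below is our own illustration and assumes a fully populated package holds eight CMGs, which is consistent with the 1408 MB maximum quoted above:

    # Sketch of the cache directory sizing rule described above:
    # 512 MB for CMG0 plus 128 MB for every additional installed CMG on the package.
    def dir_size_mb(installed_cmgs):
        if installed_cmgs == 0:
            return 0
        return 512 + (installed_cmgs - 1) * 128

    print(dir_size_mb(4))   # 896 MB/PK  (the example above: CMG0 plus 3 further CMGs)
    print(dir_size_mb(8))   # 1408 MB/PK (fully populated package, assumed 8 CMGs)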

Page 1-16

Other Memory Locations

LM Local Memory
Located in DIMMs on MPB (2x8 GB total capacity)
RAM for cores and housekeeping

PM Package Memory
Located in DIMMs on MPB
1GB per MPB
Hierarchical memory (bitmaps for replication)

DxBF Data Transfer Buffer


2x1 GB on CHA and DKA
Buffers I/O for transfer

Data Saved to BKM for Shutdown

Control information is saved to the BKM CFM at a power outage and also at a scheduled shutdown

User data in cache is saved to the BKM CFM at a power outage, but is not saved to the BKM CFM at a scheduled shutdown because the user data is destaged to the drives

BKM backup module

CFM cache flash memory (SSD in BKM)

Page 1-17

VSP G1000 Back-End

Racks and DKUs

Figure: rack layout with DKC 0 and DKC 1 and DKU boxes 00-05 and 10-15 (dimensions shown: 3610 x 1100 x 2006 mm overall, 600/605 mm rack sections).

DKU-Box Types:
SBX: 192 x 2.5" SFF HDD
UBX: 96 x 3.5" LFF HDD
FBX: 48 x FMD

A maximum of 6 DKU boxes per DKC can be connected

Page 1-18

DKU Boxes

DKU box types (front and rear views):
SBX (Small Box): 8 trays, 8 HDUs, 192 SFF devices, height 16U
UBX (Universal Box): 8 trays, 8 HDUs, 96 LFF devices, height 16U
FBX (Flash Box): 4 trays, 8 HDUs, 48 FMD devices, height 8U

DKU can be attached in any order

A DKU consists of 4 hard disk units (HDUs) for drive installation

It is recommended to install the FBX first

DB Types I

DBS Drive Box Small


24 x SFF drives 2.5
2 U height
2 x SSW, 2 x PS
Counts as one HDU

DBL Drive Box Large


12 x LFF drives 3.5
2 U height
2 x SSW, 2 x PS
Counts as one HDU

DBF Drive Box Flash


12 x FMD (Flash Module Drive)
2 U height
2 x SSW, 2 x PS
Counts as two HDUs

Page 1-19

SAS Switches SSWs

Enterprise storage systems, including VSP G1000, use SAS switches (SSWs) to connect the HDDs to the controller

DKU Overview SBX

A DKU has 8 HDUs and 2 B4s; an HDU can mount 24 SFF devices, so a DKU can mount 192 SFF devices
An HDU contains SFF devices (HDD, SSD), SSWs and DKUPS
14D+2P RAID groups must start in even slot numbers
HDDs are mounted on the front side only
Spare drives go in slot 23 only
DKU-xy naming: x = DKC number (0 or 1), y = DKU number (0-5)

Figure: front view of a DKU with HDU-xy0 through HDU-xy7, slots 0-23, spare slots SP1/SP2, and example RAID group placements (3D+1P, 2D+2P, 7D+1P/6D+2P, and 14D+2P starting at an even slot).

RAID groups must be installed vertically
8-member RAID groups are spread over 2 B4s
4-member RAID groups are located in 1 B4

Page 1-20

Outline of SSW for SBX/UBX

Figure: SSW faceplate with positions A-G.

Pos.   Item          Name                    Remark
A      LED (Green)   PWR LED (Power)         Indicates that power is supplied from the PS
B      LED (Amber)   Locate LED (Locate)     Indicates the chassis location *1
C      LED (Red)     Shutdown LED (ALARM)    Indicates that replacement is possible while the device is blocked
D      LED (Green)   SSW Path (IN)           Indicates that the IN side links up
E      LED (Green)   SSW Path (OUT0)         Indicates that the OUT0 side links up
F      LED (Green)   SSW Path (OUT1)         Indicates that the OUT1 side links up
G      DIP SW        DIP Switch              Sets the SAS address of the SSW (next page)

*1 Switch ON/OFF in the Maintenance screen of the SVP application

Drive Box Remarks

Enclosure chassis are the same as in the HUS 100 family, HUS
VM and the VSP midrange family

SSWs are different because they have 3 ports:
IN - incoming connection from a DKA port or an OUT port
OUT0 - for daisy chain in the standard configuration (not used in the high performance configuration)
OUT1 - for daisy chain to DBn+8

Therefore, these DBs are only suitable for the VSP G1000

Page 1-21

Back-End Cabling for SBX/UBX

Figure: standard back-end cabling. The DKA ports (CL1-1PA and CL2-2PA) connect to the IN ports of the SSWs in HDU000 and HDU001 of DKU-00; the OUT0 ports daisy-chain through DKU-01 to DKU-04 (from OUT0 to IN) to the corresponding SSWs in HDU050/HDU051 of DKU-05, and the OUT1 ports chain to HDU002-HDU007 and HDU052-HDU057.

Notes:

One of every two SSWs is connected to the same DKA

The first port of the CL1 DKA (CL1-1PA) is attached to the 1st SSW in HDU000

The first port of the CL2 DKA (CL2-2PA) is attached to the 2nd SSW in HDU001

The new cabling structure guarantees higher reliability in comparison to HUS and HUS VM.

For example, a powerless HDU causes the loss of at most 1 HDD per RAID group in the daisy chain
(2 for 14D+2P). Therefore, all RAID groups will sustain this situation; see the sketch below.
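
The reliability claim above can be checked mechanically: a RAID group survives a powerless HDU as long as no single HDU holds more of its member drives than the RAID level can lose. The following Python sketch is our own illustration; the layout lists and the function name are assumptions, not an HDS interface:

    # Sketch: verify that losing one HDU never exceeds a RAID group's fault tolerance.
    # member_hdus: HDU name for each member drive; tolerance: drives the group can lose.
    from collections import Counter

    def survives_hdu_loss(member_hdus, tolerance):
        worst_case = max(Counter(member_hdus).values())   # drives lost if the busiest HDU fails
        return worst_case <= tolerance

    # 7D+1P group spread over 8 HDUs tolerates 1 lost drive:
    print(survives_hdu_loss([f"HDU00{i}" for i in range(8)], tolerance=1))        # True
    # 14D+2P group placed two drives per HDU tolerates 2 lost drives:
    print(survives_hdu_loss([f"HDU00{i % 8}" for i in range(16)], tolerance=2))   # True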

Page 1-22

High Performance Back-End Cabling for SBX/UBX

Figure: high performance back-end cabling. Four DKA ports (CL1-1PA, CL2-2PA, CL1-1PB, CL2-2PB) connect directly to the IN ports of the SSWs in HDU000/HDU001 of DKU-00 and HDU050/HDU051 of DKU-05; the OUT0 daisy-chain links used in the standard configuration are removed.

For a high performance configuration, remove the cables between the OUT0 and IN ports and connect the ports of the 2nd DKA feature to the freed IN ports during the installation process.

Page 1-23

Conceptual and Specifications Comparisons

Concept Differences

Item: VSP Midrange Family / VSP G1000

Mainframe support: NO / YES
Internal architecture: Controller architecture with dedicated MP, RAM and ports / Cluster architecture with shared devices
RAID group location: Drives can be chosen manually / Drives have to be mounted in dedicated slots (B4 principle)
Maintenance: Has design features to facilitate self-service maintenance; some devices (for example, drives) do not require the GUI for replacement / Maintenance tasks are always initiated by a CE via the SVP GUI
Reliability: HIGH (modular) / HIGHEST (enterprise)
Service processor (SVP): External 1U server / Internal 1U PC in the DKC
Maintenance tool: Maintenance Utility for daily maintenance operation / Java-based SVP software
Front-end ports: Bidirectional ports can serve all purposes in parallel (V01+1) / Dedicated ports supporting 1 of 4 possible purposes (target/external/initiator/RCU target)

Comparison: VSP Midrange Family to VSP G1000

Item                          VSP Midrange Family   VSP G1000
CPU cores/System              32                    64/128
RAM/System                    512GB                 2TB
Fibre Channel ports/System    48/64*                128/192*
FICON ports/System            -----                 176
iSCSI ports                   24/32*                Future enhancement
FCoE ports                    -----                 128/192*
Back-end links/System         64                    128
BE bandwidth                  12 Gb/sec             6 Gb/sec
Drives/System                 1440                  2304
Volumes/System                16K                   64K

Maximum numbers

*Diskless configuration

Page 1-24

Comparison: VSP Midrange to HUS 100 Family

Item                          VSP Midrange Family   HUS 100 Family
CPU cores/System              32                    4
RAM/System                    512GB                 32GB
Fibre Channel ports/System    48/64*                16
FICON ports/System            -----                 -----
iSCSI ports                   24/32*                8
FCoE ports                    -----                 -----
Back-end links/System         64                    32
BE bandwidth                  12 Gb/sec             6 Gb/sec
DBS (24 x 2.5" drives)        48                    40
DBL (12 x 3.5" drives)        48                    80
FBX (48 x 3.5" drives)        -----                 20
DB60 (60 x 3.5")              24                    -----
Drives/System                 1440                  960
Volumes/System                16K                   4K

*Diskless configuration

VSP G200 has 2 BE ports

1 external port attached to additional drive boxes

1 internal port attached to the embedded drive box (DB0)

Page 1-25

SVOS Storage Virtualization Operating System

SVOS VSP Midrange Family

Storage Virtualization Operating System (SVOS)


Licensing: Total Usable Capacity
Hitachi Device Manager
Hitachi Dynamic Provisioning (Open)
Hitachi Universal Volume Manager
Hitachi Virtual Partition Manager (32 cache partitions)
Hitachi Resource Partition Manager (enables virtual storage machines)
Hitachi Dynamic Link Manager Advanced (unlimited licenses and VMware support)
Hitachi Data Retention Utility
Hitachi Performance Monitor
Volume Shredder
Virtual LUN software
LUN Manager
Hitachi Server Priority Manager
Hitachi Volume Retention Manager
Cache Residency Manager (Open)
Hitachi Storage Navigator
RAIDCOM, VLVI (CVS), Java API, CCI, SMI-S provider and SNMP agent
Hitachi Infrastructure Director

Page 1-26

SVOS VSP G1000

Hitachi Storage Virtualization Operating System (SVOS)


Software is delivered as bundles with the desired functionality
Pricing depends on number of MPBs, usable or used capacity
Usable capacity in steps of:
Base capacity 10, 20, 40, 80TB
Capacity upgrades 10, 20, 40, 80, 160, 320, 480TB or unlimited
Used capacity in steps of:
Base capacity 5, 10, 20, 40TB
Capacity upgrades 5, 10, 20, 40, 80, 160, 240TB or unlimited

Base capacity is the initially purchased amount; capacity upgrades are for later extension
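
As a minimal illustration of how the capacity steps above might combine, the following Python sketch composes a target usable capacity from one base license plus capacity upgrades. The greedy combination is our own assumption for illustration only; actual ordering rules come from HDS:

    # Sketch: compose a usable-capacity license from the step sizes listed above.
    # The greedy fill is illustrative only, not an HDS ordering rule.
    BASE_STEPS_TB = [80, 40, 20, 10]
    UPGRADE_STEPS_TB = [480, 320, 160, 80, 40, 20, 10]

    def compose_license(target_tb):
        base = next((b for b in BASE_STEPS_TB if b <= target_tb), BASE_STEPS_TB[-1])
        remaining, upgrades = target_tb - base, []
        for step in UPGRADE_STEPS_TB:
            while remaining >= step:
                upgrades.append(step)
                remaining -= step
        return base, upgrades, remaining   # remaining > 0 means round up to the next step

    print(compose_license(130))   # (80, [40, 10], 0) -> 80 TB base plus 40 TB and 10 TB upgrades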

Storage Virtualization Operating System (SVOS)


Licensing: Total Usable Capacity
Hitachi Device Manager
Hitachi Dynamic Provisioning (Open/MF)
Hitachi Universal Volume Manager
Hitachi Virtual Partition Manager (32 cache partitions)
Hitachi Resource Partition Manager (enables virtual storage machines)
Hitachi Dynamic Link Manager Advanced (unlimited licenses and VMware support)
Hitachi Data Retention Utility
Hitachi Performance Monitor
Volume Shredder
Virtual LUN software
LUN Manager
Hitachi Server Priority Manager
Hitachi Volume Retention Manager
Cache Residency Manager (Open/MF)
Hitachi Storage Navigator
RAIDCOM, VLVI (CVS), Java API, CCI, SMI-S provider and SNMP agent

Page 1-27

Software Packaging for SVOS and Other Features

Figure: block software packaging - SVOS, the HTnM (Hitachi Command Suite) bundle, the Mobility bundle, the Local Replication bundle and the Remote Replication bundle, plus Nondisruptive Migration and the Global-Active Device bundle.

SVOS Packaging for Open Systems

Hitachi Command Suite Analytics (licensing: Total Usable Capacity, as SVOS):
Hitachi Tuning Manager
Hitachi Command Director

Hitachi Command Suite Mobility (licensing: Total Usable Capacity, as SVOS):
Hitachi Dynamic Tiering
Hitachi Tiered Storage Manager

Hitachi Local Replication (licensing: Total Used Capacity):
Hitachi ShadowImage Replication
Hitachi Thin Image
Hitachi Replication Manager

Hitachi Remote Replication (licensing: Total Used Capacity):
Hitachi TrueCopy
Hitachi Universal Replicator
Hitachi Replication Manager

Remote Replication is extended for enhanced functionality (M/F and O/S for VSP G1000)
Similar bundles are suited for mainframe (VSP G1000)
Single software licenses are available individually

Page 1-28

Module Summary

In this module, you should have learned to:


Compare HDS midrange and enterprise storage
Describe:
Hitachi Virtual Storage Platform (VSP) G200, G400, G600 and G800
architecture (midrange)
Hitachi Virtual Storage Platform G1000 architecture (enterprise)
Hitachi Storage Virtualization Operating System (SVOS)

Page 1-29
2. Disks, Volumes and Provisioning
Module Objectives

Upon completion of this module, you should be able to:


List the RAID architectures supported in Hitachi Virtual Storage Platform
(VSP) mid-range and enterprise storage arrays
Describe supported drive sparing behaviors when a disk fails
Define a logical device (LDEV)
Describe the LDEV ID addressing
List the types of LDEVs
List how different types of LDEVs can be used
Describe LDEV ownership, microprocessor units, multipathing
Describe volume virtualization

Page 2-1

Supported RAID Structures and Sparing Behavior


This section discusses the supported RAID configurations and drive sparing behavior.

Hitachi Supported RAID Configurations

Hitachi Virtual Storage Platform midrange and enterprise storage arrays support a limited
number of RAID types and structures:
RAID-1+0: 2D+2D or 4D+4D
RAID-5: 3D+1P or 7D+1P (2x and 4x concatenation also supported)
RAID-6: 6D+2P or 14D+2P

No other RAID structures or numbers of HDDs in RAID groups are supported

Spare Drives

To ensure continued operation of the storage system in the case of a failed disk
drive, the system must be configured with available spares

When usable spare HDDs are available, the system will take the necessary
actions to move (copy) or rebuild the data from the failed/failing drive to the
spare

Two mechanisms: correction copy and dynamic sparing

Page 2-2
Disks, Volumes and Provisioning
Sparing Behaviors

Sparing Behaviors

Dynamic sparing: Each individual disk type has an estimated allowable number of bad tracks
This threshold is set in microcode; when this value is reached, the disk gets marked bad and
its content gets copied to an available spare

Correction copy: A disk stops working because of an interface or mechanical error
In the case of RAID-1+0, the contents of the existing copy will be copied to a spare
In the case of RAID-5, the data gets recalculated from the remaining data and parity and
will be written to the spare

In both cases, full redundancy will be maintained after a disk error threshold or failure

Page 2-3
Disks, Volumes and Provisioning
Logical Devices and Addressing

Logical Devices and Addressing


This section discusses the configuration of disks and devices and the addressing of LDEVs.

Review: Modular Storage Architecture and Terms

Logical Unit (LUN)


In Hitachi modular storage architecture, the LUN is the physical allocation
unit inside the storage array and is also the storage unit that is presented to
the host or server
LUNs are defined on the modular RAID groups
LUNs are identified by a LUN ID
LUNs are presented/mapped to the front-end ports for use by the connected
hosts and servers

VSP Midrange and Enterprise Storage Architecture and Terms

The internal storage allocation and management unit is the logical device or LDEV

An LDEV is different from a LUN in many important ways

When mapped to a host group, an LDEV is presented as a LUN to the connected host(s)

Page 2-4
Disks, Volumes and Provisioning
Mainframe Storage Device Architecture: A Storage History Lesson

Mainframe Storage Device Architecture: A Storage History Lesson

One control unit can contain up to 256 devices (addresses 00 through FF)
A mainframe can address 255 control units (00 through FE)

(Figure: the mainframe CPU connects over an I/O channel to control units 00 through FE;
each control unit addresses physical devices 00 through FF)

Components of the LDEV ID

Traditional LDEV ID = CU:LDEV, for example 00:00

The current LDEV addressing structure adds the LDKC component:
LDEV ID = LDKC:CU:LDEV, for example 00:00:00
(A worked example of the addressing follows.)
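As a worked example (the LDEV ID here is hypothetical, not taken from the course): the three
fields are hexadecimal, and for LDKC 00 the CCI/raidcom tools take the CU:LDEV portion as one
number, so LDEV 00:01:0A is CU 0x01 and LDEV 0x0A, which is 0x010A = 266 in decimal:

    # Query the configuration of LDEV 00:01:0A; raidcom accepts the decimal value 266
    raidcom get ldev -ldev_id 266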

Page 2-5
Disks, Volumes and Provisioning
What is an LDEV?

What is an LDEV?

An LDEV is a usable amount of disk storage capacity

There are 4 types of LDEVs:
Basic: capacity is a set of allocated, physical data blocks on a RAID group
DP (dynamic provisioning volume): capacity is a set of virtual data blocks; physical capacity
is consumed only when data is written to the volume
External: capacity is addressed in the local storage system but physically exists on a
virtualized external storage array
Snapshot: a special type of dynamic provisioning volume; Thin Image pools are usable only
for the Thin Image snapshot virtual volumes

An LDEV has an address or LDEV ID in the storage system
The LDEV address structure is LDKC:CU:LDEV and looks like 00:00:00

An LDEV is assigned ownership to an MPU (microprocessor unit) for all of its I/O processing

An LDEV has a maximum capacity in blocks
DP volumes can be expanded

An LDEV has an emulation type
The only emulation type currently supported for open systems is OPEN-V
VSP G1000 also supports mainframe emulation
Emulation type is important in replication and migration operations

Page 2-6
Disks, Volumes and Provisioning
How to Use LDEV Types Basic and External

Each LDEV

Has a fixed maximum capacity at any point in time
Depending on the LDEV type, it may be possible to expand the LDEV capacity

Can be migrated to different physical blocks on the same or different RAID group(s)
Mobility
Hitachi Dynamic Tiering

Can be replicated or migrated between basic, DP and external types

How to Use LDEV Types Basic and External

LDEV types Basic and External can be used for:


Mapping as LUNs to storage consumers (hosts and servers)
Storage array command device
Pool volumes to build Dynamic Provisioning or Thin Image pools
Target volumes in replication pairs (S-VOLs)
Journal volumes in Hitachi Universal Replicator (HUR) implementations

Page 2-7
Disks, Volumes and Provisioning
How to Use LDEV Type DP

How to Use LDEV Type DP

LDEV type Dynamic Provisioning (DP) can be used for


Mapping as LUNs to storage consumers (hosts and servers)
Storage array command device
Target volumes in replication (S-VOLs)
Journal volumes in HUR implementations

DP type LDEVs cannot be used as pool volumes to build Dynamic Provisioning or Thin Image pools

How to Use LDEV Type Snapshot

Thin Image LDEV type is a virtual LDEV


Storage is only consumed when data blocks in the source P-VOL are
changed

Thin Image LDEVs must be created in a Thin Image pool

Thin Image LDEV types can only be used as the target (S-VOL) in a
Thin Image replication pair

Page 2-8
Disks, Volumes and Provisioning
LDEV Uses by LDEV Type

LDEV Uses by LDEV Type

LDEV Type                   LUN (host  DP or TI     Replication   HUR Journal  Command
                            storage)   Pool Volume  Pair S-VOL    Volume       Device
Basic                       yes        yes          yes           yes          yes
Dynamic Provisioning (DP)   yes        no           yes           yes          yes
External                    yes        yes          yes           no           yes
Snapshot (TI)               no         no           yes (Thin     no           no
                                                    Image only)

LDEV List View HUS VM Block Element Manager Example

BEM = Block Element Manager

Also called Hitachi Storage Navigator in older systems

Called Hitachi Device Manager in newer systems

Page 2-9
Disks, Volumes and Provisioning
LDEV List View From an HUS VM System

LDEV List View From an HUS VM System

Page 2-10
Disks, Volumes and Provisioning
LDEV Ownership

LDEV Ownership
This section provides an overview of enterprise system internals.

In HUS 110/130/150, every volume (LUN) is owned by a certain controller
HUS modular storage logic includes LUN controller reassignment based on processor
performance busy rates

Introduced with the controller design of Hitachi Virtual Storage Platform, every LDEV is
owned by a microprocessor

In the current enterprise architecture, sets of MP cores are assigned to MPUs for the
purposes of LDEV ownership assignment and workload balancing across the CPUs and cores

MPU = microprocessor unit

LDEV Ownership in VSP Midrange and Enterprise

This MPU ownership is assigned when the LDEV is created

Creating single LDEVs puts the ownership on the processor or MPU with the lowest count of
ownerships, balancing the load among resources

Creating multiple LDEVs at once:
Virtualized LDEVs: ownership gets distributed among the MPUs using round-robin allocation
LDEVs on SSD/FMD: ownership gets distributed round-robin
Multiple basic LDEVs on a single RAID group: all LDEVs created on the same RAID group are
assigned ownership to the same MPU
(A command-line sketch for checking and changing MPU ownership follows.)
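As an illustration only (the LDEV and MPU IDs are invented, and option names should be
verified against the CCI reference for the installed microcode), ownership can be inspected
and, if needed, rebalanced from the CCI command line:

    # Show the LDEV's attributes; the owning MP blade/unit appears in the MP# field of the output
    raidcom get ldev -ldev_id 266

    # Reassign ownership of LDEV 01:0A (266 decimal) to MP blade/unit 1
    raidcom modify ldev -ldev_id 266 -mp_blade_id 1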

Page 2-11
Disks, Volumes and Provisioning
LDEV Ownership on VSP G200 G800

LDEV Ownership on VSP G200 G800

Every LDEV is owned by a microprocessor unit (MPU)

The number of MPUs is always the same: 4 MPUs per system, spread across controllers CTL1
and CTL2

The number of microprocessor cores for each MPU differs by model (VSP G200 G800):
VSP G200: 8 cores/system (4 cores/CPU x 2), 2 cores/MPU
VSP G400/G600: 16 cores/system (4 cores/CPU x 4), 4 cores/MPU
VSP G800: 32 cores/system (8 cores/CPU x 4), 8 cores/MPU

Page 2-12
Disks, Volumes and Provisioning
LDEV Virtualization

LDEV Virtualization
This section provides an overview of enterprise system internals.

Types of Virtual LDEVs

Virtual LDEV types are:
DP: Dynamic Provisioning
Snapshot: Thin Image
External: virtualized external storage array

Hitachi Dynamic Provisioning

Real, physical storage capacity is used to create storage pools
The pool volumes can be Basic or External LDEV types

DP LDEV types are defined against the available capacity from the DP pool
A DP volume is a set of pointers
DP volumes have an LDEV ID and are mapped as LUNs to the storage consumers, hosts
and servers
Physical storage capacity from the pool is only consumed when data is written to the
DP volume

The host thinks it has the full allocated LDEV capacity available, but the storage system
conserves physical capacity
(A raidcom sketch of creating a pool and a DP volume follows.)
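For illustration (the pool ID, LDEV IDs and capacity are invented, and exact option names vary
by CCI version, so check the provisioning/CCI documentation), the pool and DP volume objects
described above map onto raidcom roughly as follows:

    # Build a DP pool (pool ID 1) from two basic LDEVs that act as pool volumes
    raidcom add dp_pool -pool_id 1 -ldev_id 100 101

    # Create a 100GB DP volume (virtual LDEV 200) against the pool; physical pages are
    # consumed only when a host writes to the volume
    raidcom add ldev -pool 1 -ldev_id 200 -capacity 100G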

Page 2-13
Disks, Volumes and Provisioning
Dynamic Provisioning Pool Structure

Dynamic Provisioning Pool Structure

Multiple RAID groups with their basic LDEVs provide the pool with its physical space

Leading practice is to use RAID-6 parity groups for the pool volumes

(Figure: an enterprise array with a pool of RAID groups backing an LDEV (HDP volume))
Page 2-14
Disks, Volumes and Provisioning
LDEV Virtualization

LDEV Virtualization

Disk space on pool volumes is organized in pages of 42MB

Data written to the HDP volume gets evenly distributed to the pages on all pool volumes

The owning MPU keeps track in a list, the dynamic mapping table (DMT), which records which
page on which pool volume holds the data written by the server; the DMT constantly keeps
track of changes

In the case of dynamic provisioning, the pool consists of similar resources (same disk rpm,
type, size and RAID level)

If different performance classes should be implemented, another pool has to be created and
HDP volumes must be mapped to the servers accordingly

(Figure: two pools in an enterprise array, a high-performance Tier1 pool on SSD and a
middle-performance Tier2 pool on SAS 10k rpm)

Page 2-15
Disks, Volumes and Provisioning
Hitachi Dynamic Tiering

Hitachi Dynamic Tiering

Introduced with VSP, dynamic tiering implements different disk performance classes in one
pool (a multi-tier pool)

Load on the pages used by the HDT volume is constantly monitored, and pages get moved up
or down the tiers accordingly (HDT migrations)

Maximum 3 tiers in 1 pool

(Figure: an LDEV (DP volume) in an HDT pool of an enterprise array, with Tier1 on SSD and
Tier2 on SAS 10k rpm)

Create Pool HUS VM Example

Page 2-16
Disks, Volumes and Provisioning
Volume Mapping

Volume Mapping
This section provides an overview of enterprise system internals.

Volumes created as previously explained must be mapped to servers

Servers are connected (directly or through switches) to front-end ports

The VSP midrange family supports Fibre Channel (FC) and iSCSI protocols

The VSP enterprise family supports Fibre Channel (FC), Fibre Channel over Ethernet (FCoE)
and mainframe protocols
Each of these three options requires the corresponding type of channel host (front-end)
board (CHB)

Host Group

The host group is the container where the storage consumer is connected to the storage
volumes (LDEVs) to make the storage available to the host or server as LUNs

Host groups are defined within a storage array front-end CHA port

Multiple hosts in the same host group must be of the same type:
Same operating system
Must share the same host mode settings

One CHA port can support multiple host groups with different OS and host mode settings

Page 2-17
Disks, Volumes and Provisioning
LDEV Mapping

LDEV Mapping

Host groups have to be created and port security on FC ports switched to On

Multiple World Wide Names (WWNs) can be registered in 1 group for cluster setups or
VMware datastores

HCS or HID cause the host group to be created

The storage administrator can create the host group using the BEM

(Figure: Server A with WWN1/WWN2 connects to two FC ports; each port carries host group
HG0 with LUN 0 mapped to LDEVs (Basic/DP/DT))

Port security means the port can distinguish whether incoming traffic is from Server A or
Server B and forward it to the proper host group, also called a virtual port.

HCS = Hitachi Command Suite

HID = Hitachi Infrastructure Director

SN = Hitachi Storage Navigator

BEM = Block Element Manager

Page 2-18
Disks, Volumes and Provisioning
Volume Mapping Task Flow

Volume Mapping Task Flow

1. Make sure there are LDEVs to be mapped; if not, create LDEV

2. Confirm host connection and verify correct topology

3. Fibre Channel example:


a) Switch Connection: FABRIC ON/Connection P-to-P
b) Direct Connection: FABRIC OFF/Connection FC-AL

4. Create host group(s) on storage array front-end ports where the server is connected

5. Add server HBA World Wide Port Name (WWPN) to host group

6. Add LDEV to host group and assign LUN

(A raidcom command-line sketch of steps 4 through 6 follows.)
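The same flow can be scripted with CCI/raidcom. This is a sketch only: the port, host group
number, WWPN and LDEV ID are invented, host mode values differ by OS, and option spellings
should be verified against the Provisioning Guide for the installed microcode.

    # Enable port security (LUN security) so several host groups can share port CL1-A
    raidcom modify port -port CL1-A -security_switch y

    # Step 4: create host group 1 on port CL1-A and set its host mode (Windows in this sketch)
    raidcom add host_grp -port CL1-A-1 -host_grp_name ServerA_HG
    raidcom modify host_grp -port CL1-A-1 -host_mode WIN

    # Step 5: register the server's HBA WWPN in the host group
    raidcom add hba_wwn -port CL1-A-1 -hba_wwn 210000e08b0256f8

    # Step 6: map LDEV 01:0A (266 decimal) into the host group as LUN 0
    raidcom add lun -port CL1-A-1 -ldev_id 266 -lun_id 0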

Volume Mapping Task Flow 1

Logical
Devices

Page 2-19
Disks, Volumes and Provisioning
Volume Mapping Task Flow 2

Volume Mapping Task Flow 2

Port
Topology

Volume Mapping Task Flow 3

Create Host
Group

Page 2-20
Disks, Volumes and Provisioning
Volume Mapping Task Flow 3 continued

Volume Mapping Task Flow 3 continued

1. Create Host Group

2. Name it

3. Choose Host Mode

4. Choose host (WWPN) to add or create a new one

5. Choose port where to add the group

6. Repeat for additional groups on other ports

Host Mode Options

Page 2-21
Disks, Volumes and Provisioning
Host Group HUS VM Example

Host Group HUS VM Example

Page 2-22
Disks, Volumes and Provisioning
Multipathing Support Hitachi Dynamic Link Manager

Multipathing Support Hitachi Dynamic Link Manager

With multipathing installed on the host, two physical SCSI disks are available

Asking Disk1 for its identity, the answer is: 01:0A from HM800 #12345
Asking Disk2, the answer is also: 01:0A from HM800 #12345

This is the same disk; worldwide there is only one HM800 #12345 presenting the unique
ID 01:0A

Hitachi Dynamic Link Manager (HDLM) emulates one disk, routing the traffic over two ports

(Figure: Server A with HDLM and WWN1/WWN2 connects over two FC ports to an HM800 with
serial number 12345; host group HG0, LUN 0 on each port maps the same LDEV 01:0A
(Basic/DP/DT))

One LDEV is mapped over 2 paths to a host.

The host now sees 2 disks, though in reality it is only 1.

To fix this, software has to be installed on the host to create 1 emulated disk out of the 2
physical ones.

The HDS product that does this is called Hitachi Dynamic Link Manager (HDLM); many OS
vendors include their own multipathing software.

The multipathing software asks both disks for their ID, which consists of the storage array's
type, serial number and LDEV ID.

This ID is unique worldwide, and the multipathing software shows 1 emulated disk to the OS
and manages the traffic to the array over the multiple paths.
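For reference (not part of the original course material), HDLM's dlnkmgr command can show
how the two physical paths are consolidated; output columns vary by HDLM version and
operating system:

    # List the paths HDLM manages; both paths should report the same storage system and LDEV
    dlnkmgr view -path

    # Show the HDLM instance settings (load-balancing algorithm, failover and health checking)
    dlnkmgr view -sys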

Page 2-23
Disks, Volumes and Provisioning
Module Summary

Module Summary

In this module, you should have learned to:


List the RAID architectures supported in Hitachi Virtual Storage Platform
(VSP) mid-range and enterprise storage arrays
Describe supported drive sparing behaviors when a disk fails
Define a logical device (LDEV)
Describe the LDEV ID addressing
List the types of LDEVs
List how different types of LDEVs can be used
Describe LDEV ownership, microprocessor units, multipathing
Describe volume virtualization

Page 2-24
3. Storage Management Tools
Module Objectives

Upon completion of this module, you should be able to:


Identify the tools for managing hardware and software functionality
Compare and contrast the tools for managing storage
Provide an overview of Hitachi Command Suite (HCS) features and
functionality, including configuration, mobility, analytics and replication
Describe Hitachi Infrastructure Director (HID) and compare to HCS
Describe the purpose and functions of Hi-Track Remote Monitoring
system and the mobile app

Page 3-1
Storage Management Tools
Hitachi Storage Maintenance Tools

Hitachi Storage Maintenance Tools

Software Tools for Configuring Storage

Maintenance engineers use SVP/MPC/BEM/GUM: the Web Console/SVP for VSP G1000 and the
maintenance PC/BEM for the VSP midrange (VSP G200 G800)

Storage administrators use Hitachi Command Suite, Hitachi Infrastructure Director and the CLI

SVP = service processor

MPC = maintenance PC

BEM = Block Element Manager

VSP = Hitachi Virtual Storage Platform

CLI = command line interface

Page 3-2
Storage Management Tools
Web Console/SVP Application (VSP G1000)

Web Console/SVP Application (VSP G1000)

Web Console/SVP Application for VSP, VSP G1000 and HUS VM for Hardware Maintenance

The SVP application is used by the engineers for hardware and software maintenance. The
application is launched by accessing the Web console application.

A PC is used to connect to the array's SVP with remote desktop.

Page 3-3
Storage Management Tools
BEM/MPC/Maintenance Utility (VSP G200 - G800)

BEM/MPC/Maintenance Utility (VSP G200 - G800)

Block Element Manager, Maintenance PC and Maintenance Utility for VSP G200
G800 for Hardware Maintenance

On the new arrays VSP G200 G800, maintenance happens mainly in the Maintenance Utility,
accessible from the customer engineer's working environment and from the user management
GUIs (Hitachi Command Suite and Hitachi Infrastructure Director).

On the customer engineer's maintenance PC, sophisticated adjustments are possible, including
array setup from scratch.

Page 3-4
Storage Management Tools
Maintenance Interfaces

Maintenance Interfaces

VSP G200 G800 introduces Maintenance Utility, a new GUI for hardware maintenance
Reason: the new ability to provide user maintenance
Maintenance Utility can be invoked by storage admins from within Hitachi Command Suite
and from the service engineer's maintenance PC (MPC)
Earlier arrays allow hardware maintenance for CS&S only, using the integrated service
processor (SVP)

Page 3-5
Storage Management Tools
Maintenance Access

Maintenance Access

(Figure: an end user and a management server on the management LAN connect to the SVP;
the Web Console runs on the MPC/SVP; shown for HUS VM/VSP G1000 and for VSP G200 - 800)

Maintenance access on Hitachi Unified Storage VM (HUS VM) or VSP G1000

The customer engineer (CE) connects the laptop to the SVP's console interface or management
LAN and connects to the SVP with a remote desktop session.

Installation, configuration and maintenance happen only here with the Web console (software
adjustments like licenses or software configuration settings) and SVP program (hardware
maintenance).

Maintenance access on VSP G200 G800

The CE connects the maintenance PC (MPC) to the maintenance LAN port of the VSP G200
G800 controller.

MPC software has to be installed.

Hardware maintenance happens in the Maintenance Utility. Sophisticated settings like System
Option Modes (SOM) or Online Read Margin (ORM) happen in MPC software running exclusively
on MPC.

Software adjustments or configuration settings are done in Block Element Manager (BEM).

Page 3-6
Storage Management Tools
Maintenance Access

User maintenance

The user works in HCS or HID. From there, the Maintenance Utility can be invoked to perform
maintenance.

BEM is the equivalent of the Web console or Hitachi Storage Navigator.

It allows the customer engineer, who either has no access to HCS/HID or works where HCS/HID
is not yet installed, to do administration tasks such as configuring and provisioning volumes or
adjusting port settings.

The Web console, BEM or Storage Navigator is not visible to the end user; HCS/HID must be
used.

Page 3-7
Storage Management Tools
Hitachi Storage Management Tools

Hitachi Storage Management Tools

Management Interfaces

Single-array configuration is less commonly used, although still possible
It is used by maintenance people for initially setting up single mappings or in the case of an
outage of the common management applications (HCS/HID)
Block Element Manager (VSP G200 G800)
Hitachi Storage Navigator (VSP/VSP G1000 emergency mode/HUS VM)
CLI/RAIDCOM (common to all arrays)

HCS = Hitachi Command Suite

HID = Hitachi Infrastructure Director

Page 3-8
Storage Management Tools
Hitachi Storage Navigator/BEM

Hitachi Storage Navigator/BEM

Hitachi Storage Navigator on VSP G1000 and Block Element Manager on VSP G200 G800 look
nearly identical.

Certain tasks are possible only on individual platforms; RAID group creation, for example, is
available only on VSP G200 G800.

Page 3-9
Storage Management Tools
Command Line Interface (CLI/RAIDCOM)

Command Line Interface (CLI/RAIDCOM)

CLI for single-array configuration

Available for all models, VSP G200 G1000

In-band (FC) or out-of-band (TCP/IP)

The CLI supports all storage provisioning and configuration operations that can be
performed through Storage Navigator.

The CLI is implemented through the raidcom command.

The example below shows a raidcom command that retrieves the configuration information
about an LDEV.

For in-band CCI operations, the command device is used. The command device is a
user-selected and dedicated logical volume on the storage system that functions as the
interface to the storage system for the UNIX/PC host.

o The dedicated logical volume is called command device and accepts commands
that are executed by the storage system.

For out-of-band CCI operations, a virtual command device is used.

o The virtual command device is defined by specifying the IP address for the SVP.

o CCI commands are issued from the host and transferred through the LAN to the
virtual command device (SVP). The requested operations are then performed by
the storage system.
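For illustration (the IP address and device name are invented; the in-band form varies by OS
and the out-of-band UDP port is commonly 31001, but verify both against the CCI installation
guide), the two access modes differ only in the HORCM_CMD definition of the HORCM
configuration file; the same raidcom commands then work over either path:

    # horcm0.conf, in-band: HORCM_CMD names the command device LU as seen by this host (Linux)
    HORCM_CMD
    /dev/sdc

    # horcm1.conf, out-of-band: HORCM_CMD names a virtual command device on the SVP
    HORCM_CMD
    \\.\IPCMD-192.168.0.100-31001

    # With the instance started, the example query referenced above works either way:
    raidcom get ldev -ldev_id 266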

Page 3-10
Storage Management Tools
Hitachi Command Suite Overview

Hitachi Command Suite Overview

Hitachi Command Suite v8.X

Multi-array management, all block models, all other hardware (Hitachi NAS Platform, Hitachi
Content Platform, Hitachi Compute Blade)

The customer should use the Hitachi Device Manager component of the Hitachi Command Suite
v8.0 storage management software products to view and administer the storage system, as well
as any other HDS storage system. Legacy Storage Navigator is available as an in-context launch
(pop-out).

Page 3-11
Storage Management Tools
Hitachi Command Suite v8.X

(Figure: the Hitachi Command Suite framework. HDvM (Configure), HTnM (Analyze), HTSM
(Mobilize) and HRpM (Protect), together with business intelligence, sit on a unified management
framework spanning server (Compute Blade), block (VSP, USP, AMS, HUS, HUS VM, VSP
G200-G800), unified/file (HNAS), content (HCP) and appliance (HDI) platforms.)

From Command Suite, additional applications are accessible; all of them require HCS/HDvM as
a foundation:

Hitachi Tuning Manager for performance analysis

Hitachi Tiered Storage Manager for mobility, moving host volumes to different hardware
resources and definition of storage classes

Hitachi Replication Manager to manage every kind of replication, in-system and remote;
complete setup, management and deletion

Page 3-12
Storage Management Tools
Hitachi Command Suite - Unified Management

Hitachi Command Suite - Unified Management

Unified Management All storage arrays block or file

Unified management scales for the largest infrastructure deployments

Hitachi Command Suite (HCS) is designed to deliver a comprehensive unified way of managing
IT resources. It employs a 3D management approach to efficiently manage all data types to
lower costs for the Agile Data Center with the following 3 management dimensions:

o Manage Up to scale for large data infrastructures and application deployments,


increasing scalability to manage up to 5 million logical objects with a single
management server

o Manage Out with a unified management framework that has the breadth to
manage storage, servers and the IT infrastructure, incorporating both virtualized
storage and servers

o Manage Deep with Hitachi Command Suite integration for the highest levels of
operational efficiency that includes common management of multivendor storage
assets

Page 3-13
Storage Management Tools
Hitachi Command Suite - Unified Management

Unified Management Hitachi NAS Platform (HNAS)

Launching HNAS management tool (SMU) is supported.

Page 3-14
Storage Management Tools
Hitachi Device Manager (HDvM)

Hitachi Device Manager (HDvM)

Component of HCS used to manage storage system volumes


Single platform for centrally managing, configuring and monitoring Hitachi storage
Presents a logical view of storage resources
(Figure: Hitachi Command Suite managing external virtualized volumes)

Hitachi Device Manager forms the base of the Hitachi Command Suite while being presented in
the GUI as Hitachi Command Suite. Device Manager provides common storage management
and administration for multiple Hitachi storage systems from which the advanced management
capabilities are built upon.

Using the single unified GUI, customers can manage all of their HDS storage products. Users
can use Device Manager to centrally manage, configure, provision, allocate and report on
storage for Hitachi platforms, including virtualized tiered storage for both virtual and physical
environments. HCS uses consumer-based management. In other words, resources are grouped
by business application and host, so it is tailored to a customer's specific environment. Not only
does it manage block-level data, it also manages the file-level data as well.

Hitachi Device Manager provides a single platform for centrally managing, configuring, and
monitoring Hitachi storage systems. By significantly boosting the volume of storage that each
administrator can manage, the single-point-of-control design of Device Manager can help raise
storage management efficiency in these environments, as well as reduce costs. Easy-to-use
Device Manager logically views storage resources, while maintaining independent physical
management capabilities. By offering a continuously available view of actual storage usage and
configuration, Device Manager allows administrators to precisely control all managed storage
systems. This results in a highly efficient use of administrative time and storage assets. When
combined with other Hitachi Command Suite products, Device Manager helps automate entire
storage environments.

Page 3-15
Storage Management Tools
Hitachi Device Manager - Functionality

Hitachi Device Manager - Functionality

Storage operations
Allocating volumes (Add LUN mapping)
Unallocating volumes (Delete LUN path)
Creating volumes (Create LDEV)
Virtualizing storage systems (virtualize volumes)
Virtualizing storage capacity (HDP pools)

Managing storage resources


Group management of storage resources (logical groups)
Searching storage resources and outputting reports

User management

Security settings

Device Manager is the prerequisite for all other HCS products

It owns the core database and user management (SSO)

Page 3-16
Storage Management Tools
Hitachi Tiered Storage Manager (HTSM)

Hitachi Tiered Storage Manager (HTSM)

Simplifies the identification and classification of data volumes


Moves data volumes between heterogeneous arrays (nondisruptive)
Volume migration does not impact running applications

Data Mobility

Storage
Tiers

Virtualized Arrays

Another product of the Hitachi Command Suite framework is Hitachi Tiered Storage Manager.

Hitachi Tiered Storage Manager offers integrated data mobility capabilities for efficient
storage tier management and nondisruptive volume migration between storage tiers.

Hitachi Tiered Storage Manager provides transparent, nondisruptive data volume


movement that simplifies the identification and classification of data volumes internally
or externally attached to the Hitachi storage family. Tiered Storage Manager allows
online data migration of volumes within the Hitachi storage domain. Volume migration
does not impact running applications.

Combined with Hitachi Dynamic Tiering and Hitachi Dynamic Provisioning, these
products comprise the Hitachi Data Mobility product offering.

Page 3-17
Storage Management Tools
Hitachi Tiered Storage Manager Overview

Hitachi Tiered Storage Manager Overview

Manages data mobility across the data center, not just volumes or
pages within a storage ecosystem (when all arrays are virtualized
behind one central array)

Allows you to place data when and where it is needed

Supports mobility automation (when combined with Tuning Manager)

Works with Hitachi Dynamic Tiering to provide an efficient solution for macro- and micro-level
optimization of data in and across storage pools and volumes

Available as Mobility tab on HCS GUI

Hitachi Tiered Storage Manager (HTSM) provides an easy-to-use interface for


performing transparent, nondisruptive movement of data volumes across heterogeneous
storage systems. Based on the proven Hitachi Volume Migration data movement engine,
Tiered Storage Manager allows administrators to quickly provision storage to meet
application deployment requirements and then fine-tune provisioning using
multidimensional storage tiers.

As data center infrastructure continues to get consolidated and automated, storage


cannot be managed in an atomic state. To address data center management as a whole,
the focus is moving to managing data mobility across the data center, not just volumes
or pages within a storage ecosystem.

Data mobility is the critical key enabling factor in getting data when and where it is
needed.

HTSM (HCS Data Mobility) provides customers with the unique ability to move data non-
disruptively across pools, volumes and storage arrays.

HTSM and Hitachi Dynamic Tiering (HDT) together provide an efficient solution for macro- and
micro-level optimization of data in and across storage pools and volumes.

With all the data mobility features, HTSM is an essential component in managing and
optimizing today's green data centers.

Page 3-18
Storage Management Tools
Benefits of Tiered Storage Manager

Benefits of Tiered Storage Manager

Manages volume migration through the use of custom tiering

Provides volume classification mechanism (logical groups)

Replaces storage system and storage semantics with higher-level application data
quality of service (QoS) metrics and customer-definable storage tiers (custom tiers)

Integration with Hitachi Tuning Manager enables performance optimization

Easily realigns application storage allocations

Supports completely transparent volume movement without interruptions

Batches migrations together in a plan and lets them be released immediately,


manually or scheduled (via CLI) for a later time

By transparently and interactively migrating data between heterogeneous, custom storage tiers,
Hitachi Tiered Storage Manager enables IT administrators to match application quality of service
requirements to storage system attributes.

Page 3-19
Storage Management Tools
Hitachi Replication Manager (HRpM)

Hitachi Replication Manager (HRpM)

Centralizes and simplifies replication management, monitoring and reporting of Hitachi
replication operations; reports replication status
Supports all replication operations on Hitachi enterprise and modular storage

(Figure: data protection software and management covering replicate, backup, archive and
snap operations, built on Hitachi ShadowImage Replication, Copy-on-Write Snapshot/Hitachi
Thin Image, Hitachi TrueCopy and Hitachi Universal Replicator)

Next we have Hitachi Replication Manager, also part of Command Suite. This product
centralizes and simplifies replication management by integrating replication capabilities to
configure, monitor and manage Hitachi replication products for in-system or distance replication
across both open systems and mainframe environments.

The synchronous and asynchronous long-distance replication products, as well as the in-
system replication products, were discussed earlier in this course. How do customers
manage all of these copy and replication operations? Replication Manager gives
customers a unified and centralized management GUI to help them manage all of these
operations.

This solution builds on existing Hitachi technology by leveraging the powerful replication
capabilities of the arrays and by combining robust reporting, mirroring and features
previously available in separate offerings. It decreases management complexity while
increasing staff productivity and providing greater control than previously available
solutions through a single, consistent user interface.

Page 3-20
Storage Management Tools
Centralized Replication Management

Centralized Replication Management

(Figure: Hitachi Replication Manager provides configuration, scripting, analysis, task/scheduler
management and reporting on top of Copy-on-Write/Thin Image, ShadowImage, TrueCopy,
Universal Replicator and Business Continuity Manager, with primary and secondary provisioning
through CCI/HORCM)

Cross-product, cross-platform, GUI-based replication management

Replication Manager gives an enterprise-wide view of replication configuration and allows


configuring and managing from a single location. Its primary focus is on integration and
usability.

For customers who leverage in-system or distance replication capabilities of their storage arrays,
Hitachi Replication Manager is the software tool that configures, monitors and manages Hitachi
storage array-based replication products for both open systems and mainframe environments in
a way that simplifies and optimizes the:

Configuration

Operations

Task management and automation

Monitoring of the critical storage components of the replication infrastructure

HORCM = Hitachi Open Remote Copy Manager

Page 3-21
Storage Management Tools
Hitachi Performance Monitoring and Reporting Products

Hitachi Performance Monitoring and Reporting Products

Hitachi Tuning Manager


Advanced application-to-spindle reporting, analysis and troubleshooting for all Hitachi storage systems

Hitachi Performance Monitor


Detailed point-in-time reporting of
individual Hitachi storage systems

(Figure: the monitored I/O path runs from the application through HBA/host and switch to the
storage system components: FC/SCSI port, MP/MPU, cache, parity group and disk)

This is a visualization of how these products work, and what they cover.

Hitachi Performance Monitor provides in-depth, point-in-time information about


performance within a Hitachi storage system. It does not provide any information about
the network, the host or the application, nor does it provide any correlation to that
information.

Hitachi Tuning Manager provides end-to-end visibility for storage performance.


Although it is limited to Hitachi storage systems, it provides the most thorough view of
the system, tracking an I/O from an application to the disk. This ability to correlate this
information and link from step-to-step in the I/O path provides the most efficient
solution to identifying performance bottlenecks.

I/O response time, both host side and array side:

o It provides the ability to monitor the round trip response time for troubleshooting
and proactive service level error condition alerting results in improved application
performance. On the Hitachi Enterprise Storage products, this ability includes
and extends to round trip response to/from external storage.

Page 3-22
Storage Management Tools
Product Positioning

Product Positioning

Hitachi Tuning Manager: Advanced reporting, analysis and troubleshooting application for
Hitachi Data Systems storage systems and services; application-to-spindle visibility and
correlation, near-time and historical; full storage path awareness and deep knowledge of
Hitachi Data Systems storage systems

Hitachi Performance Monitor: A monitoring product that provides detailed point-in-time
reporting within individual Hitachi Data Systems storage arrays

Tuning Manager is our premier performance and capacity analysis tool:

Its strength is its ability to view performance from the application through the network
and within the storage system.

It is our most robust performance analysis tool.

Performance Monitor is a monitoring product that provides detailed point-in-time reporting


within Hitachi Data Systems storage.

It provides basic reporting and monitoring within a storage system, but only within the
storage system. It has no knowledge of applications.

It cannot correlate information outside the storage system.

It has limited time frames for collecting performance data.

Page 3-23
Storage Management Tools
Hitachi Tuning Manager

Hitachi Tuning Manager

Deep-dive performance analysis


Accurate path-aware monitoring and reporting
Historical capacity and performance trending

Alerts

Hitachi Tuning Manager, another piece of the Command Suite framework, performs
integrated storage performance management for monitoring, reporting and analyzing
end-to-end storage performance and capacity for business applications, in addition to
detailed component performance metrics for Hitachi storage systems. It is a SAN-aware
product in that it monitors and provides performance metrics for servers, applications,
switches and Hitachi storage. This software correlates and analyzes storage resources
with servers and applications to improve overall system performance. It continuously
monitors comprehensive storage performance metrics to reduce delay or downtime
caused by performance issues. It facilitates root cause analysis to enable administrators
to efficiently identify and isolate performance bottlenecks. It allows users to configure
alerts for early notification when performance or capacity thresholds have been
exceeded. In addition, it provides the necessary performance information for customers
to do trending analysis, and forecasts future storage capacity and performance
requirements to minimize unnecessary infrastructure purchases.

What am I going to need to buy? What type of drives? How much capacity am I going
to need? These are the sort of questions that Tuning Manager can help to answer.

In summary, Tuning Manager is a storage performance management application that


maps, monitors and analyzes storage network resources from the application to the
storage device. It provides the end-to-end visibility you need to identify, isolate and
diagnose performance bottlenecks.

This software also provides customizable storage performance reports and alerts for
different audiences and reporting needs.

Page 3-24
Storage Management Tools
Hitachi Tuning Manager Overview

Hitachi Tuning Manager Overview

Detailed storage performance reporting

Custom storage reports and real-time performance alerts

Supports VMware virtual server environments

Provides performance data to Hitachi Tiered Storage Manager to create performance-metrics-based tiers

Provides performance data to Hitachi Device Manager Analytics to identify performance
problems and for health check reporting

Provides performance data to the Replication tab for analysis of Hitachi Universal Replicator

HTnM provides:

Detailed storage performance reporting

o In-depth performance statistics of Hitachi storage systems and all network


resources on the applications data path

o Reporting of Hitachi Dynamic Tiering and Hitachi Dynamic Provisioning pools for
usage analysis and optimization

Custom storage reports and real time performance alerts

o Customizable storage performance reports and alerts for different audiences and
reporting needs

Support for VMware virtual server environments

o Provides performance correlation for VMware virtual servers, virtual machines,


data stores and Hitachi storage logical devices

Page 3-25
Storage Management Tools
Hitachi Tuning Manager Overview

Performance data to Hitachi Tiered Storage Manager to create performance-metrics-based tiers

o By leveraging performance data gathered from network resources throughout the
application's data path, Hitachi Tuning Manager (HTnM) provides the following business
and customer benefits:

Improves management of storage growth: supports faster application deployment
through improved planning and forecasting of storage resource requirements

Enables operational excellence: maintains storage performance by reviewing historical
trends and identifying the source of bottlenecks

Mitigates risks and increases efficiency: prevents outages with advanced forecasting
and alerts

Reduces operational and capacity costs: enables more storage resources to be managed
per person

Page 3-26
Storage Management Tools
Hitachi Dynamic Link Manager (HDLM) Advanced

Hitachi Dynamic Link Manager (HDLM) Advanced

Reduce server downtime by immediate detection of path failures
Reduce TCO with consolidated path configuration and status management
Monitors the status of I/O access paths in large-scale multipath environments and the
operating status of I/O access paths for multiple servers
(with path ID, HBA, CHA port, storage system, device name and so on)

(Figure: HDLM and DMP instances on Windows, Linux, VMware ESXi, UNIX and Solaris hosts
attached to AMS, USP V/VM, VSP, VSP G1000, HUS 100 and HUS VM arrays; shown are
failover, zero RPO/RTO through storage clustering (HAM pair, P-VOL/S-VOL), real-time path
failure alerts, and a dashboard with a SAN status summary and an alerts view that enables
quick path failure detection and actions)

Hitachi Command Director - Central HCS Reporting and Operations

Hitachi Command Director Common Data Reporting Model

(Figure: Hitachi Device Manager, Hitachi Tuning Manager and Hitachi Tiered Storage Manager
feed the common data reporting model)

Command Director introduces a new common data reporting model across Hitachi Command
Suite. Using a common data reporting model, Command Director consolidates management
statistics from Device Manager (Hitachi Base Operating System), Tuning Manager and Tiered
Storage Manager for centralized storage management operations.

Page 3-27
Storage Management Tools
Hitachi Command Director

Hitachi Command Director

Merge storage performance data from multiple instances of Hitachi Tuning Manager

Merge storage configuration data from multiple instances of Hitachi Device Manager

Merge storage tier data from Hitachi Tiered Storage Manager (optional)

Page 3-28
Storage Management Tools
Hitachi Command Director Overview

Hitachi Command Director Overview

Centralized business application management policies and


operations

Monitor compliance to application-based storage service levels

Improves capacity utilization and planning of Hitachi storage


environments

Centralized business application management policies and operations

o Organize and view storage assets based on business applications and functions

o Consolidates reporting and management of storage configuration, tier, policy,


capacity, and performance across Hitachi Command Suite

Easily align Hitachi storage assets with the business applications that rely
on them

Monitor compliance to application-based storage service levels

o Define policy-based storage service levels by business application

o Monitor key storage capacity and performance indicators by applications to


ensure their adherence to required service levels

o Global dashboard for storage system health and application performance tracking

Page 3-29
Storage Management Tools
Hitachi Command Director Overview

Improves capacity utilization and planning of Hitachi storage environments

o Properly analyzes key statistics aggregated from multiple Hitachi Command Suite
products

o End-to-end capacity utilization trends from application, hosts/virtual host, to


storage devices

o Supports all Hitachi storage environments

By leveraging data from Hitachi Device Manager, Hitachi Tuning Manager, and Hitachi
Tiered Storage Manager, Command Director provides the following business use cases:

o Business centric view of storage allocations and utilizations

o Monitor applications performance and capacity utilization health

o Troubleshoot performance service level violations related to applications

o Provide chargeback support in terms of performance and capacity

o Correlate host and storage side capacity utilization trends for capacity planning

o Analyze capacity utilization to identify waste and risk

o Plan and identify the best place to introduce new application/workload in my


storage system

Page 3-30
Storage Management Tools
Hitachi Command Director (HCD)

Hitachi Command Director (HCD)

Centralized service-level management for mission-critical business applications to


optimize CAPEX and OPEX costs
Measures and reports on Service Level Objectives (SLOs)

One of the big challenges in any environment is to get a business intelligence view of the
storage environment to ensure that storage service level objectives (SLOs) for mission-critical
business applications are being met. IT organizations spend a considerable amount of time and
effort developing tracking processes to correlate and analyze storage resources back to the
respective business applications that rely on them. Without accurate and detailed storage
reporting, there are no assurances that application service levels are being met, and
effectiveness of storage management practices is limited.

Command Director consolidates business intelligence analysis for Hitachi Command Suite by
monitoring and ensuring storage service levels for business applications and storage system
health across a data center. Command Director facilitates customized dashboards for real-time
monitoring of key storage performance and capacity indicators by business application, such as
response times, IOPS (or input and output per second), data transfer rates, cache reads, writes
pending and utilized capacity. By verifying application-specific storage SLOs are being met,
administrators can implement policies to enable the rapid modification of the storage
environment for changing business requirements.

For their key applications, customers want to be able to monitor the Service Level Agreements
(SLAs) that they promised their consumers. If applications are meeting their SLAs, then that is
fine. If not, they need to know that through alerts, so they can begin their analysis of the
causes. Command Director allows them to set up a dashboard that is fine-tuned for their
environment, where they can get information on the state of their applications. By having this
ability they can be more proactive versus waiting for users to complain about performance.

Page 3-31
Storage Management Tools
Hitachi Command Director - Addresses the Following Challenges

Hitachi Command Director - Addresses the Following Challenges

Global Dashboard / Storage Status Summary
Quickly check storage status for my data center and monitor any service level violations
Review the global dashboard or the overall storage utilization summary report
Near real time application status and service level monitoring
Global reporting of defined thresholds and when they have been exceeded
Drill down into service level violations to isolate and investigate bottlenecks

Application Service Level Management
Assign service level objectives for my applications and investigate any service level violations
Define service level objectives per application
Enforce application service levels and storage tier policies

Business Views
Organize my storage assets to support the following business use cases:
Align mission-critical business applications to tier 1 storage assets
Increase and optimize capacity utilization
Implement chargeback and cost analysis
(Business views group business applications by geography and function, for example
geography (UK, USA) and function (Marketing, Sales); more business views can be generated
automatically based on business operations groupings)

Capacity Management / Reports
View and analyze historical utilization trends for the following activities:
Identify underutilized storage capacity
Determine optimal deployment for new application workloads
Properly plan future storage purchases

Page 3-32
Storage Management Tools
Hitachi Compute Systems Manager (HCSM)

Hitachi Compute Systems Manager (HCSM)

Provisioning, configuration, monitoring, and lifecycle management of Hitachi


Compute Blade and Compute Rack servers
Supports Hitachi Compute Blades (CB 500, CB 2000, CB 2500); Hitachi Compute
Rack (CR 210, CR 220H); 3rd-party servers (IBM, HP, Dell, CISCO and so on)

Hitachi Compute Systems Manager (HCSM) is a systems management tool which allows
seamless integration into Hitachi Command Suite to provide a single management view
of servers and storage.

Compute Systems Manager provides:

o Usability through its GUI being integrated with Command Suite

o Scalability (up to 10,000 heterogeneous servers)

o Maintainability and serviceability

Basic functionality is included with Hitachi servers at no additional charge. Additional


functionality and capability is available through optional plug-in modules.

Compute Systems Manager provides the provisioning, configuration, monitoring, and


lifecycle management of Hitachi Compute Systems, as well as 3rd-party servers such as
IBM, HP, Dell and Cisco.

Page 3-33
Storage Management Tools
Hitachi Infrastructure Director

Hitachi Infrastructure Director

Hitachi Infrastructure Director (HID)

Multi-array management, all new midrange block models (VSP G200 G800)

Page 3-34
Storage Management Tools
Hitachi Infrastructure Director

Hitachi Infrastructure Director

HID abstracts technology and management complexities to facilitate rapid infrastructure
deployments and platform self-service

Initial setup, configuration management and self-service maintenance

Reduces complexity
Delivers ease of use
Focused on the user, not on the technology

Smart/intelligent-based management:
Object-driven design
Abstracts complexities
Auto-builds array groups
Suggested pool configurations
Auto-zoning
Smart provisioning based on application templates

Page 3-35
Storage Management Tools
Hitachi Infrastructure Director GUI and Command Interfaces

Hitachi Infrastructure Director GUI and Command Interfaces

HID :: USER INTERFACES

User Access to HID with GUI, CLI and REST-API for further automation and retrieval of
performance data

REST = Representational State Transfer

Page 3-36
Storage Management Tools
HCS and HID Coexistence

HCS and HID Coexistence

Both HID and Hitachi Command Suite (HCS) can be used for management of Hitachi's
next-generation midrange storage platform

HID and HCS focus on specific management needs

Hitachi Command Suite: addresses broad enterprise infrastructure management requirements
and complex workflows for configuration, remote replication, high availability and data
migration

Hitachi Infrastructure Director: addresses ease of use, reduced complexity, recommended
storage configurations and end-to-end infrastructure lifecycle management

Dynamic management changes: hybrid cloud, converged, API, open source

o Configuration, Analytics, Mobility, Replication, Automation (new)

o New upcoming products:

Automation: Simplified provisioning (initially)

Analytics: Simplified performance analytics reporting via Mars

Page 3-37
Storage Management Tools
HCS and HID Feature-Function Matrix

HCS and HID Feature-Function Matrix

Feature-Function HCS HID


Self-service provisioning portal N N
Self-service setup and configuration workflows Y Y
Automated provisioning N N
Template-based provisioning N Y
Basic provisioning Y Y
Provisioning and data protection workflows N Y
Auto zoning N Y
Deep-dive performance monitoring and reporting Y planned
Basic performance monitoring and reporting Y planned
Replication management (complex) Y planned
Replication management (basic) Y planned
HA - GAD setup workflow (active/active) Y N
Migration - NDM setup workflow Y N
Storage virtualization setup workflow Y planned
Migration (basic) Y planned
Server/hypervisor management planned planned

Page 3-38
Storage Management Tools
Hi-Track Remote Monitoring System

Hi-Track Remote Monitoring System

Hi-Track Overview

Hi-Track Monitor agent service and remote maintenance tool:


Monitors the operation of the storage system at all times
Collects hardware status and error data and transmits it to HDS Support Center

Transport to the Hi-Track center can be through either HTTPS or FTP (SSL or
standard) through the public Internet or through dialup modem

Hi-Track can send email alerts to customers (user definable destinations) and
offers remote access to SVP for HDS support

Hi-Track Monitor agent is a Microsoft Windows application installed on the


SVP or a management server in the customer data center

Hi-Track Monitor agent is a software utility program

The Hi-Track Monitor agent monitors the operation of the storage at all times, collects
hardware status and error data and transmits this data through a modem to the Hitachi
Data Systems Support Center

o The Support Center analyzes the data and implements corrective action as
needed

o In the unlikely event of a component failure, Hi-Track Monitor service calls the
Hitachi Data Systems Support Center immediately to report the failure, without
requiring any action on the part of the user

o Hi-Track tool enables most problems to be identified and fixed prior to the actual
failure

o The advanced redundancy features enable the system to remain operational


even if one or more components fail

Page 3-39
Storage Management Tools
Hi-Track View Example

Hi-Track Monitor agent enables error analysis, case creation and error/information data
browsing functions

o When Hi-Track Monitor agent is installed and the storage system is configured to
allow it, Hitachi support staff can remotely connect to the storage system

o This feature provides a remote SVP mode for the large RAID systems that
enables the specialist to operate the SVP as if they were at the site

o This allows support specialists to provide immediate, remote troubleshooting and


assistance to any Hi-Track location

Note: Hi-Track Monitor agent does not have access to any user data stored on the storage

Hi-Track View Example

Page 3-40
Storage Management Tools
Hi-Track Overview: Hi-Track Monitor Agent - Mobile App

Hi-Track Overview: Hi-Track Monitor Agent - Mobile App

The Hi-Track iPhone/iPad app is used optionally in


concert with the Hi-Track Monitor agent

The app is targeted for use by customers to provide


additional value to HDS products and services and to
enhance the customer experience by allowing them to
view the status of monitored devices anytime, anywhere,
using a familiar mobile device and interface

The app interfaces with the Hi-Track Monitor agent


application at the customer site to acquire information
regarding the Hi-Track monitored devices

The app is currently available for download from the Apple


App Store

Page 3-41
Storage Management Tools
Module Summary

Module Summary

In this module, you should have learned to:


Identify the tools for managing hardware and software functionality
Compare and contrast the tools for managing storage
Provide an overview of Hitachi Command Suite (HCS) features and
functionality, including configuration, mobility, analytics and replication
Describe Hitachi Infrastructure Director (HID) and compare to HCS
Describe the purpose and functions of Hi-Track Remote Monitoring
system and the mobile app

In this module, you reviewed the following Hitachi storage management software products:
Hitachi Storage Navigator (legacy products)
Hitachi Command Suite (HCS)
Hitachi Device Manager (HDvM)
Hitachi Tiered Storage Manager (HTSM)
Hitachi Replication Manager (HRpM)
Hitachi Tuning Manager (HTnM)
Hitachi Compute Systems Manager (HCSM server management)

Hitachi Dynamic Link Manager Advanced (HDLM)


With the Hitachi Global Link Manager console (HGLM)

Hitachi Command Director (HCD)


Hitachi Infrastructure Director (HID)
Hi-Track Remote Monitoring system

Page 3-42
4. Storage Virtualization
Module Objectives

Upon completion of this module, you should be able to:


Describe virtualization of external storage
Describe virtual storage machines (VSM)
Describe nondisruptive migration (NDM)
Describe global-active device (GAD)

Page 4-1
Storage Virtualization
Hitachi Universal Volume Manager

Hitachi Universal Volume Manager

Components of Virtualization of External Storage

Hitachi Universal Volume Manager (UVM) license

Ports supporting external attribute

External storage

Volume to be virtualized, physically located in external storage

Page 4-2
Storage Virtualization
Virtualization of External Volumes (Example)

Virtualization of External Volumes (Example)

[Diagram: an HUS 110 is connected through switches SW1 and SW2 to target (T) and
external (E) ports of a VSP G1000; a logical volume in the HUS 110 is presented as a
virtual volume in the VSP G1000 and served to the host for server I/O.]

Steps:
1. Create a volume in the HUS 110
2. Map it via the two target ports to the WWPN of the external port in the VSP G1000
3. Virtualize it in the VSP G1000
4. Present it to a server

T = Target port, E = External port

Max. size of external volumes: 4TB

HUS = Hitachi Unified Storage
WWPN = World Wide Port Name
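For reference, the same virtualization steps can be driven from the CCI command line with
the raidcom command set. The sketch below is illustrative only; the port names, WWN, path
group, external group and LDEV IDs are invented placeholders and should be verified
against the CCI reference for the installed microcode:

    # Discover the external storage visible on external port CL3-A
    raidcom discover external_storage -port CL3-A

    # Map the HUS 110 LU (LUN 0 behind the reported WWN) into an external group
    raidcom add external_grp -path_grp 1 -external_grp_id 1-1 -port CL3-A \
            -external_wwn 50060e8010123456 -lun_id 0

    # Carve an internal LDEV from the external group (size must match the external LU)
    # and present it to the server through a target-port host group
    raidcom add ldev -external_grp_id 1-1 -ldev_id 0x4444 -capacity 100g
    raidcom add lun -port CL1-A-0 -ldev_id 0x4444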

Supported Storage Systems for UVM

Other supported manufacturers


Fujitsu
Gateway
HP
IBM
NEC
NetApp
Nexsan Technologies
Pillar Data Systems
Promise Technology
SGI
Sun Microsystems
Violin Memory

Generic profiles can additionally be used

Complete list at
http://www.hds.com/products/storage-systems/specifications/supported-external-storage.html

Page 4-3
Storage Virtualization
Virtual Storage Machine

Virtual Storage Machine

Virtual Storage Machine Essentials

Virtual storage machine


Is a container in a VSP G1000, which has assigned ports, host groups and
volumes
Is supported in VSP G1000 and the VSP midrange family only
Requires Hitachi Command Suite (HCS) for configuration

Use of virtual storage machines is required for


Nondisruptive migration (NDM)
Global-active device (GAD)
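As an illustration (not part of the course lab), a virtual storage machine can also be
defined from the CCI command line; the resource name, serial number, model keyword, port
and LDEV ID below are placeholder assumptions:

    # Create a resource group that acts as the VSM, emulating a VSP (model R700)
    # with serial number 12345 (illustrative values)
    raidcom add resource -resource_name VSM_Migration -virtual_type 12345 R700

    # Reserve a host group and an LDEV ID inside the new VSM
    raidcom add resource -resource_name VSM_Migration -port CL1-A-1
    raidcom add resource -resource_name VSM_Migration -ldev_id 0x4444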

Components of a Virtual Storage Machine

Page 4-4
Storage Virtualization
Adding Resources to Virtual Storage Machines

Adding Resources to Virtual Storage Machines

Resources and their descriptions:

Storage Systems: Specifies a physical storage system from any one of the VSP G1000
systems discovered in HCS. The virtual storage machine will be created on the specified
storage system.

Parity Groups: Specifies existing parity groups on the selected storage system. This
serves the same purpose as adding parity groups to a resource group for access control.
The user who manages this virtual storage machine can create new volumes from the parity
group.

LDEV IDs: Specifies the LDEVs that can be used in the virtual storage machine. You can
specify LDEVs already created in the storage system, or you can reserve LDEV IDs
(physical LDEV IDs) to be used by the virtual storage machine.

Storage Ports: Specifies existing ports on the selected storage system. This serves the
same purpose as adding storage ports to a resource group for access control. The user
who manages this virtual storage machine can use the port when allocating volumes.

Host Group Numbers: Specifies the host groups that can be used in the virtual storage
machine. You can specify unused host groups already created in the storage system, or
you can specify the number of host groups to be used by the virtual storage machine per
port.

Virtual Storage Machines in HDvM

Page 4-5
Storage Virtualization
Use Cases for Virtual Storage Machine

Use Cases for Virtual Storage Machine

Nondisruptive migration

Global-active device

Scalability

Page 4-6
Storage Virtualization
Nondisruptive Migration

Nondisruptive Migration

Nondisruptive Migration Use Case Preparation

Source DKC is VSP SN 12345
LDEV 11:11 is shown to the server as LUN 0; this volume should be migrated to a
VSP G1000 with SN 67890

Preparation steps:
1. Create command devices (CMD) in both storage systems and map them to the server
2. Map 11:11 additionally to an EXTERNAL port of the VSP G1000 (for virtualization)
3. Create a VSM in the VSP G1000
4. Create a virtual LDEV with the identity of the source LDEV (11:11) in the VSP G1000
5. Virtualize the source LDEV
6. Map it to the server

[Diagram: the server sees LUN 0; VSP SN 12345 holds LDEV 11:11 and a command device;
VSP G1000 SN 67890 holds a command device and external volume 44:44 carrying the virtual
identity 11:11, with resource groups RSG#0/RSG#1 and virtual storage machines VSM #0
(VSP G1000 SN 67890) and VSM #1 (VSP SN 12345); connections run through target (T) and
external (E) ports.]

Nondisruptive Migration is a GSS feature used to migrate customer volumes without
disturbing production.

As an enhancement it will also become available to customers (HCS integration)

CMD Command Device, a low-level interface for controlling functions via CCI commands

CCI Command Control Interface

CLI Command Line Interface (RAIDCOM command set in CCI)

VSM Virtual Storage Machine; emulates a storage machine's type and serial number

RSG Resource Group; a kind of virtual partition containing ports, host groups and so on
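At the CCI level, the step "create a virtual LDEV with the identity of the source LDEV"
corresponds to assigning a virtual LDEV ID inside the VSM. The commands below are a
hedged sketch with placeholder IDs; they are not the supported GSS migration procedure:

    # Release the default virtual LDEV ID of the target LDEV 44:44, then assign it
    # the identity of the source LDEV 11:11 inside the VSM (illustrative IDs)
    raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id 0x4444
    raidcom map resource -ldev_id 0x4444 -virtual_ldev_id 0x1111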

Page 4-7
Storage Virtualization
Nondisruptive Use Case Migration

Nondisruptive Use Case Migration

Delete the alternate path to the source DKC (VSP)

[Diagram: the server's LUN 0 path now runs only through VSP G1000 SN 67890; I/O to the
virtualized volume 44:44 passes through to LDEV 11:11 on VSP SN 12345 in cache-through
mode ("do not use cache").]

Change the cache mode from Through mode to Write Fix mode

[Diagram: same configuration, but I/O to the virtualized volume now uses the VSP G1000
cache ("use cache").]

Page 4-8
Storage Virtualization
Nondisruptive Use Case Migration

Migrate the virtual volume 44:44 to an internal physical one (99:99)

[Diagram: inside the VSP G1000, data is copied from the external volume 44:44 to the
internal volume 99:99 while the server continues to access LUN 0; the cache remains in
use.]

Identities are switched after migration is finished

[Diagram: after the switch, LUN 0 I/O lands on the internal volume 99:99, which now
carries the identity previously presented by 44:44 within VSM #1 (VSP SN 12345).]

Page 4-9
Storage Virtualization
Supported Cache Modes

The source DKC can be removed

[Diagram: VSP SN 12345 is disconnected; the server accesses LUN 0 on internal volume
99:99 of VSP G1000 SN 67890, which still presents the VSP SN 12345 identity through
VSM #1.]

Supported Cache Modes

                       Through Mode         Enabled/Disabled          Write Fix
Source DKC             I/O cached           I/O cached                I/O cached
Target DKC             I/O not cached       I/O cached                I/O cached
                       (bypass)             (conventional cache
                                            mode ON/OFF)
In case of failure     No data protection   No data protection        Data protection
in target DKC
Performance            Low                  EM: High / DM: Low        Low

Page 4-10
Storage Virtualization
Global-Active Device

Global-Active Device

Purpose of Global-Active Device

Continuing server I/O in case of disaster event

Aids easier server failover/failback with active-active high availability

Balance the load between data centers by moving VMs

Replacement for Hitachi High Availability Manager (HAM)

Components of Global-Active Device

Two VSP G1000 storage systems

Ports for remote replication (Initiator/RCU target)

Volumes to be replicated

External storage with Quorum device

External ports in both VSP G1000 for virtualization of Quorum device

HCS installed (recommended but not mandatory)

Hitachi Replication Manager (HRpM) installed and command devices in both storages

HRpM = Hitachi Replication Manager
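Once the quorum disk is virtualized and command devices are in place, a GAD pair is
typically created and monitored with CCI. The group name, HORCM instance number and
quorum ID below are invented for illustration:

    # Create the global-active device pair; -jq names the quorum disk ID and
    # fence level "never" is used for GAD (illustrative group/instance values)
    paircreate -g GAD_grp -f never -vl -jq 0 -IH0

    # Confirm that the pair reaches PAIR status on both storage systems
    pairdisplay -g GAD_grp -fxce -IH0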

Page 4-11
Storage Virtualization
Global-Active Device

Global-Active Device

[Diagram: two VSP G1000 systems (SN 12345 and SN 67890) connected by a GAD replication
path; virtual storage machines (VSM #0 and VSM #1) are defined on the systems; an HUS
provides the quorum (Q) disk, virtualized through UVM via external (E) ports on both
systems; data, virtualization and replication paths are shown.]

Page 4-12
Storage Virtualization
Differences Between VSP G1000 Global-Active Device and VSP High Availability Manager

Differences Between VSP G1000 Global-Active Device and VSP High Availability Manager

Function              Global-Active Device          High Availability Manager
Multipath I/O         Active-Active                 Active-Passive
Multipath Software    HDLM, Native OS Multipath     HDLM
PP Combination(*1)    YES(*2)                       NO
Operation I/F         HCS, Raid Manager             Raid Manager
Reserve               SCSI-2, SCSI-3, ATS           SCSI-2, ATS
Supported Models      VSP G1000                     USP V, VSP, HUS VM
Distance (max.)       100 km                        30 km

(*1) Combination with other replication Program Product (PP)

(*2) Target support microcode version may vary per PP

HDLM Hitachi Dynamic Link Manager

HCS Hitachi Command Suite

HAM Hitachi High Availability Manager

Page 4-13
Storage Virtualization
Module Summary

Module Summary

In this module, you should have learned how to:


Describe virtualization of external storage
Describe virtual storage machines (VSM)
Describe nondisruptive migration (NDM)
Describe global-active device (GAD)

Page 4-14
5. Replication
Module Objectives

Upon completion of this module, you should be able to:


Provide an overview of the replication offerings supported in the
functionality of the storage controller
Describe the components of in-system replication offerings including
Hitachi ShadowImage Replication and Hitachi Thin Image
Describe the components of remote replication offerings, including
Hitachi TrueCopy and Hitachi Universal Replicator
Describe the supported multidata center, remote replication
configurations

Page 5-1
Replication
Hitachi Replication Products

Hitachi Replication Products

Hitachi Replication Portfolio Overview

In-System Replication Solutions (Local Replication):
  Hitachi ShadowImage Replication - For full volume clones of business data with
  consistency
  Hitachi Thin Image - Point-in-time virtual volumes of data with consistency

Remote Replication Solutions:
  Hitachi TrueCopy - Synchronous, consistent clones at remote location up to 300km
  (~180 miles)
  Hitachi Universal Replicator (HUR) - Heterogeneous, asynchronous, journal vs.
  cache-based, pull vs. push, resilient at any distance

Hitachi Replication Manager
  Easy-to-use replication management tool for both open and mainframe environments

Page 5-2
Replication
Hitachi ShadowImage Replication

Hitachi ShadowImage Replication

Features
  Full physical copy of a volume
  Multiple copies at the same time
  Up to 9 copies of the source volume
  Immediately available for concurrent use by other applications (after split)
  No dependence on operating system, file system or database

Benefits
  Protects data availability
  Supports disaster recovery testing
  Eliminates the backup window

[Diagram: production volume and point-in-time copy of the production volume; normal
processing continues unaffected while the copy is used for parallel processing.]

The Hitachi ShadowImage In-System Replication software bundle is a nondisruptive,
host-independent data replication solution for creating copies of any customer-accessible
data within a single Hitachi storage system. The Hitachi ShadowImage In-System
Replication software bundle also increases the availability of revenue-producing
applications by enabling backup operations to run concurrently while business or
production applications are online.

Page 5-3
Replication
Hitachi Thin Image

Hitachi Thin Image

Benefits
  Reduce recovery time from data corruption or human errors while minimizing the amount
  of storage capacity needed for backups
  Achieve frequent and nondisruptive data backup operations while critical applications
  run unaffected
  Accelerate application testing and deployment with always-available copies of current
  production information
  Significantly reduce or eliminate backup window time requirements
  Improve operational efficiency by allowing multiple processes to run in parallel with
  access to the same information

Features
  Up to 1024 point-in-time snapshot copies
  Only changed data blocks stored in pool
  Version tracking of backups enables easy restores of just the data you need

An essential component of data backup and protection solutions is the ability to quickly and
easily copy data. Thin Image snapshot provides logical, change-based, point-in-time data
replication within Hitachi storage systems for immediate business use. Business usage can
include data backup and rapid recovery operations, as well as decision support, information
processing and software testing and development.

Maximum capacity of 2.1PB enables larger data sets or more virtual machines to be
protected

Maximum snapshots increased to 1024 for greater snapshot frequency and/or longer
retention periods

Asynchronous operation greatly improves response time to host

Enhanced for super-fast data recovery performance

Page 5-4
Replication
Hitachi TrueCopy Remote Replication

Hitachi TrueCopy Remote Replication

Hitachi TrueCopy Remote Replication bundle is ideal for the most mission-critical data situations
when replication and backup of saved data are extremely important. TrueCopy, for Hitachi
storage families, addresses these challenges with immediate real-time and robust replication
capabilities.

Page 5-5
Replication
Hitachi Universal Replicator

Hitachi Universal Replicator

Features
  Asynchronous replication
  Performance-optimized disk-based journaling
  Resource-optimized processes
  Advanced 3 and 4 data center capabilities
  Mainframe and open systems support

Benefits
  Resource optimization
  Mitigation of network problems and significantly reduced network costs
  Enhanced disaster recovery capabilities through 3 and 4 data center configurations
  Reduced costs due to single pane of glass heterogeneous replication

[Diagram: the application at the primary site writes to its volume and journal (JNL)
volume; journal data is transferred asynchronously to the journal volume and application
volume at the secondary site.]

The following describes the basic technology behind the disk-optimized journals:

I/O is initiated by the application and sent to the Universal Storage Platform.

It is captured in cache and sent to the disk journal, at which point it is written to disk.

The I/O complete is released to the application.

The remote system pulls the data and writes it to its own journals and then to the
replicated application volumes.

Hitachi Universal Replicator sorts the I/Os at the remote site by sequence and time stamp
(mainframe) and guarantees data integrity.

Note that Hitachi Universal Replicator offers full support for consistency groups through the
journal mechanism (journal groups).

Page 5-6
Replication
Hitachi Replication Manager

Hitachi Replication Manager

Single interface for performing all replication operations including:


Managing replication pairs
Hitachi ShadowImage
Replication
Hitachi Thin Image
Hitachi TrueCopy Remote
Replication bundle
Hitachi Universal Replicator
Configuring
Command devices
Hitachi Thin Image pools
Hitachi TrueCopy/HUR ports
Creating alerts
GUI representation of replication
environment

Replication Manager centralizes and simplifies replication management by integrating replication


capabilities to configure, monitor and manage Hitachi replication products for in-system or
distance replication across both open systems and mainframe environments.

Page 5-7
Replication
Tools Used For Setting Up Replication

Tools Used For Setting Up Replication

Graphical User Interface


Replication Manager full license
Geographically spread data center and site views, enhanced monitoring
and alerting features
Hitachi Device Manager (HDvM)
Restricted license of Hitachi Replication Manager
Device Manager agent is required on one server
Hitachi Storage Navigator (element manager)
Storage Centric

Use interface tools to manage replication. Interface tools can include the following:

HDvM Storage Navigator graphical user interface (GUI)

Device Manager Replication Manager

Command control interface

Page 5-8
Replication
Tools Used For Setting Up Replication - more

Tools Used For Setting Up Replication - more

Command Line Interface (CCI)


Used to script replication process
RAID manager/CCI software
Installed on a management server
Hitachi Open Remote Copy Manager (HORCM) configuration files
Command device needed
In-band traditional FC LUN mapping
Out-of-band IP connectivity to the storage system SVP
RAIDCOM CLI (storage configuration)

CCI Command control interface

o CCI represents the command line interface for performing replication operations

HORCM Hitachi Open Remote Copy Manager

o HORCM files contain the configuration for volumes to be replicated and used by
the commands available through CCI
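To make the HORCM/CCI pieces concrete, a minimal configuration file might look like the
sketch below. The service names, command device path, serial number, LDEV and group
names are all invented placeholders; the real values come from the storage system and
host environment:

    # /etc/horcm0.conf - one HORCM instance per storage system / site
    HORCM_MON
    # ip_address   service   poll(10ms)   timeout(10ms)
    localhost      horcm0    1000         3000

    HORCM_CMD
    # in-band command device mapped to this server (serial number is illustrative)
    \\.\CMD-412345:/dev/sd

    HORCM_LDEV
    # dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
    SI_grp        dev01      412345    10:10            0

    HORCM_INST
    # dev_group   ip_address of partner instance   service
    SI_grp        localhost                        horcm1

The instance is then started with horcmstart.sh 0 before any pair commands are issued.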

Page 5-9
Replication
Requirements For All Replication Products

Requirements For All Replication Products

Any volumes involved in replication operations (source P-VOL and copy S-VOL):
  Must be the same size (in blocks)
  Must be mapped to a port
  Source (P-VOL) is online and in use
  Copy (S-VOL) is mapped to a dummy or inactive host group
  Copy pair must be split for access to the copy (S-VOL)

Intermix of RAID levels and drive type is supported

Licensing depends on replication product or bundle and capacity to be replicated

Page 5-10
Replication
Replication Status Flow

Replication Status Flow

Create pair
  Establishes the initial copy between a production volume (P-VOL) and the copied
  volume (S-VOL)

Split pair
  The S-VOL is made identical to the P-VOL

Resynchronize pair
  Changes to the P-VOL since a pair split are copied to the S-VOL; can be reversed

Swap pair
  P-VOL and S-VOL roles are reversed

Delete pair
  Pairs are deleted and returned to simplex (unpaired) status

[Diagram: pair status flow - Simplex -> Synchronizing (P-VOL/S-VOL) -> Paired -> Split
-> Resync -> Swap -> Delete (Simplex).]

Pair Operations

Basic replication operations consist of creating, splitting, resynchronizing, swapping
and deleting a pair; these operations are common to all replication products:

Create Pair:

o This establishes the initial copy using two logical units that you specify

o Data is copied from the P-VOL to the S-VOL

o The P-VOL remains available to the host for read and write throughout the
operation

o Writes to the P-VOL are duplicated to the S-VOL (local replication asynchronously,
TrueCopy synchronously)

o The pair status changes to Paired when the initial copy is complete

Page 5-11
Replication
Replication Status Flow

Split:

o The S-VOL is made identical to the P-VOL and then copying from the P-VOL
stops

o Read/write access becomes available to and from the S-VOL

o While the pair is split, the array keeps track of changes to the P-VOL and S-VOL
in track maps

o The P-VOL remains fully accessible in Split status

Resynchronize pair:

o When a pair is resynchronized, changes in the P-VOL since the split are copied to
the S-VOL, making the S-VOL identical to the P-VOL again

o During a resync operation, the S-VOL is inaccessible to hosts for write operations;
the P-VOL remains accessible for read/write

o If a pair was suspended by the system because of a pair failure, the entire P-VOL
is copied to the S-VOL during a resync

Swap pair:

o The pair roles are reversed

Delete pair:

o The pair is deleted and the volumes return to Simplex status
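The same lifecycle maps onto CCI commands. The group name and timeout below are
placeholders, and the exact options differ between in-system and remote products, so
treat this as a hedged outline rather than a runbook:

    paircreate  -g SI_grp -vl                  # create pair: initial copy P-VOL -> S-VOL
    pairevtwait -g SI_grp -s pair -t 3600      # wait until the pair reaches PAIR status
    pairsplit   -g SI_grp                      # split: S-VOL becomes host-accessible
    pairresync  -g SI_grp                      # resync: copy P-VOL changes back to S-VOL
    pairresync  -g SI_grp -swaps               # swap: reverse P-VOL/S-VOL roles (remote copy types)
    pairsplit   -g SI_grp -S                   # delete: return both volumes to simplex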

Page 5-12
Replication
Thin Provisioning Awareness

Thin Provisioning Awareness

[Diagram: on the pair create instruction, pages already allocated to the S-VOL are
deleted (zero is written and the page is returned to the pool; usage 0%); during the
data copy, new pages are taken from the pool only for areas that are page-allocated on
the P-VOL.]

Saves bandwidth and reduces initial copy time: In thin-to-thin replication pairings,
only data pages actually consumed (allocated) from the Hitachi Dynamic Provisioning
(HDP) pool need to be copied during initial copy
Reduce license costs: You only have to provision license capacity for capacity actually
consumed (allocated) from the HDP pool

Thin provisioning awareness: applies to all HDS replication products (including HUR)!

Page 5-13
Replication
Hitachi ShadowImage Replication

Hitachi ShadowImage Replication

Hitachi ShadowImage Replication Overview

Simplifies and increases data protection and availability
Eliminates traditional backup window
Reduces application testing and development cycle times
Enables an uncorrupted copy of production data to be restored if an outage occurs
Allows disaster recovery testing without impacting production

[Diagram: VOL #1 (production) with its in-system copy VOL #2.]

ShadowImage Replication is the in-system copy facility for the Hitachi storage systems. It
enables server-free backups, which allows customers to exceed service level agreements (SLAs).
It fulfills 2 primary functions:

Copy open-systems data

Backup data to a second volume

ShadowImage Replication allows the pair to be split so the secondary volume can be used
for system backups, testing and data mining applications while the customer's business
continues to run on the primary disk. It uses either graphical or command line interfaces
to create a copy and then control data replication and fast resynchronization of logical
volumes within the system.

Page 5-14
Replication
Hitachi ShadowImage Replication RAID-Protected Clones

Hitachi ShadowImage Replication RAID-Protected Clones

Use ShadowImage Replication to create multiple clones of primary data
Open systems: 9 copies total

[Diagram: P-VOL with level-1 S-VOLs and cascaded level-2 S-VOLs.]

Page 5-15
Replication
Applications for ShadowImage In-System Replication

Applications for ShadowImage In-System Replication

Backup and recovery

Data warehousing and data mining applications

Application development

Run benchmarks and reports

Hitachi ShadowImage Replication is replication, backup and restore software that delivers
the copy flexibility customers need for meeting today's unpredictable business challenges.

With ShadowImage Replication, customers can:

Execute logical backups at faster speeds and with less effort than previously possible

Easily configure backups to execute across a storage area network

Manage backups from a central location

Increase the speed of applications

Expedite application testing and development

Keep a copy of data for backup or testing

Ensure data availability

Page 5-16
Replication
ShadowImage Replication Consistency Groups

ShadowImage Replication Consistency Groups

Internal ShadowImage Asynchronous Operation

Page 5-17
Replication
Pair Status Over Time

Pair Status Over Time

[Timeline diagram: pair create -> active pair; split -> pair split (copy B used for
backup); suspend -> pair suspend; resync/resume -> pair resynchronization; reverse
sync -> reverse synch/restore.]

Hitachi ShadowImage Replication operations include:

paircreate

pairsplit

pairresync

Page 5-18
Replication
Hitachi Thin Image

Hitachi Thin Image

What is Hitachi Thin Image?

Thin Image is snapshot technology that rapidly creates up to 1,024 instant point-in-time
copies for data protection or application testing purposes
  Saves disk space by storing only changed data blocks
  Speeds backups from hours to a few minutes, virtually eliminating traditional backup
  windows
  Restore possible from any snapshot volume

[Diagram: the host reads and writes to the P-VOL; snapshot V-VOLs (virtual volumes)
reference the P-VOL, with only changed data saved to the pool.]

[Diagram: Thin Image snapshot pair - (1) host write (Data B to the P-VOL), (2) write
complete returned to the host, (3) asynchronous upstage to cache (read miss); the old
Data A is kept for the V-VOL in the HDP snap pool.]

Subsequent writes to the same block for the same snapshot do not have to be moved
Single instance of data stored in the Hitachi Dynamic Provisioning Snap Pool regardless
of the number of snaps

Page 5-19
Replication
Hitachi Thin Image Technical Details

Hitachi Thin Image Technical Details

License
Part of the In-System Replication license
Requires a Hitachi Dynamic Provisioning license

Pool
Uses a special Thin Image pool, which is created similarly to an HDP pool
Cannot be shared with a regular HDP pool

Shared Memory
Does not use shared memory except for difference tables
Uses a cache management device, which is stored in the Thin Image pool

V-VOLs
Uses virtual volumes (V-VOL), a transparent view on the P-Vol at snapshot creation time
Maximum 1,024 snapshots

Management
Managed via RAIDCOM CLI (up to 1,024 generations) or CCI (up to 64 generations) or
Hitachi Replication Manager

Copy Mechanism
Employs a copy-after-write instead of copy-on-write mechanism whenever possible

Advanced Configuration
Can be combined with Hitachi ShadowImage Replication, Hitachi Universal Replicator
and Hitachi TrueCopy
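As an illustration of the RAIDCOM-managed path mentioned above, a snapshot group might be
driven as in the sketch below; the pool name, LDEV IDs, mirror unit and group name are
placeholder assumptions to be checked against the CCI reference:

    # Pair a P-VOL (10:10) with a snapshot S-VOL (20:10) in a Thin Image pool
    raidcom add snapshot -ldev_id 0x1010 0x2010 -pool TI_pool -snapshotgroup SnapGrp1

    # Store a point-in-time image for the whole group
    raidcom modify snapshot -snapshotgroup SnapGrp1 -snapshot_data create

    # Later, restore the P-VOL from one snapshot generation (mirror unit 3)
    raidcom modify snapshot -ldev_id 0x1010 -mirror_id 3 -snapshot_data restore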

Page 5-20
Replication
Hitachi Thin Image Components

Hitachi Thin Image Components

Thin Image basic components:
  S-VOL is a volume used by the host to access a snapshot and does not have physical
  disk space
  Thin Image pool consists of a group of basic volumes, similar to an HDP pool

[Diagram: the host can access the S-VOL; the S-VOL references the P-VOL and the TI pool.]

Page 5-21
Replication
Operations Flow Copy-on-Write Snapshot

Operations Flow Copy-on-Write Snapshot

Overview: Hitachi Thin Image in copy-on-write mode

1. Host writes to cache
2. If not previously moved (overwrite condition), the old data block is moved to the pool
3. I/O complete goes back to the host
4. New data block is moved to the P-VOL

[Diagram: P-VOL, S-VOL and pool showing the copy-on-write sequence above.]

Copy-on-write method workflow

In the copy-on-write method, store snapshot data in the following steps:

The host writes data to a P-VOL.

Snapshot data for the P-VOL is stored.

The write completion status is returned to the host after the snapshot data is stored.

Page 5-22
Replication
Operations Flow Copy-After-Write

Operations Flow Copy-After-Write

Overview: Hitachi Thin Image in copy-after-write mode

1. Host writes to cache
2. I/O complete goes back to the host
3. If not previously moved (overwrite condition), the old data block is moved to the pool
4. New data block is moved to the P-VOL

[Diagram: P-VOL, S-VOL and pool showing the copy-after-write sequence above.]

Copy-after-write method workflow

In the copy-after-write method, store snapshot data in the following steps:

1. The host writes data to a P-VOL.

2. The write completion status is returned to the host before the snapshot data is stored.

o Snapshot data for the P-VOL is stored in the background.

Page 5-23
Replication
Thin Image Copy-After-Write or Copy-on-Write Mode

Thin Image Copy-After-Write or Copy-on-Write Mode

Hitachi Thin Image uses either copy-after-write mode or copy-on-write mode, depending on
the P-VOL type and pool type

[Table: copy mode (copy-on-write vs. copy-after-write) by P-VOL type (normal volume,
DP-VOL, external volume) and pool type (RAID-5/RAID-1/RAID-6 pool, external pool, mixed
pool, V01 vs. V02 and later); refer to the product documentation for the exact matrix.]

Note: If the cache write pending rate is 60% or more, Thin Image shifts to copy-on-write mode to slow host writes

Page 5-24
Replication
Hitachi ShadowImage Replication Clones vs. Hitachi Thin Image Snapshots

Hitachi ShadowImage Replication Clones vs. Hitachi Thin Image


Snapshots

ShadowImage Replication: all data is saved from the P-VOL to the S-VOL; consistent
read/write access to the S-VOL is available only in split states

Hitachi Thin Image: only changed data is saved from the P-VOL to the data pool; the pool
is shared by multiple snapshot images (V-VOLs)

[Diagram: on the ShadowImage side, main and backup servers access the P-VOL and a
full-size S-VOL with differential data save; on the Thin Image side, the backup server
accesses virtual volumes (V-VOLs) linked to the P-VOL and the pool.]

Size of physical volume

The P-VOL and the S-VOL have exactly the same size in ShadowImage Replication

In Thin Image snapshot software, less disk space is required for building a V-VOL image
since only part of the V-VOL is on the pool and the rest is still on the primary volume

Pair configuration

Only 1 S-VOL can be created for every P-VOL in ShadowImage Replication

In Thin Image snapshot, there can be up to 64 V-VOLs per primary volume

Restore

A primary volume can only be restored from the corresponding secondary volume in
ShadowImage Replication

With Thin Image snapshot software the primary volume can be restored from any
snapshot image (V-VOL)

Page 5-25
Replication
Applications: Hitachi ShadowImage Clones vs. Hitachi Thin Image Snapshots

Applications: Hitachi ShadowImage Clones vs. Hitachi Thin Image


Snapshots

Simple positioning
Clones should be positioned for data repurposing and data protection (for example, DR testing)
where performance is a primary concern
Snapshots should be positioned for data protection (for example, backup) only where space saving
is the primary concern
                          ShadowImage                      Snapshot
Size of physical volume   P-VOL = S-VOL                    V-VOL smaller than P-VOL
Pair configuration        1:9 (P-VOL to S-VOLs)            1:1024 (P-VOL to V-VOLs)
Restore                   P-VOL can be restored from S-VOL Restore from any V-VOL

Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern

Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern

Page 5-26
Replication
Hitachi TrueCopy Remote Replication

Hitachi TrueCopy Remote Replication

Hitachi TrueCopy Overview

TrueCopy mirrors data between Hitachi storage systems across metropolitan distances

Supports replication between any enterprise storage systems

Can be combined with Hitachi Universal Replicator to support up to 4 data centers in a
multidata center DR configuration

Enables multiple, nondisruptive point-in-time copies in the event of logical corruption,
up to the point of an outage, when combined with Hitachi ShadowImage or Hitachi Thin
Image on the remote site

TrueCopy is recommended for mission-critical data protection requirements that mandate
recovery point objectives of zero or near-zero seconds (RPO=0)

TrueCopy can remotely copy data to a second data center located up to 200 miles/320 km
away (the distance limit is variable, but typically around 50-60 km for HUS)

TrueCopy uses synchronous data transfers, which means data from the host server requires
a write acknowledgment from the remote system, as an indication of a successful data
copy, before the host server can proceed to the next data write I/O sequence

In addition to disaster recovery, use case examples for TrueCopy also include test and
development, data warehousing and mining, as well as data migration purposes

Page 5-27
Replication
Basic Hitachi TrueCopy Replication Operation

Basic Hitachi TrueCopy Replication Operation

Duplicates production volume data to a remote site

Data at remote site remains synchronized with local site as data changes occur

Supported with Fibre Channel or iSCSI connection between sites

Requires write acknowledgment before new data is written, which ensures RPO=0 data
integrity

Can be combined with Hitachi ShadowImage or Hitachi Thin Image

About Hitachi TrueCopy

TrueCopy creates a duplicate of a production volume to a secondary volume located at a
remote site

Data in a TrueCopy backup stays synchronized with the data in the local array

o This happens when data is written from the host to the local array then to the
remote system, through Fibre Channel or iSCSI link

o The host holds subsequent output until acknowledgement is received from the
remote array for the previous output

When a synchronized pair is split, writes to the primary volume are no longer copied to
the secondary side

o Doing this means that the pair is no longer synchronous

Output to the local array is cached until the primary and secondary volumes are
resynchronized

When resynchronization takes place, only the changed data is transferred, rather than
the entire primary volume, which reduces copy time

Page 5-28
Replication
Basic Hitachi TrueCopy Replication Operation

Use TrueCopy with ShadowImage or Hitachi Copy-on-Write Snapshot, on either or both
local and remote sites

o These in-system copy tools allow restoration from one or more additional copies
of critical data

Besides disaster recovery, TrueCopy backup copies can be used for test and
development, data warehousing and mining, or migration applications

Recovery objectives

o Recovery time objective (RTO): Time within which business functions or applications
must be restored

o Recovery point objective (RPO): Point in time to which data must be restored to
successfully resume processing

Page 5-29
Replication
Hitachi TrueCopy Remote Replication (Synchronous)

Hitachi TrueCopy Remote Replication (Synchronous)

Zero data loss possible with fence-level = data

Performance: dual write plus 1 round-trip latency plus overhead

Support for consistency groups

[Diagram: (1) host write to the P-VOL, (2) synchronous remote copy to the S-VOL,
(3) remote copy complete returned, (4) write complete returned to the host.]

Provides a remote mirror of any data

The remote copy is always identical to the local copy

Allows very fast restart/recovery with no data loss

No dependence on host operating system, database or file system

Impacts application response time

Distance depends on application read/write activity, network bandwidth, response-time
tolerance and other factors

o Remote I/O is not posted complete to the application server until it is written to
the remote system

o Provides fast recovery with no data loss

o Limited distance response-time impact

Fence level of P-VOL:

data: writes to P-VOL will be refused when replication to remote site is not possible

status: writes to P-VOL allowed if S-VOL can be changed to error status (PSUE)

never: writes to P-VOL are always allowed (default for asynchronous replication)
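The fence level is chosen when the pair is created. In CCI terms (the group name and
instance are placeholders), for example:

    # Synchronous TrueCopy pair with fence level "data": host writes are rejected
    # whenever they cannot be mirrored to the remote S-VOL
    paircreate -g TC_grp -vl -f data -IH0
    pairdisplay -g TC_grp -fcx -IH0      # confirm PAIR status and the fence level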

Page 5-30
Replication
Hitachi Universal Replicator (Asynchronous)

Hitachi Universal Replicator (Asynchronous)

Hitachi Universal Replicator Overview

Hitachi Universal Replicator (HUR) is an asynchronous, continuous, nondisruptive,
host-independent remote data replication solution for disaster recovery or data
migration over long distances

HUR and Hitachi ShadowImage can be used together in the same storage system and on the
same volumes to provide multiple copies of data at the primary and/or remote sites

Hitachi TrueCopy Synchronous and HUR can be combined to allow advanced 3-data-center
(3DC) configurations for optimal data protection

TrueCopy Synchronous software and HUR can be combined to allow advanced 3-data-center
(3DC) configurations for optimal data protection

Hitachi Universal Replicator Benefits

Optimize resource usage (lower the cache and resource consumption on production/primary
storage systems)

Improve bandwidth utilization and simplify bandwidth planning

Improve operational efficiency and resiliency (tolerant of link failures between sites)

More flexibility in trading off between recovery point objective and cost

Implement advanced multi-data center support

Page 5-31
Replication
Hitachi Universal Replicator Functions

Hitachi Universal Replicator Functions

Host I/O process completes immediately after storing write data to the cache
memory of primary storage system Master Control Unit (MCU)

MCU will store data to be transferred in journal cache to be destaged to journal


volume in the event of link failure

Universal Replicator provides consistency of copied data by maintaining write


order in copy process
To achieve this, it attaches write order information to the data in the copy process

[Diagram: 1. write I/O from the primary host to the P-VOL; 2. write complete returned;
3. asynchronous remote copy of journal data from the primary JNL-VOL to the secondary
JNL-VOL; 4. remote copy complete, data applied to the S-VOL. Primary Storage (MCU) on
the left, Secondary Storage (RCU) on the right.]
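In CCI, an HUR pair references the journal IDs on both sides when it is created; the
group name, instance and journal IDs in the sketch below are invented for illustration:

    # Asynchronous HUR pair: -jp/-js name the primary and secondary journal IDs
    paircreate -g HUR_grp -vl -f async -jp 0 -js 1 -IH0
    pairdisplay -g HUR_grp -fxce -IH0    # watch the pair move from COPY to PAIR status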

Three-Data-Center Cascade Replication

[Diagram: the P-VOL is replicated by TrueCopy (sync) to an S-VOL at the intermediate
site; that volume is shared as the HUR P-VOL and replicated through journal (JNL)
volumes and journal groups to the S-VOL at the remote site.]

Hitachi TrueCopy S-VOL shared as HUR P-VOL in intermediate site

Hitachi TrueCopy Remote Replication synchronous software and Hitachi


Universal Replicator can be combined into a 3-data-center (3DC)
configuration
This is a 3DC Cascade illustration

Page 5-32
Replication
Three-Data-Center Multi-Target Replication

Three-Data-Center Multi-Target Replication

[Diagram: a primary P-VOL with its journal group is replicated to two remote systems,
one via TrueCopy (Sync) or HUR and one via HUR, each with its own journal volumes and
journal group; an optional delta resync link connects the two remote copies.]

Primary volume is shared as P-VOL for 2 remote systems

Mainframe supports up to 12x12x12 controller configurations
Open systems support up to 4x4x4 controller configurations
Requires D/R extended and, for mainframe, BCM extended

There might be limitations/guidelines related to what storage systems can be set up in this
configuration. Refer to the product documentation for latest information.

Four-Data-Center Multi-Target Replication

Typically for migration

Supported in both mainframe and open systems environments

[Diagram: 4DC configuration combining a 3DC cascade (HUR), a 3DC multi-target
(TrueCopy Sync) and a 2DC HUR leg with optional delta resync; each site has its own
journal volumes and journal groups.]

Page 5-33
Replication
Module Summary

Module Summary

In this module, you should have learned to:


Provide an overview of the replication offerings supported in the
functionality of the storage controller
Describe the components of in-system replication offerings including
Hitachi ShadowImage Replication and Hitachi Thin Image
Describe the components of remote replication offerings, including
Hitachi TrueCopy and Hitachi Universal Replicator
Describe the supported multidata center, remote replication
configurations

Additional Training offerings from HDS

Learn more:
CSI0147 Hitachi Enterprise In-System and TrueCopy Remote
Replications
TSI0150 Hitachi Universal Replicator Open Systems
TSI1635 Replication Solutions v7.x

Page 5-34
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

A AIX IBM UNIX.


AaaS Archive as a Service. A cloud computing AL Arbitrated Loop. A network in which nodes
business model. contend to send data and only 1 node at a
AAMux Active-Active Multiplexer. time is able to send data.

ACC Action Code. A SIM (System Information AL-PA Arbitrated Loop Physical Address.
Message). AMS Adaptable Modular Storage.
ACE Access Control Entry. Stores access rights APAR Authorized Program Analysis Reports.
for a single user or group within the APF Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL Access Control List. Stores a set of ACEs permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API Application Programming Interface.
Microsoft Windows security model.
APID Application Identification. An ID to
ACP Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB Arbitration or request.
ACP Domain Also Array Domain. All of the ARM Automated Restart Manager.
array-groups controlled by the same pair of Array Domain Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group Also called a parity group. A
Actuator (arm) Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD Active Directory. physical capacity.

ADC Accelerated Data Copy. Array Unit A group of hard disk drives in 1
RAID structure. Same as parity group.
Address A location of data, usually in main
memory or on a disk. A name or token that ASIC Application specific integrated circuit.
identifies a network component. In local area ASSY Assembly.
networks (LANs), for example, every node Asymmetric virtualization See Out-of-Band
has a unique address. virtualization.
ADP Adapter. Asynchronous An I/O operation whose
ADS Active Directory Service. initiator does not await its completion before

Page G-1


proceeding with other work. Asynchronous this term are subject to proprietary
I/O operations enable an initiator to have trademark disputes in multiple countries at
multiple concurrent I/O operations in the present time.
progress. Also called Out-of-Band BIOS Basic Input/Output System. A chip
virtualization. located on all computer motherboards that
ATA Advanced Technology Attachment. A disk governs how a system boots and operates.
drive implementation that integrates the BLKSIZE Block size.
controller on the disk drive itself. Also
known as IDE (Integrated Drive Electronics). BLOB Binary large object.

ATR Autonomic Technology Refresh. BP Business processing.

Authentication The process of identifying an BPaaS Business Process as a Service. A cloud


individual, usually based on a username and computing business model.
password. BPAM Basic Partitioned Access Method.
AUX Auxiliary Storage Manager. BPM Business Process Management.
Availability Consistent direct access to BPO Business Process Outsourcing. Dynamic
information over time. BPO services refer to the management of
including human resources delivered in a
B pay-per-use billing relationship or a self-
B4 A group of 4 HDU boxes that are used to service consumption model.
contain 128 HDDs. BST Binary Search Tree.
BA Business analyst. BSTP Blade Server Test Program.
Back end In client/server applications, the BTU British Thermal Unit.
client part of the program is often called the
Business Continuity Plan Describes how an
front end and the server part is called the
organization will resume partially or
back end.
completely interrupted critical functions
Backup imageData saved during an archive within a predetermined time after a
operation. It includes all the associated files, disruption or a disaster. Sometimes also
directories, and catalog information of the called a Disaster Recovery Plan.
backup operation.
BASM Basic Sequential Access Method.
BATCTR Battery Control PCB.
C
CA (1) Continuous Access software (see
BC (1) Business Class (in contrast with EC,
HORC), (2) Continuous Availability or (3)
Enterprise Class). (2) Business Coordinator.
Computer Associates.
BCP Base Control Program.
Cache Cache Memory. Intermediate buffer
BCPii Base Control Program internal interface. between the channels and drives. It is
BDAM Basic Direct Access Method. generally available and controlled as 2 areas
BDW Block Descriptor Word. of cache (cache A and cache B). It may be
battery-backed.
BED Back end director. Controls the paths to
the HDDs. Cache hit rate When data is found in the cache,
it is called a cache hit, and the effectiveness
Big Data Refers to data that becomes so large in of a cache is judged by its hit rate.
size or quantity that a dataset becomes
awkward to work with using traditional Cache partitioning Storage management
database management systems. Big data software that allows the virtual partitioning
entails data capacity or measurement that of cache and allocation of it to different
requires terms such as Terabyte (TB), applications.
Petabyte (PB), Exabyte (EB), Zettabyte (ZB) CAD Computer-Aided Design.
or Yottabyte (YB). Note that variations of

Page G-2


CAGR Compound Annual Growth Rate. CDWP Cumulative disk write throughput.
Capacity Capacity is the amount of data that a CE Customer Engineer.
storage system or drive can store after CEC Central Electronics Complex.
configuration and/or formatting.
CentOS Community Enterprise Operating
Most data storage companies, including HDS, System.
calculate capacity based on the premise that
1KB = 1,024 bytes, 1MB = 1,024 kilobytes, Centralized Management Storage data
1GB = 1,024 megabytes, and 1TB = 1,024 management, capacity management, access
gigabytes. See also Terabyte (TB), Petabyte security management, and path
(PB), Exabyte (EB), Zettabyte (ZB) and management functions accomplished by
Yottabyte (YB). software.

CAPEX Capital expenditure the cost of CF Coupling Facility.


developing or providing non-consumable CFCC Coupling Facility Control Code.
parts for the product or system. For example, CFW Cache Fast Write.
the purchase of a photocopier is the CAPEX,
and the annual paper and toner cost is the CH Channel.
OPEX. (See OPEX). CH S Channel SCSI.
CAS (1) Column Address Strobe. A signal sent CHA Channel Adapter. Provides the channel
to a dynamic random access memory interface control functions and internal cache
(DRAM) that tells it that an associated data transfer functions. It is used to convert
address is a column address. CAS-column the data format between CKD and FBA. The
address strobe sent by the processor to a CHA contains an internal processor and 128
DRAM circuit to activate a column address. bytes of edit buffer memory. Replaced by
(2) Content-addressable Storage. CHB in some cases.
CBI Cloud-based Integration. Provisioning of a CHA/DKA Channel Adapter/Disk Adapter.
standardized middleware platform in the CHAP Challenge-Handshake Authentication
cloud that can be used for various cloud Protocol.
integration scenarios.
CHB Channel Board. Updated DKA for Hitachi
An example would be the integration of Unified Storage VM and additional
legacy applications into the cloud or enterprise components.
integration of different cloud-based
Chargeback A cloud computing term that refers
applications into one application.
to the ability to report on capacity and
CBU Capacity Backup. utilization by application or dataset,
CBX Controller chassis (box). charging business users or departments
based on how much they use.
CC Common Criteria. In regards to Information
Technology Security Evaluation, it is a CHF Channel Fibre.
flexible, cloud related certification CHIP Client-Host Interface Processor.
framework that enables users to specify Microprocessors on the CHA boards that
security functional and assurance process the channel commands from the
requirements. hosts and manage host access to cache.
CCHH Common designation for Cylinder and CHK Check.
Head. CHN Channel adapter NAS.
CCI Command Control Interface. CHP Channel Processor or Channel Path.
CCIF Cloud Computing Interoperability CHPID Channel Path Identifier.
Forum. A standards organization active in CHSN or C-HSN Cache Memory Hierarchical
cloud computing. Star Network.
CDP Continuous Data Protection. CHT Channel tachyon. A Fibre Channel
CDR Clinical Data Repository. protocol controller.
CICS Customer Information Control System.

Page G-3


CIFS protocol Common internet file system is a Private cloud (or private network cloud)
platform-independent file sharing system. A Public cloud (or public network cloud)
network file system accesses protocol
Virtual private cloud (or virtual private
primarily used by Windows clients to
network cloud)
communicate file access requests to
Windows servers. Cloud Enabler a concept, product or solution
that enables the deployment of cloud
CIM Common Information Model.
computing. Key cloud enablers include:
CIS Clinical Information System.
Data discoverability
CKD Count-key Data. A format for encoding
Data mobility
data on hard disk drives; typically used in
the mainframe environment. Data protection
CKPT Check Point. Dynamic provisioning
CL See Cluster. Location independence

CLA See Cloud Security Alliance. Multitenancy to ensure secure privacy

CLI Command Line Interface. Virtualization

CLPR Cache Logical Partition. Cache can be Cloud Fundamental A core requirement to the
deployment of cloud computing. Cloud
divided into multiple virtual cache
memories to lessen I/O contention. fundamentals include:

Cloud Computing Cloud computing refers to Self service


applications and services that run on a Pay per use
distributed network using virtualized Dynamic scale up and scale down
resources and accessed by common Internet
protocols and networking standards. It is Cloud Security Alliance A standards
distinguished by the notion that resources are organization active in cloud computing.
virtual and limitless, and that details of the Cloud Security Alliance GRC Stack The Cloud
physical systems on which software runs are Security Alliance GRC Stack provides a
abstracted from the user. Source: Cloud toolkit for enterprises, cloud providers,
Computing Bible, Barrie Sosinsky (2011). security solution providers, IT auditors and
Cloud computing often entails an as a other key stakeholders to instrument and
service business model that may entail one assess both private and public clouds against
or more of the following: industry established best practices,
standards and critical compliance
Archive as a Service (AaaS) requirements.
Business Process as a Service (BPaas)
CLPR Cache Logical Partition.
Failure as a Service (FaaS)
Cluster A collection of computers that are
Infrastructure as a Service (IaaS) interconnected (typically at high-speeds) for
IT as a Service (ITaaS) the purpose of improving reliability,
Platform as a Service (PaaS) availability, serviceability or performance
(via load balancing). Often, clustered
Private File Tiering as a Service (PFTaaS) computers have access to a common pool of
Software as a Service (SaaS) storage and run special software to
SharePoint as a Service (SPaaS) coordinate the component computers'
activities.
SPI refers to the Software, Platform and
Infrastructure as a Service business model. CM (1) Cache Memory, Cache Memory Module.
Cloud network types include the following: Intermediate buffer between the channels
and drives. It has a maximum of 64GB (32GB
Community cloud (or community x 2 areas) of capacity. It is available and
network cloud) controlled as 2 areas of cache (cache A and
Hybrid cloud (or hybrid network cloud)

Page G-4


cache B). It is fully battery-backed (48 hours). Corporate governance Organizational
(2) Content Management. compliance with government-mandated
CM DIR Cache Memory Directory. regulations.

CME Communications Media and CP Central Processor (also called Processing


Unit or PU).
Entertainment.
CPC Central Processor Complex.
CM-HSN Control Memory Hierarchical Star
Network. CPM Cache Partition Manager. Allows for
partitioning of the cache and assigns a
CM PATH Cache Memory Access Path. Access
partition to a LU; this enables tuning of the
Path from the processors of CHA, DKA PCB
systems performance.
to Cache Memory.
CPOE Computerized Physician Order Entry
CM PK Cache Memory Package. (Provider Ordered Entry).
CM/SM Cache Memory/Shared Memory. CPS Cache Port Slave.
CMA Cache Memory Adapter. CPU Central Processing Unit.
CMD Command. CRM Customer Relationship Management.
CMG Cache Memory Group. CSA Cloud Security Alliance.
CNAME Canonical NAME. CSS Channel Subsystem.
CNS Cluster Name Space or Clustered Name CS&S Customer Service and Support.
Space. CSTOR Central Storage or Processor Main
CNT Cumulative network throughput. Memory.
CoD Capacity on Demand. C-Suite The C-suite is considered the most
important and influential group of
Community Network Cloud Infrastructure individuals at a company. Referred to as
shared between several organizations or the C-Suite within a Healthcare provider.
groups with common concerns.
CSV Comma Separated Value or Cluster Shared
Concatenation A logical joining of 2 series of Volume.
data, usually represented by the symbol |.
In data communications, 2 or more data are CSVP Customer-specific Value Proposition.
often concatenated to provide a unique CSW Cache Switch PCB. The cache switch
name or reference (such as, S_ID | X_ID). connects the channel adapter or disk adapter
Volume managers concatenate disk address to the cache. Each of them is connected to the
spaces to present a single larger address cache by the Cache Memory Hierarchical
space. Star Net (C-HSN) method. Each cluster is
Connectivity technology A program or device's provided with the 2 CSWs, and each CSW
ability to link with other programs and can connect 4 caches. The CSW switches any
devices. Connectivity technology allows of the cache paths to which the channel
programs on a given computer to run adapter or disk adapter is to be connected
routines or access objects on another remote through arbitration.
computer. CTG Consistency Group.
Controller A device that controls the transfer of CTL Controller module.
data from a computer to a peripheral device
CTN Coordinated Timing Network.
(including a storage system) and vice versa.
CU Control Unit. Refers to a storage subsystem.
Controller-based virtualization Driven by the
The hexadecimal number to which 256
physical controller at the hardware
microcode level versus at the application LDEVs may be assigned.
software layer and integrates into the CUDG Control Unit Diagnostics. Internal
infrastructure to allow virtualization across system tests.
heterogeneous storage and third party CUoD Capacity Upgrade on Demand.
products.
CV Custom Volume.

Page G-5


CVS Customizable Volume Size. Software used context, data migration is the same as
to create custom volume sizes. Marketed Hierarchical Storage Management (HSM).
under the name Virtual LVI (VLVI) and Data Pipe or Data Stream The connection set up
Virtual LUN (VLUN). between the MediaAgent, source or
CWDM Course Wavelength Division destination server is called a Data Pipe or
Multiplexing. more commonly a Data Stream.
CXRC Coupled z/OS Global Mirror. Data Pool A volume containing differential
D Data Protection Directive A major compliance
and privacy protection initiative within the
DA Device Adapter.
European Union (EU) that applies to cloud
DACL Discretionary access control list (ACL). computing. Includes the Safe Harbor
The part of a security descriptor that stores Agreement.
access rights for users and groups.
Data Stream CommVaults patented high
DAD Device Address Domain. Indicates a site performance data mover used to move data
of the same device number automation back and forth between a data source and a
support function. If several hosts on the MediaAgent or between 2 MediaAgents.
same site have the same device number
Data Striping Disk array data mapping
system, they have the same name.
technique in which fixed-length sequences of
DAP Data Access Path. Also known as Zero virtual disk data addresses are mapped to
Copy Failover (ZCF). sequences of member disk addresses in a
DAS Direct Attached Storage. regular rotating pattern.
DASD Direct Access Storage Device. Data Transfer Rate (DTR) The speed at which
data can be transferred. Measured in
Data block A fixed-size unit of data that is
kilobytes per second for a CD-ROM drive, in
transferred together. For example, the
bits per second for a modem, and in
X-modem protocol transfers blocks of 128
megabytes per second for a hard drive. Also,
bytes. In general, the larger the block size,
often called data rate.
the faster the data transfer rate.
DBL Drive box.
Data Duplication Software duplicates data, as
in remote copy or PiT snapshots. Maintains 2 DBMS Data Base Management System.
copies of data. DBX Drive box.
Data Integrity Assurance that information will DCA Data Cache Adapter.
be protected from modification and
DCTL Direct coupled transistor logic.
corruption.
DDL Database Definition Language.
Data Lifecycle Management An approach to
information and storage management. The DDM Disk Drive Module.
policies, processes, practices, services and DDNS Dynamic DNS.
tools used to align the business value of data DDR3 Double data rate 3.
with the most appropriate and cost-effective
storage infrastructure from the time data is DE Data Exchange Software.
created through its final disposition. Data is Device Management Processes that configure
aligned with business requirements through and manage storage systems.
management policies and service levels DFS Microsoft Distributed File System.
associated with performance, availability,
recoverability, cost, and what ever DFSMS Data Facility Storage Management
parameters the organization defines as Subsystem.
critical to its operations. DFSM SDM Data Facility Storage Management
Data Migration The process of moving data Subsystem System Data Mover.
from 1 storage device to another. In this

Page G-6


DFSMSdfp Data Facility Storage Management 8 LUs; a large one, with hundreds of disk
Subsystem Data Facility Product. drives, can support thousands.
DFSMSdss Data Facility Storage Management DKA Disk Adapter. Also called an array control
Subsystem Data Set Services. processor (ACP). It provides the control
DFSMShsm Data Facility Storage Management functions for data transfer between drives
Subsystem Hierarchical Storage Manager. and cache. The DKA contains DRR (Data
Recover and Reconstruct), a parity generator
DFSMSrmm Data Facility Storage Management circuit. Replaced by DKB in some cases.
Subsystem Removable Media Manager.
DKB Disk Board. Updated DKA for Hitachi
DFSMStvs Data Facility Storage Management Unified Storage VM and additional
Subsystem Transactional VSAM Services. enterprise components.
DFW DASD Fast Write. DKC Disk Controller Unit. In a multi-frame
DICOM Digital Imaging and Communications configuration, the frame that contains the
in Medicine. front end (control and memory
DIMM Dual In-line Memory Module. components).
Direct Access Storage Device (DASD) A type of DKCMN Disk Controller Monitor. Monitors
storage device, in which bits of data are temperature and power status throughout
stored at precise locations, enabling the the machine.
computer to retrieve information directly DKF Fibre disk adapter. Another term for a
without having to scan a series of records. DKA.
Direct Attached Storage (DAS) Storage that is DKU Disk Array Frame or Disk Unit. In a
directly attached to the application or file multi-frame configuration, a frame that
server. No other device on the network can contains hard disk units (HDUs).
access the stored data. DKUPS Disk Unit Power Supply.
Director class switches Larger switches often DLIBs Distribution Libraries.
used as the core of large switched fabrics.
DKUP Disk Unit Power Supply.
Disaster Recovery Plan (DRP) A plan that
describes how an organization will deal with DLM Data Lifecycle Management.
potential disasters. It may include the DMA Direct Memory Access.
precautions taken to either maintain or DM-LU Differential Management Logical Unit.
quickly resume mission-critical functions. DM-LU is used for saving management
Sometimes also referred to as a Business information of the copy functions in the
Continuity Plan. cache.
Disk Administrator An administrative tool that DMP Disk Master Program.
displays the actual LU storage configuration.
DMT Dynamic Mapping Table.
Disk Array A linked group of 1 or more
physical independent hard disk drives DMTF Distributed Management Task Force. A
generally used to replace larger, single disk standards organization active in cloud
drive systems. The most common disk computing.
arrays are in daisy chain configuration or DNS Domain Name System.
implement RAID (Redundant Array of DOC Deal Operations Center.
Independent Disks) technology.
A disk array may contain several disk drive Domain A number of related storage array
trays, and is structured to improve speed groups.
and increase protection against loss of data. DOO Degraded Operations Objective.
Disk arrays organize their data storage into DP Dynamic Provisioning (pool).
Logical Units (LUs), which appear as linear
DP-VOL Dynamic Provisioning Virtual Volume.
block paces to their clients. A small disk
array, with a few disks, might support up to DPL (1) (Dynamic) Data Protection Level or (2)
Denied Persons List.



DR Disaster Recovery. EHR Electronic Health Record.
DRAC Dell Remote Access Controller. EIG Enterprise Information Governance.
DRAM Dynamic random access memory. EMIF ESCON Multiple Image Facility.
DRP Disaster Recovery Plan. EMPI Electronic Master Patient Identifier. Also
DRR Data Recover and Reconstruct. Data Parity known as MPI.
Generator chip on DKA. Emulation In the context of Hitachi Data
DRV Dynamic Reallocation Volume. Systems enterprise storage, emulation is the
logical partitioning of an Array Group into
DSB Dynamic Super Block. logical devices.
DSF Device Support Facility. EMR Electronic Medical Record.
DSF INIT Device Support Facility Initialization
ENC Enclosure or Enclosure Controller. The
(for DASD).
units that connect the controllers with the
DSP Disk Slave Program. Fibre Channel disks. They also allow for
DT Disaster tolerance. online extending a system by adding RKAs.
DTA Data adapter and path to cache-switches. ENISA European Network and Information
Security Agency.
DTR Data Transfer Rate.
EOF End of Field.
DVE Dynamic Volume Expansion.
EOL End of Life.
DW Duplex Write.
EPO Emergency Power Off.
DWDM Dense Wavelength Division
Multiplexing. EREP Error Reporting and Printing.

DWL Duplex Write Line or Dynamic Workspace Linking.
-back to top-
E
ERP Enterprise Resource Planning.
ESA Enterprise Systems Architecture.
ESB Enterprise Service Bus.
ESC Error Source Code.
ESD Enterprise Systems Division (of Hitachi).
EAL Evaluation Assurance Level (EAL1
through EAL7). The EAL of an IT product or ESCD ESCON Director.
system is a numerical security grade ESCON Enterprise Systems Connection. An
assigned following the completion of a input/output (I/O) interface for mainframe
Common Criteria security evaluation, an computer connections to storage devices
international standard in effect since 1999. developed by IBM.
EAV Extended Address Volume. ESD Enterprise Systems Division.
EB Exabyte. ESDS Entry Sequence Data Set.
EC Enterprise Class (in contrast with BC, ESS Enterprise Storage Server.
Business Class). ESW Express Switch or E Switch. Also referred
ECC Error Checking and Correction. to as the Grid Switch (GSW).
ECC.DDR SDRAM Error Correction Code Ethernet A local area network (LAN)
Double Data Rate Synchronous Dynamic architecture that supports clients and servers
RAM Memory. and uses twisted pair cables for connectivity.
ECM Extended Control Memory. ETR External Time Reference (device).
ECN Engineering Change Notice. EVS Enterprise Virtual Server.
E-COPY Serverless or LAN free backup. Exabyte (EB) A measurement of data or data
storage. 1EB = 1,024PB.
EFI Extensible Firmware Interface. EFI is a
specification that defines a software interface EXCP Execute Channel Program.
between an operating system and platform ExSA Extended Serial Adapter.
firmware. EFI runs on top of BIOS when a -back to top-
LPAR is activated.



F achieved by including redundant instances
of components whose failure would make
FaaS Failure as a Service. A proposed business the system inoperable, coupled with facilities
model for cloud computing in which large- that allow the redundant components to
scale, online failure drills are provided as a assume the function of failed ones.
service in order to test real cloud
deployments. Concept developed by the FAIS Fabric Application Interface Standard.
College of Engineering at the University of FAL File Access Library.
California, Berkeley in 2011. FAT File Allocation Table.
Fabric The hardware that connects Fault Tolerant Describes a computer system or
workstations and servers to storage devices component designed so that, in the event of a
in a SAN is referred to as a "fabric." The SAN component failure, a backup component or
fabric enables any-server-to-any-storage procedure can immediately take its place with
device connectivity through the use of Fibre no loss of service. Fault tolerance can be
Channel switching technology. provided with software, embedded in
Failback The restoration of a failed system hardware or provided by hybrid combination.
share of a load to a replacement component. FBA Fixed-block Architecture. Physical disk
For example, when a failed controller in a sector mapping.
redundant configuration is replaced, the FBA/CKD Conversion The process of
devices that were originally controlled by converting open-system data in FBA format
the failed controller are usually failed back to mainframe data in CKD format.
to the replacement controller to restore the FBUS Fast I/O Bus.
I/O balance, and to restore failure tolerance.
FC Fibre Channel or Field-Change (microcode
Similarly, when a defective fan or power
update). A technology for transmitting data
supply is replaced, its load, previously borne
between computer devices; a set of
by a redundant component, can be failed
standards for a serial I/O bus capable of
back to the replacement part.
transferring data between 2 ports.
Failed over A mode of operation for failure-
FC RKAJ Fibre Channel Rack Additional.
tolerant systems in which a component has
Module system acronym refers to an
failed and its function has been assumed by
additional rack unit that houses additional
a redundant component. A system that
hard drives exceeding the capacity of the
protects against single failures operating in
core RK unit.
failed over mode is not failure tolerant, as
failure of the redundant component may FC-0 Lowest layer on Fibre Channel transport.
render the system unable to function. Some This layer represents the physical media.
systems (for example, clusters) are able to FC-1 This layer contains the 8b/10b encoding
tolerate more than 1 failure; these remain scheme.
failure tolerant until no redundant FC-2 This layer handles framing and protocol,
component is available to protect against frame format, sequence/exchange
further failures. management and ordered set usage.
Failover A backup operation that automatically FC-3 This layer contains common services used
switches to a standby database server or by multiple N_Ports in a node.
network if the primary system fails, or is FC-4 This layer handles standards and profiles
temporarily shut down for servicing. Failover for mapping upper level protocols like SCSI
is an important fault tolerance function of an IP onto the Fibre Channel Protocol.
mission-critical systems that rely on constant FCA Fibre Channel Adapter. Fibre interface
accessibility. Also called path failover. card. Controls transmission of fibre packets.
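As a rough illustration of the failover behavior described above (not any particular HDS implementation), multipath software typically retries an I/O on a standby path when the primary path reports an error; the path names and the send_io function below are hypothetical placeholders.

    # Hypothetical path failover: try the primary path first, fail over to the
    # standby path if the primary raises an I/O error.
    def send_io(path, block):
        # Placeholder for a real driver call; assumed to raise IOError on path failure.
        raise IOError("path " + path + " unavailable")

    def write_with_failover(block, paths=("primary", "standby")):
        last_error = None
        for path in paths:                 # ordered list: primary first, then standby
            try:
                return send_io(path, block)
            except IOError as err:
                last_error = err           # remember the failure and fail over to the next path
        raise last_error                   # every path failed; surface the last error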
Failure tolerance The ability of a system to FC-AL Fibre Channel Arbitrated Loop. A serial
continue to perform its function or at a data transfer architecture developed by a
reduced performance level, when 1 or more consortium of computer and mass storage
of its components has failed. Failure device manufacturers, and is now being
tolerance in disk subsystems is often standardized by ANSI. FC-AL was designed



for new mass storage devices and other physical link rates to make them up to 8
peripheral devices that require very high times as efficient as ESCON (Enterprise
bandwidth. Using optical fiber to connect System Connection), IBM's previous fiber
devices, FC-AL supports full-duplex data optic channel standard.
transfer rates of 100MB/sec. FC-AL is FIPP Fair Information Practice Principles.
compatible with SCSI for high-performance Guidelines for the collection and use of
storage systems. personal information created by the United
FCC Federal Communications Commission. States Federal Trade Commission (FTC).
FCIP Fibre Channel over IP. A network storage FISMA Federal Information Security
technology that combines the features of Management Act of 2002. A major
Fibre Channel and the Internet Protocol (IP) compliance and privacy protection law that
to connect distributed SANs over large applies to information systems and cloud
distances. FCIP is considered a tunneling computing. Enacted in the United States of
protocol, as it makes a transparent point-to- America in 2002.
point connection between geographically FLGFAN Front Logic Box Fan Assembly.
separated SANs over IP networks. FCIP
relies on TCP/IP services to establish FLOGIC Box Front Logic Box.
connectivity between remote SANs over FM Flash Memory. Each microprocessor has
LANs, MANs, or WANs. An advantage of FM. FM is non-volatile memory that contains
FCIP is that it can use TCP/IP as the microcode.
transport while keeping Fibre Channel fabric FOP Fibre Optic Processor or fibre open.
services intact.
FQDN Fully Qualified Domain Name.
FCoE Fibre Channel over Ethernet. An
encapsulation of Fibre Channel frames over FPC Failure Parts Code or Fibre Channel
Ethernet networks. Protocol Chip.
FCP Fibre Channel Protocol. FPGA Field Programmable Gate Array.
FC-P2P Fibre Channel Point-to-Point. Frames An ordered vector of words that is the
FCSE Flashcopy Space Efficiency. basic unit of data transmission in a Fibre
FC-SW Fibre Channel Switched. Channel network.
FCU File Conversion Utility. Front end In client/server applications, the
FD Floppy Disk or Floppy Drive. client part of the program is often called the
front end and the server part is called the
FDDI Fiber Distributed Data Interface.
back end.
FDR Fast Dump/Restore.
FRU Field Replaceable Unit.
FE Field Engineer.
FS File System.
FED (Channel) Front End Director.
FedRAMP Federal Risk and Authorization FSA File System Module-A.
Management Program. FSB File System Module-B.
Fibre Channel A serial data transfer FSI Financial Services Industries.
architecture developed by a consortium of
FSM File System Module.
computer and mass storage device
manufacturers and now being standardized FSW Fibre Channel Interface Switch PCB. A
by ANSI. The most prominent Fibre Channel board that provides the physical interface
standard is Fibre Channel Arbitrated Loop (cable connectors) between the ACP ports
(FC-AL). and the disks housed in a given disk drive.
FICON Fiber Connectivity. A high-speed FTP File Transfer Protocol. A client-server
input/output (I/O) interface for mainframe protocol that allows a user on 1 computer to
computer connections to storage devices. As transfer files to and from another computer
part of IBM's S/390 server, FICON channels over a TCP/IP network.
increase I/O capacity through the FWD Fast Write Differential.
combination of a new architecture and faster -back to top-



G only 1 H2F that can be added to the core RK
Floor Mounted unit. See also: RK, RKA, and
GA General availability. H1F.
GARD General Available Restricted HA High Availability.
Distribution.
Hadoop Apache Hadoop is an open-source
Gb Gigabit. software framework for data storage and
GB Gigabyte. large-scale processing of data-sets on
Gb/sec Gigabit per second. clusters of hardware.
GB/sec Gigabyte per second. HANA High Performance Analytic Appliance,
a database appliance technology proprietary
GbE Gigabit Ethernet.
to SAP.
Gbps Gigabit per second.
HBA Host Bus Adapter An I/O adapter that
GBps Gigabyte per second. sits between the host computer's bus and the
GBIC Gigabit Interface Converter. Fibre Channel loop and manages the transfer
of information between the 2 channels. In
GCMI Global Competitive and Marketing
order to minimize the impact on host
Intelligence (Hitachi).
processor performance, the host bus adapter
GDG Generation Data Group. performs many low-level interface functions
GDPS Geographically Dispersed Parallel automatically or with minimal processor
Sysplex. involvement.
GID Group Identifier within the UNIX security HCA Host Channel Adapter.
model. HCD Hardware Configuration Definition.
gigE Gigabit Ethernet. HD Hard Disk.
GLM Gigabyte Link Module. HDA Head Disk Assembly.
Global Cache Cache memory is used on demand HDD Hard Disk Drive. A spindle of hard disk
by multiple applications. Use changes platters that make up a hard drive, which is
dynamically, as required for READ a unit of physical storage within a
performance between hosts/applications/LUs. subsystem.
GPFS General Parallel File System. HDDPWR Hard Disk Drive Power.
GSC Global Support Center. HDU Hard Disk Unit. A number of hard drives
(HDDs) grouped together within a
GSI Global Systems Integrator.
subsystem.
GSS Global Solution Services.
Head See read/write head.
GSSD Global Solutions Strategy and
Heterogeneous The characteristic of containing
Development.
dissimilar elements. A common use of this
GSW Grid Switch Adapter. Also known as E word in information technology is to
Switch (Express Switch). describe a product as able to contain or be
GUI Graphical User Interface. part of a heterogeneous network,"
consisting of different manufacturers'
GUID Globally Unique Identifier.
products that can interoperate.
-back to top-
Heterogeneous networks are made possible by
H standards-conforming hardware and
H1F Essentially the floor-mounted disk rack software interfaces used in common by
(also called desk side) equivalent of the RK. different products, thus allowing them to
(See also: RK, RKA, and H2F). communicate with each other. The Internet
itself is an example of a heterogeneous
H2F Essentially the floor-mounted disk rack
network.
(also called desk side) add-on equivalent
similar to the RKA. There is a limitation of HiCAM Hitachi Computer Products America.



HIPAA Health Insurance Portability and infrastructure, operations and applications)
Accountability Act. in a coordinated fashion to assemble a
HIS (1) High Speed Interconnect. (2) Hospital particular solution. Source: Gartner
Information System (clinical and financial). Research.
Hybrid Network Cloud A composition of 2 or
HiStar Multiple point-to-point data paths to
cache. more clouds (private, community or public).
Each cloud remains a unique entity but they
HL7 Health Level 7. are bound together. A hybrid network cloud
HLQ High-level Qualifier. includes an interconnection.
HLS Healthcare and Life Sciences. Hypervisor Also called a virtual machine
manager, a hypervisor is a hardware
HLU Host Logical Unit.
virtualization technique that enables
H-LUN Host Logical Unit Number. See LUN. multiple operating systems to run
HMC Hardware Management Console. concurrently on the same computer.
Hypervisors are often installed on server
Homogeneous Of the same or similar kind.
hardware then run the guest operating
Host Also called a server. Basically a central systems that act as servers.
computer that processes end-user
Hypervisor can also refer to the interface
applications or requests.
that is provided by Infrastructure as a Service
Host LU Host Logical Unit. See also HLU. (IaaS) in cloud computing.
Host Storage Domains Allows host pooling at Leading hypervisors include VMware
the LUN level and the priority access feature vSphere Hypervisor (ESXi), Microsoft
lets administrator set service levels for Hyper-V and the Xen hypervisor.
applications. -back to top-
HP (1) Hewlett-Packard Company or (2) High
Performance.
HPC High Performance Computing. I
HSA Hardware System Area. I/F Interface.
HSG Host Security Group. I/O Input/Output. Term used to describe any
HSM Hierarchical Storage Management (see program, operation, or device that transfers
Data Migrator). data to or from a computer and to or from a
peripheral device.
HSN Hierarchical Star Network.
IaaS Infrastructure as a Service. A cloud
HSSDC High Speed Serial Data Connector.
computing business model delivering
HTTP Hyper Text Transfer Protocol. computer infrastructure, typically a platform
HTTPS Hyper Text Transfer Protocol Secure. virtualization environment, as a service,
Hub A common connection point for devices in along with raw (block) storage and
a network. Hubs are commonly used to networking. Rather than purchasing servers,
connect segments of a LAN. A hub contains software, data center space or network
multiple ports. When a packet arrives at 1 equipment, clients buy those resources as a
port, it is copied to the other ports so that all fully outsourced service. Providers typically
segments of the LAN can see all packets. A bill such services on a utility computing
switching hub actually reads the destination basis; the amount of resources consumed
address of each packet and then forwards (and therefore the cost) will typically reflect
the packet to the correct port. Device to the level of activity.
which nodes on a multi-point bus or loop are IDE Integrated Drive Electronics Advanced
physically connected. Technology. A standard designed to connect
Hybrid Cloud Hybrid cloud computing refers hard and removable disk drives.
to the combination of external public cloud IDN Integrated Delivery Network.
computing services and internal resources
iFCP Internet Fibre Channel Protocol.
(either a private cloud or traditional



Index Cache Provides quick access to indexed IOC I/O controller.
data on the media during a browse\restore IOCDS I/O Control Data Set.
operation.
IODF I/O Definition file.
IBR Incremental Block-level Replication or
IOPH I/O per hour.
Intelligent Block Replication.
IOPS I/O per second.
ICB Integrated Cluster Bus.
IOS I/O Supervisor.
ICF Integrated Coupling Facility.
IOSQ Input/Output Subsystem Queue.
ID Identifier.
IP Internet Protocol. The communications
IDR Incremental Data Replication. protocol that routes traffic across the
iFCP Internet Fibre Channel Protocol. Allows Internet.
an organization to extend Fibre Channel IPv6 Internet Protocol Version 6. The latest
storage networks over the Internet by using revision of the Internet Protocol (IP).
TCP/IP. TCP is responsible for managing IPL Initial Program Load.
congestion control as well as error detection
IPSEC IP security.
and recovery services.
IRR Internal Rate of Return.
iFCP allows an organization to create an IP
SAN fabric that minimizes the Fibre Channel ISC Initial shipping condition or Inter-System
fabric component and maximizes use of the Communication.
company's TCP/IP infrastructure. iSCSI Internet SCSI. Pronounced eye skuzzy.
An IP-based standard for linking data
IFL Integrated Facility for LINUX.
storage devices over a network and
IHE Integrating the Healthcare Enterprise. transferring data by carrying SCSI
IID Initiator ID. commands over IP networks.
IIS Internet Information Server. ISE Integrated Scripting Environment.
ILM Information Life Cycle Management. iSER iSCSI Extensions for RDMA.
ILO (Hewlett-Packard) Integrated Lights-Out. ISL Inter-Switch Link.

IML Initial Microprogram Load. iSNS Internet Storage Name Service.


ISOE iSCSI Offload Engine.
IMS Information Management System.
ISP Internet service provider.
In-Band Virtualization Refers to the location of
the storage network path, between the ISPF Interactive System Productivity Facility.
application host servers in the storage ISPF/PDF Interactive System Productivity
systems. Provides both control and data Facility/Program Development Facility.
along the same connection path. Also called ISV Independent Software Vendor.
symmetric virtualization. ITaaS IT as a Service. A cloud computing
INI Initiator. business model. This general model is an
Interface The physical and logical arrangement umbrella model that entails the SPI business
supporting the attachment of any device to a model (SaaS, PaaS and IaaS Software,
connector or to another device. Platform and Infrastructure as a Service).
Internal Bus Another name for an internal data ITSC Informaton and Telecommunications
bus. Also, an expansion bus is often referred Systems Companies.
to as an internal bus. -back to top-

Internal Data Bus A bus that operates only J


within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
Java A widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi



enterprise software products from any PC or (all or portions of 1 or more disks) that are
workstation that runs a supported thin-client combined so that the subsystem sees and
internet browser application and that has treats them as a single area of data storage.
TCP/IP network access to the computer on Also called a volume. An LDEV has a
which the software product runs. specific and unique address within a
Java VM Java Virtual Machine. subsystem. LDEVs become LUNs to an
open-systems host.
JBOD Just a Bunch of Disks.
JCL Job Control Language. LDKC Logical Disk Controller or Logical Disk
Controller Manual.
JMP Jumper. Option setting method.
LDM Logical Disk Manager.
JMS Java Message Service.
LDS Linear Data Set.
JNL Journal.
JNLG Journal Group. LED Light Emitting Diode.

JRE Java Runtime Environment. LFF Large Form Factor.


JVM Java Virtual Machine. LIC Licensed Internal Code.
J-VOL Journal Volume. LIS Laboratory Information Systems.
-back to top- LLQ Lowest Level Qualifier.

K LM Local Memory.

KSDS Key Sequence Data Set. LMODs Load Modules.

kVA Kilovolt Ampere. LNKLST Link List.

KVM Kernel-based Virtual Machine or Load balancing The process of distributing


Keyboard-Video Display-Mouse. processing and communications activity
evenly across a computer network so that no
kW Kilowatt. single device is overwhelmed. Load
-back to top- balancing is especially important for
networks where it is difficult to predict the
L number of requests that will be issued to a
LACP Link Aggregation Control Protocol. server. If 1 server starts to be swamped,
LAG Link Aggregation Groups. requests are forwarded to another server
with more capacity. Load balancing can also
LAN Local Area Network. A communications
refer to the communications channels
network that serves clients within a
themselves.
geographical area, such as a building.
LOC Locations section of the Maintenance
LBA Logical block address. A 28-bit value that
Manual.
maps to a specific cylinder-head-sector
address on the disk. Logical DKC (LDKC) Logical Disk Controller
Manual. An internal architecture extension
LC Lucent connector. Fibre Channel connector
to the Control Unit addressing scheme that
that is smaller than a simplex connector (SC).
allows more LDEVs to be identified within 1
LCDG Link Processor Control Diagnostics. Hitachi enterprise storage system.
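The LBA entry above maps a logical block address to a cylinder-head-sector location. A minimal sketch of the standard arithmetic, assuming an illustrative geometry (the head and sector counts are assumptions, not values from any particular drive):

    # Classic LBA-to-CHS conversion for an assumed geometry of
    # 16 heads per cylinder and 63 sectors per track.
    HEADS, SECTORS = 16, 63

    def lba_to_chs(lba):
        cylinder = lba // (HEADS * SECTORS)
        head = (lba // SECTORS) % HEADS
        sector = (lba % SECTORS) + 1       # sectors are conventionally numbered from 1
        return cylinder, head, sector

    print(lba_to_chs(0))        # (0, 0, 1)
    print(lba_to_chs(1008))     # (1, 0, 1)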
LCM Link Control Module. Longitudinal record Patient information from
LCP Link Control Processor. Controls the birth to death.
optical links. LCP is located in the LCM. LPAR Logical Partition (mode).
LCSS Logical Channel Subsystems. LR Local Router.
LCU Logical Control Unit. LRECL Logical Record Length.
LD Logical Device. LRP Local Router Processor.
LDAP Lightweight Directory Access Protocol. LRU Least Recently Used.
LDEV Logical Device or Logical Device
(number). A set of physical disk partitions



LSS Logical Storage Subsystem (equivalent to Control Unit. The local CU of a remote copy
LCU). pair. Main or Master Control Unit.
LU Logical Unit. Mapping number of an LDEV. MCU Master Control Unit.
LUN Logical Unit Number. 1 or more LDEVs. MDPL Metadata Data Protection Level.
Used only for open systems. MediaAgent The workhorse for all data
LUSE Logical Unit Size Expansion. Feature used movement. MediaAgent facilitates the
to create virtual LUs that are up to 36 times transfer of data between the data source, the
larger than the standard OPEN-x LUs. client computer, and the destination storage
media.
LVDS Low Voltage Differential Signal
Metadata In database management systems,
LVI Logical Volume Image. Identifies a similar data files are the files that store the database
concept (as LUN) in the mainframe information; whereas other files, such as
environment. index files and data dictionaries, store
LVM Logical Volume Manager. administrative information, known as
-back to top- metadata.
MFC Main Failure Code.
M MG (1) Module Group. 2 (DIMM) cache
MAC Media Access Control. A MAC address is memory modules that work together. (2)
a unique identifier attached to most forms of Migration Group. A group of volumes to be
networking equipment. migrated together.
MAID Massive array of disks. MGC (3-Site) Metro/Global Mirror.
MAN Metropolitan Area Network. A MIB Management Information Base. A database
communications network that generally of objects that can be monitored by a
covers a city or suburb. MAN is very similar network management system. Both SNMP
to a LAN except it spans across a and RMON use standardized MIB formats
geographical region such as a state. Instead that allow any SNMP and RMON tools to
of the workstations in a LAN, the monitor any device defined by a MIB.
workstations in a MAN could depict Microcode The lowest-level instructions that
different cities in a state. For example, the directly control a microprocessor. A single
state of Texas could have: Dallas, Austin, San machine-language instruction typically
Antonio. The city could be a separate LAN translates into several microcode
and all the cities connected together via a instructions.
switch. This topology would indicate a
(Figure under the Microcode entry: program levels, from high-level languages such as Fortran, Pascal and C, through assembly language and machine language, down to the hardware.)
Microprogram See Microcode.
MAPI Management Application Programming Interface.
Mapping Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses
disk block addresses and the block addresses
of the virtual disks presented to operating MIF Multiple Image Facility.
environments by control software. Mirror Cache OFF Increases cache efficiency
Mb Megabit. over cache data redundancy.
MB Megabyte. M-JNL Primary journal volumes.
MBA Memory Bus Adaptor. MM Maintenance Manual.
MBUS Multi-CPU Bus. MMC Microsoft Management Console.
MC Multi Cabinet. Mode The state or setting of a program or
device. The term mode implies a choice,
MCU Main Control Unit, Master Control Unit,
which is that you can change the setting and
Main Disk Control Unit or Master Disk
put the system in a different mode.



MP Microprocessor. NFS protocol Network File System is a protocol
MPA Microprocessor adapter. that allows a computer to access files over a
network as easily as if they were on its local
MPB Microprocessor board.
disks.
MPI (Electronic) Master Patient Identifier. Also
NIM Network Interface Module.
known as EMPI.
MPIO Multipath I/O. NIS Network Information Service (originally
called the Yellow Pages or YP).
MP PK MP Package.
NIST National Institute of Standards and
MPU Microprocessor Unit.
Technology. A standards organization active
MQE Metadata Query Engine (Hitachi). in cloud computing.
MS/SG Microsoft Service Guard. NLS Native Language Support.
MSCS Microsoft Cluster Server. Node An addressable entity connected to an
MSS (1) Multiple Subchannel Set. (2) Managed I/O bus or network, used primarily to refer
Security Services. to computers, storage devices and storage
subsystems. The component of a node that
MTBF Mean Time Between Failure.
connects to the bus or network is a port.
MTS Multitiered Storage.
Node name A Name_Identifier associated with
Multitenancy In cloud computing, a node.
multitenancy is a secure way to partition the
infrastructure (application, storage pool and NPV Net Present Value.
network) so multiple customers share a NRO Network Recovery Objective.
single resource pool. Multitenancy is one of NTP Network Time Protocol.
the key ways cloud can achieve massive
economy of scale. NVS Non Volatile Storage.
-back to top-
M-VOL Main Volume.
MVS Multiple Virtual Storage. O
-back to top- OASIS Organization for the Advancement of
Structured Information Standards.
N
OCC Open Cloud Consortium. A standards
NAS Network Attached Storage. A disk array
organization active in cloud computing.
connected to a controller that gives access to
a LAN Transport. It handles data at the file OEM Original Equipment Manufacturer.
level. OFC Open Fibre Control.
NAT Network Address Translation. OGF Open Grid Forum. A standards
NDMP Network Data Management Protocol. A organization active in cloud computing.
protocol meant to transport data between OID Object identifier.
NAS devices.
OLA Operating Level Agreements.
NetBIOS Network Basic Input/Output System.
OLTP On-Line Transaction Processing.
Network A computer system that allows
OLTT Open-loop throughput throttling.
sharing of resources, such as files and
peripheral hardware devices. OMG Object Management Group. A standards
organization active in cloud computing.
Network Cloud A communications network.
The word "cloud" by itself may refer to any On/Off CoD On/Off Capacity on Demand.
local area network (LAN) or wide area ONODE Object node.
network (WAN). The terms computing"
OpenStack An open source project to provide
and "cloud computing" refer to services
orchestration and provisioning for cloud
offered on the public Internet or to a private
environments based on a variety of different
network that uses the same protocols as a
hypervisors.
standard network. See also cloud computing.



OPEX Operational Expenditure. This is an multiple partitions. Then customize the
operating expense, operating expenditure, partition to match the I/O characteristics of
operational expense, or operational assigned LUs.
expenditure, which is an ongoing cost for PAT Port Address Translation.
running a product, business, or system. Its
counterpart is a capital expenditure (CAPEX). PATA Parallel ATA.

ORM Online Read Margin. Path Also referred to as a transmission channel,


the path between 2 nodes of a network that a
OS Operating System. data communication follows. The term can
Out-of-Band Virtualization Refers to systems refer to the physical cabling that connects the
where the controller is located outside of the nodes on a network, the signal that is
SAN data path. Separates control and data communicated over the pathway or a sub-
on different connection paths. Also called channel in a carrier frequency.
asymmetric virtualization. Path failover See Failover.
-back to top-
PAV Parallel Access Volumes.
P PAWS Protect Against Wrapped Sequences.
P-2-P Point to Point. Also P-P. PB Petabyte.
PaaS Platform as a Service. A cloud computing PBC Port Bypass Circuit.
business model delivering a computing PCB Printed Circuit Board.
platform and solution stack as a service. PCHIDS Physical Channel Path Identifiers.
PaaS offerings facilitate deployment of
PCI Power Control Interface.
applications without the cost and complexity
of buying and managing the underlying PCI CON Power Control Interface Connector
hardware, software and provisioning Board.
hosting capabilities. PaaS provides all of the PCI DSS Payment Card Industry Data Security
facilities required to support the complete Standard.
life cycle of building and delivering web PCIe Peripheral Component Interconnect
applications and services entirely from the Express.
Internet.
PD Product Detail.
PACS Picture Archiving and Communication PDEV Physical Device.
System.
PDM Policy based Data Migration or Primary
PAN Personal Area Network. A Data Migrator.
communications network that transmit data
PDS Partitioned Data Set.
wirelessly over a short distance. Bluetooth
and Wi-Fi Direct are examples of personal PDSE Partitioned Data Set Extended.
area networks. Performance Speed of access or the delivery of
PAP Password Authentication Protocol. information.
Petabyte (PB) A measurement of capacity the
Parity A technique of checking whether data
amount of data that a drive or storage
has been lost or written over when it is
system can store after formatting. 1PB =
moved from one place in storage to another
1,024TB.
or when it is transmitted between
computers. PFA Predictive Failure Analysis.
Parity Group Also called an array group. This is PFTaaS Private File Tiering as a Service. A cloud
a group of hard disk drives (HDDs) that computing business model.
form the basic unit of storage in a subsystem. PGP Pretty Good Privacy. A data encryption
All HDDs in a parity group must have the and decryption computer program used for
same physical capacity. increasing the security of email
Partitioned cache memory Separate workloads communications.
in a storage consolidated system by PGR Persistent Group Reserve.
dividing cache into individually managed



PI Product Interval. Provisioning The process of allocating storage
PIR Performance Information Report. resources and assigning storage capacity for
an application, usually in the form of server
PiT Point-in-Time.
disk drive space, in order to optimize the
PK Package (see PCB). performance of a storage area network
PL Platter. The circular disk on which the (SAN). Traditionally, this has been done by
magnetic data is stored. Also called the SAN administrator, and it can be a
motherboard or backplane. tedious process. In recent years, automated
PM Package Memory. storage provisioning (also called auto-
provisioning) programs have become
POC Proof of concept.
available. These programs can reduce the
Port In TCP/IP and UDP networks, an time required for the storage provisioning
endpoint to a logical connection. The port process, and can free the administrator from
number identifies what type of port it is. For the often distasteful task of performing this
example, port 80 is used for HTTP traffic. chore manually.
POSIX Portable Operating System Interface for PS Power Supply.
UNIX. A set of standards that defines an
PSA Partition Storage Administrator .
application programming interface (API) for
software designed to run under PSSC Perl Silicon Server Control.
heterogeneous operating systems. PSU Power Supply Unit.
PP Program product. PTAM Pickup Truck Access Method.
P-P Point-to-point; also P2P. PTF Program Temporary Fixes.
PPRC Peer-to-Peer Remote Copy. PTR Pointer.
Private Cloud A type of cloud computing PU Processing Unit.
defined by shared capabilities within a Public Cloud Resources, such as applications
single company; modest economies of scale and storage, available to the general public
and less automation. Infrastructure and data over the Internet.
reside inside the companys data center
P-VOL Primary Volume.
behind a firewall. Comprised of licensed
-back to top-
software tools rather than on-going services.
Q
Example: An organization implements its QD Quorum Device.
own virtual, scalable cloud and business
units are charged on a per use basis. QDepth The number of I/O operations that can
run in parallel on a SAN device; also WWN
Private Network Cloud A type of cloud
QDepth.
network with 3 characteristics: (1) Operated
solely for a single organization, (2) Managed QoS Quality of Service. In the field of computer
internally or by a third-party, (3) Hosted networking, the traffic engineering term
internally or externally. quality of service (QoS) refers to resource
reservation control mechanisms rather than
PR/SM Processor Resource/System Manager. the achieved service quality. Quality of
Protocol A convention or standard that enables service is the ability to provide different
the communication between 2 computing priority to different applications, users, or
endpoints. In its simplest form, a protocol data flows, or to guarantee a certain level of
can be defined as the rules governing the performance to a data flow.
syntax, semantics and synchronization of
QSAM Queued Sequential Access Method.
communication. Protocols may be
-back to top-
implemented by hardware, software or a
combination of the 2. At the lowest level, a R
protocol defines the behavior of a hardware RACF Resource Access Control Facility.
connection.
RAID Redundant Array of Independent Disks,
or Redundant Array of Inexpensive Disks. A



group of disks that look like a single volume telecommunication links that are installed to
to the server. RAID improves performance back up primary resources in case they fail.
by pulling a single stripe of data from
multiple disks, and improves fault-tolerance A well-known example of a redundant
either through mirroring or parity checking system is the redundant array of
and it is a component of a customers SLA. independent disks (RAID). Redundancy
contributes to the fault tolerance of a system.
RAID-0 Striped array with no parity.
RAID-1 Mirrored array and duplexing. Redundancy Backing up a component to help
ensure high availability.
RAID-3 Striped array with typically non-
rotating parity, optimized for long, single- Reliability (1) Level of assurance that data will
threaded transfers. not be lost or degraded over time. (2) An
attribute of any computer component
RAID-4 Striped array with typically non-
(software, hardware or a network) that
rotating parity, optimized for short, multi-
consistently performs according to its
threaded transfers.
specifications.
RAID-5 Striped array with typically rotating
REST Representational State Transfer.
parity, optimized for short, multithreaded
transfers. REXX Restructured extended executor.
RAID-6 Similar to RAID-5, but with dual RID Relative Identifier that uniquely identifies
rotating parity physical disks, tolerating 2 a user or group within a Microsoft Windows
physical disk failures. domain.
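A small sketch of the parity idea behind RAID-5 and RAID-6 (simplified to a single stripe; real controllers work on fixed-size blocks and rotate the parity position across drives): the parity block is the XOR of the data blocks, so any 1 lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

    # Simplified RAID-5-style parity over one stripe of byte blocks.
    def xor_blocks(blocks):
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    data = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in a stripe
    parity = xor_blocks(data)               # parity block written to the fourth disk

    # Simulate losing the second disk and rebuilding it from the survivors plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]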
RAIN Redundant (or Reliable) Array of RIS Radiology Information System.
Independent Nodes (architecture). RISC Reduced Instruction Set Computer.
RAM Random Access Memory.
RIU Radiology Imaging Unit.
RAM DISK A LUN held entirely in the cache
R-JNL Secondary journal volumes.
area.
RAS Reliability, Availability, and Serviceability RK Rack additional.
or Row Address Strobe. RKAJAT Rack Additional SATA disk tray.
RBAC Role Base Access Control. RKAK Expansion unit.
RC (1) Reference Code or (2) Remote Control. RLGFAN Rear Logic Box Fan Assembly.
RCHA RAID Channel Adapter. RLOGIC BOX Rear Logic Box.
RCP Remote Control Processor. RMF Resource Measurement Facility.
RCU Remote Control Unit or Remote Disk RMI Remote Method Invocation. A way that a
Control Unit. programmer, using the Java programming
RCUT RCU Target. language and development environment,
can write object-oriented programming in
RD/WR Read/Write. which objects on different computers can
RDM Raw Disk Mapped. interact in a distributed network. RMI is the
RDMA Remote Direct Memory Access. Java version of what is generally known as a
RPC (remote procedure call), but with the
RDP Remote Desktop Protocol.
ability to pass 1 or more objects along with
RDW Record Descriptor Word. the request.
Read/Write Head Read and write data to the RndRD Random read.
platters, typically there is 1 head per platter ROA Return on Asset.
side, and each head is attached to a single
actuator shaft. RoHS Restriction of Hazardous Substances (in
Electrical and Electronic Equipment).
RECFM Record Format Redundant. Describes
the computer or network system ROI Return on Investment.
components, such as fans, hard disk drives, ROM Read Only Memory.
servers, operating systems, switches, and



Round robin mode A load balancing technique delivery model for most business
which distributes data packets equally applications, including accounting (CRM
among the available paths. Round robin and ERP), invoicing (HRM), content
DNS is usually used for balancing the load management (CM) and service desk
of geographically distributed Web servers. It management, just to name the most common
works on a rotating basis in that one server software that runs in the cloud. This is the
IP address is handed out, then moves to the fastest growing service in the cloud market
back of the list; the next server IP address is today. SaaS performs best for relatively
handed out, and then it moves to the end of simple tasks in IT-constrained organizations.
the list; and so on, depending on the number SACK Sequential Acknowledge.
of servers being used. This works in a
looping fashion. SACL System ACL. The part of a security
descriptor that stores system auditing
Router A computer networking device that information.
forwards data packets toward their
destinations, through a process known as SAIN SAN-attached Array of Independent
routing. Nodes (architecture).

RPC Remote procedure call. SAN Storage Area Network. A network linking
computing devices to disk or tape arrays and
RPO Recovery Point Objective. The point in other devices over Fibre Channel. It handles
time that recovered data should match. data at the block level.
RPSFAN Rear Power Supply Fan Assembly. SAP (1) System Assist Processor (for I/O
RRDS Relative Record Data Set. processing), or (2) a German software
RS CON RS232C/RS422 Interface Connector. company.

RSD RAID Storage Division (of Hitachi). SAP HANA High Performance Analytic
Appliance, a database appliance technology
R-SIM Remote Service Information Message. proprietary to SAP.
RSM Real Storage Manager. SARD System Assurance Registration
RTM Recovery Termination Manager. Document.
RTO Recovery Time Objective. The length of SAS Serial Attached SCSI.
time that can be tolerated between a disaster SATA Serial ATA. Serial Advanced Technology
and recovery of data. Attachment is a new standard for connecting
R-VOL Remote Volume. hard drives into computer systems. SATA is
R/W Read/Write. based on serial signaling technology, unlike
current IDE (Integrated Drive Electronics)
-back to top-
hard drives that use parallel signaling.
S SBM Solutions Business Manager.
SA Storage Administrator. SBOD Switched Bunch of Disks.
SA z/OS System Automation for z/OS. SBSC Smart Business Storage Cloud.
SAA Share Access Authentication. The process SBX Small Box (Small Form Factor).
of restricting a user's rights to a file system
SC (1) Simplex connector. Fibre Channel
object by combining the security descriptors
connector that is larger than a Lucent
from both the file system object itself and the
connector (LC). (2) Single Cabinet.
share to which the user is connected.
SCM Supply Chain Management.
SaaS Software as a Service. A cloud computing
SCP Secure Copy.
business model. SaaS is a software delivery
model in which software and its associated SCSI Small Computer Systems Interface. A
data are hosted centrally in a cloud and are parallel bus architecture and a protocol for
typically accessed by users using a thin transmitting large data blocks up to a
client, such as a web browser via the distance of 15 to 25 meters.
Internet. SaaS has become a common SD Software Division (of Hitachi).



SDH Synchronous Digital Hierarchy. Specific performance benchmarks to
SDM System Data Mover. which actual performance will be
periodically compared
SDO Standards Development Organizations (a
general category). The schedule for notification in advance of
network changes that may affect users
SDSF Spool Display and Search Facility.
Sector A sub-division of a track of a magnetic Help desk response time for various
disk that stores a fixed amount of data. classes of problems

SEL System Event Log. Dial-in access availability


Selectable Segment Size Can be set per Usage statistics that will be provided
partition. Service-Level Objective SLO. Individual
Selectable Stripe Size Increases performance by performance metrics built into an SLA. Each
customizing the disk access size. SLO corresponds to a single performance
characteristic relevant to the delivery of an
SENC Is the SATA (Serial ATA) version of the
overall service. Some examples of SLOs
ENC. ENCs and SENCs are complete
include: system availability, help desk
microprocessor systems on their own and
incident resolution time, and application
they occasionally require a firmware
response time.
upgrade.
SeqRD Sequential read. SES SCSI Enclosure Services.

Serial Transmission The transmission of data SFF Small Form Factor.


bits in sequential order over a single line. SFI Storage Facility Image.
Server A central computer that processes SFM Sysplex Failure Management.
end-user applications or requests, also called
SFP Small Form-Factor Pluggable module Host
a host.
connector. A specification for a new
Server Virtualization The masking of server generation of optical modular transceivers.
resources, including the number and identity The devices are designed for use with small
of individual physical servers, processors, form factor (SFF) connectors, offer high
and operating systems, from server users. speed and physical compactness and are
The implementation of multiple isolated hot-swappable.
virtual environments in one physical server.
SHSN Shared memory Hierarchical Star
Service-level Agreement SLA. A contract Network.
between a network service provider and a
SID Security Identifier. A user or group
customer that specifies, usually in
identifier within the Microsoft Windows
measurable terms, what services the network
security model.
service provider will furnish. Many Internet
service providers (ISP) provide their SIGP Signal Processor.
customers with a SLA. More recently, IT SIM (1) Service Information Message. A
departments in major enterprises have message reporting an error that contains fix
adopted the idea of writing a service level guidance information. (2) Storage Interface
agreement so that services for their Module. (3) Subscriber Identity Module.
customers (users in other departments SIM RC Service (or system) Information
within the enterprise) can be measured, Message Reference Code.
justified, and perhaps compared with those
SIMM Single In-line Memory Module.
of outsourcing network providers.
SLA Service Level Agreement.
Some metrics that SLAs may specify include:
SLO Service Level Objective.
The percentage of the time services will be
SLRP Storage Logical Partition.
available
SM Shared Memory or Shared Memory Module.
The number of users that can be served
Stores the shared information about the
simultaneously
subsystem and the cache control information
(director names). This type of information is



used for the exclusive control of the can send and receive TCP/IP messages by
subsystem. Like CACHE, shared memory is opening a socket and reading and writing
controlled as 2 areas of memory and fully non- data to and from the socket. This simplifies
volatile (sustained for approximately 7 days). program development because the
SM PATH Shared Memory Access Path. The programmer need only worry about
Access Path from the processors of CHA, manipulating the socket and can rely on the
DKA PCB to Shared Memory. operating system to actually transport
messages across the network correctly. Note
SMB/CIFS Server Message Block
that a socket in this sense is completely soft;
Protocol/Common Internet File System.
it is a software object, not a physical
SMC Shared Memory Control. component.
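A minimal example of the socket usage described above, using the standard Python socket module; the host name and port are placeholders:

    import socket

    # Open a TCP socket, send a request, read the reply; the operating system
    # transports the messages, as the Socket entry describes.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(4096)
        print(reply.decode("latin-1", errors="replace"))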
SME Small and Medium Enterprise. SOM System Option Mode.
SMF System Management Facility. SONET Synchronous Optical Network.
SMI-S Storage Management Initiative SOSS Service Oriented Storage Solutions.
Specification.
SPaaS SharePoint as a Service. A cloud
SMP Symmetric Multiprocessing. An IBM- computing business model.
licensed program used to install software
SPAN Span is a section between 2 intermediate
and software changes on z/OS systems.
supports. See Storage pool.
SMP/E System Modification
Spare An object reserved for the purpose of
Program/Extended.
substitution for a like object in case of that
SMS System Managed Storage. object's failure.
SMTP Simple Mail Transfer Protocol. SPC SCSI Protocol Controller.
SMU System Management Unit. SpecSFS Standard Performance Evaluation
Snapshot Image A logical duplicated volume Corporation Shared File system.
(V-VOL) of the primary volume. It is an SPECsfs97 Standard Performance Evaluation
internal volume intended for restoration. Corporation (SPEC) System File Server (sfs)
SNIA Storage Networking Industry developed in 1997 (97).
Association. An association of producers and SPI model Software, Platform and
consumers of storage networking products, Infrastructure as a service. A common term
whose goal is to further storage networking to describe the cloud computing as a service
technology and applications. Active in cloud business model.
computing.
SRA Storage Replicator Adapter.
SNMP Simple Network Management Protocol. SRDF/A (EMC) Symmetrix Remote Data
A TCP/IP protocol that was designed for Facility Asynchronous.
management of networks over TCP/IP,
SRDF/S (EMC) Symmetrix Remote Data
using agents and stations.
Facility Synchronous.
SOA Service Oriented Architecture.
SRM Site Recovery Manager.
SOAP Simple Object Access Protocol. A way for
SSB Sense Byte.
a program running in one kind of operating
system (such as Windows 2000) to SSC SiliconServer Control.
communicate with a program in the same or SSCH Start Subchannel.
another kind of an operating system (such as SSD Solid-State Drive or Solid-State Disk.
Linux) by using the World Wide Web's
SSH Secure Shell.
Hypertext Transfer Protocol (HTTP) and its
Extensible Markup Language (XML) as the SSID Storage Subsystem ID or Subsystem
mechanisms for information exchange. Identifier.
Socket In UNIX and some other operating SSL Secure Sockets Layer.
systems, socket is a software object that SSPC System Storage Productivity Center.
connects an application to a network SSUE Split Suspended Error.
protocol. In UNIX, for example, a program



SSUS Split Suspend. TCO Total Cost of Ownership.
SSVP Sub Service Processor interfaces the SVP TCG Trusted Computing Group.
to the DKC. TCP/IP Transmission Control Protocol over
SSW SAS Switch. Internet Protocol.
Sticky Bit Extended UNIX mode bit that TDCONV Trace Dump Converter. A software
prevents objects from being deleted from a program that is used to convert traces taken
directory by anyone other than the object's on the system into readable text. This
owner, the directory's owner or the root user. information is loaded into a special
Storage pooling The ability to consolidate and spreadsheet that allows for further
manage storage resources across storage investigation of the data. More in-depth
system enclosures where the consolidation failure analysis.
of many appears as a single view. TDMF Transparent Data Migration Facility.
STP Server Time Protocol. Telco or TELCO Telecommunications
STR Storage and Retrieval Systems. Company.
Striping A RAID technique for writing a file to TEP Tivoli Enterprise Portal.
multiple disks on a block-by-block basis, Terabyte (TB) A measurement of capacity, data
with or without parity. or data storage. 1TB = 1,024GB.
Subsystem Hardware or software that performs TFS Temporary File System.
a specific function within a larger system. TGTLIBs Target Libraries.
SVC Supervisor Call Interruption. THF Front Thermostat.
SVC Interrupts Supervisor calls. Thin Provisioning Thin provisioning allows
S-VOL (1) (ShadowImage) Source Volume for storage space to be easily allocated to servers
In-System Replication, or (2) (Universal on a just-enough and just-in-time basis.
Replicator) Secondary Volume. THR Rear Thermostat.
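A rough sketch of the just-in-time allocation idea behind thin provisioning (a simplified model, not HDS Dynamic Provisioning itself): physical pages are taken from a shared pool only when a virtual page is first written, so a large virtual volume consumes only as much capacity as has actually been used.

    # Simplified thin-provisioned volume: physical pages are allocated from a
    # shared pool only on the first write to each virtual page.
    class ThinVolume:
        def __init__(self, virtual_pages, pool):
            self.virtual_pages = virtual_pages
            self.pool = pool               # shared list of free physical page numbers
            self.mapping = {}              # virtual page -> physical page, filled on demand

        def write(self, virtual_page, data):
            if virtual_page not in self.mapping:
                self.mapping[virtual_page] = self.pool.pop()   # allocate on first write
            return self.mapping[virtual_page], data

    pool = list(range(1000))               # 1,000 free physical pages shared by all volumes
    vol = ThinVolume(virtual_pages=1000000, pool=pool)
    vol.write(42, b"payload")
    print(len(vol.mapping), "of", vol.virtual_pages, "virtual pages actually consume space")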
SVP Service Processor A laptop computer Throughput The amount of data transferred
mounted on the control frame (DKC) and from 1 place to another or processed in a
used for monitoring, maintenance and specified amount of time. Data transfer rates
administration of the subsystem. for disk drives and networks are measured
Switch A fabric device providing full in terms of throughput. Typically,
bandwidth per port and high-speed routing throughputs are measured in kb/sec,
of data via link-level addressing. Mb/sec and Gb/sec.
SWPX Switching power supply. TID Target ID.
SXP SAS Expander. Tiered Storage A storage strategy that matches
Symmetric Virtualization See In-Band data classification to storage metrics. Tiered
Virtualization. storage is the assignment of different
categories of data to different types of
Synchronous Operations that have a fixed time
storage media in order to reduce total
relationship to each other. Most commonly
storage cost. Categories may be based on
used to denote I/O operations that occur in
levels of protection needed, performance
time sequence, such as, a successor operation
requirements, frequency of use, and other
does not occur until its predecessor is
considerations. Since assigning data to
complete.
particular media may be an ongoing and
-back to top-
complex activity, some vendors provide
T software for automatically managing the
Target The system component that receives a process based on a company-defined policy.
SCSI I/O command, an open device that Tiered Storage Promotion Moving data
operates at the request of the initiator. between tiers of storage as their availability
TB Terabyte. 1TB = 1,024GB. requirements change.
TCDO Total Cost of Data Ownership. TLS Tape Library System.



TLS Transport Layer Security.
TMP Temporary or Test Management Program.
TOD (or ToD) Time Of Day.
TOE TCP Offload Engine.
Topology The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R Tivoli Productivity Center for Replication.
TPF Transaction Processing Facility.
TPOF Tolerable Points of Failure.
Track Circular segment of a hard disk or other storage media.
Transfer Rate See Data Transfer Rate.
Trap A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action and then returns control to the program.
TSC Tested Storage Configuration.
TSO Time Sharing Option.
TSO/E Time Sharing Option/Extended.
T-VOL (ShadowImage) Target Volume for In-System Replication.
-back to top-
U
UA Unified Agent.
UBX Large Box (Large Form Factor).
UCB Unit Control Block.
UDP User Datagram Protocol is 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
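As an illustration only, the following minimal Python sketch sends a single UDP datagram over the loopback interface using the standard socket module; the message content and port handling are arbitrary choices for this example.

```python
# Illustrative only: sending one UDP datagram to a local receiver.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)            # no connection setup, no delivery guarantee

data, peer = receiver.recvfrom(1024)     # read the datagram
print(data, "from", peer)

sender.close()
receiver.close()
```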
UFA UNIX File Attributes.
UID User Identifier within the UNIX security model.
UPS Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR Universal Replicator.
UUID Universally Unique Identifier.
-back to top-
V
vContinuum Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS Veritas Cluster System.
VDEV Virtual Device.
VDI Virtual Desktop Infrastructure.
VHD Virtual Hard Disk.
VHDL VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC Very-High-Speed Integrated Circuit.
VI Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL Virtual Logical Volume Image/Logical Unit Number.
VLUN Virtual LUN. Customized volume. Size chosen by user.
VLVI Virtual Logical Volume Image. Marketing name for CVS (custom volume size).
VM Virtual Machine.
VMDK Virtual Machine Disk file format.
VNA Vendor Neutral Archive.
VOJP (Cache) Volatile Jumper.
VOLID Volume ID.
VOLSER Volume Serial Numbers.
Volume A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC Virtual Private Cloud.
VSAM Virtual Storage Access Method.
VSD Virtual Storage Director.
VSP Virtual Storage Platform.
VSS (Microsoft) Volume Shadow Copy Service.
VTL Virtual Tape Library.
VTOC Volume Table of Contents.
VTOCIX Volume Table of Contents Index.
VVDS Virtual Volume Data Set.
V-VOL Virtual Volume.
-back to top-
W
WAN Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR Directory Name Object.
WDIR Working Directory.
WDS Working Data Set.
WebDAV Web-Based Distributed Authoring and Versioning (HTTP extensions).
WFILE File Object or Working File.
WFS Working File Set.
WINS Windows Internet Naming Service.
WL Wide Link.
WLM Work Load Manager.
WORM Write Once, Read Many.
WSDL Web Services Description Language.
WSRM Write Seldom, Read Many.
WTREE Directory Tree Object or Working Tree.
WWN World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
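The sketch below (not taken from the Fibre Channel standards or from this course) simply unpacks a 64-bit identifier into the 4-bit prefix, 12-bit extension and 48-bit IEEE address layout described in the WWN entry above; the sample WWPN value and variable names are made up for the example.

```python
# Illustrative only: splitting a 64-bit WWPN into the fields described in the
# WWN entry (4-bit prefix, 12-bit extension, 48-bit IEEE address).
# The WWPN value below is hypothetical.
wwpn = "21:00:00:24:ff:4b:9c:10"

value = int(wwpn.replace(":", ""), 16)    # the full 64-bit integer
naa_prefix = value >> 60                  # top 4 bits (NAA prefix)
extension = (value >> 48) & 0xFFF         # next 12 bits
ieee_addr = value & 0xFFFF_FFFF_FFFF      # low 48 bits (IEEE address)

print(f"NAA={naa_prefix:X}, extension={extension:03X}, IEEE address={ieee_addr:012X}")
```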
-back to top-
X
XAUI "X"=10, AUI = Attachment Unit Interface.
XCF Cross System Communications Facility.
XDS Cross Enterprise Document Sharing.
XDSi Cross Enterprise Document Sharing for Imaging.
XFI Standard interface for connecting a 10Gb Ethernet MAC device to XFP interface.
XFP "X"=10Gb Small Form Factor Pluggable.
XML eXtensible Markup Language.
XRC Extended Remote Copy.
-back to top-
Y
YB Yottabyte.
Yottabyte The highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-
Z
z/OS z Operating System (IBM S/390 or z/OS Environments).
z/OS NFS (System) z/OS Network File System.
z/OSMF (System) z/OS Management Facility.
zAAP (System) z Application Assist Processor (for Java and XML workloads).
ZCF Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) A high-end measurement of data. 1ZB = 1,024EB.
zFS (System) zSeries File System.
zHPF (System) z High Performance FICON.
zIIP (System) z Integrated Information Processor (specialty processor for database).
Zone A collection of Fibre Channel Ports that are permitted to communicate with each other via the fabric.
Zoning A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
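Conceptually, zoning can be pictured as sets of port WWPNs; the Python sketch below (not vendor switch syntax) models that idea and checks whether two ports share a zone. All zone names and WWPN values are hypothetical.

```python
# Illustrative only: zoning modeled as sets of port WWPNs. Two ports can
# communicate only if they are members of at least one common zone.
zones = {
    "zone_hostA_array1": {"10:00:00:00:c9:11:22:33", "50:06:0e:80:12:34:56:78"},
    "zone_hostB_array1": {"10:00:00:00:c9:44:55:66", "50:06:0e:80:12:34:56:78"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if the two ports appear together in at least one zone."""
    return any(wwpn_a in members and wwpn_b in members for members in zones.values())

# Host A sees the array port; Host A does not see Host B (no shared zone).
print(can_communicate("10:00:00:00:c9:11:22:33", "50:06:0e:80:12:34:56:78"))  # True
print(can_communicate("10:00:00:00:c9:11:22:33", "10:00:00:00:c9:44:55:66"))  # False
```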
-back to top-
Evaluating This Course
Please use the online evaluation system to help improve our
courses.

For evaluations handled inside the Learning Center, sign in to:


https://learningcenter.hds.com/Saba/Web/Main
Evaluations can be reached by clicking the My Learning tab, followed by Evaluations &
Surveys on the left navigation bar. Click the Launch link to evaluate the course.

Learning Center Sign-in location:

https://learningcenter.hds.com/Saba/Web/Main
