
CERTIFIED PROFESSIONAL STUDY GUIDE

VCE VBLOCK SYSTEMS DEPLOYMENT AND IMPLEMENTATION: STORAGE EXAM
210-030
Document revision 1.2
December 2014

© 2014 VCE Company, LLC. All rights reserved.

Table Of Contents
Obtaining The VCE-CIIEs Certification Credential ........................................... 4
VCE Vblock Systems Deployment And Implementation: Storage Exam ...................................... 4
Recommended Prerequisites .............................................................................................................. 4
VCE Exam Preparation Resources ................................................................................................. 4
VCE Certification Web Site ............................................................................................................. 4
About This Study Guide ..................................................................................... 6
Vblock Systems Overview ................................................................................. 6
Vblock Systems Architecture ........................................................................................................... 7
VMware vSphere Architecture ......................................................................................................... 9
VMware vSphere Components ....................................................................................................... 9

VMware vCenter Server On the AMP .............................................................. 11


Storage Provisioning And Configuration ....................................................... 11
Virtual Storage Concepts .............................................................................................................. 11
ESXi Host Data Stores .................................................................................................................. 12
Virtual Storage And High Availability ............................................................................................. 13
Virtual Network Switches ................................................................................. 14
Validate Networking And Storage Configurations ......................................... 15
Storage Connectivity Concepts In Vblock Systems ...................................... 16
EMC VNX Series Storage Arrays .................................................................................................. 16
WWNN and WWPN ....................................................................................................................... 16
Zoning ........................................................................................................................................... 16
VSAN ............................................................................................................................................. 16
Storage Protocols .......................................................................................................................... 17
Storage Management Interface ..................................................................................................... 17
Storage Types ............................................................................................................................... 17
Thick/Thin Devices ........................................................................................................................ 17
Provisioning Methods .................................................................................................................... 18
File versus Block Level Storage .................................................................................................... 18
F.A.S.T Concepts .......................................................................................................................... 18
Storage Components In Vblock Systems ....................................................... 19
Physical Disk Types ...................................................................................................................... 19
Storage Cache Options ................................................................................................................. 19
EMC XtremCache ..................................................................................................................................... 19

EMC Fast Cache ....................................................................................................................................... 20

EMC Storage Controller/RAID Controller ...................................................................................... 20


Storage Processor ......................................................................................................................... 20
Front-End Director ......................................................................................................................... 20
VNX DAE Cabling ......................................................................................................................... 21
Disk ............................................................................................................................................... 21
Vault .............................................................................................................................................. 21

Administration Of Vblock Systems ................................................................. 22


Power ............................................................................................................................................ 22
Initialize the Storage Array ............................................................................................................ 22
Storage Pool Management ............................................................................................................ 22
Virtual Pools .................................................................................................................................. 23
VCE Vision Intelligent Operations .............................................................................................. 23
Install Upgrade .................................................................................................. 25
Vblock Systems Documentation And Deliverables ....................................................................... 25
Storage Firmware, Validation, And Upgrade ................................................................................. 25
Storage Management Applications ............................................................................................... 26
PowerPath Deployment ................................................................................................................. 26
Storage Management Attributes .................................................................................................... 26
Provisioning ...................................................................................................... 27
Pool Management ......................................................................................................................... 27
LUNS ............................................................................................................................................. 28
NFS/CIFS ...................................................................................................................................... 28
VDM .............................................................................................................................................. 29
Zoning principles ........................................................................................................................... 29
Best practices ............................................................................................................................................ 29
Troubleshooting ................................................................................................ 30
Connectivity ................................................................................................................................... 30
Fault Detection .............................................................................................................................. 31
Health Check ................................................................................................................................. 31
Storage Processor/Control Station IP Address ............................................................................. 32
Storage Reconfiguration And Replacement Concepts .................................................................. 32
Conclusion......................................................................................................... 33


Obtaining The VCE-CIIEs Certification Credential


The VCE Certified Professional program validates that qualified IT professionals can design, manage, configure, and implement Vblock Systems. The VCE Certified Converged Infrastructure Implementation Engineer (VCE-CIIE) credential verifies proficiency with the deployment methodology and management concepts of the VCE Converged Infrastructure. VCE-CIIE credentials assure customers that a qualified implementer with a thorough understanding of Vblock Systems is deploying their systems.
The VCE-CIIE track includes a core qualification and four specialty qualifications: Virtualization, Network,
Compute, and Storage. Each track requires a passing score for the VCE Vblock Systems Deployment and
Implementation: Core Exam and one specialty exam.
To obtain the Certified Converged Infrastructure Implementation Engineer Storage (CIIEs) certification, you must
pass both the VCE Vblock Systems Deployment and Implementation: Core Exam, and the VCE Vblock Systems
Deployment and Implementation: Storage Exam.

VCE Vblock Systems Deployment and Implementation: Storage Exam


The VCE Vblock Systems Deployment and Implementation: Storage Exam validates that candidates have met all entrance, integration, and interoperability criteria and are technically qualified to install, configure, and secure the EMC storage array on Vblock Systems.
The exam covers Vblock Systems storage technology available at the time the exam was developed.

Recommended Prerequisites
There are no required prerequisites for taking the VCE Vblock Systems Deployment and Implementation: Storage Exam. However, exam candidates should have a working knowledge of EMC data storage solutions obtained through formal instructor-led training (ILT) and a minimum of one year of experience. It is also highly recommended that exam candidates have training, knowledge, and/or working experience with industry-standard x86 servers and operating systems.

VCE Exam Preparation Resources


VCE strongly recommends that exam candidates carefully review this study guide. However, it is not the only recommended preparation resource for the VCE Vblock Systems Deployment and Implementation: Storage Exam, and reviewing this study guide alone does not guarantee passing the exam. VCE certification credentials require a high level of expertise, and it is expected that you review the related VMware, Cisco, or EMC resources listed in the References document (available from the VCE Certification website). It is also expected that you draw from real-world experiences to answer the questions on the VCE certification exams. The certification exam also tests deployment and implementation concepts covered in the ILT course VCE Vblock Systems Deployment and Implementation, which is a recommended reference for the exam.

VCE Certification Website


Please refer to https://www.vce.com/services/training/certified/exams for more information on the VCE Certified
Professional program and exam preparation resources.


Accessing VCE Documentation


The descriptions of the various hardware and software configurations in this study guide apply generically to Vblock Systems. The Vblock System 200, Vblock System 300 family, and Vblock System 700 family Physical Build, Logical Build, Architecture, and Administration Guides contain more specific configuration details.
VCE-related documentation is available via the links listed below. Use the link appropriate to your role.
Role                                        Resource
Customer                                    http://support.vce.com/
VCE partner                                 www.vcepartnerportal.com
VCE employee                                www.vceview.com/solutions/products/
Cisco, EMC, VCE, or VMware employee         www.vceportal.com/solutions/68580567.html

Note: The websites listed above require some form of authentication using a username/badge and password.


About This Study Guide


The content in this study guide is relevant to the VCE Vblock Systems Deployment and Implementation: Storage
Exam. It provides information about EMC storage, and focuses on how it integrates into the VCE Vblock Systems.
Specifically, it addresses installation, administration, and troubleshooting of data storage arrays within the Vblock
Systems environment.
This study guide focuses on deploying an EMC storage array in a VCE Vblock Systems converged infrastructure.
Vblock Systems come configured with specific customer-defined server, storage, and networking hardware that is
already VMware qualified. The bulk of this guide concentrates on how to configure and manage the data storage
array on Vblock Systems.
The following topics are covered in this study guide:
• Overview of the Vblock Systems and EMC data storage environment, including an architectural review of the EMC VNXe and the Vblock Systems-specific data storage components
• Configuring and optimizing storage and networking for virtual applications, including optimizing the environment for the ESXi virtual infrastructure
• Storage connectivity components and concepts in Vblock Systems, focusing on the EMC VNX series storage array; WWNNs and WWPNs, zoning, storage protocols, storage management interfaces, storage types, thick and thin devices, provisioning methods, and file versus block level storage are also discussed
• Administration of Vblock Systems, including power, deployment and implementation, storage provisioning, and installation and upgrade deliverables
• Troubleshooting, including specific situations often found in a deployment

Vblock Systems Overview


This study guide focuses on the Vblock System 200, Vblock System 300 family, and Vblock System 700 family Converged Infrastructure, comprising Cisco UCS blade servers; Cisco Nexus unified and IP-only network switches; Cisco Catalyst management switches and Cisco MDS SAN switches; VMware vSphere Hypervisor ESXi and VMware vCenter Server software; and EMC VNX (Vblock System 200 and Vblock System 300 family) or VMAX (Vblock System 700 family) storage systems.
VCE Vblock Systems combine industry-leading hardware components to create a robust, extensible
platform to host VMware vSphere in an optimized scalable environment. Vblock Systems use redundant
hardware and power connections, which, when combined with clustering and replication technologies,
create a highly available virtual infrastructure.


Vblock Systems Architecture


Vblock Systems use a scaled-out architecture built to consolidate enterprise data center infrastructure and improve efficiency. System resources scale through common, fully redundant components. The architecture allows for deployments involving large numbers of virtual machines and users.
The specific hardware varies depending on the particular model and configuration of the Vblock Systems. The
compute, storage, and network components include:
• Cisco Unified Computing System (UCS) environment components:
o UCS rack-mount servers (Vblock System 200) and blade servers (Vblock System 300 family and Vblock System 700 family)
o UCS chassis
o UCS Fabric Interconnects
o I/O modules
• Redundant Cisco Catalyst and/or Nexus LAN switches
• Redundant Cisco MDS SAN switches, installed in pairs
• EMC VNX or VMAX enterprise storage arrays
Base configuration software comes preinstalled, including VMware vSphere. The Vblock Systems management
infrastructure includes two significant management components:
• The Advanced Management Pod (AMP) resides on a designated server and is made up of management virtual machines. It functions as a centralized repository for Vblock Systems software management tools, including vCenter Server.
• The VCE Vision Intelligent Operations application is a single source for Vblock Systems resource monitoring and management. VCE Vision software is the industry's first converged architecture manager designed with a consistent interface that interacts with all Vblock Systems components. VCE Vision software integrates tightly with vCenter Operations Manager, the management platform for the vSphere environment.


The diagram below provides a sample view of the Vblock Systems architecture. The Vblock System 720
is shown in this example.


VMware vSphere Architecture


Vblock Systems support multiple versions of VMware vSphere; VMware vSphere 5.5 is the latest iteration of the VMware server virtualization suite. Architecturally, it has two layers:
• The ESXi virtualization layer is the ESXi hypervisor that runs on the servers in the Vblock Systems. It abstracts processor, memory, and storage into virtual machines. ESXi hosts reside on UCS B-Series blade servers.
• The virtualization management layer in Vblock Systems is vCenter Server. vCenter is a central management point for the ESXi hosts and the virtual machines they host. vCenter runs as a service on a Windows server and resides on the AMP. It provides the following functionality:
o User access to core services
o VM deployment
o Cluster configuration
o Host and VM monitoring

VMware vSphere Components


In addition, VMware vSphere features the following components (this partial list previews some of the features examined in this study guide):
• vCenter Operations Manager is an automated operations management solution that provides an integrated performance, capacity, and configuration system for virtual cloud infrastructure.
• The Web Client user interface lets an administrator manage the vSphere environment from a remote system.
• VMware vSphere HA provides business continuity services, such as a cluster file system, host and VM monitoring, failover, and data protection.
• VMFS is the cluster file system for ESXi environments. It allows multiple ESXi servers to access the same storage at the same time and features a distributed journaling mechanism to maintain high availability.
• vMotion enables live migration of virtual machines from one server to another. Storage vMotion enables live migration of VM files from one data store to another.
• VMware vSphere Update Manager (VUM) is another notable vSphere component. It maintains the compliance of the virtual environment. It automates patch management and eliminates manual tracking and patching of vSphere hosts and virtual machines.


The following diagram provides a concise view of vCenter and its related components:


VMware vCenter Server On The AMP


The vCenter server instance installed on the Advanced Management Pod (AMP) is the primary management
point for the Vblock Systems virtual environment. The AMP is a specific set of hardware in the Vblock Systems,
typically in a high-availability configuration that contains all virtual machines and vApps that are necessary to
manage the Vblock Systems infrastructure. vCenter manages the VMware vSphere environment and allows you
to install and configure vApps and create new ESXi instances, as well as look at VMware performance and
troubleshooting information.
The Advanced Management Pod (AMP) is a set of hardware (optionally, HA clustered) that hosts a virtual
infrastructure containing VMware vCenter and VMs running the tools necessary to manage and maintain the
Vblock Systems environment. The diagram below represents a logical view of the AMP:

Storage Provisioning And Configuration


Storage systems and their ability to interact with VMware vSphere are important considerations when creating a
resilient virtualized environment. EMC storage arrays complement the Vblock Systems architecture by providing a
robust, highly available storage infrastructure. VMware vSphere leverages this infrastructure to provision new VMs
and virtual storage.

Virtual Storage Concepts


Thin provisioning allows for flexibility in allocating storage, and VMware vSphere includes support for it. Administrators can create thin-format virtual machine disks (VMDKs), where ESXi provisions the entire space required for the disk's current and future activities, but commits only as much storage space as the disk needs for its initial operations. VMware vSphere manages usage and space reclamation. It is possible to grow or shrink an existing VMDK to reflect its storage requirements.
The storage arrays in Vblock Systems are preconfigured based on array type and Vblock Systems model. By default, the storage arrays are Fully Automated Storage Tiering (FAST) enabled. FAST dynamically stores data according to its activity level: highly active data goes to high-performance drives; less active data goes to high-capacity drives. Vblock Systems storage arrays have a mix of Enterprise Flash Drives (EFD), Fibre Channel drives, SATA drives, SAS drives, and NL-SAS drives. As an example, Vblock System 300 family models deploy FAST with a default configuration of 5% Enterprise Flash drives, 45% SAS, and 50% Near-Line SAS.
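To make the tiering behavior concrete, the following Python fragment is a minimal sketch (not the actual FAST relocation engine) that ranks hypothetical data extents by observed activity and fills the fastest tier first, using the 5/45/50 split mentioned above; all extent names, sizes, and activity counts are invented for the example.

    # Minimal sketch of activity-based tier placement (illustrative only, not EMC FAST).
    pool_gb = 1_000
    tiers = [("EFD", 0.05 * pool_gb), ("SAS", 0.45 * pool_gb), ("NL-SAS", 0.50 * pool_gb)]

    # Hypothetical extents: name -> (I/O activity count, size in GB)
    extents = {
        "ext-a": (9000, 40), "ext-e": (7800, 30), "ext-c": (4200, 400),
        "ext-b": (150, 300), "ext-d": (30, 200),
    }

    placement = {}
    remaining = [[name, capacity] for name, capacity in tiers]
    for ext, (activity, size_gb) in sorted(extents.items(), key=lambda kv: kv[1][0], reverse=True):
        for tier in remaining:            # hottest data lands on the fastest tier that still has room
            if tier[1] >= size_gb:
                placement[ext] = tier[0]
                tier[1] -= size_gb
                break

    print(placement)                      # most active extents map to EFD, coldest to NL-SAS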
Vblock Systems virtual storage offers lazy and eager thick provisioning. Thick Provision Lazy Zeroed creates a
virtual disk in a default thick format, with space reserved during the virtual disk creation. Any older data on the
storage device is cleared, or zeroed out, only when the VM first writes new data to that thick virtual disk. It leaves
the door open for recovering deleted files or restoring old data, if necessary. Alternatively, a Thick Provision Eager
Zeroed virtual disk clears data from the storage device upon creation.


Vblock Systems support PowerPath and native VMware multipathing to manage storage I/O connections. ESXi uses a pluggable storage architecture in the VMkernel and is delivered with I/O multipathing software referred to as the Native Multipathing Plugin (NMP), an extensible module that manages sub plug-ins. VMware provides built-in sub plug-ins, but they can also come from third parties. NMP sub plug-ins are of two types: Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). PSPs are responsible for choosing a physical path for I/O requests. The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device, but you can override the default.
The VMware NMP supports the following PSPs (a conceptual sketch follows the list):
• Most Recently Used (MRU): the host selects the path it used most recently. When that path becomes unavailable, the host selects an alternative path and does not revert to the original path when it becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for most active-passive storage devices, and VMware vSphere displays the state as the Most Recently Used (VMware) path-selection policy.
• Fixed: the host uses a designated preferred path; otherwise, it selects the first working path discovered at boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices, and VMware vSphere displays the state as the Fixed (VMware) path-selection policy. Note that if a default preferred path's status turns to dead, the host selects a new preferred path; however, an explicitly designated preferred path remains preferred even when it becomes inaccessible.
• Round Robin (RR): the host uses an automatic path-selection algorithm that rotates through all active paths for active-passive arrays and through all available paths for active-active arrays. RR is the default for a number of arrays and can implement load balancing across paths for different LUNs. VMware vSphere displays the state as the Round Robin (VMware) path-selection policy.
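To make these policies concrete, here is a minimal Python sketch (illustrative only, not VMware's NMP implementation) that models how MRU, Fixed, and Round Robin would each choose a path; the path names and the simplified selection logic are assumptions for the example.

    # Minimal sketch of NMP path-selection behavior (illustrative, not VMware code).
    from itertools import cycle

    paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]   # hypothetical path names
    rr_rotation = cycle(paths)

    def select_mru(last_used, available):
        # MRU: keep the last-used path while it works; after switching, do not fail back.
        return last_used if last_used in available else available[0]

    def select_fixed(preferred, available):
        # Fixed: use the designated preferred path when it is up, else the first working path.
        return preferred if preferred in available else available[0]

    def select_round_robin(available):
        # Round Robin: rotate through usable paths for successive I/O requests.
        if not available:
            raise RuntimeError("no paths available")
        while True:
            path = next(rr_rotation)
            if path in available:
                return path

    print(select_mru("vmhba2:C0:T0:L0", paths))
    print(select_fixed("vmhba1:C0:T0:L0", paths))
    print(select_round_robin(paths), select_round_robin(paths))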
PowerPath/VE is host multipathing software optimized for virtual host environments. It provides I/O path optimization, path failover, and I/O load balancing across virtual host bus adapters (HBAs). PowerPath/VE installs as a virtual appliance from an OVF file. The PowerPath/VE license management server installs on the AMP.

ESXi Host Data Stores


All the files associated with VMs are contained in the ESXi host data store, a logical construct that can exist on
most standard SAN or NFS physical storage devices. A data store is a managed object that represents a storage
location for virtual machine files. A storage location can be a VMFS volume, a directory on Network Attached
Storage, or a local file system path. Virtual machines need no information about the physical location of their
storage, because the data store keeps track of it. Data stores are platform-independent and host-independent.
Therefore, they do not change when the virtual machines move between hosts.
Data store configuration is per host. As part of host configuration, a host system can mount a set of network
drives. Multiple hosts may point to the same storage location. Only one data store object exists for each shared
location. Each data store object keeps a reference to the set of hosts that are mounted to it. You may only remove
a data store object when it has no mounted hosts.
Data stores are created during the initial ESXi-host boot and when adding an ESXi host to the inventory. You can
adjust their size with the Add Storage command. Once established, you can use them to store VM files.
Management functions include renaming data stores, removing them, and setting access control permissions.
Data stores can also have group permissions.
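The host/data store relationship described above can be modeled with a small, hypothetical Python class (not the vSphere API): one object per shared location tracks the hosts that have mounted it and refuses removal while any host remains attached.

    # Hypothetical model of a shared data store object (illustration only, not the vSphere API).
    class Datastore:
        def __init__(self, name):
            self.name = name
            self.mounted_hosts = set()     # one object per shared storage location

        def mount(self, host):
            self.mounted_hosts.add(host)

        def unmount(self, host):
            self.mounted_hosts.discard(host)

        def remove(self):
            # A data store object may only be removed when no hosts have it mounted.
            if self.mounted_hosts:
                raise RuntimeError(f"{self.name} still mounted by {sorted(self.mounted_hosts)}")
            print(f"{self.name} removed")

    ds = Datastore("VMFS_Datastore_01")
    ds.mount("esxi-host-01"); ds.mount("esxi-host-02")     # multiple hosts, one datastore object
    ds.unmount("esxi-host-01"); ds.unmount("esxi-host-02")
    ds.remove()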


Virtual Storage And High Availability


To minimize the possibility of service outages, Vblock Systems Converged Infrastructure hardware has multiple
redundant features designed to eliminate single points of failure. The EMC storage systems used in Vblock
Systems configurations also implement various mechanisms to ensure data reliability with BC/DR capabilities.
VMware vSphere high availability (HA) features enhance the inherent resiliency of the Vblock Systems
environment. When properly configured, these features enable VMs to remain available through a variety of both
planned and unplanned outages.
Clustering computer systems has been around for a long time. Shared-nothing failover clustering at the OS level
is a predominant approach to system availability, and it does indeed provide system continuity.
VMware vSphere HA allows clustering at the hypervisor layer, leveraging its cluster file system, VMFS, to
allow shared access to VM files during cluster operations. Unlike OS-based clustering, VMware vSphere
clusters remain in service during failure and migration scenarios that would cause an outage in a typical
failover cluster. VMware vSphere HA gets VMs back up and running after an ESXi host failure with very little
effect to the virtual infrastructure.
A key to implementing a resilient HA cluster is using multiple I/O paths for cluster communications and data
access. This hardware infrastructure is part of the Vblock Systems, encompassing both SAN and LAN
fabrics. Another Vblock Systems best practice is to configure redundant data stores, enabling alternate
paths for data store heartbeat. Additionally, NIC teaming uses multiple paths for cluster communications and
tolerates NIC failures.
Another important consideration is ensuring that cluster failover targets have the necessary resources to handle the application requirements of the primary host. Because certain planned outage scenarios are relatively short-lived, the primary VM can run on a reduced set of resources until migrated back to the original location. In the case of failover due to a real hardware or software failure, the target VM environment must be able to host the primary OS and application with no performance degradation.
VMware vSphere HA provides a base layer of support for fault tolerance. Full fault tolerance is at the VM level. The Host Failures Cluster Tolerates admission control policy specifies a maximum number of host failures, given the available resources. VMware vSphere HA ensures that if these hosts fail, sufficient resources remain in the cluster to fail over all of the VMs from those hosts. This is particularly important for business applications hosted on ESXi. VMware vSphere includes tools to analyze the slot size required to successfully fail over VMs to a new location. The slot size is a representation of the CPU and memory necessary to host the VM after a failover event. Several additional settings define failover and restart parameters. Keep in mind that configuring slot size requires careful consideration. A smaller size may conserve resources at the expense of application performance.
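As a simplified illustration of the slot-size idea (the real vSphere admission-control calculation has additional rules, and the reservation and capacity values below are invented), the sketch derives a slot from the largest CPU and memory reservations and counts the slots each host can supply.

    # Simplified slot-size arithmetic for HA admission control (example values only).
    vm_reservations = [      # (cpu_mhz, mem_mb) reservations of powered-on VMs
        (500, 1024), (1000, 2048), (250, 512),
    ]
    host_capacity = [(12000, 65536), (12000, 65536)]     # (cpu_mhz, mem_mb) per host

    slot_cpu = max(cpu for cpu, _ in vm_reservations)    # slot = largest CPU reservation
    slot_mem = max(mem for _, mem in vm_reservations)    # and largest memory reservation

    slots_per_host = [min(cpu // slot_cpu, mem // slot_mem) for cpu, mem in host_capacity]
    print(f"Slot size: {slot_cpu} MHz / {slot_mem} MB")
    print(f"Slots per host: {slots_per_host}, cluster total: {sum(slots_per_host)}")

A single large reservation inflates the slot and reduces the number of slots per host, which is why the guide cautions that slot sizing requires careful consideration.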
The VMware vSphere Distributed Resource Scheduler (DRS) aggregates compute resources into logical pools, which ultimately simplifies HA configuration and deployment. It balances computing capacity and load within the cluster to optimize VM performance.
VMware vSphere Fault Tolerance protects VMs when an ESXi server goes down, with no loss of data, transactions, or connections. If an ESXi host fails, Fault Tolerance instantly moves VMs to a new host via vLockstep, which keeps a secondary VM in sync with the primary VM, ready to take over if need be. vLockstep passes the instructions and instruction execution sequence of the primary VM to the secondary VM so that, if the primary host fails, the takeover occurs immediately. After the failover, a new secondary VM respawns to reestablish redundancy. The entire process is transparent and fully automated, and occurs even if the vCenter Server is unavailable. A vSphere HA cluster is a prerequisite for configuring Fault Tolerant VMs.
Many situations require migration of a VM to a compatible host without service interruption, for example, performing maintenance on production hosts or relieving processing issues and bottlenecks on existing hosts. vMotion enables live host-to-host migration for virtual machines.


In addition, vSphere offers an enhanced vMotion compatibility (EVC) feature that allows live VM migration
between hosts with different CPU capabilities. This is useful when upgrading server hardware, particularly if the
new hardware contains a new CPU type or manufacturer. These enhanced clusters are a distinct cluster type.
Existing hosts can function as EVC hosts only after being migrated into a new, empty EVC cluster.
Storage systems need maintenance too, and Storage vMotion allows VM files to migrate from one shared storage system to another with no downtime or service disruption. It is also effective when performing migrations to different tiers of storage.
Performing any vMotion operation requires permissions associated with Data Center Administrator, Resource
Pool Administrator, or Virtual Machine Administrator.

Virtual Network Switches


This section details network connectivity and management for virtual machines. Vblock Systems support a
number of different network/storage paradigms: segregated networks with block-only storage, with unified storage,
with SAN boot storage; unified networks with block-only storage, SAN boot storage, and unified storage.
Close investigation of the Vblock Systems network architecture is beyond the scope of this study guide. Generally, segregated network connections use separate pairs of LAN (Catalyst and Nexus) and SAN (MDS) switches, while unified network connections consolidate both LAN and SAN connectivity onto a single pair of Nexus network switches.
Virtual servers are managed and connected differently than physical servers and have different requirements for
fabric connectivity and management. They use a virtual network switch. Vblock Systems customers have two
options here, the VMware virtual switch (vSwitch), and the Cisco Nexus 1000V virtual switch. The VMware virtual
switch runs on the ESXi kernel and connects to the Vblock Systems LAN through the UCS Fabric Interconnect.
The Nexus 1000V virtual switch from Cisco resides on each Vblock Systems server and is licensed on a per-host basis. It offers better virtual network management and scalability than the VMware virtual switch, and VCE considers it a best practice.
The Nexus 1000V is a combined hardware and software switch solution, consisting of a Virtual Ethernet
Module (VEM) and a Virtual Supervisor Module (VSM). The following diagram depicts the 1000V distributed
switching architecture:


The VEM runs as part of the ESXi kernel and uses the VMware vNetwork Distributed Switch (vDS) API, which
was developed jointly by Cisco and VMware for virtual machine networking. The integration is tight; it ensures that
the Nexus 1000V is fully aware of all server virtualization events such as vMotion and Distributed Resource
Scheduler (DRS). The VEM takes configuration information from the VSM and performs Layer-2 switching and
advanced networking functions.
If the communication between the VSM and the VEM is interrupted, the VEM has Nonstop Forwarding (NSF)
capability to continue to switch traffic based on the last known configuration. You can use the vSphere Update
Manager (VUM) to install the VEM, or you can install it manually using the CLI.
The VSM controls multiple VEMs as one logical switch module. Instead of multiple physical line-card modules, the
VSM supports multiple VEMs that run inside the physical servers. Initial virtual-switch configuration occurs in the
VSM, which automatically propagates to the VEMs. Instead of configuring soft switches inside the hypervisor on a
host-by-host basis, administrators can use a single interface to define configurations for immediate use on all
VEMs managed by the VSM. The Nexus 1000V provides synchronized, redundant VSMs for high availability.
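The idea of a single control point pushing configuration to every VEM can be sketched as follows; this is purely conceptual Python (the port-profile names and settings are invented), not Cisco's implementation.

    # Conceptual sketch: a VSM-like controller propagating a port profile to all registered VEMs.
    class VEM:
        def __init__(self, host):
            self.host = host
            self.port_profiles = {}    # last known configuration (basis for Nonstop Forwarding)

        def apply(self, name, settings):
            self.port_profiles[name] = settings
            print(f"{self.host}: applied port-profile {name}")

    class VSM:
        def __init__(self):
            self.vems = []

        def register(self, vem):
            self.vems.append(vem)

        def define_port_profile(self, name, **settings):
            # One definition on the VSM reaches every VEM, instead of per-host soft-switch edits.
            for vem in self.vems:
                vem.apply(name, settings)

    vsm = VSM()
    for host in ("esxi-01", "esxi-02", "esxi-03"):
        vsm.register(VEM(host))
    vsm.define_port_profile("vm-data", vlan=100, uplink="port-channel1")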
You have a few interface options for configuring the Nexus 1000V virtual switch: standard SNMP and XML, as well as the Cisco CLI and Cisco LAN Management Solution (LMS). The Nexus 1000V is compatible with all the vSwitch management tools, and the VSM also integrates with VMware vCenter Server so that the virtualization administrator can manage the network configuration in the Cisco Nexus 1000V switch.
The VMware virtual switch directs network traffic to one of two distinct traffic locations: the VMkernel and the VM
network. VMkernel traffic controls Fault Tolerance, vMotion, and NFS. The VM network allows hosted VMs to
connect to the virtual and physical network.
Standard vSwitches exist at each (ESXi) server and can be configured either from the vCenter Server or directly
on the host. Distributed vSwitches exist at the vCenter Server level, where they are managed and configured.
Several factors govern choice of adapter, generally either host compatibility requirements or application
requirements. Virtual network adapters install into ESXi and emulate a variety of physical Ethernet and Fibre
Channel NICs. (Refer to the Vblock Systems Architecture Guides for network hardware details and
supported topologies.)

Validate Networking And Storage Configurations


The networking topology and associated hardware on the Vblock Systems arrive preconfigured. Beyond the basic configuration performed at manufacturing, the network is adjusted to accommodate the applications it supports and other environmental considerations. For example, if using block storage, SAN configuration components must be tested and verified. If using filers or unified storage, the LAN settings may need adjustment, particularly NIC teaming, multipathing, and jumbo frames.
With regard to the SAN configuration, you need to review the overall connectivity in terms of availability. Check both host multipathing and switch failover to ensure that the VMs will be as highly available as possible. Review the storage configuration to verify the correct number of LUNs and storage pools and to verify storage pool accessibility. Then, make sure that all the deployed virtual machines have access to the appropriate storage environment.
These activities require almost the complete suite of monitoring and management tools in the Vblock Systems, with most tools installed on the AMP. Specific tools used during a deployment include vCenter, vCenter Operations Manager, EMC Unisphere, EMC PowerPath Viewer, VCE Vision software, and Cisco UCS Manager.
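A validation pass like the one described above can be captured as a simple checklist script; the sketch below is hypothetical (the check names and expected counts are examples, not a VCE tool) and simply aggregates pass/fail results for review.

    # Hypothetical post-deployment validation checklist (names and expected values are examples).
    expected = {"luns": 16, "storage_pools": 2}
    observed = {"luns": 16, "storage_pools": 2}      # values gathered from the management tools

    checks = {
        "host multipathing redundant": True,          # both fabrics visible from each host
        "switch failover verified": True,
        "LUN count matches design": observed["luns"] == expected["luns"],
        "storage pools accessible": observed["storage_pools"] == expected["storage_pools"],
        "all deployed VMs see their datastores": True,
    }

    failed = [name for name, ok in checks.items() if not ok]
    print("VALIDATION PASS" if not failed else f"VALIDATION FAIL: {failed}")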


Storage Connectivity Concepts In Vblock Systems


EMC VNX Series Storage Arrays
The EMC VNX series are fourth-generation storage platforms that deliver industry-leading capabilities. They offer a unique combination of flexible, scalable hardware design and advanced software capabilities that enable them to meet the diverse needs of today's organizations. EMC VNX series platforms support block and unified storage. The platforms are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.
Regardless of the storage protocol implemented at startup (block or unified), Vblock Systems can include cabinet
space, cabling, and power to support the hardware for all of these storage protocols. This arrangement makes it
easier to move from block storage to unified storage with minimal hardware changes.
Within the EMC VNX series storage arrays, the drive enclosures connect to dual storage processors (SPs) using 6 Gb/s, four-lane serial attached SCSI (SAS). Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco MDS switches or Cisco Nexus unified network switches within the network layer over FC.

WWNN And WWPN


A WWNN (World Wide Node Name) identifies a Fibre Channel device, such as an FC HBA or storage array, as a whole, while a WWPN (World Wide Port Name) identifies an individual port on that device and is what the fabric sees when the port logs in. A WWN pool is a collection of worldwide names (WWNs) for use by the Fibre Channel vHBAs in a Cisco UCS instance. Separate pools are created for each: a WWNN is assigned to the server and a WWPN is assigned to the vHBA. If WWN pools are used in service profiles, you do not have to manually configure the WWNs used by the server associated with the service profile. In a system that implements multitenancy, you can use a WWN pool to control the WWNs that are used by each organization.
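The pool concept can be illustrated with a short Python sketch that hands out WWPNs from a UCS-style pool range; the prefix, range, and vHBA names are invented for the example and are not VCE or Cisco defaults.

    # Illustrative WWN pool: assigns WWPNs from a contiguous range (prefix and range are made up).
    class WWNPool:
        def __init__(self, prefix, start, size):
            self.free = [f"{prefix}:{start + i:02X}" for i in range(size)]
            self.assigned = {}

        def assign(self, vhba):
            wwpn = self.free.pop(0)        # no manual WWN configuration per service profile
            self.assigned[vhba] = wwpn
            return wwpn

    pool = WWNPool("20:00:00:25:B5:01:0A", 0x00, 8)
    for vhba in ("blade1-vhba-a", "blade1-vhba-b"):
        print(vhba, "->", pool.assign(vhba))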

Zoning
Zoning is a procedure that takes place on the SAN fabric and ensures devices can only communicate with those
that they need to. Masking takes place on storage arrays and ensures that only particular World Wide Names
[WWNs] can communicate with LUNs on that array.

There are two distinct methods of zoning that can be applied to a SAN: World Wide Name (WWN) zoning and port zoning.
WWN zoning groups a number of WWNs in a zone and allows them to communicate with each other. The switch port that each device is connected to is irrelevant when WWN zoning is configured. One advantage of this type of zoning is that when a port is suspected to be faulty, a device can be connected to another port without the need for fabric reconfiguration. A disadvantage is that if an HBA fails in a server, the fabric will need to be reconfigured for the host to reattach to its storage. WWN zoning is also sometimes called 'soft zoning.'
Port zoning groups particular ports on a switch or number of switches together, allowing any device connected to those ports to communicate with each other. An advantage of port zoning is that you don't need to reconfigure a zone when an HBA is changed. A disadvantage is that any device can be attached into the zone and communicate with any device in the zone. There are EMC recommendations for zoning specific to the Vblock Systems you are configuring.
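As a conceptual sketch (not switch configuration syntax), the following Python fragment contrasts the two membership models: a WWN zone checks the device's WWPN regardless of port, while a port zone checks the switch port regardless of which device is plugged in; the WWPNs and port names are invented.

    # Conceptual comparison of WWN zoning vs. port zoning membership (not real switch config).
    wwn_zone = {"10:00:00:00:C9:AA:BB:01", "50:06:01:60:3B:20:11:22"}   # host HBA + array port WWPNs
    port_zone = {("mds-a", "fc1/5"), ("mds-a", "fc1/17")}                # (switch, port) pairs

    def wwn_zone_allows(wwpn_a, wwpn_b):
        return {wwpn_a, wwpn_b} <= wwn_zone      # which port the device uses does not matter

    def port_zone_allows(port_a, port_b):
        return {port_a, port_b} <= port_zone     # any device on those ports may communicate

    print(wwn_zone_allows("10:00:00:00:C9:AA:BB:01", "50:06:01:60:3B:20:11:22"))
    print(port_zone_allows(("mds-a", "fc1/5"), ("mds-a", "fc1/17")))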

VSAN
A VSAN is a virtual storage area network (SAN). A SAN is a dedicated network that interconnects hosts and
storage devices primarily to exchange SCSI traffic. In SANs you use the physical links to make these
interconnections; VSANs are a logical segmentation of a physical SAN. Sets of protocols run over the SAN to
handle routing, naming, and zoning. You can design multiple SANs with different topologies.
A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic to that external SAN,
including broadcast traffic. The traffic on one named VSAN knows that the traffic on another named VSAN exists,
but cannot read or access that traffic.


Like a named VLAN, the name that you assign to a VSAN ID adds a layer of abstraction that allows you to
globally update all servers associated with service profiles that use the named VSAN. You do not need to
reconfigure the servers individually to maintain communication with the external SAN. You can create more than
one named VSAN with the same VSAN ID.
In a cluster configuration, a named VSAN can be configured to be accessible only to the FC uplinks on one fabric
interconnect or to the FC Uplinks on both fabric interconnects.
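The abstraction of a named VSAN, a label that many service profiles reference while it maps to a VSAN ID in one place, can be sketched as follows; the names and IDs are invented for the example.

    # Sketch of the named-VSAN abstraction: service profiles reference the name, not the ID.
    named_vsans = {"prod-san-a": 10, "prod-san-b": 20}        # name -> VSAN ID (example values)
    service_profiles = {"blade1": "prod-san-a", "blade2": "prod-san-a", "blade3": "prod-san-b"}

    def vsan_id_for(profile):
        return named_vsans[service_profiles[profile]]

    # Updating the mapping in one place updates every server that references the name.
    named_vsans["prod-san-a"] = 110
    print({profile: vsan_id_for(profile) for profile in service_profiles})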

Storage Protocols
Unified storage is a platform with storage capacity connected to a network that provides file-based and blockbased data storage services to other devices on the network. Unified storage uses standard file protocols such as
Common Internet File System (CIFS) and Network File System (NFS) and standard block protocols like Fibre
Channel (FC) and Internet Small Computer System Interface (iSCSI) to allow users and applications to access
data consolidated on a single device. Block storage uses a standard file protocol such as Fibre Channel (FC).
Unified storage is ideal for organizations with general-purpose servers that use internal or direct-attached storage
for shared file systems, applications, and virtualization. Unified storage replaces file servers and consolidates data
for applications and virtual servers onto a single, efficient, and powerful platform.
Large numbers of direct-attached or internal storage devices, of varying types and release levels, can be difficult to manage and protect, and costly due to very low total utilization rates. Unified storage provides the cost savings and simplicity of consolidating storage over an existing network, the efficiency of tiered storage, and the flexibility required by virtual server environments. VCE Vblock Systems can support block or unified storage protocols.

Storage Management Interface


SMI-S (Storage Management Initiative Specification) is a standard developed by the Storage Network Industry
Association (SNIA) that is intended to facilitate the management of storage devices from multiple vendors in
storage area networks. SMI-S defines common attributes for each component in a SAN. The specification is
platform-independent and extensible. This makes it possible to add new devices to a SAN with a minimum of
difficulty. SMI-S can allow SAN managers to oversee all aspects of a network from a single point. EMC has separate management interfaces for midrange block and file systems: Navisphere Manager for block-based storage systems, and Celerra Manager for Celerra file-based systems.

Storage Types
Adding RAID packs can increase storage capacity. Each pack contains a number of drives of a given type, speed,
and capacity. The number of drives in a pack depends upon the RAID level that it supports.
The number and types of RAID packs to include in Vblock Systems are based upon the following:
• The number of storage pools that are needed
• The storage tiers that each pool contains, and the speed and capacity of the drives in each tier
Note: The speed and capacity of all drives within a given tier in a given pool must be the same.

Thick/Thin Devices
The primary difference between thin LUNs and Classic or thick LUNs is that thin LUNs can present more storage to an application than what is physically allocated. Presenting storage that is not physically available avoids underutilizing the storage system's capacity.


Data and LUN metadata are written to thin LUNs in 8 KB chunks. Thin LUNs consume storage on an as-needed basis from the underlying pool. As new writes come into a thin LUN, more physical space is allocated in 256 MB slices.
Thick LUNs are also available in VNX. Unlike a thin LUN, a thick LUN's capacity is fully reserved and allocated on creation, so it will never run out of capacity. Users can also better control which tier the slices are initially written to. For example, as pools are initially being created and there is still sufficient space in the highest tier, users can be assured that when they create a LUN with 'Highest Available Tier' or 'Start High, then Auto-Tier', data will be written to the highest tier because the LUN is allocated immediately.
Thin LUNs typically have lower performance than thick LUNs because of their indirect addressing; the mapping overhead for a thick LUN is much lower than for a thin LUN.
Thick LUNs have more predictable performance than thin LUNs because the slice allocation is assigned at
creation. However, thick LUNs do not provide the flexibility of oversubscribing like a thin LUN does so they should
be used for applications where performance is more important than space savings.
Thick and thin LUNs can share the same pool, allowing them to have the same ease-of-use and benefits of
pool-based provisioning.
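The allocation behavior described above can be illustrated with a rough Python sketch in which a thick LUN reserves all of its capacity at creation while a thin LUN takes 256 MB slices from the pool only as writes arrive; the pool size, LUN sizes, and write sizes are arbitrary example values.

    # Rough sketch of pool-based thick vs. thin LUN allocation (256 MB slices, example sizes).
    SLICE_MB = 256

    class Pool:
        def __init__(self, capacity_mb):
            self.free_mb = capacity_mb

        def take_slices(self, count):
            need = count * SLICE_MB
            if need > self.free_mb:
                raise RuntimeError("pool out of space")
            self.free_mb -= need
            return need

    pool = Pool(capacity_mb=100 * 1024)                   # 100 GB pool

    # Thick LUN: capacity fully reserved and allocated on creation.
    thick_mb = pool.take_slices(20 * 1024 // SLICE_MB)    # 20 GB allocated up front

    # Thin LUN: presents 50 GB but allocates slices only as data is written.
    thin_presented_mb, thin_allocated_mb = 50 * 1024, 0
    for write_mb in (100, 700, 300):                      # incoming writes
        slices = -(-write_mb // SLICE_MB)                 # round up to whole slices
        thin_allocated_mb += pool.take_slices(slices)

    print(f"thick allocated: {thick_mb} MB; thin allocated: {thin_allocated_mb} MB "
          f"of {thin_presented_mb} MB presented; pool free: {pool.free_mb} MB")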

Provisioning Methods
Storage provisioning is the process of assigning storage resources to meet the capacity, availability, and
performance needs of applications. In terms of accessing storage, the virtual disk always appears to the
virtual machine as a mounted SCSI device. The virtual disks within the data store hide the physical storage
layer from the VM.
With traditional block storage provisioning, you create a RAID group with a particular RAID protection level. When
LUNs are bound on the RAID group, the host-reported capacity of the LUNs is equal to the amount of physical
storage capacity allocated. The entire amount of physical storage capacity must be present on day one, resulting
in low levels of utilization, and recovering underutilized space remains a challenge.
Virtual Provisioning utilizes storage pool-based provisioning technology that is designed to save time, increase
efficiency, reduce costs, and improve ease of use. Storage pools can be used to create thick and thin LUNs. Thick
LUNs provide high and predictable performance for your applications, mainly because all of the user capacity is
reserved and allocated upon creation.

File Versus Block Level Storage


File level storage can be defined as a centralized location to store files and folders, and is considered network-attached storage. This level of storage requires file level protocols such as NFS, presented by Linux and VMware, and SMB/CIFS, presented by Windows.
Block level storage can be defined as fixed block sizes of data that are seen as individual hard drives. Block level file systems utilize the Fibre Channel (FC), iSCSI, and FCoE protocols.

F.A.S.T Concepts
EMC Fully Automated Storage Tiering (FAST) cache is a storage performance optimization feature that
provides immediate access to frequently accessed data. FAST cache complements FAST by automatically
absorbing unpredicted spikes in application workloads. FAST cache results in a significant increase in
performance for all read and write workloads.
FAST cache uses enterprise Flash drives to extend existing cache capacities up to 2 terabytes. FAST cache
monitors incoming I/O for access frequency and automatically copies frequently accessed data from the back-end
drives into the cache. FAST cache is simply configured and easy to monitor.
FAST cache accelerates performance to address unexpected workload spikes. FAST and FAST cache are a
powerful combination, unmatched in the industry, that provides optimal performance at the lowest possible cost.


FAST VP automatically optimizes performance in a tiered environment reducing costs, footprint, and
management effort. FAST VP puts the right data in the right place at the right time. FAST VP maximizes
utilization of Flash Drive capacity for high IOPS workloads and maximizes utilization of SATA drives for
capacity intensive applications. FAST VP is a game changing technology that delivers automation and
efficiency to the virtual data center.
VCE Recommendations (see the sketch after this list):
• SP memory allocation on a VNX changes based on how many FAST Cache drives have been purchased. VCE suggests an 80/20 rule for configuring write/read cache when using FAST Cache.
• For FAST VP implementations, assign 95% of the available pool capacity to File.
• Always allocate EFD drives for FAST Cache requirements first, and then assign the remainder to FAST VP for best performance.
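To make the 80/20 guidance and the EFD-first ordering concrete, here is a small arithmetic sketch; the SP memory figure and drive counts are example values for illustration, not sizing guidance for any particular model.

    # Example arithmetic for the 80/20 write/read cache split and EFD-first allocation.
    sp_cache_mb = 10_000                      # SP memory available for cache (example value)
    write_cache_mb = int(sp_cache_mb * 0.80)  # 80% to write cache per the VCE suggestion
    read_cache_mb = sp_cache_mb - write_cache_mb

    efd_drives = 20                           # total enterprise flash drives (example)
    fast_cache_drives = 8                     # satisfy FAST Cache requirements first
    fast_vp_drives = efd_drives - fast_cache_drives   # remainder goes to FAST VP tiers

    print(f"write cache: {write_cache_mb} MB, read cache: {read_cache_mb} MB")
    print(f"EFDs -> FAST Cache: {fast_cache_drives}, FAST VP: {fast_vp_drives}")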

Storage Components In Vblock Systems


Physical Disk Types
The disk array enclosure (DAE) is a cabinet-mounted device that holds the disk drives within the storage array.
DAEs are added in packs. The number of DAEs in each pack is equivalent to the number of back-end buses in
the EMC VNX array in the Vblock System 300 family. Each DAE is a 15-drive 3U enclosure for 3.5" form factor
drives. Certain configurations of the Vblock System 700 family, Vblock System 100, and the Vblock 200 contain
different numbers and types of DAEs.
All Vblock Systems components are fully redundant, with organized physical builds, cabling, and power for optimum energy efficiency and reliability. Depending on the Vblock Systems model, the physical disk drives will be one or more of the following:
• Serial attached SCSI (SAS): SAS disk drives allow for higher data rate transfers. SAS drives are smaller in size to reduce the amount of seek time. This disk array enclosure is for performance-intensive, random reads.
• Nearline SAS (NL-SAS): NL-SAS drives are approximately three times the size of SAS drives, allowing for higher density per drive slot. (The larger the disk, the slower the data rate transfer and seek time.) This disk array enclosure is for capacity-intensive, sequential reads.
• Enterprise Flash Drives (EFD): EFDs, also known as solid state drives (SSDs), contain no moving parts and therefore are not constrained by seek time or rotational latency. EFD is supported for the VMAX family, or optionally for the VNX.
• Fibre Channel drives: Fibre Channel is a hard disk drive interface technology designed primarily for high-speed data throughput in high-capacity storage systems, usually set up as a disk array or RAID. Fibre Channel disk drive systems typically have performance rivaling or exceeding that of high-performance SCSI disk arrays.
• Serial ATA (SATA): Serial Advanced Technology Attachment is a standard for connecting hard drives to computer systems. As its name implies, SATA is based on serial signaling technology, unlike IDE drives that use parallel signaling.

Storage Cache Options


EMC XtremCache
EMC XtremCache is intelligent caching software that leverages server-based flash technology and writethrough caching for accelerated application performance with data protection. EMC XtremCache improves
the performance of read-intensive, flash technology and write-through caching for accelerated application

2014 VCE Company, LLC. All rights reserved.

19

performance high-throughput, minimal response time applications, and cuts latency in half. EMC
XtremCache increases read performance by keeping a localized flash cache within a server. EMC
XtremCache extends EMC FAST array-based technology in to the server with a single, intelligent
input/output path from the application to the data store.
Write-through caching forces writes to persist to the back-end storage array, ensuring high availability, data integrity and reliability, and disaster recovery.
Read cache typically needs a smaller amount of capacity to operate efficiently. In addition, its typical usage
is less varied across the majority of workloads. Write cache is different because its usage affects
performance more heavily and can range more widely. Set the read cache first, and then allocate the total
remaining capacity to write cache.
EMC Fast Cache
FAST Cache is best for small random I/O where data is skewed. EMC recommends first utilizing available flash
drives for FAST Cache, which can globally benefit all LUNs in the storage system. Then supplement performance
as needed with additional flash drives in storage pool tiers.
Virtual Applications (vApps) are a collection of components that combine to create a virtual appliance running as a VM. Several Vblock Systems management components reside on the AMP as vApps.
VMware uses the Open Virtualization Format/Archive (OVF/OVA) extensively. VMware vSphere relies on OVF/OVA standard interface templates as a means of deploying virtual machines and vApps. A VM template contains all of its OS, application, and configuration data. You can use an existing template as a master to replicate any VM or vApp, or use it as the foundation to customize new VMs. Any VM can become a template; it is a simple procedure from within vCenter.
VM cloning is another VM replication option. The existing virtual machine is the parent of the clone. When the cloning operation is complete, the clone is a separate virtual machine, though it may share virtual disks with the parent virtual machine. Again, cloning is a simple vCenter procedure.
A snapshot saves the current state of a virtual machine, providing the ability to revert to a previous state if an error occurs while modifying or updating the VM.

EMC Storage Controller/RAID Controller


The controller performs certain operations that can make the disk's job easier. Any time the disk spends seeking
or waiting for a sector to spin under its head is wasted time because no data is being transferred. When data on
successive sectors of a disk are accessed, no time is wasted seeking. The access pattern of reading or writing
successive sectors on disk is called sequential access, and the service time for sequential access is very low. The
service time for a sequential read can be under 1 ms, so a much higher queue can be tolerated than for a random
read. Many optimizations try to maximize the sequential nature of disk drives.

Storage Processor
The Storage Processors, which handle all of the block storage processing for the array, are contained within a Storage Processor Enclosure (SPE) or Disk Processor Enclosure (DPE), depending on the VNX model. Each SP has a management module that connects to an Ethernet network and provides administrative access to the array, front-end ports for host connectivity, and back-end ports that connect to the storage. The front-end ports can be used to connect the UCS blade server hosts for block access, or Data Movers for file access.
The SPs manage all configuration tasks, store the active configuration information, and perform all monitoring and
alerting for the array.

Front-End Director
A channel director (front-end director) is a card that connects a host to the Symmetrix. Where the VNX uses
Storage Processors, the VMAX uses Engines, but their function is the samerelaying front-end I/O requests to
the back-end buses to access the appropriate storage. Each VMAX Engine is made up of several different
components, including redundant power supplies, cooling fans, Standby Power Supply modules, and
Environmental modules.
Each engine also contains two directors, each of which contains CPUs, memory (cache), and the front-end and back-end ports. As with the VNX, the back-end ports provide connectivity to the Disk Enclosures and the front-end ports
provide connectivity to hosts or NAS devices or for replication purposes. The System Interface Boards provide
connectivity from each director to the Virtual Matrix, which allows all directors to share memory.
VNX DAE Cabling
The disk processor enclosure (DPE) houses the storage processors for the EMC VNX5400, EMC VNX5600, EMC
VNX5800, and EMC VNX7600. The DPE provides slots for 2 storage processors (SPs), 2 battery backup units (BBUs),
and an integrated 25-slot disk array enclosure (DAE) for 2.5" drives. Each SP provides support for up to 5 SLICs
(small I/O cards).
Vblock Systems are designed to keep hardware changes to a minimum, should the customer decide to change
the storage protocol after installation (for example, from block storage to unified storage). Cabinet space can be
reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, etc.),
ensuring that network and power cabling capacity for these components is in place.
Disk
At the most basic level, all storage arrays do the same thing: They take capacity that resides on
physical disks and make logical volumes that are presented to the servers. All storage arrays protect data
using RAID data protection schemes and optimize performance using caching. What is different is the level
of scalability and advanced features offered, such as replication and disaster recovery capability. All disk
drives are housed in DAEs (Disk Array enclosures). If the number of RAID packs in Vblock Systems is
expanded, more disk array enclosures (DAEs) might be required. DAEs are added in packs. The number of
DAEs in each pack is equivalent to the number of back-end buses in the EMC VNX array in the Vblock
Systems. Each VNXe system has a disk package (RAID group) in the first disk enclosure. These disks store
the operating environment (OE) for the VNXe system.
Vault
The vault is a mechanism used to protect VMAX data when the array must be powered down, for example when a
power outage occurs or when an environmental threshold, such as temperature, is exceeded. The VMAX is set to
write the data in global cache memory to the vault devices and shut down the array in a controlled manner.
Two copies of the cache memory are written to independent vault devices, allowing for a fully redundant vault. The
vault process first stops all transactions to the VMAX. Once all I/O is stopped, the directors write all the
global memory data to the vault devices, and the shutdown of the VMAX is then complete. When the array is
powered back on, the vault process restores the cache memory from the vault. During this process, the array
reinitializes the physical memory, checks the integrity of the data in the vault, and restores the data to global
cache. The VMAX resumes operation once the SPSs are sufficiently recharged to support another vault operation.
Vault devices are important in ensuring the consistency of application data stored on the VMAX.
Administration Of Vblock Systems
Power
The EMC VNX series platforms support block and unified storage. The platforms are optimized for VMware
virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage
pools. Automation features include self-optimized storage tiering and application-centric replication. The EMC
VNX series storage arrays are used in the Vblock Systems family. Regardless of the storage protocol
implemented at startup (block or unified), the Vblock System 300 family includes cabinet space, cabling, and
power to support the hardware for these storage protocols. VCE recommends that, before powering on the Vblock
Systems, you ensure that all Vblock Systems components have been properly installed and cabled. Refer to the Cisco
documentation at www.cisco.com for information about the LED indicators.
To facilitate powering on the Vblock Systems at a later time, VCE also recommends confirming that the virtual
machines in the management infrastructure are set to start automatically with the system, while other virtual
machines are not set to start automatically with the system. The management infrastructure contains VMware
vCenter Server, VMware vSphere Update Manager servers, and the SQL database. Because Vblock Systems
support several storage layer and power outlet unit (POU) options inside and outside of North America, review the
VCE Vblock Systems Powering On And Off Guide and the Architecture Guide for the system you are configuring.
Initialize The Storage Array
Initializing a storage array or disk prepares the disk for discovery and for use as data storage. For a
unified (file and block) VNX, the VNX Installation Assistant (VIA) is run; IP addresses, hostname, domain, NTP, and
DNS information are entered into the VIA application. For a block-only VNX, use the Unisphere Init Tool.
Storage Pool Management
Think of a storage pool as a much larger, less restrictive pool of disks. A key quality of a pool is its ability to house
heterogeneous disk types, such as EFDs, SATA, and FC disks. This is where FAST, or Fully Automated Storage Tiering,
takes over. By having multiple disk types within a single pool, you have the ability to move data up and down
between tiers depending on its utilization (it does not, however, move hot data within a storage pool). This has
the benefit of increasing your storage efficiency, as you are moving only chunks of data rather than an entire LUN
between tiers of different value.
EMC VNX series platforms support block storage and unified storage. The platforms are optimized for
VMware virtualized applications. They feature flash drives for extendable cache and high performance in the
virtual storage pools. Automation features include self-optimized storage tiering, and application-centric
replication. A storage pool for block is a group of disk drives used for configuring pool LUNs (thick and
thin). There may be zero or more pools in a storage system. Disks can only be members of one pool and
cannot also be part of a user-defined RAID group. A storage pool for file is a group of available disk
volumes organized by AVM that are used to allocate available storage to file systems. Storage pools for file
can be created automatically when using AVM or manually by the user.
Virtual Pools
Virtual pools are resources that allow storage administrators to logically segregate application data
stored on storage arrays. Pool LUNs can be implemented as either thin or thick. Thin LUNs provide on-demand
storage that maximizes the utilization of the storage by allocating storage as needed. Thick LUNs provide high
and predictable performance for applications mainly because all of the user capacity is reserved and allocated
upon creation. VCE recommends that you use drives in multiples of RAID group sizes. Only create new virtual
provisioning (VP) pools if needed to support new application requirements.
Bare metal hosts require separate disk drives. The boot LUNs and data LUNs for bare metal cannot
coexist in the same disk groups or VP pools used for VMware vSphere ESXi boot and VMware vSphere
ESXi data volumes.
VCE Vision Intelligent Operations
The VCE Vision software suite provides an integrated set of software products for managing a data center. The
VCE Vision software suite is the first suite to provide an intelligent solution to the problem of managing operations
in a converged infrastructure environment. These tools enable and simplify converged operations by dynamically
providing a high level of intelligence into your existing management toolset. The VCE Vision software suite
enables VCE customers and third-party consumers to know that Vblock Systems exist, where they are
located, and what components they contain. It reports on the health and operating status of the Vblock Systems.
It also reports on how compliant the Vblock Systems are with a VCE RCM and VCE Security Standards. The VCE
Vision software suite effectively acts as a mediation layer between your system and the management tools you
use now. The software allows for intelligent discovery by providing a continuous near real-time perspective of your
compute, networks, storage, and virtualization resources as a single object, ensuring that your management tools
reflect the most current state of your Vblock Systems.
There are four main components to VCE Vision:
VCE Vision System Library
VCE Vision Plug-in for vCenter
VCE Vision Adapter for vCenter Operations Manager
VCE Vision SDK
This study guide only discusses the System Library, the Plug-in, and the Adapter.
The System Library is responsible for discovery. The discovery process consists of using appropriate protocols
to discover the inventory, location, and health of the Vblock Systems and, using that information, to populate an
object model. The System Library uses the information in the object model to populate a PostgreSQL database
that is used by the REST interfaces. The data stored in the object model can also be accessed through SNMP
GETs.
The initial discovery process takes place during the manufacturing process. At that time, a file is populated with
basic information about the Vblock Systems that were built and configured. Later, when the Vblock Systems are in
place at the customer site, the System Library discovers the Vblock Systems model used, its physical
components, and logical entities, using the following methods:
XML API
Simple Network Management Protocol (SNMP)
Storage Management Initiative Specification (SMI-S)
Vendor CLIs, such as EMC Unisphere CLI (UEMCLI)
Every 15 minutes, the System Library discovers the Vblock Systems and the following physical components
and logical entities:
Storage groups
RAID groups
LUN relationships to RAID and storage groups
Masking records
Mapping records: LUNs mapped to FA ports so that the ports can see the LUNs for access.
The Plug-in for vCenter is a client that runs on the VMware vSphere Web Client application. Using the API for
System Library, it provides a system-level view of a data center's configured physical servers that form a named
cluster: the Vblock Systems cluster. It also enables a customer to view and monitor information about all the
components in Vblock Systems, including the server, network switches, and storage arrays, as well as their
subcomponents and the management servers.
The graphical user interface of the Plug-in for vCenter provides a tree view that displays the Vblock Systems
name, as well as its overall system health, description, prior state, serial number, and location. Additional
information, such as the health status of the Vblock Systems and their components can be displayed by drilling
down through the tree view.
The Plug-in for vCenter integrates with the Compliance Checker, which is required for the Plug-in's complete
monitoring capabilities. Together, they enable you to run reports that provide detailed information about how closely
your Vblock Systems comply with established benchmarks and profiles you select.
The Adapter for vCenter Operations Manager discovers and monitors Vblock Systems hardware and VMware
vCenter software components. The Adapter works with VMware vCenter Operations Manager to collect and
analyze component metrics. Metric data includes health, operability, and resource availability measurements that
gauge the performance of Vblock Systems components and determine the health and status of the system.
Vblock Systems component dashboards use widgets to show the health of compute, storage, and network
components. Dashboard widgets can be connected to multiple Vblock Systems. The Resources widget shows all
Vblock Systems. Vblock Systems selected in the Resources widget are shown in the Health Tree. Components
selected in the Health Tree are shown in the Alerts, Metric Selector, and Metric Sparklines widgets.
Installation Upgrade
Vblock Systems Documentation And Deliverables
The Vblock Systems family includes an enterprise and service provider-ready system, designed to address
a wide spectrum of virtual machines (VMs), users, and applications. One key deliverable is a test plan, intended
as a guideline for performing post-implementation configuration and redundancy testing of Vblock Systems. It is a
document that should be tailored to fit the Vblock Systems and logical configuration for a specific customer.
The purpose of the test plan is to define and track post-implementation configuration and redundancy testing
of Vblock Systems and their components as deployed in the customer's infrastructure. The test plan
provides tests for Cisco UCS, VMware vMotion, the EMC VNX storage array, overall Vblock Systems redundancy,
power, and VCE Vision software. The test plan also includes review and approvals for the entire test plan,
test exit criteria, reporting results, and action items for review. Each test includes the purpose, test
procedure, expected results, observed results, and a summary test result of pass or fail. All testing is
performed in view of the customer's representatives. All results and relevant information recorded in the
Reporting Results section of the document are for review and acceptance by the customer.
The Configuration Reference Guide (CRG) is generated as a deliverable to customers at the end of an
implementation. The VCE GUI script tool takes all of the script tools designed by VCE technicians and combines them
into a common user interface.
The VCE GUI Script tool provides a familiar user interface that can import data from the logical configuration
survey, allow users to save and distribute existing configuration files, or build new configuration files that can be
used on the fly or saved to be used later.
The VCE Logical Configuration Survey (LCS) is used for configuring Vblock Systems. Information entered here
will be used in the manufacturing of your Vblock Systems. Completion of this document is required to move
forward to the next phase.
A customer may use the VCE Professional Services team to customize Vblock Systems to their specific
environment. This service is termed VCE Build Services, and the first step in the process is the collection
of information through the Logical Configuration Survey (LCS). The LCS and the Logical Build Guide (LBG) are
used by VCE Professional Services teams to tailor the configuration of Vblock Systems. The configuration and
subsequent testing are carried out on VCE premises, and Vblock Systems are shipped in the preconfigured state
directly to the customer's data center. Integration of Vblock Systems into an existing environment is thus
simplified. VCE customers are encouraged to engage appropriate security and audit stakeholders in this process
to provide direction. By providing this information in advance, customer teams reduce the required effort in
configuring the components of Vblock Systems in a compliant manner.
Storage Firmware, Validation, And Upgrade
The CLARiiON environment is governed by FLARE code and the Symmetrix/DMX by Enginuity code. The
Enginuity code was developed internally at EMC. Unlike the CLARiiON FLARE code, which is customer
upgradeable, the code on the EMC Symmetrix/DMX is upgraded through EMC only. This code sits on the
service processor but is also loaded on all the directors during installation and upgrades. The directors are
also loaded with the BIN file (the configuration of the Symmetrix) along with the emulation code. The initial
Enginuity code load and BIN file setup are performed when the customer first purchases the machine and are
customized based on their SAN environment. As new Enginuity code releases reach the market, customers can
get the upgrades from EMC. It is common for customers to go through multiple code upgrades during the 3- to
5-year life cycle of these machines. The service processor houses the code, but the Symmetrix/DMX can be
rebooted or can be fully functional without the service processor present. The service processor allows an
EMC-trained and qualified engineer to perform diagnostics and enables the call-home feature for proactive fixes
and failures. For any host-related configuration changes, the presence of this service processor, including the
EMC SymmWin software, is absolutely necessary. Without it, it becomes impossible to obtain configuration locks
on the machine through ECC or Symcli, restricting customer BIN file changes for reconfiguration.
VCE Software Upgrade Service for Vblock Systems provides customers the option to order software
update services for Vblock Systems components to maintain current supported levels. The service
minimizes implementation effort and risk while providing assessment, planning, and execution of upgrade
activities. By using this service, customers can focus on business priorities instead of their infrastructure.
The VCE Release Certification Matrix (RCM) is published semiannually to document software versions that
have been fully tested and verified for Vblock Systems. While customers can opt to install their own updates,
choosing VCE Software Upgrade Service expedites installation and can reduce the risk associated with a
multisystem upgrade. The RCM defines the specific hardware component and software version
combinations that are tested and certified by VCE.
Storage Management Applications
EMC Secure Remote Support (ESRS) is a proactive remote support capability for EMC systems that is
secure, high-speed, and operates 24x7. The EMC Secure Remote Support IP Solution (ESRS IP) is an IP-based
automated connect home and remote support solution enhanced by a comprehensive security
system. ESRS IP creates both a unified architecture and a common point of access for remote support
activities performed on EMC products.
The ESRS installation is performed after the Deployment and Integration (D&I) team has completed the
necessary tasks of the D&I process.
After powering up the Vblock Systems and configuring and validating storage, network, and virtualization, VCE
Professional Services launches and completes the ESRS provisioning tool to finalize the installation (proxy
information, EMC secure ID information, site ID number, site information, and location).
PowerPath Deployment
EMC PowerPath Multipathing automatically tunes the storage area network (SAN) and selects alternate paths for
data, if necessary. Residing on the server, PowerPath Multipathing enhances SAN performance and application
availability. It also integrates multiple-path I/O capabilities, automatic load balancing, and path failover functions for
complete path management.
PowerPath is easy to install:
After obtaining and downloading the software, PowerPath is installed on the VMware ESXi hosts. This can be done
either by using the local CLI or by using vCenter Update Manager (VUM) after the EMC PowerPath remote tools are
installed. An EMC PowerPath license server must be installed and configured. PowerPath must be registered and
licensed on the VMware ESXi hosts in order to install PowerPath Viewer.
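As a sketch of what a scripted installation might look like, the snippet below pushes a PowerPath/VE offline bundle to an ESXi host over SSH using the generic esxcli VIB installation command. The host name, credentials, and depot path are placeholders, the bundle file name varies by PowerPath release, and the EMC installation guide remains the authoritative procedure.

    # Illustrative sketch: install a PowerPath/VE offline bundle on an ESXi
    # host with 'esxcli software vib install'. Host, credentials, and the
    # depot path are placeholders.
    import paramiko

    def install_powerpath(host, user, password, depot_path):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        # A host reboot is normally required after the VIB is installed.
        cmd = "esxcli software vib install -d {}".format(depot_path)
        stdin, stdout, stderr = client.exec_command(cmd)
        output = stdout.read().decode()
        client.close()
        return output

    # Hypothetical usage:
    # install_powerpath("esxi01.example.local", "root", "secret",
    #                   "/vmfs/volumes/datastore1/PowerPath-VE-offline-bundle.zip")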
Storage Management Attributes
There are storage management utilities that must be installed and validated for the Vblock Systems
to run efficiently.
Configuring an SNMP server allows the monitoring of Cisco UCS Manager and the ability to receive SNMP traps.
VCE recommends the use of an SNMP server to aid in reporting, alerting, monitoring, and troubleshooting.
SNMPv3 is recommended as the most secure option when using the SNMP protocol.
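To illustrate the kind of authenticated, encrypted polling that SNMPv3 enables, here is a minimal sketch using the open-source pysnmp library. The target host, user name, and passphrases are placeholders and are not taken from any Vblock Systems configuration.

    # Minimal SNMPv3 GET sketch with pysnmp (placeholder target and credentials).
    from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd,
                              usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

    iterator = getCmd(
        SnmpEngine(),
        UsmUserData('monitorUser', 'authPassphrase', 'privPassphrase',
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(('ucsm.example.local', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))

    errorIndication, errorStatus, errorIndex, varBinds = next(iterator)
    if errorIndication:
        print(errorIndication)
    else:
        for varBind in varBinds:
            print(varBind)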
Verify that the DNS record of the new host is forward and reverse resolvable. Also verify that an array name exists in
DNS and that the storage array is able to reach the DNS server. Ensure that DNS entries and time sync are properly configured.
Verify that the time, date, and time zone of fabric interconnects are identical for a cluster.
VCE recommends using a central time source such as network time protocol (NTP). The storage array must be
able to reach it.
The following is a brief list of the steps required to configure a VNXe for the Vblock System 300 family.
1. Obtain an EMC VNXe license file.
2. Install the EMC VNXe Connection Utility.
3. Access the management GUI and perform initial configuration.
4. Update the storage array to the latest service pack.
5. Configure the NFS server.
6. Configure LACP Ethernet port aggregation and MTU.
7. Create the NFS file system and share.
8. Build the hot-spare disk to protect from hard disk failure.
9. Create an administrative user account.
10. Enable SSH on the storage device.
Provisioning
Pool Management
Storage pools, as the name implies, allow the storage administrator to create pools of storage. In some cases, you
could even create one big pool with all of the disks in the array, which can greatly simplify management.
Along with this comes a complementary technology called FAST VP, which allows you to place multiple disk tiers
into a storage pool and lets the array move data blocks to the appropriate tier, as needed, based on
performance. Simply assign storage from this pool as needed, in a dynamic, flexible fashion, and let the
array handle the rest via autotiering.
VCE recommends that you use drives in multiples of RAID group sizes. Only create new virtual provisioning (VP)
pools if needed to support new application requirements.
Bare metal hosts require separate disk drives. The boot LUNs and data LUNs for bare metal cannot
coexist in the same disk groups or VP pools used for VMware vSphere ESXi boot and VMware vSphere
ESXi data volumes.
The initial configuration of the Vblock Systems components is done using the Element Managers, which includes
initialization and specific settings required for UIM discovery.
Element Managers are also used when expanding resources. For example, adding a new chassis to the UCS
environment requires a discovery from UCS Manager, and adding storage to an array requires using Unisphere or
SMC to create new storage or add the disks to an existing storage pool. Troubleshooting tasks will generally
require the use of the Element Managers to examine logs, make configuration changes, etc.
Unisphere is used to change the storage configuration of the array. This includes adding physical disks/capacity to
the array, adding or deleting storage pools or RAID groups, or adding capacity to an existing storage pool. Overall
system settings and replication are also managed using Unisphere.
From the file/NAS perspective, all functionality must be managed using Unisphere, as UIM does not currently
support file provisioning. As a result, network interfaces, file systems, and exports/shares will be managed using
Unisphere.
LUNs
LUNs come in two basic categories: Traditional and Pool. Pool LUNs can be either Thin or Thick. Traditional
LUNs have been the standard for many years. When a LUN is created, a number of disks that corresponds to the
desired RAID type are utilized to create the LUN. Traditional LUNs offer fixed performance based on the RAID
and disk type, and are still a good choice for storage that does not have large growth. For Vblock Systems, the
ESXi boot disks are created as Traditional LUNs in a RAID-5 configuration, using 5 disks.
Pool LUNs utilize a larger grouping of disks to create LUNs. While the physical disks that comprise the pool
members are configured with a RAID mechanism, when the LUN is created using a Pool, the LUN is built across
all of the disks in the pool. This allows larger LUNs to be created without sacrificing availability, but it introduces
some variability in performance, as it is more likely that a larger number of applications will share storage in a pool.
In general, the principles of what type of storage to utilize are the same on both storage arrays: Avoid using
traditional LUNs or Standard Devices except when absolutely necessary. They are restrictive in terms of
growth and support of advanced array features. For data, try to use pools/thin pools. This allows you to use
FAST VP to optimize access times and storage costs. In addition, data is striped across multiple RAID
groups/devices in a pool, which improves performance. Thin LUNs/devices make maximum use of
your storage capacity by not allocating storage up front and having it sit idle. Use thick LUNs/preallocated thin
devices only when required by an application.
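As an illustration of how a thin pool LUN might be created from the command line, the sketch below simply assembles and runs a naviseccli invocation. The SP address, pool name, LUN name, and size are placeholders, and the exact option spellings should be verified against the VNX CLI reference for your release.

    # Illustrative only: build and run a naviseccli command that creates a
    # thin pool LUN. All values and option names below are assumptions to
    # be checked against the VNX CLI reference.
    import subprocess

    def create_thin_lun(sp_ip, pool_name, lun_name, size_gb):
        cmd = ["naviseccli", "-h", sp_ip,
               "lun", "-create",
               "-type", "Thin",
               "-capacity", str(size_gb), "-sq", "gb",
               "-poolName", pool_name,
               "-name", lun_name]
        return subprocess.run(cmd, capture_output=True, text=True)

    # Hypothetical usage:
    # result = create_thin_lun("10.0.0.10", "Pool 0", "vmfs_data_01", 500)
    # print(result.stdout or result.stderr)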
To mask storage, UIM/P creates one storage group per server on a VNX array. The boot LUN and any common
data LUNs are put in each server's storage group. On a VMAX array, UIM/P creates one masking view per server
that holds only the boot LUN. It creates one additional masking view that contains only the shared data LUNs.
This difference is a result of the different masking rules of each array type.
When creating file storage, UIM/P exports the file system to the NFS IP addresses of the servers. Recall that an
IP pool was created to supply the service with those addresses.
NFS/CIFS
The Data Mover Enclosure (DME) is physically similar to the Storage Processor Enclosure, except that instead of
housing Storage Processors, it houses Data Movers, which are also called X-Blades. The Data Movers provide
access to the exported NFS or CIFS file systems.
Each Data Mover has one Fibre Channel I/O module installed. Two ports are used for connectivity to the Storage
Processors and two ports can be used for tape drive connectivity, allowing backup directly from the Data Mover.
In addition to the single Fibre Channel I/O Module, two 10-Gigabit Ethernet modules are installed in each Data
Mover. These Ethernet modules provide connectivity to the LAN for file sharing.
A maximum transmission unit (MTU) is the largest size packet or frame, specified in octets (eight-bit bytes) that
can be sent in a packet- or frame-based network such as the Internet. The Internet's Transmission Control
Protocol (TCP) uses the MTU to determine the maximum size of each packet in any transmission. Too large an
MTU size may mean retransmissions if the packet encounters a router that can't handle that large a packet. Too
small an MTU size means relatively more header overhead and more acknowledgements that have to be sent
and handled. Most computer operating systems provide a default MTU value that is suitable for most users. In
general, Internet users should follow the advice of their Internet service provider (ISP) about whether to change
the default value and what to change it to. VCE recommends an MTU size of 9000 (jumbo frames) for unified
storage and 1500 for block storage.
Ethernet frames with a maximum transmission unit (MTU) ranging from more than 1500 bytes to 9000 bytes are
known as Jumbo frames. Use jumbo frames only when the frame to or from any combination of Ethernet devices
can be handled without any Layer 2 fragmentation or reassembly.
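A quick way to see why jumbo frames reduce relative overhead is to compare the fixed per-frame header cost at the two MTU sizes mentioned above. The sketch below uses typical Ethernet, IPv4, and TCP header sizes and is only a back-of-the-envelope calculation.

    # Back-of-the-envelope comparison of per-frame protocol overhead for a
    # standard 1500-byte MTU versus a 9000-byte jumbo-frame MTU.
    ETH_OVERHEAD = 18     # Ethernet header + FCS (no VLAN tag)
    IP_TCP_HEADERS = 40   # 20-byte IPv4 header + 20-byte TCP header

    def overhead_percent(mtu):
        payload = mtu - IP_TCP_HEADERS      # TCP payload carried per frame
        wire_bytes = mtu + ETH_OVERHEAD     # bytes actually on the wire
        return 100.0 * (wire_bytes - payload) / wire_bytes

    for mtu in (1500, 9000):
        print("MTU %5d: ~%.1f%% protocol overhead" % (mtu, overhead_percent(mtu)))
    # A 1500-byte MTU carries roughly six times the relative header
    # overhead of a 9000-byte MTU for large transfers.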
The VNX has incorporated the Common Internet File System (CIFS) Protocol as an open standard for network file
service. CIFS is a file access protocol designed for the Internet and is based on the Server Message Block (SMB)
protocol that the Microsoft Windows operating system uses for distributed file sharing. The CIFS protocol lets
remote users access file systems over the network. CIFS is configured on a Data Mover using the command line
interface (CLI). You can also perform many of these tasks using one of the VNX management applications: EMC
Unisphere, Microsoft Management Console (MMC) snap-ins, or Active Directory Users and Computers (ADUC).
The CIFS server created on a physical Data Mover with no interface specified becomes the default server. It is
automatically associated with all unused network interfaces on the Data Mover and any new interfaces that you
subsequently create. If you create additional CIFS servers on the Data Mover, you must specify one or more
network interfaces with which to associate the server. You can reassign network interfaces to other CIFS servers
on the Data Mover as you create them, or later as required. The default CIFS server behavior is useful if you plan
to create only one CIFS server on the Data Mover. It is recommended that you always explicitly associate network
interfaces with the first CIFS server created on a Data Mover. This practice makes it easier to create more
network interfaces in the future and avoids having CIFS traffic flow onto networks for which it is not intended. The
default CIFS server cannot be created on a Data Mover having a loaded Virtual Data Mover (VDM).
You can use network interfaces to access more than one Windows domain by creating multiple network
interfaces, each associated with a different domain, and assigning a different CIFS server to each interface.
VDM
A VDM is a software feature that allows administrative separation and replication of CIFS environments. A VDM
houses a group of CIFS servers and their shares. A VDM looks like a computer on the Windows network. It has its
own event log, local user and group database, CIFS servers and shares, and usermapper cache, which apply
when NFS and CIFS are used to access the same file system on the same VNX. EMC
recommends that you create CIFS servers in VDMs. This provides separation of the CIFS server user and group
databases, CIFS auditing settings, and event logs. This is required if the CIFS server and its associated file
systems are ever to be replicated using VNX Replicator. An exception to creating a CIFS server in a VDM is the
CIFS server that is used to route antivirus activity.
VDMs apply only to CIFS shares. You cannot use VDMs with NFS exports. Each VDM has access only to the file
systems mounted to that VDM. This provides a logical isolation between the VDM and the CIFS servers that it
contains. A VDM can be moved from one physical Data Mover to another.
Each VDM stores its configuration information in a VDM root file system, which is a directory within the root file
system of the physical Data Mover. No data file systems are stored within the VDM root file system. All user data
is kept in user file systems.
The default size of the root file system is 128 MB. In an environment with a large number of users or shares, you
might need to increase the size of the root file system. You cannot extend the root file system automatically.
Zoning Principles
Zoning enables you to set up access control between storage devices or user groups. If you have administrator
privileges in your fabric, you can create zones to increase network security and to prevent data loss or corruption.
Zoning is enforced by examining the source-destination ID field. Zoning is a fabric-based service in storage area
networks that groups host and storage nodes that need to communicate.
Best Practices
Ports on the storage arrays are reserved for add-on components and features such as RecoverPoint, data
migration, VG Gateways (Vblock System 700 family only), and SAN backup. Do not use these array front-end
ports for VPLEX or direct host access.
EMC/VCE recommends single-initiator zoning. Each HBA port should be configured on its switch with a separate
zone that contains the HBA and the SP ports with which it communicates.
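To make the single-initiator recommendation concrete, the following sketch generates one zone per HBA port, each containing that HBA's WWPN plus the SP ports it communicates with. The WWPNs, zone names, and VSAN number are made-up examples, and the printed lines only approximate Cisco MDS zone configuration syntax.

    # Illustrative single-initiator zoning: one zone per HBA WWPN, each zone
    # holding that initiator plus its target SP ports. All WWPNs, names, and
    # the VSAN number are placeholders.
    hbas = {
        "esx01_vmhba1": "20:00:00:25:b5:01:0a:01",
        "esx01_vmhba2": "20:00:00:25:b5:01:0b:01",
    }
    sp_ports = {
        "esx01_vmhba1": ["50:06:01:60:3e:a0:12:34"],   # fabric A targets
        "esx01_vmhba2": ["50:06:01:68:3e:a0:12:34"],   # fabric B targets
    }
    VSAN = 10

    for hba_name, hba_wwpn in hbas.items():
        print("zone name z_{} vsan {}".format(hba_name, VSAN))
        print("  member pwwn {}".format(hba_wwpn))
        for target_wwpn in sp_ports[hba_name]:
            print("  member pwwn {}".format(target_wwpn))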
Connect the host and storage ports in such a way as to prevent a single point of failure from affecting redundant
paths. If you have a dual-attached host and each HBA accesses its storage through a different storage port, do
not place both storage ports for the same server on the same Line Card or ASIC.
Always use two power sources.
While planning the SAN, keep track of how many host and storage pairs will be utilizing the ISLs between
domains. As a general best practice, if two switches have to be connected by ISLs, ensure that there is a
minimum of two ISLs between them and that there are no more than six initiator and target pairs per ISL.
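That guidance translates into a simple sizing rule of thumb: plan at least two ISLs between any pair of connected switches, and at least one ISL for every six initiator and target pairs crossing them. The sketch below applies that rule; it is a planning aid only.

    import math

    def isls_required(initiator_target_pairs, max_pairs_per_isl=6, min_isls=2):
        # Apply the rule of thumb above: a minimum of two ISLs between two
        # switches, and no more than six initiator/target pairs per ISL.
        needed_for_load = math.ceil(initiator_target_pairs / max_pairs_per_isl)
        return max(min_isls, needed_for_load)

    print(isls_required(10))   # 2 ISLs are enough for 10 pairs (5 per ISL)
    print(isls_required(20))   # 4 ISLs are needed for 20 pairs (5 per ISL)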
There can be a maximum of three tiers in a heterogeneous storage pool based on the drive types available.
Heterogeneous pools can also have two drive types and still leverage FAST VP. FAST VP does not
differentiate tiers by the drive speeds. Users should ensure that they choose drives with common rotation
speeds when building pools.
Troubleshooting
Connectivity
Troubleshooting is a form of problem solving, often applied to repair failed products or processes. It is a logical,
systematic search for the source of a problem so that it can be solved, and the product or process can be made
operational again. Troubleshooting is needed to develop and maintain complex systems where the symptoms of a
problem can have many possible causes. Troubleshooting is used in many fields such as engineering, system
administration, electronics, automotive repair, and diagnostic medicine. Troubleshooting requires identification of
the malfunction(s) or symptoms within a system. Then, experience is commonly used to generate possible causes
of the symptoms. Determining the most likely cause is a process of elimination: eliminating potential causes of a
problem. Finally, troubleshooting requires confirmation that the solution restores the product or process to its
working state.
In general, troubleshooting is the identification of, or diagnosis of, "trouble" in the management flow of a
corporation or a system caused by a failure of some kind. The problem is initially described as symptoms of
malfunction, and troubleshooting is the process of determining and remedying the causes of these symptoms.
A system can be described in terms of its expected, desired, or intended behavior (usually, for artificial systems,
its purpose). Events or inputs to the system are expected to generate specific results or outputs. (For example
selecting the "print" option from various computer applications is intended to result in a hardcopy emerging from
some specific device.) Any unexpected or undesirable behavior is a symptom. Troubleshooting is the process of
isolating the specific cause or causes of the symptom.
Effective troubleshooting steps:
Define normal behavior: What did previous metrics look like?
Identify the problem.
Collect information that will lead to identifying the root cause.
Check and collect switch and storage array information and configuration from log files.
Print or capture exact error codes.
Use array management software to monitor the health of the system.
Check power on the system.
Open ticket with support.
Fault Detection
Storage administrators are constantly trying to maximize performance. When an application starts performing
poorly and happens to rely on a SAN, administrators struggle to isolate the performance bottleneck. Performance
issues can be the result of anything from a misconfigured component to subtle interactions between various parts
of the SAN. When performance issues occur, administrators can rely on Sentry Software monitoring solutions to
check for signs of physical faults, such as I/O retries, suspect data movement, and poor response times.
The Symmetrix DMX has unique methods to proactively detect and prevent failures. The Symmetrix is designed
with the following data integrity features:
Error checking, correction, and data integrity protection.
Disk error correction and error verification.
Cache error correction and error verification.
Periodic system checks.
Through the service processor, the Enginuity storage operating environment proactively monitors all end-to-end
I/O operations for errors and faults. By tracking these errors during normal operation, Enginuity can recognize
patterns of error activity and predict a potential hard failure. This proactive error tracking capability can detect
and/or remove from service a suspect component before a failure occurs.
Health Check
A basic health check of the Celerra or VNX arrays is always the best place to start. This check is an
environmental one that tests all the power supplies, fans, management switches, and similar components contained
in the Vblock Systems.
Note that the EMC Celerra and VNX arrays have redundant devices, so the failure of a single power supply or
management switch is not a concern because another unit remains available. After running the necessary
commands, EMC/VCE recommends using Unisphere (if on a VNXe) and making use of its dashboard widgets and
logs page, which monitor and report a variety of events on the VNXe system. Events are written to the user log,
and the interface shows an icon indicating the label for each event, ranging from Informational to Critical.
Due to the various components and management options in Vblock Systems, VCE recommends reviewing the
documentation for a specific model of Vblock Systems and its associated components.
A real-time check on the connectivity, management, and storage component status of your VNX system is
recommended. The health check verifies network connectivity, management service status, storage processor
status, hot spare status, disk faults, disk status, blade status, and hardware component faults. Unisphere Service
Manager (USM) is used for this purpose.
Storage Processor/Control Station IP Address
The Control Station is a 1U self-contained server and is used in unified, file, and gateway configurations. The
Control Station provides administrative access to the blades. It also monitors the blades and facilitates failover in
the event of a blade runtime issue.
The Control Station provides network communication to each storage processor. It uses Proxy ARP
technology to enable communication over the Control Station management Ethernet port to each storage
processor's own IP address.
An optional secondary Control Station is available that acts as a standby unit to provide redundancy for the
primary Control Station.
Unisphere is completely web-enabled for remote management of your storage environment. Unisphere
Management Server runs on the SPs and the Control Station. Launch this Unisphere server by pointing the
browser to the IP address of either SP or the Control Station. The Navisphere CLI (navicli/naviseccli) can also be used.
Enter the public IP addresses for the storage processors (SP) you want to use to communicate with the VNX from
the public network. These IP addresses must be on the same subnet as the Control Station IP address and
should not be in use by any other host on the network.
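A quick way to validate that requirement before submitting the change is to check each candidate SP address against the Control Station's network, as in this sketch. The addresses and prefix length are placeholders.

    # Sanity check that the SP management IPs fall in the same subnet as the
    # Control Station. The addresses and prefix length are placeholders.
    import ipaddress

    control_station_net = ipaddress.ip_network("192.168.10.0/24")
    sp_addresses = {"SPA": "192.168.10.21", "SPB": "192.168.10.22"}

    for sp, addr in sp_addresses.items():
        ok = ipaddress.ip_address(addr) in control_station_net
        print("{} {} -> {}".format(sp, addr, "OK" if ok else "WRONG SUBNET"))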
Storage Reconfiguration And Replacement Concepts
The Vblock Systems are designed to be powered on continuously. Most components are hot swappable. This
means that you can replace or install these components while the storage system is running. Front bezels should
always be attached and each compartment should contain a Field Replaceable Unit (FRU) or filler panel to ensure
EMI compliance and proper airflow over the FRUs. You should not remove a faulty FRU until you have a
replacement available. When you replace or install FRUs, you can inadvertently damage the sensitive electronic
circuits in the equipment by simply touching them. Electrostatic discharge (ESD) occurs when charge that has
accumulated on your body discharges through the circuits. If the air in the work area is very dry, running a humidifier in the work area will help
decrease the risk of ESD damage.
Whether reconfiguring an IP address to a control station or a storage processor, or replacing a failed storage
component, a clearly-defined and documented procedure should be followed. This includes both planned and
unplanned activities. Depending on which activity is being performed and which tool is used to perform the task,
you should be familiar with all cautions and warnings.
Most procedures should ideally be done during off-peak hours or a maintenance window. This approach provides
additional safety in case any problems arise during an IP address reassignment.
Some procedures are done with active hosts attached to the array to ensure that there are no single-HBA-attached hosts and that failover software is configured on each multi-HBA-attached host. SPs will reboot each
time the IP address change is submitted. Wait to change the other SP's IP address until the first SP has rebooted.
Vblock Systems have been architected with a base and upgrade approach. Base configurations are those
configurations that represent an entry point to Vblock Systems. A Vblock Systems base begins with racks,
in-rack PDUs, cabling, patch panels, aggregate SAN and Ethernet switches, enough UCS hardware to
support a full rack of servers, and storage configuration with a small amount of initial storage in a discrete
configuration. Upgrades of blade packs and disk tier groups extend the base to achieve a particular
performance or scalability goal. This architecture provides customers with significant scalability and flexibility
to meet business requirements. Due to the many components in the Vblock Systems, it is recommended to
visit VCE and EMC technical publication websites for the most current information on any procedure.
Documentation and procedures are updated frequently.
Conclusion
This study guide represents a subset of all of the tasks, configuration parameters, and features that are part of a
Vblock Systems deployment and implementation. This study guide focuses on deploying the EMC storage array in a
VCE Vblock Systems converged infrastructure, including how to configure and manage the data storage array on
Vblock Systems.
Exam candidates with the related recommended prerequisite working knowledge, experience, and training should
thoroughly review this study guide and the resources in the References document (available on the VCE
Certification website) to help them successfully complete the VCE Vblock Systems Deployment and
Implementation: Storage Exam.
ABOUT VCE
VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of
converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while
improving time to market for our customers. VCE, through the Vblock Systems, delivers the industry's
only fully integrated and fully virtualized cloud infrastructure system. VCE solutions are available
through an extensive partner network, and cover horizontal applications, vertical industry offerings, and
application development environments, allowing customers to focus on business innovation instead of
integrating, validating, and managing IT infrastructure.
For more information, go to vce.com.
Copyright 2014 VCE Company, LLC. All rights reserved. VCE, VCE Vision, Vblock, and the VCE logo are registered trademarks or
trademarks of VCE Company LLC or its affiliates in the United States and/or other countries. All other trademarks used herein are the property
of their respective owners.