Table Of Contents
Obtaining The VCE-CIIEs Certification Credential ........................................... 4
VCE Vblock Systems Deployment And Implementation: Storage Exam ...................................... 4
Recommended Prerequisites .............................................................................................................. 4
VCE Exam Preparation Resources ................................................................................................. 4
VCE Certification Web Site ............................................................................................................. 4
About This Study Guide ..................................................................................... 6
Vblock Systems Overview ................................................................................. 6
Vblock Systems Architecture ........................................................................................................... 7
VMware vSphere Architecture ......................................................................................................... 9
VMware vSphere Components ....................................................................................................... 9
Recommended Prerequisites
There are no required prerequisites for taking the VCE Vblock Systems Deployment and Implementation: Storage
Exam. However, exam candidates should have a working knowledge of EMC data storage solutions obtained
through formal ILT training and a minimum of one year of experience. It is also highly recommended that exam
candidates have training, knowledge, and/or working experience with industry-standard x86 servers and operating
systems.
Resources by audience:
Customer: http://support.vce.com/
VCE partner: www.vcepartnerportal.com
VCE employee: www.vceview.com/solutions/products/ and www.vceportal.com/solutions/68580567.html
Note: The websites listed above require some form of authentication using a username/badge and password.
The diagram below provides a sample view of the Vblock Systems architecture. The Vblock System 720
is shown in this example.
The following diagram provides a concise view of vCenter and its related components:
Vblock Systems support PowerPath and native VMware multipathing to manage storage I/O connections. ESXi
uses a pluggable storage architecture in the VMkernel and is delivered with I/O multipathing software referred to as
the Native Multipathing Plugin (NMP), an extensible module that manages sub plug-ins. VMware provides built-in
sub plug-ins, but they can also come from third parties. NMP sub plug-ins are of two types: Storage Array Type
Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). PSPs are responsible for choosing a physical path for I/O
requests. The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the
physical paths for that device, but you can override the default.
The VMware NMP supports the following PSPs:
Most Recently Used (MRU): the host selects the path it used most recently. When that path becomes
unavailable, the host selects an alternative path, and it does not revert to the original path when that path
becomes available again. There is no preferred-path setting with the MRU policy. MRU is the default policy for
most active-passive storage devices, and VMware vSphere displays the state as the Most Recently Used (VMware)
path-selection policy.
Fixed: the host uses a designated preferred path. Otherwise, it selects the first working path discovered at
boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default
policy for most active-active storage devices. VMware vSphere displays the state as the Fixed (VMware)
path-selection policy. Note that if a default preferred path's status turns to dead, the host selects a new
preferred path; however, manually designated preferred paths remain preferred even when they become
inaccessible.
Round Robin (RR): the host uses an automatic path-selection algorithm. For active-passive arrays, it
rotates through all active paths; for active-active arrays, it rotates through all available paths. RR is the
default for a number of arrays and can implement load balancing across paths for different LUNs. VMware
vSphere displays the state as the Round Robin (VMware) path-selection policy.
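The differences among the three policies can be sketched as a toy path selector. This is a simplified model with hypothetical path names; the real PSPs run inside the ESXi VMkernel.

```python
# Toy model of the three VMware NMP path selection policies (illustrative
# sketch only; the real PSPs are VMkernel modules, not Python functions).

def select_path_mru(current, available):
    """MRU: keep using the most recently used path; only switch when it
    fails, and do not fail back when the old path returns."""
    return current if current in available else available[0]

def select_path_fixed(preferred, boot_path, available):
    """Fixed: use the designated preferred path if it is up, otherwise fall
    back to a working path (here, the one discovered at boot)."""
    if preferred in available:
        return preferred
    return boot_path if boot_path in available else available[0]

def select_path_rr(counter, available):
    """Round Robin: rotate through the available paths per request."""
    return available[counter % len(available)]

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]  # hypothetical runtime names

# MRU stays on vmhba1 while it is up, moves to vmhba2 on failure,
# and stays there even after vmhba1 comes back (no failback).
assert select_path_mru("vmhba1:C0:T0:L0", paths) == "vmhba1:C0:T0:L0"
after_failure = select_path_mru("vmhba1:C0:T0:L0", ["vmhba2:C0:T0:L0"])
assert after_failure == "vmhba2:C0:T0:L0"
assert select_path_mru(after_failure, paths) == "vmhba2:C0:T0:L0"

# Fixed reverts to the preferred path as soon as it is available again.
assert select_path_fixed("vmhba1:C0:T0:L0", "vmhba1:C0:T0:L0", paths) == "vmhba1:C0:T0:L0"

# Round Robin alternates across both paths.
assert [select_path_rr(i, paths) for i in range(4)] == paths * 2
```

The no-failback behavior of MRU versus the failback behavior of Fixed is the key contrast the exam tends to probe.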
PowerPath/VE is host multipathing software optimized for virtual host environments. It provides I/O path
optimization, path failover, and I/O load balancing across virtual host bus adapters (HBAs). PowerPath/VE installs
as a virtual appliance from an OVF file, and the PowerPath/VE license management server installs on the AMP.
In addition, vSphere offers an enhanced vMotion compatibility (EVC) feature that allows live VM migration
between hosts with different CPU capabilities. This is useful when upgrading server hardware, particularly if the
new hardware contains a new CPU type or manufacturer. These enhanced clusters are a distinct cluster type.
Existing hosts can function as EVC hosts only after being migrated into a new, empty EVC cluster.
Storage systems need maintenance too, and Storage vMotion allows VM files to migrate from one shared storage
system to another with no downtime or service disruption. It is also effective when performing migrations to
different tiers of storage.
Performing any vMotion operation requires permissions associated with Data Center Administrator, Resource
Pool Administrator, or Virtual Machine Administrator.
The VEM runs as part of the ESXi kernel and uses the VMware vNetwork Distributed Switch (vDS) API, which
was developed jointly by Cisco and VMware for virtual machine networking. The integration is tight; it ensures that
the Nexus 1000V is fully aware of all server virtualization events such as vMotion and Distributed Resource
Scheduler (DRS). The VEM takes configuration information from the VSM and performs Layer-2 switching and
advanced networking functions.
If the communication between the VSM and the VEM is interrupted, the VEM has Nonstop Forwarding (NSF)
capability to continue to switch traffic based on the last known configuration. You can use the vSphere Update
Manager (VUM) to install the VEM, or you can install it manually using the CLI.
The VSM controls multiple VEMs as one logical switch module. Instead of multiple physical line-card modules, the
VSM supports multiple VEMs that run inside the physical servers. Initial virtual-switch configuration occurs in the
VSM, which automatically propagates to the VEMs. Instead of configuring soft switches inside the hypervisor on a
host-by-host basis, administrators can use a single interface to define configurations for immediate use on all
VEMs managed by the VSM. The Nexus 1000V provides synchronized, redundant VSMs for high availability.
You have a few interface options for configuring the Nexus 1000V virtual switch: standard SNMP and XML, as well
as the Cisco CLI and Cisco LAN Management Solution (LMS). The Nexus 1000V is compatible with all the vSwitch
management tools, and the VSM also integrates with VMware vCenter Server so that the virtualization
administrator can manage the network configuration in the Cisco Nexus 1000V switch.
The VMware virtual switch directs network traffic to one of two distinct destinations: the VMkernel and the VM
network. VMkernel traffic carries services such as Fault Tolerance, vMotion, and NFS. The VM network allows
hosted VMs to connect to the virtual and physical network.
Standard vSwitches exist at each (ESXi) server and can be configured either from the vCenter Server or directly
on the host. Distributed vSwitches exist at the vCenter Server level, where they are managed and configured.
Several factors govern choice of adapter, generally either host compatibility requirements or application
requirements. Virtual network adapters install into ESXi and emulate a variety of physical Ethernet and Fibre
Channel NICs. (Refer to the Vblock Systems Architecture Guides for network hardware details and
supported topologies.)
Zoning
Zoning is a procedure that takes place on the SAN fabric and ensures that devices can communicate only with
the devices they need to. Masking takes place on storage arrays and ensures that only particular World Wide
Names (WWNs) can communicate with LUNs on that array.
There are two distinct methods of zoning that can be applied to a SAN: World Wide Name zoning and port
zoning.

WWN zoning groups a number of WWNs in a zone and allows them to communicate with each other. The
switch port that each device is connected to is irrelevant when WWN zoning is configured. One advantage of
this type of zoning is that when a port is suspected to be faulty, a device can be connected to another port
without the need for fabric reconfiguration. A disadvantage is that if an HBA fails in a server, the fabric will
need to be reconfigured for the host to reattach to its storage. WWN zoning is also sometimes called
'soft zoning.'

Port zoning groups particular ports on a switch or a number of switches together, allowing any device
connected to those ports to communicate with each other. An advantage of port zoning is that you don't need
to reconfigure a zone when an HBA is changed. A disadvantage is that any device can be attached into the
zone and communicate with any device in the zone.

There are EMC recommendations for zoning specific to the Vblock Systems you are configuring.
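The trade-off between the two zoning methods can be illustrated with a small sketch. The WWPNs and switch ports below are hypothetical; actual zoning is configured on the fabric switches.

```python
# Illustrative sketch of WWN ("soft") zoning vs. port ("hard") zoning.
# All identifiers are made-up examples, not values from a real fabric.

wwn_zone = {                         # soft zoning: membership follows the WWN
    "10:00:00:00:c9:aa:bb:01",       # hypothetical host HBA WWPN
    "50:06:01:60:3c:e0:11:22",       # hypothetical array front-end port WWPN
}

port_zone = {("switch1", 1), ("switch1", 5)}  # hard zoning: (switch, port)

def wwn_zoned_together(wwn_a, wwn_b, zone):
    """WWN zoning: the switch port is irrelevant, so moving a device to
    another port needs no rezoning, but a replacement HBA (new WWPN) does."""
    return wwn_a in zone and wwn_b in zone

def port_zoned_together(port_a, port_b, zone):
    """Port zoning: anything plugged into a zoned port can reach the others,
    so HBA replacement needs no rezoning, at the cost of weaker control."""
    return port_a in zone and port_b in zone

# The host can move to any switch port; the WWN zone still works.
assert wwn_zoned_together("10:00:00:00:c9:aa:bb:01",
                          "50:06:01:60:3c:e0:11:22", wwn_zone)
# A replacement HBA brings a new WWPN, which is not in the zone.
assert not wwn_zoned_together("10:00:00:00:c9:aa:bb:99",
                              "50:06:01:60:3c:e0:11:22", wwn_zone)
# Port zoning: whatever occupies port 1 can reach port 5.
assert port_zoned_together(("switch1", 1), ("switch1", 5), port_zone)
```

The two `assert` failure cases mirror the advantages and disadvantages listed above: each method avoids rezoning in exactly the scenario where the other requires it.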
VSAN
A VSAN is a virtual storage area network (SAN). A SAN is a dedicated network that interconnects hosts and
storage devices, primarily to exchange SCSI traffic. In SANs you use physical links to make these
interconnections; VSANs are a logical segmentation of a physical SAN. Sets of protocols run over the SAN to
handle routing, naming, and zoning. You can design multiple SANs with different topologies.
A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic to that external SAN,
including broadcast traffic. The traffic on one named VSAN knows that the traffic on another named VSAN exists,
but cannot read or access that traffic.
Like a named VLAN, the name that you assign to a VSAN ID adds a layer of abstraction that allows you to
globally update all servers associated with service profiles that use the named VSAN. You do not need to
reconfigure the servers individually to maintain communication with the external SAN. You can create more than
one named VSAN with the same VSAN ID.
In a cluster configuration, a named VSAN can be configured to be accessible only to the FC uplinks on one fabric
interconnect or to the FC uplinks on both fabric interconnects.
Storage Protocols
Unified storage is a platform with storage capacity connected to a network that provides file-based and block-based
data storage services to other devices on the network. Unified storage uses standard file protocols such as
Common Internet File System (CIFS) and Network File System (NFS) and standard block protocols such as Fibre
Channel (FC) and Internet Small Computer System Interface (iSCSI) to allow users and applications to access
data consolidated on a single device. Block storage uses a standard block protocol such as Fibre Channel (FC).
Unified storage is ideal for organizations with general-purpose servers that use internal or direct-attached storage
for shared file systems, applications, and virtualization. Unified storage replaces file servers and consolidates data
for applications and virtual servers onto a single, efficient, and powerful platform.
Large numbers and various types and release levels of direct-attached storage or internal storage can be
difficult to manage and protect as well as costly due to very low total utilization rates. Unified storage
provides the cost savings and simplicity of consolidating storage over an existing network, the efficiency of
tiered storage, and the flexibility required by virtual server environments. VCE Vblock Systems can support
block or unified storage protocols.
Storage Types
Adding RAID packs can increase storage capacity. Each pack contains a number of drives of a given type, speed,
and capacity. The number of drives in a pack depends upon the RAID level that it supports.
The number and types of RAID packs to include in Vblock Systems are based upon the following:
The number of storage pools that are needed.
The storage tiers that each pool contains, and the speed and capacity of the drives in each tier.
Note: The speed and capacity of all drives within a given tier in a given pool must be the same.
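These sizing rules can be sketched in a few lines. The pack sizes shown are illustrative assumptions (common VNX RAID layouts), not values from a specific Release Certification Matrix.

```python
# Sketch of the RAID pack sizing rules above. The pack sizes are assumed
# examples of common VNX layouts, not authoritative VCE figures.

TYPICAL_PACK_SIZES = {
    "RAID5 (4+1)": 5,    # assumed: 5 drives per pack
    "RAID6 (6+2)": 8,    # assumed: 8 drives per pack
    "RAID1/0 (4+4)": 8,  # assumed: 8 drives per pack
}

def tier_is_valid(drives):
    """All drives within a given tier of a given pool must share the same
    type, speed, and capacity. drives: list of (type, speed_rpm, capacity_gb)."""
    return len(set(drives)) == 1

sas_tier = [("SAS", 10000, 600)] * TYPICAL_PACK_SIZES["RAID5 (4+1)"]
mixed_tier = [("SAS", 10000, 600), ("SAS", 15000, 300)]

assert tier_is_valid(sas_tier)        # homogeneous tier: allowed
assert not tier_is_valid(mixed_tier)  # mixed speed/capacity: not allowed
```

The `tier_is_valid` check encodes the note above: mixing drive speeds or capacities within one tier of one pool is not a supported configuration.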
A WWN pool is a collection of World Wide Names (WWNs) for use by the Fibre Channel vHBAs in a Cisco UCS
instance. Separate pools are created for WWNNs (assigned to a server) and WWPNs (assigned to a vHBA).
Thick/Thin Devices
The primary difference between thin LUNs and Classic or thick LUNs is that thin LUNs can present more storage
to an application than is physically allocated. Presenting storage that is not physically available avoids
underutilizing the storage system's capacity.
Data and LUN metadata are written to thin LUNs in 8 KB chunks. Thin LUNs consume storage on an as-needed
basis from the underlying pool: as new writes come into a thin LUN, more physical space is allocated in
256 MB slices.
Thick LUNs are also available in VNX. Unlike a thin LUN, a thick LUN's capacity is fully reserved and allocated on
creation, so it will never run out of capacity. Users can also better control which tier the slices are initially written to.
For example, while pools are initially being created and there is still sufficient space in the highest tier, users can be
assured that when they create a LUN with Highest Available Tier or Start High, then Auto-Tier, data will be
written to the highest tier, because the LUN is allocated immediately.
Thin LUNs typically have lower performance than thick LUNs because of their indirect addressing; the mapping
overhead for a thick LUN is much lower than for a thin LUN. Thick LUNs also have more predictable performance
because slice allocation is assigned at creation. However, thick LUNs do not provide the oversubscription
flexibility that thin LUNs do, so they should be used for applications where performance is more important than
space savings.
Thick and thin LUNs can share the same pool, allowing them to have the same ease-of-use and benefits of
pool-based provisioning.
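A toy allocator makes the thin/thick contrast concrete. The 256 MB slice size comes from the text above; the pool and LUN classes are illustrative, not a model of the actual VNX allocator.

```python
# Toy model of thin-LUN provisioning from a shared pool. Thin LUNs present
# more capacity than is physically allocated and consume pool space on
# demand, in whole 256 MB slices (slice size from the text above).

SLICE_MB = 256

class Pool:
    def __init__(self, physical_gb):
        self.free_mb = physical_gb * 1024

class ThinLUN:
    def __init__(self, pool, presented_gb):
        self.pool = pool
        self.presented_mb = presented_gb * 1024  # capacity the host sees
        self.written_mb = 0
        self.allocated_mb = 0                    # physical space consumed

    def write(self, mb):
        self.written_mb += mb
        # Allocate whole 256 MB slices to cover the cumulative writes.
        target = -(-self.written_mb // SLICE_MB) * SLICE_MB
        delta = target - self.allocated_mb
        if delta > self.pool.free_mb:
            raise RuntimeError("pool out of space")
        self.pool.free_mb -= delta
        self.allocated_mb = target

pool = Pool(physical_gb=100)
lun_a = ThinLUN(pool, presented_gb=80)
lun_b = ThinLUN(pool, presented_gb=80)

# Oversubscribed: 160 GB presented against 100 GB of physical capacity.
assert lun_a.presented_mb + lun_b.presented_mb > 100 * 1024

lun_a.write(100)   # 100 MB written -> one 256 MB slice allocated
assert lun_a.allocated_mb == 256
lun_a.write(200)   # 300 MB cumulative -> rounds up to two slices
assert lun_a.allocated_mb == 512
```

A thick LUN, by contrast, would subtract its full presented capacity from `pool.free_mb` at creation time, which is why it can never run out of space but cannot be oversubscribed.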
Provisioning Methods
Storage provisioning is the process of assigning storage resources to meet the capacity, availability, and
performance needs of applications. In terms of accessing storage, the virtual disk always appears to the
virtual machine as a mounted SCSI device. The virtual disks within the data store hide the physical storage
layer from the VM.
With traditional block storage provisioning, you create a RAID group with a particular RAID protection level. When
LUNs are bound on the RAID group, the host-reported capacity of the LUNs is equal to the amount of physical
storage capacity allocated. The entire amount of physical storage capacity must be present on day one, resulting
in low levels of utilization, and recovering underutilized space remains a challenge.
Virtual Provisioning utilizes storage pool-based provisioning technology that is designed to save time, increase
efficiency, reduce costs, and improve ease of use. Storage pools can be used to create thick and thin LUNs. Thick
LUNs provide high and predictable performance for your applications, mainly because all of the user capacity is
reserved and allocated upon creation.
F.A.S.T Concepts
EMC Fully Automated Storage Tiering (FAST) cache is a storage performance optimization feature that
provides immediate access to frequently accessed data. FAST cache complements FAST by automatically
absorbing unpredicted spikes in application workloads. FAST cache results in a significant increase in
performance for all read and write workloads.
FAST cache uses enterprise Flash drives to extend existing cache capacities up to 2 terabytes. FAST cache
monitors incoming I/O for access frequency and automatically copies frequently accessed data from the back-end
drives into the cache. FAST cache is simply configured and easy to monitor.
FAST cache accelerates performance to address unexpected workload spikes. FAST and FAST cache are a
powerful combination, unmatched in the industry, that provides optimal performance at the lowest possible cost.
FAST VP automatically optimizes performance in a tiered environment reducing costs, footprint, and
management effort. FAST VP puts the right data in the right place at the right time. FAST VP maximizes
utilization of Flash Drive capacity for high IOPS workloads and maximizes utilization of SATA drives for
capacity intensive applications. FAST VP is a game changing technology that delivers automation and
efficiency to the virtual data center.
SP memory allocations on a VNX change based on how many FAST Cache drives have been purchased. VCE
suggests an 80/20 rule for configuring write/read cache when using FAST Cache.
VCE Recommendations:
For FAST VP implementations, assign 95% of the available pool capacity to File.
Always allocate EFD drives for FAST Cache requirements first, and then assign the remainder for FAST
VP for best performance.
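The cache recommendations above amount to simple arithmetic, sketched here with illustrative capacities (the 10,000 MB and 2,000 GB figures are assumptions, not VCE-specified values):

```python
# Worked example of the VCE recommendations above: an 80/20 write/read
# split of SP cache when FAST Cache is in use, and 95% of available
# FAST VP pool capacity assigned to File. Input sizes are assumed examples.

def split_sp_cache(available_mb, write_ratio=0.80):
    """Return (write_mb, read_mb) using the suggested 80/20 rule."""
    write_mb = int(available_mb * write_ratio)
    return write_mb, available_mb - write_mb

def file_capacity_gb(pool_gb, file_ratio=0.95):
    """95% of the available pool capacity assigned to File for FAST VP."""
    return pool_gb * file_ratio

write_mb, read_mb = split_sp_cache(10000)   # 10,000 MB available (assumed)
assert (write_mb, read_mb) == (8000, 2000)
assert file_capacity_gb(2000) == 1900.0     # 2,000 GB pool (assumed)
```

Setting read cache first and giving the remainder to write cache, as described below, is consistent with this split: read cache needs less capacity to operate efficiently, while write cache usage affects performance more heavily.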
EMC XtremCache accelerates performance for high-throughput, minimal-response-time applications and cuts
latency in half. EMC XtremCache increases read performance by keeping a localized flash cache within a server,
extending EMC FAST array-based technology into the server with a single, intelligent input/output path from the
application to the data store.
Write-through caching is provided, forcing writes to persist to the back-end storage array to ensure high availability,
data integrity and reliability, and disaster recovery.
Read cache typically needs a smaller amount of capacity to operate efficiently. In addition, its typical usage
is less varied across the majority of workloads. Write cache is different because its usage affects
performance more heavily and can range more widely. Set the read cache first, and then allocate the total
remaining capacity to write cache.
EMC Fast Cache
FAST Cache is best for small random I/O where data is skewed. EMC recommends first utilizing available flash
drives for FAST Cache, which can globally benefit all LUNs in the storage system. Then supplement performance
as needed with additional flash drives in storage pool tiers.
Virtual applications (vApps) are a collection of components that combine to create a virtual appliance running as
a VM. Several Vblock Systems management components reside on the AMP as vApps.
VMware uses the Open Virtualization Format/Archive (OVF/OVA) extensively. VMware vSphere relies on
OVF/OVA standard interface templates as a means of deploying virtual machines and vApps. A VM template
contains all of its OS, application, and configuration data. You can use an existing template as a master to
replicate any VM or vApp, or use it as the foundation to customize a new VM. Any VM can become a template;
it's a simple procedure from within vCenter.
VM cloning is another VM replication option. The existing virtual machine is the parent of the clone. When the
cloning operation is complete, the clone is a separate virtual machine, though it may share virtual disks with the
parent virtual machine. Again, cloning is a simple vCenter procedure.
A snapshot saves the current state of a virtual machine, providing the ability to revert to a previous state if an
error occurs while modifying or updating the VM.
Storage Processor
The Storage Processors, which handle all of the block storage processing for the array, are contained within a
Storage Processor Enclosure (SPE) or Disk Processor Enclosure (DPE), depending on the VNX model. Each SP
has a management module that connects to an Ethernet network and provides administrative access to the
array, front-end ports for host connectivity, and back-end ports that connect to the storage. The front-end ports
can be used to connect the UCS blade server hosts for block access, or Data Movers for file access.
The SPs manage all configuration tasks, store the active configuration information, and perform all monitoring and
alerting for the array.
Front-End Director
A channel director (front-end director) is a card that connects a host to the Symmetrix. Where the VNX uses
Storage Processors, the VMAX uses Engines, but their function is the same: relaying front-end I/O requests to
the back-end buses to access the appropriate storage. Each VMAX Engine is made up of several different
components, including redundant power supplies, cooling fans, Standby Power Supply modules, and
Environmental modules.
© 2014 VCE Company, LLC. All rights reserved.
Each engine also contains two directors, each of which contains CPUs, memory (cache), and the front-end and
back-end ports. As with the VNX, the back-end ports provide connectivity to the Disk Enclosures, and the
front-end ports provide connectivity to hosts or NAS devices or for replication purposes. The System Interface
Boards provide connectivity from each director to the Virtual Matrix, which allows all directors to share memory.
Disk
At the most basic level, all storage arrays do the same thing: they take capacity that resides on physical disks
and make logical volumes that are presented to servers. All storage arrays protect data using RAID data
protection schemes and optimize performance using caching. What differs is the level of scalability and the
advanced features offered, such as replication and disaster recovery capability. All disk drives are housed in
Disk Array Enclosures (DAEs). If the number of RAID packs in Vblock Systems is expanded, more DAEs might
be required. DAEs are added in packs, and the number of DAEs in each pack is equivalent to the number of
back-end buses in the EMC VNX array in the Vblock Systems. Each VNXe system has a disk package (RAID
group) in the first disk enclosure; these disks store the operating environment (OE) for the VNXe system data.
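The DAE expansion rule reduces to a small calculation. The bus count below is an illustrative assumption; the actual value depends on the VNX model in the Vblock Systems.

```python
# Sketch of the DAE expansion rule above: DAEs are added in packs, and the
# number of DAEs per pack equals the number of back-end buses in the VNX
# array. The 8-bus figure is an assumed example, not a specific model's spec.

def daes_added(packs, backend_buses):
    """Each pack contributes one DAE per back-end bus."""
    return packs * backend_buses

# Assumed example: a VNX with 8 back-end buses; adding 2 packs adds 16 DAEs.
assert daes_added(2, 8) == 16
```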
Vault
The vault is a mechanism used to protect VMAX data when the array is powered down, when a power outage
occurs, or in the event of an environmental change such as a temperature threshold being exceeded. The VMAX
writes the data in global cache memory to the vault devices and shuts down the array in a controlled manner.
Two copies of the cache memory are written to independent vault devices, allowing for a fully redundant vault. The
vault process first stops all transactions to the VMAX. Once all I/O is stopped, the directors write all the global
memory data to the vault devices, and shutdown of the VMAX is complete. On power-up, the vault process
restores the cache memory from the vault: the array reinitializes the physical memory, checks the integrity of the
data in the vault, and restores the data to global cache. The VMAX resumes operation once the SPSs are
sufficiently recharged to support another vault operation. Vault devices are important in ensuring the consistency
of application data stored on the VMAX.
Virtual Pools
Virtual pools are resources that give storage administrators the ability to logically segregate application data
stored on storage arrays. Pool LUNs can be implemented as either thin or thick. Thin LUNs provide on-demand
storage that maximizes utilization by allocating storage as needed. Thick LUNs provide high and predictable
performance for applications, mainly because all of the user capacity is reserved and allocated upon creation.
VCE recommends that you use drives in multiples of RAID group sizes, and that you create new virtual
provisioning (VP) pools only if needed to support new application requirements.
Bare metal hosts require separate disk drives. The boot LUNs and data LUNs for bare metal hosts cannot
coexist in the same disk groups or VP pools used for VMware vSphere ESXi boot and VMware vSphere
ESXi data volumes.
The System Library is responsible for discovery. The discovery process consists of using appropriate protocols
to discover the inventory, location, and health of the Vblock Systems and, using that information, to populate an
object model. The System Library uses the information in the object model to populate a PostgreSQL database
that is used by the REST interfaces. The data stored in the object model can also be accessed through SNMP
GETs.
The initial discovery process takes place during the manufacturing process. At that time, a file is populated with
basic information about the Vblock Systems that were built and configured. Later, when the Vblock Systems are in
place at the customer site, the System Library discovers the Vblock Systems model used, its physical
components, and logical entities, using the following methods:
XML API
Simple Network Management Protocol (SNMP)
Storage Management Initiative Specification (SMI-S)
Vendor CLIs, such as EMC Unisphere CLI (UEMCLI)
The System Library discovers the Vblock Systems every 15 minutes, including the following physical components
and logical entities:
Storage groups
RAID groups
LUN relationships to RAID and storage groups
Masking records
Mapping records: LUNs mapped to FA ports so that the ports can see the LUNs for access.
The Plug-in for vCenter is a client that runs on the VMware vSphere Web Client application. Using the API for
System Library, it provides a system-level view of a data center's configured physical servers that form a named
cluster: the Vblock Systems cluster. It also enables a customer to view and monitor information about all the
components in Vblock Systems, including the server, network switches, and storage arrays, as well as their
subcomponents and the management servers.
The graphical user interface of the Plug-in for vCenter provides a tree view that displays the Vblock Systems
name, as well as its overall system health, description, prior state, serial number, and location. Additional
information, such as the health status of the Vblock Systems and their components, can be displayed by drilling
down through the tree view.
The Plug-in for vCenter integrates with the Compliance Checker, which is required for complete monitoring of the
Plug-in for vCenter. Together, they enable you to run reports that provide detailed information about how closely
your Vblock Systems comply with established benchmarks and profiles you select.
The Adapter for vCenter Operations Manager discovers and monitors Vblock Systems hardware and VMware
vCenter software components. The Adapter works with VMware vCenter Operations Manager to collect and
analyze component metrics. Metric data include health, operability, and resource availability that measure the
performance of Vblock Systems components and determine the health and status of the system.
Vblock Systems component dashboards use widgets to show the health of compute, storage, and network
components. Dashboard widgets can be connected to multiple Vblock Systems. The Resources widget shows all
Vblock Systems. Vblock Systems selected in the Resources widget are shown in the Health Tree. Components
selected in the Health Tree are shown in the Alerts, Metric Selector, and Metric Sparklines widgets.
Installation Upgrade
Vblock Systems Documentation And Deliverables
The Vblock Systems family includes an enterprise and service provider-ready system, designed to address
a wide spectrum of virtual machines (VMs), users, and applications. This test plan is intended as a
guideline to perform postimplementation configuration and redundancy testing of Vblock Systems. It is a
document that should be tailored to fit the Vblock Systems and logical configuration for a specific customer.
The purpose of the test plan is to define and track postimplementation configuration and redundancy testing
of Vblock Systems and their components as deployed in the customer's infrastructure. This test plan
provides tests for Cisco UCS, VMware vMotion, the EMC VNX storage array, overall Vblock Systems power
redundancy, and VCE Vision software. The test plan also includes review and approvals for the entire test plan,
test exit criteria, reporting results, and action items for review. Each test includes the purpose, test
procedure, expected results, observed results, and a summary test result of pass or fail. All testing will be
performed in view of the customer's representatives. All results and relevant information recorded in the
Reporting Results section of the document are for review and acceptance by the customer.
The Configuration Reference Guide (CRG) is generated as a deliverable to customers at the end of an
Implementation. The VCE GUI script tool takes all of the script tools designed by VCE technicians and adds them
into a common user interface.
The VCE GUI Script tool provides a familiar user interface that can import data from the logical configuration
survey, allow users to save and distribute existing configuration files, or build new configuration files that can be
used on the fly or saved to be used later.
The VCE Logical Configuration Survey (LCS) is used for configuring Vblock Systems. Information entered here
will be used in the manufacturing of your Vblock Systems. Completion of this document is required to move
forward to the next phase.
A customer may use the VCE Professional Services team to customize Vblock Systems to their specific
environment. This service is termed VCE Build Services, and the first step in the process is the collection
of information through the Logical Configuration Survey (LCS). The LCS and the Logical Build Guide (LBG) are
used by VCE Professional Services teams to tailor the configuration of Vblock Systems. The configuration and
subsequent testing are carried out on VCE premises, and Vblock Systems are shipped in the preconfigured state
directly to the customer's data center. Integration of Vblock Systems into an existing environment is thus
simplified. VCE customers are encouraged to engage appropriate security and audit stakeholders in this process
to provide direction. By providing this information in advance, customer teams reduce the required effort in
configuring the components of Vblock Systems in a compliant manner.
VCE Software Upgrade Service for Vblock Systems provides customers the option to order software
update services for Vblock Systems components to maintain current supported levels. The service
minimizes implementation effort and risk while providing assessment, planning, and execution of upgrade
activities. By using this service, customers can focus on business priorities instead of their infrastructure.
The VCE Release Certification Matrix (RCM) is published semiannually to document software versions that
have been fully tested and verified for Vblock Systems. While customers can opt to install their own updates,
choosing VCE Software Upgrade Service expedites installation and can reduce the risk associated with a
multisystem upgrade. The RCM defines the specific hardware component and software version
combinations that are tested and certified by VCE.
The EMC Secure Remote Support IP Solution (ESRS IP) is an IP-based automated connect-home and remote
support solution enhanced by a comprehensive security system; it is secure, high-speed, and operates 24x7.
ESRS IP creates both a unified architecture and a common point of access for remote support activities
performed on EMC products.
The ESRS installation is performed after the Deployment and Integration team has completed the necessary
tasks of the D&I process. After powering up the Vblock Systems and configuring and validating storage, network,
and virtualization, VCE Professional Services launches and completes the ESRS provisioning tool to finalize the
installation (proxy information, EMC secure ID information, site ID number, site information, and location).
PowerPath Deployment
EMC PowerPath Multipathing automatically tunes the storage area network (SAN) and selects alternate paths for
data, if necessary. Residing on the server, PowerPath Multipathing enhances SAN performance and application
availability. It also integrates multiple-path I/O capabilities, automatic load balancing, and path failover functions for
complete path management.
PowerPath is straightforward to install. After the software is obtained and downloaded, PowerPath is installed on
the VMware ESXi hosts, either from the local CLI or through vCenter Update Manager (VUM) once the EMC
PowerPath remote tools are installed. An EMC PowerPath license server must be installed and configured, and
PowerPath must be registered and licensed on the VMware ESXi hosts before PowerPath Viewer can be
installed.
Configuring a VNXe for the Vblock System 300 family involves a brief sequence of required steps, covered in the
sections that follow.
Provisioning
Pool Management
Storage pools, as the name implies, allow the storage administrator to create pools of storage. In some cases,
you could even create one big pool with all of the disks in the array, which can greatly simplify management.
Along with this comes a complementary technology called FAST VP, which allows you to place multiple disk tiers
into a storage pool and lets the array move data blocks to the appropriate tier, as needed, based on
performance requirements. Simply assign storage from the pool as needed, in a dynamic, flexible fashion, and let
the array handle the rest via autotiering.
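The placement logic described above can be sketched as a toy model: extents are ranked by recent I/O activity, and the hottest ones land on the fastest tier that still has free capacity. This is a simplified illustration, not FAST VP's actual algorithm; the tier names, capacities, and extent counts are made up.

```python
# Toy model of FAST VP-style autotiering: rank extents by recent I/O
# activity and place the hottest ones on the fastest tier with free
# capacity. Tier names and sizes are illustrative only.

def autotier(extents, tiers):
    """extents: list of (extent_id, io_count); tiers: list of
    (tier_name, capacity_in_extents), ordered fastest first.
    Returns {extent_id: tier_name}."""
    placement = {}
    # Hottest extents first, so they land on the fastest tier.
    ranked = sorted(extents, key=lambda e: e[1], reverse=True)
    free = {name: cap for name, cap in tiers}
    order = [name for name, _ in tiers]
    for extent_id, _ in ranked:
        for name in order:
            if free[name] > 0:
                placement[extent_id] = name
                free[name] -= 1
                break
    return placement

pool_tiers = [("flash", 2), ("sas", 4), ("nl-sas", 8)]
activity = [("e1", 900), ("e2", 50), ("e3", 700), ("e4", 5), ("e5", 300)]
result = autotier(activity, pool_tiers)
# The two hottest extents (e1, e3) go to flash; the rest fall to sas.
```

In the real array this relocation runs periodically in the background, so a pool with mixed tiers behaves as a single flexible unit of capacity.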
VCE recommends that you use drives in multiples of RAID group sizes. Only create new virtual provisioning (VP)
pools if needed to support new application requirements.
Bare metal hosts require separate disk drives. Boot LUNs and data LUNs for bare metal hosts cannot
coexist in the same disk groups or VP pools used for VMware vSphere ESXi boot and VMware vSphere
ESXi data volumes.
The initial configuration of the Vblock Systems components is done using the Element Managers, which includes
initialization and specific settings required for UIM discovery.
Element Managers are also used when expanding resources. For example, adding a new chassis to the UCS
environment requires a discovery from UCS Manager, and adding storage to an array requires using Unisphere or
SMC to create new storage or add the disks to an existing storage pool. Troubleshooting tasks will generally
require the use of the Element Managers to examine logs, make configuration changes, etc.
Unisphere is used to change the storage configuration of the array. This includes adding physical disks/capacity to
the array, adding or deleting storage pools or RAID groups, or adding capacity to an existing storage pool. Overall
system settings and replication are also managed using Unisphere.
From the file/NAS perspective, all functionality must be managed using Unisphere, as UIM does not currently
support file provisioning. As a result, network interfaces, file systems, and exports/shares are managed using
Unisphere.
LUNs
LUNs come in two basic categories: Traditional and Pool. Pool LUNs can be either Thin or Thick. Traditional
LUNs have been the standard for many years. When a LUN is created, a number of disks that corresponds to the
desired RAID type are utilized to create the LUN. Traditional LUNs offer fixed performance based on the RAID
and disk type, and are still a good choice for storage that does not have large growth. For Vblock Systems, the
ESXi boot disks are created as Traditional LUNs in a RAID-5 configuration, using 5 disks.
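The capacity math behind the five-disk RAID-5 boot layout above is simple: one disk's worth of capacity in each group holds parity, so usable space is (n - 1)/n of raw. The disk sizes in this sketch are illustrative, not a Vblock specification.

```python
# Usable-capacity arithmetic for a RAID-5 group such as the 4+1
# boot-LUN layout described above: one disk's worth of space is
# consumed by parity, so usable = (disk_count - 1) * disk_size.

def raid5_usable_gb(disk_count, disk_size_gb):
    if disk_count < 3:
        raise ValueError("RAID-5 needs at least 3 disks")
    return (disk_count - 1) * disk_size_gb

usable = raid5_usable_gb(5, 600)   # five hypothetical 600 GB disks
# 4 data disks x 600 GB = 2400 GB usable; 600 GB is consumed by parity
```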
Pool LUNs utilize a larger grouping of disks to create LUNs. While the physical disks that comprise the pool
members are configured with a RAID mechanism, when the LUN is created using a Pool, the LUN is built across
all of the disks in the pool. This allows larger LUNs to be created without sacrificing availability, but it introduces
some variability in performance, as it is more likely that a larger number of applications will share storage in a pool.
In general, the principles of which type of storage to use are the same on both storage arrays: avoid using
traditional LUNs or Standard Devices except when absolutely necessary, as they are restrictive in terms of
growth and support of advanced array features. For data, try to use pools/Thin Pools. This allows you to use
FAST VP to optimize access times and storage costs. In addition, data is striped across multiple RAID
groups/devices in a pool, which improves performance. Thin LUNs/Devices make maximum use of
your storage capacity by not overallocating storage and having it sit idle. Use Thick LUNs/preallocated Thin
Devices only when required by an application.
To mask storage, UIM/P creates one storage group per server on a VNX array. The boot LUN and any common
data LUNs are put in each server's storage group. On a VMAX array, UIM/P creates one masking view per server
that holds only the boot LUN, plus one additional masking view that contains only the shared data LUNs.
This inconsistency is a result of the differing masking rules of each array type.
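The two masking layouts can be contrasted in a short sketch. The server and LUN names are hypothetical placeholders; only the grouping logic mirrors the behavior described above.

```python
# Sketch of the two UIM/P masking layouts described above.
# All server/LUN names are made-up examples.

def vnx_storage_groups(servers, boot_luns, shared_data_luns):
    """VNX: one storage group per server, holding its boot LUN
    plus all common data LUNs."""
    return {s: [boot_luns[s]] + list(shared_data_luns) for s in servers}

def vmax_masking_views(servers, boot_luns, shared_data_luns):
    """VMAX: one masking view per server with only its boot LUN,
    plus one extra view containing only the shared data LUNs."""
    views = {s: [boot_luns[s]] for s in servers}
    views["shared_data"] = list(shared_data_luns)
    return views

servers = ["esx01", "esx02"]
boot = {"esx01": "boot_lun_1", "esx02": "boot_lun_2"}
data = ["data_lun_1", "data_lun_2"]
vnx = vnx_storage_groups(servers, boot, data)
vmax = vmax_masking_views(servers, boot, data)
```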
When creating file storage, UIM/P exports the file system to the NFS IP addresses of the servers; the IP pool
created earlier supplies the service with those addresses.
NFS/CIFS
The Data Mover Enclosure (DME) is physically similar to the Storage Processor Enclosure, except that instead of
housing Storage Processors, it houses Data Movers, which are also called X-Blades. The Data Movers provide
access to the exported NFS or CIFS file systems.
Each Data Mover has one Fibre Channel I/O module installed. Two ports are used for connectivity to the Storage
Processors and two ports can be used for tape drive connectivity, allowing backup directly from the Data Mover.
In addition to the single Fibre Channel I/O Module, two 10-Gigabit Ethernet modules are installed in each Data
Mover. These Ethernet modules provide connectivity to the LAN for file sharing.
A maximum transmission unit (MTU) is the largest size packet or frame, specified in octets (eight-bit bytes) that
can be sent in a packet- or frame-based network such as the Internet. The Internet's Transmission Control
Protocol (TCP) uses the MTU to determine the maximum size of each packet in any transmission. Too large an
MTU size may mean retransmissions if the packet encounters a router that can't handle that large a packet. Too
small an MTU size means relatively more header overhead and more acknowledgements that have to be sent
and handled. Most computer operating systems provide a default MTU value that is suitable for most users. In
general, Internet users should follow the advice of their Internet service provider (ISP) about whether to change
the default value and what to change it to. VCE recommends an MTU size of 9000 (jumbo frames) for Unified
storage and 1500 for Block storage.
Ethernet frames with a maximum transmission unit (MTU) ranging from more than 1500 bytes to 9000 bytes are
known as Jumbo frames. Use jumbo frames only when the frame to or from any combination of Ethernet devices
can be handled without any Layer 2 fragmentation or reassembly.
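The header-overhead trade-off above can be quantified with a quick calculation. This sketch assumes a plain Ethernet II frame carrying TCP over IPv4 with no options (18 bytes of Ethernet header plus FCS, 20-byte IPv4 header, 20-byte TCP header); real traffic may carry more overhead.

```python
# Rough payload-efficiency comparison for standard (1500) vs jumbo
# (9000) MTU. Overhead assumes Ethernet II + IPv4 + TCP, no options.

ETH_OVERHEAD = 18   # Ethernet header + FCS, in bytes
IP_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

def payload_efficiency(mtu):
    """Fraction of on-wire bytes that carry application payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

std = payload_efficiency(1500)    # about 0.962
jumbo = payload_efficiency(9000)  # about 0.994
```

The jumbo frame moves more payload per unit of header work, which is why VCE recommends MTU 9000 for Unified (file) storage traffic.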
The VNX incorporates the Common Internet File System (CIFS) protocol as an open standard for network file
service. CIFS is a file access protocol designed for the Internet and is based on the Server Message Block (SMB)
protocol that the Microsoft Windows operating system uses for distributed file sharing. The CIFS protocol lets
remote users access file systems over the network. CIFS is configured on a Data Mover using the command line
interface (CLI). You can also perform many of these tasks using one of the VNX management applications: EMC
Unisphere, Microsoft Management Console (MMC) snap-ins, or Active Directory Users and Computers (ADUC).
The CIFS server created on a physical Data Mover with no interface specified becomes the default server. It is
automatically associated with all unused network interfaces on the Data Mover and any new interfaces that you
subsequently create. If you create additional CIFS servers on the Data Mover, you must specify one or more
network interfaces with which to associate the server. You can reassign network interfaces to other CIFS servers
on the Data Mover as you create them, or later as required. The default CIFS server behavior is useful if you plan
to create only one CIFS server on the Data Mover. It is recommended that you always explicitly associate network
interfaces with the first CIFS server created on a Data Mover. This practice makes it easier to create more
network interfaces in the future and avoids having CIFS traffic flow onto networks for which it is not intended. The
default CIFS server cannot be created on a Data Mover having a loaded Virtual Data Mover (VDM).
You can use network interfaces to access more than one Windows domain by creating multiple network
interfaces, each associated with a different domain, and assigning a different CIFS server to each interface.
VDM
A VDM is a software feature that allows administrative separation and replication of CIFS environments. A VDM
houses a group of CIFS servers and their shares. A VDM looks like a computer on the Windows network. It has its
own event log, local user and group database, CIFS servers and shares, and usermapper cache, which are
applicable when using both NFS and CIFS to access the same file system on the same VNX. EMC
recommends that you create CIFS servers in VDMs. This provides separation of the CIFS server user and group
databases, CIFS auditing settings, and event logs. This is required if the CIFS server and its associated file
systems are ever to be replicated using VNX Replicator. An exception to creating a CIFS server in a VDM is the
CIFS server that is used to route antivirus activity.
VDMs apply only to CIFS shares. You cannot use VDMs with NFS exports. Each VDM has access only to the file
systems mounted to that VDM. This provides a logical isolation between the VDM and the CIFS servers that it
contains. A VDM can be moved from one physical Data Mover to another.
Each VDM stores its configuration information in a VDM root file system, which is a directory within the root file
system of the physical Data Mover. No data file systems are stored within the VDM root file system. All user data
is kept in user file systems.
The default size of the root file system is 128 MB. In an environment with a large number of users or shares, you
might need to increase the size of the root file system. You cannot extend the root file system automatically.
Zoning Principles
Zoning enables you to set up access control between storage devices or user groups. If you have administrator
privileges in your fabric, you can create zones to increase network security and to prevent data loss or corruption.
Zoning is enforced by examining the source-destination ID fields of frames. Zoning is a fabric-based service in
storage area networks that groups host and storage nodes that need to communicate.
Best Practices
Ports on the storage arrays are reserved for add-on components and features such as RecoverPoint, data
migration, VG Gateways (Vblock System 700 family only), and SAN backup. Do not use these array front-end
ports for VPLEX or direct host access.
EMC/VCE recommends single-initiator zoning. Each HBA port should be configured on its switch with a separate
zone that contains the HBA and the SP ports with which it communicates.
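The single-initiator rule above can be expressed as a small zone generator: every HBA port gets its own zone containing only that HBA plus the SP ports it communicates with. The WWPNs in this sketch are made-up placeholders, and real zoning is configured on the fabric switches, not in Python.

```python
# Sketch of single-initiator zoning: one zone per HBA port, each
# containing that single initiator plus the storage-processor (SP)
# ports it talks to. WWPNs below are fictitious placeholders.

def single_initiator_zones(hba_ports, sp_ports):
    """Return {zone_name: [members]} with exactly one initiator per zone."""
    zones = {}
    for i, hba in enumerate(hba_ports, start=1):
        zones[f"zone_hba{i}"] = [hba] + list(sp_ports)
    return zones

hbas = ["10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"]
sps = ["50:06:01:60:3e:a0:12:34", "50:06:01:68:3e:a0:12:34"]
zones = single_initiator_zones(hbas, sps)
# Two zones, each holding one HBA and both SP ports.
```

Keeping one initiator per zone limits the blast radius of fabric events such as RSCNs to the single host involved.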
Connect the host and storage ports in such a way as to prevent a single point of failure from affecting redundant
paths. If you have a dual-attached host and each HBA accesses its storage through a different storage port, do
not place both storage ports for the same server on the same Line Card or ASIC.
Always use two power sources.
While planning the SAN, keep track of how many host and storage pairs will be utilizing the ISLs between
domains. As a general best practice, if two switches have to be connected by ISLs, ensure that there is a
minimum of two ISLs between them and that there are no more than six initiator and target pairs per ISL.
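The ISL guidance above reduces to a small sizing formula: take the ceiling of pairs divided by six, but never fewer than two links. A minimal sketch:

```python
# Sizing check for the ISL best practice above: at least two ISLs
# between any pair of connected switches, and no more than six
# initiator-target pairs per ISL.
import math

def isls_required(initiator_target_pairs, pairs_per_isl=6, minimum=2):
    """Return the number of ISLs needed between two switches."""
    return max(minimum, math.ceil(initiator_target_pairs / pairs_per_isl))

isls_required(10)   # 10 pairs fit on the 2-ISL minimum
isls_required(20)   # 20 pairs need 4 ISLs
```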
There can be a maximum of three tiers in a heterogeneous storage pool based on the drive types available.
Heterogeneous pools can also have two-drive types and still leverage FAST VP. FAST VP does not
differentiate tiers by the drive speeds. Users should ensure that they choose drives with common rotation
speeds when building pools.
Troubleshooting
Connectivity
Troubleshooting is a form of problem solving, often applied to repair failed products or processes. It is a logical,
systematic search for the source of a problem so that it can be solved and the product or process can be made
operational again. Troubleshooting is needed to develop and maintain complex systems where the symptoms of a
problem can have many possible causes. It is used in many fields, such as engineering, system
administration, electronics, automotive repair, and diagnostic medicine. Troubleshooting requires identification of
the malfunction(s) or symptoms within a system. Then, experience is commonly used to generate possible causes
of the symptoms. Determining the most likely cause is a process of elimination: eliminating potential causes of a
problem one by one. Finally, troubleshooting requires confirmation that the solution restores the product or
process to its working state.
In general, troubleshooting is the identification of, or diagnosis of, "trouble" in the management flow of a
corporation or a system caused by a failure of some kind. The problem is initially described as symptoms of
malfunction, and troubleshooting is the process of determining and remedying the causes of these symptoms.
A system can be described in terms of its expected, desired, or intended behavior (usually, for artificial systems,
its purpose). Events or inputs to the system are expected to generate specific results or outputs. (For example,
selecting the "print" option in a computer application is intended to result in a hard copy emerging from
some specific device.) Any unexpected or undesirable behavior is a symptom. Troubleshooting is the process of
isolating the specific cause or causes of the symptom.
Effective troubleshooting steps:
Define normal behavior: What did previous metrics look like?
Identify the problem.
Collect information that will lead to identifying the root cause.
Check and collect switch and storage array information and configuration from log files.
Print or capture exact error codes.
Use array management software to monitor the health of the system.
Check power on the system.
Open a ticket with support.
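The ordered steps above can be sketched as a simple checklist runner: each check reports whether its layer looks healthy, and the first failing check points at where to dig deeper. The check names and pass/fail values here are invented examples, not real diagnostics.

```python
# Minimal checklist runner for an ordered troubleshooting flow:
# run each check in sequence and stop at the first failure.

def run_checklist(checks):
    """checks: list of (name, callable) run in order.
    Returns (all_passed, name_of_first_failure_or_None)."""
    for name, check in checks:
        if not check():
            return False, name
    return True, None

checks = [
    ("power", lambda: True),
    ("switch_logs_clean", lambda: False),   # simulated fault for the demo
    ("array_health", lambda: True),
]
ok, failed_at = run_checklist(checks)
# ok is False; failed_at is "switch_logs_clean"
```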
Fault Detection
Storage administrators are constantly trying to maximize performance. When an application starts performing
poorly and happens to rely on a SAN, administrators struggle to isolate the performance bottleneck. Performance
issues can be the result of anything from a misconfigured component to subtle interactions between various parts
of the SAN. When performance issues occur, administrators can rely on Sentry Software monitoring solutions to
check for signs of physical faults, such as I/O retries, suspect data movement, and poor response times.
The Symmetrix DMX has unique methods to proactively detect and prevent failures. The Symmetrix is designed
with the following data integrity features:
Error checking, correction, and data integrity protection.
Disk error correction and error verification.
Cache error correction and error verification.
Periodic system checks.
Through the service processor, the Enginuity storage-operating environment proactively monitors all end-to-end
I/O operations for errors and faults. By tracking these errors during normal operation, Enginuity can recognize
patterns of error activity and predict a potential hard failure. This proactive error tracking capability can detect
and/or remove from service a suspect component before a failure occurs.
Health Check
A basic health check of the Celerra or VNX arrays is always the best place to start. This check is an
environmental one that tests all the power supplies, fans, management switches, and other components contained
in the Vblock Systems.
It is important to note that the EMC Celerra and VNX arrays have redundant devices, so the failure of a single
power supply or management switch is not a concern because another one remains available. After running the
necessary commands, EMC/VCE recommends using Unisphere (on a VNXe) and making use of its dashboard
widgets and logs page, which monitor and report a variety of events on the VNXe system. Events are written to
the user log, and the interface shows an icon revealing the label for each event. The labels range from
Informational to Critical.
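A user log labeled from Informational to Critical lends itself to simple severity filtering. The sketch below is a generic illustration of that idea; the severity names, ordering, and event fields are assumptions, not the VNXe product's actual API or log format.

```python
# Toy severity filter over an event log labeled from Informational to
# Critical, as described above. Field names and severity ordering are
# assumptions for illustration only.

SEVERITY_ORDER = ["informational", "warning", "error", "critical"]

def at_least(events, minimum):
    """Keep events whose severity is at or above `minimum`."""
    threshold = SEVERITY_ORDER.index(minimum)
    return [e for e in events
            if SEVERITY_ORDER.index(e["severity"]) >= threshold]

log = [
    {"id": 1, "severity": "informational", "msg": "user login"},
    {"id": 2, "severity": "warning", "msg": "hot spare invoked"},
    {"id": 3, "severity": "critical", "msg": "SP fault"},
]
urgent = at_least(log, "warning")   # events 2 and 3
```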
Due to the various components and management options in Vblock Systems, VCE recommends reviewing the
documentation for a specific model of Vblock Systems and its associated components.
A real-time check on the connectivity, management, and storage component status of your VNX system is
recommended. The health check verifies network connectivity, management service status, storage processor
status, hot spare status, disk faults, disk status, blade status, and hardware component faults. Unisphere Service
Manager (USM) is used for this purpose.
Vblock Systems have been architected with a base and upgrade approach. Base configurations are those
configurations that represent an entry point to Vblock Systems. A Vblock Systems base begins with racks,
in-rack PDUs, cabling, patch panels, aggregate SAN and Ethernet switches, enough UCS hardware to
support a full rack of servers, and storage configuration with a small amount of initial storage in a discrete
configuration. Upgrades of blade packs and disk tier groups extend the base to achieve a particular
performance or scalability goal. This architecture provides customers with significant scalability and flexibility
to meet business requirements. Due to the many components in Vblock Systems, it is recommended that you
visit the VCE and EMC technical publication websites for the most current information on any procedure.
Documentation and procedures are updated frequently.
Conclusion
This study guide represents a subset of all of the tasks, configuration parameters, and features that are part of a
Vblock Systems deployment and implementation. It focuses on deploying EMC storage arrays in a
VCE Vblock Systems converged infrastructure, including how to configure and manage the data storage arrays
on Vblock Systems.
Exam candidates with the related recommended prerequisite working knowledge, experience, and training should
thoroughly review this study guide and the resources in the References document (available on the VCE
Certification website) to help them successfully complete the VCE Vblock Systems Deployment and
Implementation: Storage Exam.
ABOUT VCE
VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of
converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while
improving time to market for our customers. VCE, through the Vblock Systems, delivers the industry's
only fully integrated and fully virtualized cloud infrastructure system. VCE solutions are available
through an extensive partner network, and cover horizontal applications, vertical industry offerings, and
application development environments, allowing customers to focus on business innovation instead of
integrating, validating, and managing IT infrastructure.
For more information, go to vce.com.
Copyright 2014 VCE Company, LLC. All rights reserved. VCE, VCE Vision, Vblock, and the VCE logo are registered trademarks or
trademarks of VCE Company LLC or its affiliates in the United States and/or other countries. All other trademarks used herein are the property
of their respective owners.
12052014