
Design and Implementation Guide

EMC VSPEX with EMC VPLEX for


VMware vSphere 5.1

Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with
EMC VPLEX Metro, VMware vSphere, and EMC VNX for up to 125 virtual machines.
June 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA.
Published June 2013
EMC believes the information in this publication is accurate as of its publication date. The information is subject
to change without notice.
The information in this publication is provided "as is." EMC Corporation makes no representations or warranties
of any kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United
States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and
advisories section on the EMC Online Support website.
EMC VSPEX with EMC VPLEX for VMware vSphere 5.1
Part Number H11878

Contents
1. Executive Summary
2. Background and VPLEX Overview
2.1 Document purpose
2.2 Target Audience
2.3 Business Challenges
3. VSPEX with VPLEX Solution
3.1 VPLEX Local
3.2 VPLEX Metro
3.3 VPLEX Platform Availability and Scaling Summary
4. VPLEX Overview
4.1 Continuous Availability
4.2 Mobility
4.3 Stretched Clusters Across Distance
4.4 vSphere HA and VPLEX Metro HA
4.5 VPLEX Availability
4.6 Storage/Service Availability
5. Solution Architecture
5.1 Overview
5.2 Solution Architecture: VPLEX Key Components
5.3 VPLEX Cluster Witness
6. Best Practices and Configuration Recommendations
6.1 VPLEX Back-End Storage
6.2 VPLEX Host Connectivity
6.3 VPLEX Network Connectivity
6.4 VPLEX Cluster Connectivity
6.5 Storage Configuration Guidelines
6.6 VSPEX Storage Building Blocks
7. VPLEX Local Deployment
7.1 Overview
7.2 Physical Installation
7.3 Preliminary Tasks
7.4 Set public IPv4 address
7.5 Run EZ-Setup Wizard
7.6 Expose Back-End Storage
7.7 Resume EZ-Setup
7.8 Meta-volume
7.9 Register VPLEX
7.10 Enable Front-End Ports
7.11 Configure VPLEX for ESRS Gateway
7.12 Re-Verify Cluster Health
8. VPLEX Metro Deployment
8.1 Overview
8.2 Physical Installation
8.3 Preliminary Tasks
8.4 Set public IPv4 address
8.5 Run EZ-Setup Wizard on Cluster 1
8.6 Expose Back-End Storage
8.7 Resume EZ-Setup
8.8 Meta-volume
8.9 Register Cluster 1
8.10 Enable Front-End Ports
8.11 Connect Cluster 2
8.12 Verify the Product Version
8.13 Verify Cluster 2 Health
8.14 Synchronize Clusters
8.15 Launch EZ-Setup on Cluster 2
8.16 Expose Back-end Storage on Cluster 2
8.17 Resume EZ-Setup on Cluster 2
8.18 Create Meta-volume on Cluster 2
8.19 Register Cluster 2
8.20 Configure VPLEX for ESRS Gateway
8.21 Complete EZ-Setup on Cluster 1
8.22 Complete EZ-Setup on Cluster 2
8.23 Configure WAN Interfaces
8.24 Join the Clusters
8.25 Create Logging Volumes
8.26 Re-Verify Cluster Health
9. Provisioning Virtual Volumes with VPLEX Local
9.1 Confirm Storage Pools
9.2 Provision Storage
10. Adding VPLEX to an Existing VSPEX Solution
10.1 Assumptions
10.2 Integration Procedure
10.3 Storage Array Mapping
10.4 VPLEX Procedure
10.5 Power-on Hosts
10.6 Register Host Initiators
10.7 Create Highly Available Datastores
11. Converting a VPLEX Local Cluster into a VPLEX Metro Cluster
11.1 Gathering Information for Cluster-2
11.2 Configuration Information for Cluster Witness
11.3 Consistency Group and Detach Rules
11.4 Create Distributed Devices between VPLEX Cluster-1 and Cluster-2
11.5 Create Storage View for ESXi Hosts
12. Post-install checklist
13. Summary
Appendix-A -- References
Appendix-B Tech Refresh using VPLEX Data Mobility
Appendix-C VPLEX Configuration limits
Appendix-D VPLEX Pre-Configuration Worksheets

List of Figures
Figure 1: Private Cloud components for VSPEX with VPLEX Local
Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution
Figure 3: VPLEX delivers zero downtime
Figure 4: Application Mobility within a datacenter
Figure 5: Application and Data Mobility Example
Figure 6: Application and Data Mobility Example
Figure 7: Highly Available Infrastructure Example
Figure 8: VPLEX Local architecture for Traditional Single Site Environments
Figure 9: VPLEX Metro architecture for Distributed Environments
Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes
Figure 11: Failure scenarios without VPLEX Witness
Figure 12: Failure scenarios with VPLEX Witness
Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site VPLEX Witness
Figure 14: VMware virtual disk types
Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure
Figure 16: Place VNX LUNs into VPLEX Storage Group
Figure 17: VPLEX Local System Status and Login Screen
Figure 18: Provisioning Storage
Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes
Figure 20: EZ-Provisioning Step 2: Register Initiators
Figure 21: EZ-Provisioning Step 3: Create Storage View
Figure 22: EZ-Provisioning
Figure 23: Create Virtual Volumes - Select Array
Figure 24: Create Virtual Volumes - Select Storage Volumes
Figure 25: Create Distributed Volumes - Select Mirrors
Figure 26: Create Distributed Volumes - Select Consistency Group
Figure 27: EZ-Provisioning - Register Initiators
Figure 28: View Unregistered Initiator-ports
Figure 29: View Unregistered Initiator-ports
Figure 30: EZ-Provisioning - Create Storage View
Figure 31: Create Storage View - Select Initiators
Figure 32: Create Storage View - Select Ports
Figure 33: Create Storage View - Select Virtual Volumes
Figure 34: VPLEX Metro System Status Page
Figure 35: VPLEX Consistency Group created for Virtual Volumes
Figure 36: VPLEX Distributed Devices
Figure 37: VPLEX Storage View for ESXi Hosts
Figure 38: Batch Migration, Create Migration Plan
Figure 39: Batch Migration, Start Migration
Figure 40: Batch Migration, Monitor Progress
Figure 41: Batch Migrations, Change Migration State
Figure 42: Batch Migrations, Commit the Migration

List of Tables
Table 1: VPLEX Components
Table 2: Hardware Resources for Storage
Table 3: IPv4 Networking Information
Table 4: Metadata Backup Information
Table 5: SMTP details to configure event notifications
Table 6: SNMP information
Table 7: Certificate Authority (CA) and Host Certificate information
Table 8: Product Registration Information
Table 9: VPLEX Metro IP WAN Configuration Information
Table 10: Cluster Witness Configuration Information
Table 13: IPv4 Networking Information
Table 14: Metadata Backup Information
Table 15: SMTP details to configure event notifications
Table 16: SNMP information
Table 17: Certificate Authority (CA) and Host Certificate information
Table 18: Product Registration Information
Table 19: IP WAN Configuration Information
Table 20: Cluster Witness Configuration Information

1. Executive Summary
Businesses face many challenges in delivering application availability while working within
constrained IT budgets. Increased deployment of storage virtualization lowers costs and
improves availability, but this alone will not allow businesses to provide the level of
application access demanded by users. This document provides an overview of VPLEX and its
use cases, and explains how VSPEX with VPLEX solutions provide the continuous availability and
mobility that mission-critical applications require for 24x7 operations.
This document is divided into sections that give an overview of the VPLEX family, its use cases,
the solution architecture, and how VPLEX extends VMware capabilities, along with solution
requirements and configuration details.
The EMC VPLEX family versions, Local and Metro, provide continuous availability and
non-disruptive data mobility for EMC and non-EMC storage within and across data centers.
Additionally, this document will cover the following:
How VMware vSphere makes it simpler and less expensive to provide higher levels of
availability for critical business applications. With vSphere, organizations can easily
increase the baseline level of availability provided for all applications, as well as
provide higher levels of availability more easily and cost-effectively.

How VPLEX Metro extends VMware vMotion, HA, DRS, and FT, stretching the VMware
cluster across distance and providing solutions that go beyond traditional Disaster
Recovery.

Solution requirements for software and hardware, material lists, step-by-step sizing
guidance and worksheets, and verified deployment steps to implement a VPLEX
solution with VSPEX Private Cloud for VMware vSphere that supports up to 125
Virtual machines.

2. Background and VPLEX Overview


2.1 Document purpose
This document provides an overview of how to use VPLEX with a VSPEX Proven
Infrastructure, an explanation of how to modify the architecture for specific engagements,
and instructions on how to effectively deploy and monitor the overall system.
This document applies to VSPEX deployed with EMC VPLEX Metro and VPLEX Witness. The
details provided in this document are based on the following configurations:

VPLEX GeoSynchrony 5.1 (patch 4) or higher


VPLEX Metro
VPLEX Clusters are within 5 milliseconds (ms) of each other for VMware HA (10ms is
possible with a VMware Enterprise Plus license)
VPLEX Witness is deployed to a third failure domain
ESXi and vSphere 5.1 or later are used
Any qualified pair of arrays (both EMC and non-EMC) listed on the EMC Simple
Support Matrix (ESSM) found here:
https://elabnavigator.emc.com/vault/pdf/EMC_VPLEX.pdf

2.2 Target Audience


The readers of this document should be familiar with VSPEX Proven Infrastructures and have
the necessary training and background to install and configure VMware vSphere, EMC
VNX series storage systems, VPLEX, and the associated infrastructure required by this
implementation. External references are provided where applicable, and readers should
be familiar with these documents. After purchase, implementers of this solution should
focus on the configuration guidelines of the solution validation phase and the appropriate
references and appendices.

2.3 Business Challenges


Most of today's organizations operate 24x7, with most applications being mission-critical.
Continuous availability of these applications to all users is a primary goal of IT. A secondary
goal is to have all applications up and running again as soon as possible if they stop
processing. There are hundreds of possibilities that can take infrastructure down, from fires,
flooding, natural disasters, and application failures to simple mistakes in the computer room,
most of which are outside of IT's control. Sometimes there are good reasons to take down
applications for scheduled maintenance, tech refreshes, load balancing, or data center
relocation. In all of these scenarios the outcome is the same: applications stop processing.
The ultimate goal of the IT organization is to maintain mission-critical application
availability.

3. VSPEX with VPLEX Solution


VSPEX with VPLEX, utilizing best-of-breed technologies, delivers the power, performance, and
reliability businesses need to be competitive. VSPEX solutions are built with proven best-of-breed
technologies to create complete virtualization solutions that enable you to make an
informed decision at the hypervisor, server, and networking layers.
Customers are increasingly deploying their business applications on consolidated compute,
network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the
complexity of configuring every component of a traditional deployment model. With VSPEX
the complexity of integration management is reduced while maintaining the application
design and implementation options. VPLEX enhances the VSPEX value proposition by
adding the continuous availability and non-disruptive data mobility use cases to the VSPEX
infrastructure. VPLEX rounds out the VSPEX Data Protection portfolio by providing the ability
to:
Refresh storage array technology non-disruptively within VSPEX
vMotion virtual machines non-disruptively from one VSPEX system to another
(for example, for workload balancing or disaster avoidance)
Automatically restart virtual machines from one VSPEX to another to deliver a higher
level of protection to VMware environments on VSPEX
The following sections describe the VPLEX Local and VPLEX Metro products and how they
deliver the value propositions listed above as part of a VSPEX solution.

3.1 VPLEX Local


This solution uses VSPEX Private Cloud for VMware vSphere 5.1 with VPLEX Local to provide
simplified management and non-disruptive data mobility between multiple heterogeneous
storage arrays within the data center. VPLEX removes physical barriers within the
datacenter. With its unique scale-out architecture, VPLEX's advanced data caching and
distributed cache coherency provide workload resiliency, automatic sharing, balancing, and
failover of storage domains, and enable local access with predictable service levels.

Figure 1: Private Cloud components for VSPEX with VPLEX Local


Note: The above image depicts a logical configuration; physically, the VPLEX can be hosted
within the VSPEX rack.

3.2 VPLEX Metro


The two-data-center solution referenced in this document uses VSPEX Private Cloud for
VMware vSphere 5.1 with VPLEX Metro to provide simplified management and non-disruptive
data mobility between multiple heterogeneous storage arrays across data centers. VPLEX
Metro enhances the capabilities of VMware vMotion, HA, DRS, and FT to provide a solution
that extends data protection strategies beyond traditional Disaster Recovery. This solution
provides a new type of deployment which achieves continuous availability over distance for
today's enterprise storage and cloud environments.
VPLEX Metro provides data access and mobility between two VPLEX clusters within
synchronous distances. This solution builds on the VPLEX Local approach by creating a
VPLEX Metro cluster between the two geographically dispersed datacenters. Once
deployed, this solution provides continuously available distributed storage volumes over
distance and makes VMware technologies such as vMotion, HA, DRS, and FT even better and
easier to use.

Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution


The above image is a logical configuration depicting 125 VMs with their datastores
stretched across a VMware vSphere 5.1 cluster. This infrastructure will deliver continuous
availability for the applications as well as enable non-disruptive workload mobility and
balancing. The VPLEX appliances can be physically hosted within the VSPEX racks if space
permits.

3.3 VPLEX Platform Availability and Scaling Summary


VPLEX addresses high-availability and data mobility requirements while scaling to the I/O
throughput required for the front-end applications and back-end storage.
Continuous Availability (CA), High-availability (HA), and Data Mobility features are all
characteristics of the VPLEX Local and VPLEX Metro solutions outlined in this document.
The basic building block of a VPLEX is an engine. To eliminate single points of failure, each
VPLEX Engine consists of two Directors. A VPLEX cluster can consist of one, two, or four
engines. Each engine is protected by a standby power supply (SPS), and each Fibre
Channel switch gets its power through an uninterruptible power supply (UPS). In a dualengine or quad-engine cluster, the management server also gets power from a UPS. The
management server has a public Ethernet port, which provides cluster management
services when connected to the customer network.
VPLEX scales both up and out. Upgrades from a single engine to a dual engine cluster as
well as from a dual engine to a quad engine are fully supported and are accomplished nondisruptively. This is referred to as scale up. Upgrades from a VPLEX Local to a VPLEX Metro
are also supported non-disruptively.
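
As a quick illustration of the scale-up options just described, the sketch below tabulates the engine and director counts for each cluster size. The names and helper function are my own, not VPLEX terminology; the figures come directly from this section: two directors per engine, one, two, or four engines per cluster, and a UPS-backed management server in dual- and quad-engine clusters.

```python
# Scale-up options for a single VPLEX cluster, as described above:
# each engine contains two directors, and a cluster may hold 1, 2, or 4 engines.
ENGINES_PER_CONFIG = {"single": 1, "dual": 2, "quad": 4}
DIRECTORS_PER_ENGINE = 2

def cluster_summary(config: str) -> dict:
    """Return the engine/director counts and UPS note for a cluster size."""
    engines = ENGINES_PER_CONFIG[config]
    return {
        "engines": engines,
        "directors": engines * DIRECTORS_PER_ENGINE,
        # Dual- and quad-engine clusters also power the management server from a UPS.
        "mgmt_server_on_ups": engines > 1,
    }

if __name__ == "__main__":
    for name in ENGINES_PER_CONFIG:
        print(name, cluster_summary(name))
```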

4. VPLEX Overview
The EMC VSPEX with EMC VPLEX solution represents the next-generation architecture for
continuous availability and data mobility for mission-critical applications. This architecture
is based on EMC's 20+ years of expertise in designing, implementing, and perfecting
enterprise-class intelligent cache and distributed data protection solutions. The combined
VSPEX with VPLEX solution provides a complete system architecture capable of supporting
up to 125 virtual machines with a redundant server and network topology and highly
available storage within or across geographically dispersed datacenters.
VPLEX addresses three distinct customer requirements:
Continuous Availability: The ability to create high-availability storage infrastructure
across synchronous distances with unmatched resiliency.
Mobility: The ability to move applications and data across different storage
installations, whether within the same data center, across a campus, or within a
geographical region.
Stretched Clusters Across Distance: The ability to extend VMware vMotion, HA, DRS,
and FT outside the data center across distances, ensuring the continuous availability
of VSPEX solutions.

4.1 Continuous Availability


The EMC VPLEX family provides continuous availability with zero unplanned downtime for
applications within a data center and across data centers at synchronous distances.
VPLEX enables users to have the exact same information simultaneously read/write
accessible in two locations, delivering the ability to stretch hypervisor clusters, such as
VMware vSphere clusters, across sites. Instead of leaving assets idle at the second site, all
infrastructure is utilized in an active-active state.

Figure 3: VPLEX delivers zero downtime

With VPLEX in place, customers gain great flexibility in the area of data mobility.
This addresses compelling use cases such as array technology refreshes with no
disruption to the applications and no planned downtime. It also enables performance load
balancing for customers who want to dynamically move data to a higher-performing or
higher-capacity array without affecting end users.

4.2 Mobility
EMC VPLEX Local enables connectivity to heterogeneous storage arrays, providing
seamless data mobility and the ability to manage storage provisioned from multiple
heterogeneous arrays through a single interface within a data center. This provides you with
the ability to relocate, share, and balance infrastructure resources within a data center.

Figure 4: Application Mobility within a datacenter


VPLEX Metro configurations enable migrations within and across datacenters over
synchronous distances. In combination with VMware vMotion, VPLEX Metro allows you to
transparently relocate virtual machines and their corresponding applications and data over
synchronous distance. This provides you with the ability to relocate, share, and balance
infrastructure resources between data centers. These capabilities save you money, both by
reducing the time needed for data migrations and by balancing workloads across sites to fully
utilize infrastructure at both locations.

Traditional data migrations using array replication or manual data moves are expensive,
time-consuming, and oftentimes risky. They are often expensive because companies
typically pay for services work. Migrations can be time consuming because the customer
can't simply shut down servers; instead, they must work through their business
units to identify possible maintenance windows, which mostly fall during nights and
weekends. Migrations can also be risky events if all of the dependencies between
applications aren't well documented, and any issues in the migration process
may not be remediable until the following maintenance cycle without an outage.
VPLEX limits the risk in traditional migrations by providing a fully reversible process. If
performance or other issues are discovered when the new storage is put online, the new
storage can be taken down and the old storage can continue serving I/O. Because migrations
with VPLEX are easy, customers can perform them themselves, yielding significant
services cost savings. Also, new infrastructure can be used immediately, with no
need to wait for scheduled downtime to begin migrations. There is a powerful TCO benefit
associated with VPLEX: all future refreshes and migrations are free.

Figure 5: Application and Data Mobility Example


A VPLEX Cluster is a single virtualization I/O group that enables non-disruptive data mobility
across the entire cluster. This means that all directors in a VPLEX cluster have access to all
storage volumes, making this what is referred to as an N-1 architecture. This type of
architecture tolerates multiple director failures, down to a single surviving director, without
loss of access to data.

During a VPLEX Mobility operation, any jobs in progress can be paused or stopped without
affecting data integrity. Data Mobility creates a mirror of the source and target devices,
allowing the user to commit or cancel the job without affecting the actual data. A record of
all mobility jobs is maintained until the user purges the list for organizational purposes.

4.3 Stretched Clusters Across Distance


VPLEX Metro extends VMware vMotion, High Availability (HA), Distributed Resource
Scheduler (DRS) and Fault Tolerance (FT) outside the data center across distances, ensuring
the continuous availability of VSPEX solutions. Stretching vMotion across datacenters
enables non-disruptive load balancing, maintenance, and workload re-location. VMware
DRS provides for full utilization of resources across domains.

Figure 6: Application and Data Mobility Example

4.4 vSphere HA and VPLEX Metro HA

Due to its core design, EMC VPLEX Metro provides the perfect foundation for VMware High
Availability and Fault Tolerance clustering over distance, ensuring simple and transparent
deployment of stretched clusters without any added complexity.
VPLEX Metro takes a single block storage device in one location and distributes it to
provide single disk semantics across two locations. This enables a distributed VMFS
datastore to be created on that virtual volume. Furthermore, if the layer 2 network has also
been stretched, then a single instance of vSphere (including a single logical datacenter)
can also be distributed across more than one location, and VMware HA can be enabled
for any given vSphere cluster. This is possible since the storage federation layer of the
VPLEX is completely transparent to ESXi. It therefore enables the user to add ESXi hosts at
two different locations to the same HA cluster. Stretching an HA failover cluster (such as
VMware HA) with VPLEX creates a Federated HA cluster over distance. This blurs the
boundaries between local HA and disaster recovery since the configuration has the
automatic restart capabilities of HA combined with the geographical distance typically
associated with synchronous DR.

4.5 VPLEX Availability


VPLEX is built on a foundation of scalable and highly available processor engines and is
designed to seamlessly scale from small to large configurations. VPLEX resides between the
servers and heterogeneous storage assets, and uses a unique clustering architecture that
allows servers at multiple data centers to have read/write access to the same data at two
locations at the same time. Unique characteristics of this new architecture include:

Scale-out clustering hardware lets you start small and grow big with predictable
service levels
Advanced data caching utilizes large-scale SDRAM cache to improve performance
and reduce I/O latency and array contention
Distributed cache coherence for automatic sharing, balancing, and failover of I/O
across the cluster
Consistent view of one or more LUNs across VPLEX clusters (within a data center or
across synchronous distances) enabling new models of high-availability and
workload relocation

With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and
distributed cache coherency provide continuous availability, workload resiliency, automatic
sharing, balancing, and failover of storage domains, and enable both local and remote
data access with predictable service levels.
EMC VPLEX has been architected for virtualization, enabling federation across VPLEX
Clusters. VPLEX Metro supports a maximum 5 ms RTT over FC or 10 GigE connectivity.
To protect against entire site failure causing application outages, VPLEX uses a VMware
Virtual machine located within a separate failure domain to provide a VPLEX Witness

between VPLEX Clusters that are part of a distributed/federated solution. The VPLEX
Witness, known as Cluster Witness, resides in a third failure domain monitoring both VPLEX
Clusters for availability. This third site needs only IP connectivity to the VPLEX sites.

4.6 Storage/Service Availability


Each VPLEX site has a local VPLEX Cluster with physical storage and hosts connected to that
VPLEX Cluster only. The VPLEX Clusters themselves are interconnected across the sites to
enable federation. A virtual volume is taken from each of the VPLEX Clusters to create a
distributed virtual volume. Hosts connected in Site A actively use the storage I/O capability
of the storage in Site A; hosts in Site B actively use the storage I/O capability of the storage
in Site B.

Figure 7: Highly Available Infrastructure Example


VPLEX distributed volumes are available from either VPLEX cluster and have the same LUN
and storage identifiers when exposed from each cluster, enabling true concurrent
read/write access across sites.

When using a distributed virtual volume across two VPLEX Clusters, if the storage in one of
the sites is lost, all hosts continue to have access to the distributed virtual volume, with no
disruption. VPLEX services all read/write traffic through the remote mirror leg at the other
site.

5. Solution Architecture
5.1 Overview
The VSPEX with VPLEX solution using VMware vSphere has been validated for configuration
with up to 125 virtual machines.
Figure 8 shows an environment with VPLEX Local only, virtualizing the storage and providing
high availability across storage arrays. Since all ESXi servers can see the VPLEX, virtual
machines can be seamlessly moved by vMotion and DRS and restarted by VMware HA on any
host. This configuration is a traditional virtualized environment, in contrast to the VPLEX Metro
environment, which provides high availability both within and across datacenters.

Figure 8: VPLEX Local architecture for Traditional Single Site Environments

Figure 9 characterizes both a traditional infrastructure validated with block-based storage
in a single datacenter, and a distributed infrastructure validated with block-based storage
federated across two datacenters, where 8 Gb FC carries storage traffic locally and 10 GbE
carries storage, management, and application traffic across datacenter sites.

Figure 9: VPLEX Metro architecture for Distributed Environments

5.2 Solution Architecture: VPLEX Key Components


This solution adds the following VPLEX technology to the VSPEX Private Cloud for VMware
vSphere 5.1 for 125 Virtual Machines solution:

Table 1: VPLEX Components

Cluster-1 components (single engine):
Directors: 2
Redundant engine SPSs: Yes
FE Fibre Channel ports (VS2): 8
BE Fibre Channel ports (VS2): 8
Cache size (VS2 hardware): 72 GB
Management servers: 1
Internal Fibre Channel switches (Local Comm): None
Uninterruptible Power Supplies (UPSs): None

Cluster-2 components (single engine):
Directors: 2
Redundant engine SPSs: Yes
FE Fibre Channel ports (VS2): 8
BE Fibre Channel ports (VS2): 8
Cache size (VS2 hardware): 72 GB
Management servers: 1
Internal Fibre Channel switches (Local Comm): None
Uninterruptible Power Supplies (UPSs): None

The figure below shows a high-level physical topology of a VPLEX Metro distributed device.
VPLEX Dual and Quad engine options can be found in the Appendix.

Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes
Figure 10 is a physical representation of the logical configuration shown in Figure 9.
Effectively, with this topology deployed, the distributed volume can be treated just like any

other volume; the only difference being it is now distributed and available in two locations
at the same time. Another benefit of this type of architecture is its simplicity, since it
is no more difficult to configure a cluster across distance than it is within a single data center.
Note: When deploying VPLEX Metro you have the choice to interconnect your VPLEX Clusters
using either 8 Gb Fibre Channel or 10 Gb Ethernet WAN connectivity. When using FC
connectivity, this can be configured with either a dedicated channel (that is, separate
non-merged fabrics) or an ISL-based fabric (that is, where fabrics have been merged across
sites). It is assumed that any WAN link will be fully routable between sites with physically
redundant circuits.
Note: It is vital that VPLEX Metro has enough bandwidth between clusters to meet
requirements. The Business Continuity Solution Designer (BCSD) tool can be used to
validate the design. EMC can assist in the qualification if desired.
https://elabadvisor.emc.com/app/licensedtools/list
For an in-depth technology and architectural understanding of VPLEX Metro, VMware HA, and
their interactions, please refer to the VPLEX HA Techbook found here:
http://www.emc.com/collateral/hardware/technical-documentation/h7113-vplex-architecture-deployment.pdf

5.3 VPLEX Cluster Witness


VPLEX Metro goes beyond the realms of legacy active/passive replication technologies
since it can deliver true active/active storage over distance as well as federated availability.
There are three main items that are required to deliver true "Federated Availability".
1. True active/active Fibre Channel block storage over distance.
2. VPLEX storage mirroring delivers one view of storage, making data accessible
immediately, with no waiting for mirroring to complete. This feature eliminates the
need for host-based mirroring, saving host CPU cycles.
3. External arbitration to ensure that under all failure conditions automatic recovery is
possible.
In the previous sections we discussed items 1 and 2; now we will look at external
arbitration, which is enabled by VPLEX Witness.
VPLEX Witness is delivered as a zero-cost VMware virtual appliance (vApp) that runs on a
customer-supplied ESXi server or in a public cloud utilizing a VMware virtualized environment.
The ESXi server resides in a physically separate failure domain from either VPLEX cluster and
uses storage different from that used by the VPLEX clusters.

Using VPLEX Witness ensures that true Federated Availability can be delivered. This means
that, regardless of site or link/WAN failure, a copy of the data will automatically remain
online in at least one of the locations. When setting up a single distributed volume or a group
of distributed volumes, the user chooses a preference rule, a special property that each
individual distributed volume or group of distributed volumes has. The preference rule
determines the outcome after failure conditions such as site failure or link partition. The
preference rule can be set to cluster A preferred, cluster B preferred, or no automatic winner.
At a high level this has the following effect on a single or group of distributed volumes under
different failure conditions, as shown below:

Figure 11: Failure scenarios without VPLEX Witness


As we can see in Figure 11, if we only used the preference rules without VPLEX Witness, then
under some scenarios manual intervention would be required to bring the volume online at
a given VPLEX cluster (for example, if site A is the preferred site and site A fails, site B would
also suspend).
This is where VPLEX Witness assists, since it can better diagnose failures through network
triangulation and ensures that at any time at least one of the VPLEX clusters has an active
path to the data, as shown in the table below:

Figure 12: Failure scenarios with VPLEX Witness

As one can see from Figure 12, VPLEX Witness converts a VPLEX Metro from an active/active
mobility and collaboration solution into an active/active continuously available storage
cluster. Furthermore, once VPLEX Witness is deployed, failure scenarios become self-managing
(that is, fully automatic), which makes operations extremely simple since there is nothing to do
regardless of the failure condition.
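
The outcomes summarized in Figures 11 and 12 can be captured in a small decision table. The sketch below is illustrative only and assumes a two-cluster Metro with a single preference rule per distributed volume; the scenario and preference names are my own. It mirrors the prose above: without the Witness the preference rule decides the outcome (so losing the preferred site suspends both sides), while with the Witness the surviving cluster stays online.

```python
# Illustrative decision table for a distributed volume after a failure,
# mirroring the scenarios described for Figures 11 and 12.
# preference: "cluster-A", "cluster-B", or "no-automatic-winner"

def surviving_access(failure: str, preference: str, witness: bool) -> dict:
    """Return which cluster keeps the volume online after the given failure."""
    if failure == "wan-partition":
        # Both sites are up but cannot talk; the preferred site continues I/O.
        if preference == "cluster-A":
            return {"A": True, "B": False}
        if preference == "cluster-B":
            return {"A": False, "B": True}
        return {"A": False, "B": False}          # no automatic winner: suspend
    if failure in ("site-A-down", "site-B-down"):
        survivor = "B" if failure == "site-A-down" else "A"
        if witness:
            # The Witness sees that the other site is truly down and keeps the
            # surviving cluster online regardless of the preference rule.
            return {"A": survivor == "A", "B": survivor == "B"}
        # Without the Witness, the survivor continues only if it is the preferred cluster.
        preferred = preference.replace("cluster-", "")
        online = survivor == preferred
        return {"A": online and survivor == "A", "B": online and survivor == "B"}
    raise ValueError(f"unknown failure scenario: {failure}")

# Example: preferred site A fails; without the Witness, site B also suspends.
print(surviving_access("site-A-down", "cluster-A", witness=False))  # {'A': False, 'B': False}
print(surviving_access("site-A-down", "cluster-A", witness=True))   # {'A': False, 'B': True}
```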

Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site VPLEX Witness
As depicted in Figure 13 above, we can see that the Witness VM is deployed in a separate
fault domain and connected into both VPLEX management stations via an IP network.
Note: VPLEX Witness supports a maximum round-trip latency of 1 second between the
Witness and the VPLEX management servers.

VPLEX Virtualized Storage for VMware ESXi


Using VPLEX to virtualize your VMware ESXi storage allows disk access without changing
the fundamental dynamics of datastore creation and use. Whether using VPLEX Local with
virtual volumes or VPLEX Metro with distributed devices via AccessAnywhere, the hosts
still coordinate locking to ensure volume consistency. This is controlled by the
cluster file system, Virtual Machine File System (VMFS), within each datastore. Each storage
volume is presented to VPLEX; a virtual volume or distributed device is then created,
presented to each ESXi host in the cluster, and formatted with the VMFS file system.
Figure 14 below shows a high-level physical topology of how VMFS and RDM disks are
presented to each ESXi host.

Figure 14: VMware virtual disk types


VMFS
VMware VMFS is a high-performance cluster file system for ESXi Server virtual machines that
allows multiple ESXi Servers to access the same virtual machine storage concurrently.
VPLEX enhances this technology by adding the ability to take a virtual volume at one
location and create a RAID-1 mirror of it, forming a distributed volume that provides single-disk
semantics across two locations. This enables the VMFS datastore to be transparently
utilized within and across datacenters.
Raw Device Mapping (RDM)
VMware also provides RDM, which is a SCSI pass-through technology that allows a virtual
machine to pass SCSI commands for a volume directly to the physical storage array. RDMs

are typically used for quorum devices and/or other commonly shared volumes within a
cluster.

6. Best Practices and Configuration Recommendations
6.1 VPLEX Back-End Storage
The following are Best Practices for VPLEX Back-End Storage:

Dual fabric designs for fabric redundancy and HA should be implemented to avoid a
single point of failure. This provides data access even in the event of a full fabric
outage.
Each VPLEX director will physically connect to both fabrics for both host (front-end)
and storage (back-end) connectivity. Hosts will connect to both an A director and B
director from both fabrics for the supported HA level of connectivity, as required by
the Non-Disruptive Upgrade (NDU) pre-checks.
Fabric zoning should consist of a set of zones, each with a single initiator and up to 16 targets.
Avoid port speed issues between the fabric and VPLEX by using dedicated port
speeds, taking special care not to use oversubscribed ports on SAN switches.
Each director in a VPLEX cluster is required to have a minimum of two I/O
paths to every local back-end storage array and to every storage volume presented to
that cluster.
VPLEX allows a maximum of 4 active paths per director to a given LUN (optimal). This
is considered optimal because each director will load balance across the four active
paths to the storage volume. A simple path-planning check is sketched below.
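
As noted above, one way to sanity-check a back-end design before zoning is to tabulate the planned paths per director and storage volume. The sketch below is a hypothetical planning aid, not a VPLEX tool; the director and LUN names are placeholders, and the path counts would come from your own worksheet. It only encodes the two rules stated above.

```python
# Hypothetical planning check for the back-end connectivity rules above:
# every director needs at least 2 paths to each storage volume, and at most
# 4 active paths per director to any one LUN is considered optimal.

# (director, storage_volume) -> number of planned active paths; sample data only.
planned_paths = {
    ("director-1-1-A", "VNX_LUN_1"): 2,
    ("director-1-1-B", "VNX_LUN_1"): 2,
}

def check_backend_paths(paths: dict) -> list:
    issues = []
    for (director, lun), count in paths.items():
        if count < 2:
            issues.append(f"{director} has only {count} path(s) to {lun}; minimum is 2")
        if count > 4:
            issues.append(f"{director} has {count} active paths to {lun}; maximum of 4 is optimal")
    return issues

print(check_backend_paths(planned_paths) or "back-end path plan looks OK")
```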

6.2 VPLEX Host Connectivity


The following are Best Practices for VPLEX Host Connectivity:

Dual fabric designs are considered a best practice


The front-end I/O modules on each director should have a minimum of two physical
connections, one to each fabric (required).
Each host should have at least one path to an A director and one path to a B director
on each fabric, for a total of four logical paths (required for NDU; see the sketch after this list).
Maximum availability for host connectivity is achieved by using hosts with multiple
host bus adapters and with zoning to all VPLEX directors.
Multipathing or path failover software is required at the host for access across the
dual fabrics
Each host should have fabric zoning that provides redundant access to each LUN
from a minimum of an A and B director from each fabric.

Four paths are required for NDU


Observe Director CPU utilization to schedule NDU for times when average
consumption is at acceptable levels
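
The front-end rules can be checked the same way. The sketch below is again a hypothetical planning aid with placeholder host, fabric, and director names; it verifies that each host has at least four logical paths and reaches both an A and a B director on each fabric, per the NDU requirement above.

```python
# Hypothetical check of the front-end rules above: each host needs a path to
# an A director and a B director on each fabric (four logical paths for NDU).
host_zoning = {
    # host -> list of (fabric, director) pairs taken from the zoning worksheet
    "esxi-01": [("fabric-A", "director-1-1-A"), ("fabric-A", "director-1-1-B"),
                ("fabric-B", "director-1-1-A"), ("fabric-B", "director-1-1-B")],
}

def check_host_paths(zoning: dict) -> list:
    issues = []
    for host, paths in zoning.items():
        if len(paths) < 4:
            issues.append(f"{host}: only {len(paths)} logical paths; NDU requires 4")
        for fabric in ("fabric-A", "fabric-B"):
            directors = {d for f, d in paths if f == fabric}
            if not any(d.endswith("-A") for d in directors) or \
               not any(d.endswith("-B") for d in directors):
                issues.append(f"{host}: needs both an A and a B director on {fabric}")
    return issues

print(check_host_paths(host_zoning) or "host path plan looks OK")
```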

6.3 VPLEX Network Connectivity


The following are Best Practices for VPLEX Network Connectivity:

Requires an IPv4 Address for the management server


Management Server is configured for Auto Negotiate (1Gbps NIC)
VPN connectivity between management servers requires a routable/pingable
connection between each cluster.
Network QoS requires that the link latency does not exceed 1 second (not 1
millisecond) between the management servers and the VPLEX Witness server.
Network QoS must be able to handle file transfers during the NDU procedure
The following firewall ports must be opened (a simple reachability spot-check is sketched after this list):
o Internet Key Exchange (IKE): UDP port 500
o NAT Traversal in the IKE (IPsec NAT-T): UDP port 4500
o Encapsulating Security Payload (ESP): IP protocol number 50
o Authentication Header (AH): IP protocol number 51
o Secure Shell (SSH) and Secure Copy (SCP): TCP port 22
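
The firewall requirements above can be recorded as data and, for the TCP port, spot-checked from a management station. The sketch below only probes SSH/SCP (TCP 22), since IKE, NAT-T, ESP, and AH are UDP or raw IP protocols that a plain socket connection cannot verify; the peer address is a placeholder taken from the sample worksheet.

```python
# Firewall requirements from the list above, captured as data.
import socket

REQUIRED = [
    ("IKE", "UDP", 500),
    ("IPsec NAT-T", "UDP", 4500),
    ("ESP", "IP protocol", 50),
    ("AH", "IP protocol", 51),
    ("SSH/SCP", "TCP", 22),
]

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Spot-check a TCP port; UDP and raw-IP entries need a firewall review instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

peer_mgmt_server = "192.168.44.171"   # placeholder: remote VPLEX management server
for name, proto, port in REQUIRED:
    if proto == "TCP":
        print(f"{name} (TCP {port}) reachable:", tcp_port_open(peer_mgmt_server, port))
    else:
        print(f"{name} ({proto} {port}): verify on the firewall")
```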

6.4 VPLEX Cluster Connectivity


The following are Best Practices for VPLEX Cluster Connectivity:
Metro over Fibre Channel (8 Gbps)

Each director's FC WAN ports must be able to see at least one FC WAN port on every
other remote director (required).
The director's local COM port is used for communication between directors within the
cluster.
Independent FC WAN links are strongly recommended for redundancy
Each director has two FC WAN ports that should be configured on separate fabrics to
maximize redundancy and fault tolerance.
Use VSANs to isolate VPLEX Metro FC traffic from other traffic using zoning.
Use VLANs to isolate VPLEX Metro Ethernet traffic from other traffic.

Metro over IP (10 GbE)

Latency must be less than or equal to 5 ms RTT (a simple RTT check is sketched below)


Cache must be configured for synchronous write-through mode only
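
Before joining the clusters it is worth comparing measured round-trip times against the limits given in this guide: 5 ms RTT for the Metro IP WAN and up to 1 second for the link to the VPLEX Witness. The sketch below parses the summary line of Linux iputils ping output run from the management station; the target hostnames are placeholders, and the parsing assumes the standard "rtt min/avg/max/mdev" format.

```python
# Compare measured RTT against the limits above: <= 5 ms between Metro clusters
# over the IP WAN, and <= 1 second between a management server and the Witness.
import re
import subprocess

def average_rtt_ms(host: str, count: int = 5) -> float:
    """Run Linux iputils ping and return the average RTT in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 0.412/0.487/0.571/0.060 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

LIMITS_MS = {
    "remote-vplex-wan": 5.0,     # placeholder hostname; Metro IP WAN RTT limit
    "vplex-witness": 1000.0,     # placeholder hostname; Witness link RTT limit
}

for host, limit in LIMITS_MS.items():
    rtt = average_rtt_ms(host)
    status = "OK" if rtt <= limit else "EXCEEDS LIMIT"
    print(f"{host}: avg RTT {rtt:.2f} ms (limit {limit} ms) -> {status}")
```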

6.5 Storage Configuration Guidelines


This section provides guidelines for setting up the storage layer of the solution to provide high
availability and the expected level of performance.
The tested solutions described below use block storage via Fibre Channel. The storage layout
described below adheres to all current best practices. A customer or architect with the necessary
training and background can make modifications based on their understanding of the system usage
and load if required. However, the building blocks described in this document ensure acceptable
performance. The VSPEX storage building blocks document specifies recommendations for
customization.

Table 2: Hardware Resources for Storage

Component: EMC VNX array
Configuration: Block
Common:
o 1 x 1 GbE NIC per Control Station for management
o 1 x 1 GbE NIC per SP for management
o 2 front-end ports per SP
o System disks for VNX OE
For 125 virtual machines (EMC VNX5300):
o 60 x 600 GB 15k rpm 3.5-inch SAS drives
o 2 x 600 GB 15k rpm 3.5-inch SAS hot spares
o 10 x 200 GB flash drives
o 1 x 200 GB flash drive as a hot spare
o 4 x 200 GB flash drives for FAST Cache

Component: EMC VPLEX Metro (single engine)
Configuration:
Cluster-1:
o (2) Directors
o (8) Front-End Ports
o (8) Back-End Ports
o (4) WAN COM Ports
Cluster-2:
o (2) Directors
o (8) Front-End Ports
o (8) Back-End Ports
o (4) WAN COM Ports

6.6 VSPEX Storage Building Blocks


Please use the EMC VSPEX Private Cloud VMware vSphere 5.1 for up to 500 Virtual Machines Proven
Infrastructure document to properly size, plan, and implement your 125 virtual machine deployment.
Once the building block size has been established and the LUNs have been created on the back-end
storage, they will be virtualized by VPLEX and presented to the ESXi hosts for use.

Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven
Infrastructure

7. VPLEX Local Deployment


7.1 Overview
The VPLEX deployment process consists of two main steps, physical installation and
configuration. The physical installation of VPLEX is racking and cabling VPLEX into the
VSPEX rack. The installation process is well defined in the EMC VPLEX Procedure Generator
and therefore is not replicated in this section. For detailed installation instructions use the
EMC VPLEX Field Deployment Guide found in the EMC VPLEX Procedure Generator.
The VPLEX deployment process consists of several tasks that are listed below. There are
tables within this chapter that detail what information is needed to complete the
configuration. These tables have been populated with sample data so it is clear what format
is expected. Please see Appendix-D VPLEX Pre-Configuration Worksheets for blank
worksheets that can be printed and filled in. Having these worksheets filled out prior to
beginning the configuration process is highly recommended.
Once the VPLEX has been configured for use, you may then log in to a VPLEX management
server to discover and claim your VSPEX Building Block LUNs from the VNX array. These
LUNs will be used to create virtual volumes for VPLEX Local and/or Distributed Volumes for
VPLEX Metro implementations.
For this document we assume that the VSPEX environment has been set up
and configured as a VSPEX Private Cloud that will support up to 125 virtual machines.
Physical installation and configuration of VPLEX is identical whether the VSPEX with VPLEX
solution is already in production or newly installed.
The VPLEX pre-deployment gathering of data consists of the items listed below. The first
phase is to collect all appropriate site data and fill out the configuration worksheets. These
worksheets may be found in the Installation and Configuration section of the VPLEX
Procedure Generator. Throughout this chapter you will need to refer to the VPLEX
Configuration Guide, or other referenced document, for more detailed information on each
step. Chapter 2 of the VPLEX Configuration Guide is for a VPLEX Local implementation,
which is the focus for this chapter. For a VPLEX Metro deployment, review the tables in this
chapter then proceed to Chapter 8. VPLEX Metro Deployment.

7.2 Physical Installation


This is the physical installation of the VPLEX into the VSPEX cabinet. This includes the
following tasks:
Unpack the VPLEX equipment.
Install and cable the standby power supply (SPS) and engine.
Install the VPLEX management server.

Connect the remaining internal VPLEX management cables.


Power up and verify VPLEX operational status.
Connect the VPLEX front-end and back-end I/O cables.

7.3 Preliminary Tasks


After the VPLEX has been physically installed into the rack, you will need to verify that your
environment is ready for the deployment of the VPLEX. These tasks include the following:
Install the VPLEX Procedure Generator
Review the VPLEX Implementation and Planning Best Practice Guide
Review the VPLEX Simple Support Matrix
Review the VPLEX with GeoSynchrony 5.1 Release Notes
Review the VPLEX Configuration Guide
Review the ESX Host Connectivity Guide
Review the Encapsulate Arrays on ESX Guide
Verify (4) metadata devices are available for VPLEX install
Review the EMC Secure Remote Support Gateway Install Procedure
Before moving forward with the configuration of VPLEX, complete all the relevant
worksheets below to ensure all the necessary information to complete the configuration is
available. A blank worksheet is provided in Appendix-D VPLEX Pre-Configuration
Worksheets. Review Chapter 2, Task 1 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
The following tables show sample configuration information as an example and should be
replaced by actual values from the installation environment.
Table 3: IPv4 Networking Information

Information: Management server IP address
Additional description: Public IP address for the management server on the customer IP network.
Value: 192.168.44.171

Information: Network mask
Additional description: Subnet mask for the management server IP network.
Value: 255.255.255.0

Information: Hostname
Additional description: Hostname for the management server. Once configured, this name replaces the default name (service) in the shell prompt each time you open an SSH session to the management server.
Value: DC1-VPLEX

Information: EMC secure remote support (ESRS) gateway
Additional description: IP address for the ESRS gateway on the IP network.
Value: 192.168.44.254
Table 4: Metadata Backup Information

Information: Day and time to back up the meta-volume
Additional description: The day and time that the cluster's meta-volume will be backed up to a remote storage volume on a back-end array (selected during the cluster setup procedure).
Value: 2013 MAY 30, 12:00

Table 5: SMTP details to configure event notifications
- Do you want VPLEX to send event notifications? Sending event notifications allows EMC to act on any issues quickly. Sample value: Yes
Note: The remaining information in this table applies only if you specified yes to the previous question.
- SMTP IP address of primary connection: SMTP address through which Call Home emails will be sent. EMC recommends using your ESRS gateway as the primary connection address. Sample value: 192.168.44.254
- First/only recipient's email address: Email address of a person (generally a customer's employee) who will receive Call Home notifications. Sample value: user@companyname.com
- SMTP IP address for first/only recipient: SMTP address through which the first/only recipient's email notifications will be sent. Sample value: 192.168.44.25
- Event notification type: One or more people can receive email notifications when events occur. Notification types: 1. On Success or Failure - sends an email regardless of whether the email notification to EMC succeeded. 2. On Failure - sends an email each time an attempt to notify EMC has failed. 3. On All Failure - sends an email only if all attempts to notify EMC have failed. 4. On Success - sends an email each time EMC is successfully sent an email notification. EMC recommends distributing connections over multiple SMTP servers for better availability. These SMTP v4 IP addresses can be different from the addresses used for event notifications sent to EMC.
- Second recipient's email address (optional): Email address of a second person who will receive Call Home notifications. Sample value: user2@companyname.com
- SMTP IP address for second recipient: SMTP address through which the second recipient's email notifications will be sent. Sample value: 192.168.44.25
- Event notification type for second recipient: See description of event notification type for first recipient.
- Third recipient's email address (optional): Email address of a third person who will receive Call Home notifications. Sample value: user3@companyname.com
- SMTP IP address for third recipient: SMTP address through which the third recipient's email notifications will be sent. Sample value: 192.168.44.25
- Event notification type for third recipient: See description of event notification type for first recipient.
- Do you want VPLEX to send system reports? Day of week and time to send system reports. Sending weekly system reports allows EMC to communicate known configuration risks, as well as newly discovered information that can optimize or reduce risks. Note that the connections for system reports are the same connections used for event notifications. Sample value: default

Table 6: SNMP information
- Do you want to use SNMP to collect performance statistics? You can collect statistics such as I/O operations and latencies, as well as director memory, by issuing SNMP GET, GET-NEXT, or GET-BULK requests. Sample value: No
- Community string (if you specified yes above). Sample value: private
Table 7: Certificate Authority (CA) and Host Certificate information
- CA certificate lifetime: How many years the cluster's self-signed CA certificate should remain valid before expiring.
- CA certificate key passphrase: VPLEX uses self-signed certificates for ensuring secure communication between VPLEX Metro clusters. This passphrase is used during installation to create the CA certificate necessary for this secure communication. Sample value: dc1-vplex
- Host certificate lifetime: How many years the cluster's host certificate should remain valid before expiring.
- Host certificate key passphrase: This passphrase is used to create the host certificates necessary for secure communication between clusters. Sample value: dc1-vplex

Table 8: Product Registration Information
- Company site ID number (optional): EMC-assigned identifier used when the VPLEX cluster is deployed on the ESRS server. The EMC customer engineer or account executive can provide this ID. Sample value: 12345678
- Company name: Sample value: CompanyName
- Company contact: First and last name of a person to contact. Sample value: First Last
- Contact's business email address: Sample value: user@companyname.com
- Contact's business phone number: Sample value: xxx-xxx-xxxx
- Contact's business address: Street, city, state/province, ZIP/postal code, country. Sample value: 123 Main Street, City, State, 12345-6789
- Method used to send event notifications: Method by which the cluster will send event messages to EMC. Sample value: _X_ 1. ESRS; _X_ 2. Email
- Remote support method: Method by which the EMC Support Center can access the cluster. Sample value: _X_ 1. ESRS; _X_ 2. WebEx
Table 9: VPLEX Metro IP WAN Configuration Information

Local director discovery configuration details (default values work in most installations):
- Class-D network discovery address. Sample value: 224.100.100.100
- Discovery port. Sample value: 10000
- Listening port for communications between clusters (traffic on this port must be allowed through the network). Sample value: 11000

Attributes for Cluster 1, Port Group 0:
- Class-C subnet prefix for Port Group 0. The IP subnet must be different than the one used by the management servers and different from the Port Group 1 subnet in Cluster 1. Sample value: 192.168.11.0
- Subnet mask. Sample value: 255.255.255.0
- Cluster address (use Port Group 0 subnet prefix). Sample value: 192.168.11.251
- Gateway for routing configurations (use Port Group 0 subnet prefix). Sample value: 192.168.11.1
- MTU. The size must be set to the same value for Port Group 0 on both clusters; the same MTU must also be set for Port Group 1 on both clusters. Note: jumbo frames are supported. Sample value: 1500
- Port 0 IP address for director 1-1-A. Sample value: 192.168.11.35
- Port 0 IP address for director 1-1-B. Sample value: 192.168.11.36

Attributes for Cluster 1, Port Group 1:
- Class-C subnet prefix for Port Group 1. The IP subnet must be different than the one used by the management servers and different from the Port Group 1 subnet in Cluster 2. Sample value: 10.6.11.0
- Subnet mask. Sample value: 255.255.255.0
- Cluster address (use Port Group 1 subnet prefix). Sample value: 10.6.11.251
- Gateway for routing configurations (use Port Group 1 subnet prefix). Sample value: 10.6.11.1
- MTU. The size must be set to the same value for Port Group 1 on both clusters; the same MTU must also be set for Port Group 0 on both clusters. Note: jumbo frames are supported. Sample value: 1500
- Port 1 IP address for director 1-1-A. Sample value: 10.6.11.35
- Port 1 IP address for director 1-1-B. Sample value: 10.6.11.36
VPLEX Metro supports the VPLEX Witness feature, which is implemented through the Cluster Witness function.
If the inter-cluster network is deployed over Fibre Channel, use physical links that are separate from, and not shared with, other management traffic links.
Table 10: Cluster Witness Configuration Information
- Account and password used to log into the ESX server where the Cluster Witness Server VM is deployed: This password allows you to log into the Cluster Witness Server VM. Sample value: username / password
- Host certificate passphrase for the Cluster Witness certificate. Sample value: dc1-vplex
- Class-C subnet mask for the ESX server where the Cluster Witness Server guest VM is deployed. Sample value: 255.255.255.0
- IP address for the ESX server where the Cluster Witness Server guest VM is deployed. Sample value: 192.168.34.100
- Cluster Witness Server guest VM Class-C subnet mask. Sample value: 255.255.255.0
- Cluster Witness Server guest VM IP address. Sample value: 192.168.34.200
- Public IP address for the management server in Cluster 1. Sample value: 192.168.44.171
- Public IP address for the management server in Cluster 2. Sample value: 192.168.44.172

Note: Cluster Witness requires the management IP network to be separate from the inter-cluster network.
Note: Cluster Witness functionality requires the following protocols to be enabled by any firewall between the Cluster Witness Server and the management servers (configured on the management network): IKE UDP port 500; ESP IP protocol number 50; IP protocol number 51; NAT Traversal in the IKE (IPsec NAT-T) UDP port 4500.
7.4 Set public IPv4 address


Before you can log in to a VPLEX management server over the public network, it is necessary
to connect to the management server and set the public IP address. Review Chapter 2, Task
2 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.5 Run EZ-Setup Wizard

The EZ-Setup Wizard performs several tasks to set up your VPLEX implementation; these steps include but are not limited to the following:
- Configures event notifications
- Sets up SNMP for gathering performance statistics (optional)
- Configures VPLEX for authentication directory service (LDAP/AD) (optional)
- Configures VPLEX for a customized login banner (optional)

Before starting the EZ-Setup wizard we need to ensure that the VPLEX product versions shipped from the factory match the appropriate GA code, specifically GeoSynchrony v5.1. Review Chapter 2, Task 3 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Before launching the EZ-Setup utility on a cluster that contains the VS2 version of VPLEX hardware, you should verify that all components in the cluster are functioning correctly. Review Chapter 2, Task 4 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Now the EZ-Setup utility can be launched. Review Chapter 2, Task 5 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.6 Expose Back-End Storage


At this point of the VPLEX installation please check SAN switches to verify that the VPLEX
back-end ports have logged onto the fabric switch. Once verified, you will create the
necessary zones between the VNX storage array and the VPLEX back-end ports. This
process is documented in the VPLEX Implementation and Planning Best Practice Guide.
Review Chapter 2, Task 6 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
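As an illustration only, the zoning step might look like the following on a Brocade FOS fabric; the alias names, zone name, and configuration name are placeholders, and the first WWN is a hypothetical VPLEX back-end port (the second is the VNX SP port WWN from the array listing later in this guide). Your switch vendor and naming convention will differ; follow the zoning rules in the VPLEX Implementation and Planning Best Practice Guide. Lines beginning with # are annotations, not commands.
# create aliases for one VPLEX back-end port and one VNX SP port
switch:admin> alicreate "VPLEX_C1_BE_A0", "50:00:14:42:aa:bb:cc:00"
switch:admin> alicreate "VNX_SPA_0", "50:06:01:60:44:60:19:f5"
# zone them together and activate the configuration
switch:admin> zonecreate "z_VPLEX_C1_BE_A0__VNX_SPA_0", "VPLEX_C1_BE_A0; VNX_SPA_0"
switch:admin> cfgadd "VSPEX_cfg", "z_VPLEX_C1_BE_A0__VNX_SPA_0"
switch:admin> cfgsave
switch:admin> cfgenable "VSPEX_cfg"
Repeat for each VPLEX back-end port and VNX storage processor port pair on both fabrics.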

7.7 Resume EZ-Setup

This part of the EZ-Setup will discover the back-end arrays and look for any LUNs that have been pre-exposed to the VPLEX via the VPLEX Storage Group on the VNX storage array. Note: You should expect to see the (4) metadata LUNs at this point. Review Chapter 2, Task 7 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
At this point of the installation you will need to restart the web server services to incorporate any security and/or certificate changes. Review Chapter 2, Task 8 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.8 Meta-volume
Meta-volumes are created during system setup and must be the first storage presented to
VPLEX. The purpose of the metadata volume is to track all virtual-to-physical mappings,
data about devices, virtual volumes, and system configuration settings. Review Chapter 2,
Task 9 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
The meta-volume backup creates a point-in-time copy of the current in-memory metadata
without activating it. The metadata backup is required for an overall system health check to
pass prior to a major migration or update and may also be used if the VPLEX loses access to
one or both primary meta-volume copies. Review Chapter 2, Task 10 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
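For reference, a minimal VPlexCLI sketch of this task is shown below. The meta-volume name is a placeholder, the VPD identifiers stand for the two metadata LUNs identified earlier, and the exact command options should be verified against the EMC VPLEX CLI Guide for your GeoSynchrony release.
# list the candidate metadata volumes exposed from the VNX (the (4) metadata LUNs noted earlier)
VPlexCLI:/> ll /clusters/cluster-1/storage-elements/storage-volumes/
# create the meta-volume mirrored across two of them
VPlexCLI:/> meta-volume create -n c1_meta -d VPD83T3:<meta-lun-1>,VPD83T3:<meta-lun-2>
# confirm the meta-volume is active and healthy
VPlexCLI:/> ll /clusters/cluster-1/system-volumes/
The backup day and time captured in Table 4 are supplied when the metadata backup is configured per Task 10.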

7.9 Register VPLEX


As part of the installation process you must register your VPLEX with EMC to ensure that EMC can provide the best quality support for your product. Review Chapter 2, Task 11 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.10 Enable Front-End Ports


At this point of the VPLEX installation you will be enabling the front-end ports so that the ESX-connected hosts may be zoned to the VPLEX for use. Please follow the best practices as outlined in the VPLEX Implementation and Planning Best Practice Guide. Review Chapter 2, Task 12 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.11 Configure VPLEX for ESRS Gateway


As part of the VPLEX installation, it is important that ESRS be deployed so that EMC can provide proactive support and advice when needed. This procedure is documented in the EMC Secure Remote Support Gateway Install Procedure. Review Chapter 2, Task 13 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

7.12 Re-Verify Cluster Health


This post-install health check is to re-verify the following:
- All product versions are consistent
- Operational health and status of the VPLEX are OK
- The current state of the metadata volumes is OK
- There are no unhealthy storage volumes
- Each storage volume is visible from all directors with at least (2) paths
Review Chapter 2, Task 14 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
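The checks above can also be spot-checked from VPlexCLI. The sketch below is illustrative; confirm the exact command names and options against the EMC VPLEX CLI Guide for your release.
VPlexCLI:/> version                     # confirm consistent GeoSynchrony versions across directors
VPlexCLI:/> cluster status              # operational status and health state of the cluster
VPlexCLI:/> connectivity validate-be    # flags storage volumes not visible from every director over at least two paths
VPlexCLI:/> health-check                # overall health summary, including meta-volume state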
At this point the configuration of the VPLEX Local is now complete and the next step is to
provision VPLEX virtual volumes.

8. VPLEX Metro Deployment


8.1 Overview
The VPLEX deployment process consists of two main steps, physical installation and
configuration. The physical installation of VPLEX is racking and cabling VPLEX into the
VSPEX rack. The installation process is well defined in the EMC VPLEX Procedure Generator
and therefore is not replicated in this section. For detailed installation instructions use the
EMC VPLEX Field Deployment Guide found in the EMC VPLEX Procedure Generator.
The VPLEX configuration process consists of several tasks that are listed below. There are
tables within this chapter that detail what information is needed to complete the
configuration. These tables have been populated with sample data so it is clear what format
is expected. Please see Appendix-D VPLEX Pre-Configuration Worksheets for blank
worksheets that can be printed and filled in. Having these worksheets filled out prior to
beginning the configuration process is highly recommended.
Once the VPLEX has been configured for use, you may then log in to a VPLEX management
server to discover and claim your VSPEX Building Block LUNs from the VNX array. These
LUNs will be used to create virtual volumes for VPLEX Local and/or Distributed Volumes for
VPLEX Metro implementations.
For this document the assumption is made that two VSPEX environments have been set up and are configured for a VSPEX Private Cloud that will support up to 125 virtual machines. Physical installation and configuration of VPLEX is identical whether the VSPEX with VPLEX solution is already in production or newly installed.
The VPLEX pre-deployment gathering of data consists of the items listed below. The first
phase is to collect all appropriate site data and fill out the configuration worksheets. These
worksheets may be found in the Installation and Configuration section of the VPLEX
Procedure Generator. Throughout this chapter you will need to refer to the VPLEX
Configuration Guide, or other referenced document, for more detailed information on each
step. Chapter 3 of the VPLEX Configuration Guide is for a VPLEX Metro implementation,
which is the focus for this chapter.

8.2 Physical Installation

This is the physical installation of the VPLEX into the VSPEX cabinet. This includes the following tasks:
- Unpack the VPLEX equipment.
- Install and cable the standby power supply (SPS) and engine.
- Install the VPLEX management server.
- Connect the remaining internal VPLEX management cables.
- Connect the WAN COM cables.
- Power up and verify VPLEX operational status.
- Connect the VPLEX front-end and back-end I/O cables.

Ensure that VPLEX is properly installed at each site before continuing.

8.3 Preliminary Tasks

After the VPLEX has been physically installed into the rack, you will need to verify that your environment is ready for the deployment of the VPLEX. These tasks include the following:
- Install the VPLEX Procedure Generator
- Review the Implementation and Planning Best Practice Guide
- Review the VPLEX Simple Support Matrix
- Review the VPLEX with GeoSynchrony 5.1 Release Notes
- Review the VPLEX Configuration Guide
- Review the ESX Host Connectivity Guide
- Review the Encapsulate Arrays on ESX Guide
- Verify (4) metadata devices are available for VPLEX install
- Review the EMC Secure Remote Support Gateway Install Procedure

Before moving forward with the configuration of VPLEX Metro, complete all the worksheets from the previous section to ensure all the necessary information to complete the configuration is available. A blank worksheet is provided in Appendix-D VPLEX Pre-Configuration Worksheets. Review Chapter 3, Task 1 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.4 Set public IPv4 address


Before you can log in to a VPLEX management server over the public network, it is necessary
to connect to the management server and set the public IP address. Review Chapter 3, Task
2 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.5 Run EZ-Setup Wizard on Cluster 1

The EZ-Setup Wizard performs several tasks to set up your VPLEX implementation; these steps include but are not limited to the following:
- Configures event notifications
- Sets up SNMP for gathering performance statistics (optional)
- Configures VPLEX for authentication directory service (LDAP/AD) (optional)
- Configures VPLEX for a customized login banner (optional)

Before starting the EZ-Setup wizard we need to ensure that the VPLEX product versions shipped from the factory match the appropriate GA code, specifically GeoSynchrony v5.1. Review Chapter 3, Task 3 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Before launching the EZ-Setup utility on a cluster that contains the VS2 version of VPLEX hardware, you should verify that all components in the cluster are functioning correctly. Review Chapter 3, Task 4 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Now the EZ-Setup utility can be launched for Cluster 1. Review Chapter 3, Task 5 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.6 Expose Back-End Storage


At this point of the VPLEX installation please check SAN switches to verify that the VPLEX
back-end ports have logged onto the fabric switch. Once verified, you will create the
necessary zones between the VNX storage array and the VPLEX back-end ports. This
process is documented in the VPLEX Implementation and Planning Best Practice Guide.
Review Chapter 3, Task 6 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.7 Resume EZ-Setup


At this point of the installation you will need to resume the EZ-Setup wizard. Review Chapter
3, Task 7 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.8 Meta-volume
Meta-volumes are created during system setup and must be the first storage presented to
VPLEX. The purpose of the metadata volume is to track all virtual-to-physical mappings,
data about devices, virtual volumes, and system configuration settings. Review Chapter 3,
Task 8 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
The meta-volume backup creates a point-in-time copy of the current in-memory metadata
without activating it. The metadata backup is required for an overall system health check to
pass prior to a major migration or update and may also be used if the VPLEX loses access to
one or both primary meta-volume copies. Review Chapter 3, Task 9 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.9 Register Cluster 1


As part of the installation process you must register your VPLEX Cluster 1 with EMC in order
to ensure that we can provide the best quality support for your product. Review Chapter 3,
Task 10 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.10 Enable Front-End Ports


At this point of the VPLEX installation you will be enabling the front-end ports so that the
ESX connected hosts may be zoned to the VPLEX for use. Please follow the best practices
as outlined in the VPLEX Implementation and Planning Best Practice Guide. Review Chapter
3, Task 11 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.11 Connect Cluster 2


Connect to Cluster-2's management station and begin the installation process for the VPLEX Metro configuration. Note: Before you can log in to a VPLEX management server over the public network, it's necessary to connect to the management server and set the public IP address. Review Chapter 3, Task 12 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.12 Verify the Product Version


During the install process it will be necessary to also check the VPLEX product versions shipped from the factory and load the appropriate GA code prior to running the initial EZ-Setup configuration wizard. Note: The Cluster-2 product versions should match what's been loaded on Cluster-1. Review Chapter 3, Task 13 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.13 Verify Cluster 2 Health


Before launching the EZ-Setup utility on Cluster-2, you should verify that all components in the cluster are functioning correctly. Review Chapter 3, Task 14 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.14 Synchronize Clusters


This step is to synchronize the time between the management stations of Cluster-1 and Cluster-2 prior to running EZ-Setup and joining the clusters together. Review Chapter 3, Task 15 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
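A minimal sketch of checking time synchronization from the management server shell is shown below; it assumes the standard Linux NTP tooling on the management servers and that both management stations point at a common NTP source. The hostname in the prompt is the sample value from Table 3.
service@DC1-VPLEX:~> date        # run on both management servers and compare
service@DC1-VPLEX:~> ntpq -p     # verify the NTP peer is reachable and selected (asterisk in the first column)
If the clocks differ, correct the NTP configuration before continuing with EZ-Setup on Cluster-2.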

8.15 Launch EZ-Setup on Cluster 2


Run the EZ-Setup Wizard on Cluster-2 to set up your VPLEX Metro implementation. Review
Chapter 3, Task 16 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.16 Expose Back-end Storage on Cluster 2


At this point of the VPLEX installation you will log in to your SAN switches and verify that the VPLEX back-end ports have logged onto the fabric switch. Once verified, you will create the necessary zones between the VNX storage array and the VPLEX back-end ports. This process is documented in the VPLEX Implementation and Planning Best Practice Guide. Review Chapter 3, Task 17 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.17 Resume EZ-Setup on Cluster 2


This part of the EZ-Setup will discover the back-end arrays and look for any LUNs that have been pre-exposed to the VPLEX via the VPLEX Storage Group on the VNX storage array. Note: You should expect to see the (4) metadata LUNs at this point. Review Chapter 3, Task 18 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
At this point of the installation you will need to restart the web server services to incorporate any security and/or certificate changes. Review Chapter 3, Task 19 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.18 Create Meta-volume on Cluster 2


Create your meta-volume for Cluster-2. Review Chapter 3, Task 20 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Create your meta-volume backup for Cluster-2 and schedule its frequency. Note: by default you must complete the first backup at time of creation. Review Chapter 3, Task 21 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.19 Register Cluster 2


Register Cluster-2 of your VPLEX with EMC to ensure that EMC can provide the best quality support for your product. Review Chapter 3, Task 22 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.20 Configure VPLEX for ESRS Gateway


As part of the VPLEX installation, it is important that ESRS be deployed so that EMC can provide proactive support and advice when needed. This procedure is documented in the EMC Secure Remote Support Gateway Install Procedure. Review Chapter 3, Task 23 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.21 Complete EZ-Setup on Cluster 1


From the VPlexcli prompt on Cluster-1, you will need to run the complete system setup commands to complete the following tasks:
- Configures VPN for inter-cluster communication
- Establishes a secure connection between the clusters
- Enables the WAN COM ports
- Connects to Cluster 2's directors
Review Chapter 3, Task 24 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
At this point of the installation you will need to restart the web server services to incorporate any security and/or certificate changes. Review Chapter 3, Task 25 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.22 Complete EZ-Setup on Cluster 2


From the VPlexcli prompt on Cluster-2, you will need to run the complete system setup commands to complete the following tasks:
- Enables the WAN COM ports
- Connects to Cluster 2's directors
Review Chapter 3, Task 26 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.23 Configure WAN Interfaces


Locate and download the VPLEX IP COM Configuration Worksheet on support.emc.com. Once the IPComConfigWorksheet.zip file has been downloaded and completed for Cluster-1 and Cluster-2, you will need to create the configuration script called IPWANSetupCmd.py. This is done by using the Create Script macro in the worksheet. Once the script has been created, move it to the management server for Cluster-2. It should be placed in the /var/log/VPlex/cli directory. Review Chapter 3, Task 27 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
After the WAN COM has been configured with the IPWANSetupCmd.py script, it will be necessary to validate the connections between sites. This step confirms that all expected local and remote directors have connectivity. Note: Please resolve any pathing issues prior to moving on to the next step.
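As an illustration, copying the generated script and validating the WAN COM connections might look like the following. The address is the sample Cluster-2 management server IP from Table 10, and the connectivity command name is from memory of the VPLEX CLI, so verify it against the EMC VPLEX CLI Guide before relying on it.
# from the workstation where the worksheet macro generated the script
scp IPWANSetupCmd.py service@192.168.44.172:/var/log/VPlex/cli/
# after the script has been run per Task 27, confirm that every local director sees every remote director
VPlexCLI:/> connectivity validate-wan-com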

8.24 Join the Clusters


From the VPlexcli prompt on Cluster-2, you will now verify that all connectivity between Cluster 1's directors and Cluster 2's directors is working as required, and will join them together into a unified configuration. Review Chapter 3, Task 29 of the VPLEX GeoSynchrony v5.1 Configuration Guide.

8.25 Create Logging Volumes


VPLEX uses logging volumes to track changes during a loss of connectivity or loss of a
volume that is a mirror in a distributed device. You must create a logging volume on each
cluster. Each logging volume must be large enough to contain one bit for every page of
distributed storage space (approximately 10 GB of logging volume space for every 320 TB of
distributed devices). The logging volumes experience much I/O during and after link
outages. The recommended best practice is to stripe each logging volume across many
disks for speed, and also to have a mirror (on another fast disk), because this is important
data. Review Chapter 3, Task 30 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
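As a sizing illustration and a hedged CLI sketch (the logging volume and extent names are placeholders; verify the command options in the EMC VPLEX CLI Guide):
# sizing: approximately 10 GB of logging volume per 320 TB of distributed devices,
# e.g. 160 TB of distributed capacity needs roughly 160 / 320 x 10 GB = 5 GB per cluster (round up and leave headroom)
VPlexCLI:/> logging-volume create -n c1_log_vol -g raid-1 -e extent_log_1,extent_log_2
VPlexCLI:/> ll /clusters/cluster-1/system-volumes/     # confirm the logging volume is present and healthy
Repeat on Cluster-2, following the striping and mirroring best practice described above.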

8.26 Re-Verify Cluster Health


This post-install health check is to re-verify the following:
- All product versions are consistent
- Operational health and status of the VPLEX are OK
- The current state of the metadata volumes is OK
- There are no unhealthy storage volumes
- Each storage volume is visible from all directors with at least (2) paths
Review Chapter 3, Task 31 of the VPLEX GeoSynchrony v5.1 Configuration Guide.


At this point the configuration of the VPLEX Metro is complete and the next step is to provision VPLEX virtual volumes.

9. Provisioning Virtual Volumes with VPLEX Local

Before VPLEX is able to provision virtual volumes, VNX storage pools that are designated for VPLEX must be created. Log in to the Unisphere for VPLEX interface, where the health of the VPLEX system will be shown; see Figure 17.
All VNX storage must first be zoned to the VPLEX, and storage pools created on the VNX for the VPLEX. Create the VNX storage pools in the VNX Unisphere interface, as shown in Figure 25. It is important to note that adding VPLEX to an existing environment with unused storage is identical to a brand new installation.

9.1 Confirm Storage Pools


Confirm that the storage pools for VPLEX have been properly configured. Please follow the
best practices for creating storage pools found in the storage array documentation. This can
be accomplished by viewing the management interface for the storage array and reviewing
the LUN details as seen in Figure 16.

Figure 16: Place VNX LUNs into VPLEX Storage Group

9.2 Provision Storage


1. Login to VPLEX and verify operational status for the VPLEX Local Cluster (Figure 17).

Figure 17: VPLEX Local System Status and Login Screen


2. Click on the Provision Storage link, see Figure 18

Figure 18: Provisioning Storage


3. Select Provisioning Overview
4. Claim volumes from the VNX storage array and create Virtual Volumes by clicking on
Step 1 as shown in Figure 19 and following the EZ-Provisioning guide. When finished
return here. In this step the storage array will be selected, as well as the storage
volumes that were provisioned to the storage pool.

Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes
After the VPLEX virtual volumes are created, the volumes must be exposed to the hosts that
will need to use them. This is similar to how a VNX volume must be exposed to a host, or to
VPLEX, before it can be utilized.
5. Register the host initiators to the VPLEX virtual volume by clicking on Step 2 as shown in Figure 20 and following the EZ-Provisioning guide. When finished, return here.

Figure 20: EZ-Provisioning Step 2: Register Initiators


6. Create a storage view by clicking on Step 3 as shown in Figure 21 and following the EZ-Provisioning guide. When finished, the virtual volume will appear in the Virtual Volume list.

Figure 21: EZ-Provisioning Step 3: Create Storage View
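The same claim/extent/device/virtual-volume flow can also be driven from VPlexCLI instead of the EZ-Provisioning wizard. The sketch below is illustrative only: the object names are placeholders, and the option letters are from memory of the VPLEX CLI, so check them against the EMC VPLEX CLI Guide before use.
VPlexCLI:/> storage-volume claim -d VPD83T3:<vpd-id> -n vspex_lun_1      # claim the VNX LUN (as in Chapter 10)
VPlexCLI:/> extent create -d vspex_lun_1                                 # one extent on the claimed volume
VPlexCLI:/> local-device create -n dev_vspex_lun_1 -g raid-0 -e extent_vspex_lun_1_1
VPlexCLI:/> virtual-volume create -r dev_vspex_lun_1                     # virtual volume to expose through a storage view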

10. Adding VPLEX to an Existing VSPEX Solution

This procedure describes the task of encapsulating storage for existing VSPEX solutions that
This procedure describes the task of encapsulating storage for existing VSPEX solutions that
have data in production today. This is useful when the environment has storage that needs
to be continuously available and is currently not protected by VPLEX.

10.1 Assumptions

- The ESX host is running with LUNs presented directly from the storage array.
- At least one virtual machine is running I/Os on the LUNs presented to the ESX host.
- VPLEX must be installed and in good health.
- One new switch (a pair of switches if HA is required) is available for use as front-end switches.

10.2 Integration Procedure


- Power down all virtual machines on the host by shutting down the operating system for each VM.
- Shut down each of the ESX hosts using the command shutdown -h now (see the sketch below).
- Remove the host initiators from the zone to the storage array.
- Add the VPLEX back-end ports to the zone to the storage array.
- Create a new zone with the VPLEX front-end ports and host initiators.
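One way to perform the first two steps from the ESXi shell (rather than the vSphere Client) is sketched below; the VM IDs come from the getallvms listing, and VMware Tools must be installed in each guest for a clean shutdown.
~ # vim-cmd vmsvc/getallvms                  # note the Vmid of each running virtual machine
~ # vim-cmd vmsvc/power.shutdown <vmid>      # repeat for every VM on the host
~ # vim-cmd vmsvc/power.getstate <vmid>      # confirm "Powered off" before continuing
~ # shutdown -h now                          # then shut down the ESX host itself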

10.3 Storage Array Mapping

Make the appropriate masking changes on the storage array. Please see the Encapsulate Arrays on ESXi documentation for additional details.
- Create a storage group on the VNX array.
- Register the VPLEX initiators and place them into the storage group.
- Add the existing LUNs for your ESX server(s) to the VPLEX storage group.

10.4 VPLEX Procedure


- Log in to the VPLEX management server.
- In the storage-arrays context, view the storage arrays:

VPlexCLI:/> ls -al /clusters/cluster-1/storage-elements/storage-arrays/
/clusters/cluster-1/storage-elements/storage-arrays:
EMC-CLARiiON-FNM00094200051  ok  true  0x50060160446019f5, 0x50060166446019f5, 0x50060168446419f5, 0x5006016f446019f5  6
EMC-CLARiiON-FNM00094200052  ok  true  0x5006016044601ff5, 0x5006016644601ff5, 0x5006016844641ff5, 0x5006016f44601ff5  6

- Re-discover the new storage:

VPlexCLI:/> cd /clusters/cluster-1/storage-elements/storage-arrays/
VPlexCLI:/> array re-discover EMC-CLARiiON-FNM00094200051

- Make sure VPLEX can see the LUNs. If the WWN of a CLARiiON LUN is 6006016031111000d4991c2f7d50e011, it will be visible in the storage-volume context as shown in this example:

VPlexCLI:/> ll /clusters/cluster-1/storage-elements/storage-volumes/
/clusters/cluster-1/storage-elements/storage-volumes/:
Name                                      VPD83 ID                                  Capacity  Use        Vendor  IO Status  Type         Thin Rebuild
VPD83T3:6006016031111000d4991c2f7d50e011  VPD83T3:6006016031111000d4991c2f7d50e011  5G        unclaimed  DGC     alive      traditional  false
VPD83T3:6006016018641e00e221f379a9d5e011  VPD83T3:6006016018641e00e221f379a9d5e011  78G       meta-data  DGC     alive      traditional  false
VPD83T3:6006016018641e00e321f379a9d5e011  VPD83T3:6006016018641e00e321f379a9d5e011  78G       meta-data  DGC     alive      traditional  false

- Claim the volume in VPlexCLI:

VPlexCLI:/> storage-volume claim -d VPD83T3:6006016031111000d4991c2f7d50e011 -n lun_1
VPlexCLI:/> ll /clusters/cluster-1/storage-elements/storage-volumes/
Name   VPD83 ID                                  Capacity  Use      Vendor  IO Status  Type            Thin Rebuild
lun_1  VPD83T3:6006016031111000d4991c2f7d50e011  5G        claimed  DGC     alive      data-protected  false

- From the VPLEX GUI, go to the EZ-Provisioning wizard.

Figure 22: EZ-Provisioning


Select the correct cluster and go to Step 1 to create your Virtual Volumes.
Note: The new storage has already been claimed in the previous steps.
Select VNX array and create your virtual volumes from previously claimed storage.

Figure 23: Create Virtual Volumes- Select Array

Select storage volumes for your ESX Host(s):

Figure 24: Create Virtual Volumes- Select Storage Volumes


Review and Commit your changes.
If this is a VPLEX Metro configuration, it will now be necessary to create an identical volume at Cluster-2 and establish a remote mirror between the Cluster-1 and Cluster-2 volumes.

Figure 25: Create Distributed Volumes- Select Mirrors


Note: See VPLEX Administrator Guide for more detailed information on adding Remote
Mirrors.
Create a Consistency Group and apply the Winner: Cluster-1 (5 second) rule set.

Figure 26: Create Distributed Volumes- Select Consistency Group

10.5 Power-on Hosts


- Power up the host.
- At the front-end switch, zone the front-end ports of VPLEX with the host ports.

10.6 Register Host Initiators


From the VPLEX GUI, go back to the EZ-Provisioning Wizard.

Figure 27: EZ-Provisioning- Register Initiators

Select Step 2 to Register your Host Initiators.


View the unregistered initiator-ports in the initiator-ports context.

Figure 28: View Unregistered Initiator-ports

Apply a new initiator name and the default host type for the ESX server (repeat as necessary for all host initiator ports).

Figure 29: View Unregistered Initiator-ports

From the VPLEX GUI, go back to the EZ-Provisioning Wizard.

Figure 30: EZ-Provisioning- Create Storage View


Select Step 3 to create your new Storage View.
Create a new Storage View and add the newly registered initiator ports.

Figure 31: Create Storage View- Select Initiators


Select the previously zoned VPLEX Front-End ports for the ESX51-N1 Storage view.

Figure 32: Create Storage View- Select Ports

Select the previously created virtual volumes for the ESX51-N1 Storage view.

Figure 33: Create Storage View- Select Virtual Volumes

10.7 Create Highly Available Datastores

- Log in to the vCenter/vSphere Client used for managing the ESX host. In the Configuration tab for the host, under Storage, the LUNs exported from VPLEX should be visible as Devices.
- If the required datastore is not present in Storage, follow these steps:
o Click Add Storage
o In Storage Type, select Disk/LUN
o Click Next
o The exported LUN with the required datastore on it is now visible, and the datastore name is shown in the VMFS Label column. Select the datastore name and click Next.
o When asked how to use the datastore present on the LUN, click the option that uses the old data with a new signature.
- In the Configuration tab, click Storage Adapters and check the status of the paths for the Fibre Channel host adapters. They should show as active.
- If a previously existing virtual machine is not yet present in the inventory, perform the following steps:
o Right-click the datastore
o Select Browse Datastore
o Right-click the required virtual machine
o Select Add to Inventory
o Confirm the process of importing the datastore by acknowledging that it has been manually moved
- To power on the required virtual machines, right-click the virtual machine names in the left pane of the vCenter/vSphere Client, power them on, and resume I/O from the virtual machines.
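For reference, the command-line equivalent of the rescan and resignature steps above on an ESXi 5.x host is sketched below; the datastore label is a placeholder.
~ # esxcli storage core adapter rescan --all                        # pick up the LUNs now presented through VPLEX
~ # esxcli storage vmfs snapshot list                               # encapsulated VMFS volumes appear as unresolved copies
~ # esxcli storage vmfs snapshot resignature -l <datastore-label>   # keep the existing data and assign a new signature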

11. Converting a VPLEX Local Cluster into a VPLEX Metro Cluster

VSPEX customers may be using VPLEX in their data center for continuous availability and data mobility across their local arrays. As business requirements change, demanding infrastructure enhancements to provide two-site continuous availability for their mission-critical applications, the VPLEX configuration in the primary data center can be changed to add VPLEX Metro capability to the existing VPLEX Local configuration. From a licensing standpoint, this is a simple process, as a VPLEX Metro license is added. This section describes the configuration steps to add the Metro functionality.

11.1 Gathering Information for Cluster-2


When upgrading from VPLEX Local to VPLEX Metro you will be required to repeat the
gathering steps from the VPLEX Pre-Configuration Worksheets and Guidelines section.
See the VPLEX Procedure Generator for a complete set of detailed steps.

11.2 Configuration Information for Cluster Witness


When using VPLEX Metro, the best practice calls for a third failure domain running the VPLEX Witness feature, which is implemented through the Cluster Witness function. The inter-cluster network is deployed over physical links that are separate from other management traffic links.
Table 11: Cluster Witness Configuration Information
- Account and password used to log into the ESX server where the Cluster Witness Server VM is deployed: This password allows you to log into the Cluster Witness Server VM. Value:
- Host certificate passphrase for the Cluster Witness certificate. Value (must be at least eight characters, including spaces):
- Class-C subnet mask for the ESX server where the Cluster Witness Server guest VM is deployed. Value:
- IP address for the ESX server where the Cluster Witness Server guest VM is deployed. Value:
- Cluster Witness Server guest VM Class-C subnet mask. Value:
- Cluster Witness Server guest VM IP address. Value:
- Public IP address for the management server in Cluster 1. Value:
- Public IP address for the management server in Cluster 2. Value:

Note: Cluster Witness requires the management IP network to be separate from the inter-cluster network.
Note: Cluster Witness functionality requires the following protocols to be enabled by any firewall between the Cluster Witness Server and the management servers (configured on the management network): IKE UDP port 500; ESP IP protocol number 50; IP protocol number 51; NAT Traversal in the IKE (IPsec NAT-T) UDP port 4500.

Once the VPLEX Metro and Witness have been configured, you will need to allocate the datastore LUNs from Cluster-2 to the ESXi hosts at Site-B. The assumption has been made that Site-B's VNX has been configured identically to Site-A's VNX.

Figure 34: VPLEX Metro System Status Page

11.3 Consistency Group and Detach Rules

A consistency group ensures application-dependent write consistency of application data on VPLEX distributed virtual volumes within the VPLEX system in the event of a disaster. Add the specified virtual volumes to a consistency group; the properties of the consistency group are then immediately applied to the added volumes.

Note: Only volumes with visibility and storage-at-cluster properties that match those of the consistency group can be added to the consistency group.

- Maximum number of volumes in a consistency group: 1000
- All volumes used by the same application and/or same host should be grouped together in a consistency group.
- Only volumes with storage at both clusters (distributed volumes) are allowed in remote consistency groups.
- If any of the specified volumes are already in the consistency group, the command skips those volumes, but prints a warning message for each one.
- When a detach rule is initiated for a consistency group, it takes 5 seconds to suspend the non-preferred cluster and maintain I/O on the preferred cluster.

The first step is to create, or select, a consistency group as shown in Figure 35. From the previous steps, a VPLEX distributed virtual volume should already be created, as depicted in Figure 36.

Figure 35: VPLEX Consistency Group created for Virtual Volumes
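The same consistency group can be created from VPlexCLI. The sketch below uses placeholder names and matches the Winner: Cluster-1 (5 second) rule applied in this solution; verify the exact command options in the EMC VPLEX CLI Guide.
VPlexCLI:/> consistency-group create -n vspex_cg1 -c cluster-1
VPlexCLI:/> consistency-group add-virtual-volumes -v dd_vspex_1_vol,dd_vspex_2_vol -g vspex_cg1
VPlexCLI:/> consistency-group set-detach-rule winner -c cluster-1 -d 5s -g vspex_cg1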

11.4 Create Distributed Devices between VPLEX Cluster-1 and Cluster-2

Figure 36: VPLEX Distributed Devices


Use the choose-winner command when:
- I/O must be resumed on a cluster during a link outage
- The selected cluster has not yet detached its peer
- The detach rules require manual intervention

The selected cluster will detach its peer cluster in preparation for continuing I/O. I/O then continues or is suspended depending on the cache mode of the consistency group; for synchronous consistency groups, I/O resumes immediately.
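A hedged example of the command is shown below; the cluster and group names are placeholders, and the syntax should be confirmed in the EMC VPLEX CLI Guide.
VPlexCLI:/> consistency-group choose-winner -c cluster-1 -g vspex_cg1    # resume I/O on cluster-1 during a link outage
# once the inter-cluster link is restored, resume I/O at the losing cluster if the group requires it
VPlexCLI:/> consistency-group resume-at-loser -c cluster-2 -g vspex_cg1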

11.5 Create Storage View for ESXi Hosts

Figure 37: VPLEX Storage View for ESXi Hosts


NOTE: Be sure to set the LUN numbering to match on each ESXi host that shares the disks.

12. Post-install checklist


The following configuration items are critical to the functionality of the solution. On each vSphere server, verify the following items prior to deployment into production:
- The vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines it may host.
- All required virtual machine port groups are configured, and each server has access to the required VMware datastores.
- An interface is configured correctly for vMotion using the material in the vSphere Networking guide.

If at some point during the deployment process a step results in an error, please visit https://support.emc.com/products/29264_VPLEX-VS2. All of the approved troubleshooting guides are available through keyword search.
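The network items in this checklist can be spot-checked from the ESXi shell; a minimal sketch is shown below (the peer address is a placeholder).
~ # esxcli network vswitch standard list      # port counts and uplinks on the vSwitch hosting the client VLANs
~ # esxcli network ip interface list          # the vMotion-enabled VMkernel interface should be listed
~ # vmkping <peer-vmotion-ip>                 # basic reachability between vMotion interfaces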

13. Summary
Despite the many challenges IT managers face delivering continuous availability for mission-critical applications, the VSPEX with VPLEX solution described in this design and implementation guide provides the infrastructure to meet the most demanding user availability requirements. The components and products included in this solution have been selected based on proven performance and reliability in production environments worldwide. Here are some data points reinforcing the selection of the solution technologies:

- VSPEX is designed for flexibility and validated by EMC to ensure interoperability and fast deployment. VSPEX enables you to choose the network, server, and hypervisor that your environment requires to go along with EMC's industry-leading storage and backup.
- VMware is the most pervasively deployed hypervisor, with the largest virtualized environments in corporate and cloud provider networks running on vSphere. VMware provides the stable, high-performance infrastructure required for hosting mission-critical applications.
- The EMC VPLEX family today is deployed in over 2,000 continuous availability clusters with over 200 PB of storage managed, and over 15 million run-time hours at five 9s+ availability. Here are some stories from VPLEX users based on user experience:
o A well-known financial services firm had an entire array failure and did not realize it for a week, as their VPLEX-protected remote array seamlessly took over.
o A regional government data center lost power due to a backhoe operator's mistake; their users did not notice because VPLEX connected a second data center and provided continuous availability.
o A large hospital required a non-disruptive data center relocation; VPLEX moved hundreds of VMs to the new data center with no application downtime.
- EMC VSPEX with the VNX Series is high-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. With the VNX Series, you'll achieve new levels of performance, protection, compliance, and ease of management. Leverage a single platform for file and block data services; centralized management makes administration simple.

In choosing the VSPEX with VPLEX solution, you are assured a world-class infrastructure backed by EMC, with best-in-class support behind the solution.

Appendix-A -- References
EMC documentation
The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
- EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 500 Virtual Machines
- EMC VPLEX Site Preparation Guide
- Implementation and Planning Best Practices for EMC VPLEX Technical Notes
- EMC VPLEX Release Notes
- EMC VPLEX Security Configuration Guide
- EMC VPLEX Configuration Worksheet
- EMC VPLEX CLI Guide
- EMC VPLEX Product Guide
- VPLEX Procedure Generator
o Encapsulate Arrays on ESXi boot from SAN
o Encapsulate Arrays on ESXi non-boot from SAN

Appendix-B Tech Refresh using VPLEX Data Mobility

VPLEX Data Mobility
There are two types of data migrations and/or tech refreshes:
- One-time migrations: begin an extent or device migration immediately when the dm migration start command is used.
- Batch migrations: run as batch jobs using re-usable migration plan files. Multiple device or extent migrations can be executed using a single command.

One-time migrations include:
- Extent migrations: extent migrations move data between extents in the same cluster. Use extent migrations to:
o Move extents from a hot storage volume shared by other busy extents
o Defragment a storage volume to create more contiguous free space
o Perform migrations where the source and target have the same number of volumes
- Device migrations: devices are RAID 0, RAID 1, or RAID C devices built on extents or on other devices. Use device migrations to:
o Migrate between dissimilar arrays
o Relocate a hot volume to a faster array
o Relocate devices to a new array in a different cluster

Prerequisites. The target device or extent must:
- Be the same size or larger than the source device or extent. If the target is larger than the source, the extra space cannot be utilized. For example, if the source is 200GB and the target is 500GB, only 200GB of the target can be used after a migration; the remaining 300GB cannot be claimed. NOTE: There is, however, a LUN expansion function that will allow you to grow the size out to larger extents, but the distributed device must be broken and then re-established after the expansion of both mirror legs.
- Not have any existing volumes on it.

Overview of the Data Mobility Process:


Use the following general steps to perform extent and device migrations:
1. Create and check a migration plan (batch migrations only).

Figure 38: Batch Migration, Create Migration Plan


2. Start the migration.

Figure 39: Batch Migration, Start Migration


3. Monitor the migration's progress.

Figure 40: Batch Migration, Monitor Progress

4. Pause, resume, or cancel the migration (optional).

Figure 41: Batch Migrations, Change Migration State


5. Commit the migration.

Figure 42: Batch Migrations, Commit the Migration
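For a one-time device migration, the CLI flow is sketched below; the device and migration names are placeholders, and the option letters should be verified against the EMC VPLEX CLI Guide (batch migrations follow the same lifecycle using the batch-migrate command family).
VPlexCLI:/> dm migration start -n mig_lun_1 -f device_old_lun_1 -t device_new_lun_1
VPlexCLI:/> ls /data-migrations/device-migrations/     # monitor status and percentage complete
VPlexCLI:/> dm migration commit -m mig_lun_1           # commit once the target is fully synchronized
VPlexCLI:/> dm migration clean -m mig_lun_1            # release the source
VPlexCLI:/> dm migration remove -m mig_lun_1           # remove the migration record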

Appendix-C VPLEX Configuration limits

The following table lists the configuration limits for VPLEX Local and VPLEX Metro.
Table 12: VPLEX Configuration Limits
- Virtual volumes: 8000
- Storage volumes: 8000
- IT nexus per cluster in VPLEX Local & Metro: 3200
- IT nexus per back-end port: 256
- IT nexus per front-end port: 400
- Number of Extents: 24000
- Extents per storage volume: 128
- RAID-1 mirror legs: 2
- Number of Local top-level devices: 8000
- Number of Distributed devices: 8000
- Storage volume size: 32TB
- Virtual volume size: 32TB
- Total storage provisioned in a system: 8PB
- Extent block size: 4KB
- Active Local Rebuilds: 25
- Active Remote Rebuilds (on distributed devices): 25
- Number of Clusters: 2
- Number of Synchronous Consistency Groups: 1024
- Number of Volumes per Consistency Group: 1000
- Paths per storage volume per VPLEX BE director port: 4
- Minimum bandwidth for VPLEX Metro IP WAN link: 3Gbps
- Maximum latency in a VPLEX Metro: 5ms

Appendix-D VPLEX Pre-Configuration Worksheets

These tables can be printed and filled in with the specific environment's values, to be used during the VPLEX configuration process.

Table 13: IPv4 Networking Information
- Management server IP address: Public IP address for the management server on the customer IP network. Value:
- Network mask: Subnet mask for the management server IP network. Value:
- Hostname: Hostname for the management server. Once configured, this name replaces the default name (service) in the shell prompt each time you open an SSH session to the management server. Value:
- EMC secure remote support (ESRS) gateway: IP address for the ESRS gateway on the IP network. Value:

Table 14: Metadata Backup Information
- Day and time to back up the meta-volume: The day and time that the cluster's meta-volume will be backed up to a remote storage volume on a back-end array (which will be selected during the cluster setup procedure). Value:

Table 15: SMTP details to configure event notifications
- Do you want VPLEX to send event notifications? Sending event notifications allows EMC to act on any issues quickly. Value (yes or no):
Note: The remaining information in this table applies only if you specified yes to the previous question.
- SMTP IP address of primary connection: SMTP address through which Call Home emails will be sent. EMC recommends using your ESRS gateway as the primary connection address. Value:
- First/only recipient's email address: Email address of a person (generally a customer's employee) who will receive Call Home notifications. Value:
- SMTP IP address for first/only recipient: SMTP address through which the first/only recipient's email notifications will be sent. Value:
- Event notification type: One or more people can receive email notifications when events occur. Notification types: 1. On Success or Failure - sends an email regardless of whether the email notification to EMC succeeded. 2. On Failure - sends an email each time an attempt to notify EMC has failed. 3. On All Failure - sends an email only if all attempts to notify EMC have failed. 4. On Success - sends an email each time EMC is successfully sent an email notification. EMC recommends distributing connections over multiple SMTP servers for better availability. These SMTP v4 IP addresses can be different from the addresses used for event notifications sent to EMC. Value (1, 2, 3, or 4):
- Second recipient's email address (optional): Email address of a second person who will receive Call Home notifications. Value:
- SMTP IP address for second recipient: SMTP address through which the second recipient's email notifications will be sent. Value:
- Event notification type for second recipient: See description of event notification type for first recipient. Value (1, 2, 3, or 4):
- Third recipient's email address (optional): Email address of a third person who will receive Call Home notifications. Value:
- SMTP IP address for third recipient: SMTP address through which the third recipient's email notifications will be sent. Value:
- Event notification type for third recipient: See description of event notification type for first recipient. Value (1, 2, 3, or 4):
- Do you want VPLEX to send system reports? Day of week and time to send system reports. Sending weekly system reports allows EMC to communicate known configuration risks, as well as newly discovered information that can optimize or reduce risks. Note that the connections for system reports are the same connections used for event notifications. Value (VPLEX specifies a random day and time as a default):

Table 16: SNMP information
- Do you want to use SNMP to collect performance statistics? You can collect statistics such as I/O operations and latencies, as well as director memory, by issuing SNMP GET, GET-NEXT, or GET-BULK requests. Value (yes or no):
- Community string (if you specified yes above). Value (default = private):

Table 17: Certificate Authority (CA) and Host Certificate information
- CA certificate lifetime: How many years the cluster's self-signed CA certificate should remain valid before expiring. Value (valid values are 1, 2, 3, 4, or 5 (default)):
- CA certificate key passphrase: VPLEX uses self-signed certificates for ensuring secure communication between VPLEX Metro clusters. This passphrase is used during installation to create the CA certificate necessary for this secure communication. Value (must be at least eight characters, including spaces):
- Host certificate lifetime: How many years the cluster's host certificate should remain valid before expiring. Value (valid values are 1 or 2 (default)):
- Host certificate key passphrase: This passphrase is used to create the host certificates necessary for secure communication between clusters. Value (must be at least eight characters, including spaces):

Table 18: Product Registration Information
- Company site ID number (optional): EMC-assigned identifier used when the VPLEX cluster is deployed on the ESRS server. The EMC customer engineer or account executive can provide this ID. Value:
- Company name: Value:
- Company contact: First and last name of a person to contact. Value:
- Contact's business email address: Value:
- Contact's business phone number: Value:
- Contact's business address: Street, city, state/province, ZIP/postal code, country. Value:
- Method used to send event notifications: Method by which the cluster will send event messages to EMC. Value: __ 1. ESRS; __ 2. Email; __ 3. None (notifications are not configured)
- Remote support method: Method by which the EMC Support Center can access the cluster. Value: __ 1. ESRS; __ 2. WebEx

Table 19: IP WAN Configuration Information

Local director discovery configuration details (default values work in most installations):
- Class-D network discovery address. Value (default = 224.100.100.100):
- Discovery port. Value (default = 10000):
- Listening port for communications between clusters (traffic on this port must be allowed through the network). Value (default = 11000):

Attributes for Cluster 1, Port Group 0:
- Class-C subnet prefix for Port Group 0. The IP subnet must be different than the one used by the management servers and different from the Port Group 1 subnet in Cluster 1. Value:
- Subnet mask. Value:
- Cluster address (use Port Group 0 subnet prefix). Value:
- Gateway for routing configurations (use Port Group 0 subnet prefix). Value:
- MTU. The size must be set to the same value for Port Group 0 on both clusters; the same MTU must also be set for Port Group 1 on both clusters. Note: jumbo frames are supported. Value (default = 1500):
- Port 0 IP address for director 1-1-A. Value:
- Port 0 IP address for director 1-1-B. Value:

Attributes for Cluster 1, Port Group 1:
- Class-C subnet prefix for Port Group 1. The IP subnet must be different than the one used by the management servers and different from the Port Group 1 subnet in Cluster 2. Value:
- Subnet mask. Value:
- Cluster address (use Port Group 1 subnet prefix). Value:
- Gateway for routing configurations (use Port Group 1 subnet prefix). Value:
- MTU. The size must be set to the same value for Port Group 1 on both clusters; the same MTU must also be set for Port Group 0 on both clusters. Note: jumbo frames are supported. Value (default = 1500):
- Port 1 IP address for director 1-1-A. Value:
- Port 1 IP address for director 1-1-B. Value:

Table 20: Cluster Witness Configuration Information
- Account and password used to log into the ESX server where the Cluster Witness Server VM is deployed: This password allows you to log into the Cluster Witness Server VM. Value:
- Host certificate passphrase for the Cluster Witness certificate. Value (must be at least eight characters, including spaces):
- Class-C subnet mask for the ESX server where the Cluster Witness Server guest VM is deployed. Value:
- IP address for the ESX server where the Cluster Witness Server guest VM is deployed. Value:
- Cluster Witness Server guest VM Class-C subnet mask. Value:
- Cluster Witness Server guest VM IP address. Value:
- Public IP address for the management server in Cluster 1. Value:
- Public IP address for the management server in Cluster 2. Value:

Note: Cluster Witness requires the management IP network to be separate from the inter-cluster network.
Note: Cluster Witness functionality requires the following protocols to be enabled by any firewall between the Cluster Witness Server and the management servers (configured on the management network): IKE UDP port 500; ESP IP protocol number 50; IP protocol number 51; NAT Traversal in the IKE (IPsec NAT-T) UDP port 4500.
