WHAT IS VPLEX?
We are looking at implementing a storage virtualization device and I started doing a bit
of research on EMC's product offering. Below is a summary of some of the
information I've gathered, including a description of what VPLEX does as well as
some pros and cons of implementing it. This is all info I've gathered by reading
various blogs, looking at EMC documentation, and talking to our local EMC reps. I
don't have any first-hand experience with VPLEX yet.
What is VPLEX?
VPLEX at its core is a storage virtualization appliance. It sits between your arrays and hosts
and virtualizes the presentation of storage arrays, including non-EMC arrays. Instead of
presenting storage to the host directly, you present it to the VPLEX. You then configure that
storage from within the VPLEX and zone the VPLEX to the host. Basically, you attach any
storage to it and, like other in-band virtualization devices, it virtualizes and abstracts the
arrays behind it.
There are three VPLEX product offerings: Local (within a single data center), Metro (two
sites within synchronous distance, roughly 5ms round trip), and Geo (two sites at
asynchronous distance, up to about 50ms round trip). EMC's architecture and performance
papers cover the details:
documentation/h7113-VPLEX-architecture-deployment.pdf
papers/h11299-emc-VPLEX-elements-performance-testing-best-practiceswp.pdf
What are some advantages of using VPLEX?
1. Extra Cache and Increased IO. VPLEX has a large cache (64GB per node) that
sits in between the host and the array. It adds a layer of read cache in front of the
arrays that can greatly improve read performance on databases, because reads served
from VPLEX cache are offloaded from the individual arrays.
2. Enhanced options for DR with RecoverPoint. The DR benefits increase when you
integrate RecoverPoint with VPLEX Metro or Geo to replicate the data in real time.
RecoverPoint includes a capacity-based journal for very granular rollback capabilities
(think of it as a DVR for the data center). You can also use its native bandwidth
reduction features (compression & deduplication), or disable them if you have WAN
optimization devices installed like those from Riverbed. If you want active/active
read/write access to data across a large distance, VPLEX is your only option; NetApp's
V-Series and HDS USP-V can't do it unless they are in the same data center. Here are a
few more advantages:
3. Non-disruptive DR testing.
4. Non-disruptive data mobility & reduced maintenance costs. One of the biggest
benefits of virtualizing storage is that you'll never have to take downtime for a
migration again. It can take months to migrate production systems, and without
virtualization downtime is almost always required. Migration is also expensive: it
takes a great deal of resources from multiple groups, as well as the cost of keeping the
older array on the floor during the process. Overlapping maintenance costs are
expensive too. By shortening the migration timeframe, hardware maintenance costs will
drop, saving money. Maintenance can be a significant part of the storage TCO,
especially if the arrays are older or are going to be used for a longer period of time.
Virtualization can be a great way to reduce those costs and improve the return on
assets over time.
5. Flexibility based on application IO. The ability to move and balance LUN I/O among multiple
smaller arrays non-disruptively allows you to balance workloads and increases your ability
to respond to performance demands quickly. Note that underlying LUNs can be aggregated or
simply passed through the VPLEX (see the migration sketch after this list).
6. Increased leverage among vendors. This advantage would be true with any
virtualization device. When controller-based storage virtualization is employed, there is
more flexibility to pit vendors against each other to get the best hardware, software, and
maintenance costs. Older arrays could be commoditized, which could allow them to stay
in service longer at a lower cost.
7. Scale. You can scale it out and add more nodes for more performance when
needed. With a VPLEX Metro configuration, you could configure VPLEX with up to
16 nodes in the cluster between the two sites.
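The data mobility mentioned in item 5 is driven from the VPlexcli. As a rough sketch of a
single non-disruptive device migration (the device and migration names here are hypothetical,
and the exact command flags should be checked against the CLI guide for your VPLEX code
level), the flow looks something like this:
VPlexcli:/> dm migration start --name mig_LUN411 --from device_oldarray_LUN411 --to device_newarray_LUN411
VPlexcli:/> ls /data-migrations/device-migrations/mig_LUN411
VPlexcli:/> dm migration commit --name mig_LUN411 --force
VPlexcli:/> dm migration clean --name mig_LUN411
VPlexcli:/> dm migration remove --name mig_LUN411
The host keeps doing I/O to the same virtual volume throughout; only the backing device
changes underneath it.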
What are some disadvantages of using VPLEX?
1. Licensing Costs. VPLEX is not cheap. Also, it can be licensed per frame on VNX
but must be licensed per TB on the CX series, so your large, older CX arrays will cost
you a lot more to license.
2. It's one more device to manage. The VPLEX is an appliance, and it's one more
thing (or things) that has to be managed and paid for.
3. Added complexity to infrastructure. Depending on the configuration, there could
be multiple VPLEX appliances at every site, adding considerable complexity to the
environment.
4. Managing mixed workloads in virtual environments. When heavy workloads
are all mixed together on the same array there is no way to isolate them, and the
ability to migrate that workload non-disruptively to another array is one of the reasons
to implement a VPLEX. In practice, however, those VMs may end up being moved to
another array with the same storage limitations as the one they came from. The
VPLEX could simply be solving a problem temporarily by moving that problem to a
different location.
5. Lack of advanced features. The VPLEX has no advanced storage features such as
snapshots, deduplication, replication, or thin provisioning. It relies on the underlying storage
array for those types of features. As an example, you may want block-based deduplication in
front of an HDS array; you could get it by placing a NetApp V-Series in front of the array and
using NetApp's dedupe. That is only possible with a NetApp V-Series or HDS USP-V type
device; the VPLEX can't do it.
6. Write cache performance is not improved. The VPLEX uses write-through caching while
its competitors' storage virtualization devices use write-back caching. When there is a write
I/O in a VPLEX environment the I/O is cached on the VPLEX, but it is also passed all the way
back to the virtualized storage array before an ack is sent back to the host. The NetApp V-Series
and HDS USP-V will store the I/O in their own cache and immediately return an ack to the host;
the I/Os are then flushed to the back-end storage array using their respective write-coalescing
and cache-flushing algorithms. Because of that write-back behavior, those controllers can
deliver a write performance gain above and beyond the performance of the underlying storage
arrays. With the VPLEX's write-through design, there is no write I/O performance gain beyond
what the existing storage already provides.
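To make the difference in the ack path concrete, here is a toy model in Python. The latency
numbers are invented for illustration only (they are not measurements of any of these
products); the point is simply that a write-through ack has to wait on the back-end array,
while a write-back ack does not.

# Toy model of host-visible write latency for the two cache designs
# discussed above. All numbers are assumptions for illustration.

VPLEX_CACHE_MS = 0.2   # assumed time to land a write in VPLEX cache
ARRAY_WRITE_MS = 1.0   # assumed time for the back-end array to ack the write
CTRL_CACHE_MS = 0.3    # assumed time to land a write in a write-back controller's cache

def write_through_ack_ms() -> float:
    # VPLEX-style: the write is cached, but the host ack waits
    # until the back-end array has also acknowledged the write.
    return VPLEX_CACHE_MS + ARRAY_WRITE_MS

def write_back_ack_ms() -> float:
    # V-Series / USP-V style: the host is acked as soon as the write
    # lands in controller cache; the flush to the array happens later.
    return CTRL_CACHE_MS

if __name__ == "__main__":
    print(f"write-through ack to host: {write_through_ack_ms():.1f} ms")
    print(f"write-back ack to host:    {write_back_ack_ms():.1f} ms")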
On a separate note, we recently received the error below on our VPLEX. You can review
ETA 000193541 on EMC's support site for more information. It's a critical bug and I'd
suggest patching as soon as possible.
SymptomCode: 0x8a266032
SymptomCode: 0x8a34601a
Category: Status
Severity: Error
Status: Failed
Component: CLUSTER
ComponentID: director-1-1-A
SubComponent: stdf
CallHome: Yes
FirstTime: 2014-11-14T11:20:11.008Z
LastTime: 2014-11-14T11:20:11.008Z
CDATA: Compare and Write cache transaction submit failed, status 1 [Versions:MS{D30.60.0.3.0,
D30.0.0.112, D30.60.0.3}, Director{6.1.202.1.0}, ClusterWitnessServer{unknown}]
RCA: The attempt to start a cache transaction for a Scsi Compare and Write command failed.
Remedy: Contact EMC Customer Support.
Description: The processing of a Scsi Compare and Write command could not complete.
ClusterID: cluster-1
Based on that error, the commands below were run to make sure the cluster was healthy,
starting with a general health check:
VPlexcli:/> health-check
Clusters:
---------
Cluster    Cluster  Oper   Health  Connected  Expelled  Local-com
Name       ID       State  State
---------  -------  -----  ------  ---------  --------  ---------
cluster-1  1        ok     ok      True       False     ok

Meta Data:
----------

Director OS Uptime:
-------------------
Director        OS Uptime
--------------  ---------------------------
director-1-1-A  12:49pm  up 147 days  16:09
director-1-1-B  12:49pm  up 147 days  16:09

Front End:
----------
Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  56       0          299         16     353       9802

Storage:
--------

Consistency Groups:
-------------------
Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

Cluster Witness:
----------------
Cluster Witness is not configured
This command checks the status of the cluster:
VPlexcli:/> cluster status
Cluster cluster-1
    operational-status:        ok
    transitioning-indications:
    transitioning-progress:
    health-state:              ok
    health-indications:
    local-com:                 ok
This command checks the state of the storage volumes:
VPlexcli:/> storage-volume summary
Storage-Volume Summary (no tier)
----------------------  ---------------  ----
Health                   out-of-date     0
                         storage-volumes 203
                         unhealthy       0
Use                      meta-data       4
                         used            199
Typically, when presenting a new LUN to our AIX administration team for a new server build, they would
assign the LUNs to specific volume groups based on the LUN names. The command powermt display
dev=hdiskpower# always includes the name and intended volume group for the LUN, making it easy for our
admins to identify a LUN's purpose. Now that we are presenting LUNs through our VPlex, when they run a
powermt display on the server only the UID for the LUN is shown, not the name. To map the UIDs back to
the LUN names, first log in to the VPlex CLI:
Connected to localhost.
Escape character is '^]'.
Enter User Name: admin
Password:
VPlexcli:/>
Once you're in, run the ls -t command with the additional options listed below. You will need to substitute
STORAGE_VIEW_NAME with the actual name of the storage view that you want a list of LUNs from.
VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/STORAGE_VIEW_NAME::virtual-volumes
The output looks like this:
/clusters/cluster-1/exports/storage-views/st1pvio12a-b:
Name             Value
---------------  ------------------------------------------------------------
virtual-volumes  [(0,P1_LUN411_7872_SPB_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a6,10G),
                 (1,P0_LUN111_7872_SPA_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a1,10G)]
Now you can easily see which disk UID is tied to which LUN name.
If you would like to get a list of every storage view and every LUN:UID mapping, you can substitute the
storage view name with an asterisk (*).
VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/*::virtual-volumes
The resulting report will show a complete list of LUNs, grouped by storage view:
/clusters/cluster-1/exports/storage-views/VIOServer1:
Name             Value
---------------  ------------------------------------------------------------
virtual-volumes  [(0,P0_LUN101_3432_SPA_VIOServer3_root_75G,VPD83T3:6000144000000010704759addf248a0a,75G),
                 (1,P0_LUN130_3432_SPA_VIOServer3_redo1_25G,VPD83T3:6000144000000010704759addf248a0f,25G),
Our VPlex has only been installed for a few months and our team is still learning. There may be a better
way to do this, but it's all I've been able to figure out so far.
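One more idea: the wildcard output above can be turned into a CSV report for the AIX team.
Below is a minimal Python sketch; it assumes you saved the ls -t output to a text file (the
filename views.txt and the regular expressions are mine, based only on the sample output
above, so adjust them if your output differs).

import csv
import re
import sys

# Matches the storage-view header line, e.g.:
#   /clusters/cluster-1/exports/storage-views/VIOServer1:
VIEW_RE = re.compile(r"^(/clusters/[^:]+/storage-views/([^:\s]+)):")

# Matches one virtual-volume tuple, e.g.:
#   (0,P0_LUN101_3432_SPA_VIOServer3_root_75G,VPD83T3:6000144000000010704759addf248a0a,75G)
VOLUME_RE = re.compile(r"\((\d+),([^,]+),(VPD83T3:[0-9a-fA-F]+),([^)]+)\)")

def main(path: str) -> None:
    view = ""
    writer = csv.writer(sys.stdout)
    writer.writerow(["storage_view", "lun_number", "lun_name", "uid", "size"])
    with open(path) as f:
        for line in f:
            header = VIEW_RE.match(line.strip())
            if header:
                view = header.group(2)  # remember which storage view we're in
                continue
            # a line may contain one or more volume tuples
            for number, name, uid, size in VOLUME_RE.findall(line):
                writer.writerow([view, number, name, uid, size])

if __name__ == "__main__":
    # default to views.txt if no file is given on the command line
    main(sys.argv[1] if len(sys.argv) > 1 else "views.txt")

Running it against the saved output prints one row per LUN with its storage view, name, UID,
and size, which the AIX admins can sort or grep as needed.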