
Vblock Infrastructure Packages

Reference Architecture

Introduction

Goal of This Document


This document describes and provides high-level design considerations for deploying a Vblock. Vblock
is an enterprise- and service provider-class infrastructure solution using VMware vSphere/vCenter on a
Cisco Unified Computing System (UCS) connected to EMC CLARiiON CX4 Series storage platforms
or Symmetrix V-Max Series arrays via a Cisco MDS 9506 Multilayer Director class SAN Switch or,
optionally, an MDS 9222i Multiservice Modular Fibre Channel Switch.

Audience
The target audience for this document includes sales engineers, field consultants, advanced services
specialists, and customers who want to deploy a virtualized infrastructure using VMware
vSphere/vCenter on Cisco UCS connected to EMC V-Max and CLARiiON storage products. The
document also explores potential business benefits of interest to senior executives.

Objectives
This document is intended to describe:
• The role of the Vblock within a data center
• The capabilities and benefits of the Vblock
• The components of the two types of Vblock: Vblock 1 and Vblock 2
This document also highlights the collaborative efforts of three partner companies—EMC, VMware, and
Cisco—working together on a common goal of providing proven technology to customers.


Vblock Overview
IT is undergoing a transformation. The current “accidental architecture” of IT increases procurement and
management costs and complexity while making it difficult to meet customer service level agreements.
This makes IT less responsive to the business and creates the perception of IT being a cost center. IT is
now moving towards a “private cloud” model, which is a new model for delivering IT as a service,
whether that service is provided internally (IT today), externally (service provider), or in combination.
This new model requires a new way of thinking about both the underlying technology and the way IT is
delivered for customer success.
While the need for a new IT model has never been more clear, navigating the path to that model has never
been more complicated. The benefits of private clouds are capturing the collective imagination of IT
architects and IT consumers in organizations of all sizes around the world. The realities of outdated
technologies, rampant incremental approaches, and the absence of a compelling end-state architecture
are impeding adoption by customers.
By harnessing the power of virtualization, private clouds place considerable business benefits within
reach. These include:
• Business enablement—Increased business agility and responsiveness to changing priorities; speed
of deployment and the ability to address the scale of global operations with business innovation
• Service-based business models—Ability to operate IT as a service
• Facilities optimization—Lower energy usage; better (less) use of datacenter real estate
• IT budget savings—Efficient use of resources through consolidation and simplification
• Reduction in complexity—Moving away from fragmented, ‘accidental architectures’ to integrated,
optimized technology that lowers risk, increases speed and produces predictable outcomes
• Flexibility—Ability of IT to gain responsiveness and scalability through federation to cloud service
providers while maintaining enterprise-required policy and control
Moore’s Law (1965) was, in the 1980s and 1990s, overtaken by an unwritten rule that everyone knew but
seldom lamented loudly enough: enterprise IT doubles in complexity and total cost of ownership every five
years, and IT feels the pressure points ever more acutely.
Enterprise IT solutions over the past 30 years have become more costly to analyze and design, procure,
customize, integrate, inter-operate, scale, service, and maintain. This is due to the inherent complexity
in each of these lifecycle stages of the various solutions.
Within the last decade, we have seen the rise of diverse inter-networks—variously called “fabrics,”
“grids,” and, generically, the “cloud”—constructed on commodity hardware, heavily yet selectively
service-oriented with a scale of virtualized power never before contemplated, housed in massive data
centers on- and off-premises.
Yet amid the buzzword din—onshoring and offshoring, in-, out-, and co-sourcing, blades and RAIDs, LANs and
SANs, massive scale and hand-held computing—virtualization (an abiding computing capability since
early mainframe days) has met secure networking (around since the ARPANET), both now mature, to form the
basis for the next wave.
It has only been in the past several years that the notion of “cloud computing”—infrastructure, software,
or whatever-business-needs as an IT service—has been taken seriously in its own right, championed by
pioneers who have proved the model’s viability even if on too limited a basis.
With enterprise-level credibility, enabled by the best players in the IT industry, the next wave of
computing will be ushered in on terms that make business sense to the business savvy.


What Constitutes a Vblock?


Vblocks are pre-engineered, tested, and validated units of IT infrastructure that have a defined
performance, capacity, and availability Service Level Agreement (SLA). The promise that Vblocks offer
is to deliver IT infrastructure in a new way and accelerate organizations’ migration to private clouds.
Vblocks grew out of an idea to simplify IT infrastructure acquisition, deployment, and operations.
Removing choice is part of that simplification process. To that end, decisions about the current
form factors limit the scope to customize or remove components. For example, substituting
components is not permitted as it breaks the tested and validated principle. While Vblocks are tightly
defined to meet specific performance and availability bounds, their value lies in a combination of
efficiency, control, and choice. Another guiding principle of Vblocks is the ability to expand the capacity
of Vblock infrastructures as the architecture is very flexible and extensible. The following sections
provide definitions for Vblock configurations with mandatory, recommended, and optional hardware and
software.

Vblock—A New Way of Delivering IT to Business


Vblock Infrastructure Packages accelerate infrastructure virtualization and private cloud adoption:
• Production-ready
– Integrated and tested units of virtualized infrastructure
– Best-of-breed virtualization, network, compute, storage, security, and management products
• SLA-driven
– Predictable performance and operational characteristics
• Reduced risk and compliance
– Tested and validated solution with unified support and end-to-end vendor accountability
Customer benefits include:
• Simplifies expansion and scaling
• Add storage or compute capacity as required
• Can connect to existing LAN switching infrastructure
• Graceful, non-disruptive expansion
• Self-contained SAN environment with known standardized platform and processes
• Enables introduction of Fibre Channel over IP (FCIP), Storage Media Encryption (SME), and so on,
later for Multi-pod deployments
• Enables scaling to multi-Vblock and multi-data center architectures
• Multi-tenant administration, role-based security, and strong user authentication

Vblock Design Principles


A data center is a collection of pooled “Vblocks” aggregated in “Zones.”
• A unit of assembly that provides a set of services, at a known level, to target consumers
• Self-contained, but it may also use external shared services
• Optimized for the classes of services it is designed to provide


• Can be clustered to provide availability or aggregated for scalability, but each Vblock is still viable
on its own
• Fault and service isolation—The failure of a Vblock will not impact the operation of other Vblocks
(service level degradation may occur unless availability or continuity services are present)
Vblocks are architectures that are pre-tested, fully-integrated, and scalable. They are characterized by:
• Repeatable “units” of construction based on “matched” performance, operational characteristics,
and discrete envelopes of power, space, and cooling
• Repeatable design patterns facilitate rapid deployment, integration, and scalability
• Designed from the “Facilities to the Workload” to be scaled for the highest efficiencies in
virtualization and workload re-platforming
• An extensible management and orchestration model based on industry standard tools, APIs, and
methods
• Built to contain, manage, and mitigate failure scenarios in hardware and software environments
Vblocks offer deterministic performance and predictable architecture:
• Predictable SLA—Granular SLA measurement and assurance
• Deterministic space and weight—Floor tiles become unit of capacity planning
• Power and cooling—Consistent power consumption and cooling (kWh/BTUs) per unit
• Pre-determined capacity and scalability—Uniform workload distribution and mobility
• Deterministic fault and security isolation
Vblock benefits include:
• Accelerate the journey to pervasive virtualization and private cloud computing while lowering risk
and operating expenses
• Ensure security and minimize risk with certification paths
• Support and manage SLAs
– Resource metering and reporting
– Configuration and provisioning
– Resource utilization
• Vblock is a validated platform that enables seamless extension of the environment
• Vblock O/S and application support:
– Vblock accelerates virtualization of applications by standardizing IT infrastructure and IT
processes
– Broad range of O/S support
– All current applications that work in a VMware environment also work in a Vblock environment
– Vblock validated applications include:
– SAP
– VMware View 4
– Oracle RAC
– Exchange 2007
– Sharepoint and Web applications
Vblock is a scalable platform for building solutions:


• Modular architecture enables graceful scaling of Vblock environment


• Consistent policy enforcement and IT operational processes
• Add capacity to an existing Vblock or add more Vblocks
• Mix-and-match Vblocks to meet specific application needs

Vblock Architecture Components


Figure 1 provides a high-level overview of the components in the Vblock architecture.¹

Figure 1 Vblock Architecture Components

The figure shows the Vblock layers and their components: Management (VMware vCenter, Cisco UCS Manager,
EMC Ionix UIM, SMC or Navisphere), Compute/Network (Cisco Nexus 1000V, VMware vSphere, Cisco UCS 5108
Blade Chassis, Cisco UCS 6100 Fabric Interconnect), Network (Cisco Nexus 7000), SAN (Cisco MDS 9506),
and Storage (CLARiiON CX4-480 or Symmetrix V-Max).

Vblock 1 Components
Vblock 1 is a mid-sized configuration that provides a broad range of IT capabilities for organizations of
all sizes. Typical use cases include shared services, such as e-mail, file and print, virtual desktops, etc.
1. The network layer represented in the figure by the Cisco Nexus 7000 is not a Vblock component. EMC Ionix
is optional and available at additional cost.


Vblock 1 components:
• Compute
– 16-32 Cisco UCS B-series blades
– 128-256 Cores
– 960-1920 GB Memory
• Network
– Cisco Nexus 1000V
– The UCS uses 6100 Series Fabric Interconnects, which carry the network and storage (IP-based)
traffic from the blades to the connected SAN and LAN
• Storage
– EMC CLARiiON CX4-480
– 38-64 TB capacity
– Enterprise Flash Drives (EFD), FC, and SATA Drives
– iSCSI and SAN
– Celerra NS-G2 (optional)
– Cisco MDS 9506 (optionally MDS 9222i)
• VMware vSphere 4.0/vCenter 4.0
• Management
– EMC Ionix Unified Infrastructure Manager (optional)
– VMware vCenter
– EMC Navisphere
– EMC PowerPath/VE
– Cisco UCS Manager
– Cisco Fabric Manager

Vblock 2 Components
Vblock 2 is a high-end configuration that is extensible to meet the most demanding IT needs. It is
optimized for performance to support high-intensity application environments for enterprises and
service providers. Typical use cases include business critical ERP, CRM systems, etc.
Vblock 2 components:
• Compute
– 32-64 Cisco UCS B-series blades
– 256-512 Cores
– 3072-6144 GB memory
• Network
– Cisco Nexus 1000V
– The UCS uses 6100 Series Fabric Interconnects, which carry the network and storage (IP-based)
traffic from the blades to the connected SAN and LAN


• Storage
– EMC Symmetrix V-Max
– 96-146 TB capacity
– EFD, FC, and SATA drives
– iSCSI and SAN
– Celerra NS-G8 (optional)
– Cisco MDS 9506
• VMware vSphere 4.0/vCenter 4.0
• Management
– EMC Ionix Unified Infrastructure Manager (optional)
– VMware vCenter
– EMC Symmetrix Management Console
– EMC PowerPath/VE
– Cisco UCS Manager
– Cisco Fabric Manager

Vblock Design and Configuration Details


Figure 2 provides a high-level topological view of Vblock components.


Figure 2 High-Level Topological View of Vblock

The figure shows EMC storage (CLARiiON CX4-480 or Symmetrix V-Max) attached to a pair of Cisco MDS 9506
switches (SAN A and SAN B), which connect to redundant UCS 6100 Series Fabric Interconnects. The fabric
interconnects also carry LAN uplinks, 10/100/1000 management links, and UCS cluster links, and connect to
each UCS blade chassis through fabric links (4 x 10 GE) that terminate on the chassis fabric extenders
(mounted in the back) serving the UCS blade servers.

A Vblock consists of a minimum and maximum number of components that offer balanced I/O,
bandwidth, and storage capacity relative to the compute and storage arrays offered. Each Vblock is a
fully-redundant autonomous system that has 1+1 or N+1 redundancy by default. The minimum and
maximum configurations for Vblock 1 and Vblock 2 are listed in Table 1, Table 2, and Table 3.

Table 1 Minimum and Maximum Vblock Configurations

UCS Hardware                   Type 1 Minimum   Type 1 Maximum   Type 2 Minimum   Type 2 Maximum
UCS 5100 Chassis               2                4                4                8
UCS B-200 Series Blades        16               32               32               64
UCS 6120 Fabric Interconnect   2                2                -                -
UCS 6140 Fabric Interconnect   -                -                2                2

Table 2 Software

Software                                 Type 1   Type 2
VMware vSphere 4 Enterprise Plus Suite   Yes      Yes


Table 3 Network

Hardware                   Type 1 Minimum   Type 1 Maximum   Type 2 Minimum   Type 2 Maximum
Nexus 1000V                Yes              Yes              Yes              Yes
MDS 9506 Director Switch   2                2                2                2

In Vblock 1, each UCS chassis contains B-200 blades, six (6) of which have 48 GB RAM and two (2) of
which have 96 GB RAM. This provides good price/performance and supports some memory-intensive
applications, such as in-memory databases, within the Vblock definition. For Vblock 2, all B-200 series
blades are defined with 96 GB RAM by default because of the system's performance capabilities; it is
more likely to be running very dense VM environments or memory-intensive, mission-critical applications.
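The published memory ranges follow directly from this blade mix. The short sketch below is illustrative only; the helper function is hypothetical, and the chassis counts and per-blade RAM values are taken from Table 1 and the description above.

```python
# Illustrative sketch: derive aggregate Vblock memory from the per-chassis
# blade/RAM mix described above (chassis counts from Table 1).

def vblock_memory_gb(chassis_count, blades_per_chassis_mix):
    """blades_per_chassis_mix: list of (blade_count, gb_per_blade) tuples."""
    per_chassis_gb = sum(count * gb for count, gb in blades_per_chassis_mix)
    return chassis_count * per_chassis_gb

# Vblock 1: each chassis holds six 48 GB blades and two 96 GB blades.
print(vblock_memory_gb(2, [(6, 48), (2, 96)]))  # 960 GB  (2-chassis minimum)
print(vblock_memory_gb(4, [(6, 48), (2, 96)]))  # 1920 GB (4-chassis maximum)

# Vblock 2: eight 96 GB blades per chassis.
print(vblock_memory_gb(4, [(8, 96)]))           # 3072 GB (4-chassis minimum)
print(vblock_memory_gb(8, [(8, 96)]))           # 6144 GB (8-chassis maximum)
```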
The amount of RAM per blade within either a Vblock 1 or Vblock 2 may be adjusted if you have specific
requirements within the definition of a Vblock. For example, if you need a mixture of RAM densities,
you can specify 32 GB, 48 GB, 72 GB, and 96 GB RAM options. This, however, requires careful
consideration of the operational environment and introduces some variance.
B-250 series modules were not tested, but will be a future option. If B-250 series modules are a
requirement for memory densities greater than 96 GB per module, this may be accommodated within
Vblock 1 and Vblock 2 once testing and validation have been completed. Note that because the B-250 is a
full-slot module, it has density and performance impacts that need to be ascertained: the number of CPUs
per slot is reduced by 50 percent, which reduces IOPS and potentially disk capacity.
Within a Vblock 1, there are no hard disks on the B-200 series blades as all boot services and storage are
provided by the SAN. However, a small hard drive may be installed if local page memory is required for
vSphere. If the local disk is used for main storage or operating system storage, it is not considered a
Vblock and is a custom implementation at this point.
For Vblock 2, each B-200 series blade module has 73 GB internal drives for page memory purposes. If
required, these may be removed to reduce power and cooling overhead, increase MTBF, or save costs.
Each 61x0 has either 4 or 8 10 GE/Unified Fabric uplinks to the aggregation layer (the aggregation layer
is not a part of the Vblock), built on Nexus 7000 switches (new build-out) or Catalyst 6500 switches
(upgrade to an existing data center), and either 4 or 8 4 Gb Fibre Channel connections to the SAN
aggregation provided by a pair of MDS 9506 director-class switches (SAN A and B support).
The MDS 9506 switches are recommended, but may optionally be exchanged for 9509 or 9513 switches to scale
capacity, or reduced to an MDS 9222i if less density is required. The MDS 9222i offers lower performance,
which may nonetheless be acceptable for small Vblock 1 implementations.
Figure 4 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 and Figure 5 illustrates the
interconnection of the Cisco MDS 9506 in Vblock 2.
For more information on the MDS 9222i and MDS 9506, see Storage Area Network—Cisco MDS Fibre
Channel Switch.
VMware vSphere 4 Enterprise Plus licenses are mandatory within all Vblock definitions (to enable the
Cisco 1000v and EMC PowerPath/VE) and per CPU licensing is included within the defined
bill-of-materials. It is also acceptable for operating systems and applications to be run directly on the
B-200 series blades. It should be noted that other hypervisors are not supported by Vblocks and
invalidate the Vblock support agreement.
Nexus 1000V and Enterprise Plus are mandatory components due to the inherent richness that they offer
in terms of policy control, segmentation, flexibility, and instrumentation.


Networking
None of the current Vblock definitions contains any form of network switch except for the MDS SAN. The
MDS 9000 series are necessary components to provide Fibre Channel connectivity between the storage
arrays and UCS 61x0 series Fabric Interconnects and ultimately the UCS B-200 series blades.
For upstream connectivity, the UCS 61x0 fabric interconnects are connected using either 4 x 10 GE/Unified
Fabric (Type 1) or 8 x 10 GE/Unified Fabric (Type 2) connections, which equates to an oversubscription factor of 4:1. There
is no provision for an intermediate layer of “access” switches at this time.
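The 4:1 figure quoted above, and the 2:1 chassis-level figure quoted later in the UCS sections, can be reproduced from the link counts given in this document. The sketch below is illustrative only and assumes that each blade presents one 10 Gb port per fabric through its dual-port converged network adapter.

```python
# Illustrative oversubscription arithmetic per fabric (A or B), using link
# counts quoted in this document. Assumes one 10 Gb port per blade per fabric.

BLADE_GBPS = 10          # server bandwidth per blade, per fabric
FEX_UPLINKS = 4          # 10 GE links from each fabric extender to its 61x0
BLADES_PER_CHASSIS = 8   # half-width B-200 blades per 5100 chassis

# Chassis level: 8 x 10 Gb of server bandwidth onto 4 x 10 Gb fabric links.
chassis_over = (BLADES_PER_CHASSIS * BLADE_GBPS) / (FEX_UPLINKS * 10)
print(f"chassis oversubscription {chassis_over:.0f}:1")   # 2:1

# Network level (Type 1 maximum): four chassis feed one 6120, which has
# four 10 GE uplinks to the aggregation layer.
chassis_count, uplinks_10ge = 4, 4
network_over = (chassis_count * FEX_UPLINKS * 10) / (uplinks_10ge * 10)
print(f"network oversubscription {network_over:.0f}:1")   # 4:1

# Type 2 maximum: eight chassis per 6140 with eight 10 GE uplinks also gives 4:1.
```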
If you require a Celerra NAS Gateway within the Vblock 1 (recommended), there are two possibilities:
• Connect the Celerra NAS Gateway to the Nexus 7000¹ aggregation layer directly
• Use a local Nexus 50x0² switch to provide connectivity
Figure 3 illustrates the interconnection of the EMC Celerra in Vblock.

Figure 3 EMC Celerra in Vblock

The figure shows the UCS 61x0 Fabric Interconnects connected over 1/10 Gb Ethernet (IP SAN) to the
front-end ports of the EMC Celerra NS-G gateway Data Movers (with Data Mover cache), which in turn access
the physical disks of the EMC storage array serving the UCS 5100 blade chassis.

For more information, see NFS Datastores and Native File Services—EMC Celerra Gateway Family.

Storage
Storage capacity has been tuned to match the I/O performance of the attached UCS systems.
Additionally, some analysis of the likely underlying applications has also been taken into account to
characterize user or VM densities that are likely for a given Vblock. Obviously, these numbers are highly
variable based upon your use cases and requirements; the numbers are intended to provide guidance on
typical densities.

1. The Cisco Nexus 7000 is not a Vblock component.


2. The Cisco Nexus 50x0 is not a Vblock component.


Figure 4 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1 and Figure 5
illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2.

Figure 4 EMC CLARiiON CX4-480 in Vblock 1

The figure shows the UCS 5100 blade chassis and 61x0 Fabric Interconnects connected through a pair of
Cisco MDS 9222i switches (plus IP connectivity for iSCSI) to the CLARiiON CX4-480 Service Processors A
and B, which provide 8-16 Fibre Channel front-end ports, 2-4 iSCSI front-end ports, 16 GB of cache, and
105-180 physical disks.


Figure 5 EMC Symmetrix V-Max in Vblock 2

The figure shows the UCS 5100 blade chassis and 61x0 Fabric Interconnects connected through a pair of
Cisco MDS 9506 switches (plus IP connectivity for iSCSI) to a two-engine Symmetrix V-Max with SE (iSCSI)
and FA (Fibre Channel) directors, providing 8-16 Fibre Channel front-end ports, 4-8 iSCSI front-end
ports, 64-128 GB of cache, and 220-355 physical disks.

Table 4 through Table 9 contain CLARiiON CX4-480 and Symmetrix V-Max system controller I/O and
bandwidth capacities, as well as installed disks and other configuration information.

Table 4 Storage

Storage            # of Drives (Min)   # of Drives (Max)   Capacity (TB) Min   Capacity (TB) Max
CLARiiON CX4-480   110                 184                 61                  97
Symmetrix V-Max    209                 359                 42                  221

Recommended NAS Gateway: Celerra NS-G2 with the CLARiiON CX4-480 (Vblock 1); Celerra NS-G8 with the
Symmetrix V-Max (Vblock 2).

The CLARiiON system is configured with a mix of Flash, Fibre Channel, and SATA drives with
N+1 spare redundancy. This means that although the minimum Vblock 1 density is some 61 TB of raw
storage, 42 TB is usable once system spares and overheads are factored in.

Table 5 Vblock 1—CLARiiON CX4-480 Configuration

Vblock 1 Minimum             Fibre Channel (450 GB)   Flash (400 GB)   SATA (1 TB)   Total
# of Drives                  83                       6                21            110
Raw Capacity (GB)            37,350                   2,400            21,000        60,750
Estimated Capacity (GB)      26,145                   1,378            13,650        41,173
Estimated IOPS               14,940                   15,000           1,050         30,990
Estimated Bandwidth (Mbps)   2,739                    900              315           3,954

Vblock 1 Maximum             Fibre Channel (450 GB)   Flash (400 GB)   SATA (1 TB)   Total
# of Drives                  151                      6                27            184
Raw Capacity (GB)            67,950                   2,400            27,000        97,350
Estimated Capacity (GB)      47,565                   1,378            18,900        67,843
Estimated IOPS               27,180                   15,000           1,350         43,530
Estimated Bandwidth (Mbps)   4,983                    900              405           6,288
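The raw capacity and IOPS totals in Table 5 follow from the drive mix. The sketch below reproduces the Vblock 1 minimum column; the per-drive IOPS figures are back-calculated from the table itself and are illustrative values, not vendor specifications.

```python
# Illustrative sketch: reproduce the Vblock 1 minimum totals of Table 5 from
# the drive mix. Per-drive IOPS values are inferred from the table (about
# 180 per 15k FC drive, 2,500 per Flash drive, 50 per SATA drive).

drive_mix = {
    # drive type: (count, capacity_gb, iops_per_drive)
    "FC 450 GB":    (83, 450, 180),
    "Flash 400 GB": (6, 400, 2500),
    "SATA 1 TB":    (21, 1000, 50),
}

raw_gb = sum(count * cap for count, cap, _ in drive_mix.values())
iops = sum(count * per_drive for count, _, per_drive in drive_mix.values())

print(raw_gb)  # 60,750 GB raw capacity, matching Table 5
print(iops)    # 30,990 IOPS, matching Table 5
# The estimated usable capacity (41,173 GB in Table 5) is lower once RAID
# protection, hot spares, and system overhead are factored in.
```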

Table 6 Vblock 1—CLARiiON CX4-480

CLARiiON CX4-480                       Minimum          Maximum
Storage Processors                     2                2
Fibre Channel front-end ports (4 Gb)   8                12
iSCSI front-end ports (1 Gb)           4                8
Global Write Memory (cache, GB)        16               16
IOPS / Bandwidth (Mbps)                30,990 / 3,954   43,530 / 6,288

Within the Vblock definitions, NAS access is recommended for vSphere. A NAS Gateway, while
optional, provides this service, including CIFS for applications. Although these have been tested, the
NAS Gateways have not been performance validated for a pure NAS environment; further testing is
required to ensure that boot (PXE) as well as file access can be supported in a balanced fashion. It should be
noted that UCS does not currently support iSCSI boot of physical servers (VMs can boot from iSCSI
through vSphere), so this is neither a tested nor validated solution.
For the interim, a Vblock 1 can support NAS with the provision that primary boot services are provided
across the SAN.
For Vblock 2, the characteristics of the system are such that the mission-critical applications it hosts
will require Fibre Channel access to maintain performance. Again, it is highly recommended that one or
two NAS Gateways be deployed for vSphere, with the exact number required being ascertained during the
Vblock planning phases.

Table 7 Vblock 2—Symmetrix V-Max Configuration

Vblock 2 Minimum             Fibre Channel (450 GB)   Flash (400 GB)   SATA (1 TB)   Total
# of Drives                  124                      9                76            209
Raw Capacity (GB)            55,800                   3,600            76,000        135,400
Estimated Capacity (GB)      39,060                   2,250            49,400        90,710
Estimated IOPS               22,320                   22,500           3,800         48,620
Estimated Bandwidth (Mbps)   4,092                    1,350            1,140         6,582

Vblock 2 Maximum             Fibre Channel (450 GB)   Flash (400 GB)   SATA (1 TB)   Total
# of Drives                  240                      9                110           359
Raw Capacity (GB)            108,000                  3,600            110,000       221,600
Estimated Capacity (GB)      75,600                   2,520            71,500        149,620
Estimated IOPS               43,200                   22,500           5,500         71,200
Estimated Bandwidth (Mbps)   7,920                    1,350            1,650         10,920


Table 8 Symmetrix V-Max System Supported Drive Types

Drive type Rotational speed Capacity


4 Gb/s FC 15k 146 GB, 300 GB, 450 GB
4 Gb/s FC 10k 400 GB
SATA 7.2k 1 TB
4 Gb/s Flash (SSD) N/A 200 GB, 400 GB

Table 9 Vblock 2—Symmetrix V-Max

Symmetrix V-Max                        Minimum          Maximum¹
Storage Processors (Directors)         2 (1 Engine)     4 (2 Engines)
Fibre Channel front-end ports (4 Gb)   8                16
iSCSI front-end ports (10 Gb)          4                8
Global Memory (cache, GB)              64               128
IOPS / Bandwidth (Mbps)                48,620 / 6,582   71,200 / 10,920
1. Additional engines are possible beyond the base Vblock configuration.

EMC PowerPath/VE (PP/VE) provides several benefits in terms of performance, availability, and
operations, so the base PP/VE license is mandatory for Vblocks 1 and 2.
For more information on the EMC CLARiiON storage system, see
http://www.emc.com/products/family/clariion-family.htm and for the Symmetrix storage system, see
http://www.emc.com/products/family/symmetrix-family.htm.

Vblock Management
Within the Vblock there are several managed elements, some of which are managed by their respective
element managers. These elements offer corresponding interfaces that provide an extensible, open
management framework. The Vblock management framework, showing relationships and interfaces, is
illustrated in Figure 6. The individual element managers and managed components are:
• VMware vCenter Server
• Cisco UCS Manager
• Cisco Fabric Manager
• EMC Symmetrix Management Console
• EMC Navisphere Manager
A Vblock element manager, Unified Infrastructure Manager (UIM)¹, manages the configuration,
provisioning, and compliance of a single Vblock and of multiple mixed Vblocks. This brings several
benefits, as it provides a “single pane of glass” for systems configuration and integration and provides
Vblock service catalogs and Vblock self-service portal capabilities.

1. Optional and available at additional cost.


Figure 6 Vblock Management

The figure shows EMC Ionix Unified Infrastructure Manager (UIM) as the unified Vblock element management
layer: it exposes configuration and compliance events and availability and performance events to
enterprise management platforms, and provides an IT provisioning portal, a multi-Vblock service profile
catalog, unified configuration management, policy-based provisioning and compliance, configuration and
change analysis, and infrastructure recovery (DR). UIM manages one or more Vblocks on top of the
stand-alone component managers: Cisco UCS Manager, EMC Symmetrix Management Console, and VMware vCenter.

It should be noted that Ionix UIM does not provide fault or performance monitoring, billing, or software
lifecycle management capabilities. Through the abstractions it offers, and by acting as a single point of
integration, UIM simplifies Vblock integration into IT service catalogs and workflow engines. In this
respect, UIM can dramatically simplify Vblock deployment by abstracting the overall provisioning aspects
of the Vblock, while offering granular access to individual components for troubleshooting and fault
management.
It should be noted however that Vblock has an open management framework that allows an organization
to integrate Vblock management with their choice of management tools should they so desire.

Vblock Qualification of Existing Environments


Many organizations have extensive EMC and VMware components within their data centers. However,
simply adding a UCS system to this environment does not constitute a Vblock. A number of aspects need to
be considered, including:
• Do the existing arrays meet the published system capacity for Vblock 1 and Vblock 2?
• What firmware/software versions are running within the infrastructure?
• Is vSphere 4 deployed?
• Which other hypervisors are in use: Xen, Hyper-V?
• What management packages are being used?
• What other equipment is accessing the storage arrays?
Before the existing equipment can be supported as a Vblock, these questions need to be addressed. A
plan would then be developed to remediate the environment to meet Vblock standards.


In practical terms, it may not be possible or desirable to do this depending upon the complexity of the
environment; it may be simpler to deploy a new Vblock, migrate workloads first, and then migrate
existing storage arrays to that infrastructure over time. Each of these situations would need to be assessed
on its relative merits and would require extensive audits.

Expanding Vblock Infrastructures


One guiding principle of Vblocks is the ability to expand the capacity of Vblock infrastructures. The
Vblock architecture is very flexible and extensible and is architected to be easily expandable from a few
hundred VMs/users to several thousand users. In addition, this capacity may be aggregated (clustered)
as a single pool of shared capacity or segmented into smaller isolated pools.
Using Figure 7 as a reference, the first Vblock deployment may be a single Type 1. As the organization
requires more capacity, the initial Vblock may be extended by adding another Type 1 and clustering the
two systems to aggregate their capacity. If a Type 2 is added, this capacity may be segmented from the
Type 1 storage and compute for regulatory, policy, or operational reasons.

Figure 7 Vblock Expansion

The figure shows example expansion scenarios: a base Vblock extended with a Vblock 1 expansion, a Vblock 2
deployed alongside a base Vblock 1, and a base Vblock extended with separate storage and compute
expansions.

In order to scale capacity within a Vblock, the initial Vblock configuration includes an MDS 9506 that
has a 24-port 2/4/8G Fibre Channel module. If additional capacity is required, an expansion to the
original Vblock simply connects the UCS 61x0 and CLARiiON or Symmetrix V-Max to the existing
MDS interfaces. If additional capacity is required on the MDS 9506 switch, additional interface modules
can be installed as necessary.
As Vblocks are added, the capacity of the Vblock environment scales either as an aggregated pool, whereby
any UCS blade can access any storage on the SAN, or as isolated silos. For example, it is perfectly acceptable
to aggregate two Vblock 1s to provide capacity for 6,000 VMs that can share common storage capacity.
This offers an organization the ability to configure Vblock infrastructure to achieve their compliance,
security, and fault isolation objectives using a single, flexible infrastructure. As long as storage capacity
is added in conjunction with compute capacity to maintain balanced performance as published within
the Vblock, the system does not require any additional validation.


If compute or storage needs to be added asynchronously, the Vblock environment must be carefully
considered. If storage is increased above the specified per-Vblock maximum, there is no real
concern, as the performance limitations are either at the system controller or the compute nodes. In most cases,
the performance of the system controller and the UCS system has been balanced, so this should not be a
concern.
If compute is to be scaled in excess of the minimum or maximum storage capacity, systemic I/O or capacity
problems may be introduced that need careful consideration. Some applications that
may require this flexibility are high-performance compute environments (CFD, data mining, parametric
execution, etc.). This requires services engagement to validate and certify before being accepted as a
Vblock.
In order to satisfy the performance needs of a Vblock, it is recommended that only similar Vblocks are
pooled so as to maintain the performance and availability SLA associated with that Vblock. This is easily
achieved on the MDS 9500 director switches using Virtual SAN capabilities.

Vblock Component Details


This section contains more detailed descriptions of the main components of Vblock 1 and Vblock 2:
• Compute—Unified Computing System (UCS)
• Network
– Cisco Nexus 1000V
• Storage
– EMC CLARiiON CX4 Series
– EMC Symmetrix V-Max Storage System
– NFS Datastores and Native File Services—EMC Celerra Gateway Family
– Storage Area Network—Cisco MDS Fibre Channel Switch
• Virtualization

Compute—Unified Computing System (UCS)


The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites
compute, network, and storage access. The platform, optimized for virtual environments, is designed
using open, industry-standard technologies and aims to reduce TCO and increase business agility. The
system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with
enterprise-class, x86-architecture servers. The system is an integrated, scalable, multichassis platform
in which all resources participate in a unified management domain.

UCS Components
The Cisco Unified Computing System is built from the following components:
• Cisco UCS 6100 Series Fabric Interconnects
(http://www.cisco.com/en/US/partner/products/ps10276/index.html) is a family of line-rate,
low-latency, lossless, 10-Gbps Ethernet and Fibre Channel over Ethernet interconnect switches.


• Cisco UCS 5100 Series Blade Server Chassis


(http://www.cisco.com/en/US/partner/products/ps10279/index.html) supports up to eight blade
servers and up to two fabric extenders in a six rack unit (RU) enclosure.
• Cisco UCS 2100 Series Fabric Extenders
(http://www.cisco.com/en/US/partner/products/ps10278/index.html) bring Unified Fabric into the
blade-server chassis, providing up to four 10-Gbps connections each between blade servers and the
fabric interconnect.
• Cisco UCS B-Series Blade Servers
(http://www.cisco.com/en/US/partner/products/ps10280/index.html) adapt to application demands,
intelligently scale energy use, and offer best-in-class virtualization.
• Cisco UCS B-Series Network Adapters
(http://www.cisco.com/en/US/partner/products/ps10280/index.html) offer a range of options,
including adapters optimized for virtualization, compatibility with existing driver stacks, or
efficient, high-performance Ethernet.
• Cisco UCS Manager (http://www.cisco.com/en/US/partner/products/ps10281/index.html) provides
centralized management capabilities for the Cisco Unified Computing System.
For more information, see: http://www.cisco.com/en/US/partner/netsol/ns944/index.html.
Table 10 summarizes the various components that constitute UCS.

Table 10 UCS System Components

UCS Manager                Embedded; manages the entire system
UCS Fabric Interconnect    20-port 10 Gb FCoE or 40-port 10 Gb FCoE
UCS Fabric Extender        Remote line card
UCS Blade Server Chassis   Flexible bay configurations
UCS Blade Server           Industry-standard architecture
UCS Virtual Adapters       Choice of multiple adapters

Figure 8 provides an overview of the components of the Cisco UCS.

Figure 8 Cisco Unified Computing System

The figure shows a pair of UCS 6100 Fabric Interconnects, each running Cisco UCS Manager, connected
upstream to the aggregation layer via a 10 Gb x 4 Ethernet port channel and to the IP storage area
network (SAN) via 4 Gb x 4 Fibre Channel links. Downstream, each UCS 5100 Blade Chassis houses two
Fabric Extenders, UCS B200 M1 blades, and power and cooling, with a 10 Gb unified fabric (10 Gb FCoE and
10 Gb Ethernet) connection from each Fabric Extender to each blade.


Cisco UCS and UCS Manager (UCSM)


The Cisco Unified Computing System™ (UCS) is a revolutionary new architecture for blade server
computing. The Cisco UCS is a next-generation data center platform that unites compute, network,
storage access, and virtualization into a cohesive system designed to reduce total cost of ownership
(TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet
unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated,
scalable, multi-chassis platform in which all resources participate in a unified management domain.
Managed as a single system whether it has one server or 320 servers with thousands of virtual machines,
the Cisco Unified Computing System decouples scale from complexity. The Cisco Unified Computing
System accelerates the delivery of new services simply, reliably, and securely through end-to-end
provisioning and migration support for both virtualized and non-virtualized systems.

UCS Manager

Data centers have become complex environments with a proliferation of management points. From a
network perspective, the access layer has fragmented, with traditional access layer switches, switches in
blade servers, and software switches used in virtualization software all having separate feature sets and
management paradigms. Most current blade systems have separate power and environmental
management modules, adding cost and management complexity. Ethernet NICs and Fibre Channel
HBAs, whether installed in blade systems or rack-mount servers, require configuration and firmware
updates. Blade and rack-mount server firmware must be maintained and BIOS settings must be managed
for consistency. As a result, data center environments have become more difficult and costly to maintain,
while security and performance may be less than desired. Change is the norm in data centers, but the
combination of x86 server architectures and the older deployment paradigm makes change difficult:
• In fixed environments in which servers run OS and application software stacks, rehosting software
on different servers as needed for scaling and load management is difficult to accomplish. I/O
devices and their configuration, network configurations, firmware, and BIOS settings all must be
configured manually to move software from one server to another, adding delays and introducing
the possibility of errors in the process. Typically, these environments deploy fixed spare servers
already configured to meet peak workload needs. Most of the time these servers are either idle or
highly underutilized, raising both capital and operating costs.
• Virtual environments inherit all the drawbacks of fixed environments, and more. The fragmentation
of the access layer makes it difficult to track virtual machine movement and to apply network
policies to virtual machines to protect security, improve visibility, support per-virtual machine QoS,
and maintain I/O connectivity. Virtualization offers significant benefits; however, it adds more
complexity.

Cisco UCS 6100 Series Fabric Interconnects


A core part of the Cisco Unified Computing System, the Cisco UCS 6100 Series Fabric Interconnects
provide both network connectivity and management capabilities to all attached blades and chassis. The
Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over
Ethernet (FCoE) functions.
The interconnects provide the management and communication backbone for the Cisco UCS B-Series
Blades and UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the
interconnects become part of a single, highly available management domain. In addition, by supporting
unified fabric, the Cisco UCS 6100 Series provides both the LAN and SAN connectivity for all blades
within its domain.


Typically deployed in redundant pairs, fabric interconnects provide uniform access to both networks and
storage, eliminating the barriers to deploying a fully virtualized environment. Two models are available:
the 20-port Cisco UCS 6120XP and the 40-port Cisco UCS 6140XP.
Both models offer key features and benefits, including:
• High performance Unified Fabric with line-rate, low-latency, lossless 10 Gigabit Ethernet, and
FCoE.
• Centralized unified management with Cisco UCS Manager software.
• Virtual machine optimized services with the support for VN-Link technologies.
• Efficient cooling and serviceability with front-to-back cooling, redundant front-plug fans and power
supplies, and rear cabling.
• Available expansion module options provide Fibre Channel and/or 10 Gigabit Ethernet uplink
connectivity.
For more information on the Cisco UCS 6100 Series Fabric Interconnects, see:
http://www.cisco.com/en/US/products/ps10276/index.html

Vblock Configuration and Design Considerations

• Vblock 1—6120 Fabric Interconnect


– (20) 10 Gb fixed ports to blade chassis/aggregation layer
– (4) 4 Gb Ports to SAN fabric
• Vblock 2—6140 Fabric Interconnect
– (40) 10 Gb fixed ports to blade chassis/aggregation layer
– (8) 4 Gb Ports to SAN fabric
• Always configured in pairs for availability and load balancing
• Predictable performance:
– 4:1 network oversubscription
– Balanced configuration

Cisco UCS 5100 Series Blade Server Chassis


The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified
Computing System, delivering a scalable and flexible architecture for current and future data center
needs, while helping reduce total cost of ownership.
Cisco’s first blade-server chassis offering, the Cisco UCS 5108 Blade Server Chassis, is six rack units
(6RU) high and mounts in an industry-standard 19-inch rack. A chassis can accommodate up to either
eight half-slot or four full-slot Cisco UCS B-Series Blade Servers, two redundant 2104XP Fabric
Extenders, eight cooling fans, and four power supply units. The cooling fans and power supplies are
hot-swappable and redundant. The chassis requires only two power supplies for normal operation; the
additional power supplies are for redundancy. The highly-efficient (in excess of 90%) power supplies,
in conjunction with the simple chassis design that incorporates front to back cooling, makes the UCS
system very reliable and energy efficient.
The Cisco UCS 5108 Blade Server Chassis revolutionizes the use and deployment of blade-based
systems. By incorporating unified fabric and fabric-extender technology, the Cisco Unified Computing
System enables the chassis to:


• Have fewer physical components


• Require no independent management
• Be more energy efficient than traditional blade-server chassis
This simplicity eliminates the need for dedicated chassis management and blade switches, reduces
cabling, and allows scalability to 40 chassis without adding complexity. The Cisco UCS 5108 Blade
Server Chassis is a critical component in delivering the simplicity and IT responsiveness for the data
center as part of the Cisco Unified Computing System.
For more information on the Cisco UCS 5100 Series Blade Server Chassis, see:
http://www.cisco.com/en/US/products/ps10279/index.html

Vblock Configuration and Design Considerations

• Vblock 1
– 2 to 4 blade chassis
• Vblock 2
– 4 to 8 blade chassis
• Availability:
– Two Fabric Extenders per chassis
– N+1 cooling and power
• Predictable performance:
– 2:1 Oversubscription—40 Gb per chassis
– Balanced configuration
– Distribute vHBA and vNIC between fabrics

Cisco UCS B-200 M1 Blade Server


The Cisco UCS B-200 M1 Blade Server balances simplicity, performance, and density for
production-level virtualization and other mainstream data-center workloads. The server is a half-width,
two-socket blade server with substantial throughput and 50 percent more industry-standard memory
compared to previous-generation Intel Xeon two-socket servers.
Features of the Cisco UCS B-200 M1 include:
• Up to two Intel® Xeon® 5500 Series processors, which automatically and intelligently adjust server
performance according to application needs, increasing performance when needed and achieving
substantial energy savings when not.
• Up to 96 GB of DDR3 memory in a half-width form factor for mainstream workloads, which serves
to balance memory capacity and overall density.
• Two optional Small Form Factor (SFF) Serial Attached SCSI (SAS) hard drives available in 73GB
15K RPM and 146GB 10K RPM versions with an LSI Logic 1064e controller and integrated RAID.
• One dual-port mezzanine card for up to 20 Gbps of I/O per blade. Mezzanine card options include
either a Cisco UCS VIC M81KR Virtual Interface Card, a converged network adapter (Emulex or
QLogic compatible), or a single 10 Gb Ethernet adapter.
For more information on the Cisco UCS B200 M1 Blade Server, see:
http://www.cisco.com/en/US/products/ps10299/index.html


Vblock Configuration and Design Considerations

• Vblock 1
– 16-32 blades
– 128-256 cores
– 960-1920 GB Memory
– 6 blade/chassis = 48 GB
– 2 blades/chassis = 96 GB
• Vblock 2
– 32-64 blades
– 256-512 cores
– 3072-6144 GB memory
– 96 GB per blade
– (2) 73 GB internal HDD
• Availability:
– N+1 blades per chassis
– Trunk and Port Group configuration
• One dual port Converged Network Adapter (Unified Network)
– vNIC
– vHBA
• Internal connections to both Fabric Extenders
• Predictable performance
– Dual quad core Xeon® 5500 Series processors
– Balanced configuration
– Network
– Memory
– Compute
• Scalability and flexibility
– VLAN, Trunks and Port Groups

Network

Cisco Nexus 1000V


The Nexus 1000V (http://www.cisco.com/en/US/products/ps9902/index.html) is a software switch on a
server that delivers Cisco VN-Link (http://www.cisco.com/en/US/netsol/ns894/index.html) services to
virtual machines hosted on that server. It takes advantage of the VMware vSphere framework
(http://www.vmware.com/products/cisco-nexus-1000V/index.html) to offer tight integration between
server and network environments and help ensure consistent, policy-based network capabilities to all
servers in the data center. It allows policies to move with a virtual machine during live migration,
ensuring persistent network, security, and storage
compliance, resulting in improved business continuance, performance management, and security
compliance. Last but not least, it aligns management of the operational environment for virtual machines
and physical server connectivity in the data center, reducing the total cost of ownership (TCO) by
providing operational consistency and visibility throughout the network. It offers flexible collaboration
between the server, network, security, and storage teams while supporting various organizational
boundaries and individual team autonomy.
For more information, see: http://www.cisco.com/en/US/products/ps9902/index.html.

Storage
Storage components include:
• EMC CLARiiON CX4 Series
• EMC Symmetrix V-Max Storage System
• NFS Datastores and Native File Services—EMC Celerra Gateway Family
• Storage Area Network—Cisco MDS Fibre Channel Switch

EMC CLARiiON and Symmetrix


• Storage configurations are application-specific
• Logical device considerations
– LUN size
– Consistent size based on application requirements
– RAID Protection (see the capacity sketch following this list)
– RAID 1
– RAID 5
– RAID 6
– LUN aggregation using meta devices
– Size
– Performance
– Virtual provisioning
– Thin pool
– Thin devices/fully allocated
• Simplifies storage provisioning
– Storage tiers based on drive and protection
– Storage templates
– Storage policies
• Local and remote replication requirements
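As a rough illustration of the capacity cost of the RAID protection options listed above, the sketch below compares usable fractions for common RAID group widths. The group widths are typical examples only, not mandated Vblock settings.

```python
# Illustrative only: usable fraction of raw capacity for common RAID group
# layouts. Group widths below are typical examples, not mandated settings.

def usable_fraction(data_drives, protection_drives):
    return data_drives / (data_drives + protection_drives)

layouts = {
    "RAID 1 (1+1 mirror)": usable_fraction(1, 1),   # 0.50
    "RAID 5 (4+1)":        usable_fraction(4, 1),   # 0.80
    "RAID 6 (6+2)":        usable_fraction(6, 2),   # 0.75
}

raw_tb = 61  # Vblock 1 minimum raw capacity from Table 4
for name, frac in layouts.items():
    print(f"{name}: ~{frac * raw_tb:.0f} TB usable before spares and overhead")
```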


EMC CLARiiON CX4 Series


Figure 4 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1.
The EMC® CLARiiON® CX4 series delivers industry-leading innovation in midrange storage with the
fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable
hardware design and advanced software capabilities enables EMC CLARiiON CX4 series systems,
powered by Intel Xeon processors, to meet the growing and diverse needs of today’s midsize and large
enterprises. Through innovative technologies like Flash drives, UltraFlex™ technology, and CLARiiON
Virtual Provisioning, customers can decrease costs and energy use and optimize availability and
virtualization.
The EMC CLARiiON CX4 model 480 supports up to 256 highly available, dual-connected hosts and has
the capability to scale up to 480 disk drives for a maximum capacity of 939 TB. The CX4-480 supports
Flash drives for maximum performance and comes pre-configured with Fibre Channel and iSCSI
connectivity, allowing customers to choose the best connectivity for their specific applications.
Delivering up to twice the performance and scale as the previous CLARiiON generation, CLARiiON
CX4 is the leading midrange storage solution to meet a full range of needs—from departmental
applications to data-center-class business-critical systems.

EMC CLARiiON CX4 Technology Advancements

Enterprise Flash Drives—EMC-customized Flash drive technology provides low latency and high
throughput to break the performance barriers of traditional disk technology. EMC is the first to bring
Flash drives to midrange storage and expects the technology to become mainstream over the next few
years while revolutionizing networked storage.
Flash drives extend the storage tiering capabilities of CLARiiON by:
• Delivering 30 times the IOPS of a 15K RPM FC drive
• Consistently delivering less than 1 ms response times
• Requiring 98 percent less energy per I/O than 15K rpm Fibre Channel drives
• Weighing 58 percent less per TB than a typical Fibre Channel drive
• Providing better reliability due to no moving parts and faster RAID rebuilds
UltraFlex technology—The CLARiiON CX4 architecture features UltraFlex technology—a
combination of a modular connectivity design and unique FLARE® operating environment software
capabilities that deliver:
• Dual protocol support with FC and iSCSI as the base configuration on all models
• Easy, online expansion via hot-pluggable I/O modules
• Ability to easily add and/or upgrade I/O modules to accommodate future technologies as they
become available (i.e., FCoE)
CLARiiON Virtual Provisioning—Allows CLARiiON users to present an application with more
capacity than is physically allocated to it in the storage system. CLARiiON Virtual Provisioning can
lower total cost of ownership and offers customers these benefits:
• Efficient tiering that improves capacity utilization and optimizes tiering capabilities across all drive
types
• Ease of provisioning that simplifies and accelerates processes and delivers “just-in-time” capacity
allocation and flexibility
• Comprehensive monitoring, alerts, and reporting for efficient capacity planning


• Supports advanced capabilities including Virtual LUN, Navisphere QoS Manager, Navisphere
Analyzer, and SnapView
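A minimal sketch of the thin (virtual) provisioning idea described above, assuming hypothetical pool and LUN sizes chosen purely for illustration:

```python
# Minimal sketch of virtual (thin) provisioning: hosts see subscribed capacity
# that can exceed the physical pool; only written data consumes physical space.
# Pool and LUN sizes are hypothetical, chosen purely for illustration.

class ThinPool:
    def __init__(self, physical_gb, alert_pct=80):
        self.physical_gb = physical_gb
        self.alert_pct = alert_pct
        self.subscribed_gb = 0   # capacity presented to hosts
        self.consumed_gb = 0     # capacity actually written

    def create_thin_lun(self, size_gb):
        self.subscribed_gb += size_gb            # may exceed physical capacity

    def write(self, gb):
        self.consumed_gb += gb
        used_pct = 100 * self.consumed_gb / self.physical_gb
        if used_pct >= self.alert_pct:
            print(f"alert: pool {used_pct:.0f}% consumed, add capacity")

pool = ThinPool(physical_gb=10_000)
for _ in range(4):
    pool.create_thin_lun(5_000)                  # present 20 TB on a 10 TB pool
print(pool.subscribed_gb / pool.physical_gb)     # 2.0x oversubscription
pool.write(8_500)                                # triggers the 80% alert
```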
Multi-core Intel Xeon processors, increased memory, and 64-bit FLARE—The CX4 boasts up to
twice the performance of the previous generation and provides up to 2.5 times more processing power
with multi-core Intel® Xeon® processors. The CX4 architecture also delivers twice the capacity scale
(up to 960 drives), twice the memory, and twice the LUNs compared with the previous generation
CLARiiON.
With the CLARiiON CX4, the FLARE operating environment has also been upgraded from a 32-bit to
a 64-bit environment. This enhancement enables the scalability improvements and also provides the
foundation for more advanced software functionality such as Virtual Provisioning.
Low-power SATA II drives, adaptive cooling, and drive spin-down:
• Low-power SATA II drives deliver the highest density at the lowest cost and require 96 percent less
energy per terabyte than 15K rpm Fibre Channel drives, and 32 percent less than traditional 7.2K
rpm SATA drives.
• Adaptive cooling is a new feature that provides improved energy efficiency by dynamically
adjusting cooling and airflow within the CX4 arrays based on system activity.
• Drive spin-down allows customers to set policies at the RAID group level to place inactive drives in
sleep mode. Target applications include backup-to-disk, archiving, and test and development.
For more information, see: http://www.emc.com/products/detail/hardware/clariion-cx4-model-480.htm.

EMC Symmetrix V-Max Storage System


Figure 5 illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2.
The EMC Symmetrix V-Max Series provides an extensive offering of new features and functionality for
the next era of high-availability virtual data centers. With advanced levels of data protection and
replication, the Symmetrix V-Max system is at the forefront of enterprise storage area network (SAN)
technology. Additionally, the Symmetrix V-Max array has the speed, capacity, and efficiency to
transparently optimize service levels without compromising its ability to deliver performance on
demand. These capabilities are of greatest value for large virtualized server deployments such as
VMware Virtual Data Centers. Symmetrix Fully Automated Storage Tiering (FAST)¹ automatically and
dynamically moves data across storage tiers, so that it is in the right place at the right time simply by
pooling storage resources, defining the policy, and applying it to an application. FAST enables
applications to always remain optimized by eliminating trade-offs between capacity and performance.
As a result, you are able to lower costs and deliver higher services levels at the same time.
The Symmetrix V-Max system is EMC’s high-end storage array that is purpose-built to deliver
infrastructure services within the next-generation data center. Built for reliability, availability, and
scalability, Symmetrix V-Max uses specialized engines, each of which includes two redundant director
modules providing parallel access and replicated copies of all critical data.
Symmetrix V-Max’s Enginuity operating system provides several advanced features, such as
Auto-Provisioning Groups for simplification of storage management, Virtual Provisioning for ease of
use and improved capacity utilization, and Virtual LUN technology for non-disruptive mobility between
storage tiers. All of the industry-leading features for Business Continuity and Disaster Recovery have
been the hallmarks of EMC Symmetrix storage arrays for over a decade and continues in the Symmetrix
V-Max system. These are further integrated into the VMware Virtual Infrastructure for disaster recovery
with EMC’s custom Site Recovery Adapter for VMware’s Site Recovery Manager.

1. Optional.


Combined with the rich capabilities of EMC ControlCenter and EMC’s Storage Viewer for vCenter,
administrators are provided with end-to-end visibility and control of their virtual data center storage
resources and usage. EMC’s new PowerPath/VE support for vSphere provides optimization of usage on
all available paths between virtual machines and the storage they are using, as well as proactive failover
management.

Introduction

The Symmetrix V-Max system (see Figure 9) is a new enterprise-class storage
array that is built on the strategy of simple, intelligent, modular storage. The array incorporates a new
high-performance fabric interconnect designed to meet the performance and scalability demands for
enterprise storage within the most demanding virtual data center installations. The storage array
seamlessly grows from an entry-level configuration with a single, highly available Symmetrix V-Max
Engine and one storage bay into the world’s largest storage system with eight engines and 10 storage
bays. The largest supported configuration is shown in Figure 10. The following list indicates the range
of configurations supported by the Symmetrix V-Max storage array:
• 2-16 director boards
• 48-2,400 disk drives
• Up to 2 PB usable capacity
• Up to 128 Fibre Channel ports
• Up to 64 FICON ports
• Up to 64 Gig-E/iSCSI ports

Figure 9 Symmetrix V-Max System




Figure 10 Symmetrix V-Max System Features

[Cabinet diagram of the largest supported configuration; the original artwork (director port groups A-D and redundant LAN1/LAN2 connections) is not reproduced here.]

Enterprise-class deployments in a modern data center are expected to be always available, and the
design of the Symmetrix V-Max storage array enables it to meet this stringent requirement. The
replicated components in every Symmetrix V-Max configuration ensure that no single
point of failure can bring the system down. The hardware and software architecture of the Symmetrix
V-Max storage array allows capacity and performance upgrades to be performed online with no impact
to production applications. In fact, all configuration changes, hardware and software updates, and
service procedures are designed to be performed online and non-disruptively. This ensures that
customers can consolidate without compromising availability, performance, and functionality, while
leveraging true pay-as-you-grow economics for high-growth storage environments.
The Symmetrix V-Max system can include two to 16 directors inside one to eight Symmetrix V-Max
Engines. Each Symmetrix V-Max Engine has its own redundant power supplies, cooling fans, SPS
Modules, and Environmental Modules. Furthermore, the connectivity between the Symmetrix V-Max
array engines provides direct connections from each director to every other director, creating a redundant
and high-availability Virtual Matrix. Each Symmetrix V-Max Engine has two directors that can offer up
to eight host access ports each, therefore allowing up to 16 host access ports per Symmetrix V-Max
Engine.
Figure 11 shows a schematic representation of a single Symmetrix V-Max storage engine.


Figure 11 Symmetrix V-Max Storage Engine

[Schematic: two directors per engine, each with front-end and back-end host and disk ports, CPU complexes, global memory, and a Virtual Matrix Interface; artwork not reproduced.]

The powerful, high-availability Symmetrix V-Max Engine provides the building block for all Symmetrix
V-Max systems. It includes four quad-core Intel Xeon processors; 64-128 GB of Global Memory; 8-16
ports for front-end host access or Symmetrix Remote Data Facility channels using Fibre Channel,
FICON, or Gigabit Ethernet; and 16 back-end ports connecting to up to 360 storage devices using
4 Gb/s Fibre Channel, SATA, or Enterprise Flash Drives.
Each of the two integrated directors in a Symmetrix V-Max Engine has three main parts: the back-end
director, the front-end director, and the cache memory module. The back-end director consists of two
back-end I/O modules with four logical directors that connect directly into the integrated director. The
front-end director consists of two front-end I/O modules with four logical directors that are located in
the corresponding I/O annex slots. The front-end I/O modules are connected to the director via the
midplane.
The cache memory modules are located within each integrated director, each with eight available
memory slots. Memory cards range from 2 to 8 GB, allowing between 16 and 64 GB per integrated
director. For added redundancy, the Symmetrix V-Max system uses mirrored cache: in a multi-engine
setup, memory is mirrored across engines; in a single-engine configuration, memory is mirrored inside
the engine across the two integrated directors.
EMC’s Virtual Matrix interconnection fabric permits the connection of up to 8 Symmetrix V-Max
Engines together to scale out total system resources and flexibly adapt to the most demanding virtual
data center requirements.
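The per-engine figures above translate directly into the system-level maximums listed earlier in this section; the short calculation below reproduces them from the numbers quoted in this document (the derived global memory total is arithmetic only and is not stated in the text).

```python
# Derive system-level maximums from the per-engine V-Max figures quoted above.
ENGINES_MAX = 8
DIRECTORS_PER_ENGINE = 2
HOST_PORTS_PER_ENGINE = 16          # up to 8 host access ports per director
DRIVES_PER_ENGINE = 360             # devices reachable through 16 back-end ports
GLOBAL_MEMORY_PER_ENGINE_GB = (64, 128)

print("directors  :", ENGINES_MAX * DIRECTORS_PER_ENGINE)    # 16, matching "2-16 director boards"
print("host ports :", ENGINES_MAX * HOST_PORTS_PER_ENGINE)   # 128, matching "up to 128 Fibre Channel ports"
print("drive slots:", ENGINES_MAX * DRIVES_PER_ENGINE)       # 2880 of back-end connectivity; the supported array maximum is 2,400 drives
print("global mem :", tuple(ENGINES_MAX * gb for gb in GLOBAL_MEMORY_PER_ENGINE_GB),
      "GB raw; mirroring halves the effective cache capacity")
```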
Figure 12 shows a schematic representation of a maximum Symmetrix V-Max configuration.


Figure 12 Fully Configured Symmetrix V-Max Storage System

[Schematic: eight Symmetrix V-Max Engines, each with dual directors (front-end and back-end host and disk ports, CPU complexes, and global memory), fully interconnected through their Virtual Matrix Interfaces; artwork not reproduced.]

The Symmetrix V-Max system supports the drive types in Table 8.


For additional information about utilizing VMware Virtual Infrastructure with EMC Symmetrix storage
arrays, refer to Using EMC Symmetrix Storage in VMware Virtual Infrastructure
Environments—TechBook, available at:
http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-l
dv.pdf.

NFS Datastores and Native File Services—EMC Celerra Gateway Family


Figure 3 illustrates the interconnection of the EMC Celerra in Vblock.
Performance bottlenecks, security issues, and the high cost of data protection and management
associated with deploying file servers using general-purpose operating systems become non-issues with
the EMC® Celerra® Gateway family. Each Celerra Gateway product—the NS-G2 or NS-G8—is a
dedicated network server optimized for file access and advanced functionality in a scalable, easy-to-use
package. Best-in-class EMC Symmetrix® and CLARiiON® back-end array technologies, combined
with Celerra’s impressive I/O system architecture, deliver industry-leading availability, scalability,
performance, and ease of management to your business.
Celerra Gateway platforms extend the value of existing EMC storage array technologies, delivering a
comprehensive, consolidated storage solution that adds IP storage (NAS) in a centrally-managed
information storage system, enabling you to dynamically grow, share, and cost-effectively manage file
systems with multi-protocol file access. Take advantage of simultaneous support for NFS and CIFS
protocols by letting UNIX and Microsoft® Windows® clients share files using the DART (Data Access
in Real Time) operating system’s sophisticated file-locking mechanisms. The high-end features offered
with the Celerra Gateway platform enable entry-level data center consolidation resulting in lower total
cost of ownership (TCO) of your server and storage assets—while enabling you to grow your IP storage


environment into the hundreds of TB from a single point of management. And you can improve
performance over standard NAS by simply adding MPFS to your environment without application
modification.
EMC Celerra Gateway platforms combine a NAS head and SAN storage for a flexible, cost-effective
implementation that maximizes the utilization of your existing resources. This approach offers the
utmost in configuration options, including:
• One-to-eight X-Blade configurations
• Flash drives, Fibre Channel, SATA and low-power SATA drive support
• Performance/availability mode in the entry-level NS-G2
• EMC Symmetrix or CLARiiON storage
• Native integration with Symmetrix and CLARiiON replication software providing a single
replication solution for all your SAN and IP Storage disaster recovery requirements

EMC Celerra Gateway Platform System Elements

The Celerra Gateway is composed of one or more autonomous servers called X-Blades, which connect
via Fibre Channel SAN to a CLARiiON or Symmetrix storage array. The X-Blades control data movement
from the disks to the network. Each X-Blade is an Intel-based server with redundant data paths, redundant
power supplies, multiple Gigabit Ethernet ports, and, optionally, multiple 10 Gigabit Ethernet optical ports.
X-Blades run EMC’s Data Access in Real Time (DART) operating system, designed and optimized for
high-performance and multi-protocol network file access. All the X-Blades in a system are managed by
the Control Station (two control stations for HA are supported on the NS-G8), which operates out of the
data path and provides a single point of configuration management and administration as well as
handling X-Blade failover and maintenance support.
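The Control Station's role (out of the data path, monitoring the X-Blades and orchestrating failover) can be pictured with the minimal sketch below. It is a conceptual model only, not EMC code; the blade names and the health check are placeholders.

```python
# Conceptual sketch of Control Station-driven X-Blade failover (not EMC code).
# An NS-G8 can run 2-8 X-Blades; here one blade is held as a standby for another.

class XBlade:
    def __init__(self, name, role):
        self.name, self.role, self.healthy = name, role, True

class ControlStation:
    """Sits outside the data path; monitors blades and promotes the standby on failure."""
    def __init__(self, blades):
        self.blades = blades

    def monitor(self):
        for blade in self.blades:
            if blade.role == "primary" and not blade.healthy:
                standby = next(b for b in self.blades if b.role == "standby")
                standby.role, blade.role = "primary", "failed"
                print(f"{blade.name} failed; file systems now served by {standby.name}")

blades = [XBlade("server_2", "primary"), XBlade("server_3", "standby")]
cs = ControlStation(blades)
blades[0].healthy = False   # simulate a primary X-Blade failure
cs.monitor()
```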

Vblock Configuration and Design Considerations

EMC Celerra is a file server for the Vblock (Celerra file services may be shared across multiple
Vblocks):
• Gateway configuration sharing CLARiiON or Symmetrix storage
• Vblock 1 NS-G2
– 2 Datamovers
For more information on the EMC Celerra NS-G2, see:
http://www.emc.com/products/detail/hardware/celerra-ns-g2.htm.
• Vblock 2 NS-G8
– 2 to 8 Datamovers
For more information on the EMC Celerra NS-G8, see:
http://www.emc.com/products/detail/hardware/celerra-ns-g8.htm.
• May be shared across multiple Vblocks

Storage Area Network—Cisco MDS Fibre Channel Switch


Figure 4 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 and Figure 5 illustrates the
interconnection of the Cisco MDS 9506 in Vblock 2.


Cisco MDS 9222i Multiservice Modular Switch

The Cisco MDS 9222i Multiservice Modular Switch delivers state-of-the-art multiprotocol and
distributed multiservice convergence, offering:
• High-performance SAN extension and disaster recovery solutions
• Intelligent fabric services such as storage media encryption
• Cost-effective multiprotocol connectivity
Its compact form factor, expansion slot modularity, and advanced capabilities make the MDS 9222i an
ideal solution for departmental and remote branch-office SANs requiring director-class features at a
lower cost.
Product highlights:
• High-density Fibre Channel switch; scales up to 66 Fibre Channel ports
• Integrated hardware-based virtual fabric isolation with virtual SANs (VSANs) and Fibre Channel
routing with inter-VSAN routing
• Remote SAN extension with high-performance Fibre Channel over IP (FCIP)
• Long distance over Fibre Channel with extended buffer-to-buffer credits
• Multiprotocol and mainframe support (Fibre Channel, FCIP, Small Computer System Interface over
IP [iSCSI], and IBM Fiber Connection [FICON])
• IPv6 capable
• Platform for intelligent fabric applications such as storage media encryption
• In-Service Software Upgrade (ISSU)
• Comprehensive network security framework
• High-performance intelligent fabric applications when combined with the 16-port Storage Services Node
For more information on the Cisco MDS 9222i, see:
http://www.cisco.com/en/US/products/ps8420/index.html
For more information on the Cisco MDS 9200 Series Multilayer Switches, see:
http://www.cisco.com/en/US/products/ps5988/index.html

Cisco MDS 9506 Multilayer Director

The Cisco® MDS 9506 Multilayer Director provides industry-leading availability, scalability, security,
and management. The Cisco MDS 9506 allows you to deploy high-performance storage-area networks
(SANs) with the lowest total cost of ownership (TCO). Layering a rich set of intelligent features onto a
high-performance, protocol-independent switch fabric, the Cisco MDS 9506 addresses the stringent
requirements of large data center storage environments: uncompromising high availability, security,
scalability, ease of management, and transparent integration of new technologies. Compatible with first,
second, and third generation Cisco MDS 9000 Family switching modules, the Cisco MDS 9506 provides
advanced functionality and unparalleled investment protection, allowing the use of any Cisco MDS 9000
Family switching module in this compact system.
The Cisco MDS 9506 offers the following benefits:
• Scalability and Availability—The Cisco MDS 9506 combines nondisruptive software upgrades,
stateful process restart/failover, and full redundancy of all major components for best-in-class
availability. Supporting up to 192 Fibre Channel ports in a single chassis and up to 1152 Fibre Channel
ports in a single rack (see the port-count sketch at the end of this section), the Cisco MDS 9506 is
designed to meet the requirements of large data center storage environments.


• Compact design—The Cisco MDS 9506 provides high port density in a small footprint, saving
valuable data center floor space. The seven-rack-unit chassis allows up to six Cisco MDS 9506
multilayer directors in a standard rack, maximizing the number of available Fibre Channel ports.
• 1/2/4/8-Gbps and 10-Gbps Fibre Channel—Supports new 8-Gbps as well as existing 10-Gbps,
4-Gbps, and 2-Gbps MDS Fibre Channel switching modules.
• Flexibility and investment protection—Supports a mix of new, second-, and first-generation
Cisco MDS 9000 Family modules, providing forward and backward compatibility and unparalleled
investment protection.
• TCO driven design—The Cisco MDS 9506 offers advanced management tools for overall lowest
TCO. It includes VSAN technology for hardware-enforced, isolated environments within a single
physical fabric for secure sharing of physical infrastructure, further decreasing TCO.
• Multiprotocol—The multilayer architecture of the Cisco MDS 9000 Family enables a consistent
feature set over a protocol-independent switch fabric. The Cisco MDS 9506 transparently integrates
Fibre Channel, IBM Fiber Connection (FICON), Small Computer System Interface over IP (iSCSI),
and Fibre Channel over IP (FCIP) in one system.
• Intelligent network services—Provides integrated support for VSAN technology, access control lists
(ACLs) for hardware-based intelligent frame processing, and advanced traffic-management features
such as Fibre Channel Congestion Control (FCC) and fabric-wide quality of service (QoS) to enable
migration from SAN islands to enterprise-wide storage networks.
• Integrated Cisco Storage Media Encryption (SME) as distributed fabric service—Supported on the
Cisco MDS 18/4-Port Multiservice Module, Cisco SME encrypts data at rest on heterogeneous tape
drives and virtual tape libraries (VTLs) in a SAN environment using secure IEEE standard
Advanced Encryption Standard (AES) 256-bit algorithms. Cisco MDS 18/4-Port Multiservice
Module helps ensure ease of deployment, scalability, and high availability by using innovative
technology to transparently offer Cisco SME capabilities to any device connected to the fabric
without the need for reconfiguration or rewiring. Cisco SME provisioning is integrated into the
Cisco Fabric Manager; no additional software is required. Cisco SME key management can be
provided by either the Cisco Key Management Center (KMC) or with RSA Key Manager for the
Datacenter from RSA, the Security Division of EMC.
• Open platform for intelligent storage applications—Provides the intelligent services necessary for
hosting and/or accelerating storage applications such as network-hosted volume management, data
migration and backup.
• Integrated hardware-based VSANs and Inter-VSAN Routing (IVR)—Enables deployment of
large-scale multi-site and heterogeneous SAN topologies. Integration into port-level hardware
allows any port within a system or fabric to be partitioned into any VSAN. Integrated
hardware-based inter-VSAN routing provides line-rate routing between any ports within a system
or fabric without the need for external routing appliances.
• Advanced FICON services—Supports 1/2/4-Gbps FICON environments, including cascaded
FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port
ID virtualization for mainframe Linux partitions. CUP (Control Unit Port) support enables in-band
management of Cisco MDS 9000 Family switches from the mainframe management console.
• Comprehensive security framework—Supports RADIUS and TACACS+, Fibre Channel Security
Protocol (FC-SP), Secure File Transfer Protocol (SFTP), Secure Shell (SSH) Protocol and Simple
Network Management Protocol Version 3 (SNMPv3) implementing Advanced Encryption Standard
(AES), VSANs, hardware-enforced zoning, ACLs, and per-VSAN role-based access control.
• Sophisticated diagnostics—Provides intelligent diagnostics, protocol decoding, and network
analysis tools as well as integrated Call Home capability for added reliability, faster problem
resolution, and reduced service costs.


• Unified SAN management—The Cisco MDS 9000 Family includes built-in storage network
management with all features available through a command-line interface (CLI) or Cisco Fabric
Manager, a centralized management tool that simplifies management of multiple switches and
fabrics. Integration with third party storage management platforms allows seamless interaction with
existing management tools.
• Cisco TrustSec Fibre Channel Link Encryption—Delivers transparent, hardware-based, line-rate
encryption of Fibre Channel data between any Cisco MDS 9000 Family 8-Gbps modules.
For more information on the Cisco MDS 9506 Multilayer Director, see:
http://www.cisco.com/en/US/products/hw/ps4159/ps4358/ps5395/index.html
For more information on the Cisco MDS 9500 Series Multilayer Directors, see:
http://www.cisco.com/en/US/products/ps5990/index.html
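The chassis and rack port counts quoted in the scalability and compact-design bullets above follow from simple arithmetic; the sketch below reproduces it. The four payload slots and the 48-port switching-module density are Cisco figures for this generation of the MDS 9506, assumed here rather than stated in this document.

```python
# Reproduce the MDS 9506 port-density figures quoted above.
CHASSIS_RU = 7            # seven-rack-unit chassis
RACK_RU = 42              # standard rack
PAYLOAD_SLOTS = 4         # 6 slots minus 2 supervisor slots (assumed 9506 layout)
PORTS_PER_MODULE = 48     # densest Fibre Channel switching module of that generation (assumed)

ports_per_chassis = PAYLOAD_SLOTS * PORTS_PER_MODULE    # 192, matching the chassis figure
chassis_per_rack = RACK_RU // CHASSIS_RU                # 6, matching "up to six directors per rack"
ports_per_rack = ports_per_chassis * chassis_per_rack   # 1152, matching the per-rack figure

print(ports_per_chassis, chassis_per_rack, ports_per_rack)
```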

Vblock 1 SAN Configuration

• (2) Cisco MDS 9506 (optionally 2 Cisco MDS 9222i)


– (8) 4 Gb N-ports to each Fabric Interconnect
– (4-8) 4 Gb N-ports to each CLARiiON Storage Processor

Vblock 2 SAN Configuration

• (2) Cisco MDS 9506


– (8) 4 Gb N-ports to each Fabric Interconnect
– (8-16) 4 Gb N-ports to each V-Max Storage Processor
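For capacity planning it can help to turn these port counts into aggregate bandwidth. The sketch below does so from the figures above; it assumes the full 4 Gb/s line rate per port and ignores protocol overhead and multipathing behavior.

```python
# Aggregate SAN bandwidth implied by the N-port counts listed above.
LINK_GBPS = 4  # 4 Gb Fibre Channel

def gbps(ports):
    return ports * LINK_GBPS

# Vblock 1
print("Vblock 1, per fabric interconnect      :", gbps(8), "Gb/s (x2 interconnects =", gbps(8) * 2, "Gb/s)")
print("Vblock 1, per CLARiiON storage processor:", gbps(4), "to", gbps(8), "Gb/s")
# Vblock 2
print("Vblock 2, per fabric interconnect      :", gbps(8), "Gb/s (x2 interconnects =", gbps(8) * 2, "Gb/s)")
print("Vblock 2, per V-Max front end          :", gbps(8), "to", gbps(16), "Gb/s")
```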

Storage Design Considerations


• Balanced configuration
– Capacity, connectivity, and workload (IOPS and MB/s); a rough sizing sketch follows this list
• Availability
– Enterprise class storage
– RAID protection
– Extensive remote replication capabilities using MirrorView and SRDF
• Predictable Performance
– Large cache
– Tiered storage including Enterprise Flash Drives
• Ease of deployment and management
– Template based provisioning
– Wizards
– Fully Automated Storage Tiering (FAST)
– Virtual Provisioning
– Local replication capability using SnapView and TimeFinder
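A balanced configuration means matching back-end drive capability to the expected host workload. The sketch below is a rough, illustrative sizing check; the per-drive IOPS values are common rules of thumb rather than EMC specifications, and the drive mix, read/write split, and RAID write penalty are assumptions for the example.

```python
# Rough, illustrative sizing check for a balanced configuration.
# Per-drive IOPS figures are rules of thumb, not EMC specifications.
IOPS_PER_DRIVE = {"EFD": 2500, "FC_15K": 180, "SATA": 80}

def backend_iops(drive_counts):
    return sum(IOPS_PER_DRIVE[t] * n for t, n in drive_counts.items())

# Hypothetical tiered pool behind one Vblock
pool = {"EFD": 8, "FC_15K": 60, "SATA": 32}
raw = backend_iops(pool)
read_frac, write_penalty = 0.7, 4          # assumed 70/30 mix, RAID 5 write penalty
host_iops = raw / (read_frac + (1 - read_frac) * write_penalty)

print(f"raw back-end capability : {raw:,} IOPS")
print(f"approx. host IOPS served: {int(host_iops):,}")
```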


Virtualization

VMware vSphere/vCenter
• VMware vSphere 4 is the virtualized infrastructure for the Vblock
– Virtualizes all application servers
– Provides VMware High Availability (HA) and Distributed Resource Scheduler (DRS)
• Templates enable rapid provisioning (see the provisioning sketch below)
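Template-based provisioning can also be driven programmatically against vCenter. The fragment below is a minimal sketch using the pyVmomi Python bindings for the vSphere API; the vCenter address, credentials, and the template, resource pool, and VM names are placeholders, and error handling and SSL details are omitted.

```python
# Minimal sketch: clone a VM from a template through the vSphere API (pyVmomi).
# Host, credentials, and inventory object names below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

template = find(vim.VirtualMachine, "rhel5-base-template")
pool     = find(vim.ResourcePool,   "Resources")
folder   = template.parent                      # place the clone next to the template

spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=pool),
    powerOn=True,
)
task = template.Clone(folder=folder, name="app-server-01", spec=spec)
print("clone submitted:", task.info.state)
Disconnect(si)
```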

Table 11 Vblock Core to VM Ratios

Configuration | # of VMs Based on Minimum UCS Configuration | # of VMs Based on Maximum UCS Configuration
Vblock 1, 1:4 core-to-VM ratio (1920 MB memory/VM) | 512 | 1024
Vblock 1, 1:16 core-to-VM ratio (480 MB memory/VM) | 2048 | 4096
Vblock 2, 1:4 core-to-VM ratio (3072 MB memory/VM) | 1024 | 2048
Vblock 2, 1:16 core-to-VM ratio (768 MB memory/VM) | 4096 | 8192
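The VM counts in Table 11 follow from the blade and core counts of the corresponding UCS configurations. The sketch below reproduces them, assuming two quad-core sockets (eight cores) per B-Series blade and eight blades per chassis, which is consistent with the configurations described in this document.

```python
# Reproduce the Table 11 VM counts from core counts and consolidation ratios.
CORES_PER_BLADE = 8       # two quad-core sockets per B-Series blade (assumed)
BLADES_PER_CHASSIS = 8

configs = {                # chassis count per configuration
    "Vblock 1 min": 2,
    "Vblock 1 max": 4,
    "Vblock 2 min": 4,
    "Vblock 2 max": 8,
}

for name, chassis in configs.items():
    cores = chassis * BLADES_PER_CHASSIS * CORES_PER_BLADE
    print(f"{name}: {cores} cores -> {cores * 4} VMs at 1:4, {cores * 16} VMs at 1:16")
```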

VMware vSphere and vCenter Server

VMware vSphere, the industry's most reliable platform for data center virtualization, together with
vCenter Server offers high levels of availability and responsiveness for all applications and services.
By decoupling business-critical applications from the underlying hardware, you gain the flexibility and
reliability to optimize IT service delivery and meet application service-level agreements at the lowest
total cost per application workload.
VMware vCenter Server provides a scalable and extensible platform that forms the foundation for
virtualization management (http://www.vmware.com/solutions/virtualization-management/). VMware
vCenter Server, formerly VMware VirtualCenter, centrally manages VMware vSphere
(http://www.vmware.com/products/vsphere/) environments, giving IT administrators dramatically
improved control over the virtual environment compared to other management platforms. VMware
vCenter Server:
• Provides centralized control and visibility at every level of virtual infrastructure.
• Unlocks the power of vSphere through proactive management.
• Is a scalable and extensible management platform with a broad partner ecosystem.
For more information, see http://www.vmware.com/products/.


Physical Architecture

Power, Cooling, and Space Requirements


Table 12 and Table 13 contain power and cooling specifications.

Table 12 Vblock Power and Cooling Specifications—1


Equipment Description | Power (kVA) | Cooling (BTU/hr) | Space (RU) | Weight (lb) | Weight (kg) | Voltage (VAC) | Frequency (Hz) | Recommended Circuit Breaker Rack Input (Amps) | AC Power Connections | Power Connector | Power Factor
Cisco UCS Chassis (5108 Chassis and 8 x B200 M1 Blades) | 4.25 | 12140 | 6 | 255 | 115.66 | 250 | 50-60 | 20 | 2 per chassis | CAB-L520P-C19-US, NEMA L5-20, 20A, 250 VAC | -
Cisco UCS 6120XP Fabric Interconnect | 0.6875 | 1536 | 1 | 35 | 15.875 | 100-240 | 50-60 | 20 | 2 per unit | IEC320-C14 | -
Cisco UCS 6140XP Fabric Interconnect | 0.9375 | 2561 | 2 | 50 | 22.68 | 100-240 | 50-60 | 20 | 2 per unit | IEC320-C14 | -
CLARiiON CX4-480 Service Processor (Pair) and Standby Power Supplies | 0.355 | 990 | 3 | 99.5 | 45.4 | 100-240 | 47-63 | 7.8 | 2 for Standby Power Supplies | IEC320-C14 | 0.82
CLARiiON CX4-480 DAE | 0.44 | 1450 | 3 | 68 | 30.9 | 100-240 | 47-63 | 10 | 2 per DAE | IEC320-C14 | 0.98
CLARiiON 40U Rack Enclosure | 11.5 | 32175 | 40 | 380 | 173 | 200-240 | 47-63 | 30 | 2 per domain | NEMA L6-30P | -
Symmetrix V-Max SE (1 System and 1 Storage Bay) | 10.4 | 34000 | 88 | 4008 | 1818 | 200-240 | 50-60 | 50 (3-phase Delta, 4-wire) | 2 per bay | CS8365C (3-phase Delta, 4-wire) | -
Symmetrix V-Max System Bay (4 Engine) | 4.1 | 13700 | 44 | 1830 | 830 | 200-240 | 50-60 | 50 (3-phase Delta, 4-wire) | 2 per bay | CS8365C (3-phase Delta, 4-wire) | -
Symmetrix V-Max System Bay (8 Engine) | 7.8 | 26300 | 44 | 2774 | 1258.3 | 200-240 | 50-60 | 50 (3-phase Delta, 4-wire) | 2 per bay | CS8365C (3-phase Delta, 4-wire) | -
Symmetrix V-Max Storage Bay | 6.1 | 19800 | 44 | 2144 | 972.5 | 200-240 | 50-60 | 50 (3-phase Delta, 4-wire) | 2 per bay | CS8365C (3-phase Delta, 4-wire) | -
Cisco MDS 9222i | 1.05625 | 2884 | 3 | 53.5 | 24.3 | 100-240 | 50-60 | 16 | 2 per chassis | Power cord, 125 VAC 15A, NEMA 5-15 plug (North America) | -
Cisco MDS 9506 | 2.7 | 12000 | 7 | 124 | 56.69 | 100-240 | 50-60 | 12 | 2 per chassis | NEMA L6-20P, IEC320-C19 | -
Cisco Catalyst 6504-E | 2.7 | 12000 | 5 | 40 | 18.18 | 100-240 | 48-60 | 16 | 2 per chassis | NEMA L6-20P, IEC320-C19 | -
Cisco ACE 4710 Load Balancer | 0.128 | 1560 | 1 | - | - | - | - | - | 1 | IEC320-C14 | -
Celerra NS-G2 | 0.52 | 3236 | 2 | 74.91 | 34.03 | 100-240 | 47-63 | 15 | 1 per power supply | IEC320-C14 | 0.98
Celerra NS-G8-4X-Blade System | 1.7 | 5500 | 9 | 228 | 104 | 180-240 (single phase) | 47-63 | 20 for each phase | 1 per power supply | IEC320-C14 | 0.95
Celerra NS-G8-6X-Blade System | 2.5 | 8100 | 13 | 333 | 151 | 180-240 (single phase) | 47-63 | 20 for each phase | 1 per power supply | IEC320-C14 | 0.95
Celerra NS-G8-8X-Blade System | 3.3 | 10600 | 17 | 438 | 199 | 180-240 (single phase) | 47-63 | 20 for each phase | 1 per power supply | IEC320-C14 | 0.95

Equipment Description | Vblock 1 Min: # / kVA / BTU/hr / RU | Vblock 1 Max: # / kVA / BTU/hr / RU | Vblock 2 Min: # / kVA / BTU/hr / RU | Vblock 2 Max: # / kVA / BTU/hr / RU
UCS 6120XP Fabric Interconnect | 2 / 1.375 / 3072 / 2 | 2 / 1.375 / 3072 / 2 | 2 / 1.375 / 3072 / 2 | 4 / 2.75 / 6144 / 4
UCS Chassis (5108 Chassis and B200 M1 Blades) | 2 / 8.5 / 24280 / 12 | 4 / 17 / 48560 / 24 | 4 / 17 / 48560 / 24 | 8 / 34 / 97120 / 48
MDS 9506 | 2 / 5.4 / 24000 / 14 | 2 / 5.4 / 24000 / 14 | 2 / 5.4 / 24000 / 14 | 2 / 5.4 / 24000 / 14
Catalyst 6504 | 2 / 5.4 / 24000 / 10 | 2 / 5.4 / 24000 / 10 | 2 / 5.4 / 24000 / 10 | 2 / 5.4 / 24000 / 10
ACE Load Balancer | 2 / 0.256 / 3120 / 2 | 2 / 0.256 / 3120 / 2 | 2 / 0.256 / 3120 / 2 | 2 / 0.256 / 3120 / 2
CLARiiON CX4-480 Processor (Pair) | 1 / 0.355 / 990 / 3 | 1 / 0.355 / 990 / 3 | - | -
CLARiiON CX4-480 DAEs | 7 / 3.08 / 10150 / 21 | 12 / 5.28 / 17400 / 36 | - | -
V-Max SE (2 Bay Configuration) | - | - | 1 / 10.4 / 34000 / 88 | -
V-Max System Bay (4 Engine) | - | - | - | 1 / 4.1 / 13700 / 44
V-Max Storage Bay | - | - | - | 1 / 6.1 / 19800 / 44
Celerra NS-G2 | 1 / 0.52 / 3236 / 2 | 1 / 0.52 / 3236 / 2 | - | -
Celerra NS-G8-8X-Blade System | - | - | 1 / 3.3 / 10600 / 17 | 1 / 3.3 / 10600 / 17
Total | 23.511 kVA / 89776 BTU/hr / 64 RU | 34.211 kVA / 121306 BTU/hr / 91 RU | 41.756 kVA / 144280 BTU/hr / 155 RU | 58.556 kVA / 192340 BTU/hr / 179 RU
Estimated number of racks | 2 | 3 | 4 | 5

Notes: 1 Rack Unit (RU) = 1.75 inches; 1 kilowatt = 3,412.14163 BTU/hr; 1 standard rack = 45 RU; 1 CLARiiON rack = 40 RU; 1 V-Max bay = 44 RU.
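The conversion factors above let you move between the electrical and thermal figures in these tables. The helper below applies them; treating kVA times an assumed power factor as kW of heat dissipated is a simplification for illustration.

```python
# Convert a rack's electrical load to an approximate cooling load (factor from the notes above).
BTU_PER_KW = 3412.14163

def cooling_btu_per_hr(load_kva, power_factor=1.0):
    """Approximate heat load, treating kVA x power factor as kW dissipated."""
    return load_kva * power_factor * BTU_PER_KW

# Example: a UCS chassis drawing 4.25 kVA (Table 12)
print(round(cooling_btu_per_hr(4.25)))        # ~14,502 BTU/hr upper bound at unity power factor
print(round(cooling_btu_per_hr(4.25, 0.85)))  # ~12,326 BTU/hr at an assumed PF of 0.85, near the 12,140 listed
```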


Table 13 Vblock Power and Cooling Specifications—2

Requirement (for all Vblock chassis) | UCS Rack (2 x 6120XP and 2 x 5108 Chassis) | UCS Rack (2 x 6120XP and 4 x 5108 Chassis) | 2 x Cisco MDS 9506 and 2 x Catalyst 6504 Rack | V-Max System Bay | V-Max Storage Bay | CX4-480 (System) | CX4-480 (DAE) | CX4-480 (Rack Enclosure Only) | Celerra NS-G2 | Celerra NS-G8
Power load for each rack (startup power surge) | 14 kVA | 27 kVA | 10.8 kVA | 4.1 kVA | 6.7 kVA | 0.355 kVA | 3 kVA (7 DAEs) to 5.28 kVA (12 DAEs) | 6 kVA | 20A to 40A RMS at 15A startup surge | 20A to 40A RMS at 15A startup surge
Power load for each rack (max power draw) | 14 kVA | 27 kVA | 10.8 kVA | 4.1 kVA | 6.7 kVA | 0.355 kVA | 2 kVA (7 DAEs) to 5.28 kVA (12 DAEs) | 10.5 kVA | 0.520 kVA | 1.6 kVA to 3.3 kVA
Power load for each rack (average power draw) | 14 kVA | 19 kVA | 10.8 kVA | 4.1 kVA | 6.7 kVA | 0.355 kVA | 3 kVA (7 DAEs) to 5.28 kVA (12 DAEs) | 6 kVA | 0.520 kVA | 1.6 kVA to 3.3 kVA
Heat load of each rack (max) | 27352 BTU/hr | 51632 BTU/hr | 48000 BTU/hr | 13700 BTU/hr | 21800 BTU/hr | 990 BTU/hr | 1440 BTU/hr | 32175 BTU/hr | 3236 BTU/hr | 5500 BTU/hr to 10600 BTU/hr
Confirm any rack is suitable | 4-post rack | 4-post rack | 2-post rack OK | No | No | Yes | Yes | No | Yes | Yes
Confirm can be powered from above or below (current Symmetrix cannot) | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK | Above and below OK
Footprint of any non-rack devices | Rackable | Rackable | Rackable | 76.66 in x 30.2 in x 41.88 in | 76.66 in x 30.2 in x 41.88 in | Rackable | Rackable | 75 in x 24 in x 39 in | Rackable | Rackable
Connectivity requirements of each rack (fiber LC-LC, SAN) | 4 | 4 | depends | depends | none | depends | no | no | depends | depends
Connectivity requirements of each rack (fiber LC-LC, IP) | 2 | 2 | depends | depends | none | depends | no | no | none | none
Any non-fibre (Ethernet) connectivity needed | 2 | 2 | depends | 1 for service processor | none | 2 for service processors | no | - | 1 control station | 1 control station
Recommended power strip requirements (amps) | 15.5A for each power supply | 15.5A for each power supply | 12A for MDS and 16A for Catalyst | 50A | 50A | 7.8A | 10A | 30A each | 15A | 20A for each phase
Recommended number of power plugs | 4+4 | 8+4 | 8 | 2 | 2 | 2 | 2 | 2 | 2 | 4 to 10 (depends)
Recommended power plug type | CAB-L520P-C19-US, NEMA L5-20, 20A, 250 VAC and C14 for Fabric Interconnects | CAB-L520P-C19-US, NEMA L5-20, 20A, 250 VAC and C14 for Fabric Interconnects | NEMA L6-20P, IEC320-C19 | CS8365C (3-phase Delta, 4-wire) | CS8365C (3-phase Delta, 4-wire) | IEC320-C14 | IEC320-C14 | NEMA L6-30P | IEC320-C14 | IEC320-C14
Recommended power strip | 2 x CDUs (ServerTech.com model# CW-24V2-L30M, 24 x IEC320/C13 plugs, NEMA L6-30P) | 2 x CDUs (ServerTech.com model# CW-24V2-L30M, 24 x IEC320/C13 plugs, NEMA L6-30P) | L6-30A power strip | EMC bay | EMC bay | L6-30A power strip with C13 receptacles | L6-30A power strip with C13 receptacles | NEMA L6-30P | L6-30A power strip with C13 receptacles | L6-30A power strip with C13 receptacles
Any colocation requirements (e.g., disk must be attached or within 100 meters of devices on OM3)? | none | Shark Rack Part# T2EC014-A | none | Shark Rack Part# T2EC014-A | none | none | none | none | none | none

A summary of power, cooling, and space requirements for Vblock 1 and Vblock 2 is shown in Table 14.

Table 14 Summary of Power, Cooling, and Space Requirements

Configuration | Minimum Configuration | Maximum Configuration
Vblock 1 Power | 22 kVA | 29 kVA
Vblock 1 Cooling | 78132 BTU/hr | 109662 BTU/hr
Vblock 1 Space | 66 Rack Units (RU), 2 racks | 69 Rack Units (RU), 3 racks
Vblock 2 Power | 32 kVA | 45 kVA
Vblock 2 Cooling | 121536 BTU/hr | 170096 BTU/hr
Vblock 2 Space | 70 Rack Units (RU), 5 racks | 112 Rack Units (RU), 5 racks

Rack Layouts
Rack layouts are provided for:
• Vblock 1 Minimum Configuration—Rack Layout Front View
• Vblock 1 Maximum Configuration—Rack Layout Front View
• Vblock 2 Minimum Configuration—Rack Layout Front View
• Vblock 2 Maximum Configuration—Rack Layout Front View


Vblock 1 Minimum Configuration—Rack Layout Front View


• 2 UCS Chassis, 8 Blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (Total 480 GB RAM), 2 * 73 GB
Internal HDD
• 2 UCS 6120 Fabric Interconnect, 20 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 Ports 4 Gb Fibre Channel)
• CLARiiON CX4-480, 2 Controllers
• Celerra NS-G2, 2 Data Movers (optional)

Figure 13 Vblock 1 Minimum Configuration—Rack Layout Front View

[Rack layout diagram: two 42U racks; one holds the MDS switches, two UCS 6120 fabric interconnects, and two UCS 5108 chassis, the other holds the CLARiiON CX4-480, its DAEs, and the Celerra NS-G2; artwork not reproduced.]


Vblock 1 Maximum Configuration—Rack Layout Front View


• 4 UCS Chassis, 8 Blades, 6 * 48 GB RAM + 2 * 96 GB RAM (Total 1920 GB RAM), 2 * 73 GB
Internal HDD
• 2 UCS 6120 Fabric Interconnect, 20 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 Ports 4 Gb Fibre Channel)
• CLARiiON CX4-480, 2 Controllers
• Celerra NS-G2, 2 Data Movers (optional)

Figure 14 Vblock 1 Maximum Configuration—Rack Layout Front View

[Rack layout diagram: three 42U racks holding two UCS 6120 fabric interconnects, the MDS switches, four UCS 5108 chassis, the Celerra NS-G2, and the CLARiiON CX4-480 with its DAEs; artwork not reproduced.]


Vblock 2 Minimum Configuration—Rack Layout Front View


• 4 UCS Chassis, 8 Blades, 96 GB RAM, 2 * 73 GB Disk Drives
• 2 UCS 6140 Fabric Interconnect, 40 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel
• V-Max, 2 Engines
• Celerra NS-G8, 2 to 8 Data Movers (optional)

Figure 15 Vblock 2 Minimum Configuration—Rack Layout Front View

[Rack layout diagram: five 42U racks; four UCS 5108 chassis with two UCS 6140 fabric interconnects, two MDS switches with one NAS gateway, a V-Max system bay with 2 engines, and two storage bays with 13 DAEs each; artwork not reproduced.]


Vblock 2 Maximum Configuration—Rack Layout Front View


• 8 UCS Chassis, 8 Blades, 96 GB RAM, 2 * 73 GB Disk Drives
• 2 UCS 6140 Fabric Interconnect, 40 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel
• V-Max, 2 Engines
• Celerra NS-G8, 2 to 8 Data Movers (optional)

Figure 16 Vblock 2 Maximum Configuration—Rack Layout Front View

[Rack layout diagram: 42U racks holding four UCS 5108 chassis with two UCS 6140 fabric interconnects, four more UCS 5108 chassis with two MDS switches and one NAS gateway, a V-Max system bay with 2 engines, and four racks of storage bays with 52 DAEs; artwork not reproduced.]

References
• VMware View Reference Architecture
http://www.vmware.com/resources/techresources/1084
• VMware View 3.0
http://www.vmware.com/products/view/
• Cisco UCS
http://www.cisco.com/go/unifiedcomputing
• Cisco Data Center Solutions
http://www.cisco.com/go/datacenter


• Cisco Validated Designs


http://www.cisco.com/go/designzone
• EMC Symmetrix V-Max System
http://www.emc.com/products/detail/hardware/symmetrix-v-max.htm
• EMC Symmetrix V-Max System and VMware Virtual Infrastructure
http://www.emc.com/collateral/hardware/white-papers/h6209-symmetrix-v-max-vmware-virtual-i
nfrastructure-wp.pdf
• Using EMC Symmetrix Storage in VMware Virtual Infrastructure
http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-
wp-ldv.pdf

Cisco Systems, Inc., 170 West Tasman Drive, San Jose, CA 95134 USA. Tel: 408-526-4000 or 800-553-6387 (NETS). Fax: 408-527-0883. www.cisco.com
EMC Corporation, 176 South Street, Hopkinton, MA 01748 USA. Tel: 508-435-1000. www.emc.com
VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304 USA. Tel: 650-427-5000 or 877-486-9273. Fax: 650-427-5001. www.vmware.com

Copyright © 2010 Cisco Systems, Inc. All rights reserved. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or
Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company.
Copyright © 2010 EMC Corporation. All rights reserved. EMC2, EMC, Celerra, CLARiiON, Ionix, Navisphere, PowerPath, Symmetrix, V-Max,
Virtual Matrix, and where information lives are registered trademarks or trademarks of EMC Corporation in the United States or other
countries. All other trademarks used herein are the property of their respective owners. Published in the USA. P/N h6935
Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark
of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.
