Reference Architecture
Introduction
Audience
The target audience for this document includes sales engineers, field consultants, advanced services
specialists, and customers who want to deploy a virtualized infrastructure using VMware
vSphere/vCenter on Cisco UCS connected to EMC V-Max and CLARiiON storage products. The
document also explores potential business benefits of interest to senior executives.
Objectives
This document is intended to describe:
• The role of the Vblock within a data center
• The capabilities and benefits of the Vblock
• The components of the two types of Vblock: Vblock 1 and Vblock 2
This document also highlights the collaborative efforts of three partner companies—EMC, VMware, and
Cisco—working together on a common goal of providing proven technology to customers.
Vblock Overview
IT is undergoing a transformation. The current “accidental architecture” of IT increases procurement,
management costs, and complexity while making it difficult to meet customer service level agreements.
This makes IT less responsive to the business and creates the perception of IT being a cost center. IT is
now moving towards a “private cloud” model, which is a new model for delivering IT as a service,
whether that service is provided internally (IT today), externally (service provider), or in combination.
This new model requires a new way of thinking about both the underlying technology and the way IT is
delivered for customer success.
While the need for a new IT model has never been more clear, navigating the path to that model has never
been more complicated. The benefits of private clouds are capturing the collective imagination of IT
architects and IT consumers in organizations of all sizes around the world. The realities of outdated
technologies, rampant incremental approaches, and the absence of a compelling end-state architecture
are impeding adoption by customers.
By harnessing the power of virtualization, private clouds place considerable business benefits within
reach. These include:
• Business enablement—Increased business agility and responsiveness to changing priorities; speed
of deployment and the ability to address the scale of global operations with business innovation
• Service-based business models—Ability to operate IT as a service
• Facilities optimization—Lower energy usage; better (less) use of datacenter real estate
• IT budget savings—Efficient use of resources through consolidation and simplification
• Reduction in complexity—Moving away from fragmented, ‘accidental architectures’ to integrated,
optimized technology that lowers risk, increases speed and produces predictable outcomes
• Flexibility—Ability of IT to gain responsiveness and scalability through federation to cloud service
providers while maintaining enterprise-required policy and control
Moore’s Law (1965) was, in the 1980s and 1990s, overtaken by an unwritten rule that everyone knew but few lamented loudly enough: enterprise IT doubles in complexity and total cost of ownership (TCO) every five years, and the pressure points pinch IT a little harder each time.
Enterprise IT solutions over the past 30 years have become more costly to analyze and design, procure,
customize, integrate, inter-operate, scale, service, and maintain. This is due to the inherent complexity
in each of these lifecycle stages of the various solutions.
Within the last decade, we have seen the rise of diverse inter-networks—variously called “fabrics,”
“grids,” and, generically, the “cloud”—constructed on commodity hardware, heavily yet selectively
service-oriented with a scale of virtualized power never before contemplated, housed in massive data
centers on- and off-premises.
Yet amid the buzzword din—onshoring/offshoring, in-/out-/co-sourcing, blades and RAIDs, LANs and
SANs, massive scale and hand-held computing—virtualization (an abiding computing capability since
early mainframe days) has met secure networking (around since the ARPANET), and the two, now
matured, form the basis for the next wave of computing.
It has only been in the past several years that the notion of “cloud computing”—infrastructure, software,
or whatever-business-needs as an IT service—has been taken seriously in its own right, championed by
pioneers who have proved the model’s viability even if on too limited a basis.
With enterprise-level credibility, enabled by the best players in the IT industry, the next wave of
computing will be ushered in on terms that make business sense to the business savvy.
Vblocks are architectures that are pre-tested, fully integrated, and scalable. They are characterized by:
• Repeatable “units” of construction based on matched performance and operational characteristics and discrete requirements for power, space, and cooling
• Repeatable design patterns that facilitate rapid deployment, integration, and scalability
• A “facilities to the workload” design, scaled for the highest efficiencies in virtualization and workload re-platforming
• An extensible management and orchestration model based on industry-standard tools, APIs, and methods
• Construction that contains, manages, and mitigates failure scenarios in hardware and software environments
• The ability to be clustered for availability or aggregated for scalability, with each Vblock remaining viable on its own
• Fault and service isolation—the failure of one Vblock does not impact the operation of other Vblocks (service-level degradation may occur unless availability or continuity services are present)
Vblocks offer deterministic performance and predictable architecture:
• Predictable SLA—Granular SLA measurement and assurance
• Deterministic space and weight—Floor tiles become unit of capacity planning
• Power and cooling—Consistent power consumption and cooling (kWh/BTU) per unit
• Pre-determined capacity and scalability—Uniform workload distribution and mobility
• Deterministic fault and security isolation
Vblock benefits include:
• Accelerate the journey to pervasive virtualization and private cloud computing while lowering risk
and operating expenses
• Ensure security and minimize risk with certification paths
• Support and manage SLAs
– Resource metering and reporting
– Configuration and provisioning
– Resource utilization
• Vblock is a validated platform that enables seamless extension of the environment
• Vblock O/S and application support:
– Vblock accelerates virtualization of applications by standardizing IT infrastructure and IT
processes
– Broad range of O/S support
– All current applications that work in a VMware environment also work in a Vblock environment
– Vblock validated applications include:
– SAP
– VMware View 4
– Oracle RAC
– Exchange 2007
– SharePoint and Web applications
Vblock is a scalable platform for building solutions, layering compute and network (Cisco UCS with the Cisco Nexus 1000V and VMware vSphere), SAN (Cisco MDS 9506), and storage (EMC CLARiiON CX4-480 or Symmetrix V-Max).
[Figure: The Vblock solution stack: compute/network, SAN, and storage layers.]
Vblock 1 Components
Vblock 1 is a mid-sized configuration that provides a broad range of IT capabilities for organizations of
all sizes. Typical use cases include shared services, such as e-mail, file and print, virtual desktops, etc.
Note: The network layer, represented in the figure by the Cisco Nexus 7000, is not a Vblock component. EMC Ionix is optional and available at additional cost.
Vblock 1 components:
• Compute
– 16-32 Cisco UCS B-series blades
– 128-256 Cores
– 960-1920 GB Memory
• Network
– Cisco Nexus 1000V
– The UCS uses 6100 series fabric interconnects, which carry the network and storage (IP-based)
traffic from the blades to the connected SAN and LAN
• Storage
– EMC CLARiiON CX4-480
– 38-64 TB capacity
– Enterprise Flash Drives (EFD), FC, and SATA Drives
– iSCSI and SAN
– Celerra NS-G2 (optional)
– Cisco MDS 9506 (optionally MDS 9222i)
• VMware vSphere 4.0/vCenter 4.0
• Management
– EMC Ionix Unified Infrastructure Manager (optional)
– VMware vCenter
– EMC Navisphere
– EMC PowerPath/VE
– Cisco UCS Manager
– Cisco Fabric Manager
Vblock 2 Components
Vblock 2 is a high-end configuration that is extensible to meet the most demanding IT needs. It is
optimized for performance to support high-intensity application environments for enterprises and
service providers. Typical use cases include business critical ERP, CRM systems, etc.
Vblock 2 components:
• Compute
– 32-64 Cisco UCS B-series blades
– 256-512 Cores
– 3072-6144 GB memory
• Network
– Cisco Nexus 1000V
– The UCS uses 6100 series fabric interconnects, which carry the network and storage (IP-based)
traffic from the blades to the connected SAN and LAN
• Storage
– EMC Symmetrix V-Max
– 96-146 TB capacity
– EFD, FC, and SATA drives
– iSCSI and SAN
– Celerra NS-G8 (optional)
– Cisco MDS 9506
• VMware vSphere 4.0/vCenter 4.0
• Management
– EMC Ionix Unified Infrastructure Manager (optional)
– VMware vCenter
– EMC Symmetrix Management console
– EMC PowerPath/VE
– Cisco UCS Manager
– Cisco Fabric Manager
[Figure: Vblock physical architecture. The UCS blade chassis connects through LAN uplinks and 10/100/1000 management links to the data center network, and to EMC storage (CLARiiON CX4-480 or Symmetrix V-Max).]
A Vblock consists of a minimum and maximum number of components that offer balanced I/O,
bandwidth, and storage capacity relative to the compute and storage arrays offered. Each Vblock is a
fully redundant, autonomous system with 1+1 or N+1 redundancy by default. The minimum and
maximum configurations for Vblock 1 and Vblock 2 are listed in Table 1, Table 2, and Table 3.
Table 1    UCS Hardware

UCS Hardware                    Type 1 Minimum   Type 1 Maximum   Type 2 Minimum   Type 2 Maximum
UCS 5100 Chassis                2                4                4                8
UCS B-200 Series Blades         16               32               32               64
UCS 6120 Fabric Interconnect    2                2                —                —
UCS 6140 Fabric Interconnect    —                —                2                2
Table 2 Software
Table 3 Network
In Vblock 1, each UCS chassis contains B-200 blades, six (6) of which have 48 GB RAM and two (2) of
which have 96 GB RAM. This provides good price/performance and supports some memory-intensive
applications, such as in-memory databases, within the Vblock definition. For Vblock 2, all B-200 series
blades have been defined with 96 GB RAM by default because of the system's performance capabilities:
it is more likely to be running very dense VM populations or memory-intensive, mission-critical applications.
The amount of RAM per blade within either a Vblock 1 or Vblock 2 may be adjusted if you have specific
requirements within the definition of a Vblock. For example, if you need a mixture of RAM densities,
you can specify 32 GB, 48 GB, 72 GB, and 96 GB RAM options. This, however, requires careful
consideration of the operational environment and introduces some variance.
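As a quick cross-check, the blade mixes above reproduce the published memory ranges. The following sketch (illustrative only; the per-chassis mixes are the ones this document defines, and any other mix would be a hypothetical variation) computes aggregate Vblock memory:

    # Sketch: total Vblock memory for a given per-chassis blade/RAM mix.
    # Mixes below are from this document (Vblock 1: six 48 GB and two
    # 96 GB blades per chassis; Vblock 2: eight 96 GB blades per chassis).

    def chassis_memory_gb(blade_mix):
        """blade_mix maps RAM per blade (GB) to blade count in one chassis."""
        return sum(ram_gb * count for ram_gb, count in blade_mix.items())

    def vblock_memory_gb(blade_mix, chassis_count):
        return chassis_memory_gb(blade_mix) * chassis_count

    vblock1_mix = {48: 6, 96: 2}
    vblock2_mix = {96: 8}

    print(vblock_memory_gb(vblock1_mix, 2))   # 960 GB  -- Vblock 1 minimum
    print(vblock_memory_gb(vblock1_mix, 4))   # 1920 GB -- Vblock 1 maximum
    print(vblock_memory_gb(vblock2_mix, 4))   # 3072 GB -- Vblock 2 minimum
    print(vblock_memory_gb(vblock2_mix, 8))   # 6144 GB -- Vblock 2 maximum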
B-250 series modules were not tested, but will be a future option. If B-250 series modules are a
requirement for memory densities greater than 96 GB per module, this may be accommodated within
Vblock 1 and Vblock 2 once testing and validation have been completed. Note that because the B-250 is a
full-slot module, it has density and performance impacts that need to be ascertained: the number of CPUs
per slot is reduced by 50 percent, which reduces IOPS and potentially disk capacity.
Within a Vblock 1, there are no hard disks on the B-200 series blades as all boot services and storage are
provided by the SAN. However, a small hard drive may be installed if local page memory is required for
vSphere. If the local disk is used for main storage or operating-system storage, the configuration is not
considered a Vblock and is a custom implementation at that point.
For Vblock 2, each B-200 series blade module has 72GB SATA drives for page memory purposes. If
required, these may be removed to reduce power and cooling overhead, increase MTBF, or save costs.
Each 61x0 has either 4 or 8 10GE/Unified Fabric uplinks to the aggregation layer, which is not part of
the Vblock and consists of Nexus 7000 (new build-out) or Catalyst 6500 (upgrade to an existing data
center) switches, and either 4 or 8 x 4G Fibre Channel connections to the SAN aggregation provided by
a pair of MDS 9506 director-class switches (SAN A and B support).
The MDS 9506 switches are recommended, but may optionally be exchanged for 9509s or 9513s to scale
capacity, or reduced to an MDS 9222i if less density is required; the MDS 9222i's performance may be
acceptable for small Vblock 1 implementations.
Figure 4 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 and Figure 5 illustrates the
interconnection of the Cisco MDS 9506 in Vblock 2.
For more information on the MDS 9222i and MDS 9506, see Storage Area Network—Cisco MDS Fibre
Channel Switch.
VMware vSphere 4 Enterprise Plus licenses are mandatory within all Vblock definitions (to enable the
Cisco Nexus 1000V and EMC PowerPath/VE), and per-CPU licensing is included within the defined
bill of materials. It is also acceptable for operating systems and applications to be run directly on the
B-200 series blades. It should be noted, however, that other hypervisors are not supported by Vblocks and
invalidate the Vblock support agreement.
Nexus 1000V and Enterprise Plus are mandatory components due to the inherent richness that they offer
in terms of policy control, segmentation, flexibility, and instrumentation.
Networking
None of the current Vblock definitions contains any form of network switch except for the MDS SAN
switches. The MDS 9000 series switches are necessary components that provide Fibre Channel connectivity
between the storage arrays and the UCS 61x0 series fabric interconnects, and ultimately the UCS B-200
series blades.
For upstream connectivity, the UCS 61x0s are connected using either 4 x 10GE/Unified Fabric (Type 1) or
8 x 10GE/Unified Fabric (Type 2) connections, which equates to an oversubscription factor of 4:1. There
is no provision for an intermediate layer of “access” switches at this time.
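To see where the 4:1 figure comes from, consider a minimal sketch. It assumes (our assumption, not stated in this document) that each B-200 blade presents one 10GE CNA port per fabric; at the minimum blade counts, 4 or 8 uplinks per fabric interconnect then yield the stated ratio:

    # Sketch: fabric-interconnect oversubscription, computed per fabric.
    # Assumption (ours): each blade contributes one 10 GE port per fabric.

    def oversubscription(blades, uplinks_10ge):
        edge_gbps = blades * 10          # blade-side bandwidth per fabric
        uplink_gbps = uplinks_10ge * 10  # 10 GE uplinks per fabric interconnect
        return edge_gbps / uplink_gbps

    print(oversubscription(blades=16, uplinks_10ge=4))  # 4.0 -- Vblock 1 minimum
    print(oversubscription(blades=32, uplinks_10ge=8))  # 4.0 -- Vblock 2 minimum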
If you require a Celerra NAS Gateway within the Vblock 1 (recommended), there are two possibilities:
• Connect the Celerra NAS Gateway to the Nexus 7000 aggregation layer directly
• Use a local Nexus 50x0 switch to provide connectivity
Figure 3 illustrates the interconnection of the EMC Celerra in Vblock.
[Figure 3: EMC Celerra in the Vblock. Celerra data movers (with cache) connect the physical disks to the IP SAN over 1/10 Gb Ethernet, serving the 5100 blade chassis through the 61x0 fabric interconnects.]
For more information, see NFS Datastores and Native File Services—EMC Celerra Gateway Family.
Storage
Storage capacity has been tuned to match the I/O performance of the attached UCS systems.
Additionally, some analysis of the likely underlying applications has also been taken into account to
characterize user or VM densities that are likely for a given Vblock. Obviously, these numbers are highly
variable based upon your use cases and requirements; the numbers are intended to provide guidance on
typical densities.
Figure 4 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1 and Figure 5
illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2.
[Figure 4: UCS connected to the CLARiiON CX4-480 through the 61x0 fabric interconnects.]
[Figure 5: UCS connected to the Symmetrix V-Max (two engine) through the 61x0 fabric interconnects, using SE (iSCSI) and FA (Fibre Channel) ports.]
Table 4 through Table 9 contain CLARiiON CX4-480 and Symmetrix V-Max system controller I/O and
bandwidth capacities, as well as installed disks and other configuration information.
Table 4    Storage

Storage             Drives (Minimum)   Drives (Maximum)   Capacity TB (Minimum)   Capacity TB (Maximum)
CLARiiON CX4-480    110                184                61                      97
Symmetrix V-Max     209                359                42                      221

NAS Gateway (recommended): NS-G2 for Vblock 1; NS-G8 for Vblock 2.
The CLARiiON system is configured with a mix of Flash, Fibre Channel, and SATA drives with
N+1 spares redundancy. This means that although the minimum Vblock 1 density is some 61 TB of raw
storage, 42 TB is usable once system spares and overheads are factored in.
Within the Vblock definitions, NAS access is recommended for vSphere. A NAS Gateway, while
optional, provides this service, including CIFS for applications. Although these have been tested, the
NAS Gateways have not been performance-validated for a pure NAS environment; further testing is
required to ensure that boot (PXE) as well as file access can be supported in a balanced fashion. It should
be noted that UCS does not currently support iSCSI boot of physical servers (VMs can boot over iSCSI
through vSphere), so this is neither a tested nor a validated solution.
In the interim, a Vblock 1 can support NAS with the provision that primary boot services are provided
across the SAN.
For Vblock 2, the characteristics of the system are such that it will host mission-critical applications
that require Fibre Channel access to maintain performance. Again, it is highly recommended that one or
two NAS Gateways be deployed for vSphere, with the exact number required being ascertained during
the Vblock planning phases.
EMC PowerPath/VE (PP/VE) provides several benefits in terms of performance, availability, and
operations, so the base PP/VE license is mandatory for Vblocks 1 and 2.
For more information on the EMC CLARiiON storage system, see
http://www.emc.com/products/family/clariion-family.htm and for the Symmetrix storage system, see
http://www.emc.com/products/family/symmetrix-family.htm.
Vblock Management
Within the Vblock there are several managed elements, some of which are managed by their respective
element managers. These elements offer corresponding interfaces that provide an extensible, open
management framework. The Vblock management framework, showing relationships and interfaces, is
shown in Figure 6. The individual element managers and managed components are:
• VMware vCenter Server
• Cisco UCS Manager
• Cisco Fabric Manager
• EMC Symmetrix Management Console
• EMC Navisphere Manager
A Vblock element manager, EMC Ionix Unified Infrastructure Manager (UIM, optional), manages the
configuration, provisioning, and compliance of a single Vblock or of multiple mixed Vblocks. This
delivers several benefits: it provides a “single pane of glass” for systems configuration and integration,
along with Vblock service catalogs and Vblock self-service portal capabilities.
[Figure 6: Vblock management framework. An IT provisioning portal sits above unified, policy-based element management (multi-Vblock service profile catalog; unified infrastructure provisioning, compliance, and disaster recovery; configuration and change analysis), which manages one or more Vblocks through the stand-alone component managers: Cisco UCS Manager, EMC Symmetrix Management Console, and VMware vCenter.]
It should be noted that Ionix UIM does not provide fault monitoring, performance monitoring, billing,
or software lifecycle management capabilities. However, the abstractions UIM offers, used as a single
point of integration, simplify Vblock integration into IT service catalogs and workflow engines. In this
respect, UIM can dramatically simplify Vblock deployment by abstracting the overall provisioning
aspects of the Vblock while offering granular access to individual components for troubleshooting and
fault management.
It should be noted however that Vblock has an open management framework that allows an organization
to integrate Vblock management with their choice of management tools should they so desire.
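As one illustration of that openness, the sketch below uses the open-source pyVmomi SDK to pull basic host statistics from the vCenter instance managing a Vblock. Treat it as an example of the kind of integration the framework permits, not a prescribed method; the hostname and credentials are placeholders.

    # Illustrative only: query the Vblock's vSphere layer through vCenter.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab use; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=context)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:  # each ESX host (UCS blade) managed by vCenter
        print(host.name, host.summary.quickStats.overallCpuUsage, "MHz")

    view.Destroy()
    Disconnect(si)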
In practical terms, it may not be possible or desirable to fold existing infrastructure into a Vblock,
depending upon the complexity of the environment; it may be simpler to deploy a new Vblock, migrate
workloads first, and then migrate existing storage arrays to that infrastructure over time. Each of these
situations would need to be assessed on its relative merits and would require extensive audits.
[Figure: Scaling a Vblock 1 by attaching a compute expansion to the original Vblock.]
In order to scale capacity within a Vblock, the initial Vblock configuration includes an MDS 9506 that
has a 24-port 2/4/8G Fibre Channel module. If additional capacity is required, an expansion to the
original Vblock simply connects the UCS 61x0 and CLARiiON or Symmetrix V-Max to the existing
MDS interfaces. If additional capacity is required on the MDS 9506 switch, additional interface modules
can be installed as necessary.
As Vblocks are added, the capacity of the system scales either as an aggregated pool, whereby any UCS
blade can access any storage disks on the SAN, or as isolated silos. For example, it is perfectly acceptable
to aggregate two Vblock 1s to provide capacity for 6,000 VMs that share common storage capacity.
This offers an organization the ability to configure Vblock infrastructure to achieve its compliance,
security, and fault isolation objectives using a single, flexible infrastructure. As long as storage capacity
is added in conjunction with compute capacity to maintain balanced performance as published within
the Vblock, the system does not require any additional validation.
If compute or storage needs to be added independently of the other, the Vblock environment must be
carefully considered. If storage is increased above the per-Vblock maximum, there is no real concern,
as the performance limitations lie at either the system controller or the compute node; in most cases,
the performance of the system controller and the UCS system has been balanced, so this should not be
a concern.
If compute is to be scaled in excess of the minimum or maximum storage capacity, systemic
problems from I/O or capacity may be introduced that need careful consideration. Some applications that
may require this flexibility are high-performance compute environments (CFD, data mining, parametric
execution, etc.). This requires a services engagement to validate and certify the configuration before it
is accepted as a Vblock.
In order to satisfy the performance needs of a Vblock, it is recommended that only similar Vblocks are
pooled so as to maintain the performance and availability SLA associated with that Vblock. This is easily
achieved on the MDS 9500 director switches using Virtual SAN capabilities.
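The balance rule can be expressed as a simple envelope check. The sketch below is illustrative; its limits are the Table 1 blade counts and Table 4 raw capacities from this document, and anything outside them calls for the services engagement described above:

    # Sketch: does a proposed configuration stay inside the published
    # Vblock envelope? (Limits from Table 1 and Table 4 of this document.)

    LIMITS = {
        "vblock1": {"blades": (16, 32), "capacity_tb": (61, 97)},
        "vblock2": {"blades": (32, 64), "capacity_tb": (42, 221)},
    }

    def within_envelope(vblock_type, blades, capacity_tb):
        limits = LIMITS[vblock_type]
        blades_ok = limits["blades"][0] <= blades <= limits["blades"][1]
        capacity_ok = (limits["capacity_tb"][0] <= capacity_tb
                       <= limits["capacity_tb"][1])
        return blades_ok and capacity_ok

    print(within_envelope("vblock1", blades=24, capacity_tb=80))   # True
    print(within_envelope("vblock1", blades=32, capacity_tb=120))  # False -- validate first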
UCS Components
The Cisco Unified Computing System is built from the following components:
• Cisco UCS 6100 Series Fabric Interconnects
(http://www.cisco.com/en/US/partner/products/ps10276/index.html)—a family of line-rate,
low-latency, lossless, 10-Gbps Ethernet and Fibre Channel over Ethernet interconnect switches.
[Figure: UCS components. The embedded UCS Manager manages the entire system. Fabric extenders link each blade chassis to the fabric interconnects, which connect via a 10Gb x 4 Ethernet port channel to the aggregation layer and via 4Gb x 4 Fibre Channel to the storage area network (SAN).]
UCS Manager
Data centers have become complex environments with a proliferation of management points. From a
network perspective, the access layer has fragmented, with traditional access layer switches, switches in
blade servers, and software switches used in virtualization software all having separate feature sets and
management paradigms. Most current blade systems have separate power and environmental
management modules, adding cost and management complexity. Ethernet NICs and Fibre Channel
HBAs, whether installed in blade systems or rack-mount servers, require configuration and firmware
updates. Blade and rack-mount server firmware must be maintained and BIOS settings must be managed
for consistency. As a result, data center environments have become more difficult and costly to maintain,
while security and performance may be less than desired. Change is the norm in data centers, but the
combination of x86 server architectures and the older deployment paradigm makes change difficult:
• In fixed environments in which servers run OS and application software stacks, rehosting software
on different servers as needed for scaling and load management is difficult to accomplish. I/O
devices and their configuration, network configurations, firmware, and BIOS settings all must be
configured manually to move software from one server to another, adding delays and introducing
the possibility of errors in the process. Typically, these environments deploy fixed spare servers
already configured to meet peak workload needs. Most of the time these servers are either idle or
highly underutilized, raising both capital and operating costs.
• Virtual environments inherit all the drawbacks of fixed environments, and more. The fragmentation
of the access layer makes it difficult to track virtual machine movement and to apply network
policies to virtual machines to protect security, improve visibility, support per-virtual machine QoS,
and maintain I/O connectivity. Virtualization offers significant benefits; however, it adds more
complexity.
Typically deployed in redundant pairs, fabric interconnects provide uniform access to both networks and
storage, eliminating the barriers to deploying a fully virtualized environment. Two models are available:
the 20-port Cisco UCS 6120XP and the 40-port Cisco UCS 6140XP.
Both models offer key features and benefits, including:
• High performance Unified Fabric with line-rate, low-latency, lossless 10 Gigabit Ethernet, and
FCoE.
• Centralized unified management with Cisco UCS Manager software.
• Virtual machine optimized services with the support for VN-Link technologies.
• Efficient cooling and serviceability with front-to-back cooling, redundant front-plug fans and power
supplies, and rear cabling.
• Available expansion module options provide Fibre Channel and/or 10 Gigabit Ethernet uplink
connectivity.
For more information on the Cisco UCS 6100 Series Fabric Interconnects, see:
http://www.cisco.com/en/US/products/ps10276/index.html
• Vblock 1
– 2 to 4 blade chassis
• Vblock 2
– 4 to 8 blade chassis
• Availability:
– Two Fabric Extenders per chassis
– N+1 cooling and power
• Predictable performance:
– 2:1 Oversubscription—40 Gb per chassis
– Balanced configuration
– Distribute vHBA and vNIC between fabrics
• Vblock 1
– 16-32 blades
– 128-256 cores
– 960-1920 GB Memory
– 6 blades/chassis with 48 GB
– 2 blades/chassis with 96 GB
• Vblock 2
– 32-64 blades
– 256-512 cores
– 3072-6144 GB memory
– 96 GB per blade
– (2) 73 GB internal HDD
• Availability:
– N+1 blades per chassis
– Trunk and Port Group configuration
• One dual port Converged Network Adapter (Unified Network)
– vNIC
– vHBA
• Internal connections to both Fabric Extenders
• Predictable performance
– Dual quad core Xeon® 5500 Series processors
– Balanced configuration
– Network
– Memory
– Compute
• Scalability and flexibility
– VLAN, Trunks and Port Groups
Network
The Cisco Nexus 1000V provides policy-based virtual machine connectivity: network and security
policies move with a virtual machine during live migration, ensuring persistent network, security, and storage
compliance, resulting in improved business continuance, performance management, and security
compliance. Last but not least, it aligns management of the operational environment for virtual machines
and physical server connectivity in the data center, reducing the total cost of ownership (TCO) by
providing operational consistency and visibility throughout the network. It offers flexible collaboration
between the server, network, security, and storage teams while supporting various organizational
boundaries and individual team autonomy.
For more information, see: http://www.cisco.com/en/US/products/ps9902/index.html.
Storage
Storage components include:
• EMC CLARiiON CX4 Series
• EMC Symmetrix V-Max Storage System
• NFS Datastores and Native File Services—EMC Celerra Gateway Family
• Storage Area Network—Cisco MDS Fibre Channel Switch
Enterprise Flash Drives—EMC-customized Flash drive technology provides low latency and high
throughput to break the performance barriers of traditional disk technology. EMC is the first to bring
Flash drives to midrange storage and expects the technology to become mainstream over the next few
years while revolutionizing networked storage.
Flash drives extend the storage tiering capabilities of CLARiiON by:
• Delivering 30 times the IOPS of a 15K RPM FC drive
• Consistently delivering less than 1 ms response times
• Requiring 98 percent less energy per I/O than 15K rpm Fibre Channel drives
• Weighing 58 percent less per TB than a typical Fibre Channel drive
• Providing better reliability due to no moving parts and faster RAID rebuilds
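To make the 30x figure concrete, the short sketch below estimates drive counts for a target workload. The 180 IOPS baseline for a 15K RPM FC drive is our assumption for illustration; it does not come from this document.

    # Sketch: drives needed for a target workload, using the 30x EFD
    # IOPS figure above. FC_15K_IOPS is an assumed baseline, not a
    # number from this document.
    import math

    FC_15K_IOPS = 180                  # assumed per-drive baseline
    EFD_IOPS = 30 * FC_15K_IOPS        # "30 times the IOPS" per the text

    target_iops = 27_000
    print(math.ceil(target_iops / FC_15K_IOPS))  # 150 FC drives
    print(math.ceil(target_iops / EFD_IOPS))     # 5 EFDs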
UltraFlex technology—The CLARiiON CX4 architecture features UltraFlex technology—a
combination of a modular connectivity design and unique FLARE® operating environment software
capabilities that deliver:
• Dual protocol support with FC and iSCSI as the base configuration on all models
• Easy, online expansion via hot-pluggable I/O modules
• Ability to easily add and/or upgrade I/O modules to accommodate future technologies as they
become available (e.g., FCoE)
CLARiiON Virtual Provisioning—Allows CLARiiON users to present an application with more
capacity than is physically allocated to it in the storage system. CLARiiON Virtual Provisioning can
lower total cost of ownership and offers customers these benefits:
• Efficient tiering that improves capacity utilization and optimizes tiering capabilities across all drive
types
• Ease of provisioning that simplifies and accelerates processes and delivers “just-in-time” capacity
allocation and flexibility
• Comprehensive monitoring, alerts, and reporting for efficient capacity planning
• Support for advanced capabilities including Virtual LUN, Navisphere QoS Manager, Navisphere
Analyzer, and SnapView
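The mechanism behind these benefits can be pictured with a small model. The sketch below is purely conceptual; the class and method names are ours, not an EMC interface. It separates the capacity an application is shown from the capacity actually drawn from the pool:

    # Conceptual sketch of thin provisioning: hosts see the presented
    # size, but physical capacity is consumed only as data is written.

    class ThinPool:
        def __init__(self, physical_gb):
            self.physical_gb = physical_gb
            self.allocated_gb = 0

        def provision_lun(self, presented_gb):
            return {"presented_gb": presented_gb, "written_gb": 0}

        def write(self, lun, gb):
            if self.allocated_gb + gb > self.physical_gb:
                raise RuntimeError("pool exhausted: add physical capacity")
            lun["written_gb"] += gb
            self.allocated_gb += gb   # "just-in-time" allocation

    pool = ThinPool(physical_gb=1000)
    lun = pool.provision_lun(presented_gb=2000)  # more than physically present
    pool.write(lun, 150)
    print(pool.allocated_gb)  # 150 -- only written data consumes the pool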
Multi-core Intel Xeon processors, increased memory, and 64-bit FLARE—The CX4 boasts up to
twice the performance of the previous generation and provides up to 2.5 times more processing power
with multi-core Intel® Xeon® processors. The CX4 architecture also delivers twice the capacity scale
(up to 960 drives), twice the memory, and twice the LUNs compared with the previous generation
CLARiiON.
With the CLARiiON CX4, the FLARE operating environment has also been upgraded from a 32-bit to
a 64-bit environment. This enhancement enables the scalability improvements and also provides the
foundation for more advanced software functionality such as Virtual Provisioning.
Low-power SATA II drives, adaptive cooling, and drive spin-down:
• Low-power SATA II drives deliver the highest density at the lowest cost and require 96 percent less
energy per terabyte than 15K rpm Fibre Channel drives, and 32 percent less than traditional 7.2K
rpm SATA drives.
• Adaptive cooling is a new feature that provides improved energy efficiency by dynamically
adjusting cooling and airflow within the CX4 arrays based on system activity.
• Drive spin-down allows customers to set policies at the RAID group level to place inactive drives in
sleep mode. Target applications include backup-to-disk, archiving, and test and development.
For more information, see: http://www.emc.com/products/detail/hardware/clariion-cx4-model-480.htm.
Combined with the rich capabilities of EMC ControlCenter and EMC’s Storage Viewer for vCenter,
administrators are provided with end-to-end visibility and control of their virtual data center storage
resources and usage. EMC’s new PowerPath/VE support for vSphere provides optimization of usage on
all available paths between virtual machines and the storage they are using, as well as proactive failover
management.
Introduction
The Symmetrix V-Max system (see Figure 9) is a new enterprise-class storage
array built on the strategy of simple, intelligent, modular storage. The array incorporates a new
high-performance fabric interconnect designed to meet the performance and scalability demands for
enterprise storage within the most demanding virtual data center installations. The storage array
seamlessly grows from an entry-level configuration with a single, highly available Symmetrix V-Max
Engine and one storage bay into the world’s largest storage system with eight engines and 10 storage
bays. The largest supported configuration is shown in Figure 10. When viewing Figure 10, refer to the
following list that indicates the range of configurations supported by the Symmetrix V-Max storage
array:
• 2-16 director boards
• 48-2,400 disk drives
• Up to 2 PB usable capacity
• Up to 128 Fibre Channel ports
• Up to 64 FICON ports
• Up to 64 Gig-E/iSCSI ports
[Figure 10: The largest supported Symmetrix V-Max configuration: multiple interconnected system and storage bays with redundant LAN1 and LAN2 management networks.]
The enterprise-class deployments in a modern data center are expected to be always available. The
design of the Symmetrix V-Max storage array enables it to meet this stringent requirement. The
replicated components which comprise every Symmetrix V-Max configuration assure that no single
point of failure can bring the system down. The hardware and software architecture of the Symmetrix
V-Max storage array allows capacity and performance upgrades to be performed online with no impact
to production applications. In fact, all configuration changes, hardware and software updates, and
service procedures are designed to be performed online and non-disruptively. This ensures that
customers can consolidate without compromising availability, performance, and functionality, while
leveraging true pay-as-you-grow economics for high-growth storage environments.
The Symmetrix V-Max system can include two to 16 directors inside one to eight Symmetrix V-Max
Engines. Each Symmetrix V-Max Engine has its own redundant power supplies, cooling fans, SPS
Modules, and Environmental Modules. Furthermore, the connectivity between the Symmetrix V-Max
array engines provides direct connections from each director to every other director, creating a redundant
and high-availability Virtual Matrix. Each Symmetrix V-Max Engine has two directors that can offer up
to eight host access ports each, therefore allowing up to 16 host access ports per Symmetrix V-Max
Engine.
Figure 11 shows a schematic representation of a single Symmetrix V-Max storage engine.
[Figure 11: A single Symmetrix V-Max Engine: two directors (A and B), each with multi-core CPU complexes and global memory.]
The powerful, high-availability Symmetrix V-Max Engine provides the building block for all Symmetrix
V-Max systems. It includes four quad-core Intel Xeon processors; 64-128 GB of global memory; 8-16
ports for front-end host access or Symmetrix Remote Data Facility channels using Fibre Channel,
FICON, or Gigabit Ethernet; and 16 back-end ports connecting to up to 360 storage devices using 4 Gb
Fibre Channel, SATA, or Enterprise Flash drives.
Each of the two integrated directors in a Symmetrix V-Max Engine has three main parts: the back-end
director, the front-end director, and the cache memory module. The back-end director consists of two
back-end I/O modules with four logical directors that connect directly into the integrated director. The
front-end director consists of two front-end I/O modules with four logical directors that are located in
the corresponding I/O annex slots. The front-end I/O modules are connected to the director via the
midplane.
The cache memory modules are located within each integrated director, each with eight available
memory slots. Memory cards range from 2 to 8 GB, allowing anywhere between 16 and 64
GB per integrated director. For added redundancy, the Symmetrix V-Max system uses mirrored cache,
so memory is mirrored across engines in a multi-engine setup. In the case of a single-engine configuration,
the memory is mirrored inside the engine across the two integrated directors.
EMC’s Virtual Matrix interconnection fabric permits the connection of up to eight Symmetrix V-Max
Engines to scale out total system resources and flexibly adapt to the most demanding virtual
data center requirements.
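The headline maximums quoted at the start of this section follow directly from these per-engine figures; the arithmetic below is a quick cross-check using only numbers stated in this document:

    # Cross-check: system maximums derived from per-engine V-Max figures.

    ENGINES_MAX = 8
    DIRECTORS_PER_ENGINE = 2
    HOST_PORTS_PER_ENGINE = 16
    CACHE_SLOTS, CARD_GB_MIN, CARD_GB_MAX = 8, 2, 8

    print(ENGINES_MAX * DIRECTORS_PER_ENGINE)   # 16 directors (matches "2-16 director boards")
    print(ENGINES_MAX * HOST_PORTS_PER_ENGINE)  # 128 ports (matches "up to 128 Fibre Channel ports")
    print(CACHE_SLOTS * CARD_GB_MIN, "-", CACHE_SLOTS * CARD_GB_MAX)  # 16 - 64 GB cache per director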
Figure 12 shows a schematic representation of a maximum Symmetrix V-Max configuration.
[Figure 12: A maximum Symmetrix V-Max configuration: eight engines, each with front-end and back-end directors, CPU complexes, and global memory serving host and disk ports, fully interconnected by the Virtual Matrix Interface.]
NFS Datastores and Native File Services—EMC Celerra Gateway Family
EMC Celerra Gateway systems scale the file-serving environment into the hundreds of TB from a single
point of management. And you can improve performance over standard NAS by simply adding MPFS
to your environment without application modification.
EMC Celerra Gateway platforms combine a NAS head and SAN storage for a flexible, cost-effective
implementation that maximizes the utilization of your existing resources. This approach offers the
utmost in configuration options, including:
• One-to-eight X-Blade configurations
• Flash drives, Fibre Channel, SATA and low-power SATA drive support
• Performance/availability mode in the entry-level NS-G2
• EMC Symmetrix or CLARiiON storage
• Native integration with Symmetrix and CLARiiON replication software providing a single
replication solution for all your SAN and IP Storage disaster recovery requirements
The Celerra Gateway comprises one or more autonomous servers, called X-Blades, which connect
via FC SAN to a CLARiiON or Symmetrix storage array. The X-Blades control data movement from the
disks to the network. Each X-Blade consists of an Intel-based server with redundant data paths, power
supplies, multiple Gigabit Ethernet ports, and, optionally, multiple 10 Gigabit Ethernet optical ports.
X-Blades run EMC’s Data Access in Real Time (DART) operating system, designed and optimized for
high-performance and multi-protocol network file access. All the X-Blades in a system are managed by
the Control Station (two control stations for HA are supported on the NS-G8), which operates out of the
data path and provides a single point of configuration management and administration as well as
handling X-Blade failover and maintenance support.
EMC Celerra is a file server for the Vblock (Celerra file services may be shared across multiple
Vblocks):
• Gateway configuration sharing CLARiiON or Symmetrix storage
• Vblock 1 NS-G2
– 2 Datamovers
For more information on the EMC Celerra NS-G2, see:
http://www.emc.com/products/detail/hardware/celerra-ns-g2.htm.
• Vblock 2 NS-G8
– 2 to 8 Datamovers
For more information on the EMC Celerra NS-G8, see:
http://www.emc.com/products/detail/hardware/celerra-ns-g8.htm.
• May be shared across multiple Vblocks
Storage Area Network—Cisco MDS Fibre Channel Switch
The Cisco MDS 9222i Multiservice Modular Switch delivers state-of-the-art multiprotocol and
distributed multiservice convergence, offering:
• High-performance SAN extension and disaster recovery solutions
• Intelligent fabric services such as storage media encryption
• Cost-effective multiprotocol connectivity
Its compact form factor, expansion slot modularity, and advanced capabilities make the MDS 9222i an
ideal solution for departmental and remote branch-office SANs requiring director-class features at a
lower cost.
Product highlights:
• High-density Fibre Channel switch; scales up to 66 Fibre Channel ports
• Integrated hardware-based virtual fabric isolation with virtual SANs (VSANs) and Fibre Channel
routing with inter-VSAN routing
• Remote SAN extension with high-performance Fibre Channel over IP (FCIP)
• Long distance over Fibre Channel with extended buffer-to-buffer credits
• Multiprotocol and mainframe support (Fibre Channel, FCIP, Small Computer System Interface over
IP [iSCSI], and IBM Fiber Connection [FICON])
• IPv6 capable
• Platform for intelligent fabric applications such as storage media encryption
• In-Service Software Upgrade (ISSU)
• Comprehensive network security framework
• High-performance intelligent applications in combination with the 16-port storage services node
For more information on the Cisco MDS 9222i, see:
http://www.cisco.com/en/US/products/ps8420/index.html
For more information on the Cisco MDS 9200 Series Multilayer Switches, see:
http://www.cisco.com/en/US/products/ps5988/index.html
The Cisco® MDS 9506 Multilayer Director provides industry-leading availability, scalability, security,
and management. The Cisco MDS 9506 allows you to deploy high-performance storage-area networks
(SANs) with lowest total cost of ownership (TCO). Layering a rich set of intelligent features onto a
high-performance, protocol-independent switch fabric, the Cisco MDS 9506 addresses the stringent
requirements of large data center storage environments: uncompromising high availability, security,
scalability, ease of management, and transparent integration of new technologies. Compatible with first,
second, and third generation Cisco MDS 9000 Family switching modules, the Cisco MDS 9506 provides
advanced functionality and unparalleled investment protection, allowing the use of any Cisco MDS 9000
Family switching module in this compact system.
The Cisco MDS 9506 offers the following benefits:
• Scalability and availability—The Cisco MDS 9506 combines nondisruptive software upgrades,
stateful process restart/failover, and full redundancy of all major components for best-in-class
availability. Supporting up to 192 Fibre Channel ports in a single chassis and up to 1152 Fibre Channel
ports in a single rack, the Cisco MDS 9506 is designed to meet the requirements of large data center
storage environments.
• Compact design—The Cisco MDS 9506 provides high port density in a small footprint, saving
valuable data center floor space. The seven-rack-unit chassis allows up to six Cisco MDS 9506
multilayer directors in a standard rack, maximizing the number of available Fibre Channel ports
(a quick cross-check of these density figures follows this list).
• 1/2/4/8-Gbps and 10-Gbps Fibre Channel—Supports new 8-Gbps as well as existing 10-Gbps,
4-Gbps, and 2-Gbps MDS Fibre Channel switching modules.
• Flexibility and investment protection—Supports a mix of new, second-, and first-generation
Cisco MDS 9000 Family modules, providing forward and backward compatibility and unparalleled
investment protection.
• TCO driven design—The Cisco MDS 9506 offers advanced management tools for overall lowest
TCO. It includes VSAN technology for hardware-enforced, isolated environments within a single
physical fabric for secure sharing of physical infrastructure, further decreasing TCO.
• Multiprotocol—The multilayer architecture of the Cisco MDS 9000 Family enables a consistent
feature set over a protocol-independent switch fabric. The Cisco MDS 9506 transparently integrates
Fibre Channel, IBM Fiber Connection (FICON), Small Computer System Interface over IP (iSCSI),
and Fibre Channel over IP (FCIP) in one system.
• Intelligent network services—Provides integrated support for VSAN technology, access control lists
(ACLs) for hardware-based intelligent frame processing, and advanced traffic-management features
such as Fibre Channel Congestion Control (FCC) and fabric-wide quality of service (QoS) to enable
migration from SAN islands to enterprise-wide storage networks.
• Integrated Cisco Storage Media Encryption (SME) as distributed fabric service—Supported on the
Cisco MDS 18/4-Port Multiservice Module, Cisco SME encrypts data at rest on heterogeneous tape
drives and virtual tape libraries (VTLs) in a SAN environment using secure IEEE standard
Advanced Encryption Standard (AES) 256-bit algorithms. Cisco MDS 18/4-Port Multiservice
Module helps ensure ease of deployment, scalability, and high availability by using innovative
technology to transparently offer Cisco SME capabilities to any device connected to the fabric
without the need for reconfiguration or rewiring. Cisco SME provisioning is integrated into the
Cisco Fabric Manager; no additional software is required. Cisco SME key management can be
provided by either the Cisco Key Management Center (KMC) or with RSA Key Manager for the
Datacenter from RSA, the Security Division of EMC.
• Open platform for intelligent storage applications—Provides the intelligent services necessary for
hosting and/or accelerating storage applications such as network-hosted volume management, data
migration and backup.
• Integrated hardware-based VSANs and Inter-VSAN Routing (IVR)—Enables deployment of
large-scale multi-site and heterogeneous SAN topologies. Integration into port-level hardware
allows any port within a system or fabric to be partitioned into any VSAN. Integrated
hardware-based inter-VSAN routing provides line-rate routing between any ports within a system
or fabric without the need for external routing appliances.
• Advanced FICON services—Supports 1/2/4-Gbps FICON environments, including cascaded
FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port
ID virtualization for mainframe Linux partitions. CUP (Control Unit Port) support enables in-band
management of Cisco MDS 9000 Family switches from the mainframe management console.
• Comprehensive security framework—Supports RADIUS and TACACS+, Fibre Channel Security
Protocol (FC-SP), Secure File Transfer Protocol (SFTP), Secure Shell (SSH) Protocol and Simple
Network Management Protocol Version 3 (SNMPv3) implementing Advanced Encryption Standard
(AES), VSANs, hardware-enforced zoning, ACLs, and per-VSAN role-based access control.
• Sophisticated diagnostics—Provides intelligent diagnostics, protocol decoding, and network
analysis tools as well as integrated Call Home capability for added reliability, faster problem
resolution, and reduced service costs.
• Unified SAN management—The Cisco MDS 9000 Family includes built-in storage network
management with all features available through a command-line interface (CLI) or Cisco Fabric
Manager, a centralized management tool that simplifies management of multiple switches and
fabrics. Integration with third party storage management platforms allows seamless interaction with
existing management tools.
• Cisco TrustSec Fibre Channel Link Encryption—Delivers transparent, hardware-based, line-rate
encryption of Fibre Channel data between any Cisco MDS 9000 Family 8-Gbps modules.
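As noted earlier in this list, the density figures can be cross-checked with trivial arithmetic (the 42 RU rack height is taken from the rack layouts later in this document):

    # Cross-check of MDS 9506 density: 7 RU chassis, 192 FC ports each.

    RACK_RU, CHASSIS_RU, PORTS_PER_CHASSIS = 42, 7, 192
    chassis_per_rack = RACK_RU // CHASSIS_RU        # 6 directors per rack
    print(chassis_per_rack * PORTS_PER_CHASSIS)     # 1152 ports per rack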
For more information on the Cisco MDS 9506 Multilayer Director, see:
http://www.cisco.com/en/US/products/hw/ps4159/ps4358/ps5395/index.html
For more information on the Cisco MDS 9500 Series Multilayer Directors, see:
http://www.cisco.com/en/US/products/ps5990/index.html
Virtualization
VMware vSphere/vCenter
• VMware vSphere 4 is the virtualized infrastructure for the Vblock
– Virtualizes all application servers
– Provides VMware High Availability (HA) and Dynamic Resource Scheduling (DRS)
• Templates enable rapid provisioning
VMware vSphere, the industry's most reliable platform for data center virtualization, together with
vCenter Server, offers high levels of availability and responsiveness for all applications and services.
By decoupling business-critical applications from the underlying hardware, you can optimize IT service
delivery, meet demanding application service-level agreements, and lower the total cost per application
workload with unprecedented flexibility and reliability.
VMware vCenter Server provides a scalable and extensible platform that forms the foundation for
virtualization management (http://www.vmware.com/solutions/virtualization-management/). VMware
vCenter Server, formerly VMware VirtualCenter, centrally manages VMware vSphere
(http://www.vmware.com/products/vsphere/) environments, allowing IT administrators dramatically
improved control over the virtual environment compared to other management platforms. VMware
vCenter Server:
• Provides centralized control and visibility at every level of virtual infrastructure.
• Unlocks the power of vSphere through proactive management.
• Is a scalable and extensible management platform with a broad partner ecosystem.
For more information, see http://www.vmware.com/products/.
Physical Architecture
Equipment power, cooling, weight, and connector specifications:

• Symmetrix V-Max SE (1 system bay and 1 storage bay): 10.4 kVA; 34,000 BTU/hr; 88 RU; 4,008 lb (1,818 kg); 200-240 VAC; 50-60 Hz; 50 A (3-phase Delta 4-wire); 2 power cords per bay; CS8365C (3-phase Delta 4-wire) plug
• Symmetrix V-Max system bay (4 engine): 4.1 kVA; 13,700 BTU/hr; 44 RU; 1,830 lb (830 kg); 200-240 VAC; 50-60 Hz; 50 A (3-phase Delta 4-wire); 2 power cords per bay; CS8365C (3-phase Delta 4-wire) plug
• Symmetrix V-Max system bay (8 engine): 7.8 kVA; 26,300 BTU/hr; 44 RU; 2,774 lb (1,258.3 kg); 200-240 VAC; 50-60 Hz; 50 A (3-phase Delta 4-wire); 2 power cords per bay; CS8365C (3-phase Delta 4-wire) plug
• Symmetrix V-Max storage bay: 6.1 kVA; 19,800 BTU/hr; 44 RU; 2,144 lb (972.5 kg); 200-240 VAC; 50-60 Hz; 50 A (3-phase Delta 4-wire); 2 power cords per bay; CS8365C (3-phase Delta 4-wire) plug
• Cisco MDS 9222i: 1.05625 kVA; 2,884 BTU/hr; 3 RU; 53.5 lb (24.3 kg); 100-240 VAC; 50-60 Hz; 16 A; 2 power cords per chassis; 125 VAC, 15 A NEMA 5-15 plug (North America)
• Cisco MDS 9506: 2.7 kVA; 12,000 BTU/hr; 7 RU; 124 lb (56.69 kg); 100-240 VAC; 50-60 Hz; 12 A; 2 power cords per chassis; NEMA L6-20P, IEC320-C19
• Cisco Catalyst 6504-E: 2.7 kVA; 12,000 BTU/hr; 5 RU; 40 lb (18.18 kg); 100-240 VAC; 48-60 Hz; 16 A; 2 power cords per chassis; NEMA L6-20P, IEC320-C19
• Cisco ACE 4710 Load Balancer: 0.128 kVA; 1,560 BTU/hr; 1 RU; 1 power cord; IEC320-C14
• Celerra NS-G2: 0.52 kVA; 3,236 BTU/hr; 2 RU; 74.91 lb (34.03 kg); 100-240 VAC; 47-63 Hz; 15 A; 1 power cord per power supply; IEC320-C14; power factor 0.98
• Celerra NS-G8 4 X-Blade system: 1.7 kVA; 5,500 BTU/hr; 9 RU; 228 lb (104 kg); 180-240 VAC (single phase); 47-63 Hz; 20 A per phase; 1 power cord per power supply; IEC320-C14; power factor 0.95
• Celerra NS-G8 6 X-Blade system: 2.5 kVA; 8,100 BTU/hr; 13 RU; 333 lb (151 kg); 180-240 VAC (single phase); 47-63 Hz; 20 A per phase; 1 power cord per power supply; IEC320-C14; power factor 0.95
• Celerra NS-G8 8 X-Blade system: 3.3 kVA; 10,600 BTU/hr; 17 RU; 438 lb (199 kg); 180-240 VAC (single phase); 47-63 Hz; 20 A per phase; 1 power cord per power supply; IEC320-C14; power factor 0.95
Table 14    Vblock 1 and Vblock 2 Power, Cooling, and Space Summary
Each entry lists quantity / power (kVA) / cooling (BTU/hr) / rack space (RU) for the Vblock 1 minimum and maximum and the Vblock 2 minimum and maximum configurations.

• UCS 6120XP Fabric Interconnect: V1 min 2 / 1.375 / 3,072 / 2; V1 max 2 / 1.375 / 3,072 / 2; V2 min 2 / 1.375 / 3,072 / 2; V2 max 4 / 2.75 / 6,144 / 4
• UCS chassis (5108 chassis and B200 M1 blades): V1 min 2 / 8.5 / 24,280 / 12; V1 max 4 / 17 / 48,560 / 24; V2 min 4 / 17 / 48,560 / 24; V2 max 8 / 34 / 97,120 / 48
• MDS 9506: 2 / 5.4 / 24,000 / 14 in all four configurations
• Catalyst 6504: 2 / 5.4 / 24,000 / 10 in all four configurations
• ACE Load Balancer: 2 / 0.256 / 3,120 / 2 in all four configurations
• CLARiiON CX4-480 processor (pair): V1 min 1 / 0.355 / 990 / 3; V1 max 1 / 0.355 / 990 / 3
• CLARiiON CX4-480 DAEs: V1 min 7 / 3.08 / 10,150 / 21; V1 max 12 / 5.28 / 17,400 / 36
• V-Max SE (2-bay configuration): V2 min 1 / 10.4 / 34,000 / 88
• V-Max system bay (4 engine): V2 max 1 / 4.1 / 13,700 / 44
• V-Max storage bay: V2 max 1 / 6.1 / 19,800 / 44

Estimated number of racks: Vblock 1 minimum, 2; Vblock 1 maximum, 3; Vblock 2 minimum, 4; Vblock 2 maximum, 5.

Conversion and rack assumptions: 1 rack unit (RU) = 1.75 inches; 1 kilowatt = 3,412.14163 BTU/hr; 1 standard rack = 45 RU; 1 CLARiiON rack = 40 RU; 1 V-Max bay = 44 RU.
Per-rack facilities planning guidelines (one value per rack type, in the original table's column order):

• Power load for each rack (startup power surge): 14 kVA; 27 kVA; 10.8 kVA; 4.1 kVA; 6.7 kVA; 0.355 kVA; 3 kVA (7 DAEs) to 5.28 kVA (12 DAEs); 6 kVA; 20 A to 40 A RMS at 15 A startup surge
• Power load for each rack (maximum power draw): 14 kVA; 27 kVA; 10.8 kVA; 4.1 kVA; 6.7 kVA; 0.355 kVA; 2 kVA (7 DAEs) to 5.28 kVA (12 DAEs); 10.5 kVA; 0.520 kVA; 1.6 kVA to 3.3 kVA
• Power load for each rack (average power draw): 14 kVA; 19 kVA; 10.8 kVA; 4.1 kVA; 6.7 kVA; 0.355 kVA; 3 kVA (7 DAEs) to 5.28 kVA (12 DAEs); 6 kVA; 0.520 kVA; 1.6 kVA to 3.3 kVA
• Heat load of each rack (maximum): 27,352 BTU/hr; 51,632 BTU/hr; 48,000 BTU/hr; 13,700 BTU/hr; 21,800 BTU/hr; 990 BTU/hr; 1,440 BTU/hr; 32,175 BTU/hr; 3,236 BTU/hr; 5,500 to 10,600 BTU/hr
• Confirm any rack is suitable: 4-post rack; 4-post rack; 2-post rack OK; no; no; yes; yes; no; yes; yes
• Confirm power can be routed from above or below (the current Symmetrix cannot): above and below OK for all rack types
• Footprint of any non-rack devices: rackable; rackable; rackable; 76.66 in x 30.2 in x 41.88 in; 76.66 in x 30.2 in x 41.88 in; rackable; rackable; 75 in x 24 in x 39 in; rackable; rackable
• Connectivity requirements of each rack (fiber LC-LC, SAN): 4; 4; depends; depends; none; depends; no; no; depends; depends
• Connectivity requirements of each rack (fiber LC-LC, IP): 2; 2; depends; depends; none; depends; no; no; none; none
• Non-fibre (Ethernet) connectivity, if needed: 2; 2; depends; 1 for service processor; none; 2 for service processors; no; 1 control station; 1 control station
• Recommended power strip requirements (amps): 15.5 A for each power supply; 15.5 A for each power supply; 12 A for MDS and 16 A for Catalyst; 50 A; 50 A; 7.8 A; 10 A; 30 A each; 15 A; 20 A for each phase
• Recommended number of power plugs: 4+4; 8+4; 8; 2; 2; 2; 2; 2; 2; 4 to 10 (depends)
• Recommended power strip type: CAB-L520P-C19-US, NEMA L5-20, 20 A, 250 VAC and C14 for fabric interconnects; CAB-L520P-C19-US, NEMA L5-20, 20 A, 250 VAC and C14 for fabric interconnects; NEMA L6-20P, IEC320-C19; CS8365C (3-phase Delta 4-wire); CS8365C (3-phase Delta 4-wire); IEC320-C14; IEC320-C14; NEMA L6-30P; IEC320-C14; IEC320-C14
• Recommended power strips: 2 x CDUs (ServerTech.com model CW-24V2-L30M, 24 x IEC320/C13 plugs, NEMA L6-30P); 2 x CDUs (ServerTech.com model CW-24V2-L30M, 24 x IEC320/C13 plugs, NEMA L6-30P); L6-30A power strip; EMC bay; EMC bay; L6-30A power strip with C13 receptacles; L6-30A power strip with C13 receptacles; NEMA L6-30P; L6-30A power strip with C13 receptacles; L6-30A power strip with C13 receptacles
• Colocation requirements (e.g., disks must be attached or within 100 meters of devices on OM3 fiber): none; Shark Rack part# T2EC014-A; none; Shark Rack part# T2EC014-A; none for the remaining rack types
A summary of power, cooling, and space requirements for Vblock 1 and Vblock 2 is shown in Table 14.
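These summary figures can be rolled up into a rough facilities estimate. The sketch below sums the Vblock 1 minimum column of Table 14 and converts power to heat with the table's own factor, approximating kW as kVA (a unity-power-factor simplification of ours):

    # Rough facilities estimate for a Vblock 1 minimum configuration,
    # using per-equipment kVA figures from Table 14 above.

    KW_TO_BTU_HR = 3412.14163    # conversion factor given with Table 14

    vblock1_min_kva = {          # quantity already folded into each figure
        "UCS 6120XP x2": 1.375,
        "UCS chassis x2": 8.5,
        "MDS 9506 x2": 5.4,
        "Catalyst 6504 x2": 5.4,
        "ACE 4710 x2": 0.256,
        "CX4-480 processor pair": 0.355,
        "CX4-480 DAEs x7": 3.08,
    }

    total_kva = sum(vblock1_min_kva.values())
    print(round(total_kva, 3), "kVA")                 # 24.366 kVA
    # Approximating kW ~ kVA for a first-order heat estimate:
    print(round(total_kva * KW_TO_BTU_HR), "BTU/hr")  # ~83,140 BTU/hr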
Rack Layouts
Rack layouts are provided for:
• Vblock 1 Minimum Configuration—Rack Layout Front View
• Vblock 1 Maximum Configuration—Rack Layout Front View
• Vblock 2 Minimum Configuration—Rack Layout Front View
• Vblock 2 Maximum Configuration—Rack Layout Front View
[Vblock 1 minimum configuration, front view: two 42U racks; one holds the MDS switches, two UCS 6120 fabric interconnects, and two UCS 5108 chassis, and the other holds the CLARiiON CX4-480 with DAE power and the Celerra NS-G2.]
[Vblock 1 maximum configuration, front view: three 42U racks; two hold the two UCS 6120 fabric interconnects, the MDS switches, four UCS 5108 chassis, and the Celerra NS-G2, and the third holds the CLARiiON CX4-480 with DAE power.]
[Vblock 2 minimum configuration, front view: five 42U racks; four UCS 5108 chassis with two UCS 6140 fabric interconnects and one NAS gateway, two MDS switches, a V-Max system bay with two engines, and two storage bays with 13 DAEs each.]
[Vblock 2 maximum configuration, front view: five 42U racks plus four additional racks of DAEs.]
References
• VMware View Reference Architecture
http://www.vmware.com/resources/techresources/1084
• VMware View
http://www.vmware.com/products/view/
• Cisco UCS
http://www.cisco.com/go/unifiedcomputing
• Cisco Data Center Solutions
http://www.cisco.com/go/datacenter
Copyright © 2010 Cisco Systems, Inc. All rights reserved. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or
Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company.
Copyright © 2010 EMC Corporation. All rights reserved. EMC2, EMC, Celerra, CLARiiON, Ionix, Navisphere, PowerPath, Symmetrix, V-Max,
Virtual Matrix, and where information lives are registered trademarks or trademarks of EMC Corporation in the United States or other
countries. All other trademarks used herein are the property of their respective owners. Published in the USA. P/N h6935
Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark
of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.