
SOLUTION DESIGN GUIDE

Brocade Data Center Fabric Architectures

53-1004601-02
06 October 2016
© 2016, Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned that are trademarks of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the
United States government.

The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this
document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.

The product described by this document may contain open source software covered by the GNU General Public License or other open source license
agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and
obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.

Contents
Preface
    Document History
    About the Author
    Overview
    Purpose of This Document
    About Brocade

Data Center Networking Architectures
    Throughput and Traffic Patterns
    Scale Requirements of Cloud Networks
    Traffic Isolation, Segmentation, and Application Continuity

Data Center Networks: Building Blocks
    Brocade VDX and SLX Platforms
        VDX 6740
        VDX 6940
        VDX 8770
        SLX 9850
    Networking Endpoints
    Single-Tier Topology
        Design Considerations
        Oversubscription Ratios
        Port Density and Speeds for Uplinks and Downlinks
        Scale and Future Growth
        Ports on Demand Licensing
    Leaf-Spine Topology (Two Tiers)
        Design Considerations
        Oversubscription Ratios
        Leaf and Spine Scale
        Port Speeds for Uplinks and Downlinks
        Scale and Future Growth
        Ports on Demand Licensing
        Deployment Model
        Data Center Points of Delivery
    Optimized 5-Stage Folded Clos Topology (Three Tiers)
        Design Considerations
        Oversubscription Ratios
        Deployment Model
    Edge Services and Border Switches Topology
        Design Considerations
        Oversubscription Ratios
        Data Center Core/WAN Edge Handoff
        Data Center Core and WAN Edge Routers

Building Data Center Sites with Brocade VCS Fabric Technology
    Data Center Site with Leaf-Spine Topology
        Scale
    Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
        Scale

Building Data Center Sites with Brocade IP Fabric
    Data Center Site with Leaf-Spine Topology
        Scale
    Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
        Scale

Building Data Center Sites with Layer 2 and Layer 3 Fabrics

Scaling a Data Center Site with a Data Center Core

Control-Plane and Hardware-Scale Considerations
    Control-Plane Architectures
        Single-Tier Data Center Sites
        Brocade VCS Fabric
        Multi-Fabric Topology Using VCS Technology
        Brocade IP Fabric

Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology
    eBGP-based Brocade IP Fabric and Multi-Fabric Topology
    iBGP-based Brocade IP Fabric and Multi-Fabric Topology

Choosing an Architecture for Your Data Center
    High-Level Comparison Table
    Deployment Scale Considerations
    Fabric Architecture
    Recommendations

Preface
Document History
About the Author
Overview
Purpose of This Document
About Brocade

Document History
Date                 Part Number      Description
February 9, 2016                      Initial release with DC fabric architectures, network virtualization, Data Center Interconnect, and automation content.
September 13, 2016   53-1004601-01    Initial release of solution design guide for DC fabric architectures.
October 06, 2016     53-1004601-02    Replaced the figures for the Brocade VDX 6940-36Q and the Brocade VDX 6940-144S.

About the Author


Anuj Dewangan is the lead Technical Marketing Engineer (TME) for Brocade's data center switching products. He holds a CCIE in
Routing and Switching and has several years of experience in the networking industry with roles in software development, solution
validation, technical marketing, and product management. At Brocade, his focus is creating reference architectures, working with
customers and account teams to address their challenges with data center networks, creating product and solution collateral, and helping
define products and solutions. He regularly speaks at industry events and has authored several white papers and solution design guides
on data center networking.

The author would like to acknowledge Jeni Lloyd and Patrick LaPorte for their in-depth review of this solution guide and for providing
valuable insight, edits, and feedback.

Overview

Based on the principles of the New IP, Brocade is building on the Brocade VDX and Brocade SLX platforms by delivering cloud-optimized network and network virtualization architectures, along with new automation innovations, to meet customer demand for higher levels of scale, agility, and operational efficiency.

The scalable and highly automated Brocade data center fabric architectures described in this solution design guide make it easy for
infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their
own cloud-optimized data center on their own time and terms.

Purpose of This Document


This guide helps network architects, virtualization architects, and network engineers to make informed design, architecture, and
deployment decisions that best meet their technical and business objectives. Network architecture and deployment options for scaling
from tens to hundreds of thousands of servers are discussed in detail.

About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where
applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity,
non-stop networking, application optimization, and investment protection.

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and
cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support,
and professional services offerings (www.brocade.com).

Data Center Networking Architectures
Throughput and Traffic Patterns
Scale Requirements of Cloud Networks
Traffic Isolation, Segmentation, and Application Continuity

Data center networking architectures have evolved with the changing requirements of modern data center and cloud environments. This evolution has been triggered by a combination of industry technology trends, such as server virtualization, and architectural changes in the applications deployed in data centers. These technological and architectural changes are affecting the way private and public cloud networks are designed. As these changes proliferate in traditional data centers, the need to adopt modern data center architectures continues to grow.

Throughput and Traffic Patterns


Traditional data center network architectures were a derivative of the three-tier topology, prevalent in enterprise campus environments.
The tiers are defined as Access, Aggregation, and Core. Figure 1 shows an example of a data center network built using a traditional
three-tier topology.

FIGURE 1 Three-Tier Data Center Architecture

The three-tier topology was architected with the requirements of an enterprise campus in mind. In a campus network, the basic
requirement of the access layer is to provide connectivity to workstations. These workstations exchange traffic either with an enterprise

data center for business application access or with the Internet. As a result, most traffic in this network traverses in and out through the
tiers in the network. This traffic pattern is commonly referred to as north-south traffic.

The throughput requirements for traffic in a campus environment are lower than those of a data center network, where server virtualization has increased the application density and subsequently the data throughput to and from the servers. In addition, cloud
applications are often multitiered and hosted at different endpoints connected to the network. The communication between these
application tiers is a major contributor to the overall traffic in a data center. The multitiered nature of the applications deployed in a data
center drives traffic patterns in a data center network to be more east-west than north-south. In fact, some of the very large data centers
hosting multitiered applications report that more than 90 percent of their overall traffic occurs between the application tiers.

Because of high throughput requirements and the east-west traffic patterns, the networking access layer that connects directly to the
servers exchanges a much higher proportion of traffic with the upper layers of the networking infrastructure, as compared to an enterprise
campus network.

These reasons have driven the data center network architecture evolution into scale-out architectures. Figure 2 illustrates a leaf-spine
topology, which is an example of a scale-out architecture. These scale-out architectures are built to maximize the throughput of traffic
exchange between the leaf layer and the spine layer.

FIGURE 2 Scale-Out Architecture: Ideal for East-West Traffic Patterns Common with Web-Based or Cloud-Based Application Designs

Compared to a three-tier network, where the aggregation layer is restricted to two devices (typically because of technologies like Multi-Chassis Trunking (MCT), in which exactly two devices can participate in the creation of port channels facing the access-layer switches), the spine layer can have multiple devices and hence provides a higher port density to connect to the leaf-layer switches. This allows more interfaces from each leaf to connect into the spine layer, providing higher throughput from each leaf to the spine layer. The characteristics of a leaf-spine topology are discussed in more detail in subsequent sections.

The traditional three-tier data center architecture is still prevalent in environments where traffic throughput requirements between the networking layers can be satisfied through high-density platforms at the aggregation layer. For certain use cases like co-location data centers, where customer traffic is restricted to racks or managed areas within the data center, a three-tier architecture may be more suitable. Similarly, enterprises hosting nonvirtualized and single-tiered applications may find the three-tier data center architecture more suitable.

Scale Requirements of Cloud Networks


Another trend in recent years has been the consolidation of disaggregated infrastructures into larger central locations. With the changing
economics and processes of application delivery, there has also been a shift of application workloads to public cloud provider networks.
Enterprises have looked to consolidate and host private cloud services. Meanwhile, software cloud services, as well as infrastructure and
platform service providers, have grown at a rapid pace. With this increasing shift of applications to the private and public cloud, the scale
of the network deployment has increased drastically. Advanced scale-out architectures allow networks to be deployed at many multiples
of the scale of a leaf-spine topology. An example of Brocade's advanced scale-out architecture is shown in Figure 3.

FIGURE 3 Example of Brocade's Advanced Scale-Out Architecture (Optimized 5-Stage Clos)

Brocade's advanced scale-out architectures allow data centers to be built at very high scales of ports and racks. Advanced scale-out
architectures using an optimized 5-stage Clos topology are described later in more detail.

A consequence of server virtualization enabling physical servers to host several virtual machines (VMs) is that the scale requirement for the control and data planes for networking parameters like MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables has multiplied. Also, these virtualized servers must support much higher throughput than in a traditional enterprise environment, leading to an evolution in Ethernet standards across 10 Gigabit Ethernet (10 GbE), 25 GbE, 40 GbE, 50 GbE, and 100 GbE.
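
As a rough, hypothetical illustration of how virtualization multiplies control-plane state (the rack, server, and VM counts below are assumptions, not measured values), the following sketch estimates MAC/ARP table requirements with and without server virtualization:

    # Hypothetical estimate of control-plane state (MAC/ARP entries) with and
    # without server virtualization; all counts are illustrative assumptions.
    racks = 20                # assumed compute racks
    servers_per_rack = 40     # assumed physical servers per rack
    vms_per_server = 20       # assumed virtual machines per server

    physical_hosts = racks * servers_per_rack
    virtual_hosts = physical_hosts * vms_per_server

    # Each host contributes at least one MAC address and one ARP entry.
    print(f"Entries without virtualization: {physical_hosts}")   # 800
    print(f"Entries with virtualization:    {virtual_hosts}")    # 16000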

Traffic Isolation, Segmentation, and Application Continuity


For multitenant cloud environments, providing traffic isolation between the network tenants is a priority. This isolation must be achieved at
all networking layers. In addition, many environments must support overlapping IP addresses and VLAN numbering for the tenants of
the network. Providing traffic segmentation through enforcement of security and traffic policies for each cloud tenant's application tiers is
a requirement as well.

In order to support application continuity and infrastructure high availability, it is commonly required that the underlying networking
infrastructure be extended within and across one or more data center sites. Extension of Layer 2 domains is a specific requirement in
many cases. Examples of this include virtual machine mobility across the infrastructure for high availability; resource load balancing and
fault tolerance needs; and creation of application-level clustering, which commonly relies on shared broadcast domains for clustering
operations like cluster node discovery and many-to-many communication. The need to extend tenant Layer 2 and Layer 3 domains while still supporting a common infrastructure Layer 3 environment, both within and across data center sites, is creating new challenges for network architects and administrators.

The remainder of this solution design guide describes data center networking architectures that meet the requirements identified above
for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. This guide
focuses on the design considerations and choices for building a data center site using Brocade platforms and technologies. Refer to the
Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide for a discussion on multitenant
infrastructures and overlay networking that builds on the architectural concepts defined here.

Data Center Networks: Building Blocks
Brocade VDX and SLX Platforms
Networking Endpoints
Single-Tier Topology
Leaf-Spine Topology (Two Tiers)
Optimized 5-Stage Folded Clos Topology (Three Tiers)
Edge Services and Border Switches Topology

This section discusses the building blocks that are used to build the network and network virtualization architectures for a data center site.
These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to build fairly
independent elements that can be assembled together, depending on the scale requirements of the networking infrastructure.

Brocade VDX and SLX Platforms



The first building block for the networking infrastructure is the Brocade networking platforms, which include Brocade VDX switches and Brocade SLX routers. This section provides a high-level summary of each of these two platform families.

Brocade VDX switches with IP fabrics and VCS fabrics provide automation, resiliency, and scalability. Industry-leading Brocade VDX switches are the foundation for high-performance connectivity in data center fabric, storage, and IP network environments. Available in fixed and modular forms, these highly reliable, scalable, and available switches are designed for a wide range of environments, enabling a low Total Cost of Ownership (TCO) and fast Return on Investment (ROI).

VDX 6740
The Brocade VDX 6740 series of switches provides the advanced feature set that data centers require while delivering the high
performance and low latency that virtualized environments demand. Together with Brocade data center fabrics, these switches transform
data center networks to support the New IP by enabling cloud-based architectures that deliver new levels of scale, agility, and operational
efficiency. These highly automated, software-driven, and programmable data center fabric design solutions support a breadth of network
virtualization options and scale for data center environments ranging from tens to thousands of servers. Moreover, they make it easy for
organizations to architect, automate, and integrate current and future data center technologies while they transition to a cloud model that
addresses their needs, on their timetable and on their terms. The Brocade VDX 6740 Switch offers 48 10-Gigabit Ethernet (GbE) Small Form Factor Pluggable Plus (SFP+) ports and 4 40-GbE Quad SFP+ (QSFP+) ports in a 1U form factor. Each 40-GbE QSFP+ port can be broken out into four independent 10-GbE SFP+ ports, providing an additional 16 10-GbE SFP+ ports, which can be licensed with Ports on Demand (PoD).

FIGURE 4 VDX 6740

FIGURE 5 VDX 6740T

FIGURE 6 VDX 6740T-1G

VDX 6940
The Brocade VDX 6940-36Q is a fixed 40-Gigabit Ethernet (GbE)-optimized switch in a 1U form factor. It offers 36 40-GbE QSFP+ ports and can be deployed as a spine or leaf switch. Each 40-GbE port can be broken out into four independent 10-GbE SFP+ ports, providing a total of 144 10-GbE SFP+ ports. Deployed as a spine, it provides options to connect 40-GbE or 10-GbE uplinks from leaf switches. By deploying this high-density, compact switch, data center administrators can reduce their TCO through savings on power, space, and cooling. In a leaf deployment, 10-GbE and 40-GbE ports can be mixed, offering flexible design options to cost-effectively support demanding data center and service provider environments.

As with other Brocade VDX platforms, the Brocade VDX 6940-36Q offers a Ports on Demand (PoD) licensing model. The Brocade VDX 6940-36Q is available with 24 ports or 36 ports. The 24-port model offers a lower entry point for organizations that want to start small and grow their networks over time. By installing a software license, organizations can upgrade their 24-port switch to the full 36 ports.

The Brocade VDX 6940-144S Switch is 10-GbE optimized with 40-GbE or 100-GbE uplinks in a 2U form factor. It offers 96 native 1/10-GbE SFP/SFP+ ports and either 12 40-GbE QSFP+ ports or 4 100-GbE QSFP28 ports.

FIGURE 7 VDX 6940-36Q

FIGURE 8 VDX 6940-144S

VDX 8770
The Brocade VDX 8770 switch is designed to scale and support complex environments with dense virtualization and dynamic traffic patterns, where more automation is required for operational scalability. The 100-GbE-ready Brocade VDX 8770 dramatically increases the scale that can be achieved in Brocade data center fabrics, with 10-GbE and 40-GbE wire-speed switching, numerous line card options, and the ability to connect over 8,000 server ports in a single switching domain. Available in 4-slot and 8-slot versions, the Brocade VDX 8770 is a highly scalable, low-latency modular switch that supports the most demanding data center networks.

FIGURE 9 VDX 8770-4

FIGURE 10 VDX 8770-8

SLX 9850

The Brocade SLX 9850 Router is designed to deliver the cost-effective density, scale, and performance needed to address the
ongoing explosion of network bandwidth, devices, and services today and in the future. This flexible platform powered by Brocade
SLX-OS provides carrier-class advanced features leveraging proven Brocade routing technology that is used in the most demanding
data center, service provider, and enterprise networks today and is delivered on best-in-class forwarding hardware. The extensible
architecture of the Brocade SLX 9850 is designed for investment protection to readily support future needs for greater bandwidth, scale,
and forwarding capabilities.

Additionally, the Brocade SLX 9850 helps address the increasing agility and analytics needs of digital businesses with network automation and network visibility innovation supported through the Brocade Workflow Composer and the Brocade SLX Insight Architecture.

FIGURE 11 Brocade SLX-9850-4

FIGURE 12 Brocade SLX-9850-8

Networking Endpoints
The next building blocks are the networking endpoints to connect to the networking infrastructure. These endpoints include the compute
servers and storage devices, as well as network service appliances such as firewalls and load balancers.

FIGURE 13 Networking Endpoints and Racks

Figure 13 shows the different types of racks used in a data center infrastructure:

Infrastructure and management racks: These racks host the management infrastructure, which includes any management appliances or software used to manage the infrastructure. Examples are server virtualization management software like VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade Network Advisor. Examples of endpoints in infrastructure racks are physical or virtual IP storage appliances.

Compute racks: Compute racks host the workloads for the data center. These workloads can be physical servers, or they can be virtualized servers when the workload is made up of virtual machines (VMs). The compute endpoints can be single-homed or multihomed to the network.

Edge racks: Network services like perimeter firewalls, load balancers, and NAT devices connected to the network are consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or virtual machines.

These definitions of infrastructure/management, compute, and edge racks are used throughout this solution design guide.

Single-Tier Topology
The next building block is a single-tier network topology that connects endpoints to the network. Because there is only one tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 14. The single-tier switches are shown as a virtual Link Aggregation Group (vLAG) pair. However, the single-tier switches can also be part of a Multi-Chassis Trunking (MCT) pair. The Brocade VDX supports vLAG pairs, whereas the Brocade SLX 9850 supports MCT.

The topology in Figure 14 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating
in multiswitch port channeling. This pair of switches is called a vLAG pair.

FIGURE 14 Single Networking Tier

The single-tier topology scales the least among all the topologies described in this guide, but it provides the best choice for smaller
deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It
also reduces the optics and cabling costs for the networking infrastructure.

Design Considerations
The design considerations for deploying a single-tier topology are summarized in this section.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at the vLAG pair/MCT should be well understood and planned for.

The north-south oversubscription at the vLAG pair/MCT is described as the ratio of the aggregate bandwidth of all downlinks from the
vLAG pair/MCT that are connected to the endpoints to the aggregate bandwidth of all uplinks that are connected to the data center
core/WAN edge router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the
endpoints versus the traffic entering and exiting the single-tier topology.
It is also important to understand the bandwidth requirements for the inter-rack traffic. This is especially true for all north-south
communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair/MCT to the edge racks, and if
the traffic needs to exit, it flows back to the vLAG/MCT switches. Thus, the ratio of the aggregate bandwidth connecting the compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.

Another consideration is the bandwidth of the link that interconnects the vLAG pair/MCT. With multihomed endpoints and no failures, this link is not used for data-plane forwarding. However, if there are link failures in the network, this link may be used for data-plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design that tolerates up to two 10-GbE link failures requires a 20-GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.
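
As an illustration of these ratios, the following sketch computes the north-south oversubscription for a hypothetical vLAG pair; the port counts and speeds are assumptions chosen only to show the arithmetic:

    # Hypothetical single-tier vLAG pair/MCT oversubscription calculation.
    downlink_ports, downlink_speed = 96, 10   # assumed 10-GbE ports facing the racks
    uplink_ports, uplink_speed = 4, 40        # assumed 40-GbE ports facing the core/WAN edge

    downlink_bw = downlink_ports * downlink_speed   # 960 Gbps aggregate toward endpoints
    uplink_bw = uplink_ports * uplink_speed         # 160 Gbps aggregate toward the core

    # North-south oversubscription = aggregate downlink bandwidth : aggregate uplink bandwidth.
    print(f"North-south oversubscription: {downlink_bw / uplink_bw:.0f}:1")  # 6:1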

Port Density and Speeds for Uplinks and Downlinks


In a single-tier topology, the uplink and downlink port density of the vLAG pair/MCT determines the number of endpoints that can be
connected to the network, as well as the north-south oversubscription ratios.

Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX and
SLX Series platforms support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks (25-GbE
interfaces will be supported in the future with the Brocade SLX 9850). The choice of the platform for the vLAG pair/MCT depends on
the interface speed and density requirements.

Scale and Future Growth


A design consideration for single-tier topologies is the need to plan for more capacity in the existing infrastructure and more endpoints in
the future.

Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Any future
expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this
requires additional ports in the vLAG switches.

Other key considerations are whether to connect the vLAG/MCT pair to external networks through data center core/WAN edge routers
and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are
described in a later section of this guide.

Ports on Demand Licensing


Ports on Demand licensing allows you to expand your capacity at your own pace, in that you can invest in a higher port density platform, yet license only a subset of the available ports, that is, the ports that you are using for current needs. This allows for an extensible and future-proof network architecture without the additional upfront cost for unused ports on the switches.

Leaf-Spine Topology (Two Tiers)


The two-tier leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale
data center infrastructures. An example of leaf-spine topology is shown in Figure 15.

FIGURE 15 Leaf-Spine Topology

The leaf-spine topology is adapted from traditional Clos telecommunications networks. This topology is also known as a "3-stage folded Clos": the ingress and egress stages of the original Clos architecture are folded on top of each other to form the leaf layer, with the middle stage acting as the spine.

The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint, physical or virtual. Because all endpoints connect only to the leafs, policy enforcement, including security, traffic path selection, Quality of Service (QoS) marking, traffic scheduling, policing, shaping, and traffic redirection, is implemented at the leafs. The Brocade VDX 6740 and 6940 families of switches are used as leaf switches.

The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. As most policy
implementation is performed at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for
traffic forwarding between the leafs. Brocade VDX or SLX platform families are used as the spine switches depending on the scale and
feature requirements.

As a design principle, the following requirements apply to the leaf-spine topology:


Each leaf connects to all spines in the network.
The spines are not interconnected with each other.
The leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane
operations such as forming a server-facing vLAG.)

The following are some of the key benefits of a leaf-spine topology:


Because each leaf is connected to every spine, there are multiple redundant paths available for traffic between any pair of leafs.
Link failures cause other paths in the network to be used.
Because of the existence of multiple paths, Equal-Cost Multipathing (ECMP) can be leveraged for flows traversing between
pairs of leafs. With ECMP, each leaf has equal-cost routes to reach destinations in other leafs, equal to the number of spines in
the network.

The leaf-spine topology provides a basis for a scale-out architecture. New leafs can be added to the network without affecting
the provisioned east-west capacity for the existing infrastructure.
New spines and new uplink ports on the leafs can be provisioned to increase the capacity of the leaf-spine fabric.
The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions
and reducing architectural and deployment complexities.
The leaf-spine topology provides granular control over subscription ratios for traffic flowing within a rack, between racks, and
outside the leaf-spine topology.

Design Considerations
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at each layer should be well understood and planned for.

For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined
as uplink ports. The north-south oversubscription ratio at the leafs is the ratio of the aggregate bandwidth for the downlink ports and the
aggregate bandwidth for the uplink ports.

For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the
spine switch. For a given pair of leaf switches connecting to the spine switch, the east-west oversubscription ratio at the spine is the ratio
of the aggregate bandwidth of the uplinks of the first switch and the aggregate bandwidth of the uplinks of the second switch. In a
majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to the
nonblocking east-west oversubscriptions should be well understood and depend on the traffic patterns of the endpoints that are
connected to the respective leafs.

The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch
and the traffic bandwidth between endpoints connected to different leaf switches. For example, if the north-south oversubscription ratio is
3:1 at the leafs and 1:1 at the spines, then the bandwidth of traffic between endpoints connected to the same leaf switch should be three
times the bandwidth between endpoints connected to different leafs. From a network endpoint perspective, the network
oversubscriptions should be planned so that the endpoints connected to the network have the required bandwidth for communications.
Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair
when endpoints are multihomed).

The ratio of the aggregate bandwidth of all spine downlinks connected to the leafs and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the
border leaf switches and that exit the data center site.
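
The following sketch works through these ratios for a hypothetical leaf-spine fabric; the port counts, speeds, and number of spines are assumptions used only to illustrate the calculations:

    # Hypothetical leaf: 48 x 10-GbE downlinks to endpoints, one 40-GbE uplink
    # to each of 6 spines. All values are illustrative assumptions.
    leaf_downlink_bw = 48 * 10       # 480 Gbps toward endpoints
    leaf_uplink_bw = 6 * 40          # 240 Gbps toward the spines
    print(f"Leaf north-south oversubscription: {leaf_downlink_bw / leaf_uplink_bw:.0f}:1")  # 2:1

    # East-west at a spine is defined per pair of leafs; it is 1:1 (nonblocking)
    # when both leafs present equal uplink bandwidth to that spine.
    leaf_a_to_spine = 1 * 40
    leaf_b_to_spine = 1 * 40
    print(f"Spine east-west oversubscription: {leaf_a_to_spine / leaf_b_to_spine:.0f}:1")  # 1:1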

Leaf and Spine Scale


Because the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the
number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints.
Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches
in the topology. A higher oversubscription ratio at the leafs reduces the leaf scale requirements, as well.

The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the
number of redundant/ECMP paths between the leafs, and the port density in the spine switches. Higher throughput in the uplinks from
the leaf switches to the spine switches can be achieved by increasing the number of spine switches or bundling the uplinks together in
port-channel interfaces between the leafs and the spines.
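
A rough sizing sketch based on these relationships is shown below; the spine port count and per-leaf port counts are assumptions, not a recommendation for any specific platform:

    # Hypothetical fabric-size estimate for a leaf-spine topology.
    spine_ports = 36          # assumed leaf-facing ports per spine switch
    num_spines = 6            # assumed number of spines (= uplinks per leaf)
    leaf_downlinks = 48       # assumed endpoint-facing 10-GbE ports per leaf

    max_leafs = spine_ports                     # each leaf consumes one port on every spine
    ecmp_paths = num_spines                     # equal-cost paths between any pair of leafs
    endpoint_ports = max_leafs * leaf_downlinks

    print(f"Maximum leaf switches:         {max_leafs}")        # 36
    print(f"ECMP paths between leaf pairs: {ecmp_paths}")       # 6
    print(f"Endpoint-facing 10-GbE ports:  {endpoint_ports}")   # 1728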

Port Speeds for Uplinks and Downlinks


Another consideration for leaf-spine topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX
switches support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for
the leaf and spine depends on the interface speed and density requirements.

Scale and Future Growth


Another design consideration for leaf-spine topologies is the need to plan for more capacity in the existing infrastructure and to plan for
more endpoints in the future.

Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between
existing leaf and spine switches. In either case, the port density requirements for the leaf and the spine switches should be accounted for
during the network design process.

If new leaf switches need to be added to accommodate new endpoints in the network, ports at the spine switches are required to connect
the new leaf switches.

In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches or whether to
add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in
another section of this guide.

Ports on Demand Licensing


Remember that Ports on Demand licensing allows you to expand your capacity at your own pace in that you can invest in a higher port
density platform, yet license only the ports on the Brocade VDX switch that you are using for current needs. This allows for an extensible
and future-proof network architecture without additional cost.

Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links.

If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2 Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS Fabric technology. With Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point of management, a distributed control plane, embedded automation, and multipathing capabilities from Layer 1 to Layer 3. The benefits of deploying a VCS fabric are described later in this design guide.

If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3
Clos deployment. You can deploy Brocade VDX and SLX platforms in a Layer 3 deployment by using Brocade IP fabric technology.
Brocade VDX switches can be deployed in the spine and leaf Places in the Network (PINs), whereas the Brocade SLX 9850 can be
deployed in the spine PIN. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking
infrastructure. The benefits of Brocade IP fabrics are described later in this guide.

Data Center Points of Delivery


Figure 16 shows a building block for a data center site. This building block is called a data center point of delivery (PoD). The data center
PoD consists of the networking infrastructure in a leaf-spine topology along with the endpoints grouped together in management/
infrastructure and compute racks. The idea of a PoD is to create a simple, repeatable, and scalable unit for building a data center site at
scale.

FIGURE 16 A Data Center PoD

Optimized 5-Stage Folded Clos Topology (Three Tiers)


Multiple leaf-spine topologies can be aggregated for higher scale in an optimized 5-stage folded Clos topology. This topology adds a
new tier to the network known as the super-spine. The role of the super-spine is to provide connectivity between the spine switches
across multiple data center PoDs. Figure 17 shows four super-spine switches connecting the spine switches across multiple data center
PoDs.

FIGURE 17 An Optimized 5-Stage Folded Clos with Data Center PoDs

The connection between the spines and the super-spines follows the Clos principles:
Each spine connects to all super-spines in the network.
Neither the spines nor the super-spines are interconnected with each other.

Similarly, all the benefits of a leaf-spine topology (namely, multiple redundant paths, ECMP, scale-out architecture, and control over traffic patterns) are realized in the optimized 5-stage folded Clos topology as well.

With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including
firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or
to scale down by removing existing PoDs, without affecting the existing infrastructure, providing elasticity in scale and isolation of failure
domains.

Brocade VDX switches are used for the leaf PIN, whereas depending on scale and features being deployed, either Brocade VDX or SLX
platforms can be deployed at the spine and super-spine PINs.

This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is
described later in this guide.

Design Considerations
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth,
and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos
topology as well. Some key considerations are highlighted below.

Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the
spine switches dictate the ratio of aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth
of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement,
application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines,
endpoints should be placed to optimize traffic within a data center PoD.

At the super-spine switch, the east-west oversubscription defines the ratio of the bandwidth of the downlink connections for a pair of data
center PoDs. In most cases, this ratio is 1:1.

The ratio of the aggregate bandwidth of all super-spine downlinks connected to the spines and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the
border leaf switches and exiting the data center site.
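As a simple illustration of these ratios, the sketch below (illustrative only; the port counts are example values, not recommendations from this guide) computes a spine's north-south oversubscription as the aggregate downlink bandwidth toward the leafs divided by the aggregate uplink bandwidth toward the super-spines.

    # Illustrative only: a spine's north-south oversubscription is the aggregate
    # downlink bandwidth toward the leafs divided by the aggregate uplink bandwidth
    # toward the super-spines. The port counts below are example values.
    def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
        return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

    # 24 x 40-GbE toward the leafs and 12 x 40-GbE toward the super-spines
    # gives a 2:1 north-south oversubscription at the spine.
    print(oversubscription(24, 40, 12, 40))   # 2.0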

Deployment Model
The Layer 3 gateways for the endpoints connecting to the networking infrastructure can be at the leaf, at the spine, or at the super-spine.

With Brocade IP fabric architecture (described later in this guide), the Layer 3 gateways are present at the leaf layer. So the links between
the leafs, spines, and super-spines are Layer 3.

With Brocade multi-fabric topology using VCS fabric architecture (described later in this guide), there is a choice of the Layer 3 gateway
at the spine layer or at the super-spine layer. In either case, the links between the leafs and spines will be Layer 2 links. If the Layer 3
gateway is at the spine layer, the links between the spine and super-spine are Layer 3. Otherwise, those links are Layer 2 as well. These Layer
2 links are IEEE-802.1Q-VLAN-based optionally over Link Aggregation Control Protocol (LACP) aggregated links. These architectures
are described later in this guide.

Edge Services and Border Switches Topology


For two-tier and three-tier data center topologies, the role of the border switches in the network is to provide external connectivity to the
data center site. In addition, as all traffic enters and exits the data center through the border leaf switches, they present the ideal location in
the network to connect network services like firewalls, load balancers, and edge VPN routers.

The topology for interconnecting the border switches depends on the number of network services that need to be attached and the
oversubscription ratio at the border switches. Figure 18 shows a simple topology for border switches, where the service endpoints
connect directly to the border switches. Border switches in this simple topology are referred to as "border leaf switches" because the
service endpoints connect to them directly.


FIGURE 18 Edge Services PoD

If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The
border switches and the edge racks together form the edge services PoD.

Brocade VDX switches are used for the border leaf PIN. The border leaf switches can also participate in a vLAG pair. This allows the edge
service appliances and servers to dual-home into the border leaf switches for redundancy and higher throughput.

Design Considerations
The following sections describe the design considerations for border switches.

Oversubscription Ratios
The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the 3-tier topology. They
also have uplink connections to the data center core/WAN edge routers as described in the next section.

The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines and the aggregate bandwidth of the uplink
connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site.

The north-south oversubscription ratios for the services connected to the border leafs are another consideration. Because many of the
services connected to the border leafs may have public interfaces that face external entities like core/edge routers and internal interfaces
that face the internal network, the north-south oversubscription for each of these connections is an important design consideration.

Data Center Core/WAN Edge Handoff


The uplinks to the data center core/WAN edge routers from the border leafs carry the traffic entering and exiting the data center site. The
data center core/WAN edge handoff can be Layer 2 and/or Layer 3 in combination with overlay protocols.


The handoff between the border leafs and the data center core/WAN edge may provide domain isolation for the control- and data-plane
protocols running in the internal network and built using one-tier, two-tier, or three-tier topologies. This helps in providing independent
administrative, fault-isolation, and control-plane domains for isolation, scale, and security between the different domains of a data center
site.

Data Center Core and WAN Edge Routers


The border leaf switches connect to the data center core/WAN edge devices in the network to provide external connectivity to the data
center site. Figure 19 shows an example of the connectivity between the vLAG/MCT pair from a single-tier topology, spine switches
from a two-tier topology, border leafs, a collapsed data center core/WAN edge tier, and external networks for Internet and data center
interconnection.

FIGURE 19 Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaf in the Data
Center Site

Building Data Center Sites with Brocade VCS Fabric Technology

Data Center Site with Leaf-Spine Topology
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics

Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to
48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection
of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabrics. This
ensures that there are no loops in the fabrics, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are
blocked. Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology.

Brocade VCS Fabric technology provides the following benefits:


TRILL-based Ethernet fabric: Brocade VCS Fabric technology, which is based on the TRILL standard, uses a Layer 2 routing protocol within the fabric. This ensures that all links are always utilized within the VCS fabric, and there is no need for loop-prevention protocols like Spanning Tree that block links and lead to inefficient utilization of the networking infrastructure.
Active-Active vLAG: VCS fabrics allow for active-active port channels between networking endpoints and multiple VDX
switches participating in a VCS fabric, enabling redundancy and increased throughput.
Single point of management: With all switches in a VCS fabric participating in a logical chassis, the entire topology can be
managed as a single switch. This drastically reduces the configuration, validation, monitoring, and troubleshooting complexity of
the fabric.
Distributed MAC address learning: With Brocade VCS Fabric technology, the MAC addresses that are learned at the edge
ports of the fabric are distributed to all nodes participating within the fabric. This means that the MAC address learning within
the fabric does not rely on flood-and-learn mechanisms, and flooding related to unknown unicast frames is avoided.
Multipathing from Layer 1 to Layer 3: Brocade VCS Fabric technology provides efficiency and resiliency through the use of
multipathing from Layer 1 to Layer 3:
At Layer 1, Brocade Trunking (BTRUNK) enables frame-based load balancing between a pair of switches that are part of
the VCS fabric. This provides near-identical link utilization for links participating in a BTRUNK and ensures that thick (or elephant) flows do not congest an inter-switch link (ISL).
Because of the existence of a Layer 2 routing protocol, Layer 2 ECMP is performed between multiple next hops. This is
critical in a Clos topology, where all spines are ECMP next hops for a leaf that sends traffic to an endpoint connected to
another leaf.
Layer 3 ECMP using Layer 3 routing protocols ensures that traffic is load-balanced between Layer 3 next hops.
Distributed control plane: Control-plane and data-plane state information is shared across devices in the VCS fabric, which
enables fabric-wide MAC address learning, multiswitch port channels (vLAG), Distributed Spanning Tree (DiST), and gateway
redundancy protocols like Virtual Router Redundancy Protocol Extended (VRRP-E) and Fabric Virtual Gateway (FVG), among others. These enable the VCS fabric to function like a single switch to interface with other entities in the infrastructure, thus
appearing as a single control-plane entity to other devices in the network.
Embedded automation: Brocade VCS Fabric technology provides embedded turnkey automation built into Brocade Network OS. These automation features enable zero-touch provisioning of new switches into an existing fabric. Brocade VDX switches also provide multiple management methods, including the command-line interface (CLI), Simple Network Management Protocol (SNMP), REST, and Network Configuration Protocol (NETCONF) interfaces (a brief NETCONF example follows this list).
Multitenancy at Layers 2 and 3: With Brocade VCS Fabric technology, multitenancy features at Layers 2 and 3 enable traffic
isolation and segmentation across the fabric. Brocade VCS Fabric technology allows an extended range of up to 8,000 Layer 2
domains within the fabric, while isolating overlapping IEEE-802.1Q-based tenant networks into separate Layer 2 domains. At Layer 3, Virtual Routing and Forwarding (VRF) instances, multi-VRF routing protocols, and BGP-EVPN enable large-scale multitenancy.
Ecosystem integration and virtualization features: Brocade VCS Fabric technology integrates with leading industry solutions
and products like OpenStack; VMware products like vSphere, NSX, and vRealize; common infrastructure programming tools like
Python; and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps
dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port
Profiles (AMPP), which automatically adjusts port-profile information as a VM moves from one server to another.
Advanced storage features: Brocade VDX switches provide rich storage protocols and features like Fibre Channel over
Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and Auto-NAS (Network Attached
Storage), among others, to enable advanced storage networking.
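As a brief illustration of the programmable management interfaces mentioned above, the following hedged sketch uses the open-source Python ncclient library to retrieve a running configuration over NETCONF. The hostname and credentials are placeholders, and the data models returned depend on the software release; this is a generic NETCONF example, not a Brocade-specific workflow.

    # Hedged example: retrieve the running configuration over NETCONF with the
    # open-source ncclient library. The address and credentials are placeholders,
    # and the data models in the reply depend on the switch and software release.
    from ncclient import manager

    with manager.connect(
        host="switch.example.com",   # placeholder management address
        port=830,
        username="admin",            # placeholder credentials
        password="password",
        hostkey_verify=False,
    ) as conn:
        reply = conn.get_config(source="running")
        print(reply.data_xml[:500])  # show the beginning of the reply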

The benefits and features listed simplify Layer 2 Clos deployment by using Brocade VDX switches and Brocade VCS Fabric technology.
The next section describes data center site designs that use Layer 2 Clos built with Brocade VCS Fabric technology.

Data Center Site with Leaf-Spine Topology


Figure 20 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology, the spines are connected to the data center core/WAN edge devices directly. The spine PIN in this topology is sometimes referred to as the "border spine" because it performs both the spine function of east-west traffic switching and the border function of providing an interface to the data center core/WAN edge.


FIGURE 20 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Spine Switches

Figure 21 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology,
border leaf switches are added along with the edge services PoD for external connectivity and hosting edge services.


FIGURE 21 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Leaf Switches

The border leafs in the edge services PoD are built using a separate VCS fabric. The border leafs are connected to the spine switches in
the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on
the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one
edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center
core/WAN edge routers.

As an alternative to the topology shown in Figure 21, the border leaf switches in the edge services PoD and the data center PoD can be
part of the same VCS fabric, to extend the fabric benefits to the entire data center site. This model is shown in Brocade VCS Fabric on
page 51.

The data center PoDs shown in Figure 20 and Figure 21 are built using Brocade VCS Fabric technology. With Brocade VCS Fabric technology, we recommend interconnecting the spines with each other (not shown in the figures) to ensure the best traffic path during
failure scenarios.

Scale
Table 1 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX platforms at the leaf and spine Places
in the Network (PINs) in a Brocade VCS fabric.

TABLE 1 Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | VCS Fabric Size (Number of Switches) | 10-GbE Port Count
6740, 6740T, 6740T-1G | 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728
6740, 6740T, 6740T-1G | 8770-4 | 3:1 | 44 | 4 | 48 | 2,112
6940-144S | 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456
6940-144S | 8770-4 | 2:1 | 36 | 12 | 48 | 3,456


The following assumptions are made:


Links between the leafs and the spines are 40 GbE.
The Brocade VDX 6740 Switch platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740 Switch, the Brocade VDX 6740T Switch, and the Brocade VDX 6740T-1G Switch. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.)
The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
The Brocade VDX 8770-4 Switch uses 27 × 40-GbE line cards with 40-GbE interfaces.
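For readers who want to reproduce the Table 1 arithmetic, the following Python sketch applies the assumptions above. The 10-GbE access-port counts per leaf (48 for the VDX 6740 family, 96 for the VDX 6940-144S) are implied by the oversubscription ratios and totals in Table 1 rather than stated here as authoritative specifications.

    # A small sketch (not from the guide) that reproduces the Table 1 arithmetic.
    # The 10-GbE access-port counts per leaf (48 for the VDX 6740 family, 96 for
    # the VDX 6940-144S) are implied by the ratios and totals in Table 1.
    def leaf_spine_scale(spine_ports_40g, leaf_uplinks_40g, leaf_access_ports_10g):
        spine_count = leaf_uplinks_40g        # one uplink to each spine
        leaf_count = spine_ports_40g          # each spine port serves one leaf
        fabric_size = leaf_count + spine_count
        port_count = leaf_count * leaf_access_ports_10g
        return leaf_count, spine_count, fabric_size, port_count

    # VDX 6740 leafs (4 uplinks, 48 x 10 GbE) with VDX 6940-36Q spines (36 x 40 GbE):
    print(leaf_spine_scale(36, 4, 48))   # (36, 4, 40, 1728), matching Table 1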

Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
If multiple VCS fabrics are needed at a data center site, the optimized 5-stage Clos topology is used to increase scale by interconnecting
the data center PoDs built using leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as
a multi-fabric topology using VCS fabrics.

In a multi-fabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS
Fabric technology. Note that we recommend that the spines be interconnected in a data center PoD built using Brocade VCS Fabric
technology.

A new super-spine tier is used to interconnect the spine switches across the data center PoDs. In addition, the border leaf switches are also connected to the super-spine switches. There are two deployment options available to build a multi-fabric topology using VCS fabrics.

In the first deployment option, the links between the spine and super-spine are Layer 2. In order to achieve a loop-free environment and
avoid loop-prevention protocols between the spine and super-spine tiers, the super-spine devices participate in a VCS fabric as well. The
connections between the spine and the super-spines are bundled together in (dual-sided) vLAGs to create a loop-free topology. The
standard VLAN range of 1 to 4094 can be extended between the DC PoDs using IEEE 802.1Q tags over the dual-sided vLAGs. This is
illustrated in Figure 22.


FIGURE 22 Multi-Fabric Topology with VCS Technology: With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Super-Spine

In this topology, the super-spines connect directly into the data center core/WAN edge, which provides external connectivity to the
network. Alternatively, Figure 23 shows the border leafs connecting directly to the data center core/WAN edge. In this topology, if the
Layer 3 boundary is at the super-spine, the links between the super-spine and the border leafs carry Layer 3 traffic as well.


FIGURE 23 Multi-Fabric Topology with VCS Technology: With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Border Leafs

In the second deployment option, the links between the spine and super-spine are Layer 3. In cases where the Layer 3 gateways for the
VLANs in the VCS fabrics are at the spine layer, this model provides routing between the data center PoDs. As a consequence of the
links being Layer 3, a loop-free topology is achieved. Here the Brocade SLX 9850 is an option for the super-spine PIN. This is illustrated
in Figure 24.

FIGURE 24 Multi-Fabric Topology with VCS Technology: With L3 Links Between Spine and Super-Spine

If Layer 2 extension is required between the DC PoDs, Virtual Fabric Extension (VF-Extension) technology can be used. With VF-
Extension, the spine switches (VDX 6740 and VDX 6940 only) can be configured as VXLAN Tunnel Endpoints (VTEPs). Subsequently,

Brocade Data Center Fabric Architectures


53-1004601-02 33
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics

the VXLAN protocol can be used to extend the Layer 2 VLANs as well as the virtual fabrics between the VCS fabrics of the DC PoDs.
This is described in more detail in the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide.

Figure 23 and Figure 24 show only one edge services PoD, but there can be multiple such PoDs depending on the edge service
endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff
mechanisms.

Scale
Table 2 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos built with Brocade VCS fabrics. The following assumptions are made:
Links between the leafs and the spines are 40 GbE. Links between the spines and super-spines are also 40 GbE.
The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) Four spines are used to connect the uplinks.
The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used to connect the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. A larger port scale can be realized with a higher oversubscription ratio at the spines.
However, a 1:1 oversubscription ratio is used here and is also recommended.
One spine plane is used for the scale calculations. This means that all spine switches in each data center PoD connect to all
super-spine switches in the topology. This topology is consistent with the optimized 5-stage Clos topology.
Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE ports per line card) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
The link between the spines and the super-spines is assumed to be Layer 3, and 32-way Layer 3 ECMP is utilized for spine to
super-spine connections. This gives a maximum of 32 super-spines for the multi-fabric topology using Brocade VCS Fabric
technology. Refer to the release notes for your platform to check the ECMP support scale.

NOTE
For a larger port scale for the multi-fabric topology using Brocade VCS Fabric technology, multiple spine planes are used.
Architectures with multiple spine planes are described later.

TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spines | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 18 | 9 | 7,776
VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 18 | 3 | 5,184
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 6940-36Q | 3:1 | 32 | 4 | 32 | 9 | 13,824
VDX 6940-144S | VDX 8770-4 | VDX 6940-36Q | 2:1 | 32 | 12 | 32 | 3 | 9,216
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 18 | 18 | 15,552
VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 18 | 6 | 10,368
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 32 | 18 | 27,648
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 32 | 6 | 18,432
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 18 | 36 | 31,104
VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 18 | 12 | 20,736
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 32 | 36 | 55,296
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 32 | 12 | 36,864
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 18 | 60 | 51,840
VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 18 | 20 | 34,560
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-4 | 3:1 | 32 | 4 | 32 | 60 | 92,160
VDX 6940-144S | VDX 8770-4 | SLX 9850-4 | 2:1 | 32 | 12 | 32 | 20 | 61,440
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 18 | 120 | 103,680
VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 18 | 40 | 69,120
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-8 | 3:1 | 32 | 4 | 32 | 120 | 184,320
VDX 6940-144S | VDX 8770-4 | SLX 9850-8 | 2:1 | 32 | 12 | 32 | 40 | 122,880

Building Data Center Sites with Brocade IP Fabric

Data Center Site with Leaf-Spine Topology
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos

Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With Brocade IP fabric, all links in the Clos
topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, turnkey
automation features used to provision, validate, remediate, troubleshoot, and monitor the networking infrastructure, and the hardware
differentiation with Brocade VDX and SLX platforms. The following sections describe these aspects of building data center sites with
Brocade IP fabrics.

Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very
high solution scale, and standards-based interoperability are leveraged.

The following are some of the key benefits of deploying a data center site with Brocade IP fabrics:
Highly scalable infrastructure: Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high.
These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies.
Standards-based and interoperable protocols: Brocade IP fabric is built using industry-standard protocols like the Border
Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid
foundation for a highly scalable solution. In addition, industry-standard overlay control- and data-plane protocols like BGP-
EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and extend tenancy domains
by enabling Layer 2 communications and VM mobility.
Active-active vLAG pairs: By supporting vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported.
This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the
endpoints. vLAG pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can
participate in a vLAG.
Support for unnumbered interfaces: Using the IP unnumbered interface support available in Brocade Network OS on Brocade VDX switches, only one IP address per switch is required to configure the routing protocol peering. This significantly reduces the planning and use of IP addresses, and it simplifies operations (see the sketch after this list).
Turnkey automation: Brocade automated provisioning dramatically reduces the deployment time of network devices and
network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with
minimal effort.
Programmable automation: Brocade server-based automation provides support for common industry automation tools such as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library
and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique
requirements to meet technical or business objectives when the organization is ready.
Ecosystem integration: The Brocade IP fabric integrates with leading industry solutions and products like VMware vCenter,
NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN
Controller support.
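As a rough illustration of the addressing savings from unnumbered interfaces, the sketch below compares the addresses needed for numbered point-to-point links (a /31, or two addresses, per link) with unnumbered peering (roughly one loopback address per switch). The counts are illustrative only.

    # Illustrative comparison: numbered point-to-point links need a /31 (two
    # addresses) per link, while unnumbered peering needs roughly one loopback
    # address per switch.
    def address_count(leafs, spines, unnumbered):
        links = leafs * spines          # every leaf connects to every spine
        if unnumbered:
            return leafs + spines       # one loopback per switch
        return 2 * links                # two addresses per numbered link

    print(address_count(36, 4, unnumbered=False))   # 288 interface addresses
    print(address_count(36, 4, unnumbered=True))    # 40 loopback addresses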

Data Center Site with Leaf-Spine Topology


A data center PoD built with IP fabrics supports dual-homing of network endpoints using multiswitch port channel interfaces formed
between a pair of Brocade VDX switches participating in a vLAG. This pair of leaf switches is called a vLAG pair (see Figure 25).


FIGURE 25 An IP Fabric Data Center PoD Built with Leaf-Spine Topology and vLAG Pairs for Dual-Homed Network Endpoint

The Brocade VDX switches in a vLAG pair have a link between them for control-plane purposes to create and manage the multiswitch
port-channel interfaces. When network virtualization with BGP EVPN is used, these links also carry switched traffic in case of downlink
failures or single-homed endpoints. Oversubscription of the inter-switch link (ISL) is an important consideration for these scenarios.

Figure 26 shows a data center site deployed using a leaf-spine topology and an edge services PoD. Here the network endpoints are
illustrated as single-homed, but dual homing is enabled through vLAG pairs where required.

FIGURE 26 Data Center Site Built with Leaf-Spine Topology and an Edge Services PoD


The links between the leafs, spines, and border leafs are all Layer 3 links. The border leafs are connected to the spine switches in the data
center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaf to the data center core/WAN edge can
be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN
edge routers.

There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for
connecting to the data center core/WAN edge routers.

Scale
Table 3 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 40-GbE links between leafs and spines.

The following assumptions are made:


Links between the leafs and the spines are 40 GbE.
The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.)
The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE ports per line card) for connections between leafs and spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.

NOTE
For a larger port scale in Brocade IP fabrics in a 3-stage folded Clos, the Brocade VDX 8770-4 or 8770-8 can be used as a
leaf switch.

TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs and Spines

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | 3:1 | 72 | 4 | 76 | 3,456
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | 3:1 | 144 | 4 | 148 | 6,912
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | 3:1 | 240 | 4 | 244 | 11,520
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-8 | 3:1 | 480 | 4 | 484 | 23,040
VDX 6940-144S | VDX 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456
VDX 6940-144S | VDX 8770-4 | 2:1 | 72 | 12 | 84 | 6,912
VDX 6940-144S | VDX 8770-8 | 2:1 | 144 | 12 | 156 | 13,824
VDX 6940-144S | SLX 9850-4 | 2:1 | 240 | 12 | 252 | 23,040
VDX 6940-144S | SLX 9850-8 | 2:1 | 480 | 12 | 492 | 46,080

Table 4 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 100-GbE links between leafs and spines.

The following assumptions are made:


Links between the leafs and the spines are 100 GbE.
The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks.

TABLE 4 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 100-GbE Links Between Leafs and Spines

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
VDX 6940-144S | VDX 8770-4 | 2.4:1 | 24 | 12 | 36 | 2,304
VDX 6940-144S | VDX 8770-8 | 2.4:1 | 48 | 12 | 60 | 4,608
VDX 6940-144S | SLX 9850-4 | 2.4:1 | 144 | 12 | 156 | 13,824
VDX 6940-144S | SLX 9850-8 | 2.4:1 | 288 | 12 | 300 | 27,648

Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If a higher scale is required, the optimized 5-stage folded Clos topology is used to interconnect the data center PoDs built using a
Layer 3 leaf-spine topology. An example topology is shown in Figure 27.

FIGURE 27 Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and IP Fabric PoDs


Figure 27 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint
requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff
mechanisms.

Scale
Figure 28 shows a variation of the optimized 5-stage Clos. This variation includes multiple super-spine planes. Each spine in a data
center PoD connects to a separate super-spine plane.

FIGURE 28 Optimized 5-Stage Clos with Multiple Super-Spine Planes

The number of super-spine planes is equal to the number of spines in the data center PoDs. The number of uplink ports on the spine
switch is equal to the number of switches in a super-spine plane. Also, the number of data center PoDs is equal to the port density of the
super-spine switches. Introducing super-spine planes to the optimized 5-stage Clos topology greatly increases the number of data center PoDs that can be supported. For the purposes of the port scale calculations of the Brocade IP fabric in this section, the optimized 5-stage Clos topology with multiple super-spine planes is considered.
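The relationships described above can be expressed as a small calculation. The following sketch assumes the 1:1 north-south ratio at the spines and the 32-way ECMP limit listed in the assumptions for Table 5 below; the function and parameter names are illustrative.

    # A rough sketch of the relationships described above, assuming the 1:1
    # north-south ratio at the spines and the 32-way ECMP limit listed in the
    # assumptions for Table 5. Function and parameter names are illustrative.
    def five_stage_scale(spine_ports, spines_per_pod, super_spine_ports,
                         leaf_access_ports_10g, ecmp_limit=32):
        spine_uplinks = min(spine_ports // 2, ecmp_limit)  # 1:1 split, capped by ECMP
        leafs_per_pod = spine_uplinks                      # downlinks mirror uplinks
        planes = spines_per_pod                            # one plane per spine
        super_spines_per_plane = spine_uplinks
        pods = super_spine_ports                           # one PoD per super-spine port
        ports_10g = pods * leafs_per_pod * leaf_access_ports_10g
        return planes, super_spines_per_plane, pods, ports_10g

    # VDX 6740 leafs, VDX 6940-36Q spines, VDX 6940-36Q super-spines:
    print(five_stage_scale(36, 4, 36, 48))   # (4, 18, 36, 31104), matching Table 5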

Table 5 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 40-GbE
interfaces between leafs, spines, and super-spines. The following assumptions are made:
Links between the leafs and the spines are 40 GbE. Links between spines and super-spines are also 40 GbE.
The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, the Brocade VDX 6740T, and the Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) Four spines are used for connecting the uplinks.
The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used for connecting the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal to the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.


The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE ports per line card) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.

TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 4 | 18 | 36 | 31,104
VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 12 | 18 | 36 | 62,208
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 4 | 18 | 72 | 62,208
VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 12 | 18 | 72 | 124,416
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 4 | 18 | 144 | 124,416
VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 12 | 18 | 144 | 248,832
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 4 | 18 | 240 | 207,360
VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 12 | 18 | 240 | 414,720
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 4 | 18 | 480 | 414,720
VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 12 | 18 | 480 | 829,440
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 4 | 32 | 72 | 110,592
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 12 | 32 | 72 | 221,184
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 32 | 4 | 4 | 32 | 240 | 368,640
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 32 | 12 | 12 | 32 | 240 | 737,280
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 32 | 4 | 4 | 32 | 480 | 737,280
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 32 | 12 | 12 | 32 | 480 | 1,474,560

Table 6 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 100-GbE
interfaces between the leafs, spines, and super-spines. The following assumptions are made:
Links between the leafs and the spines are 100 GbE. Links between spines and super-spines are also 100 GbE.
The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks. Four spines are used for connecting the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal to the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.
32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.

TABLE 6 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 100 GbE Between Leaf, Spine, and Super-Spine

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2.4:1 | 12 | 4 | 4 | 12 | 24 | 27,648
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2.4:1 | 12 | 4 | 4 | 12 | 48 | 55,296
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2.4:1 | 24 | 4 | 4 | 24 | 48 | 110,592
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 32 | 4 | 4 | 32 | 144 | 442,368
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 32 | 4 | 4 | 32 | 288 | 884,736

Even higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to enforce the maximum ECMP scale supported by the platform. This provides a higher port scale for the topology, while still ensuring that the maximum ECMP scale is used. Note that this arrangement provides a nonblocking 1:1 north-south oversubscription at the spine in most scenarios.

In Table 7, the scale for a 5-stage folded Clos with 40-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP
policies are used to enforce the ECMP maximum scale.


TABLE 7 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 40 GbE Between Leafs, Spines, and Super-Spines

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 72 | 4 | 4 | 72 | 144 | 497,664
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 72 | 4 | 4 | 72 | 144 | 995,328
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 120 | 4 | 4 | 120 | 240 | 1,382,400
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 120 | 12 | 12 | 120 | 240 | 2,764,800
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 120 | 4 | 4 | 120 | 480 | 2,764,800
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 120 | 12 | 12 | 120 | 480 | 5,529,600
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-8 | SLX 9850-8 | 3:1 | 240 | 4 | 4 | 240 | 480 | 5,529,600
VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2:1 | 240 | 12 | 12 | 240 | 480 | 11,059,200

In Table 8, the scale for a 5-stage folded Clos with 100-GbE interfaces between leaf, spine, and super-spine is shown, assuming that BGP policies are used to enforce the ECMP maximum scale.

TABLE 8 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 100 GbE Between Leafs, Spines, and Super-Spines

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 72 | 4 | 4 | 72 | 144 | 995,328
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 72 | 4 | 4 | 72 | 288 | 1,990,656
VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2.4:1 | 144 | 4 | 4 | 144 | 288 | 3,981,312

Building Data Center Sites with Layer 2 and Layer 3 Fabrics
A data center site can be built using a Layer 2 and Layer 3 Clos that uses Brocade VCS fabrics and Brocade IP fabrics simultaneously in the same topology. This topology is applicable when a particular fabric is better suited to a given application or use case. Figure 29 shows a deployment with both Brocade VCS fabric-based data center PoDs and IP fabric-based data center PoDs, interconnected in an optimized 5-stage Clos topology.

FIGURE 29 Data Center Site Built Using VCS Fabric and IP Fabric PoDs

In this topology, the links between the spines, super-spines, and border leafs are Layer 3. This provides a consistent interface between
the data center PoDs and enables full communication between endpoints in any PoD.

Figure 30 shows an alternate topology where a leaf appliance (vLAG pair) is used to connect the spines of a VCS fabric with the rest of the IP fabric. This topology should be used to interconnect VCS fabric-based and IP fabric-based topologies when network virtualization with BGP EVPN is used. In Figure 30, if BGP EVPN is used, its domain spans all the Layer 3 links. If a common Layer 3 gateway is required to enable virtual machine mobility and application continuity, then the Layer 3 boundary for the VCS fabric endpoints should be at the leaf appliance, using Brocade static anycast gateway technology.

FIGURE 30 Data Center Site Built Using VCS Fabric and IP Fabric PoDs Using a vLAG Leaf Appliance

Scaling a Data Center Site with a Data Center Core
A very large data center site can use multiple different deployment topologies. Figure 31 shows a data center site with multiple 5-stage
Clos deployments that are interconnected using a data center core.

FIGURE 31 Data Center Site Built with Optimized 5-Stage Clos Topologies Interconnected with a Data Center Core

Note that the border leafs or leaf switches from each of the Clos deployments connect into the data center core routers. The handoff
from the border leafs/leafs to the data center core router can be Layer 2 and/or Layer 3, with overlay protocols like VXLAN and BGP-
EVPN, depending on the requirements. The border leafs in this topology are optional and are not needed unless there are specific
requirements like connecting to edge services specific to a Clos topology or creating a network control-plane domain for administration
and scale purposes.

The number of Clos topologies that can be connected to the data center core depends on the port density and throughput of the data
center core devices. Each deployment connecting into the data center core can be a single-tier, leaf-spine, or optimized 5-stage Clos
design deployed using an IP fabric architecture or a multi-fabric topology using VCS fabrics.

Also shown in Figure 31 is a centralized edge services PoD that provides network services for the entire site. There can be one or more
edge services PoDs with the border leafs in the edge services PoD, providing the handoff to the data center core. The WAN edge routers
also connect to the edge services PoDs and provide connectivity to the external network.

Control-Plane and Hardware-Scale Considerations

Control-Plane Architectures

The maximum size of the network deployment depends on the scale of the control-plane protocols, as well as the scale of hardware
Application-Specific Integrated Circuit (ASIC) tables.

The control plane for a VCS fabric includes the following:


A Layer 2 routing protocol called Fabric Shortest Path First (FSPF)
VCS fabric messaging services for protocol messaging and state exchange
Ethernet Name Server (ENS) for MAC address learning
Protocols for VCS formation:
Brocade Link Discovery Protocol (BLDP)
Join and Merge Protocol (JMP)
State maintenance and distributed protocols:
Distributed Spanning Tree Protocol (dSTP)
VRRP-E
FVG
vLAG
Distributed VXLAN Gateway

The maximum scale of the VCS fabric deployment is a function of the number of nodes, topology of the nodes, link reliability, distance
between the nodes, features deployed in the fabric, and the scale of the deployed features. A maximum of 48 nodes are supported in a
VCS fabric.

In a Brocade IP fabric, the control plane is based on routing protocols like BGP and OSPF. In addition, a control plane is provided for
formation of vLAG pairs. In the case of virtualization with VXLAN overlays, BGP-EVPN provides the control plane. The maximum scale
of the topology also depends on the scalability of these protocols.

For both Brocade VCS fabrics and IP fabrics, it is important to understand the hardware table scale and the related control-plane scales.
These tables include:
MAC address table
Host route tables/Address Resolution Protocol/Neighbor Discovery (ARP/ND) tables
Longest Prefix Match (LPM) tables for IP prefix matching
Next-hop table
Ternary Content Addressable Memory (TCAM) tables for packet matching

These tables are programmed into the switching ASICs based on the information learned through configuration, the data plane, or the
control-plane protocols. This also means that it is important to consider the control-plane scale for carrying information for these tables
when determining the maximum size of the network deployment.
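As a conceptual illustration of what the LPM table resolves, the sketch below performs a longest-prefix-match lookup in Python. It only models the selection logic; it does not reflect how the switching ASIC implements the table in hardware, and the route entries and next-hop names are invented for the example.

    # Conceptual sketch only: a longest-prefix-match lookup selects the most
    # specific route covering a destination; the switching ASIC resolves this in
    # hardware very differently.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "spine-1",
        ipaddress.ip_network("10.1.0.0/16"): "spine-2",
        ipaddress.ip_network("10.1.2.0/24"): "leaf-vtep-7",
    }

    def lpm_lookup(dst, table):
        addr = ipaddress.ip_address(dst)
        matches = [net for net in table if addr in net]
        return table[max(matches, key=lambda net: net.prefixlen)] if matches else None

    print(lpm_lookup("10.1.2.10", routes))   # leaf-vtep-7 (the /24 wins)
    print(lpm_lookup("10.9.9.9", routes))    # spine-1 (only the /8 covers it)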


Control-Plane Architectures

Single-Tier Data Center Sites


In a single-tier data center site, all networking endpoints are connected to the vLAG pair. The single-tier switches participate in vLAG pairing/MCT for multihoming of the network endpoints. Figure 32 shows an example where the networking devices connected to the vLAG pair/MCT are dual-homed.

FIGURE 32 Control Plane for Single-Tier Data Center Site

The Layer 3 boundary and default gateway for all networking endpoints are also hosted in the vLAG pair/MCT. Gateway redundancy protocols like VRRP-E and Fabric Virtual Gateway, provided in Brocade Network OS on the Brocade VDX platforms, provide active-active gateway redundancy for the endpoints. Similarly, VRRP-E, available in SLX-OS on the Brocade SLX 9850, provides active-active gateway redundancy for the endpoints. Optionally, the Layer 3 boundary can be pushed to the data center core/WAN edge.

Because the border leaf functionality is also folded into the vLAG pair, the handoff to the data center core/WAN edge devices would be Layer 2 and/or Layer 3 with a combination of overlay protocols like BGP-EVPN and VXLAN.


Brocade VCS Fabric


Brocade VCS provides simplicity in deployment and management because the control plane for the Brocade VCS fabric is built into Brocade Network OS. Once Brocade VDX switches join the fabric, all switches automatically participate in the VCS control plane without the need for any additional configuration. The VCS control plane ensures that all switches participating in the fabric are aware of all other switches and of the network topology, and no additional configuration is needed to set up the control plane and data plane for reachability between the switches.

Figure 33 demonstrates a Brocade VCS fabric deployed in a leaf-spine architecture.

FIGURE 33 Layer 2 in Brocade VCS Fabric

With Brocade VCS, Layer 2 domains are automatically extended across the fabric. Layer 2 domain separation for edge ports of the
fabric can be performed using VLANs or virtual fabrics, which provides a larger set of up to 8,000 Layer 2 domains. With virtual fabrics,
it is possible to support overlapping VLAN IDs between multiple tenants of the network on the same switch and also across the VCS
fabric.

In Figure 34, the Layer 3 boundary for all networking endpoints is shown to be in the spine. The spine devices participate in active-active
gateway redundancy using VRRP-E or Fabric Virtual Gateway. Multitenancy can be achieved at Layer 3 using Virtual Routing and
Forwarding (VRF) instances at the Layer 3 boundary. The links between the spines and the data center core/WAN edge are Layer 3 in
this case. If VRFs are used, VRF-lite technology is used to extend the VRFs to the data center core/WAN edge. Optionally, the Layer 3
boundary can be hosted at the leaf as well.


FIGURE 34 Brocade VCS Fabric with Layer 3 Boundary at the Spine

In Figure 35, the Layer 3 boundary for all networking endpoints is shown to be at the WAN edge/data center core. A port channel
connects the spines to the data center core/WAN edge devices. Multitenancy can be achieved at Layer 3 using Virtual Routing and
Forwarding (VRF) instances at the Layer 3 boundary.


FIGURE 35 Brocade VCS Fabric with Layer 3 Boundary Outside the Fabric

In Figure 36, the Layer 3 boundary for all networking endpoints is shown to be at the border leaf switches. Here the border leafs are shown as part of the VCS fabric. However, they can be a separate VCS fabric as well, in which case the connections between the spine and the border leaf use a Layer 2 dual-sided vLAG (see Figure 11). Multitenancy can be achieved at Layer 3 using Virtual Routing and Forwarding (VRF) instances at the Layer 3 boundary. Alternatively, the Layer 3 boundary can be at the spine as well; in that case, the links between the spine and the border leaf switches carry Layer 3 traffic as well, with routing enabled on those links.


FIGURE 36 Brocade VCS Fabric with Layer 3 Boundary at the Border Leaf

Multi-Fabric Topology Using VCS Technology


For a multi-fabric topology using VCS technology, the individual data center PoDs are built with Brocade VCS. Layer 2 multitenancy exists within each data center PoD with VLANs and/or virtual fabrics.

In Figure 37, because the Layer 2 domains are extended to the super-spine, the Layer 3 boundary for the VLANs is at the super-spine
layer. Using VRRP-E and FVG technologies, all super-spine switches can participate in active forwarding for the gateway IP addresses
for the VLANs extended up to the super-spines. Multitenancy can be achieved at Layer 3 using Virtual Routing and Forwarding (VRF)
instances at the Layer 3 boundary.


FIGURE 37 Control Plane for Multi-Fabric Topology Using VCS Technology with Layer 3 Boundary at the Super-Spines

The links between the super-spines and the data center core/WAN edge are Layer 3 links. These links can carry VRF traffic using VRF-
Lite.

Figure 38 shows each VCS fabric in the DC PoDs with the Layer 3 boundary for the VLANs and virtual fabrics at the spine layer. The
connections between the spines and the super-spines are Layer 3. Multitenancy can be achieved at Layer 3 using VRF instances at the
Layer 3 boundary. Routing protocols like BGP and OSPF are used to provide the routing between the data center PoDs. If VRFs are being used, VRF-Lite can be used to extend the VRF domains.


FIGURE 38 Control Plane for Multi-Fabric Topology Using VCS Technology with Layer 3 Between Spines and Super-Spines

Brocade IP Fabric
With Brocade IP fabric, routing protocols between the networking tiers are required to exchange reachability information. Figure 39
shows a leaf-spine topology deployed using Brocade IP fabric with the protocol options of iBGP, eBGP, and OSPF. The routing protocol
designs are described in the next section.

FIGURE 39 Control Plane for Brocade IP Fabric in a Leaf-Spine Topology


The Layer 3 boundary in a Brocade IP fabric is always at the leafs. In cases of multihomed network endpoints, gateway redundancy
protocols like VRRP-E are used to provide active-active forwarding on the vLAG pair. When BGP EVPN is used, the Brocade Static
Anycast Gateway protocol can be used for active-active forwarding on the vLAG pair.

Figure 40 illustrates a Brocade IP fabric deployment in a 5-stage Clos topology.

FIGURE 40 Control Plane for Brocade IP Fabric in a 5-Stage Clos Topology

For simplicity, the entire site is deployed with the same routing protocol. All individual data center PoDs participate in the same routing domain. However, a different routing protocol per data center PoD can be deployed, with redistribution into the routing protocol running between the super-spine and the spine. Route aggregation and policy enforcement can be implemented at the leaf and spine layers to reduce the forwarding table sizes and influence the traffic patterns.

Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology
eBGP-based Brocade IP Fabric and Multi-Fabric Topology.......................................................................................................... 59
iBGP-based Brocade IP Fabric and Multi-Fabric Topology........................................................................................................... 60

eBGP-based Brocade IP Fabric and Multi-Fabric Topology


Figure 41 illustrates AS numbering and peering for eBGP in the Brocade IP fabric deployment. The same discussion applies to eBGP
deployments in a multi-fabric topology using VCS technology; in that case, eBGP peering is done between the spine and the super-spine
(when the links between the spine and super-spine are Layer 3).

FIGURE 41 eBGP-based Brocade IP Fabric Deployment

Key design elements of the eBGP deployment:


•  Each leaf switch has an eBGP peering with each spine switch. Peering is done using interface IP addresses. Brocade VDX
   switches also support peering based on an unnumbered interface, which allows a switch to use a common IP address for all of its
   BGP peering sessions.
•  All spine switches share the same AS number. The spine switches are not connected to each other and do not run iBGP between them.
•  All super-spine switches also share the same AS number. The super-spine switches are not connected to each other and do not run
   iBGP between them.
•  Both leaf switches in a vLAG pair share the same AS number. The leaf switches do not run iBGP between them.
•  The AS numbers of each leaf/vLAG pair, of the spines, and of the super-spines are unique within the data center site.
•  Private AS numbering is used for the leaf/vLAG pairs, spines, and super-spines. However, depending on the interface between
   the WAN edge and external networks, a public AS number (ASN) might be required for the WAN edge devices.
•  4-byte ASNs are supported to scale the number of available AS numbers.
•  If a Layer 3 handoff to the data center core/WAN edge is needed, eBGP is used for the peering.
•  The WAN edge devices remove the private AS numbers when advertising the routes for the data center site.
•  The WAN edge devices summarize the routes for the data center site before advertising them to other sites.
•  The WAN edge devices originate aggregate routes and/or a default route into the eBGP domain for the data center site.
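
To make the AS-numbering rules above concrete, the following Python sketch generates an AS plan and the resulting eBGP peering list for a small site. It is purely illustrative: the function and device names, and the use of the 4-byte private ASN range starting at 4200000000, are assumptions made for this example and are not part of any Brocade tooling or configuration syntax.

```python
# Hypothetical sketch (not Brocade CLI): build an eBGP AS-numbering and
# peering plan that follows the design rules above.

from itertools import product

PRIVATE_4BYTE_ASN_BASE = 4200000000  # start of the 4-byte private ASN range

def build_ebgp_plan(num_leaf_pairs: int, num_spines: int, num_super_spines: int):
    """Return per-device ASNs and the eBGP peering list for one DC site."""
    asn = {}

    # All spines share one ASN; all super-spines share another.
    spine_asn = PRIVATE_4BYTE_ASN_BASE + 1
    super_spine_asn = PRIVATE_4BYTE_ASN_BASE + 2
    for s in range(num_spines):
        asn[f"spine{s+1}"] = spine_asn
    for ss in range(num_super_spines):
        asn[f"superspine{ss+1}"] = super_spine_asn

    # Each leaf vLAG pair gets its own unique ASN; both members share it.
    for p in range(num_leaf_pairs):
        pair_asn = PRIVATE_4BYTE_ASN_BASE + 10 + p
        asn[f"leaf{2*p+1}"] = pair_asn
        asn[f"leaf{2*p+2}"] = pair_asn

    # eBGP sessions: every leaf peers with every spine, and every spine peers
    # with every super-spine. No iBGP within a tier or within a vLAG pair.
    leaves = [d for d in asn if d.startswith("leaf")]
    spines = [d for d in asn if d.startswith("spine")]
    super_spines = [d for d in asn if d.startswith("superspine")]
    peerings = list(product(leaves, spines)) + list(product(spines, super_spines))
    return asn, peerings

if __name__ == "__main__":
    asns, sessions = build_ebgp_plan(num_leaf_pairs=2, num_spines=4, num_super_spines=4)
    for device, a in sorted(asns.items()):
        print(f"{device}: AS {a}")
    print(f"{len(sessions)} eBGP sessions")
```

Running the sketch with two leaf pairs, four spines, and four super-spines prints one unique ASN per leaf pair, a shared ASN for the spines, a shared ASN for the super-spines, and the full-mesh leaf-spine and spine-super-spine session list.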

iBGP-based Brocade IP Fabric and Multi-Fabric Topology


Figure 42 illustrates AS numbering and peering for iBGP in the Brocade IP fabric deployment. The same discussion applies to iBGP
deployments in a multi-fabric topology using VCS technology; in that case, iBGP peering is done between the spine and the super-spine
(when the links between the spine and super-spine are Layer 3).

FIGURE 42 iBGP-based Brocade IP Fabric Deployment

Key design elements of the iBGP deployment:


•  Each leaf switch has an iBGP peering with each spine switch. Peering is done using interface IP addresses. Brocade VDX switches
   also support peering based on an unnumbered interface, which allows a switch to use a common IP address for all of its BGP
   peering sessions.
•  All the spine, leaf, and border leaf switches share the same AS number.
•  There is no iBGP peering between the leafs in a vLAG pair.
•  eBGP or iBGP peering can be used between the spines and the super-spines. If iBGP peering is used, the super-spines also
   share the same AS number as the spines, leafs, and border leaf switches.
•  All the spine switches act as route reflectors for the IPv4 and IPv6 unicast address families (underlay protocols). Similarly, all the
   super-spine switches act as route reflectors for the IPv4 and IPv6 unicast address families if iBGP is used between the spines
   and super-spines.
•  Each route reflector uses the "next-hop-self" option for the IPv4 and IPv6 address families (underlay protocols).
•  Private AS numbers are used within the data center site. However, depending on the interface between the WAN edge and
   external networks, a public ASN might be required at the WAN edge devices.
•  4-byte ASNs are supported to scale the number of available AS numbers.
•  If a Layer 3 handoff to the data center core/WAN edge is needed, eBGP is used for the peering.
•  The WAN edge devices remove the private AS numbers when advertising the routes for the data center site.
•  The WAN edge devices summarize the routes for the data center site before advertising them to other sites.
•  The WAN edge devices originate aggregate routes and/or a default route into the data center site.
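
As with the eBGP case, the iBGP design rules can be summarized in a short, illustrative Python sketch. The helper and device names below are hypothetical, and the single site ASN of 64512 is simply an example from the 2-byte private range; the sketch only captures the session roles (route-reflector client, next-hop-self, underlay address families) described above.

```python
# Hypothetical sketch (not Brocade CLI): derive the iBGP session roles for a
# leaf-spine PoD that follows the rules above -- one private ASN for the
# whole site, spines acting as route reflectors with next-hop-self, and no
# iBGP between the members of a leaf vLAG pair.

SITE_ASN = 64512  # single private ASN shared by leafs, spines, and border leafs

def ibgp_sessions(leafs, spines):
    """Return one session descriptor per leaf-spine iBGP peering."""
    sessions = []
    for spine in spines:
        for leaf in leafs:
            sessions.append({
                "local": spine,
                "peer": leaf,
                "remote_as": SITE_ASN,              # iBGP: same ASN on both ends
                "route_reflector_client": True,     # spine reflects leaf routes
                "next_hop_self": True,              # for the underlay AFs
                "address_families": ["ipv4 unicast", "ipv6 unicast"],
            })
    return sessions

if __name__ == "__main__":
    leafs = [f"leaf{i}" for i in range(1, 5)]       # two vLAG pairs
    spines = [f"spine{i}" for i in range(1, 5)]
    for s in ibgp_sessions(leafs, spines):
        print(f'{s["local"]} <-> {s["peer"]}  RR-client={s["route_reflector_client"]}')
```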

Choosing an Architecture for Your Data Center
High-Level Comparison Table .................................................................................................................................................................... 63
Deployment Scale Considerations............................................................................................................................................................. 64
Fabric Architecture............................................................................................................................................................................................ 65
Recommendations............................................................................................................................................................................................65

Because of the ongoing and rapidly evolving transition toward the cloud, and the need across IT to quickly improve operational agility and
efficiency, the best choice is an architecture based on Brocade data center fabrics. However, the process of choosing an architecture that
best meets your needs today, while leaving you the flexibility to change later, can be paralyzing.

Brocade recognizes how difficult it is for customers to make long-term technology and infrastructure investments, knowing they will have
to live with those choices for years. For this reason, Brocade provides solutions that help you build cloud-optimized networks with
confidence, knowing that your investments have value today and will continue to have value well into the future.

High-Level Comparison Table


Table 9 provides information about which Brocade data center fabric best meets your needs. The IP Fabric columns represent all
deployment topologies for IP fabric, including the leaf-spine and optimized 5-stage Clos topologies.

TABLE 9 Data Center Fabric Support Comparison Table


Customer Requirement | VCS Fabric | Multi-Fabric Topology with VCS Technology (with L2 Extension Across DC PoDs) | IP Fabric | IP Fabric with BGP-EVPN
Virtual LAN (VLAN) extension | Yes | Yes | - | Yes
VM mobility across racks | Yes | Yes | - | Yes
Embedded turnkey provisioning and automation | Yes | Yes, in each data center PoD | - | -
Embedded centralized fabric management | Yes | Yes, in each data center PoD | - | -
vLAG support | Yes, up to 8 devices | Yes, up to 8 devices | Yes, up to 2 devices | Yes, up to 2 devices
Gateway redundancy | Yes, VRRP/VRRP-E/FVG | Yes, VRRP/VRRP-E/FVG | Yes, VRRP-E | Yes, Static Anycast Gateway
Controller-based network virtualization (for example, VMware NSX) | Yes | Yes | Yes | Yes
DevOps tool-based automation | Yes | Yes | Yes | Yes
Multipathing and ECMP | Yes | Yes | Yes | Yes
Layer 3 scale-out between PoDs | - | Yes | Yes | Yes
Turnkey off-box provisioning and automation | Planned | - | Yes | Yes
Data center PoDs optimized for Layer 3 scale-out | - | - | Yes | Yes
Optimization of unknown unicast related broadcasts | Yes | - | N/A | Yes
Rack-level ARP suppression | - | - | - | Yes


Deployment Scale Considerations


The scalability of a solution is an important consideration for deployment. Depending on whether the topology is a leaf-spine or
optimized 5-stage Clos topology, deployments based on Brocade VCS Fabric technology and Brocade IP fabrics scale differently. The
port scales for each of these deployments are documented in previous sections.

In addition, the deployment scale also depends on the control plane and on the hardware tables of the platform. Table 10 provides an
example of the scale considerations for parameters in a leaf-spine topology with Brocade VCS fabric and IP fabric deployments. The
table illustrates how scale requirements for the parameters vary between a VCS fabric and an IP fabric for the same environment.

The following assumptions are made:


•  20 compute racks are in the leaf-spine topology.
•  4 spines and 20 leafs are deployed. Physical servers are single-homed.
•  The Layer 3 boundary is at the spine of a VCS fabric deployment and at the leaf in an IP fabric deployment.
•  Each peering between leafs and spines uses a separate subnet.
•  Brocade IP fabric with BGP-EVPN extends all VLANs across all 20 racks.
•  40 1-Rack-Unit (RU) servers per rack (a standard rack has 42 RUs).
•  2 CPU sockets per physical server x 1 quad-core CPU per socket = 8 CPU cores per physical server.
•  5 VMs per CPU core x 8 CPU cores per physical server = 40 VMs per physical server.
•  A single virtual Network Interface Card (vNIC) for each VM.
•  40 VLANs per rack.

TABLE 10 Scale Considerations for Brocade VCS Fabric and IP Fabric Deployments
MAC Addresses
•  VCS Fabric, Leaf: 40 VMs/server x 40 servers/rack x 20 racks = 32,000 MAC addresses
•  VCS Fabric, Spine: 40 VMs/server x 40 servers/rack x 20 racks = 32,000 MAC addresses
•  IP Fabric, Leaf: 40 VMs/server x 40 servers/rack = 1,600 MAC addresses
•  IP Fabric, Spine: Small number of MAC addresses needed for peering
•  IP Fabric with BGP-EVPN-Based VXLAN, Leaf: 40 VMs/server x 40 servers/rack x 20 racks = 32,000 MAC addresses
•  IP Fabric with BGP-EVPN-Based VXLAN, Spine: Small number of MAC addresses needed for peering

VLANs
•  VCS Fabric, Leaf: 40 VLANs/rack x 20 racks = 800 VLANs
•  VCS Fabric, Spine: 40 VLANs/rack x 20 racks = 800 VLANs
•  IP Fabric, Leaf: 40 VLANs
•  IP Fabric, Spine: No VLANs at spine
•  IP Fabric with BGP-EVPN-Based VXLAN, Leaf: 40 VLANs/rack extended to all 20 racks = 800 VLANs
•  IP Fabric with BGP-EVPN-Based VXLAN, Spine: No VLANs at spine

ARP Entries/Host Routes
•  VCS Fabric, Leaf: None
•  VCS Fabric, Spine: 40 VMs/server x 40 servers/rack x 20 racks = 32,000 ARP entries
•  IP Fabric, Leaf: 40 VMs/server x 40 servers/rack = 1,600 ARP entries
•  IP Fabric, Spine: Small number of ARP entries for peers
•  IP Fabric with BGP-EVPN-Based VXLAN, Leaf: 40 VMs/server x 40 servers/rack x 20 racks + 20 VTEP loopback IP addresses = 32,020 host routes/ARP entries
•  IP Fabric with BGP-EVPN-Based VXLAN, Spine: Small number of ARP entries for peers

L3 Routes (Longest Prefix Match)
•  VCS Fabric, Leaf: None
•  VCS Fabric, Spine: Default gateway for 800 VLANs = 800 L3 routes
•  IP Fabric, Leaf: 40 default gateways + 40 remote subnets x 19 racks + 80 peering subnets = 880 L3 routes
•  IP Fabric, Spine: 40 subnets x 20 racks + 80 peering subnets = 880 L3 routes
•  IP Fabric with BGP-EVPN-Based VXLAN, Leaf: 80 peering subnets + 40 subnets x 20 racks = 880 L3 routes
•  IP Fabric with BGP-EVPN-Based VXLAN, Spine: Small number of L3 routes for peering

Layer 3 Default Gateways
•  VCS Fabric, Leaf: None
•  VCS Fabric, Spine: 40 VLANs/rack x 20 racks = 800 default gateways
•  IP Fabric, Leaf: 40 VLANs/rack = 40 default gateways
•  IP Fabric, Spine: None
•  IP Fabric with BGP-EVPN-Based VXLAN, Leaf: 40 VLANs/rack x 20 racks = 800 default gateways
•  IP Fabric with BGP-EVPN-Based VXLAN, Spine: None
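
The arithmetic behind Table 10 can be reproduced directly from the stated assumptions. The following Python sketch is illustrative only (the variable names are not from this guide); it derives the main per-switch numbers for the 20-rack example.

```python
# Back-of-the-envelope check of the Table 10 numbers, using the assumptions
# listed above. Pure arithmetic; the variable names are illustrative only.

RACKS = 20
SERVERS_PER_RACK = 40            # 40 1-RU servers per rack
VMS_PER_SERVER = 5 * (2 * 4)     # 5 VMs/core x (2 sockets x 4 cores) = 40
VLANS_PER_RACK = 40
PEERING_SUBNETS = 80             # 20 leafs x 4 spines, one subnet per link

vms_per_rack = VMS_PER_SERVER * SERVERS_PER_RACK        # 1,600
vms_per_site = vms_per_rack * RACKS                     # 32,000

# VCS fabric: MAC addresses and ARP entries are fabric-wide state.
vcs_mac = vms_per_site                                  # 32,000
vcs_spine_gateways = VLANS_PER_RACK * RACKS             # 800

# IP fabric: Layer 3 boundary at the leaf, so Layer 2 state is per rack.
ip_leaf_mac = vms_per_rack                              # 1,600
ip_leaf_routes = (VLANS_PER_RACK                        # local gateways
                  + VLANS_PER_RACK * (RACKS - 1)        # remote subnets
                  + PEERING_SUBNETS)                    # = 880

# IP fabric with BGP-EVPN: VLANs extend to all racks, plus VTEP loopbacks.
evpn_leaf_mac = vms_per_site                            # 32,000
evpn_leaf_hosts = vms_per_site + RACKS                  # 32,020
evpn_leaf_routes = PEERING_SUBNETS + VLANS_PER_RACK * RACKS  # 880

print(f"VCS fabric MAC addresses (leaf and spine): {vcs_mac:,}")
print(f"VCS spine default gateways: {vcs_spine_gateways}")
print(f"IP fabric leaf MAC addresses: {ip_leaf_mac:,}")
print(f"IP fabric leaf L3 routes: {ip_leaf_routes}")
print(f"EVPN leaf MAC addresses: {evpn_leaf_mac:,}")
print(f"EVPN leaf host routes/ARP entries: {evpn_leaf_hosts:,}")
print(f"EVPN leaf L3 routes: {evpn_leaf_routes}")
```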


Fabric Architecture
Another way to determine which Brocade data center fabric provides the best solution for your needs is to compare the architectures
side-by-side.

Figure 43 provides a side-by-side comparison of the two Brocade data center fabric architectures. The blue text shows how each
Brocade data center fabric is implemented. For example, a VCS fabric is topology-agnostic and uses TRILL as its transport mechanism,
whereas an IP fabric uses a Clos topology with IP as its transport.

FIGURE 43 Data Center Fabric Architecture Comparison

It is important to note that the same Brocade VDX platform, Brocade Network OS software, and licenses are used for either deployment.
So, when you are making long-term infrastructure purchase decisions, you can be confident that you need only one switching platform.

Recommendations
Of course, each organization's choices are based on its unique requirements, culture, and business and technical objectives. Yet by and
large, the scalability and seamless server mobility of a Layer 2 scale-out VCS fabric provides the ideal starting point for most enterprise
and cloud providers. Like IP fabrics, VCS fabrics provide open interfaces and software extensibility, if you decide to extend the already
capable and proven embedded automation of Brocade VCS Fabric technology.

For organizations looking for a Layer 3 optimized scale-out approach, Brocade IP fabric is the best architecture to deploy. And if
controller-less network virtualization using Internet-proven technologies such as BGP-EVPN is the goal, Brocade IP fabric is the best
underlay.

Brocade architectures also provide the flexibility of combining both of these deployment topologies in an optimized 5-stage Clos
architecture, as illustrated in Figure 29 on page 45. This provides the flexibility to choose a different deployment model per data center
PoD.

Most importantly, if you find your infrastructure technology investment decisions challenging, you can be confident that an investment in
the Brocade VDX switch platform will continue to prove its value over time. With the versatility of the Brocade VDX platform and its
support for both Brocade data center fabric architectures, your infrastructure needs will be fully met today and into the future.
