53-1004601-02
06 October 2016
© 2016, Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other
countries. Other trademarks of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/
brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the
United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this
document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open source license
agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and
obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology
eBGP-based Brocade IP Fabric and Multi-Fabric Topology
iBGP-based Brocade IP Fabric and Multi-Fabric Topology
Document History
Date                 Part Number      Description
February 9, 2016                      Initial release with DC fabric architectures, network virtualization, Data Center Interconnect, and automation content.
September 13, 2016   53-1004601-01    Initial release of solution design guide for DC fabric architectures.
October 06, 2016     53-1004601-02    Replaced the figures for the Brocade VDX 6940-36Q and the Brocade VDX 6940-144S.
The author would like to acknowledge Jeni Lloyd and Patrick LaPorte for their in-depth review of this solution guide and for providing
valuable insight, edits, and feedback.
Overview
Based on the principles of the New IP, Brocade is building on the Brocade VDX and Brocade SLX platforms by delivering cloud-
optimized network and network virtualization architectures, along with new automation innovations, to meet customer demand for
higher levels of scale, agility, and operational efficiency.
The scalable and highly automated Brocade data center fabric architectures described in this solution design guide make it easy for
infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their
own cloud-optimized data center on their own time and terms.
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where
applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity,
non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and
cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support,
and professional services offerings (www.brocade.com).
Data center networking architectures have evolved with the changing requirements of the modern data center and cloud environments.
This evolution has been triggered by a combination of industry technology trends like server virtualization as well as the architectural
changes of the applications being deployed in the data centers. These technological and architectural changes are affecting the way
private and public cloud networks are designed. As these changes proliferate in the traditional data centers, the need to adopt modern
data center architectures has been growing.
The three-tier topology was architected with the requirements of an enterprise campus in mind. In a campus network, the basic
requirement of the access layer is to provide connectivity to workstations. These workstations exchange traffic either with an enterprise
data center for business application access or with the Internet. As a result, most traffic in this network traverses in and out through the
tiers in the network. This traffic pattern is commonly referred to as north-south traffic.
The throughput requirements for traffic in a campus environment are lower than those of a data center network, where server
virtualization has increased the application density and, with it, the data throughput to and from the servers. In addition, cloud
applications are often multitiered and hosted at different endpoints connected to the network. The communication between these
application tiers is a major contributor to the overall traffic in a data center. The multitiered nature of the applications deployed in a data
center drives traffic patterns in a data center network to be more east-west than north-south. In fact, some of the very large data centers
hosting multitiered applications report that more than 90 percent of their overall traffic occurs between the application tiers.
Because of high throughput requirements and the east-west traffic patterns, the networking access layer that connects directly to the
servers exchanges a much higher proportion of traffic with the upper layers of the networking infrastructure, as compared to an enterprise
campus network.
These factors have driven the evolution of data center network architecture toward scale-out architectures. Figure 2 illustrates a leaf-spine
topology, which is an example of a scale-out architecture. These scale-out architectures are built to maximize the throughput of traffic
exchanged between the leaf layer and the spine layer.
FIGURE 2 Scale-Out Architecture: Ideal for East-West Traffic Patterns Common with Web-Based or Cloud-Based Application Designs
As compared to a three-tier network, where the aggregation layer is restricted to two devices, typically because of technologies like
Multi-Chassis Trunking (MCT) in which exactly two devices can participate in the creation of port channels facing the access-layer
switches, the spine layer can have multiple devices and hence provides a higher port density to connect to the leaf-layer switches. This
allows more interfaces from each leaf to connect into the spine layer, providing higher throughput from each leaf to the spine layer. The
characteristics of a leaf-spine topology are discussed in more detail in subsequent sections.
The traditional three-tier data center architecture is still prevalent in environments where traffic throughput requirements between the
networking layers can be satisfied through high-density platforms at the aggregation layer. For certain use cases like co-location data
centers, where customer traffic is restricted to racks or managed areas within the data center, a three-tier architecture may be more
suitable. Similarly, enterprises hosting nonvirtualized and single-tiered applications may find the three-tier data center architecture more
suitable.
Brocade's advanced scale-out architectures allow data centers to be built at very high scales of ports and racks. Advanced scale-out
architectures using an optimized 5-stage Clos topology are described later in more detail.
A consequence of server virtualization enabling physical servers to host several virtual machines (VMs) is that the scale requirement for
the control and data planes for networking parameters like MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables
has multiplied. Also, these virtualized servers must support much higher throughput than in a traditional enterprise environment, leading
to an evolution in Ethernet standards of 10 Gigabit Ethernet (10 GbE), 25 GbE, 40 GbE, 50 GbE, and 100 GbE.
In order to support application continuity and infrastructure high availability, it is commonly required that the underlying networking
infrastructure be extended within and across one or more data center sites. Extension of Layer 2 domains is a specific requirement in
many cases. Examples of this include virtual machine mobility across the infrastructure for high availability; resource load balancing and
fault tolerance needs; and creation of application-level clustering, which commonly relies on shared broadcast domains for clustering
operations like cluster node discovery and many-to-many communication. The need to extend tenant Layer 2 and Layer 3 domains
while still supporting a common infrastructure Layer 3 environment across the infrastructure and also across sites is creating new
challenges for network architects and administrators.
The remainder of this solution design guide describes data center networking architectures that meet the requirements identified above
for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. This guide
focuses on the design considerations and choices for building a data center site using Brocade platforms and technologies. Refer to the
Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide for a discussion on multitenant
infrastructures and overlay networking that builds on the architectural concepts defined here.
This section discusses the building blocks that are used to build the network and network virtualization architectures for a data center site.
These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to build fairly
independent elements that can be assembled as needed, depending on the scale requirements of the networking infrastructure.
VDX 6740
The Brocade VDX 6740 series of switches provides the advanced feature set that data centers require while delivering the high
performance and low latency that virtualized environments demand. Together with Brocade data center fabrics, these switches transform
data center networks to support the New IP by enabling cloud-based architectures that deliver new levels of scale, agility, and operational
efficiency. These highly automated, software-driven, and programmable data center fabric design solutions support a breadth of network
virtualization options and scale for data center environments ranging from tens to thousands of servers. Moreover, they make it easy for
organizations to architect, automate, and integrate current and future data center technologies while they transition to a cloud model that
addresses their needs, on their timetable and on their terms. The Brocade VDX 6740 Switch offers 48 10-Gigabit-Ethernet (GbE) Small
Form Factor Pluggable Plus (SFP+) ports and 4 40-GbE Quad SFP+ (QSFP+) ports in a 1U form factor. Each 40-GbE QSFP+ port can
be broken out into four independent 10-GbE SFP+ ports, providing an additional 16 10-GbE SFP+ ports, which can be licensed with
Ports on Demand (PoD).
VDX 6940
The Brocade VDX 6940-36Q is a fixed 40-Gigabit-Ethernet (GbE)-optimized switch in a 1U form factor. It offers 36 40-GbE QSFP+
ports and can be deployed as a spine or leaf switch. Each 40-GbE port can be broken out into four independent 10-GbE SFP+ ports,
providing a total of 144 10-GbE SFP+ ports. Deployed as a spine, it provides options to connect 40-GbE or 10-GbE uplinks from leaf
switches. By deploying this high-density, compact switch, data center administrators can reduce their TCO through savings on power,
space, and cooling. In a leaf deployment, 10-GbE and 40-GbE ports can be mixed, offering flexible design options to cost-effectively
support demanding data center and service provider environments. As with other Brocade VDX platforms, the Brocade VDX 6940-36Q
offers a Ports on Demand (PoD) licensing model. The Brocade VDX 6940-36Q is available with 24 ports or 36 ports. The 24-port
model offers a lower entry point for organizations that want to start small and grow their networks over time. By installing a software
license, organizations can upgrade their 24-port switch to the maximum 36-port switch. The Brocade VDX 6940-144S Switch is
10 GbE optimized with 40-GbE or 100-GbE uplinks in a 2U form factor. It offers 96 native 1/10-GbE SFP/SFP+ ports and 12
40-GbE QSFP+ ports, or 4 100-GbE QSFP28 ports.
VDX 8770
The Brocade VDX 8770 switch is designed to scale and support complex environments with dense virtualization and dynamic traffic
patterns, where more automation is required for operational scalability. The 100-GbE-ready Brocade VDX 8770 dramatically increases
the scale that can be achieved in Brocade data center fabrics, with 10-GbE and 40-GbE wire-speed switching, numerous line card
options, and the ability to connect over 8,000 server ports in a single switching domain. Available in 4-slot and 8-slot versions, the
Brocade VDX 8770 is a highly scalable, low-latency modular switch that supports the most demanding data center networks.
SLX 9850
The Brocade SLX 9850 Router is designed to deliver the cost-effective density, scale, and performance needed to address the
ongoing explosion of network bandwidth, devices, and services today and in the future. This flexible platform powered by Brocade
SLX-OS provides carrier-class advanced features leveraging proven Brocade routing technology that is used in the most demanding
data center, service provider, and enterprise networks today and is delivered on best-in-class forwarding hardware. The extensible
architecture of the Brocade SLX 9850 is designed for investment protection to readily support future needs for greater bandwidth, scale,
and forwarding capabilities.
Additionally, the Brocade SLX 9850 helps address the increasing agility and analytics needs of digital businesses with network
automation and network visibility innovation supported through the Brocade Workflow Composer and the Brocade SLX Insight
Architecture.
Networking Endpoints
The next building blocks are the networking endpoints to connect to the networking infrastructure. These endpoints include the compute
servers and storage devices, as well as network service appliances such as firewalls and load balancers.
Figure 13 shows the different types of racks used in a data center infrastructure:
• Infrastructure and management racks: These racks host the management infrastructure, which includes any management
appliances or software used to manage the infrastructure. Examples are server virtualization management software like
VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network
controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade
Network Advisor. Infrastructure racks also host IP physical or virtual storage appliances.
• Compute racks: Compute racks host the workloads for the data centers. These workloads can be physical servers, or they can
be virtualized servers when the workload is made up of virtual machines (VMs). The compute endpoints can be single-homed
or multihomed to the network.
• Edge racks: Network services like perimeter firewalls, load balancers, and NAT devices connected to the network are
consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or virtual
machines.
These definitions of infrastructure/management, compute, and edge racks are used throughout this solution design guide.
Single-Tier Topology
The second building block is a single-tier network topology to connect endpoints to the network. Because of the existence of only one
tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 14. The single-tier switches
are shown as a virtual Link Aggregation Group (vLAG) pair. However, the single-tier switches can also be part of a Multi-Chassis Trunking
(MCT) pair. The Brocade VDX supports vLAG pairs, whereas the Brocade SLX 9850 supports MCT.
The topology in Figure 14 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating
in multiswitch port channeling. This pair of switches is called a vLAG pair.
The single-tier topology scales the least among all the topologies described in this guide, but it provides the best choice for smaller
deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It
also reduces the optics and cabling costs for the networking infrastructure.
Design Considerations
The design considerations for deploying a single-tier topology are summarized in this section.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at the vLAG pair/MCT should be well understood and planned for.
The north-south oversubscription at the vLAG pair/MCT is described as the ratio of the aggregate bandwidth of all downlinks from the
vLAG pair/MCT that are connected to the endpoints to the aggregate bandwidth of all uplinks that are connected to the data center
core/WAN edge router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the
endpoints versus the traffic entering and exiting the single-tier topology.
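As a rough sketch of this calculation (the port counts below are illustrative examples, not values from this guide), the north-south oversubscription of a vLAG pair can be computed directly from the aggregate downlink and uplink bandwidths:

```python
from fractions import Fraction

def oversubscription(downlink_gbps, uplink_gbps):
    """Return the north-south oversubscription as a reduced ratio string."""
    r = Fraction(downlink_gbps, uplink_gbps)
    return f"{r.numerator}:{r.denominator}"

# Hypothetical vLAG pair: 48 x 10-GbE downlinks to endpoints and
# 4 x 40-GbE uplinks to the data center core/WAN edge routers.
print(oversubscription(48 * 10, 4 * 40))  # 3:1
```

A 1:1 result would indicate a nonblocking design; larger first terms mean the uplinks are proportionally more contended.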
It is also important to understand the bandwidth requirements for inter-rack traffic. This is especially true for all north-south
communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair/MCT to the edge racks, and if
the traffic needs to exit, it flows back through the vLAG/MCT switches. Thus, the ratio of the aggregate bandwidth connecting the
compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.
Another consideration is the bandwidth of the link that interconnects the vLAG pair/MCT. In the case of multihomed endpoints and no
failures, this link should not be used for data-plane forwarding. However, if there are link failures in the network, this link may be used for
data-plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design
to tolerate up to two 10-GbE link failures requires a 20-GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.
Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX and
SLX Series platforms support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks (25-GbE
interfaces will be supported in the future with the Brocade SLX 9850). The choice of the platform for the vLAG pair/MCT depends on
the interface speed and density requirements.
Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Any future
expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this
requires additional ports in the vLAG switches.
Other key considerations are whether to connect the vLAG/MCT pair to external networks through data center core/WAN edge routers
and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are
described in a later section of this guide.
The leaf-spine topology is adapted from traditional Clos telecommunications networks. This topology is also known as the "3-stage
folded Clos," with the ingress and egress stages proposed in the original Clos architecture folding together at the spine to form the leafs.
The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage
devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint,
physical or virtual. As all endpoints connect only to the leafs, policy enforcement, including security, traffic path selection, Quality of
Service (QoS) markings, traffic scheduling, policing, shaping, and traffic redirection, is implemented at the leafs. The Brocade VDX 6740
and 6940 families of switches are used as leaf switches.
The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. As most policy
implementation is performed at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for
traffic forwarding between the leafs. Brocade VDX or SLX platform families are used as the spine switches depending on the scale and
feature requirements.
• The leaf-spine topology provides a basis for a scale-out architecture. New leafs can be added to the network without affecting
the provisioned east-west capacity for the existing infrastructure. New spines and new uplink ports on the leafs can be
provisioned to increase the capacity of the leaf-spine fabric.
• The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions
and reducing architectural and deployment complexities.
• The leaf-spine topology provides granular control over oversubscription ratios for traffic flowing within a rack, between racks,
and outside the leaf-spine topology.
Design Considerations
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at each layer should be well understood and planned for.
For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined
as uplink ports. The north-south oversubscription ratio at the leafs is the ratio of the aggregate bandwidth for the downlink ports and the
aggregate bandwidth for the uplink ports.
For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the
spine switch. For a given pair of leaf switches connecting to the spine switch, the east-west oversubscription ratio at the spine is the ratio
of the aggregate bandwidth of the uplinks of the first switch and the aggregate bandwidth of the uplinks of the second switch. In a
majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to the
nonblocking east-west oversubscriptions should be well understood and depend on the traffic patterns of the endpoints that are
connected to the respective leafs.
The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch
and the traffic bandwidth between endpoints connected to different leaf switches. For example, if the north-south oversubscription ratio is
3:1 at the leafs and 1:1 at the spines, then the bandwidth of traffic between endpoints connected to the same leaf switch should be three
times the bandwidth between endpoints connected to different leafs. From a network endpoint perspective, the network
oversubscriptions should be planned so that the endpoints connected to the network have the required bandwidth for communications.
Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair
when endpoints are multihomed).
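A worked example of these two ratios, using hypothetical port counts rather than values from this guide, might look like this:

```python
# Hypothetical leaf: 48 x 10-GbE downlinks to endpoints and 6 x 40-GbE
# uplinks, one to each of 6 spines.
leaf_down = 48 * 10   # 480 Gbps aggregate downlink bandwidth
leaf_up = 6 * 40      # 240 Gbps aggregate uplink bandwidth
print(f"leaf north-south oversubscription: {leaf_down / leaf_up}:1")  # 2.0:1

# East-west at a spine for a pair of leafs: each leaf lands one 40-GbE
# uplink on this spine, so the ratio is 40:40, i.e. nonblocking (1:1).
spine_ew = (1 * 40) / (1 * 40)
print(f"spine east-west oversubscription: {spine_ew}:1")  # 1.0:1
```

In this sketch, two endpoints on the same leaf can exchange up to twice the bandwidth available to endpoints on different leafs, which is why bandwidth-heavy endpoint pairs should be localized to a leaf or leaf pair.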
The ratio of the aggregate bandwidth of all spine downlinks connected to the leafs and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the
border leaf switches and that exit the data center site.
The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the
number of redundant/ECMP paths between the leafs, and the port density in the spine switches. Higher throughput in the uplinks from
the leaf switches to the spine switches can be achieved by increasing the number of spine switches or bundling the uplinks together in
port-channel interfaces between the leafs and the spines.
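The trade-off between spine count, ECMP path count, and per-leaf uplink capacity can be sketched as follows (the spine counts and link speeds are illustrative assumptions):

```python
def leaf_uplink_capacity(num_spines, uplinks_per_spine, link_gbps):
    """ECMP path count and aggregate uplink bandwidth for one leaf switch."""
    paths = num_spines * uplinks_per_spine
    return paths, paths * link_gbps

# Scaling out from 4 to 6 spines, with one 40-GbE uplink per leaf per spine:
for spines in (4, 6):
    paths, gbps = leaf_uplink_capacity(spines, 1, 40)
    print(f"{spines} spines -> {paths} ECMP paths, {gbps} Gbps per leaf")
```

Adding spines grows both redundancy (more ECMP paths) and capacity, whereas bundling more uplinks per spine into port channels grows capacity without adding spine devices.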
Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between
existing leaf and spine switches. In either case, the port density requirements for the leaf and the spine switches should be accounted for
during the network design process.
If new leaf switches need to be added to accommodate new endpoints in the network, ports at the spine switches are required to connect
the new leaf switches.
In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches or whether to
add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in
another section of this guide.
Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links.
If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2
Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS Fabric technology. With
Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point for
management, a distributed control plane, embedded automation, and multipathing capabilities from Layer 1 to Layer 3. The benefits of
deploying a VCS fabric are described later in this design guide.
If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3
Clos deployment. You can deploy Brocade VDX and SLX platforms in a Layer 3 deployment by using Brocade IP fabric technology.
Brocade VDX switches can be deployed in spine and leaf Places in the Network (PINs), whereas the Brocade SLX 9850 can be
deployed in the spine PIN. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking
infrastructure. The benefits of Brocade IP fabrics are described later in this guide.
The connection between the spines and the super-spines follows the Clos principles:
• Each spine connects to all super-spines in the network.
• Neither the spines nor the super-spines are interconnected with each other.
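These two rules fully determine the wiring of the spine/super-spine stage, so the inter-tier link count follows directly; the PoD and switch counts below are illustrative assumptions:

```python
def spine_superspine_links(pods, spines_per_pod, super_spines):
    """Each spine connects to every super-spine, and neither tier is
    interconnected internally, so these are the only inter-tier links."""
    return pods * spines_per_pod * super_spines

# Hypothetical site: 4 PoDs of 4 spines each, with 4 super-spines.
print(spine_superspine_links(4, 4, 4))  # 64 links
```

This multiplicative growth is why super-spine port density is a key sizing input when planning the number of PoDs a site can support.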
Similarly, all the benefits of a leaf-spine topology (namely, multiple redundant paths, ECMP, scale-out architecture, and control over
traffic patterns) are realized in the optimized 5-stage folded Clos topology as well.
With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including
firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or
to scale down by removing existing PoDs, without affecting the existing infrastructure, providing elasticity in scale and isolation of failure
domains.
Brocade VDX switches are used for the leaf PIN, whereas depending on scale and features being deployed, either Brocade VDX or SLX
platforms can be deployed at the spine and super-spine PINs.
This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is
described later in this guide.
Design Considerations
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth,
and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos
topology as well. Some key considerations are highlighted below.
Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the
spine switches dictate the ratio of aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth
of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement,
application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines,
endpoints should be placed to optimize traffic within a data center PoD.
At the super-spine switch, the east-west oversubscription defines the ratio of the bandwidth of the downlink connections for a pair of data
center PoDs. In most cases, this ratio is 1:1.
The ratio of the aggregate bandwidth of all super-spine downlinks connected to the spines and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the
border leaf switches and exiting the data center site.
Deployment Model
The Layer 3 gateways for the endpoints connecting to the networking infrastructure can be at the leaf, at the spine, or at the super-spine.
With Brocade IP fabric architecture (described later in this guide), the Layer 3 gateways are present at the leaf layer. So the links between
the leafs, spines, and super-spines are Layer 3.
With the Brocade multi-fabric topology using VCS fabric architecture (described later in this guide), there is a choice of placing the Layer
3 gateway at the spine layer or at the super-spine layer. In either case, the links between the leafs and spines are Layer 2 links. If the
Layer 3 gateway is at the spine layer, the links between the spine and super-spine are Layer 3; otherwise, those links are Layer 2 as well.
These Layer 2 links are IEEE 802.1Q VLAN-based, optionally running over Link Aggregation Control Protocol (LACP) aggregated links.
These architectures are described later in this guide.
The topology for interconnecting the border switches depends on the number of network services that need to be attached and the
oversubscription ratio at the border switches. Figure 18 shows a simple topology for border switches, where the service endpoints
connect directly to the border switches. Border switches in this simple topology are referred to as "border leaf switches" because the
service endpoints connect to them directly.
If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The
border switches and the edge racks together form the edge services PoD.
Brocade VDX switches are used for the border leaf PIN. The border leaf switches can also participate in a vLAG pair. This allows the edge
service appliances and servers to dual-home into the border leaf switches for redundancy and higher throughput.
Design Considerations
The following sections describe the design considerations for border switches.
Oversubscription Ratios
The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the 3-tier topology. They
also have uplink connections to the data center core/WAN edge routers as described in the next section.
The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines to the aggregate bandwidth of the uplinks connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site.
The north-south oversubscription ratio for the services connected to the border leafs is another consideration. Because many of the services connected to the border leafs may have public interfaces that face external entities like core/edge routers and internal interfaces that face the internal network, the north-south oversubscription for each of these connections is an important design consideration.
The handoff between the border leafs and the data center core/WAN edge can provide domain isolation for the control- and data-plane protocols running in the internal network, whether that network is built using a one-tier, two-tier, or three-tier topology. This helps provide independent administrative, fault-isolation, and control-plane domains for isolation, scale, and security between the different domains of a data center site.
FIGURE 19 Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaf in the Data
Center Site
Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to
48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection
of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabrics. This
ensures that there are no loops in the fabrics, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are
blocked. Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology.
The following are some of the key benefits of Brocade VCS Fabric technology:
Layer 2 multitenancy: Virtual fabrics support separate Layer 2 domains within the fabric, while isolating overlapping IEEE-802.1Q-based tenant networks into separate Layer 2 domains.
Layer 3 multitenancy: Using Virtual Routing and Forwarding (VRF) instances, multi-VRF routing protocols, and BGP-EVPN enables large-scale Layer 3 multitenancy.
Ecosystem integration and virtualization features: Brocade VCS Fabric technology integrates with leading industry solutions and products like OpenStack; VMware products like vSphere, NSX, and vRealize; common infrastructure programming tools like Python; and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port Profiles (AMPP), which automatically adjusts port-profile information as a VM moves from one server to another.
Advanced storage features: Brocade VDX switches provide rich storage protocols and features like Fibre Channel over Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and Auto-NAS (Network Attached Storage), among others, to enable advanced storage networking.
The benefits and features listed above simplify Layer 2 Clos deployment using Brocade VDX switches and Brocade VCS Fabric technology.
The next section describes data center site designs that use Layer 2 Clos built with Brocade VCS Fabric technology.
FIGURE 20 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Spine Switches
Figure 21 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology,
border leaf switches are added along with the edge services PoD for external connectivity and hosting edge services.
FIGURE 21 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Leaf Switches
The border leafs in the edge services PoD are built using a separate VCS fabric. The border leafs are connected to the spine switches in
the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on
the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one
edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center
core/WAN edge routers.
As an alternative to the topology shown in Figure 21, the border leaf switches in the edge services PoD and the data center PoD can be
part of the same VCS fabric, to extend the fabric benefits to the entire data center site. This model is shown in Brocade VCS Fabric on
page 51.
The data center PoDs shown in Figure 20 and Figure 21 are built using Brocade VCS Fabric technology. With Brocade VCS Fabric technology, we recommend interconnecting the spines with each other (not shown in the figures) to ensure the best traffic path during failure scenarios.
Scale
Table 1 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX platforms at the leaf and spine Places
in the Network (PINs) in a Brocade VCS fabric.
TABLE 1 Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology
Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | VCS Fabric Size (Number of Switches) | 10-GbE Port Count
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS
Fabrics
If multiple VCS fabrics are needed at a data center site, the optimized 5-stage Clos topology is used to increase scale by interconnecting
the data center PoDs built using leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as
a multi-fabric topology using VCS fabrics.
In a multi-fabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS
Fabric technology. Note that we recommend that the spines be interconnected in a data center PoD built using Brocade VCS Fabric
technology.
A new super-spine tier is used to interconnect the spine switches in the data center PoD. In addition, the border leaf switches are also
connected to the super-spine switches. There are two deployment options available to build multi-fabric topology using VCS fabrics.
In the first deployment option, the links between the spine and super-spine are Layer 2. In order to achieve a loop-free environment and
avoid loop-prevention protocols between the spine and super-spine tiers, the super-spine devices participate in a VCS fabric as well. The
connections between the spine and the super-spines are bundled together in (dual-sided) vLAGs to create a loop-free topology. The
standard VLAN range of 1 to 4094 can be extended between the DC PoDs using IEEE 802.1Q tags over the dual-sided vLAGs. This is
illustrated in Figure 22.
FIGURE 22 Multi-Fabric Topology with VCS Technology, with L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Super-Spine
In this topology, the super-spines connect directly into the data center core/WAN edge, which provides external connectivity to the
network. Alternately, Figure 23 shows the border leafs connecting directly to the data center core/WAN edge. In this topology, if the
Layer 3 boundary is at the super-spine, the links between the super-spine and the border leafs carry Layer 3 traffic as well.
FIGURE 23 Multi-Fabric Topology with VCS Technology, with L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Border Leafs
In the second deployment option, the links between the spine and super-spine are Layer 3. In cases where the Layer 3 gateways for the VLANs in the VCS fabrics are at the spine layer, this model provides routing between the data center PoDs. Because the links are Layer 3, a loop-free topology is achieved. Here the Brocade SLX 9850 is an option for the super-spine PIN. This is illustrated in Figure 24.
FIGURE 24 Multi-Fabric Topology with VCS Technology, with L3 Links Between Spine and Super-Spine
If Layer 2 extension is required between the DC PoDs, Virtual Fabric Extension (VF-Extension) technology can be used. With VF-
Extension, the spine switches (VDX 6740 and VDX 6940 only) can be configured as VXLAN Tunnel Endpoints (VTEPs). Subsequently,
the VXLAN protocol can be used to extend the Layer 2 VLANs as well as the virtual fabrics between the VCS fabrics of the DC PoDs.
This is described in more detail in the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide.
Figure 23 and Figure 24 show only one edge services PoD, but there can be multiple such PoDs depending on the edge service
endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Table 2 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos built with Brocade VCS fabrics. The following assumptions are made:
Links between the leafs and the spines are 40 GbE. Links between the spines and super-spines are also 40 GbE.
The Brocade VDX 6740 platforms use four 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) Four spines are used to connect the uplinks.
The Brocade VDX 6940-144S platforms use twelve 40-GbE uplinks. Twelve spines are used to connect the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. A larger port scale can be realized with a higher oversubscription ratio at the spines.
However, a 1:1 oversubscription ratio is used here and is also recommended.
One spine plane is used for the scale calculations. This means that all spine switches in each data center PoD connect to all
super-spine switches in the topology. This topology is consistent with the optimized 5-stage Clos topology.
The Brocade VDX 8770 platforms use 27×40-GbE line cards in performance mode (18 usable 40-GbE ports per line card) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 40-GbE ports in performance mode.
The link between the spines and the super-spines is assumed to be Layer 3, and 32-way Layer 3 ECMP is utilized for spine to
super-spine connections. This gives a maximum of 32 super-spines for the multi-fabric topology using Brocade VCS Fabric
technology. Refer to the release notes for your platform to check the ECMP support scale.
NOTE
For a larger port scale for the multi-fabric topology using Brocade VCS Fabric technology, multiple spine planes are used.
Architectures with multiple spine planes are described later.
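These assumptions can be sanity-checked with a short calculation. The sketch below reproduces two rows of Table 2, assuming 48 10-GbE ports per VDX 6740-class leaf (an assumption consistent with that platform family; verify against your platform's data sheet):

```python
def pod_10gbe_ports(leafs_per_pod: int, pods: int, ports_per_leaf: int) -> int:
    """Total 10-GbE ports = leafs per PoD x number of PoDs x 10-GbE ports per leaf."""
    return leafs_per_pod * pods * ports_per_leaf

# Table 2, VDX 6740 leaf rows, assuming 48 x 10-GbE ports per leaf:
print(pod_10gbe_ports(32, 18, 48))  # 27648 (VDX 8770-4 super-spine row)
print(pod_10gbe_ports(18, 36, 48))  # 31104 (VDX 8770-8 super-spine row)
```

The spine and super-spine counts do not enter this product directly; they constrain how many PoDs and leafs the topology can support.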
TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology
Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spines | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 32 | 18 | 27,648
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 32 | 6 | 18,432
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 18 | 36 | 31,104
VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 18 | 12 | 20,736
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 32 | 36 | 55,296
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 32 | 12 | 36,864
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 18 | 60 | 51,840
VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 18 | 20 | 34,560
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-4 | 3:1 | 32 | 4 | 32 | 60 | 92,160
VDX 6940-144S | VDX 8770-4 | SLX 9850-4 | 2:1 | 32 | 12 | 32 | 20 | 61,440
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 18 | 120 | 103,680
VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 18 | 40 | 69,120
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-8 | 3:1 | 32 | 4 | 32 | 120 | 184,320
VDX 6940-144S | VDX 8770-4 | SLX 9850-8 | 2:1 | 32 | 12 | 32 | 40 | 122,880
Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With Brocade IP fabric, all links in the Clos
topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, turnkey
automation features used to provision, validate, remediate, troubleshoot, and monitor the networking infrastructure, and the hardware
differentiation with Brocade VDX and SLX platforms. The following sections describe these aspects of building data center sites with
Brocade IP fabrics.
Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very
high solution scale, and standards-based interoperability are leveraged.
The following are some of the key benefits of deploying a data center site with Brocade IP fabrics:
Highly scalable infrastructure: Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high. These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies.
Standards-based and interoperable protocols: Brocade IP fabric is built using industry-standard protocols like the Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid foundation for a highly scalable solution. In addition, industry-standard overlay control- and data-plane protocols like BGP-EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and extend tenancy domains by enabling Layer 2 communications and VM mobility.
Active-active vLAG pairs: By supporting vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported. This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the endpoints. vLAG pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can participate in a vLAG.
Support for unnumbered interfaces: Using Brocade Network OS support for IP unnumbered interfaces available in Brocade VDX switches, only one IP address per switch is required to configure the routing protocol peering. This significantly reduces the planning and use of IP addresses, and it simplifies operations.
Turnkey automation: Brocade automated provisioning dramatically reduces the deployment time of network devices and network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with minimal effort.
Programmable automation: Brocade server-based automation provides support for common industry automation tools such as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique requirements to meet technical or business objectives when the organization is ready.
Ecosystem integration: The Brocade IP fabric integrates with leading industry solutions and products like VMware vCenter, NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN Controller support.
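The address savings from IP unnumbered interfaces noted above can be quantified with a quick sketch. The PoD dimensions below are hypothetical; the point is that conventional numbered point-to-point links consume two addresses per link, while the unnumbered approach needs only one loopback address per switch:

```python
def numbered_addresses(p2p_links: int) -> int:
    """Conventional /31 or /30 addressing: two addresses per point-to-point link."""
    return 2 * p2p_links

def unnumbered_addresses(switches: int) -> int:
    """IP unnumbered: one loopback address per switch, reused on every link."""
    return switches

# Hypothetical leaf-spine PoD: 32 leafs fully meshed to 4 spines.
links = 32 * 4       # 128 point-to-point links
switches = 32 + 4    # 36 switches

print(numbered_addresses(links))        # 256 addresses to plan and assign
print(unnumbered_addresses(switches))   # 36 addresses
```

The gap widens as the mesh grows, since link count scales with the product of leaf and spine counts while switch count scales with their sum.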
FIGURE 25 An IP Fabric Data Center PoD Built with Leaf-Spine Topology and vLAG Pairs for Dual-Homed Network Endpoint
The Brocade VDX switches in a vLAG pair have a link between them for control-plane purposes to create and manage the multiswitch
port-channel interfaces. When network virtualization with BGP EVPN is used, these links also carry switched traffic in case of downlink
failures or single-homed endpoints. Oversubscription of the inter-switch link (ISL) is an important consideration for these scenarios.
Figure 26 shows a data center site deployed using a leaf-spine topology and an edge services PoD. Here the network endpoints are
illustrated as single-homed, but dual homing is enabled through vLAG pairs where required.
FIGURE 26 Data Center Site Built with Leaf-Spine Topology and an Edge Services PoD
The links between the leafs, spines, and border leafs are all Layer 3 links. The border leafs are connected to the spine switches in the data
center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaf to the data center core/WAN edge can
be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN
edge routers.
There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for
connecting to the data center core/WAN edge routers.
Scale
Table 3 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 40-GbE links between leafs and spines.
NOTE
For a larger port scale in Brocade IP fabrics in a 3-stage folded Clos, the Brocade VDX 8770-4 or 8770-8 can be used as a
leaf switch.
TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs
and Spines
Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
VDX 6940-144S | SLX 9850-8 | 2:1 | 480 | 12 | 492 | 46,080
Table 4 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 100-GbE links between leafs and spines.
TABLE 4 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 100-GbE Links Between Leafs
and Spines
Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If a higher scale is required, the optimized 5-stage folded Clos topology is used to interconnect the data center PoDs built using a
Layer 3 leaf-spine topology. An example topology is shown in Figure 27.
FIGURE 27 Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and IP Fabric PoDs
Figure 27 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint
requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Figure 28 shows a variation of the optimized 5-stage Clos. This variation includes multiple super-spine planes. Each spine in a data
center PoD connects to a separate super-spine plane.
The number of super-spine planes is equal to the number of spines in the data center PoDs. The number of uplink ports on the spine
switch is equal to the number of switches in a super-spine plane. Also, the number of data center PoDs is equal to the port density of the
super-spine switches. Introducing super-spine planes to the optimized 5-stage Clos topology greatly increases the number of data
center PoDs that can be supported. For the purposes of port scale calculations of the Brocade IP fabric in this section, the optimized
5-stage Clos with multiple super-spine plane topology is considered.
Table 5 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 40-GbE
interfaces between leafs, spines, and super-spines. The following assumptions are made:
Links between the leafs and the spines are 40 GbE. Links between spines and super-spines are also 40 GbE.
The Brocade VDX 6740 platforms use 4 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, the Brocade VDX 6740T, and the Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity
on Demand license to upgrade to 10GBASE-T ports.) Four spines are used for connecting the uplinks.
The Brocade VDX 6940-144S platforms use 12 40-GbE uplinks. Twelve spines are used for connecting the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal to the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.
The Brocade VDX 8770 platforms use 27×40-GbE line cards in performance mode (18 usable 40-GbE ports per line card) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 40-GbE ports in performance mode.
32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
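Under these assumptions, the relationships between planes, spines, and PoDs can be checked numerically. The sketch below reproduces the first row of Table 5, again assuming 48 10-GbE ports per VDX 6740-class leaf:

```python
# First row of Table 5 (VDX 6740 leafs, SLX 9850-4 spines, SLX 9850-8 super-spines).
spines_per_pod = 4                    # one spine per 40-GbE uplink on the leaf
super_spine_planes = spines_per_pod   # each spine position maps to one plane
super_spines_per_plane = 32           # capped by 32-way Layer 3 ECMP at the spine
pods = 480                            # bounded by super-spine 40-GbE port density
leafs_per_pod = 32
ports_per_leaf = 48                   # assumed 10-GbE port count per VDX 6740 leaf

total_10gbe = leafs_per_pod * pods * ports_per_leaf
print(super_spine_planes, total_10gbe)  # 4 737280
```

Note how the super-spine plane count falls out of the spine count per PoD, while the PoD count is bounded by super-spine port density rather than by ECMP.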
TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP
Fabric with 40 GbE Between Leaf, Spine, and Super-Spine
Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 32 | 4 | 4 | 32 | 480 | 737,280
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 32 | 12 | 12 | 32 | 480 | 1,474,560
Table 6 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 100-GbE
interfaces between the leafs, spines, and super-spines. The following assumptions are made:
Links between the leafs and the spines are 100 GbE. Links between spines and super-spines are also 100 GbE.
The Brocade VDX 6940-144S platforms use 4 100-GbE uplinks. Four spines are used for connecting the uplinks.
The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal to the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.
32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
TABLE 6 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP
Fabric with 100 GbE Between Leaf, Spine, and Super-Spine
Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
Even higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to enforce the maximum ECMP scale supported by the platform. This provides a higher port scale for the topology while still ensuring that the maximum ECMP scale is used. Note that this arrangement provides a nonblocking 1:1 north-south subscription at the spine in most scenarios.
In Table 7, the scale for a 5-stage folded Clos with 40-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP
policies are used to enforce the ECMP maximum scale.
TABLE 7 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 40 GbE Between Leafs, Spines, and Super-Spines
Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
In Table 8, the scale for a 5-stage folded Clos with 100-GbE interfaces between leaf, spine, and super spine is shown assuming that
BGP policies are used to enforce the ECMP maximum scale.
TABLE 8 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 100 GbE Between Leafs, Spines, and Super-Spines
Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
FIGURE 29 Data Center Site Built Using VCS Fabric and IP Fabric Pods
In this topology, the links between the spines, super-spines, and border leafs are Layer 3. This provides a consistent interface between
the data center PoDs and enables full communication between endpoints in any PoD.
Figure 30 shows an alternate topology where a leaf appliance (vLAG pair) is used to connect the spines of a VCS fabric with the rest of the IP fabric. This topology should be used to interconnect VCS and IP fabric-based topologies when network virtualization with BGP EVPN is used. In Figure 30, if BGP EVPN is used, its domain spans all Layer 3 links. If a common Layer 3 gateway is required to enable virtual machine mobility and application continuity, then the Layer 3 boundary for the VCS fabric endpoints should be at the leaf appliance using Brocade static anycast gateway technology.
FIGURE 31 Data Center Site Built with Optimized 5-Stage Clos Topologies Interconnected with a Data Center Core
Note that the border leafs or leaf switches from each of the Clos deployments connect into the data center core routers. The handoff
from the border leafs/leafs to the data center core router can be Layer 2 and/or Layer 3, with overlay protocols like VXLAN and BGP-
EVPN, depending on the requirements. The border leafs in this topology are optional and are not needed unless there are specific
requirements like connecting to edge services specific to a Clos topology or creating a network control-plane domain for administration
and scale purposes.
The number of Clos topologies that can be connected to the data center core depends on the port density and throughput of the data
center core devices. Each deployment connecting into the data center core can be a single-tier, leaf-spine, or optimized 5-stage Clos
design deployed using an IP fabric architecture or a multi-fabric topology using VCS fabrics.
Also shown in Figure 31 is a centralized edge services PoD that provides network services for the entire site. There can be one or more
edge services PoDs with the border leafs in the edge services PoD, providing the handoff to the data center core. The WAN edge routers
also connect to the edge services PoDs and provide connectivity to the external network.
The maximum size of the network deployment depends on the scale of the control-plane protocols, as well as the scale of hardware
Application-Specific Integrated Circuit (ASIC) tables.
The maximum scale of the VCS fabric deployment is a function of the number of nodes, topology of the nodes, link reliability, distance
between the nodes, features deployed in the fabric, and the scale of the deployed features. A maximum of 48 nodes are supported in a
VCS fabric.
In a Brocade IP fabric, the control plane is based on routing protocols like BGP and OSPF. In addition, a control plane is provided for
formation of vLAG pairs. In the case of virtualization with VXLAN overlays, BGP-EVPN provides the control plane. The maximum scale
of the topology also depends on the scalability of these protocols.
For both Brocade VCS fabrics and IP fabrics, it is important to understand the hardware table scale and the related control-plane scales.
These tables include:
MAC address table
Host route tables/Address Resolution Protocol/Neighbor Discovery (ARP/ND) tables
Longest Prefix Match (LPM) tables for IP prefix matching
Next-hop table
Ternary Content Addressable Memory (TCAM) tables for packet matching
These tables are programmed into the switching ASICs based on the information learned through configuration, the data plane, or the
control-plane protocols. This also means that it is important to consider the control-plane scale for carrying information for these tables
when determining the maximum size of the network deployment.
Control-Plane Architectures
The Layer 3 boundary and default gateway for all networking endpoints are also hosted in the vLAG pair/MCT. Gateway redundancy protocols like VRRP-E and Fabric Virtual Gateway, provided in Brocade Network OS on VDX switches, deliver active-active gateway redundancy for the endpoints. Similarly, VRRP-E in SLX-OS on the Brocade SLX 9850 provides active-active gateway redundancy for the endpoints. Optionally, the Layer 3 boundary can be pushed to the data center core/WAN edge.
Because the border leaf functionality is also folded into the vLAG pair, the handoff to the data center core/WAN edge devices would be Layer 2 and/or Layer 3 with a combination of overlay protocols like BGP-EVPN and VXLAN.
With Brocade VCS, Layer 2 domains are automatically extended across the fabric. Layer 2 domain separation for edge ports of the fabric can be performed using VLANs or virtual fabrics, which provide a larger set of up to 8,000 Layer 2 domains. With virtual fabrics, it is possible to support overlapping VLAN IDs between multiple tenants of the network on the same switch and also across the VCS fabric.
In Figure 34, the Layer 3 boundary for all networking endpoints is shown to be in the spine. The spine devices participate in active-active
gateway redundancy using VRRP-E or Fabric Virtual Gateway. Multitenancy can be achieved at Layer 3 using Virtual Routing and
Forwarding (VRF) instances at the Layer 3 boundary. The links between the spines and the data center core/WAN edge are Layer 3 in
this case. If VRFs are used, VRF-Lite extends them to the data center core/WAN edge. Optionally, the Layer 3 boundary can be hosted
at the leaf instead.
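The key property of Layer 3 multitenancy with VRFs is routing-table isolation: each VRF holds an independent table, so tenants can reuse overlapping prefixes on the same device. The following minimal Python sketch illustrates that behavior; the VRF names, prefixes, and interface names are invented for illustration and are not Brocade configuration.

```python
# Each VRF keeps an independent routing table, so two tenants may use the
# same prefix on one switch without conflict. Names are illustrative only.
vrf_tables = {
    "tenant-red":  {"10.0.0.0/24": "ve 100"},
    "tenant-blue": {"10.0.0.0/24": "ve 200"},  # same prefix, different tenant
}

def lookup(vrf: str, prefix: str) -> str:
    """Resolve a prefix within a single tenant's VRF only."""
    return vrf_tables[vrf][prefix]

print(lookup("tenant-red", "10.0.0.0/24"))   # ve 100
print(lookup("tenant-blue", "10.0.0.0/24"))  # ve 200
```

VRF-Lite extends this separation across a link by carrying each VRF on its own interface or subinterface toward the data center core/WAN edge.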
In Figure 35, the Layer 3 boundary for all networking endpoints is shown to be at the WAN edge/data center core. A port channel
connects the spines to the data center core/WAN edge devices. Multitenancy can be achieved at Layer 3 using Virtual Routing and
Forwarding (VRF) instances at the Layer 3 boundary.
FIGURE 35 Brocade VCS Fabric with Layer 3 Boundary Outside the Fabric
In Figure 36, the Layer 3 boundary for all networking endpoints is shown to be at the border leaf switches. Here the border leafs are
shown as part of the VCS fabric. However, they can be a separate VCS fabric as well, in which case, the connections between the spine
and the border leaf use a Layer 2 dual-side vLAG (see Figure 11). Multitenancy can be achieved at Layer 3 using Virtual Routing and
Forwarding (VRF) instances at the Layer 3 boundary. Alternatively, the Layer 3 boundary can be at the spine; in that case, the links
between the spine and the border leaf switches also carry Layer 3 traffic, so routing must be enabled on them.
FIGURE 36 Brocade VCS Fabric with Layer 3 Boundary at the Border Leaf
In Figure 37, because the Layer 2 domains are extended to the super-spine, the Layer 3 boundary for the VLANs is at the super-spine
layer. Using VRRP-E and FVG technologies, all super-spine switches can participate in active forwarding for the gateway IP addresses
for the VLANs extended up to the super-spines. Multitenancy can be achieved at Layer 3 using Virtual Routing and Forwarding (VRF)
instances at the Layer 3 boundary.
FIGURE 37 Control Plane for Multi-Fabric Topology Using VCS Technology with Layer 3 Boundary at the Super-Spines
The links between the super-spines and the data center core/WAN edge are Layer 3 links. These links can carry VRF traffic using VRF-Lite.
Figure 38 shows each VCS fabric in the DC PoDs with the Layer 3 boundary for the VLANs and virtual fabrics at the spine layer. The
connections between the spines and the super-spines are Layer 3. Multitenancy can be achieved at Layer 3 using VRF instances at the
Layer 3 boundary. Routing protocols like BGP and OSPF are used to provide the routing between the datacenter PoDs. If VRFs are
being used, VRF-Lite can be used to extend the VRF domains.
FIGURE 38 Control Plane for Multi-Fabric Topology Using VCS Technology with Layer 3 Between Spines and Super-Spines
Brocade IP Fabric
With Brocade IP fabric, routing protocols between the networking tiers are required to exchange reachability information. Figure 39
shows a leaf-spine topology deployed using Brocade IP fabric with the protocol options of iBGP, eBGP, and OSPF. The routing protocol
designs are described in the next section.
The Layer 3 boundary in a Brocade IP fabric is always at the leafs. For multihomed network endpoints, gateway redundancy protocols
such as VRRP-E provide active-active forwarding on the vLAG pair. When BGP-EVPN is used, Brocade Static Anycast Gateway can
provide active-active forwarding on the vLAG pair instead.
For simplicity, the entire site is deployed with the same routing protocol. All individual datacenter PoDs participate in the same routing
domain. However, different routing protocols per datacenter PoD can be deployed with redistribution into the routing protocol between
the super-spine and the spine. Route aggregation and policy enforcement can be implemented at the leaf and spine layers to reduce the
forwarding table sizes and influence the traffic patterns.
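A common way to organize the eBGP variant of this design is to give each leaf (or vLAG pair) its own private AS number while all spines share one. The sketch below shows such a numbering plan and how the session count grows; the specific ASN values and fabric sizes are assumptions for illustration, not a Brocade-mandated scheme.

```python
# Illustrative private-AS numbering for an eBGP leaf-spine IP fabric.
# ASN values are assumptions drawn from the 2-byte private range 64512-65534.
SPINE_ASN = 64512          # all spines share one AS (a common Clos design)
LEAF_ASN_BASE = 64601      # one private AS per leaf or vLAG pair

def leaf_asn(leaf_index: int) -> int:
    """Return the private ASN assigned to a given leaf/vLAG pair."""
    return LEAF_ASN_BASE + leaf_index

def ebgp_sessions(num_leaves: int, num_spines: int) -> int:
    """Each leaf peers with every spine, so sessions grow as leaves x spines."""
    return num_leaves * num_spines

print(leaf_asn(0))           # 64601
print(ebgp_sessions(16, 4))  # 64 sessions fabric-wide
```

Keeping one AS per leaf makes eBGP loop prevention automatic within the fabric, which is also why the private AS numbers must be stripped at the WAN edge before routes leave the site.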
If a Layer 3 handoff to the data center core/WAN edge is needed, eBGP is used for the peering. In this design:
The WAN edge devices remove the private AS numbers when advertising the routes for the data center site.
The WAN edge devices summarize the routes for the data center site before advertising them to the other sites.
The WAN edge devices originate aggregate routes and/or a default route into the eBGP domain of the data center site.
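The summarization step can be pictured with Python's standard `ipaddress` module: collapsing the site's contiguous leaf subnets into a minimal covering aggregate mirrors what a BGP aggregate route achieves at the WAN edge. The prefixes are invented for illustration.

```python
# Sketch of WAN-edge route summarization: contiguous per-leaf subnets
# collapse into one covering aggregate before advertisement to other sites.
import ipaddress

site_prefixes = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# collapse_addresses merges adjacent prefixes into the minimal covering set.
aggregates = list(ipaddress.collapse_addresses(site_prefixes))
print(aggregates)  # [IPv4Network('10.1.0.0/22')]
```

Advertising only the aggregate keeps the per-leaf churn inside the site and shrinks the LPM tables of remote sites.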
Because of the ongoing and rapidly evolving transition toward the cloud and the need across IT to quickly improve operational agility and
efficiency, the best choice is an architecture based on Brocade data center fabrics. However, the process of choosing an architecture that
best meets your needs today while leaving you flexibility to change can be paralyzing.
Brocade recognizes how difficult it is for customers to make long-term technology and infrastructure investments, knowing they will have
to live with those choices for years. For this reason, Brocade provides solutions that help you build cloud-optimized networks with
confidence, knowing that your investments have value today and will continue to have value well into the future.
In addition, the deployment scale also depends on the control plane and on the hardware tables of the platform. Table 10 provides an
example of the scale considerations for parameters in a leaf-spine topology with Brocade VCS fabric and IP fabric deployments. The
table illustrates how scale requirements for the parameters vary between a VCS fabric and an IP fabric for the same environment.
TABLE 10 Scale Considerations for Brocade VCS Fabric and IP Fabric Deployments
Brocade VCS Fabric | Brocade IP Fabric | Brocade IP Fabric with BGP-EVPN-Based VXLAN
Fabric Architecture
Another way to determine which Brocade data center fabric provides the best solution for your needs is to compare the architectures
side-by-side.
Figure 43 provides a side-by-side comparison of the two Brocade data center fabric architectures. The blue text shows how each
Brocade data center fabric is implemented. For example, a VCS fabric is topology-agnostic and uses TRILL as its transport mechanism,
whereas the topology for an IP fabric is a Clos that uses IP for transport.
It is important to note that the same Brocade VDX platform, Brocade Network OS software, and licenses are used for either deployment.
So, when you are making long-term infrastructure purchase decisions, be reassured to know that you need only one switching platform.
Recommendations
Of course, each organization's choices are based on its unique requirements, culture, and business and technical objectives. Yet by and
large, the scalability and seamless server mobility of a Layer 2 scale-out VCS fabric provide the ideal starting point for most enterprises
and cloud providers. Like IP fabrics, VCS fabrics provide open interfaces and software extensibility, should you decide to extend the
already capable and proven embedded automation of Brocade VCS Fabric technology.
For organizations looking for a Layer 3 optimized scale-out approach, Brocade IP fabric is the best architecture to deploy. And if
controller-less network virtualization using Internet-proven technologies such as BGP-EVPN is the goal, Brocade IP fabric is the best
underlay.
Brocade architectures also provide the flexibility of combining both of these deployment topologies in an optimized 5-stage Clos
architecture, as illustrated in Figure 29 on page 45. This provides the flexibility to choose a different deployment model per data center
PoD.
Most importantly, if you find your infrastructure technology investment decisions challenging, you can be confident that an investment in
the Brocade VDX switch platform will continue to prove its value over time. With the versatility of the Brocade VDX platform and its
support for both Brocade data center fabric architectures, your infrastructure needs will be fully met today and into the future.