
Cisco Nexus 7000 Hardware Architecture

BRKARC-3470

Tim Stevenson
Distinguished Engineer, Technical Marketing
Session Abstract
This session presents an in-depth study of the architecture of the Nexus 7000
data center switch. Topics include the latest generations of supervisor, fabric, I/O
modules, forwarding engine, and physical design elements, as well as a
discussion of key hardware-enabled features that combine to implement
high-performance data center network services.

Session Goal
 To provide a thorough understanding of the Cisco Nexus™ 7000 switching
architecture, supervisor, fabric, and I/O module design, packet flows, and key
forwarding engine functions
 This session will examine only the latest additions to the Nexus 7000 platform
 This session will not examine NX-OS software architecture or other Nexus
platform architectures

What Is Nexus 7000?
Data-center class Ethernet switch designed to deliver high performance, high
availability, system scale, and investment protection
 Supervisor Engines
 I/O Modules
 Chassis
 Fabrics
Agenda
 Chassis Architecture
 Supervisor Engine and I/O Module Architecture
 Forwarding Engine Architecture
 Fabric Architecture
 I/O Module Queuing
 Layer 2 Forwarding
 IP Forwarding
 Classification
 NetFlow
 Conclusion

Nexus 7000 Chassis Family
 Nexus 7010 (N7K-C7010) – 21RU – NX-OS 4.1(2) and later
 Nexus 7018 (N7K-C7018) – 25RU – NX-OS 4.1(2) and later
 Nexus 7009 (N7K-C7009) – 14RU – NX-OS 5.2(1) and later
 Nexus 7004 (N7K-C7004) – 7RU – NX-OS 6.1(2) and later
Nexus 7004 Chassis
Supported in NX-OS release 6.1(2) and later
 4-slot chassis – 2 payload slots, 2 supervisor slots
 No fabric modules – I/O modules connect back-to-back
 Side-to-back airflow
 4 X 3000W power supplies (AC or DC)
 All FRUs accessed from chassis front
 Supports Sup2 / 2E only
 Supports M1-XL, M2, F2, F2E modules
– No support for M1 non-XL, F1 modules
Key Chassis Components
 Common components:
– Supervisor Engines
– I/O Modules
– Power Supplies (except 7004)
 Chassis-specific components:
– Fabric Modules
– Fan Trays

Supervisor Engine 2 / 2E
 Next-generation supervisors providing control plane and management functions

Supervisor Engine 2  | Base performance | One quad-core 2.1GHz CPU with 12GB DRAM
Supervisor Engine 2E | High performance | Two quad-core 2.1GHz CPUs with 32GB DRAM

 Connects to fabric via 1G inband interface
 Interfaces with I/O modules via 1G switched EOBC
 Second-generation dedicated central arbiter ASIC
– Controls access to fabric bandwidth via dedicated arbitration path to I/O modules

[Front panel, N7K-SUP2/N7K-SUP2E: ID LED, status LEDs, console port, management Ethernet port, USB host ports, USB log flash, USB expansion flash]
Nexus 7000 I/O Module Families
M Series and F Series

 M Series – L2/L3/L4 with large forwarding tables and rich feature set
– N7K-M148GT-11L, N7K-M148GS-11L, N7K-M132XP-12L, N7K-M108X2-12L, N7K-M224XP-23L, N7K-M206FQ-23L, N7K-M202CF-22L
 F Series – High performance, low latency, low power with streamlined feature set
– N7K-F132XP-15, N7K-F248XP-25, N7K-F248XP-25E, N7K-F248XT-25E
24-Port 10G M2 I/O Module (N7K-M224XP-23L)
Supported in NX-OS release 6.1(1) and later
 24-port 10G with SFP+ transceivers
 240G full-duplex fabric connectivity
 Two integrated forwarding engines (120Mpps)
– Support for “XL” forwarding tables (licensed feature)
 Supports Nexus 2000 (FEX) connections
 Distributed L3 multicast replication
 802.1AE LinkSec on all ports
24-Port 10G M2 I/O Module Architecture (N7K-M224XP-23L)
[Block diagram: LC CPU on the EOBC; Fabric 2 ASIC and arbitration aggregator connect to the fabric modules and central arbiters; two forwarding engines with VOQs; four replication engines; two 12 X 10G MAC / LinkSec ASICs serving front-panel ports 1–24]
Reference: ASIC Functions – M2 Modules
 12 X 10G MAC / LinkSec – Provides port ASIC functions, including buffering/queuing, and performs
802.1AE encryption/decryption for 12 front-panel 10G ports
 Replication Engine – Bridge between front panel port, forwarding engine, and fabric; performs multicast
and SPAN replication
 Forwarding Engine – Performs all Layer 2, Layer 3, and Layer 4 forwarding decisions and policy
enforcement
 VOQs – Interface to central arbiter and local crossbar fabric, implements Virtual Output Queuing
 Arbitration Aggregator – Muxes arbitration requests from VOQs before sending to central arbiter on
Supervisor Engine
 Fabric 2 – Local fabric that provides first/third stage of three-stage crossbar
 (LC CPU – Linecard CPU, runs module-specific NX-OS processes and interfaces with Supervisor Engine
over EOBC)

6-Port 40G M2 I/O Module (N7K-M206FQ-23L)
Supported in NX-OS release 6.1(1) and later
 6-port 40G with QSFP+ transceivers
– Option to breakout to 4X10G interfaces per 40G port*
 240G full-duplex fabric connectivity
 Two integrated forwarding engines (120Mpps)
– Support for “XL” forwarding tables (licensed feature)
 Distributed L3 multicast replication
 802.1AE LinkSec on all ports

* Roadmap feature
6-Port 40G M2 I/O Module Architecture (N7K-M206FQ-23L)
[Block diagram: same structure as the 24-port 10G M2 module, with two 3 X 40G MAC / LinkSec ASICs serving front-panel ports 1–6]
40G Transceivers – QSFP+
 40GBASE-SR4 (QSFP-40G-SR4) supported in 6.1(1)
– 12-fiber MPO/MTP connector
– 100m over OM3 MMF, 150m over OM4 MMF
 40GBASE-LR4 (QSFP-40G-LR4) supported in 6.1(4)
– SC connector
– 10km over SMF

[Photo: 40G MPO/MTP optical connector and interior of 12-strand ribbon fiber cable – one row of 12 fibers, 4 middle fibers unused]
2-Port 100G M2 I/O Module (N7K-M202CF-22L)
Supported in NX-OS release 6.1(1) and later
 2-port 100G with CFP transceivers
– Option to breakout to 2X40G or 10X10G interfaces per 100G port*
 200G full-duplex fabric connectivity
 Two integrated forwarding engines (120Mpps)
– Support for “XL” forwarding tables (licensed feature)
 Distributed L3 multicast replication
 802.1AE LinkSec on all ports

* Roadmap feature
2-Port 100G M2 I/O Module Architecture (N7K-M202CF-22L)
[Block diagram: same structure as the other M2 modules, with two 1 X 100G MAC / LinkSec ASICs serving front-panel ports 1–2]
100G Module Transceivers – 40G and 100G CFP
 100GBASE-LR4 (CFP-100G-LR4) supported from 6.1(1)
– SC connector
– 10km over SMF
 CFP-100G-SR10 supported from 6.1(3)
– 24-fiber MPO/MTP connector
– 100m over OM3 MMF, 150m over OM4 MMF
 40GBASE-SR4 (CFP-40G-SR4) supported from 6.1(2)
– 12-fiber MPO/MTP connector
– 100m over MMF
 40GBASE-LR4 (CFP-40G-LR4) supported from 6.1(2)
– SC connector
– 10km over SMF
48-Port 1G/10G F2 I/O Module (N7K-F248XP-25)
Supported in NX-OS release 6.0(1) and later
 48-port 1G/10G with SFP/SFP+ transceivers
 480G full-duplex fabric connectivity
 System-on-chip (SoC)* forwarding engine design
– 12 independent SoC ASICs
 Layer 2/Layer 3 forwarding with L3/L4 services (ACL/QoS)
 Supports Nexus 2000 (FEX) connections
 FabricPath-capable
 FCoE-capable (with Supervisor 2 / 2E)

* also called “switch-on-chip”
48-Port 1G/10G F2E I/O Modules (N7K-F248XP-25E / N7K-F248XT-25E)
Supported in NX-OS release 6.1(2) and later
 Enhanced version of original F2 I/O module
 Fiber (N7K-F248XP-25E) and copper (N7K-F248XT-25E) versions
 480G full-duplex fabric connectivity
 Same basic SoC architecture as original F2 with some additional functionality
What’s Different in F2E?
 Interoperability with M1/M2, in Layer 2 mode*
– Proxy routing for inter-VLAN/L3 traffic
 LinkSec support*
– Fiber version: 8 ports
– Copper version: 48 ports
 Energy Efficient Ethernet (EEE) capability on F2E copper version

* Roadmap feature
48-Port 1G/10G F2 / F2E I/O Module Architecture (N7K-F248XP-25 / N7K-F248XP-25E / N7K-F248XT-25E)
[Block diagram: LC CPU on the EOBC; arbitration aggregator to the central arbiters; Fabric 2 ASIC to the fabric modules; twelve 4 X 10G SoCs serving front-panel ports 1–48. On the F2E fiber version 8 of the ports are LinkSec-capable; on the F2E copper version all 48 ports are LinkSec-capable]
Reference: ASIC Functions – F2/F2E Modules
 4 X 10G SoC – Four-port 10G system-on-chip used on F2 modules; provides
Port ASIC, Replication Engine, Forwarding Engine, and VOQ functions
 Arbitration Aggregator – Muxes arbitration requests from SoCs before sending
to central arbiter on Supervisor Engine
 Fabric 2 – Local fabric that provides first/third stage of three-stage crossbar
 (LC CPU – Linecard CPU, runs module-specific NX-OS processes and
interfaces with Supervisor Engine over EOBC)

F2-Only VDC
 F2/F2E modules do not interoperate with other Nexus 7000 modules*
 Must deploy in an “F2 only” VDC
 Can be default VDC, or any other VDC
– Use the limit-resource module-type f2 VDC configuration command
 System with only F2 modules and empty configuration boots with F2-only default VDC automatically
 M1/M2/F1 modules can exist in same chassis as F2/F2E modules, but not in the same VDC
– Communication between F2-only VDC and M1/M2/F1 VDC must be through external connection

* F2E will interoperate in Layer 2 mode with M1/M2 in a future software release
M-Series Forwarding Engine Hardware
 Two hardware forwarding engines integrated on every M2 I/O module
 120Mpps (60Mpps per forwarding engine) Layer 2 bridging with hardware MAC learning
 120Mpps (60Mpps per forwarding engine) Layer 3 IPv4 unicast
 60Mpps (30Mpps per forwarding engine) Layer 3 IPv6 unicast
 Layer 3 IPv4 and IPv6 multicast support (SM, SSM, bidir)
 MPLS
 OTV
 RACL/VACL/PACL
 QoS remarking and policing policies
 Policy-based routing (PBR)
 Unicast RPF check and IP source guard
 IGMP snooping
 Ingress and egress NetFlow (full and sampled)

Hardware Table                | Without Scale License | With Scale License
MAC Address Table             | 128K                  | 128K
FIB TCAM                      | 128K IPv4 / 64K IPv6  | 900K IPv4 / 350K IPv6
Classification TCAM (ACL/QoS) | 64K                   | 128K
NetFlow Table                 | 512K                  | 512K
M-Series Forwarding Engine Architecture
The forwarding engine daughter card combines an L2 Engine and an L3 Engine with ingress and egress lookup pipelines:
 Ingress pipeline – ingress parser; ingress MAC table and IGMP snooping lookups (L2 Engine); ingress ACL/QoS classification (CL TCAM); ingress NetFlow collection; FIB TCAM and adjacency table lookups for Layer 3 forwarding, with ECMP hashing and multicast RPF check; ingress policing
 Egress pipeline – egress ACL/QoS classification; egress NetFlow collection; egress policing; egress MAC lookups and egress IGMP snooping lookups (post-L3)
 Final results, including the forwarding destination and port-channel hash result, return to the replication engines; packet headers arrive from and return to the I/O module replication engines
F2/F2E Forwarding Engine Hardware
 Each SoC forwarding engine services 4 front-panel 10G ports (12 SoCs per module)
 60Mpps per forwarding engine Layer 2 bridging with hardware MAC learning
 60Mpps per forwarding engine Layer 3 IPv4/IPv6 unicast
 Layer 3 IPv4 and IPv6 multicast support (SM, SSM)
 RACL/VACL/PACL
 QoS remarking and policing policies
 Policy-based routing (PBR)
 Unicast RPF check and IP source guard
 IGMP snooping
 FabricPath forwarding
 FCoE
 Ingress sampled NetFlow

Hardware Table                | Per F2 SoC           | Per F2 Module
MAC Address Table             | 16K                  | 192K*
FIB TCAM                      | 32K IPv4 / 16K IPv6  | 32K IPv4 / 16K IPv6
Classification TCAM (ACL/QoS) | 16K                  | 192K*
* Assumes specific configuration to scale SoC resources
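Since each SoC owns a fixed group of four front-panel ports, the port-to-SoC mapping is simple arithmetic. A minimal Python sketch, assuming consecutive groups of four ports per SoC as the port layout diagram suggests (the function name is illustrative, not an NX-OS internal):

```python
def soc_for_port(port: int) -> int:
    """Return the 0-based SoC index serving a 1-based F2/F2E front-panel
    port, assuming ports 1-4 on SoC 0, 5-8 on SoC 1, ..., 45-48 on SoC 11."""
    if not 1 <= port <= 48:
        raise ValueError("F2/F2E modules have front-panel ports 1-48")
    return (port - 1) // 4

# Ports sharing a SoC also share its per-SoC MAC and TCAM table capacity.
assert soc_for_port(1) == 0 and soc_for_port(48) == 11
```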

F2/F2E Forwarding Engine
[Diagram of one 4 X 10G SoC: four 1G/10G-capable interface MACs feed an ingress parser; the decision engine makes ingress and egress forwarding decisions (pre- and post-L3 L2 lookups, FIB/ADJ Layer 3 lookup, CL ACL/QoS) against on-chip forwarding tables; the ingress buffer implements virtual output queues (VOQ) toward the central arbiter and fabric; the egress buffer receives frames from the fabric and drains through the egress parser to the four front-panel ports]
Crossbar Switch Fabric Modules
(N7K-C7009-FAB-2, N7K-C7010-FAB-1/FAB-2, N7K-C7018-FAB-1/FAB-2)
 Provide interconnection of I/O modules in Nexus 7009 / 7010 / 7018 chassis
 Each installed fabric increases available per-payload slot bandwidth
 Two fabric generations available – Fabric 1 and Fabric 2

Fabric Module | Supported Chassis  | Supported I/O Modules | Per-fabric module bandwidth | Total bandwidth with 5 fabric modules
Fabric 1      | 7010 / 7018        | All                   | 46Gbps per slot             | 230Gbps per slot
Fabric 2      | 7009 / 7010 / 7018 | All                   | 110Gbps per slot            | 550Gbps per slot

 Different I/O modules leverage different amounts of fabric bandwidth
 Access to fabric bandwidth controlled using QoS-aware central arbitration with VOQ
Multistage Crossbar
Nexus 7000 implements a 3-stage crossbar switch fabric:
 Stages 1 and 3 on I/O modules (ingress and egress local Fabric ASICs)
 Stage 2 on fabric modules (up to 5 Fabric ASICs)
 2 x 23Gbps channels (46G per Fab1) –or– 2 x 55Gbps channels (110G per Fab2) per slot, per fabric module
 Up to 230Gbps (Fab1) –or– up to 550Gbps (Fab2) per I/O module with 5 fabric modules
I/O Module Capacity – Fabric 1
Each installed Fabric 1 module adds 46Gbps of per-slot bandwidth (46 / 92 / 138 / 184 / 230Gbps with one through five fabric modules):
 One fabric: any port can pass traffic to any other port in VDC
 Two fabrics: 80G M1 module (80G local fabric) has full bandwidth
 Five fabrics: 240G M2 module (240G local Fabric 2) limited to 230G per slot
 Five fabrics: 480G F2/F2E module (480G local Fabric 2) limited to 230G per slot
I/O Module Capacity – Fabric 2
Fabric channels run at the lowest common speed – Fab2 does NOT make Fab1-based modules faster!!
Each installed Fabric 2 module adds 110Gbps of per-slot bandwidth (110 / 220 / 330 / 440 / 550Gbps with one through five fabric modules):
 One fabric: any port can pass traffic to any other port in VDC
 Two fabrics: 80G M1 module has full bandwidth
 Three fabrics: 240G M2 module has maximum bandwidth
 Five fabrics: 480G F2/F2E module has maximum bandwidth
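The per-slot numbers on these two slides follow from simple arithmetic: usable bandwidth is the lesser of the module's local fabric capacity and the aggregate bandwidth of the installed fabric modules. A rough Python model of that idea (the min() formula is an illustrative simplification, not a Cisco-published formula):

```python
# Per-slot bandwidth per fabric module, from the slides above (Gbps).
PER_FABRIC_GBPS = {"fab1": 46, "fab2": 110}

def usable_slot_bandwidth(module_gbps: int, fabric: str, num_fabrics: int) -> int:
    """Usable per-slot bandwidth: the lesser of the module's local fabric
    capacity and the aggregate bandwidth of up to 5 fabric modules."""
    fabric_gbps = PER_FABRIC_GBPS[fabric] * min(num_fabrics, 5)
    return min(module_gbps, fabric_gbps)

print(usable_slot_bandwidth(480, "fab1", 5))  # 230 -> F2 limited by Fabric 1
print(usable_slot_bandwidth(240, "fab2", 3))  # 240 -> M2 maxed at 3 fabrics
print(usable_slot_bandwidth(480, "fab2", 5))  # 480 -> F2 maxed at 5 fabrics
```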

What About Nexus 7004?
 Nexus 7004 has no fabric modules
 I/O modules have local fabric with 10 available fabric channels
– I/O modules connect “back-to-back” via 8 fabric channels
– Two fabric channels “borrowed” to connect supervisor engines
 Available inter-module bandwidth depends on installed module types:
– M1 modules: 8 * 23G local fabric channels interconnect the I/O modules (184G), with 2 * 23G channels to the supervisors
– F2/F2E/M2 modules: 8 * 55G local fabric channels interconnect the I/O modules (440G), with 2 * 55G channels to the supervisors
Fabric, VOQ, and Arbitration
 Fabric, VOQ, and arbitration combine to provide all necessary infrastructure for packet transport inside switch
 Crossbar fabric – Provides dedicated, high-bandwidth interconnects between ingress and egress I/O modules
 Virtual Output Queues (VOQs) – Provide buffering and queuing for ingress-buffered switch architecture
 Central arbitration – Controls scheduling of traffic into fabric based on fairness, priority, and bandwidth availability at egress ports
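To make the arbitration idea concrete, here is a toy credit-based model in Python: the arbiter tracks free egress buffer credits per VQI and only grants fabric access when a credit is available. This is a deliberately simplified sketch; the real central arbiter also handles priority classes and fairness:

```python
class CentralArbiter:
    """Toy credit-based arbiter: one credit per free egress VQI buffer."""

    def __init__(self, vqi_credits):
        self.credits = dict(vqi_credits)   # VQI -> free egress buffers

    def request(self, vqi):
        # An ingress VOQ asks permission to send one frame toward this VQI.
        if self.credits[vqi] > 0:
            self.credits[vqi] -= 1         # grant consumes a credit
            return True
        return False                       # frame stays buffered at ingress

    def return_credit(self, vqi):
        # Egress module returns the credit after the frame drains to the port.
        self.credits[vqi] += 1

arbiter = CentralArbiter({1: 2})
print(arbiter.request(1), arbiter.request(1), arbiter.request(1))
# True True False -- third sender waits until a credit is returned
```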

Buffering, Queuing, and Scheduling
 Buffering – storing packets in memory
– Needed to absorb bursts, manage congestion
 Queuing – buffering packets according to traffic class
– Provides dedicated buffer for packets of different priority
 Scheduling – controlling the order of transmission of buffered packets
– Ensures preferential treatment for packets of higher priority and fair treatment for packets of equal priority
 Nexus 7000 uses queuing policies and network-QoS policies to define buffering, queuing, and scheduling behavior
 Default queuing and network-QoS policies always in effect in absence of any user configuration
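The scheduling stages on these modules repeatedly use deficit weighted round robin (DWRR). A compact illustrative DWRR implementation in Python (queue names and weights are made up for the example):

```python
def dwrr(queues, weights, quantum=1500):
    """Yield (class, packet_length) in weighted proportion across classes.
    queues: {name: list of packet lengths}; weights: {name: relative weight}."""
    deficit = {name: 0 for name in queues}
    while any(queues.values()):
        for name, q in queues.items():
            if not q:
                continue
            deficit[name] += quantum * weights[name]   # bank this round's quantum
            while q and q[0] <= deficit[name]:         # send while credit remains
                pkt = q.pop(0)
                deficit[name] -= pkt
                yield name, pkt
            if not q:
                deficit[name] = 0                      # idle queues bank no credit

classes = {"priority": [300, 300], "default": [1500, 1500, 1500]}
for cls, length in dwrr(classes, {"priority": 2, "default": 1}):
    print(cls, length)   # higher-weight class drains proportionally faster
```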

I/O Module Buffering Models
 Buffering model varies by I/O module family
– M-series modules: hybrid model combining ingress VOQ-buffered architecture with
egress port-buffered architecture
– F-series modules: pure ingress VOQ-buffered architecture

M2 – Hybrid Ingress/Egress Buffered Model
(Diagram represents half of each I/O module)
M2 modules combine four buffering stages:
 Ingress port buffer (buffering / queuing / scheduling) – Manages congestion of ingress forwarding/replication engines, and congestion toward egress destinations (VQIs)
 Ingress VOQ buffer (buffering / queuing) – Manages congestion toward egress destinations (VQIs); sources drain via DWRR, with strict priority (SP) for priority traffic
 Egress VOQ buffer (scheduling) – Receives frames from fabric; VQIs drain via DWRR with strict priority
 Egress port buffer (buffering / queuing / scheduling) – Manages congestion at the egress physical interface, with SP and DWRR scheduling per port
FAQ: What Is a VQI?
 VQI – Virtual Queuing Index
 “A Destination Across the Fabric”
 In M2 / F2 / F2E modules, VQI == 10G interface
 For high-bandwidth ports (M2 40/100G), use multiple 10G VQIs

F2/F2E – Ingress Buffered Model
(Diagram represents one SoC on each I/O module)
 Ingress VOQ buffer (buffering / queuing) – Manages congestion toward egress destinations (VQIs); per-port hi/lo queues feed the VOQ buffer
 Egress VOQ buffer (scheduling) – Receives frames from fabric; each VQI drains to its front-panel port via DWRR plus a priority queue (PQ)
Hardware Layer 2 Forwarding Process
Layer 2 forwarding – traffic steering based on destination MAC address
 MAC table lookup drives Layer 2 forwarding
 Source MAC and destination MAC lookups performed for each frame, based on
{VLAN,MAC} pairs
 Source MAC lookup drives new learns and refreshes aging timers
 Destination MAC lookup dictates outgoing switchport

M2 L2 Packet Flow – 10G
(HDR = Packet Headers, DATA = Packet Data, CTRL = Internal Signaling)

Ingress module:
1. Receive packet from wire on e1/1; LinkSec decryption; ingress port QoS
2. Static replication engine (RE) uplink selection; submit packet headers to forwarding engine for lookup
3. L2 SMAC/DMAC lookups; port-channel hash result; ACL/QoS/NetFlow lookups
4. Return result – destination + hash result
5. VOQ arbitration and queuing; credit grant for fabric access from central arbiter on Supervisor Engine
6. Hash-based uplink selection to local Fabric 2 ASIC; round-robin transmit to fabric modules

Egress module:
7. Receive from fabric; return buffer credit; round-robin transmit to VQI; return credit to pool
8. Static replication engine downlink selection; egress port QoS; LinkSec encryption; transmit packet on wire on e2/2
M2 L2 Packet Flow – 40G/100G
Same flow as the 10G case, with two ingress-side differences:
 Replication engine uplink selection is hash-based rather than static
 The forwarding engine result drives hash-based uplink and VQI selection, since a 40G/100G port spans multiple VQIs (see following slides)
The port ASICs are 3 X 40G or 1 X 100G MAC / LinkSec rather than 12 X 10G.
Replication Engine Selection on Ingress – 40G / 100G M2 Module
 Hash result generated by Port ASIC selects replication engine uplink
 Hash input uses Layer 3 + Layer 4 information
[Diagram: each Port ASIC hashes different flows across the uplinks of the four replication engines, spreading flows over the available uplinks]
VQI Selection on Ingress – M2 Modules
 Combination of destination port and hash result generated by forwarding engine selects destination VQI for 40G or 100G
 Hash input same as port-channel hash (Layer 3 + Layer 4 by default on M2 modules)
 VQI load-sharing table maps {destination, hash} to one VQI – e.g., 40G port e2/1 owns VQIs #1–#4 and 40G port e2/2 owns VQIs #5–#8; hash results 0–15 cycle through the destination port's four VQIs, so a lookup result of destination e2/2 with hash = 2 sends to VQI #7
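A hypothetical rendering of this selection logic in Python: the destination port owns a set of VQIs and a flow hash picks one, which is also why any single 5-tuple flow stays on one ~10G VQI (next slide). The hash function here is a stand-in for the ASIC's:

```python
import zlib

# Toy load-sharing table: each 40G egress port owns four VQIs.
VQIS_FOR_PORT = {"e2/1": [1, 2, 3, 4], "e2/2": [5, 6, 7, 8]}

def select_vqi(egress_port, sip, dip, sport, dport, proto):
    """Pick the destination VQI from the egress port and an L3+L4 flow hash."""
    key = f"{sip}|{dip}|{sport}|{dport}|{proto}".encode()
    hash_result = zlib.crc32(key) % 16        # index into 16-entry table
    vqis = VQIS_FOR_PORT[egress_port]
    return vqis[hash_result % len(vqis)]      # table cycles through the VQIs

# Every packet of this 5-tuple flow lands on the same VQI toward e2/2.
print(select_vqi("e2/2", "10.1.1.1", "10.2.2.2", 33992, 80, "tcp"))
```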

40G and 100G Flow Limits – Internal versus “On the Wire”

Internal to Nexus 7000 system:
 Each Virtual Queuing Index (VQI) sustains ~10G traffic flow
 All packets in a given 5-tuple flow hash to a single VQI, so the single-flow limit is ~10G
 10G interfaces use 1 VQI, 40G interfaces use 4 VQIs, 100G interfaces use 10 VQIs

On the wire (40G):
 Packets split into 66-bit “code words” (64/66B encoding)
 Four code words transmitted in parallel, one on each physical Tx fiber
 No per-flow limit imposed – splitting occurs at physical layer
F2 / F2E L2 Packet Flow
(HDR = Packet Headers, DATA = Packet Data, CTRL = Internal Signaling)

Ingress SoC:
1. Receive packet from wire on e1/1; ingress port QoS (VOQ)
2. Submit packet headers to decision engine for lookup – ingress L2 SMAC/DMAC lookups, ACL/QoS lookups, NetFlow sampling
3. Return result – destination
4. VOQ arbitration; credit grant for fabric access from central arbiter; transmit to fabric

Egress SoC:
5. Receive from fabric via local Fabric ASIC
6. Egress port QoS (scheduling); return buffer credit; return credit to pool; transmit packet on wire on e2/2
IP Forwarding
 Nexus 7000 decouples control plane and data plane
 Forwarding tables built on control plane using routing protocols or static
configuration
–OSPF, EIGRP, IS-IS, RIP, BGP for dynamic routing
 Tables downloaded to forwarding engine hardware for data plane forwarding
–FIB TCAM contains IP prefixes
–Adjacency table contains next-hop information

Hardware IP Forwarding Process
 FIB TCAM lookup based on longest-match destination prefix comparison
 FIB “hit” returns adjacency, adjacency contains rewrite information (next-hop)
 Pipelined forwarding engine architecture also performs ACL, QoS, and NetFlow
lookups, affecting final forwarding result

IPv4 FIB TCAM Lookup
1. Generate TCAM lookup key from the destination IP address of the ingress unicast IPv4 packet header (e.g., 10.1.1.10)
2. Compare the lookup key against FIB TCAM entries; the longest-match entry hits (e.g., 10.1.1.xx)
3. Hit in FIB TCAM returns the result in FIB DRAM – an adjacency index and the number of next-hops
4. A load-sharing hash of the flow data, modulo the number of next-hops, selects the offset within the adjacency block identified by the index
5. The selected adjacency table entry returns the next-hop (interface, MAC) used to forward the packet
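A Python sketch of this lookup sequence: longest-prefix match returns an adjacency index plus next-hop count, and a load-sharing hash modulo the next-hop count selects the adjacency entry. The tables and hash below are toy stand-ins for the FIB TCAM/DRAM and adjacency table:

```python
import ipaddress
import zlib

# prefix -> (adjacency index, number of next-hops), as in the FIB DRAM result
FIB = {
    "10.1.1.0/24": (0, 2),     # two ECMP next-hops
    "10.10.0.0/16": (2, 1),
}
ADJ = ["if1/1 aa:aa", "if1/2 bb:bb", "if2/1 cc:cc"]   # adjacency table

def fib_lookup(dip, flow_key):
    dst = ipaddress.ip_address(dip)
    best = max(
        (n for n in map(ipaddress.ip_network, FIB) if dst in n),
        key=lambda n: n.prefixlen,     # longest match wins, as in the TCAM
        default=None,
    )
    if best is None:
        return None                    # FIB miss
    index, nhops = FIB[str(best)]
    offset = zlib.crc32(flow_key.encode()) % nhops    # ECMP load-sharing hash
    return ADJ[index + offset]

print(fib_lookup("10.1.1.10", "10.1.1.10|10.9.9.9|tcp|80"))
```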
M2 L3 Packet Flow
(HDR = Packet Headers, DATA = Packet Data, CTRL = Internal Signaling)
Same overall flow as the M2 L2 case, with routing performed in the single forwarding engine pass on the ingress module:
1. Receive packet from wire on e1/1; LinkSec decryption; ingress port QoS
2. Static or hash-based replication engine uplink selection; submit packet headers for lookup
3. L2 ingress and egress SMAC/DMAC lookups; L3 FIB/ADJ lookup; ingress and egress ACL/QoS/NetFlow lookups; port-channel hash result
4. Return result – destination + hash result
5. VOQ arbitration and queuing; credit grant for fabric access; hash-based uplink (and VQI) selection; round-robin transmit to fabric
6. Egress module: receive from fabric; return buffer credit; round-robin transmit to VQI; return credit to pool
7. Static RE downlink selection; egress port QoS; LinkSec encryption; transmit packet on wire on e2/2
F2 / F2E L3 Packet Flow
(HDR = Packet Headers, DATA = Packet Data, CTRL = Internal Signaling)
Same overall flow as the F2/F2E L2 case, with all routing lookups made in the ingress SoC decision engine:
1. Receive packet from wire on e1/1; ingress port QoS (VOQ)
2. Submit packet headers for lookup – L2 ingress and egress SMAC/DMAC lookups, L3 FIB/ADJ lookup, ingress and egress ACL/QoS lookups, NetFlow sampling
3. Return result – destination
4. VOQ arbitration; credit grant for fabric access; transmit to fabric
5. Egress SoC: receive from fabric; egress port QoS (scheduling); return buffer credit; transmit packet on wire on e2/1
What Is Classification?
 Matching packets
– Layer 2, Layer 3, and/or Layer 4 information
 Used to decide whether to apply a particular policy to a packet
– Enforce security, QoS, or other policies
 Some examples:
– Match TCP/UDP source/destination port numbers to enforce security policy
– Match destination IP addresses to apply policy-based routing (PBR)
– Match 5-tuple to apply marking policy
– Match protocol-type to apply Control Plane Policing (CoPP)
– etc.

CL TCAM Lookup – ACL

Security ACL:
ip access-list example
  permit ip any host 10.1.2.100
  deny ip any host 10.1.68.44
  deny ip any host 10.33.2.25
  permit tcp any any eq 22
  deny tcp any any eq 23
  deny udp any any eq 514
  permit tcp any any eq 80
  permit udp any any eq 161

Packet header: SIP 10.1.1.1, DIP 10.2.2.2, Protocol TCP, SPORT 33992, DPORT 80
1. Generate TCAM lookup key from the packet header: SIP | DIP | Pr | SP | DP = 10.1.1.1 | 10.2.2.2 | tcp | 33992 | 80
2. Compare the lookup key to CL TCAM entries, where X = “mask” (don't care)
3. Hit on entry xxxxxxx | xxxxxxx | tcp | xxx | 80 returns its result in CL SRAM – Permit
4. The returned result affects final packet handling
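A software approximation of the first-match semantics above; fields omitted from a rule play the role of masked (“x”) TCAM bits. A real TCAM compares all entries in parallel, but walking the rules in priority order gives the same first-match result:

```python
# Rules mirror "ip access-list example" above; omitted fields are wildcards.
RULES = [
    ({"dip": "10.1.2.100"},         "permit"),
    ({"dip": "10.1.68.44"},         "deny"),
    ({"dip": "10.33.2.25"},         "deny"),
    ({"proto": "tcp", "dport": 22}, "permit"),
    ({"proto": "tcp", "dport": 23}, "deny"),
    ({"proto": "udp", "dport": 514},"deny"),
    ({"proto": "tcp", "dport": 80}, "permit"),
    ({"proto": "udp", "dport": 161},"permit"),
]

def classify(pkt):
    # First (highest-priority) matching rule wins, like the lowest TCAM hit.
    for match, action in RULES:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action
    return "deny"   # implicit deny at the end of the ACL

pkt = {"sip": "10.1.1.1", "dip": "10.2.2.2", "proto": "tcp",
       "sport": 33992, "dport": 80}
print(classify(pkt))   # -> "permit" (hits the tcp/80 rule, as in the slide)
```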

CL TCAM Lookup – QoS

QoS classification ACLs:
ip access-list police
  permit ip any 10.3.3.0/24
  permit ip any 10.4.12.0/24
ip access-list remark-dscp-32
  permit udp 10.1.1.0/24 any
ip access-list remark-dscp-40
  permit tcp 10.1.1.0/24 any
ip access-list remark-prec-3
  permit tcp any 10.5.5.0/24 eq 23

Packet header: SIP 10.1.1.1, DIP 10.2.2.2, Protocol TCP, SPORT 33992, DPORT 80
1. Generate TCAM lookup key from the packet header: SIP | DIP | Pr | SP | DP = 10.1.1.1 | 10.2.2.2 | tcp | 33992 | 80
2. Compare the lookup key to CL TCAM entries (X = “mask”)
3. Hit on entry 10.1.1.xx | xxxxxxx | tcp | xxx | xxx returns its result in CL SRAM – Remark DSCP 40
4. The returned result affects final packet handling
NetFlow on Nexus 7000
 NetFlow collects flow data for packets traversing the switch
 Each module maintains independent NetFlow table

Capability           | M2                                  | F2 / F2E
Per-interface NetFlow | Yes                                | Yes
NetFlow direction     | Ingress/Egress                     | Ingress only
Full NetFlow          | Yes                                | No
Sampled NetFlow       | Yes                                | Yes
Bridged NetFlow       | Yes                                | Yes
Hardware cache        | Yes                                | No
Software cache        | No                                 | Yes
Hardware cache size   | 512K entries per forwarding engine | N/A
NDE (v5/v9)           | Yes                                | Yes
Full vs. Sampled NetFlow
 NetFlow collects full or sampled flow data
 Full NetFlow: Accounts for every packet of every flow on interface
– Available on M-Series modules only
– Flow data collection up to capacity of hardware NetFlow table
 Sampled NetFlow: Accounts for M in N packets on interface
– Available on both M-Series (ingress/egress) and F2/F2E (ingress only)
– M-Series: Flow data collection up to capacity of hardware NetFlow table
– F2/F2E: Flow data collection for up to ~1000pps per module

Sampled NetFlow Details
 Random packet-based sampling
 M:N sampling: Out of N consecutive packets, select M consecutive packets and
account only for those flows
 On M-Series, sampled packets create hardware NetFlow table entry
 On F2/F2E, sampled packets sent to LC CPU via module inband
– Rate limited to ~1000pps per module
 Software multiplies configured sampler rate by 100 on F2/F2E modules
– Example: when using a 1-out-of-100 sampler on an F2/F2E interface, the sampled rate becomes 1:10000
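A small sketch of the M:N window and the F2/F2E rate adjustment described above. The sampler below places the M sampled packets deterministically at the start of each window, a simplification of the random placement the hardware uses:

```python
def mn_sampler(m, n):
    """Yield True for the first M packets of every N-packet window."""
    count = 0
    while True:
        yield count < m
        count = (count + 1) % n

def effective_f2_rate(configured_n, multiplier=100):
    # Software multiplies the configured sampler rate by 100 on F2/F2E.
    return configured_n * multiplier

sampler = mn_sampler(1, effective_f2_rate(100))   # configured 1:100 -> 1:10000
sampled = sum(next(sampler) for _ in range(50_000))
print(sampled)   # 5 packets selected out of 50,000
```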

NetFlow on M2 Modules
 Forwarding engine hardware on each M2 module creates flow entries in its hardware NetFlow table
 Aged flow information passes from the hardware NetFlow table to the module LC CPU
 NetFlow v5 or v9 export packets are generated and sent to the NetFlow collector either in-band via the supervisor inband (through the fabric) or out-of-band via the supervisor mgmt0 interface; the LC CPUs reach the supervisor main CPU over the switched EOBC
Sampled NetFlow on F2/F2E Modules
 Sampled packets are sent from the SoC decision engine to the module LC CPU via the module inband
 The LC CPU populates a software NetFlow cache in module DRAM based on the received samples
 The LC CPU ages flows; NetFlow v5 or v9 export packets reach the NetFlow collector either in-band via the supervisor inband or out-of-band via the supervisor mgmt0 interface
Nexus 7000 Architecture Summary
 Chassis – Multiple chassis designs with density and airflow options
 Supervisor Engines – Control plane protocols, system and network management
 I/O Modules – Variety of front-panel interface and transceiver types with hardware-based forwarding and services, including unicast/multicast, bridging/routing, ACL/QoS classification, and NetFlow statistics
 Fabrics – High-bandwidth fabric to interconnect I/O modules and provide investment protection
Conclusion
 You should now have a thorough understanding of the Nexus 7000
switching architecture, I/O module design, packet flows, and key
forwarding engine functions…
 Any questions?

Reference: Acronym Decoder
 ACL – Access Control List
 ADJ – Adjacency
 ASIC – Application Specific Integrated Circuit
 CFP – C Form-factor Pluggable
 CoPP – Control Plane Policing
 COS – Class of Service
 DSCP – Differentiated Services Code Point
 DWRR – Deficit Weighted Round Robin
 ECMP – Equal Cost Multi Path
 EEE – Energy Efficient Ethernet
 EOBC – Ethernet Out-of-Band Channel
 FCoE – Fibre Channel over Ethernet
 FE – Forwarding Engine
 FEX – Fabric Extender (Nexus 2000 family)
 FIB – Forwarding Information Base
 FRU – Field Replaceable Unit
 GRE – Generic Routing Encapsulation
 HSRP – Hot Standby Router Protocol
 IGMP – Internet Group Management Protocol
 MPLS – Multiprotocol Label Switching
 NDE – NetFlow Data Export
 OTV – Overlay Transport Virtualization
 PACL – Port ACL
 PBR – Policy-Based Routing
 PIM – Protocol Independent Multicast
 QoS – Quality of Service
 QSFP+ – 40G Quad Small Form-factor Pluggable
 RACL – Router ACL
 RE – Replication Engine
 RPF – Reverse Path Forwarding
 RU – Rack Unit
 SFP+ – 10G Small Form-factor Pluggable
 SoC – System-on-chip / switch-on-chip
 TCAM – Ternary CAM
 uRPF – Unicast RPF
 VACL – VLAN ACL
 VDC – Virtual Device Context
 VOQ – Virtual Output Queuing
 VQI – Virtual Queuing Index
 XL – Refers to forwarding engine with larger FIB and ACL TCAMs
Complete Your Online Session Evaluation
Give us your feedback and you could win fabulous prizes. Winners announced daily.
Receive 20 Cisco Daily Challenge points for each session evaluation you complete.
Complete your session evaluation online now through either the mobile app or internet kiosk stations.

 Maximize your Cisco Live experience with your free Cisco Live 365 account. Download session PDFs, view sessions on-demand and participate in live activities throughout the year. Click the Enter Cisco Live 365 button in your Cisco Live portal to log in.
