October 2014
Brocade, the B-wing symbol, Brocade Assurance, ADX, AnyIO, DCX, Fabric OS, FastIron, HyperEdge, ICX, MLX, MyBrocade, NetIron,
OpenScript, VCS, VDX, and Vyatta are registered trademarks, and The Effortless Network and the On-Demand Data Center are trademarks
of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands and product names mentioned may be
trademarks of others.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any
equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document
at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be
currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in
this document may require an export license from the United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the
accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that
accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open
source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to
the open source software, and obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Contents
Preface
Introduction
    What is Brocade VCS Ethernet Fabric?
    Multi-Layer Architecture
    VCS Fabric Technology
Deploying IP Multicast
    PIM-SM
    Configuring PIM-SM Multicast
        Procedure Summary
    Detailed Procedure with Examples
        Configuring OSPF, PIM, and VRRP-E
        Configuring Access 2 VLAN and Protocol
        OSPF Configuration
        Configuring VE Interfaces
        Verifying OSPF and PIM
        Starting the Receivers
Preface
This document is a deployment guide for implementing multicast functionality in a Brocade VCS fabric
using either Fabric Cluster or Logical Chassis mode. It is written for technology decision-makers,
architects, systems engineers, NOC engineers, and other experts responsible for network deployment
and implementation.
This document provides step-by-step examples to prepare, perform, and verify the deployment of
multicast functionality, including IP multicast and Internet Group Management Protocol (IGMP)
snooping. It is assumed that the reader is familiar with establishing console access and entering
commands using the Brocade CLI.
Document History
Date            Version   Description
October 2014              Initial version
Introduction
What is Brocade VCS Ethernet Fabric?
Multi-Layer Architecture
VCS Fabric Technology
Multi-Layer Architecture
A multi-layer architecture is often used in Ethernet data centers to improve scalability, performance,
and manageability. This architecture typically has three layers:
Access Layer: Provides connectivity to all end devices, including storage, servers, hypervisors, and
so forth.
Aggregation Layer: Provides the boundary between Layer 2 and Layer 3 networks, and between
Access and Core layers. The best practice has been to connect multiple Access layer networks
through the Aggregation layer to avoid Layer 2 loops on redundant links. The Aggregation layer
provides resiliency and acts as the Layer 3 gateway.
Core Layer: Interconnects multiple Aggregation layers and provides resiliency to avoid a single
point of failure, using a Layer 3 interior gateway protocol (IGP), such as Open Shortest Path First
(OSPF), Intermediate System-to-Intermediate System (IS-IS), or Border Gateway Protocol (BGP).
                      VDX 6710       VDX 6720      VDX 6730                VDX 6740             VDX 8770
Ports                 48 1-Gb,       24 10-Gb or   24 10-Gb + 8 8-Gb or    48 10-Gb,            48 1-Gb, 48 10-Gb,
                      6 10-Gb        60 10-Gb      60 10-Gb + 16 8-Gb      4 40-Gb              12 40-Gb, 27 40-Gb,
                                                                                               6 100-Gb
Ethernet ports        1-Gb, 10-Gb    1-Gb, 10-Gb   1-Gb, 10-Gb             1-Gb, 10-Gb,         1-Gb, 10-Gb,
                                                                           40-Gb                40-Gb, 100-Gb
FCoE                  Yes            Yes           Yes                     Yes                  Yes
FC                    No             No            Yes, 8-Gb               Yes, 32 Flexports    No
                                                                           in NOS Release 5
iSCSI                 Yes            Yes           Yes                     Yes                  Yes
VCS                   Yes            Yes           Yes                     Yes                  Yes
Virtual fabric/GVLAN  No             No            No                      Yes                  Yes
VXLAN gateway         No             No            No                      Yes                  No
In a Clos topology, also known as a core-edge topology, edge switches connect to core switches (see
Figure 1). A fabric deployment using the Clos topology provides a balance between scalability, high
availability, and low latency.
In a full-mesh topology, shown in Figure 2, every switch is connected directly to every other switch. A
full-mesh topology provides a low-latency fabric with maximum path availability for deployments where
scalability is not a major requirement.
FIGURE 2 Full Mesh Topology
This example illustrates three separate VCS fabrics in two separate datacenters, each with its own
VCS ID:
VCS 10: Access fabric including four 6740 switches (VDX1, VDX2, VDX3, and VDX4)
VCS 11: Aggregation fabric including two 8770 switches (VDX5 and VDX6)
VCS 12: Access fabric including four 6740 switches (VDX7, VDX8, VDX9, and VDX10)
These fabrics are interconnected through MLX core routers.
In Fabric Cluster mode, each switch is configured independently, but all switches in the same fabric
must be enabled with the same VCS ID. Changing the VCS ID of a switch reverts the configuration of
the switch to the default-config and restarts the switch.
Automatic Neighbor Discovery. When a new switch is connected to an existing switch in Fabric
Cluster mode, the existing switch determines whether the new neighbor also has Fabric Cluster mode
enabled. If it does, and the VCS IDs match, the new switch joins the fabric.
Automatic ISL Formation and Hardware-based Trunk Formation. When a switch joins an Ethernet
fabric, ISLs automatically form between directly connected switches within the fabric. If more than one
ISL exists between two switches, Brocade trunks are formed automatically: all ISLs connected to the
same neighboring VDX switch attempt to form a trunk. Trunks form only among ports that belong to
the same port group; ISLs on ports in different port groups form separate trunks. No user intervention
is necessary to form a Brocade trunk.
Brocade trunks are hardware-based LAGs, which distribute traffic evenly across all the available links in
a trunk on a frame-by-frame basis without hashing. Brocade trunk ISLs are essentially zero-configuration
LAGs. Several limitations affect the deployment of Brocade trunks.
Step 1: Configure the VCS ID for the Access layer nodes in Datacenter 1.
VDX1-Access1# vcs vcsid 10    <- Configuring vcsid 10
VDX2-Access2# vcs vcsid 10    <- Configuring vcsid 10
VDX3-Access3# vcs vcsid 10    <- Configuring vcsid 10
VDX4-Access4# vcs vcsid 10    <- Configuring vcsid 10
Step 2: Configure the VCS ID for the Aggregation layer nodes in Datacenter 1.
VDX5-Agr1# vcs vcsid 11    <- Configuring vcsid 11
VDX6-Agr2# vcs vcsid 11    <- Configuring vcsid 11
Step 3: Configure the VCS ID for the Access layer nodes in Datacenter 2.
VDX7-Access7# vcs vcsid 12    <- Configuring vcsid 12
VDX8-Access8# vcs vcsid 12    <- Configuring vcsid 12
NOTE
If there is an rbridge ID conflict between two VDX switches, one of the offending switches must
change its rbridge ID. The valid range for rbridge IDs is 1-239.
To configure the rbridge IDs for the example deployment, complete the following steps.
Procedure
Step 1: Configure the rbridge IDs for the Access layer nodes in Datacenter 1.
VDX1-Access1# vcs rbridge-id 100    <- Configuring rbridge ID 100
VDX2-Access2# vcs rbridge-id 101    <- Configuring rbridge ID 101
VDX3-Access3# vcs rbridge-id 102    <- Configuring rbridge ID 102
VDX4-Access4# vcs rbridge-id 103    <- Configuring rbridge ID 103
Step 2: Configure the rbridge IDs for the Aggregation layer nodes in Datacenter 1.
VDX5-Agr1# vcs rbridge-id 200
VDX6-Agr2# vcs rbridge-id 201
Step 3: Configure the rbridge IDs for the Access layer nodes in Datacenter 2.
VDX7-Access7# vcs rbridge-id 150    <- Configuring rbridge ID 150
VDX8-Access8# vcs rbridge-id 151    <- Configuring rbridge ID 151
Procedure
Step 1: Verify VDX1-Access1 VCS.
VDX1-Access1# show vcs
Config Mode           : Local-Only <- Local mode; configuration should be done individually on each node
VCS ID                : 10 <- VCS ID
Total Number of Nodes : 4
Rbridge-Id  WWN                        Management IP  Status  HostName
-----------------------------------------------------------------------
100         >10:00:00:27:F8:FD:F6:C0*  10.24.19.1     Online  VDX1Access1 <- Local switch
101         10:00:00:27:F8:FE:44:FC    10.24.19.2     Online  VDX2Access2 <- Other nodes
102         10:00:00:27:F8:FD:2B:98    10.24.19.3     Online  VDX3Access3
103         10:00:00:27:F8:FC:B7:89    10.24.19.4     Online  VDX4Access4
VDX1-Access1 VCS verification in detail
VDX1-Access1# show vcs detail
Config Mode           : Local-Only
VCS ID                : 10
Total Number of Nodes : 4
Node :1
 Serial Number        : CPLayer 2507K0CN
 Condition            : Good
 Status               : Co-ordinator
 VCS Id               : 10
 Rbridge-Id           : 100*
 Co-ordinator         : YES
 WWN                  : 10:00:00:27:F8:FD:F6:C0
 Switch MAC           : 00:27:F8:FD:F6:C0
 FCF MAC              : 00:27:F8:FD:F7:44
 Switch Type          : BR-VDX6740
 Internal IP          : 127.1.0.100
 Management IP        : 10.24.19.1 <- Should be configured by the user on interface management <rbridge-id>/0
Node :2
 Serial Number        : CPLayer 2507K0CS
 Condition            : Good
 Status               : Connected to Cluster
 VCS Id               : 10
 Rbridge-Id           : 101
 Co-ordinator         : NO
 WWN                  : 10:00:00:27:F8:FE:44:FC
 Switch MAC           : 00:27:F8:FE:44:FC
 FCF MAC              : 00:27:F8:FE:45:80
 Switch Type          : BR-VDX6740
 Internal IP          : 127.1.0.101
 Management IP        : 10.24.19.2
Node :3
 Serial Number        : CPLayer 2507K0CP
 Condition            : Good
 Status               : Connected to Cluster
 VCS Id               : 10
 Rbridge-Id           : 102
 Co-ordinator         : NO
 WWN                  : 10:00:00:27:F8:FD:2B:98
 Switch MAC           : 00:27:F8:FD:2B:98
 FCF MAC              : 00:27:F8:FD:2C:1C
 Switch Type          : BR-VDX6740
 Internal IP          : 127.1.0.102
 Management IP        : 10.24.19.3
Node :4
 Serial Number        : CPLayer 2507K0CJ
 Condition            : Good
 Status               : Connected to Cluster
 VCS Id               : 10
 Rbridge-Id           : 103
 Co-ordinator         : NO
 WWN                  : 10:00:00:27:F8:FC:B7:89
 Switch MAC           : 00:27:F8:FC:B7:89
 FCF MAC              : 00:27:F8:FC:B8:0D
 Switch Type          : BR-VDX6740
 Internal IP          : 127.1.0.103
 Management IP        : 10.24.19.4
VDX1-Access1 ISL verification
VDX1-Access1# show fabric isl all
Rbridge-id: 100  #ISLs: 3 <- Rbridge ID 100 forms 3 ISLs
Src    Src          Nbr    Nbr
Index  Interface    Index  Interface    Nbr-WWN                  BW     Trunk  NbrName
---------------------------------------------------------------------------------------
4      Te 100/0/1   4      Te 101/0/1   10:00:00:27:F8:FE:44:FC  40-Gb  Yes    "VDX2-Access2"
12     Te 100/0/9   12     Te 103/0/9   10:00:00:27:F8:FC:B7:89  40-Gb  Yes    "VDX4-Access4"
21     Te 100/0/18  21     Te 102/0/18  10:00:00:27:F8:FD:2B:98  40-Gb  Yes    "VDX3-Access3"
<- ISLs are formed with the other nodes in the same fabric. The NbrName column displays the host-name of
the other switches; hostnames should be configured by the user under switch-attributes <rbridge-id>
using the CLI host-name <name>
Step 3: Repeat Steps 1 and 2 for all the switches in the fabric.
When establishing dynamic LACP LAGs, PDU frames are exchanged between the switch and end
devices. The PDU includes a unique identifier for both the switch and the device that is used to
determine the port channel with which a link should be associated.
Brocade NOS Release 2.1.0 and later supports a consistent local LACP System ID (SID), which is
shared between all rbridges that are connected to the same vLAG, so the partner device sees a single
LACP system. You must configure the same parameters for a given vLAG on all participating nodes
in the VCS. Other important points to keep in mind when configuring vLAGs include the following:
Members of a vLAG in a VCS need not be directly connected with each other.
MAC addresses on a vLAG are learned as multi-homed MAC addresses reachable over all the
participating VCS nodes.
vLAGs are supported on 1-GbE, 10-GbE, and 40-GbE ports, but all ports in a vLAG must be the same speed.
All VLAN, ACL, and QoS configuration applied on a vLAG is similar to Layer 2 port-channel
configuration.
All STP BPDUs are treated as data traffic on any member port of a vLAG.
If a MAC address learned on a vLAG moves to another vLAG or another edge port on the VCS, the
new source port triggers a fabric-wide update.
FCoE over vLAGs is not supported.
The first node and port to join the vLAG is treated as the primary node and port. All broadcast,
unknown unicast, and multicast (BUM) traffic travels on the primary node and port.
If the participating VCS nodes in the vLAG are not equidistant from the VCS entry point where
the destination MAC address is looked up, the shortest-path algorithm chooses the closest,
lowest-cost node.
You can specify the minimum number of links that must be active on a vLAG before it can form; the
default is 1. Until the minimum number of links is online, the port channel appears with "protocol down"
status.
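As a sketch, the minimum-links threshold is set under the port-channel interface; the value 2 below is an example, and the minimum-links command is assumed to be available in your NOS release:

interface Port-channel 50
 minimum-links 2    <- Port channel stays "protocol down" until at least 2 member links are up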
vLAG scalability
The table below summarizes the maximum number of components allowed in a single VCS fabric.
Maximum number
384
32
512
16
Procedure
To configure the port channels in the Fabric Cluster mode deployment, complete
the following steps.
The following steps are for configuring the Access layer nodes in Datacenter1.
Step 1: Configure the VDX1-Access1 interfaces.
interface FortyGigabitEthernet 100/0/49
no fabric isl enable <- Disabling isl and trunk
no fabric trunk enable
channel-group 50 mode active type standard <- configuring link aggregation
lacp timeout long <- LACP long timeout is configured
no shutdown <- Interface is enabled
!
interface FortyGigabitEthernet 100/0/51
no fabric isl enable <- Disabling isl and trunk
no fabric trunk enable
channel-group 50 mode active type standard <- configuring link aggregation
lacp timeout long <- LACP long timeout is configured
no shutdown <- Interface is enabled
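Once both member interfaces are configured, the aggregation state can be confirmed. A minimal verification sketch (output format varies by NOS release):

VDX1-Access1# show port-channel 50    <- Displays the LACP mode, member links, and link state of port-channel 50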
The following steps are for configuring the Aggregation layer nodes in
Datacenter1.
Step 5: Configure the VDX5-Agr1 interfaces.
interface FortyGigabitEthernet 200/0/49
no fabric isl enable <- Disabling isl and trunk
no fabric trunk enable
channel-group 50 mode active type standard <- configuring link aggregation
lacp timeout long <- LACP long timeout is configured
no shutdown <- Interface is enabled
!
interface FortyGigabitEthernet 200/0/51
no fabric isl enable <- Disabling isl and trunk
no fabric trunk enable
channel-group 50 mode active type standard
lacp timeout long
no shutdown
Step 12: Repeat Steps 8 to 11 to verify the port-channel details for the other
switches in the fabric.
Username/RBACs
Fabric ISL
VLAN
MTU
LLDP DCBX
Channel group
LACP timeout
vCenter Name
QoS
Monitor Session
sFlow
License installation/enablement/removal
The following commands transition a device between standalone, Logical Chassis, and Fabric Cluster
modes:
Procedure
To deploy a VCS in Logical Chassis mode, complete the following step.
Step 1: Configure Access Layer in Datacenter 1.
VDX1-Access1# vcs vcsid 10 rbridge-id 100 logical-chassis enable <- Configuring vcsid 10 in logical-chassis mode
VDX2-Access2# vcs vcsid 10 rbridge-id 101 logical-chassis enable <- Configuring vcsid 10 in logical-chassis mode
VDX3-Access3# vcs vcsid 10 rbridge-id 102 logical-chassis enable <- Configuring vcsid 10 in logical-chassis mode
VDX4-Access4# vcs vcsid 10 rbridge-id 103 logical-chassis enable <- Configuring vcsid 10 in logical-chassis mode
VDX2Access2
102  >10:00:00:27:F8:FD:2B:98*  10.24.19.3  Online  VDX3Access3 <- Principal switch. Configurations for other switches in the fabric can be performed through this switch
103  10:00:00:27:F8:FC:B7:89    10.24.19.4  Online  VDX4Access4
Internal IP   : 127.1.0.103
Management IP : 10.24.19.4 <- Can be configured through the principal switch by accessing interface management 103/0
Rbridge-id: 101
#ISLs: 3
Src    Src          Nbr    Nbr
Index  Interface    Index  Interface    Nbr-WWN                  BW     Trunk  NbrName
---------------------------------------------------------------------------------------
4      Te 101/0/1   4      Te 100/0/1   10:00:00:27:F8:FD:F6:C0  40-Gb  Yes    "VDX1-Access1"
12     Te 101/0/9   12     Te 102/0/9   10:00:00:27:F8:FD:2B:98  40-Gb  Yes    "VDX3-Access3"
20     Te 101/0/17  20     Te 103/0/17  10:00:00:27:F8:FC:B7:89  40-Gb  Yes    "VDX4-Access4"

Rbridge-id: 103
#ISLs: 3
Src    Src          Nbr    Nbr
Index  Interface    Index  Interface    Nbr-WWN                  BW     Trunk  NbrName
---------------------------------------------------------------------------------------
4      Te 103/0/1   4      Te 102/0/1   10:00:00:27:F8:FD:2B:98  30G    Yes    "VDX3-Access3"
14     Te 103/0/11  14     Te 100/0/11  10:00:00:27:F8:FD:F6:C0  40-Gb  Yes    "VDX1-Access1"
20     Te 103/0/17  20     Te 101/0/17  10:00:00:27:F8:FE:44:FC  40-Gb  Yes    "VDX2-Access2"

Rbridge-id: 102
#ISLs: 3
Src    Src          Nbr    Nbr
Index  Interface    Index  Interface    Nbr-WWN                  BW     Trunk  NbrName
---------------------------------------------------------------------------------------
4      Te 102/0/1   4      Te 103/0/1   10:00:00:27:F8:FC:B7:89  30G    Yes    "VDX4-Access4"
12     Te 102/0/9   12     Te 101/0/9   10:00:00:27:F8:FE:44:FC  40-Gb  Yes    "VDX2-Access2"
20     Te 102/0/17  20     Te 100/0/17  10:00:00:27:F8:FD:F6:C0  40-Gb  Yes    "VDX1-Access1"
Procedure
Complete the following steps to configure the Access layer nodes in Datacenter
1.
Step 1: Configure VDX1-Access1 VLAN
VDX3-Access3(config)# rbridge-id 100 <- Accessing rbridge-id 100 from the principal switch
VDX3-Access3(config-rbridge-id-100)# interface vlan 100 <- Configuring VLAN 100
Port    Address  Pri  State       Neigh Address  Neigh ID  Ev  Opt  Cnt
Ve 100  5.1.1.2  254  FULL/BDR    5.1.1.1        5.1.1.1   7   2    0
Ve 100  5.1.1.2  255  FULL/DR     5.1.1.3        5.1.1.3   6   2    0
Ve 100  5.1.1.2  1    2WAY/OTHER  5.1.1.4        5.1.1.4   4   2    0
<- Because neither of these two nodes is the DR or BDR, the neighborship stays in the 2WAY state
Procedure
Step 1: Configure VDX5-Agr1 VLAN
VDX6-Agr2(config)# rbridge-id 200 <- Accessing rbridge-id 200 from the principal switch
VDX6-Agr2(config-rbridge-id-200)# interface vlan 100 <- Configuring VLAN 100
OSPF neighbor verification on VDX5-Agr1:
Address  Pri  State       Neigh Address  Neigh ID
6.1.1.1       FULL/DR     6.1.1.2
5.1.1.1       FULL/OTHER  5.1.1.2        5.1.1.2
5.1.1.1  255  FULL/DR     5.1.1.3        5.1.1.3
5.1.1.1       FULL/OTHER  5.1.1.4        5.1.1.4

OSPF neighbor verification on VDX6-Agr2:
Address  Pri  State       Neigh Address  Neigh ID
6.1.1.2       FULL/BDR    6.1.1.1        5.1.1.1
5.1.1.3  254  FULL/BDR    5.1.1.1        5.1.1.1
5.1.1.3       FULL/OTHER  5.1.1.2        5.1.1.2
5.1.1.3       FULL/OTHER  5.1.1.4        5.1.1.4
VDX6-Agr2# show vlan 101
VLAN  Name      State                     Ports  (F)-FCoE (R)-RSPAN (c)-Converged
                                                 (u)-Untagged, (t)-Tagged
===== ========= ========================= ========================================
101   VLAN0101  INACTIVE(no member port)  <- No port is tagged to this VLAN, but
neighborship is formed with AGR2 through the ISL
Deploying IP Multicast
PIM-SM
Configuring PIM-SM Multicast
Detailed Procedure with Examples
IP multicast is an efficient way of transmitting a data stream to multiple hosts simultaneously. Protocol
Independent Multicast (PIM) is one of many protocols designed for IP multicast. PIM does not rely on a
specific routing protocol to create its network topology state. The PIM messages are sent encapsulated
in an IP packet with the IP protocol field set to 103. Depending on the type of message, the packet is
either sent to all PIM routers at the multicast address (224.0.0.13) or sent as a unicast to a specific host.
The following four varieties of PIM are available for deployment in different scenarios and are described
below:
PIM-SM
PIM-SM is the most commonly deployed variety and is designed for large networks where most of the
hosts are not interested in every multicast data stream. PIM-SM creates a unidirectional shared tree
from a common node in the network, called the Rendezvous Point (RP), acting as the root. The RP acts
as the intermediary between the source for a specific multicast group and the interested hosts or
devices. The RP can be statically configured on each PIM router or dynamically assigned through
mechanisms such as Bootstrap Router (BSR), Auto-RP, Anycast-RP, or Embedded RP.
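As a minimal sketch of the static option, which is the approach used later in this guide (11.11.11.11 is this deployment's example RP address), every PIM router is pointed at the same RP:

router pim
 rp-address 11.11.11.11    <- All PIM routers in the domain must agree on the RP for a given group range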
Within a network, the RP should always be upstream in relation to the destination hosts. Each device
interested in joining a group sends a request to the RP for that group. To limit the join messages, the
local network identifies an upstream router as the designated router (DR). All hosts downstream from
the DR send Internet Group Management Protocol (IGMP) join messages to it, and the DR in turn sends
a single join message to the RP on behalf of all the downstream interested hosts.
The RP receives the first few packets of the multicast stream from the source hosts, encapsulated in
PIM register messages sent as unicast to the RP. The RP decapsulates these packets and forwards
them to the subscribed DRs.
In addition to creating the shared tree, PIM-SM also provides the option to create a source-based tree
rooted at a router adjacent to the source. This gives the destination hosts the option of switching to the
source-based tree when it provides a shorter path from the multicast source, which optimizes the
traffic flow by removing extra hops.
NOTE
Each PIM-enabled node in the fabric should have connectivity to the RP.
PIM support with VDX switches is not provided for the following:
IP Version 6
Virtual Routing and Forwarding (VRF)
Prefix list
Switch as BSR candidate (VDX switch processes BSR messages in current release)
Switch as RP candidate
Procedure Summary
In the topology above, the source is attached to Access 1 in VLAN 100, sending from 20.1.1.3 to groups
225.0.0.10 through 225.0.0.19. Layer 2 VLAN 100 is enabled on Access 1, and Layer 3 PIM is enabled on
Aggregation in VLAN 100. Three receivers are attached.
One receiver is attached in Datacenter 1 on VLAN 120 by enabling IGMP snooping on VLAN 120 in
Access 1 and IP and PIM on Aggregation 1.
A second receiver is attached in Datacenter 2 on VLAN 150 by enabling IGMP snooping on VLAN
150 on VDX 9 and VDX 10 and by enabling Layer 3 with PIM on VDX7 and VDX8 on VLAN 150 and
VLANs 160 and 161 to Core2.
The third receiver is attached to Datacenter 2 in VLAN 200 by enabling IGMP snooping on Access 2
and Layer 3 and PIM on Core2.
MSDP anycast-RP is established between MLX1 and MLX2 in the same PIM domain. MSDP peering is
established from MLX1 and MLX2 to MLX3 in a different PIM domain.
Step 1: Configure Layer 2 VLANs.
Configure a Layer 2 VLAN 100 in Access 1 with ports connected to the source on VDX3 and VDX4
tagged to it. Tag the port channel connected to Aggregation1 in that VLAN.
Configure Layer 2 VLAN 100 on Aggregation 1 and tag the port channel to Access 1 in that VLAN.
Configure Layer 2 VLANs 101 and 103 on VDX5 and tag ports connected to MLX1 and MLX2 in
those VLANs respectively.
Configure Layer 2 VLANs 102 and 104 on VDX6 and tag ports connected to MLX1 and MLX2
respectively.
Step 2: Enable OSPF, VRRP-E and PIM.
Enable OSPF, VRRP-E and PIM globally in the Aggregation layer. OSPF, VRRP-E and PIM are
enabled on VE interface (VE 100) of the Aggregation layer on ports connected to Access 1.
OSPF and PIM are enabled on the VE interfaces (VE 101, 102, 103, and 104) toward Core 1.
OSPF and PIM are enabled on ports from VDX 7 and VDX 8 connected to Core 2 in different VLANs
(VLAN 160, 161).
Step 3: Establish OSPF neighbors in Area 1 and Area 2.
Because IPv4 multicast with BGP is not supported in NOS 4.0.1, OSPF neighborship is established
between Aggregation 1 and Core 1 (MLX-1 and MLX-2) in OSPF Area 1.
NOTE
This does not apply to Release 5.0.0 and later, which support IPv4 multicast with BGP.
OSPF neighborship is established between CORE 2 (MLX-3) and Access 2 (VDX7 and VDX8) in
OSPF Area 2.
PIM neighborship is established between Aggregation 1 and Core 1 (MLX-1 and MLX-2) and
between Access 2 (VDX7 and VDX8) and Core 2 (MLX-3).
Step 4: Establish OSPF neighbors in Area 0.
OSPF neighborship is established between MLX1, MLX2, MLX3 in Area 0.
IP PIM-Sparse is enabled on interfaces connecting MLX1, MLX2, and MLX3. IP PIM border is
configured on interfaces between MLX-1 to MLX-3 and MLX-2 to MLX-3 to differentiate PIM
domains.
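The PIM domain boundary in Step 4 can be sketched on the MLX (NetIron) side as follows; the interface number is a placeholder, and exact syntax may vary by NetIron release:

interface ethernet 3/1
 ip pim-sparse    <- Enables PIM-SM on the interface
 ip pim border    <- Marks the interface as a PIM domain boundary toward the other domain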
Step 5: Establish MSDP peers.
MSDP peering is established between MLX1, MLX2, and MLX3 with MSDP anycast RP between
MLX1 and MLX2 for load-balancing in the same PIM domain. With MSDP, individual source-group
pairs can be filtered in Source-Active messages.
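The MSDP peering described above can be sketched on the MLX side as follows; the peer address is a placeholder, and exact NetIron syntax may differ by release:

router msdp
 msdp-peer 3.3.3.3    <- Establishes an MSDP session with the remote RP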
Step 6: Configure Layer 2 VLANs on Access 1 and Access 2 with IGMP snooping.
Configure Layer 2 VLANs 120 and 200 on Access 1 and Access 2 respectively with IGMP snooping
enabled on them.
Add the ports on VDX 3, 4, and VDX 9, 10 connected to receivers in those VLANs respectively.
Add the ports connecting Access 1 to Aggregation 1 and Access 2 to Core 2 in those VLANs. Then
configure the corresponding VE interfaces, adding an IP address, OSPF, and PIM on Aggregation 1
and Core 2, and VRRP-E on Aggregation 1.
Step 7: Configure another Layer 2 VLAN 150 on Access2 with IGMP snooping.
Configure another Layer 2 VLAN 150 on Access2 with ports connected to Receivers on VDX 9 and
10 tagged in that VLAN with IGMP snooping.
Configure VE interface with OSPF, VRRP-E, IP address, and PIM on VDX 7 and VDX 8.
Step 8: Start the receivers.
Start the receivers to send IGMP joins.
Verify the IGMP group membership on all nodes.
Run the source traffic and verify traffic forwarding.
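The verification in Step 8 can be sketched with show commands of this form on the PIM-enabled nodes (command names follow NOS conventions; availability and output vary by release):

show ip igmp groups    <- Confirms the IGMP group memberships created by the receivers' joins
show ip pim mcache     <- Confirms the multicast forwarding entries once the source traffic is running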
Procedure
Step 1: Configure VLANs.
Step 2: Configure VDX3-Access3 VLAN.
VDX3-Access3# interface Vlan 100 <- Enabling VLAN 100
Access1: Adding ports connected to source and port-channel connected to Aggregation Layer
in Layer 2 VLAN
VDX4-Access4# show run int ten 102/0/33
interface TenGigabitEthernet 102/0/33
fabric isl enable
fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan add 100 <- Port is tagged to VLAN 100
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
VDX4-Access4# show run int ten 102/0/34
interface TenGigabitEthernet 102/0/34
fabric isl enable
fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan add 100
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
VDX4-Access4# show run int ten 103/0/33
interface TenGigabitEthernet 103/0/33
fabric isl enable
fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan add 100
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
VDX4-Access4# show run int ten 103/0/34
interface TenGigabitEthernet 103/0/34
fabric isl enable
fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan add 100
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
VDX4-Access4# show run int port-channel 50
interface Port-channel 50
vlag ignore-split
speed 40000
switchport
switchport mode trunk
switchport trunk allowed vlan add 100
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
VDX4-Access4#
Procedure
Step 1: Configure OSPF, PIM, and VRRP-E on VDX5-Agr1.
router ospf <- Enabling OSPF
area 1 <-Configuring Area 1
router pim <- Enabling PIM
rp-address 11.11.11.11 <-Configuring static RP
protocol vrrp-extended <-Configuring VRRP-E
!
Procedure
Step 1: Configure VLAN globally from principal switch VDX10.
VDX10-Access10 VLAN configuration
VDX10-Access10# interface Vlan 160 <- Enabling VLAN 160
router ospf                <- Enabling OSPF
 area 0                    <- Areas 0 and 1 are configured
 area 1
 log adjacency
router pim                 <- Enabling PIM
 rp-address 11.11.11.11    <- Static RP
OSPF Configuration
To establish OSPF neighbors in the VCS fabric, complete the following steps.
Procedure
Step 1: Establish OSPF neighbors in Area 0.
OSPF neighborship is established between MLX1, MLX2, and MLX3 in Area 0. IP PIM-Sparse is
enabled on interfaces connecting MLX1, MLX2, and MLX3 and IP PIM border is configured on
interfaces between MLX-1 to MLX-3 and MLX-2 to MLX-3 to differentiate PIM domains.
Step 2: Configure OSPF, PIM and router-id on the MLX1 interface.
interface ethernet 3/2
enable
ip ospf area 0 <- OSPF area 0
ip ospf cost 5
Configuring VE Interfaces
To configure VE interface from principal switch under rbridge-id, complete the following steps.
Procedure
Step 1: Configure VLAN (IGMP) on VDX5-Agr1.
VDX5-Agr1# interface Vlan 120    <- Enabling VLAN 120
ip igmp snooping enable          <- IGMP snooping is enabled to receive IGMP joins
interface Ve 120
 ip ospf area 1                  <- OSPF area 1 is configured
 ip proxy-arp
 ip address 40.1.1.1/25          <- Unique IP address
 ip pim-sparse                   <- PIM-SM
 no shutdown                     <- VE interface is enabled
 vrrp-extended-group 120         <- VRRP-E is configured with VRID 120
  virtual-ip 40.1.1.4            <- Virtual IP configured as the gateway for the host from which joins are sent
  enable                         <- VRRP-E is activated
  no preempt-mode
  short-path-forwarding          <- Makes the node forward traffic it receives instead of sending it through the VRRP-E master
Step 7: Configure VLAN on Access2 with ports connected to receivers with IGMP snooping.
Configure another Layer 2 VLAN 150 on Access2 with ports connected to Receivers on VDX 9 and 10
tagged in that VLAN with IGMP snooping.
Configure VE interface with OSPF, VRRP-E, IP address, and PIM on VDX 7 and VDX 8.
Step 8: Configure VLAN (IGMP) on VDX7-Access7.
VDX7-Access7# interface Vlan 200 <- Enabling VLAN 200
ip igmp snooping enable <- IGMP snooping is enabled to receive IGMP joins
Step 12:
vlan 200 <- VLAN 200
tagged ethernet 1/1 <- Tagged interface
router-interface ve 200 <- VE interface
interface ve 200
ip ospf area 2 <- OSPF area 2
ip address 30.1.1.1/25 <- IP address connected to host through Access 2
ip pim-sparse <- PIM-SM which will enable IGMP by default
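Step 1 of this procedure states that IP PIM border is configured on the inter-MLX links to separate the PIM domains, but no example appears in the listings. A minimal sketch follows; the interface number is an assumption.

```
interface ethernet 3/2  <- Link from MLX1 toward MLX3 (assumed)
 ip pim border          <- Marks the PIM domain boundary on this interface
```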
Procedure
Step 1: Verify OSPF on MLX1.
Core1-MLX1#show ip os neighbor
Number of Neighbors is 4, in FULL state 4
Port Address Pri State    Neigh Address Neigh ID    Ev Opt Cnt
3/2  5.1.2.1 1   FULL/DR  5.1.2.2       44.44.44.44 6  66  0
3/4  6.1.1.1 254 FULL/DR  6.1.1.2       33.33.33.33 5  66  0
v101 2.1.1.2 1   FULL/BDR 2.1.1.1       2.1.1.1     4  2   0
v102 2.2.1.2 1   FULL/DR  2.2.1.1       20.1.1.2    6  2   0
Address Pri State    Neigh Address Neigh ID    Ev Opt Cnt
6.1.1.2 255 FULL/BDR 6.1.1.1       22.22.22.22 5  66  0
5.1.3.1 1   FULL/DR  5.1.3.2       44.44.44.44 5  66  0
3.1.1.2 1   FULL/BDR 3.1.1.1       2.1.1.1     5  2   0
3.2.1.2 1   FULL/DR  3.2.1.1       20.1.1.2    6  2   0
Address Pri State   Neigh Address Neigh ID   Ev Opt Cnt
5.1.2.2 1   FULL/BDR 5.1.2.1      22.22.22.22 5  66  0
5.1.3.2 1   FULL/BDR 5.1.3.1      33.33.33.33 5  66  0
7.1.1.2 1   FULL/DR  7.1.1.1      45.1.1.2    17 2   0
7.2.1.2 1   FULL/DR  7.2.1.1      45.1.1.3    16 2   0
(The corresponding show ip ospf neighbor output on the aggregation and access VDX switches likewise shows every neighbor in the FULL state; the neighbor IDs seen include 22.22.22.22, 33.33.33.33, 44.44.44.44, 20.1.1.2, 45.1.1.2, and 45.1.1.3.)
(The show ip pim interface output here confirms PIM-SM (SMv2) enabled, with the router as DR ("Itself") on each interface: five interfaces on each of the first two routers, including v103 3.1.1.2, v104 3.2.1.2, and the shared loopback l1 11.11.11.11, and six interfaces on MLX3: e1/2 5.1.2.2, e1/5 5.1.3.2, v160 7.1.1.2, v161 7.2.1.2, v200 30.1.1.1, and l1 11.11.11.11.)
Interface Address  Holdtime(sec) Age(sec) UpTime(Dd HH:MM:SS) Priority
Ve 100    20.1.1.2 105           21       21:02:49            1
Ve 101    2.1.1.2  105           17       1d 01:22:36         1
Ve 103    3.1.1.2  105           2        23:36:58            1
Ve 120    40.1.1.3 105           4        20:37:32            1
Interface Address  Holdtime(sec) Age(sec) UpTime(Dd HH:MM:SS) Priority
Ve 100    20.1.1.4 105           13       21:04:14            1
Ve 102    2.2.1.2  105           30       1d 01:02:20         1
Ve 104    3.2.1.2  105           23       23:38:19            1
Ve 120    40.1.1.1 105           27       20:38:58            1
Interface Address  Holdtime(sec) Age(sec) UpTime(Dd HH:MM:SS) Priority
Ve 150    45.1.1.3 105           21       22:05:51            1
Ve 160    7.1.1.2  105           27       21:13:56            1
-------------+---------------+----+-----+------------------------------+---+--------+-----
Ve 150        45.1.1.2        v2SM SM    45.1.1.3   Ve 150                1   None     1
Ve 160        7.1.1.1         v2SM SM    7.1.1.2    Ve 160                1   None     1
VDX7-Access7# show ip pim rpf 11.11.11.11
upstream nbr 7.1.1.2 on Ve 160 <- It shows the connected interface to MLX3
Interface Address  Holdtime(sec) Age(sec) UpTime(Dd HH:MM:SS) Priority
Ve 150    45.1.1.2 105           3        22:07:02            1
Ve 161    7.2.1.2  105           7        21:15:06            1
IP Address       State      Mesh-group-name
33.33.33.33      ESTABLISH
Keep Alive Time  Hold Time  Age
60               75         17
                 Message Sent  Message Received
Keep Alive       1313          1313
Notifications    0             0
Source-Active    1323          70
Lack of Resource 0
Last Connection Reset Reason:Reason Unknown
Notification Message Error Code Received:Unspecified
Notification Message Error SubCode Received:Not Applicable
Notification Message Error Code Transmitted:Unspecified
Notification Message Error SubCode Transmitted:Not Applicable
Local IP Address: 22.22.22.22
TCP Connection state: ESTABLISHED
Local host: 22.22.22.22, Local Port: 8711
Remote host: 33.33.33.33, Remote Port: 639
ISentSeq: 1701393677  SendNext: 1701566001  TotUnAck: 0
SendWnd:  65000       TotSent:  172324      ReTrans:  0
IRcvSeq:  1691881404  RcvNext:  1691894064  RcvWnd:   65000
TotalRcv: 12660       RcvQue:   0           SendQue:  0
Input SA Filter:Not Applicable
Input (S,G) route-map:None
Input RP route-map:None
Output SA Filter:Not Applicable
Output (S,G) route-map:None
Output RP route-map:None
         SA           NOT
      In     Out    In  Out   Age
      1438   72     0   0     38
      1381   1385   0   0     31
IP Address       State      Mesh-group-name
44.44.44.44      ESTABLISH
Keep Alive Time  Hold Time  Age
60               75         4
                 Message Sent  Message Received
Keep Alive       1308          1308
Notifications    0             0
Source-Active    1385          1382
Lack of Resource 0
Last Connection Reset Reason:Reason Unknown
Notification Message Error Code Received:Unspecified
Notification Message Error SubCode Received:Not Applicable
Notification Message Error Code Transmitted:Unspecified
Notification Message Error SubCode Transmitted:Not Applicable
Local IP Address: 33.33.33.33
TCP Connection state: ESTABLISHED
Local host: 33.33.33.33, Local Port: 8676
Remote host: 44.44.44.44, Remote Port: 639
ISentSeq: 2243890535  SendNext: 2244067186  TotUnAck: 0
SendWnd:  65000       TotSent:  176651      ReTrans:  8
IRcvSeq:  2317396728  RcvNext:  2317573235  RcvWnd:   65000
TotalRcv: 176507      RcvQue:   0           SendQue:  0
Input SA Filter:Not Applicable
Input (S,G) route-map:None
Input RP route-map:None
Output SA Filter:Not Applicable
Output (S,G) route-map:None
Output RP route-map:None
         SA           NOT
      In     Out    In  Out   Age
      1394   72     0   0     15
      1384   1385   0   0     13
IP Address       State      Mesh-group-name
33.33.33.33      ESTABLISH
Keep Alive Time  Hold Time  Age
60               75         18
                 Message Sent  Message Received
Keep Alive       1310          1307
Notifications    0             0
Source-Active    1385          1384
Lack of Resource 0
Last Connection Reset Reason:Reason Unknown
Notification Message Error Code Received:Unspecified
Notification Message Error SubCode Received:Not Applicable
Notification Message Error Code Transmitted:Unspecified
Notification Message Error SubCode Transmitted:Not Applicable
Local IP Address: 44.44.44.44
TCP Connection state: ESTABLISHED
Local host: 44.44.44.44, Local Port: 639
Remote host: 33.33.33.33, Remote Port: 8676
ISentSeq: 2317396728  SendNext: 2317573625  TotUnAck: 0
SendWnd:  64872       TotSent:  176897      ReTrans:  6
IRcvSeq:  2243890535  RcvNext:  2244067448  RcvWnd:   65000
TotalRcv: 176913      RcvQue:   0           SendQue:  0
Input SA Filter:Not Applicable
Input (S,G) route-map:None
Input RP route-map:None
Output SA Filter:Not Applicable
Output (S,G) route-map:None
Output RP route-map:None
Procedure
Step 1: Verify IGMP on VDX1-Access1.
VDX1-Access1# show ip igmp groups
Total Number of Groups: 10
IGMP Connected Group Membership
Group Address  Interface  Uptime    Expires   Last Reporter
225.0.0.10     Vlan 120   20:37:56  00:03:49  40.1.1.2  <- Each group membership should show the correct VLAN ID, group address, and host address
  Member Ports: Te 103/0/33
225.0.0.11     Vlan 120   20:37:56  00:03:42  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.12     Vlan 120   20:37:56  00:03:48  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.13     Vlan 120   20:37:56  00:03:43  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.14     Vlan 120   20:37:56  00:03:42  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.15     Vlan 120   20:37:56  00:03:41  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.16     Vlan 120   20:37:56  00:03:49  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.17     Vlan 120   20:37:56  00:03:46  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.18     Vlan 120   20:37:56  00:03:47  40.1.1.2
  Member Ports: Te 103/0/33
225.0.0.19     Vlan 120   20:37:56  00:03:47  40.1.1.2
  Member Ports: Te 103/0/33
VDX1-Access1# show ip igmp interface
Interface Vlan 1
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 100
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
(Subsequent show ip igmp groups snapshots show the same ten groups with uptimes 20:40:48, 20:42:01, and 20:43:43, all with last reporter 40.1.1.2.)
Group Address  Interface  Uptime    Expires   Last Reporter
225.0.0.12     Vlan 120   20:46:27  00:03:38  40.1.1.2
  Member Ports: Po 50
225.0.0.13     Vlan 120   20:46:27  00:03:37  40.1.1.2
  Member Ports: Po 50
225.0.0.14     Vlan 120   20:46:27  00:03:31  40.1.1.2
  Member Ports: Po 50
225.0.0.15     Vlan 120   20:46:27  00:03:29  40.1.1.2
  Member Ports: Po 50
225.0.0.16     Vlan 120   20:46:27  00:03:36  40.1.1.2
  Member Ports: Po 50
225.0.0.17     Vlan 120   20:46:27  00:03:33  40.1.1.2
  Member Ports: Po 50
225.0.0.18     Vlan 120   20:46:27  00:03:30  40.1.1.2
  Member Ports: Po 50
225.0.0.19     Vlan 120   20:46:27  00:03:38  40.1.1.2
  Member Ports: Po 50
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Ve 103
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 3.1.1.1(this system)
IGMP version 2
Interface Ve 120
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 40.1.1.1(this system) <- IGMP querier is Agr1
IGMP version 2
(A later snapshot shows the same ten groups with uptime 20:49:07 and last reporter 40.1.1.2.)
Interface Vlan 102
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 103
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 104
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 120
IGMP Snooping enabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
RbridgeId: 201
Interface Ve 100
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 20.1.1.2(this system) <- IGMP querier is Agr2
IGMP version 2
Interface Ve 102
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 2.2.1.1(this system)
IGMP version 2
Interface Ve 104
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 3.2.1.1(this system)
IGMP version 2
Interface Ve 120
IGMP enabled
IGMP query interval 125 seconds
IGMP other-querier interval 255 seconds
IGMP query response time 10 seconds
IGMP last-member query interval 1000 milliseconds
IGMP immediate-leave disabled
IGMP querier 40.1.1.1(other system) <- IGMP querier is Agr1
IGMP version 2
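The querier lines above follow the IGMPv2 election rule: when more than one router sends queries on the same VLAN, the one with the numerically lowest source IP address becomes the querier, which is why Agr1 (40.1.1.1) wins over Agr2 (40.1.1.3) on Ve 120. A minimal sketch of the election (Python; the function name is illustrative, not a device API):

```python
# IGMPv2 querier election sketch: the lowest interface IP on the shared
# VLAN becomes the querier (RFC 2236 rule).
import ipaddress

def elect_querier(candidate_ips):
    """Return the address that wins the querier election."""
    return min(candidate_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))

# Addresses from the Ve 120 output above: Agr1 and Agr2.
print(elect_querier(["40.1.1.1", "40.1.1.3"]))  # -> 40.1.1.1 (Agr1 is querier)
```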
BR - Blocked RPT, BA - Blocked Assert, BF - Blocked Filter, BI - Blocked IIF, BM - Blocked MCT
Total entries in mcache: 20
1
(20.1.1.3, 225.0.0.10) in v102 (tag e3/3), Uptime 00:08:20, Rate 1000 (SM)
upstream neighbor 2.2.1.1
Flags (0xb002c4c1) SM SPT HW FAST MSDPADV
fast ports: ethe 3/2
AgeSltMsk: 00000004, FID: 0x800e, MVID: NotReq, RegPkt: 0, AvgRate: 1000,
profile: none, KAT Timer value: 240
Forwarding_oif: 1, Immediate_oif: 1, Blocked_oif: 1
Layer 3 (HW) 1:
e3/2(VL1), 00:08:06/209, Flags: IM <- Forward to MSDP peer MLX3 on 3/2
Blocked OIF 1:
e3/3(VL102), 00:05:19/178, Flags: IH BR BI <- Blocked toward the interface connected to the downstream peer for the receiver attached to DataCenter1, since it picks the SPT path.
3
(20.1.1.3, 225.0.0.11) in v102 (tag e3/3), Uptime 00:08:21, Rate 1000 (SM)
upstream neighbor 2.2.1.1
Flags (0xb002c4c1) SM SPT HW FAST MSDPADV
fast ports: ethe 3/2
AgeSltMsk: 00000004, FID: 0x800e, MVID: NotReq, RegPkt: 0, AvgRate: 1000,
profile: none, KAT Timer value: 240
Forwarding_oif: 1, Immediate_oif: 1, Blocked_oif: 1
Layer 3 (HW) 1:
e3/2(VL1), 00:08:06/208, Flags: IM
Blocked OIF 1:
e3/3(VL102), 00:05:19/177, Flags: IH BR BI
Core1-MLX1# show ip msdp sa-cache self-originated
Because MLX1 originates the SA cache entries sent to its peers, the entries are shown when the self-originated option is used.
Index RP address   (Source,Group)         Orig Peer  Age/Uptime
1     11.11.11.11  (20.1.1.3,225.0.0.10)  Self-Orig  NA /NA
2     11.11.11.11  (20.1.1.3,225.0.0.11)  Self-Orig  NA /NA
3     11.11.11.11  (20.1.1.3,225.0.0.12)  Self-Orig  NA /NA
4     11.11.11.11  (20.1.1.3,225.0.0.13)  Self-Orig  NA /NA
5     11.11.11.11  (20.1.1.3,225.0.0.14)  Self-Orig  NA /NA
6     11.11.11.11  (20.1.1.3,225.0.0.15)  Self-Orig  NA /NA
7     11.11.11.11  (20.1.1.3,225.0.0.16)  Self-Orig  NA /NA
8     11.11.11.11  (20.1.1.3,225.0.0.17)  Self-Orig  NA /NA
9     11.11.11.11  (20.1.1.3,225.0.0.18)  Self-Orig  NA /NA
10    11.11.11.11  (20.1.1.3,225.0.0.19)  Self-Orig  NA /NA
Total of 10 Self Orig SA entries
Index (Source,Group)         Orig Peer    Age/Uptime
1     ...                    22.22.22.22  44 /00:10:06
2     ...                    22.22.22.22  44 /00:10:06
3     (20.1.1.3,225.0.0.12)  22.22.22.22  44 /00:10:06
4     (20.1.1.3,225.0.0.13)  22.22.22.22  44 /00:10:06
5     (20.1.1.3,225.0.0.14)  22.22.22.22  44 /00:10:06
6     (20.1.1.3,225.0.0.15)  22.22.22.22  44 /00:10:05
7     (20.1.1.3,225.0.0.16)  22.22.22.22  44 /00:10:05
8     (20.1.1.3,225.0.0.17)  22.22.22.22  44 /00:10:05
9     (20.1.1.3,225.0.0.18)  22.22.22.22  44 /00:10:04
10    (20.1.1.3,225.0.0.19)  22.22.22.22  44 /00:10:04
Total number of matching entries: 10
(A later snapshot shows all ten entries from Orig Peer 22.22.22.22 with Age 9.)
Deploying IGMP
Overview of IGMP........................................................................................................... 67
IGMP Features, Options, and Terminology.....................................................................67
Configuring and Verifying IGMP Snooping..................................................................... 68
Configuring Layer 2 Multicast in Access and Aggregation Layers.................................. 69
Procedure Summary....................................................................................................... 70
Detailed Procedure with Examples................................................................................. 70
Overview of IGMP
Internet Group Management Protocol (IGMP) snooping is a mechanism that allows a Layer 2 switch
more efficiently forward multicastes to VLAN port members. Snooping relates to learning forwarding
states for multicast data traffic on VLAN port members using the IGMP control (Join/Leave) packets
received. This feature also provides an option to statically configure forwarding states through CLI
commands.
In a multicast domain, IGMP snooping is required at Layer 2, while IGMP and PIM are required at Layer
3. IGMP snooping always works at Layer 2, with working at Layer 3. All IGMP snooping timers are
imported from the IGMP protocol.
Forwarding multicast control and data packets through a Layer 2 switch configured with VLANs is most easily
achieved by Layer 2 forwarding of all multicast packets received on all member ports of the VLAN
interfaces. However, this simple approach is not bandwidth efficient, because only a subset of member
ports may be connected to devices interested in receiving a given multicast stream.
In the worst case, data is forwarded to all port members of a VLAN with a large number of
member ports (for example, all 24 ports), even if only a single VLAN member is interested in receiving the
data. Such scenarios can cause loss of throughput on a switch that is hit by a high rate of multicast
data traffic destined to many groups on different VLANs.
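The bandwidth cost described above can be made concrete with a short sketch (Python; the port names and the 24-port VLAN are hypothetical, taken from the worst-case example):

```python
# Without snooping, a multicast frame is flooded to every VLAN member
# except the ingress port; with snooping, only joined ports receive it.

def flood_ports(vlan_ports, ingress):
    """Layer 2 flooding: every member port except the one the frame arrived on."""
    return [p for p in vlan_ports if p != ingress]

def snooped_ports(joined_ports, ingress):
    """IGMP snooping: only ports that sent a Join for the group."""
    return [p for p in joined_ports if p != ingress]

vlan_ports = [f"te1/0/{n}" for n in range(1, 25)]  # a 24-port VLAN
joined = ["te1/0/7"]                               # a single interested receiver

print(len(flood_ports(vlan_ports, "te1/0/1")))     # 23 copies flooded
print(len(snooped_ports(joined, "te1/0/1")))       # 1 copy with snooping
```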
IGMP Join: This message is sent on an interface when the IGMP host decides to join the group. It
immediately transmits a Membership Report for that group, which occurs only in the Non-Member
state.
IGMP Leave: This message is sent on an interface when the IGMP host decides to leave the group.
It immediately transmits a Leave message for that group. In a Leave Group message, the group
address field contains the IP multicast group address.
IGMP general query: Used to learn which groups have members on an attached network. The
group address field is set to zero when sending a General Query, which applies to all memberships
on the interface from which the Query is received.
IGMP group specific query: Used to learn if a particular group has any members on an attached
network. A Group-Specific Query applies to membership in a single group on the interface from
which the Query is received. The group address in the IGMP header has a valid multicast group
address.
IGMP snooping table: When a host sends an IGMP join message and the VLAN is configured
for IGMP snooping, the switch learns the join and makes an entry in the IGMP snooping
table. This table contains VLAN information, multicast group addresses, and the member ports on the
switch connected to each host interested in receiving multicast traffic.
IGMP snooping entries: When a host sends an IGMP join message and the VLAN is configured
for IGMP snooping, the switch makes an entry in the IGMP snooping table.
IGMP Snooping Entry Timers:
IGMP query interval: The query interval is the interval between general queries sent by the
querier. Valid values range from 1 through 18000 seconds; the default is 125 seconds.
IGMP last-member-query-interval: The time in milliseconds that the IGMP router waits to
receive a response to a group-specific query message, including messages sent in
response to a host-leave message. Valid values range from 100 through 25500
milliseconds; the default is 1000 milliseconds.
IGMP max query response time: Sets the maximum response time for IGMP queries for a
specific interface. When a host receives the query packet, it starts counting until it reaches
the maximum response time. When this timer expires, the switch (host) replies with a
report, provided that no other host from the same group has responded yet. Valid values
range from 1 through 25 seconds; the default is 10 seconds.
IGMP snooping fast leave: Enables the IGMP snooping fast-leave option, which allows removal
of an interface from the forwarding table without sending out group-specific queries to the interface.
IGMP snooping mrouter: A VLAN port member that is a multicast router interface, which is an
interface that faces toward a multicast router or other IGMP querier.
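The snooping table and timers described above can be sketched as follows (Python; the data structures are illustrative, and the membership-interval formula, robustness * query-interval + max-response-time, is the standard IGMPv2 one and an assumption here rather than a value quoted from this guide):

```python
# Sketch of an IGMP snooping table: (VLAN, group) -> member ports with expiry.
# Timer defaults match the values listed above (125 s query interval,
# 10 s max query response time); a robustness of 2 is an assumed default.

ROBUSTNESS = 2
QUERY_INTERVAL = 125        # seconds
MAX_RESPONSE_TIME = 10      # seconds
MEMBERSHIP_INTERVAL = ROBUSTNESS * QUERY_INTERVAL + MAX_RESPONSE_TIME  # 260 s

snooping_table = {}         # (vlan, group) -> {port: expiry time in seconds}

def on_join(vlan, group, port, now=0):
    """Membership Report received: (re)start the entry's expiry timer."""
    snooping_table.setdefault((vlan, group), {})[port] = now + MEMBERSHIP_INTERVAL

def on_leave(vlan, group, port, fast_leave=False):
    """Leave received: with fast-leave the port is removed at once; otherwise
    a group-specific query would be sent first (not modelled here)."""
    members = snooping_table.get((vlan, group), {})
    if fast_leave:
        members.pop(port, None)
    if not members:
        snooping_table.pop((vlan, group), None)

on_join(120, "225.0.0.10", "Te 103/0/33")
print(snooping_table)   # entry for (120, '225.0.0.10') expiring at t=260
on_leave(120, "225.0.0.10", "Te 103/0/33", fast_leave=True)
print(snooping_table)   # {}
```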
Procedure Summary
When passive IGMP mode is enabled, the router listens for IGMP Group Membership reports on the
VLAN or VPLS instance specified but does not send IGMP queries. The passive mode is called IGMP
snooping. Use this mode when another device in the VLAN or VPLS instance is actively sending
queries.
Step 1: Configure VLAN to source, tag port channel, and enable IGMP snooping.
Configure a Layer 2 VLAN 50 in Access 1 with ports connected to the source on VDX3 and VDX4
tagged to it. Also, tag the port-channel connected to Aggregation1 in that VLAN. Enable IGMP
snooping on VLAN 50 in Access1.
Step 2: Configure VLAN, tag the port channel and ports to Core1, and enable IGMP
snooping.
Configure Layer 2 VLAN 50 on Aggregation 1 and tag the port-channel to Access 1 and the ports
connected to Core1 in that VLAN. Enable IGMP snooping on VLAN 50 in Aggregation1.
Step 3: Establish OSPF neighbors in Area 0 and enable LDP.
Establish OSPF neighborship between MLX1, MLX2, and MLX3 in Area 0. Enable LDP on the
interfaces. To bring up the LDP link session, the ldr-id can be configured with a unique address on
each node if the devices have a Multicast Source Discovery Protocol (MSDP) configuration with a common
loopback.
Step 4: Configure VPLS and add VPLS end points.
Configure VPLS between MLX1 and MLX3 with MLX2 at MPLS transit. VPLS is configured with IP
multicast active on MLX1 and passive on MLX3. The ports connected from MLX1 to Aggregation1 are
added as VPLS endpoints in VLAN 50.
Step 5: Configure VLANs with IGMP snooping, add ports to receivers, and add ports from
Access 1, Aggregation1, and Access 2.
Configure Layer 2 VLANs 60 and 70 on Access 1 and Access 2 respectively with IGMP snooping
enabled. Add the ports on VDX 3, 4 and VDX 9, 10 connected to receivers in those VLANs
respectively. Add the ports connected to Aggregation 1 from Access 1, Core1 from Aggregation1 and
Core 2 from Access 2 in those VLANs.
Step 6: Add ports from MLX1 in VPLS VLAN 60 and from Core2 in VPLS VLAN 70.
Add the ports connected to Aggregation1 from MLX1 in VPLS VLAN 60 and the ports connected to
Access 2 from Core2 in VPLS VLAN 70.
Step 7: Start the receivers and verify IGMP group membership.
Start the receivers to send IGMP joins and verify the IGMP group membership on all nodes. Run the
source traffic and verify the traffic forwarding.
Procedure
Step 1: Configure VLAN to source, tag port channel, and enable IGMP
snooping.
Configure a Layer 2 VLAN 50 in Access 1 with ports connected to Source on
VDX3 and VDX4 tagged to it. Also, tag the port-channel connected to
Aggregation1 in that VLAN. Enable IGMP snooping on VLAN 50 in Access1.
VLAN with IGMP snooping should be configured globally from principal switch.
Step 2: Configure VDX3-Access3 VLAN.
VDX3-Access3# interface Vlan 50<- Enabling VLAN 50
ip igmp snooping enable <-Enabling IGMP snooping
Step 4: Configure VLAN, tag the port channel and ports to Core1, and
enable IGMP snooping.
interface Port-channel 50
vlag ignore-split
speed 40000
switchport
switchport mode trunk
switchport trunk allowed vlan add 50, 100,120
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
interface TenGigabitEthernet 200/0/1
fabric isl enable
fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan add 50,101
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown
!
Configuring VPLS
To configure VPLS, complete the following steps.
Procedure
Step 1: Configure VPLS and add VPLS end points.
VPLS is configured between MLX1 and MLX3 with MLX2 at MPLS transit. VPLS
will be configured with IP multicast active on MLX1 and passive on MLX3. The
ports connected from MLX1 to Aggregation1 are added as VPLS end points in
VLAN 50.
Step 2: Configure MLX1 VPLS.
vpls vp1 100 <- VPLS with name and ID
vpls-peer 44.44.44.44 <- VPLS-peer address
multicast active <- IGMP snooping
vlan 50 <- VPLS VLAN
tagged ethernet 3/7 <- VPLS end point
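Only the MLX1 (active) side is shown above. The matching passive configuration on MLX3 can be sketched from the VPLS verification output later in this chapter; the instance name shown there is "vp", and the rest of this sketch is an assumption derived from that output.

```
vpls vp 100             <- Same VPLS ID 100 on MLX3
 vpls-peer 22.22.22.22  <- Peer address of MLX1
 multicast passive      <- IGMP snooping in passive mode on MLX3
 vlan 70                <- VPLS VLAN toward Access 2
  tagged ethernet 1/1   <- VPLS end point
```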
Step 1: Configure VLANs with IGMP snooping, add ports to receivers, and
add ports from Access 1, Aggregation1 and Access 2.
Configure Layer 2 VLANs 60, 70 on Access 1 and Access 2 respectively with
IGMP snooping enabled on them. Add the ports on VDX 3, 4 and VDX 9, 10
connected to receivers in those VLANs respectively. Add the ports connected to
Aggregation 1 from Access 1, Core1 from Aggregation1 and Core 2 from Access
2 in the respective VLANs. VLAN and interface configurations are done from
Principal switch.
Step 2: Configure VDX3-Access3 VLAN.
VDX3-Access3# interface Vlan 60<- Enabling VLAN 60
ip igmp snooping enable <-Enabling IGMP snooping
Procedure
Step 1: Verify VDX1-Access1 IGMP.
VDX1-Access1# show ip igmp groups
Total Number of Groups: 10
IGMP Connected Group Membership
Group Address  Interface  Uptime    Expires   Last Reporter
226.0.0.10     Vlan 60    00:39:59  00:04:02  51.1.1.2
  Member Ports: Te 103/0/33  <- Joins are received on interface 103/0/33 in VLAN 60 from host 51.1.1.2
226.0.0.11     Vlan 60    00:39:59  00:04:04  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.12     Vlan 60    00:39:59  00:03:59  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.13     Vlan 60    00:39:59  00:04:04  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.14     Vlan 60    00:39:59  00:03:57  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.15     Vlan 60    00:39:59  00:04:00  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.16     Vlan 60    00:39:59  00:04:02  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.17     Vlan 60    00:39:59  00:04:04  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.18     Vlan 60    00:39:59  00:03:59  51.1.1.2
  Member Ports: Te 103/0/33
226.0.0.19     Vlan 60    00:39:59  00:03:59  51.1.1.2
  Member Ports: Te 103/0/33
VDX1-Access1# show ip igmp interface
Interface Vlan 1
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 50
IGMP Snooping enabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 1
Interface Vlan 60
IGMP Snooping enabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 1
Interface Vlan 100
IGMP Snooping disabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 0
Interface Vlan 120
IGMP Snooping enabled
IGMP Snooping fast-leave disabled
IGMP Snooping querier disabled
Number of router-ports: 1
Group Address  Interface  Uptime    Expires   Last Reporter
226.0.0.14     Vlan 60    00:43:09  00:02:59  51.1.1.2
  Member Ports: Po 50
226.0.0.15     Vlan 60    00:43:09  00:02:54  51.1.1.2
  Member Ports: Po 50
226.0.0.16     Vlan 60    00:43:09  00:03:00  51.1.1.2
  Member Ports: Po 50
226.0.0.17     Vlan 60    00:43:09  00:02:55  51.1.1.2
  Member Ports: Po 50
226.0.0.18     Vlan 60    00:43:09  00:02:57  51.1.1.2
  Member Ports: Po 50
226.0.0.19     Vlan 60    00:43:09  00:02:59  51.1.1.2
  Member Ports: Po 50
(The corresponding show ip igmp groups output on Access 2 shows the same ten groups joined from host 57.1.1.2, which appears as the last reporter for each entry.)
Address Pri State    Neigh Address Neigh ID    Ev Opt Cnt
5.1.2.1 1   FULL/DR  5.1.2.2       44.44.44.44 6  66  0
6.1.1.1 254 FULL/DR  6.1.1.2       33.33.33.33 5  66  0

Address Pri State    Neigh Address Neigh ID    Ev Opt Cnt
6.1.1.2 255 FULL/BDR 6.1.1.1       22.22.22.22 5  66  0
5.1.3.1 1   FULL/DR  5.1.3.2       44.44.44.44 5  66  0

Address Pri State    Neigh Address Neigh ID    Ev Opt Cnt
5.1.2.2 1   FULL/BDR 5.1.2.1       22.22.22.22 5  66  0
5.1.3.2 1   FULL/BDR 5.1.3.1       33.33.33.33 5  66  0
Interface   Nbr LDP ID     Max Hold  Time Left
e3/4        33.33.33.33:0  15        12
e3/2        44.44.44.44:0  15        11
(targeted)  44.44.44.44:0  45        42
Interface  Nbr LDP ID     Max Hold  Time Left
e4/4       22.22.22.22:0  15        14
e4/5       44.44.44.44:0  15        13

State        Adj Used  My Role  Max Hold  Time Left  Outbound Intf
Operational  Link      Active   36        30         e4/4  <- Tunnel should be UP
Operational  Link      Passive  36        32         e4/4
Interface   Nbr LDP ID     Max Hold  Time Left
e1/2        22.22.22.22:0  15        12
e1/5        33.33.33.33:0  15        12
(targeted)  22.22.22.22:0  45        39

State        Adj Used  My Role  Max Hold  Time Left  Outbound Intf
Operational  Link      Active   36        31         e1/2  <- Tunnel should be UP
Operational  Link      Active   36        34         e1/2
MLX1 VPLS verification
Core1-MLX1#show mpls vpls id 100
VPLS vp1, Id 100, Max mac entries: 2048
Total vlans: 2, Tagged ports: 2 (2 Up), <- VPLS end points should be UP
Untagged ports 0 (0 Up)
IFL-ID: n/a
Vlan 50
Layer 2 Protocol: NONE
Tagged: ethe 3/7
Vlan 60
Layer 2 Protocol: NONE
Tagged: ethe 3/3
VC-Mode: Raw
Total VPLS peers: 1 (1 Operational) <- Peer should be operational
Peer address: 44.44.44.44, State: Operational, Uptime: 59 min
Tnnl in use: tnl1(3)[LDP]
Peer Index:0
Local VC lbl: 983040, Remote VC lbl: 983040
Local VC MTU: 1500, Remote VC MTU: 1500
Local VC-Type: Ethernet(0x05), Remote VC-Type: Ethernet(0x05)
CPU-Protection: OFF
Local Switching: Enabled
Extended Counter: ON
Multicast Snooping: Enabled - Active
MLX3 VPLS verification
Core2-MLX3#show mpls vpls id 100
VPLS vp, Id 100, Max mac entries: 8192
Total vlans: 1, Tagged ports: 1 (1 Up), <- VPLS end points should be UP
Untagged ports 0 (0 Up)
IFL-ID: n/a
Vlan 70
Layer 2 Protocol: NONE
Tagged: ethe 1/1
VC-Mode: Raw
Total VPLS peers: 1 (1 Operational) <- Peer should be operational
Peer address: 22.22.22.22, State: Operational, Uptime: 1 hr 0 min
Tnnl in use: tnl0(3)[LDP]
Peer Index:0
Local VC lbl: 983040, Remote VC lbl: 983040
Local VC MTU: 1500, Remote VC MTU: 1500
Local VC-Type: Ethernet(0x05), Remote VC-Type: Ethernet(0x05)
CPU-Protection: OFF
Local Switching: Enabled
Extended Counter: ON
Multicast Snooping: Enabled - Passive
R: Router Port
1    NumOIF: 1  profile: none
     Outgoing Interfaces:
       TNNL peer 44.44.44.44 VC Label 983040 R Label * Port e3/2 ( V2) 00:52:34/0s
     FID: 0x801a  MVID: 47
     <- The group 226.0.0.39 is received from Access 2 and the traffic is forwarded via VPLS.
2    NumOIF: 1  profile: none
     Outgoing Interfaces:
       TNNL peer 44.44.44.44 VC Label 983040 R Label * Port e3/2 ( V2) 00:52:34/0s
     FID: 0x8019  MVID: 46
3    NumOIF: 1  profile: none
     Outgoing Interfaces:
       TNNL peer 44.44.44.44 VC Label 983040 R Label * Port e3/2 ( V2) 00:52:34/0s
     FID: 0x8018  MVID: 45
(Entries 4 through 10 are identical except for FID 0x8011-0x8017 and MVID 38-44, each forwarding over the same VPLS tunnel to peer 44.44.44.44.)
11   NumOIF: 1  profile: none
     Outgoing Interfaces:
       e3/3 vlan 60 ( V2) 00:57:04/0s
     FID: 0x8010  MVID: 37
     <- Traffic is forwarded through 3/3 to Access 1.
12   NumOIF: 1  profile: none
     Outgoing Interfaces:
       e3/3 vlan 60 ( V2) 00:57:04/0s
     FID: 0x8004  MVID: 36
13   (*, 226.0.0.17) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/124s
     (56.1.1.2, 226.0.0.17) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x800f  MVID: 35
14   (*, 226.0.0.16) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/5s
     (56.1.1.2, 226.0.0.16) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x8006  MVID: 34
15   (*, 226.0.0.15) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/1s
     (56.1.1.2, 226.0.0.15) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x8007  MVID: 33
16   (*, 226.0.0.14) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/6s
     (56.1.1.2, 226.0.0.14) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x8008  MVID: 32
17   (*, 226.0.0.13) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/1s
     (56.1.1.2, 226.0.0.13) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x8009  MVID: 31
18   (*, 226.0.0.12) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/7s
     (56.1.1.2, 226.0.0.12) in e3/7 vlan 50 00:57:04
       NumOIF: 1  profile: none
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 00:57:04/0s
       FID: 0x800a  MVID: 30
19   NumOIF: 1  profile: none
     Outgoing Interfaces:
       e3/3 vlan 60 ( V2) 00:57:04/0s
     FID: 0x800b  MVID: 29
     (*, 226.0.0.10) 01:00:33
       NumOIF: 1
       Outgoing Interfaces:
         e3/3 vlan 60 ( V2) 01:00:33/128s
20   NumOIF: 1  profile: none
     Outgoing Interfaces:
       e3/3 vlan 60 ( V2) 00:57:05/0s
     FID: 0x800c  MVID: 28
R: Router Port,
(56.1.1.2, 226.0.0.39) in TNNL peer 22.22.22.22 00:53:41
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:41/0s   <- Traffic is forwarded to Access 2 through 1/1 in vlan 70.
  FID: 0x8010 MVID: 10
(56.1.1.2, 226.0.0.38) in TNNL peer 22.22.22.22 00:53:41
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:41/0s
  FID: 0x800f MVID: 9
(56.1.1.2, 226.0.0.37) in TNNL peer 22.22.22.22 00:53:41
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:41/0s
  FID: 0x800e MVID: 8
  NumOIF: 2 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:55:28/66s
       TNNL peer 22.22.22.22 ( R) 00:55:30/76s
(56.1.1.2, 226.0.0.36) in TNNL peer 22.22.22.22 00:53:44
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:44/0s
  FID: 0x800d MVID: 7
(56.1.1.2, 226.0.0.35) in TNNL peer 22.22.22.22 00:53:44
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:44/0s
  FID: 0x800c MVID: 6
(56.1.1.2, 226.0.0.30) in TNNL peer 22.22.22.22 00:53:48
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:48/0s
  FID: 0x8007 MVID: 1
(56.1.1.2, 226.0.0.31) in TNNL peer 22.22.22.22 00:53:48
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:48/0s
  FID: 0x8008 MVID: 2
(56.1.1.2, 226.0.0.32) in TNNL peer 22.22.22.22 00:53:46
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:46/0s
  FID: 0x8009 MVID: 3
(56.1.1.2, 226.0.0.33) in TNNL peer 22.22.22.22 00:53:46
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:46/0s
  FID: 0x800a MVID: 4
(56.1.1.2, 226.0.0.34) in TNNL peer 22.22.22.22 00:53:46
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:46/0s
  FID: 0x800b MVID: 5
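The captured entries above follow a regular per-record shape: a (source, group) header line, a NumOIF/profile line, the outgoing interface list, and a closing FID/MVID line. As a minimal, hypothetical sketch (not part of the Brocade toolset), a captured listing in this shape could be turned into structured records like so:

```python
import re

# Illustrative only: pull (source, group) headers and the FID/MVID line
# that closes each entry out of mcache-style snooping output.
ENTRY_RE = re.compile(r"\((?P<src>[\d.]+|\*),\s*(?P<grp>[\d.]+)\)")
FID_RE = re.compile(r"FID:\s*(?P<fid>0x[0-9a-f]+)\s+MVID:\s*(?P<mvid>\d+)")

def parse_mcache(text):
    """Return one dict per (source, group) entry found in the capture."""
    entries = []
    for line in text.splitlines():
        m = ENTRY_RE.search(line)
        if m:
            entries.append({"source": m.group("src"), "group": m.group("grp")})
        f = FID_RE.search(line)
        if f and entries:
            # A FID/MVID line closes the most recently opened entry.
            entries[-1].update(fid=f.group("fid"), mvid=int(f.group("mvid")))
    return entries

sample = """\
(56.1.1.2, 226.0.0.39) in TNNL peer 22.22.22.22 00:53:41
  NumOIF: 1 profile: none
  Outgoing Interfaces:
       e1/1 vlan 70 ( V2) 00:53:41/0s
  FID: 0x8010 MVID: 10
"""
print(parse_mcache(sample))
```

This is only a convenience for post-processing saved session logs; the regular structure of the output is what makes such scraping reliable.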