Chuck Semeria
Marketing Engineer
Executive Summary
Providing protections against link or node failures is an important requirement for the
successful delivery of subscriber services. As the amount of mission-critical data carried by
native IP or converged MPLS infrastructures increases, the effect of disruptions caused by
network outages becomes more significant and more costly. This paper describes mechanisms
supported by JUNOS software that ensure the fundamental availability of the network to
support subscriber services when a link or node fails.
Introduction
Figure 1 shows the Juniper Networks architecture for providing solutions that support highly
dependable services across native IP or converged MPLS infrastructures. High dependability
results from the combination of dependable platforms, dependable networks, dependable
service-enabling features, and enhanced security capabilities.
Figure 1: The High-Dependability Architecture (dependable networks layered on dependable platforms, which are built from reliable hardware, software, and processes)
Like a classic protocol stack diagram, Figure 1 shows how these various elements are related
and how high dependability is the result of a holistic approach to delivering network services:
■ Dependable platforms are based on reliable hardware, reliable software, and reliable design
and manufacturing processes.
■ Dependable networks result from implementing a set of features that allow providers to
hide network faults and service interruptions from their subscribers.
■ Dependable service-enabling features include raw packet forwarding performance,
industrial-strength implementations of MPLS traffic engineering, Differentiated Services
(DiffServ) support, and ATM and Frame Relay convergence features.
■ Security is fundamental because it impacts a carrier’s ability to provide the reliable
platforms, the reliable networks, and the reliable service-enabling features needed to meet
subscriber application requirements.
This paper focuses on the set of mechanisms supported by JUNOS software that are used to
deliver dependable networks. These techniques include:
■ Synchronous Optical Network (SONET) Automatic Protection Switching (APS) or
Synchronous Digital Hierarchy (SDH) Multiplex Section Protection (MSP)
■ Virtual Router Redundancy Protocol (VRRP)
■ Link aggregation and redundancy techniques, including Multilink PPP (MLPPP), IEEE
802.3ad Ethernet link aggregation, and SONET/SDH aggregation
■ MPLS LSP protection features, including standby secondary paths, fast reroute, and link
protection
A Recovery Framework
There are two basic models for physical and logical path recovery: rerouting and protection
switching. Rerouting uses the dynamic establishment of new paths to restore the flow of traffic
after a link or router failure. Protection switching uses pre-established paths to restore the flow
of traffic after a link or router failure. Protection switching can also involve the pre-reservation
of network resources along the protection path.
The recovery framework consists of three recovery models:
■ Recovery cycle
■ Reversion cycle
■ Dynamic rerouting cycle
Recovery Cycle
The recovery cycle describes the individual steps required to detect a network failure and then
restore traffic to a recovery path by means of protection switching. If the recovery path is not
optimal, the recovery cycle may be followed by the reversion cycle or the dynamic rerouting cycle.
Figure 2: The Recovery Cycle (traffic flows on primary path; primary path failure; failure detected; notification; start of recovery operation; recovery operation complete; traffic flows on recovery path)
Each timing interval in the recovery cycle (Figure 2) is defined below:
■ The Fault Detection Time is the time between the moment the failure occurs and the moment
the failure is detected by the recovery mechanism.
■ The Hold-Off Time is the configured waiting time between the moment the failure is
detected by the fault recovery mechanism and the start of the recovery operation. The
Hold-Off Time may occur after the Notification Time if the node responsible for the
switchover is configured to wait. In some implementations the Hold-Off Time can be zero
seconds.
■ The Notification Time is the time between the transmission of the fault indication signal by
the node detecting the fault and the start of the first recovery action by the node responsible
for the switchover. The Notification Time is zero if the node responsible for detecting the
failure is the same node that is responsible for the switchover.
■ The Recovery Operation Time is the time between the first recovery action and the last
recovery action.
■ The Traffic Restoration Time is the time between the last recovery action and the time when
the traffic flow is restored to the recovery path.
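Taken together, these intervals account for the total time subscribers are affected by a failure:

    Total Recovery Time = Fault Detection Time + Hold-Off Time + Notification Time
                          + Recovery Operation Time + Traffic Restoration Time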
This is an example of the recovery cycle:
1. Traffic flows on the primary path.
2. A link or router on the primary path fails.
3. The fault recovery mechanism detects the failure and transmits a failure indication signal to
the node that is responsible for initiating the protection switching.
4. The failure indication signal is received by the node that is responsible for initiating the
recovery.
5. The node responsible for initiating the recovery initiates a protection switch to a
preconfigured recovery path.
6. The node responsible for the recovery switches traffic from the failed working path to the
recovery path.
7. Traffic flows on the recovery path.
Reversion Cycle
When operating in reversion mode, protection switching requires that traffic be switched back
to the primary path when the failure on the primary path is corrected.
Figure 3: The Reversion Cycle (traffic flows on recovery path; primary path repaired; failure cleared; primary path available; start of reversion operation; reversion operation complete; traffic flows on primary path)
Dynamic Rerouting Cycle
Figure 4: The Dynamic Rerouting Cycle (route convergence; optional hold-down; switchover; traffic restoration)
Each timing interval in the dynamic rerouting cycle (Figure 4) is defined below:
■ The Route Convergence Time is the time it takes for routing protocols to converge and for the
network to enter a stable state.
■ The Hold-Down Time is the optional configured interval during which traffic remains on the
recovery path before the switchover to the new working path begins.
■ The Switchover Time is the time between the first switchover action and the last switchover
action.
■ The Traffic Restoration Time is the time between the last switchover action and the time when
the traffic flow is restored to the new working path.
This is an example of protection switching followed by dynamic rerouting:
1. Traffic flows on the primary path.
2. A link or router on the primary path fails.
3. The fault recovery mechanism detects the failure and transmits a failure indication signal to
the node that is responsible for the protection switching.
4. The failure indication signal is received by the node that is responsible for the protection
switching.
5. The node responsible for the protection switching initiates a protection switch to a
preconfigured recovery path.
6. The node responsible for the protection switching switches traffic from the failed primary
path to the recovery path.
7. Traffic flows on the recovery path.
8. The network enters a semi-stable state.
9. Dynamic routing protocols converge after the failure.
10. A new optimized primary path is calculated.
11. The new primary path is established.
12. Traffic is switched from the protection path to the new optimized primary path.
13. Traffic flows on the new optimized primary path.
SONET APS and SDH MSP
Figure 5: APS Working and Protect Channels (working channels 1 through N and a protect channel between two ADMs)
The working channel and the protect channel constitute a bidirectional protection pair. Each
end of the link maintains an APS state machine to process the received protection group line
events and management requests. Signaling is sent between the APS state machines using the
K1/K2 overhead bytes (also called the “APS channel”) in the SONET frame. The setting of the
bits in the received K1/K2 bytes determines the current state of the remote end and whether a
switch operation is required. If a network failure is detected, the protect switchover is
coordinated by changing various bits within the K1/K2 bytes.
APS switching can operate in revertive or non-revertive mode. When operating in revertive
mode and the original working channel is repaired, traffic is switched back to the original
working channel from the protect channel. When operating in nonrevertive mode and the
original working channel is repaired, traffic remains on the protect channel and is not switched
back to the original working channel.
Figure 6: APS Between Routers and ADMs (Router A and Router B each connect to an ADM)
The working and protect configurations on the router interfaces must match the circuit
configurations on the ADM. That is, the working router must be connected to the ADM’s
working channel, and the protect router must be connected to the ADM’s protect channel.
Figure 7: An APS Group on a Single Router (the ADM's working channel connects to so-1/2/1 and its protect channel to so-4/0/2)
Using the SONET group-name San Jose for both the working circuit and the protect circuit
parameters defines the APS association between the two PIC interfaces.
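Using the JUNOS aps statements, that association might be configured along the following lines (a sketch; the group name is written here as San-Jose because a name containing a space would have to be quoted):

[edit interfaces so-1/2/1 sonet-options]
aps {
    working-circuit San-Jose; # working channel to the ADM
}
[edit interfaces so-4/0/2 sonet-options]
aps {
    protect-circuit San-Jose; # protect channel to the ADM
}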
Figure 8: An APS Group Across Two Routers (the ADM's working channel connects to so-1/2/1 on Router A at 200.5.2.3, and its protect channel to so-4/0/2 on Router B at 200.5.2.4)
This configuration is sufficient to ensure that if Router A fails, traffic will be switched over to
the protect channel on Router B.
Figure 9: APS Load Sharing Between Circuit Pairs (Router A at 200.5.2.3: so-7/0/0 working for SF, so-0/0/0 protect for NY; Router B at 200.5.2.4: so-6/0/0 working for NY, so-1/0/0 protect for SF)
To support load sharing between circuit pairs, Router A is configured with the following
parameters:
[edit interfaces so-7/0/0 sonet-options]
aps {
    working-circuit SF; # groups interface with so-1/0/0 on Rtr B
    authentication-key "$123"; # key for SF traffic
    neighbor 200.5.2.4; # Rtr B’s address on link between A & B
    paired-group SF-NY; # load sharing between two pairs
}
To support load sharing between circuit pairs, Router B is configured with the following
parameters:
[edit interfaces so-6/0/0 sonet-options]
aps {
    working-circuit NY; # groups interface with so-0/0/0 on Rtr A
    authentication-key "$987"; # key for NY traffic
    neighbor 200.5.2.3; # Rtr A’s address on link between B & A
    paired-group SF-NY; # load sharing between two pairs
}
Figure 10 shows that when the working channel carrying SF traffic on Router A fails, the two
routers automatically switch SF traffic from its original working interface (so-7/0/0 on Router
A) to its protect channel (so-1/0/0 on Router B). However, at this point Router B is required to
carry both SF traffic and NY traffic. To ensure that each router is required to carry only a single
circuit’s worth of traffic, the two routers also switch NY traffic from its original working
interface (so-6/0/0 on Router B) to its protect interface (so-0/0/0 on Router A). After the
switchover occurs, the working interface on Router A (so-0/0/0) carries NY traffic and the
working interface on Router B (so-1/0/0) carries SF traffic.
Figure 10: APS Load Sharing After Switchover (Router A: so-7/0/0 protect for SF, so-0/0/0 working for NY; Router B: so-6/0/0 protect for NY, so-1/0/0 working for SF)
Virtual Router Redundancy Protocol
Figure 11: Statically Configured Default Routes Create a Single Point of Failure (a server farm reaches the Internet through a switch and a single, statically configured default router)
The Virtual Router Redundancy Protocol (VRRP), defined in RFC 2338, is specifically designed
to protect against such failures by allowing two or more different routers to work together and
support a single virtual router for hosts and servers on a LAN. The routers in a VRRP group
share the IP and MAC addresses of the virtual router defined for the group. The IP address of
this virtual router is configured as the default gateway address on host systems. At any
moment in time, one of the VRRP routers is the master router (actively forwards packets) and
the other VRRP routers in the group serve as backups. VRRP supports the immediate and
automatic transfer of the routing responsibility (by means of the virtual router) from one
physical router to another physical router if the original default router fails. Furthermore,
VRRP is designed to operate in load-sharing environments and allow each load-sharing router
to act as a redundant backup for the others.
Figure 12: Two VRRP Routers Supporting a Single Virtual Router
During the initialization process, each VRRP router determines if it is the master router or a
backup router for virtual router #1. For this example, R1 becomes the master router because it
has a higher priority than R2. R2 becomes a backup router because it has a lower priority than
R1.
After determining that it is the master router for virtual router #1, R1 does the following:
■ Transmit an advertisement to the other VRRP routers on the LAN declaring itself the
master router for virtual router #1.
■ Broadcast a gratuitous Address Resolution Protocol (ARP) request containing the virtual
router IEEE 802 MAC address (00-00-5E-00-01-<vrrp-group>) associated with virtual router
#1 so that switches can learn the location of the virtual router #1’s MAC address.
■ Initialize the VRRP advertisement-timer to the value set in the advertisement-interval.
■ Transition to the master state.
After transitioning to master state for virtual router #1, R1 functions as the forwarding router
for the IP address associated with virtual router #1. While in this state, R1 does the following:
■ Respond to host ARP requests for the IP address associated with virtual router #1.
■ Forward packets it receives with destination MAC addresses equal to the virtual router #1
MAC address.
■ Accept packets addressed to the IP address(es) associated with virtual router #1.
■ Broadcast advertisements at regular intervals (specified by advertisement-interval) to inform
backup routers that it is still alive and acting as the master router for virtual router #1.
As a backup router for virtual router #1, R2 monitors the availability and state of the master
router. While in this state, R2 is responsible for the following:
■ Starting the master-down-timer and setting the master-down interval. The master-down interval
is typically 3X the advertisement-interval specified for the virtual router.
■ Receiving advertisements from the master router and verifying their validity.
■ Assuming the role of the master router if it receives an advertisement from another router
with a lower priority than its own.
■ Assuming the role of master router if it does not receive an advertisement from the master
router within the master-down interval.
If R1 fails, R2 detects the failure because its master-down-timer expires. R2 assumes the role of
master router for virtual router #1 (vrrp-group = 1), transmits an advertisement to the other
VRRP routers in the group declaring that it is the master router for virtual router #1,
broadcasts a gratuitous ARP using the same virtual router MAC address
(00-00-5E-00-01-<vrrp-group>) so that switches can learn the new location of the virtual router
MAC address, and initializes its advertisement timer. As a result of these actions, LAN switches
learn the new location of the virtual router and the transition is completely transparent to end
systems.
In this example, if R1 fails, it is backed up by R2. However, if R2 fails, it is not backed up by R1.
Consequently, this example does not provide efficient asset or bandwidth utilization because
R2 remains idle and forwards traffic only when R1 fails. To provide better asset utilization,
support load balancing for outgoing traffic, and allow R1 to back up R2, a second virtual router
must be defined. A configuration supporting these capabilities is illustrated in the next
example.
Figure 13: Two VRRP Routers in a Load-Sharing Configuration with Two Virtual Routers
During the initialization process, each VRRP router determines its role in each of the virtual
router groups. Based on the configured priority values, R1 becomes the master router for
virtual router #1 and a backup router for virtual router #2. R2 becomes the master router for
virtual router #2 and a backup router for virtual router #1. This configuration has the effect of
load-balancing outgoing traffic across two virtual routers while also providing full
redundancy. R1 can assume the role as master router for virtual router #2 if R2 fails, and R2 can
become the master router for virtual router #1 if R1 fails.
In addition to the previously defined parameters, the following parameters can also be
configured:
■ The accept-data|no-accept-data option specifies whether the interface accepts
packets addressed to the virtual IP address. The default value is accept-data.
■ The advertise-interval parameter specifies the interval between VRRP advertisement
packets by the master router to other members of the VRRP group. The default value is 1
second.
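As an illustration, R1 could be made the preferred master for virtual router #1 with a configuration along the following lines (the interface name and all addresses here are hypothetical):

[edit interfaces ge-0/0/0 unit 0 family inet]
address 192.168.1.1/24 {
    vrrp-group 1 {
        virtual-address 192.168.1.100; # IP address of virtual router #1
        priority 200; # higher than R2's priority, so R1 is elected master
        advertise-interval 1; # send an advertisement every second
        accept-data; # accept packets addressed to the virtual IP address
    }
}

A mirror-image configuration on R2 (a priority of 200 for virtual router #2 and a lower priority for virtual router #1) yields the load-sharing arrangement described above.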
Multilink PPP
Figure 14: Bundling Multiple PPP Links into a Single MLPPP Bundle (the bandwidth available across a single PPP link is limited to that of one physical circuit)
Multilink PPP (MLPPP), defined in RFC 1990, is specifically designed to overcome this
limitation of PPP by providing a method of splitting, recombining, and sequencing datagrams
across multiple physical links or logical channels. By supporting the dynamic addition and
deletion of PPP links over multiple simultaneous communications paths between systems,
MLPPP allows you to bundle individual paths to deliver more bandwidth, provide additional
bandwidth on demand to subscribers, load-distribute traffic across multiple paths, and ensure
that the failure of an individual channel does not disrupt packet forwarding.
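As a sketch, two SONET links could be bundled on a router equipped with a Link Services PIC as follows (all interface names and the address are hypothetical):

[edit interfaces ls-0/2/0]
unit 0 {
    family inet {
        address 10.0.0.1/30; # address of the MLPPP bundle itself
    }
}
[edit interfaces so-1/0/0]
unit 0 {
    family mlppp {
        bundle ls-0/2/0.0; # make this link a member of the bundle
    }
}
[edit interfaces so-1/1/0]
unit 0 {
    family mlppp {
        bundle ls-0/2/0.0; # second member link
    }
}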
IEEE 802.3ad Link Aggregation
The primary benefits of IEEE 802.3ad include link redundancy and increased bandwidth.
■ Link redundancy results from the multiple links that constitute the 802.3ad aggregated link.
■ Increased bandwidth results from the ability to establish multiple Ethernet links between
two network elements connected by the aggregated link. This is a cost-effective solution
and relatively simple migration strategy for applications that require incremental
bandwidth scaling rather than exponential scaling.
IEEE 802.3ad includes the following attributes:
■ Packets from the same network flow arrive in the same sequence as they were originally
transmitted because packets from the same flow traverse the same physical link. The
selection of the link used to transmit a particular packet is determined by the packet
distribution algorithm, which is not defined in the IEEE 802.3ad specification.
■ For ARP resolution to work properly, regardless of the link on which traffic will eventually
be forwarded, all ports that are members of an aggregated link must use the same MAC
address.
■ The entity that manages the aggregate link also monitors the ports that make up the
aggregated link for failures. Packets are never forwarded across a physical interface that is
down.
All packets that are sent or received across a physical interface that is part of an 802.3ad group
use the local Virtual MAC for the group (Figure 16). The Virtual MAC for an 802.3ad group
keeps track of the different physical MAC addresses of the links that are members of the group.
The source Virtual MAC distributes packets received from the Virtual MAC client to the physical
interfaces using a load-balancing algorithm and transmits the packet using the local Virtual
MAC address as the source address and the remote Virtual MAC address as the destination
address. When receiving packets, the destination Virtual MAC forwards packets that are
received from the physical MACs associated with the 802.3ad group to its Virtual MAC client.
Figure 16: An 802.3ad Group Between Router A and Router B (on each router, the Virtual MAC sits between the Virtual MAC client and physical MACs 1, 2, and 3)
802.3ad also defines the Link Aggregation Control Protocol (LACP). This protocol allows you
to configure ports so they can automatically join or leave different 802.3ad groups. If the
protocol running between two nodes determines that link aggregation is possible, the links are
automatically aggregated into an 802.3ad group. If the protocol running between two nodes
determines that link aggregation is not possible, the links operate normally as individual links.
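A minimal aggregated Ethernet configuration might look like the following sketch (the interface names and the address are hypothetical):

[edit chassis]
aggregated-devices {
    ethernet {
        device-count 1; # create one aggregated Ethernet device (ae0)
    }
}
[edit interfaces ge-1/0/0]
gigether-options {
    802.3ad ae0; # make this port a member of the ae0 group
}
[edit interfaces ge-1/1/0]
gigether-options {
    802.3ad ae0; # second member port
}
[edit interfaces ae0]
unit 0 {
    family inet {
        address 10.1.1.1/30; # address of the aggregated link itself
    }
}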
SONET/SDH Aggregation
JUNOS software supports the aggregation of SONET/SDH interfaces. An aggregate
SONET/SDH virtual link is defined by specifying the link number as a physical device and
then associating a set of physical interfaces into a SONET group. The capabilities provided by
SONET/SDH aggregation are similar to those supported by our implementation of IEEE
802.3ad but this feature is not defined in a public standard. Consequently, SONET/SDH
aggregation is proprietary to Juniper Networks and may not be compatible with equipment
from other vendors.
The features supported by the JUNOS implementation of SONET/SDH aggregation include
the following:
■ The transmission of both IP and MPLS traffic is supported.
■ Each port in a SONET/SDH virtual link must have the same interface speed. The interface
speed can be OC-3c, STM-1c, OC-12c, STM-4c, OC-48c, STM-16c, OC-192c, or STM-64c.
■ A SONET/SDH virtual link cannot mix SONET and SDH modes.
■ A SONET/SDH virtual link can contain up to eight different physical interfaces.
■ A single router can support a maximum of 16 SONET/SDH groups.
■ A SONET/SDH virtual link can include any interface on any PIC in any FPC within a
router chassis.
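Configuration follows the same pattern as aggregated Ethernet; a sketch (the interface names and the address are hypothetical):

[edit chassis]
aggregated-devices {
    sonet {
        device-count 1; # create one aggregated SONET device (as0)
    }
}
[edit interfaces so-0/1/0]
sonet-options {
    aggregate as0; # add this interface to the as0 group
}
[edit interfaces so-0/2/0]
sonet-options {
    aggregate as0; # second member interface
}
[edit interfaces as0]
unit 0 {
    family inet {
        address 10.2.2.1/30; # address of the virtual link itself
    }
}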
MPLS LSP Protection
Figure 17: Rerouting After an LSP Outage (1: an outage occurs between the ingress and egress LSRs; 2: the ingress LSR calculates a new path; 3: the new path is established; 4: traffic is switched to the new path)
Two mechanisms can be used to make the ingress LSR aware of an LSP outage and the need to
calculate a new path for the LSP:
■ The LSR immediately upstream from the outage signals the failure to the ingress LSR.
■ If the link-state database of the ingress LSR indicates a failed link, the ingress LSR examines
the status of all LSPs traversing the failed link or node.
Unfortunately, this rerouting process can be time-consuming and prone to failures. For
example, the failure notification transmitted by the LSR immediately upstream of the outage
can be lost or the new path for the LSP can be slow to come up due to congestion or a sudden
surge of network activity.
Secondary Paths
Secondary paths support the configuration of primary and secondary physical paths for an
LSP to protect against link and transit node forwarding plane failures (Figure 18). The primary
path is the preferred path while the secondary path is used as an alternative route when the
primary path fails. There are two types of secondary paths: standby and non-standby. A
standby secondary path is pre-computed and pre-signaled while a non-standby secondary
path is pre-computed but is not pre-signaled.
Figure 18: Secondary Path Protection (1: an outage occurs on the primary path; 2: the ingress LSR switches traffic to the pre-established secondary path to the egress LSR)
If a link or node in the primary path fails, the LSR immediately upstream of the outage uses the
MPLS signaling protocol to notify the ingress LSR of the failure. Upon receipt of the outage
notification, the ingress LSR reroutes traffic from the failed primary path to the secondary path.
The use of standby secondary paths enhances recovery time by eliminating the call-setup delay
that is required to establish a new physical path for the LSP. If the outage in the primary path is
corrected, the ingress LSR, after a few minutes of hold-down to ensure that the primary LSP
remains stable, switches traffic from the secondary path back to the primary path.
Secondary paths are appropriate when used in conjunction with off-line path computation
tools or when the secondary path is less constrained than the primary path. Because resources
are reserved even while the primary path is active, standby secondary paths waste more
resources than non-standby secondary paths. However, the restoration time for standby
secondary paths is faster than for non-standby secondary paths.
Fast Reroute
Fast reroute provides local repair: because the LSR immediately upstream of the failure switches
traffic onto a pre-established detour path, traffic can be restored faster than traffic can be
switched by the ingress LSR to a standby secondary LSP. Fast reroute is only a short-term
solution because the detour paths may not provide adequate bandwidth and the activation of a
detour path can result in congestion on bypass links.
Figure 19: Fast Reroute (1: an outage occurs between the upstream and downstream LSRs; 2: traffic on LSP 1 and LSP 2 is switched to each LSP's dedicated detour path)
Fast reroute is appropriate when the number of LSPs to be protected is small relative to the
total number of LSPs, when satisfying the path selection criteria (priority, bandwidth, link
coloring) for detour paths is critical, when control at the granularity of individual LSPs is
important, or when simpler configuration is desired.
Link Protection
Figure 20: Link Protection (1: an outage occurs on the link between the upstream and downstream LSRs; 2: traffic for all LSPs traversing the link, here LSP 1 and LSP 2, is switched to a single bypass link)
Link protection is appropriate when the number of LSPs to be protected is large, when
satisfying path selection criteria (priority, bandwidth, link coloring) for bypass paths is less
critical, when control at the granularity of individual LSPs is a non-goal, or where
configuration complexity is not an issue.
Secondary Paths
After defining a named path on the ingress LSR, you can configure the ingress LSR to use it as
a standby secondary path by including the secondary statement at the [edit protocols mpls
label-switched-path lsp-path-name] hierarchy level.
The following example configures an ingress LSR to use the secondary path
backup-sf-to-ny-path as a standby secondary path for the LSP sf-to-ny-lsp.
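A configuration along these lines accomplishes this (the egress address is hypothetical, and both named paths are assumed to be defined separately with the path statement):

[edit protocols mpls]
label-switched-path sf-to-ny-lsp {
    to 192.168.24.1; # egress LSR
    primary sf-to-ny-path; # preferred path
    secondary backup-sf-to-ny-path {
        standby; # pre-signal the secondary path
    }
}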
In this example, the primary statement creates the primary path, which is sf-to-ny-lsp’s
preferred path. The secondary statement establishes a pre-calculated and pre-signaled backup
path. If the primary path can no longer be used to reach the egress LSR, the alternative path is
used.
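Fast reroute, by contrast, is enabled with a single statement on the ingress LSR; a minimal sketch (the egress address is hypothetical):

[edit protocols mpls]
label-switched-path sf-to-ny-lsp {
    to 192.168.24.1; # egress LSR
    fast-reroute; # request a detour path at each LSR along the LSP
}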
It is not necessary to explicitly configure fast reroute on the transit or egress LSRs of an LSP.
Once fast-reroute is configured on the ingress LSR, the ingress LSR (during the LSP setup
phase) uses RSVP-TE to signal all downstream LSRs that fast-reroute is enabled for the LSP.
This informs each LSR along the physical path of the LSP that it needs to use the constrained
shortest path first (CSPF) algorithm on the information in the local LSR’s traffic engineering
database (TED) to compute and then pre-establish a detour path for the LSP. By default, the
detour path inherits the same administrative group constraints (link coloring or resource
classes) as its parent LSP when the bypass path is calculated by CSPF.
In one-to-one backup, each LSP traversing a given node is backed up by a dedicated detour
path. Fast reroute detour paths are always calculated to bypass both the link facing the
immediate downstream neighbor LSR in the LSP and the immediate downstream neighbor
LSR itself (Figure 21). This approach provides protection against both the failure of an adjacent
downstream link and the failure of an adjacent downstream node. Of course, if the network
topology contains an insufficient number of LSRs with insufficient links to other LSRs, it may
be impossible to establish a detour path.
Figure 21: A Fast Reroute Detour Path (the detour bypasses both the link to the protected node and the protected node itself)
It is not necessary to explicitly configure link protection on the transit or egress LSRs of an LSP.
Once link protection is configured on the ingress LSR, the ingress LSR (during the LSP setup
phase) uses RSVP-TE to signal all downstream LSRs that link protection is enabled for the LSP.
You must also configure link protection on an RSVP interface by including the link-protection
statement at the [edit protocols rsvp interface interface-name] hierarchy level.
[edit protocols rsvp]
interface interface-name {
    link-protection;
}
JUNOS software supports the configuration of both fast reroute and link protection for an LSP.
This provides interoperability with equipment from Cisco Systems. Assume that you configure
both fast reroute and link protection for an LSP. If the downstream node is a Cisco Systems
router, then link protection is supported. If the downstream router is a Juniper Networks
router, then fast reroute is supported
With PFE Local Repair, the routing engine (RE) downloads detour path information to the PFE
for each LSP after the detour paths are calculated. Then, if an LSR detects that a downstream
link or node has failed, the PFE can immediately switch all LSP traffic from the failed path to
precomputed and pre-established detour paths. This feature reduces the time needed to
complete protection path switching by allowing the PFE to respond immediately to a path
failure without having to wait for the RE to download the detour path to the PFE. PFE Local
Repair is enabled by default and requires no additional configuration.
Fate Sharing
Fate sharing allows you to create a database of information that is accessed by CSPF when it
computes backup protection paths. The database describes the relationships between all the
elements in the network—point-to-point links, LAN interfaces, and router IDs. Since the
network elements in the group share the same fate, this relationship is called fate sharing.
For a protection path to work optimally, it should be calculated to minimize the number of
physical links shared with the primary path. This ensures that any single point of failure will
not affect both the primary path and the protection path at the same time. The JUNOS software
fate sharing enhancement allows you to manage the way that CSPF computes protection paths so
that the number of shared links between the primary path and the protection path is
minimized. This feature can be applied when calculating primary paths, secondary paths, and
fast reroute detour paths.
To configure a fate-sharing group for protection path calculation, you need to include the
fate-sharing statement at the [edit routing-options] hierarchy level on each LSR for which you
want fate sharing enabled. The following sample configuration defines a fate-sharing group
called "test."
[edit routing-options]
fate-sharing {
group test {
cost 20; # Optional. The default value is 1
from 1.2.3.4 to 1.2.3.5; # Identifies a point-to-point link
from 192.168.200.2; # Identifies a LAN interface
from 192.168.200.3; # Identifies a LAN interface
from 123.3.4.5; # Identifies a Router ID of a router node
}
}
Summary
Highly dependable IP networks result from the combination of dependable platforms,
dependable networks, dependable service-enabling features, and enhanced security
capabilities. This paper focused on the mechanisms supported by JUNOS software and M- and
T-series routing platforms that ensure the fundamental ability of your network to continue
supporting subscriber services when links or nodes fail. These mechanisms include SONET
APS and SDH MSP, VRRP, a variety of link aggregation and redundancy techniques (MLPPP,
IEEE 802.3ad, and SONET/SDH link aggregation), and a number of MPLS LSP
protection features (standby secondary paths, fast reroute, and link protection). As the amount
of mission-critical data carried by native IP or converged MPLS infrastructures increases, these
features allow you to hide network faults from your subscribers and to continue to satisfy their
demanding service level agreements. By delivering the dependable platforms, dependable
networks, dependable service-enabling features, and enhanced security capabilities you need
to deploy a highly dependable IP infrastructure, Juniper Networks provides the tools you need
to support existing subscriber services, to grow these services, and to deploy new
revenue-generating services.
References
Internet Drafts
Sharma, Vishal, et al.; Framework for MPLS-based Recovery; <draft-ietf-mpls-recovery-frmwrk-07.txt>; September 2002.
Pan, Ping, et al.; Fast Reroute Extensions to RSVP-TE for LSP Tunnels; <draft-ietf-mpls-rsvp-lsp-fastreroute-00.txt>; July 2002.
Copyright © 2002, Juniper Networks, Inc. All rights reserved. Juniper Networks is registered in the U.S. Patent and Trademark Office and in other countries
as a trademark of Juniper Networks, Inc. Broadband Cable Processor, ERX, ESP, G1, G10, G-series, Internet Processor, JUNOS, JUNOScript, M5, M10, M20,
M40, M40e, M160, M-series, NMC-RX, SDX, ServiceGuard, T320, T640, T-series, UMC, and Unison are trademarks of Juniper Networks, Inc. All other
trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners. All specifications are subject to change
without notice.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise
revise this publication without notice.