

Path-Computation-Element-Based
Architecture for Interdomain
MPLS/GMPLS Traffic Engineering:
Overview and Performance
Sukrit Dasgupta and Jaudelice C. de Oliveira, Drexel University
Jean-Philippe Vasseur, Cisco Systems

Abstract
The Path Computation Element Working Group at the Internet Engineering Task
Force is chartered to specify a PCE-based architecture for the path computation of
interdomain MPLS- and GMPLS-based traffic engineered label switched paths. In
this architecture, path computation does not necessarily occur on the head-end LSR, but on
another path computation entity that may not be physically co-located with it.
This method differs greatly from the traditional “per-domain”
approach to path computation. This article presents analysis and results that com-
pare performance of the PCE architecture with the current state-of-the-art approach.
Detailed simulations are undertaken on varied and realistic scenarios where prelim-
inary results show several performance benefits from the deployment of PCE. To
provide a complete overview of significant development taking place in this area,
milestones and progress at the IETF PCE WG are also discussed.

With ever increasing requirements posed by significant advances in networking, service providers may use traffic engineering (TE) techniques to efficiently manage resources and provide consistent quality of service (QoS). Traffic engineering in multiprotocol label switching (MPLS) and generalized MPLS (GMPLS) networks is fundamentally based on constraint-based path computation. Briefly, constraint-based path computation involves the pruning of links that do not satisfy constraints and subsequently using the shortest path algorithm on the resulting subgraph [1]. This process is simple and efficient when the path involves only one domain, but can potentially become severely resource heavy, complex, and inefficient when multiple domains are involved. To address this problem, the Path Computation Element (PCE) Working Group (WG) of the Internet Engineering Task Force (IETF) has been developing an architecture that will allow multidomain path computation to be simple and efficient. The PCE architecture introduces a special computational entity that will cooperate with similar entities to compute the best possible path through multiple domains.

A PCE is a node that has special path computation ability and receives path computation requests from entities known as path computation clients (PCCs). The PCE holds limited routing information from other domains, allowing it to possibly compute better and shorter interdomain paths than those obtained using the traditional per-domain approach. Among other purposes, PCEs are also being advocated for CPU-intensive computations, minimal-cost-based TE-LSP placement, backup path computations, and bandwidth protection. Along with the process of identifying the requirements and developing the architecture accordingly, a plethora of work is underway at the PCE WG on the new communication protocols that will make this architecture work. This includes the development of new inter-PCE communication protocols and the introduction of extensions to existing underlying routing protocols. Request for Comments (RFC) 4655 [2] specifies a PCE-based architecture. RFC 4657 [3] covers PCE communication protocol generic requirements, and RFC 4674 [4] discusses the requirements for PCE discovery. Several IETF WG drafts are currently underway to define the PCE communication protocol in different scenarios, PCE-based interlayer TE, protocol extensions to Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) for PCE discovery, PCC-PCE communication, and policy-enabled path computation.

As is evident, a thorough performance study to justify and quantify the motivation for this new architecture is required. As the architecture is in its infancy, a detailed analysis comparing existing approaches to a PCE-based approach has never been undertaken. This article has been written with two goals in mind. First, it presents a brief review of the significant developments that have taken place in the PCE WG since it was conceived in the IETF. Second, it identifies the most significant performance metrics, and then presents detailed simulation results and accompanying analysis to contrast the performance of the existing approach with that of the PCE.

The rest of the article is organized as follows. Several scenarios that motivate the use of a PCE-based architecture are highlighted. The PCE architecture is described. The two path computation approaches to be compared, PCE-based and per-domain, are discussed. Performance metrics for comparison are identified, while the complete simulation scenario is described and the results are analyzed. Finally, we conclude the article.



Motivation for PCE

The development of the PCE architecture has been motivated by numerous path computation requirements and scenarios arising in present-day provider networks. This section presents some scenarios that are aided by the presence of a PCE. Several other scenarios are discussed in [2], as well as these:
• Limited visibility: There are several situations where the node responsible for path computation has limited visibility of the network topology to the destination. In such cases, it is not possible to guarantee that the optimal (shortest) path will be computed, or even that a viable path will be discovered except, possibly, through repeated trial and error using crankback [5] or other signaling extensions.
• Processing overhead: There are many situations where the computation of a path may be highly CPU-intensive (computation of Steiner trees, multicriteria path computation, etc.). In these situations, it may not be possible or desirable for some routers to perform path computation because of the constraints on their CPUs.
• Limited routing capability: It is common in legacy optical networks for the network elements not to have a control plane or routing capability. Such network elements only have a data plane and a management plane, and all cross-connections are made from the management plane. It is desirable in this case to run the path computation on the PCE, and to send the cross-connection commands to each node on the computed path.
• Multilayer networks: A server-layer network of one switching capability may support multiple networks of another (more granular) switching capability. The different client- and server-layer networks may be considered distinct path computation regions within a PCE domain, so the PCE architecture is useful to allow path computation from one client-layer network region, across the server-layer network, to another client-layer network region.
PCE Architecture

Depending on the requirements of a provider, the PCE architecture can be deployed based on a single entity, either composite with a node or external and dedicated, or on a cooperative set of multiple entities. On a PCE, a traffic engineering database (TED) is maintained and updated by the underlying routing protocol. The TED is then used to compute paths based on requests received from a PCC. When the PCE exists as a composite PCE node, the path computation and path setup request take place on the same physical network element. In contrast, a PCE that is external to the entity requesting a path setup uses the TED present on it for path computation, eventually returning the computed path as a response. Multiple PCEs can be involved in path computation in different scenarios. Depending on how PCEs across different domains are configured, several PCEs may be involved in path computation. A partial path returned by one PCE may require another partial path computation by another PCE to complete the path. Inter-PCE communication can also result in the computation of a single path in a cooperative manner.

Based on the architecture, a provider can deploy a centralized or distributed computation model. The centralized model involves a single PCE for all path computation requirements in its domain. The distributed model may involve several PCEs that cooperate and share the path computation requests.
Path Computation and Setup

This section gives a brief overview of the two path computation methodologies, the per-domain and the PCE-based approaches, respectively. Related issues that arise during path setup are also highlighted. Both approaches use the Constrained Shortest Path First (CSPF) algorithm [1]. CSPF first prunes all the links in the topology that do not satisfy the constraint (available bandwidth, delay, etc.) and then runs Dijkstra's Shortest Path algorithm on the resulting subgraph. The metric used for this computation is the weight assigned to the links. In the analysis undertaken for this article, the size of the TE label switched path (TE-LSP), that is, the bandwidth to be reserved on every link of the path, is the only constraint considered. It should be noted that the path computed by CSPF always depends on the current load conditions of the network. Rerouting of TE-LSPs due to link failures and setup of other TE-LSPs prior to the current setup request are some factors that can result in the computation of different paths for the same request. Under any circumstance, though, CSPF will always return the shortest path satisfying the constraint corresponding to the current loading of the network. The working of the two path computation techniques is explained in Figs. 1b and 1c, respectively, using the simple network in Fig. 1a.
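To make the procedure concrete, the following sketch prunes infeasible links and then runs Dijkstra's algorithm on the remaining subgraph. The adjacency-map encoding, the name cspf, and the bandwidth-only constraint follow the description above, but this is an illustrative sketch, not code from the article:

    import heapq

    def cspf(graph, src, dst, bw_demand):
        # graph: {node: {neighbor: (weight, available_bw)}}
        # Step 1: prune every link that cannot carry the requested bandwidth.
        sub = {u: {v: w for v, (w, bw) in nbrs.items() if bw >= bw_demand}
               for u, nbrs in graph.items()}
        # Step 2: Dijkstra's shortest path on the pruned subgraph, using
        # the configured link weights as the metric.
        heap, seen = [(0, src, [src])], set()
        while heap:
            cost, u, path = heapq.heappop(heap)
            if u == dst:
                return cost, path          # shortest feasible path
            if u in seen:
                continue
            seen.add(u)
            for v, w in sub.get(u, {}).items():
                if v not in seen:
                    heapq.heappush(heap, (cost + w, v, path + [v]))
        return None, None                  # no path satisfies the constraint

Because available bandwidth changes as TE-LSPs come and go, repeated calls with the same arguments can legitimately return different paths, which is the load dependence noted above.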
Backward Recursive PCE-Based Computation
Path computation across domains is particularly difficult, as the visibility of a head-end router is restricted to the domain it lies in. Backward Recursive PCE-Based Computation (BRPC) utilizes multiple PCEs to compute the shortest interdomain constrained path along a determined sequence of domains. Inherent in the PCE architecture, this technique also preserves confidentiality across domains, an important requirement when domains are managed by different service providers [6]. The BRPC procedure is backward, as the path is computed in a reverse fashion from the destination area to the source area. It is recursive since the same basic sequence of steps is repeated for every intermediate domain lying between the source and destination. Every domain has certain “entry” boundary nodes (BNs) into the domain and “exit” BNs out of the domain (into other domains). The following notation is defined for a clear explanation of the BRPC procedure.

The entry BN of domain i joins domains i – 1 and i. Let the source and destination domains be denoted 0 and n, respectively, and the n – 1 intermediate domains 1, ..., n – 1. For domain i, the kth entry BN is denoted BN_en^k(i), and the kth exit BN is denoted BN_ex^k(i). BRPC uses the concept of a virtual shortest path tree (VSPT). VSPT(i) denotes the multipoint-to-point (MP2P) tree formed by the set of constrained shortest paths from the entry BNs of domain i to the destination router. Each link of tree VSPT(i) represents a shortest path.

The BRPC procedure is as follows. For the destination domain n, VSPT(n) is computed from the list of shortest paths from the destination node to all the entry BNs. This is illustrated in Fig. 1b. VSPT(n) is then sent to the PCEs in the previous domain (domain n – 1 in this case), where it is concatenated to their TEDs (as links). This TED is then used by the PCEs in domain n – 1 to compute VSPT(n – 1). This sequence is repeated until the source domain, and subsequently the head-end router of the TE-LSP, is reached, as shown in Fig. 1b.
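The recursion can be sketched as follows, reusing the cspf() helper above. The graph encoding and the grafting of each VSPT branch as a single virtual link are illustrative choices, not the protocol's actual message formats, and the returned path is left in compressed (virtual-link) form:

    def brpc(domains, entry_bns, src, dst, bw_demand):
        # domains[i]: adjacency map of domain i (0 = source, n = destination),
        # assumed to include the interdomain links toward domain i+1.
        # entry_bns[i]: entry boundary nodes of domain i.
        n = len(domains) - 1
        # VSPT(n): constrained shortest-path costs from the entry BNs of
        # the destination domain to the destination itself.
        vspt = {}
        for bn in entry_bns[n]:
            cost, _ = cspf(domains[n], bn, dst, bw_demand)
            if cost is not None:
                vspt[bn] = cost
        # Walk backward through every upstream domain, grafting the
        # downstream VSPT into the local TED as virtual links to dst.
        for i in range(n - 1, -1, -1):
            ted = {u: dict(nbrs) for u, nbrs in domains[i].items()}
            ted.setdefault(dst, {})
            for bn, cost in vspt.items():
                ted.setdefault(bn, {})[dst] = (cost, float("inf"))
            if i == 0:
                # Source domain: the head-end router picks the best path.
                return cspf(ted, src, dst, bw_demand)
            vspt = {}
            for bn in entry_bns[i]:
                cost, _ = cspf(ted, bn, dst, bw_demand)
                if cost is not None:
                    vspt[bn] = cost

In this sketch each VSPT branch exposes only a cost toward the destination, mirroring how BRPC can keep topology details inside their domain of origin.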




Per-Domain Path Computation
In contrast to the BRPC procedure, the per-domain path computation technique involves the computation of individual path segments in every intermediate domain, without the sharing of any path information from other domains. The complete path for a TE-LSP is obtained by concatenating the path segments that are computed for every domain i. Figure 1c illustrates a simple scenario of per-domain path computation. The head-end router computes the first path segment by finding the shortest path to the nearest exit BN. The second path segment is then computed for the second domain, from this BN to the next nearest exit BN. This method does not necessarily always yield the shortest path across domains.
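A sketch of this segment-by-segment concatenation follows, again reusing cspf(). The exit_bns encoding, with each exit BN mapped to the entry BN it attaches to in the next domain, is an illustrative assumption of ours:

    def per_domain_path(domains, exit_bns, src, dst, bw_demand):
        path, head = [], src
        for i, domain in enumerate(domains[:-1]):
            # Find the nearest exit BN, i.e., the one with the lowest
            # CSPF cost from the current segment head.
            best = None
            for bn in exit_bns[i]:
                cost, seg = cspf(domain, head, bn, bw_demand)
                if cost is not None and (best is None or cost < best[0]):
                    best = (cost, seg, bn)
            if best is None:
                return None                   # no feasible segment here
            _, seg, bn = best
            path += seg                       # segment ends at the exit BN
            head = exit_bns[i][bn]            # entry BN of the next domain
        cost, seg = cspf(domains[-1], head, dst, bw_demand)
        return None if cost is None else path + seg

Each domain greedily optimizes its own segment, so the concatenation can miss globally shorter routes, which is precisely the weakness the costs in Fig. 1c illustrate.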
Several setup issues arise after a path has been computed and the head-end router of the tunnel tries to set up the reservation along the route using the Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE) [7]. They result from outdated TEDs on any or all of the routers involved in the path computation process. The TED is maintained on every TE-enabled router and holds information about several attributes of the links in its domain of existence. Path computation based on CSPF uses the TED and results in a path that satisfies the constraints put on one or a combination of these attributes. If the TED on the router computing the path is outdated, a call admission control (CAC) failure occurs at the entry-point router of the link corresponding to the outdated information. In such a situation, if the underlying routing protocol is OSPF with TE extensions (OSPF-TE), a link state advertisement (LSA) is flooded; if it is IS-IS, a link state packet (LSP) is flooded. This flooding updates the TEDs of all the routers with the most recent information on the concerned link, and is restricted to the domain in which the CAC failure occurred. The per-domain and PCE approaches proceed differently in this situation. Details of LSA flooding and its mechanisms are explained in great detail in [1].
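The domain-scoped TED refresh can be pictured with a small sketch (router and link identifiers here are illustrative, not from the article):

    from collections import defaultdict

    # teds[router] maps a link (u, v) to its advertised available bandwidth.
    teds = defaultdict(dict)

    def flood_on_cac_failure(routers_in_domain, link, true_available_bw):
        # The entry-point router of the stale link floods an LSA (OSPF-TE)
        # or LSP (IS-IS) carrying the link's true state. The flood stays
        # inside the failing link's domain, so only local TEDs converge;
        # routers in other domains may keep computing on stale data.
        for router in routers_in_domain:
            teds[router][link] = true_available_bw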

PCE-Based Approach — The BRPC method [6] is used to compute the VSPT between BN_en^k(i) and the destination. When a CAC failure occurs, a new VSPT for that domain is computed, and path setup is retried in the domain. The number of CAC failures that take place with the PCE approach is at most equal to that with the per-domain approach.
Per-Domain Approach — Here, a crankback [5] is used to report a setup failure to the router that is trying to set up the path. For example, without any loss of generality, let the CAC failure take place in domain i when BN_en^k(i) was setting up a path to BN_ex^j(i). Upon receiving the failure information, BN_en^k(i) tries to find a new shortest path to the next closest BN_ex^(j+1)(i). This process continues until no path can be found from BN_en^k(i) to any BN_ex(i). In this situation, the setup failure information from BN_en^k(i) is sent upstream to BN_en^k(i – 1) in domain i – 1 using a crankback. BN_en^k(i – 1) then selects the next BN_ex^(j+1)(i – 1) to enter domain i. Information about setup failures can propagate using the crankback all the way to the source domain containing the head-end router. This cycle repeats until a path can be set up, or until no path is found after exhausting all possibilities. It should be noted that crankback signaling is associated only with the per-domain approach to path computation. Several issues related to CAC and crankback signaling may arise, and can be found in [8] and references therein.
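The retry discipline can be summarized as a recursive sketch on top of the cspf() helper and exit_bns encoding sketched earlier; as before, the encoding is illustrative:

    def setup_with_crankback(domains, exit_bns, entry, i, dst, bw_demand):
        if i == len(domains) - 1:              # destination domain
            cost, seg = cspf(domains[i], entry, dst, bw_demand)
            return seg if cost is not None else None
        # Collect feasible segments to this domain's exit BNs,
        # nearest (lowest-cost) first.
        candidates = []
        for bn in exit_bns[i]:
            cost, seg = cspf(domains[i], entry, bn, bw_demand)
            if cost is not None:
                candidates.append((cost, seg, bn))
        for _, seg, bn in sorted(candidates, key=lambda c: c[0]):
            tail = setup_with_crankback(domains, exit_bns,
                                        exit_bns[i][bn], i + 1,
                                        dst, bw_demand)
            if tail is not None:
                return seg + tail              # downstream setup succeeded
        return None                            # crankback to the caller

    # Full setup: setup_with_crankback(domains, exit_bns, src, 0, dst, demand)

Every return of None models a crankback message traveling back to the upstream entry BN, which then tries its next exit BN; the signaling cost of those round trips is what the setup delay metric below captures.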
Performance Metrics

This section discusses the metrics used to capture and compare the performance of the two approaches.

Path Cost — As the two path computation techniques differ significantly, paths and their corresponding costs can be very different. In our analysis, path cost is the sum of the link metrics/weights assigned to the links that constitute the path. A higher path cost is undesired, since it could be representative of a higher total delay or may signify the traversal of low-capacity or highly utilized links. The maximum and average path costs are observed for each TE-LSP. The distributions for the maximum and average path costs are analyzed.

CAC Failures — As discussed in the previous section, CAC failures occur for two reasons: a midpoint router having an outdated TED, or an entry BN to a domain not having a path in the domain. Although the former situation is common to both the per-domain and PCE-based approaches, the latter occurs only in the per-domain approach. This metric captures the total number of CAC failures that occur during initial setup and reroute (on link failure) of a TE-LSP. The distribution across all TE-LSPs is analyzed.

Crankback Signaling — When an entry BN fails to find a route in the corresponding domain, crankback signaling takes place in the case of per-domain path computation. As discussed in the previous section, a crankback signaling message propagates to the entry BN of the previous domain, and a new entry BN to the next domain is chosen, after which path computation takes place to find a path segment to the new entry BN of the next domain. This causes a significant delay in setup time. This metric captures the distribution of the number of crankbacks and the corresponding delay in setup time for a TE-LSP when using the per-domain path computation technique. The total delay arising from the crankback signaling is proportional to the costs of the links over which the signal travels (i.e., the path that is set up from the entry BN of a domain to its exit BN). As above, the distribution across TE-LSPs is analyzed.

TE-LSPs/Bandwidth Setup Capacity — Due to the different path computation techniques, there is a significant difference in the number of TE-LSPs and the amount of bandwidth that can be set up. This metric captures the difference in the number of TE-LSPs and corresponding bandwidth that can be set up using the two techniques. The traffic matrix is continuously scaled for both methods, and scaling is stopped when the first TE-LSP cannot be set up. The difference in the scaling factor gives the extra bandwidth that can be set up using the corresponding path computation technique.

Failed TE-LSPs/Bandwidth on Link Failure — Link failures are induced in the network during the course of the simulations conducted. This metric captures the number of TE-LSPs and the corresponding bandwidth that failed to find a route when one or more links lying on their paths failed.

Results and Analysis

Simulation Setup
A very detailed simulator has been developed to replicate a real-life network scenario accurately. Following is the set of entities used in the simulation with a brief description of their behavior.

Topology Description — To obtain meaningful results applicable to present-day provider topologies, simulations have been run on two realistic topologies representative of current service provider deployments. They consist of a large backbone area to which four smaller areas are connected. For the first topology, named MESH-CORE, a highly connected backbone was obtained from Rocketfuel [9]. The second topology has a symmetrical backbone and is called SYM-CORE. The four connected smaller areas are obtained from [10]. Details of the topologies are shown in Table 1, along with their layout in Fig. 2. All TE-LSPs set up on this network have their sources and destinations in different areas, and all of them need to traverse the backbone network.




■ Figure 1. Interdomain path computation: a) example network with associated link costs; b) VSPTs formed by the BRPC procedure and their costs; c) per-domain path computation and costs of the path segments.

Table 1 also shows the number of TE-LSPs that have their sources in the corresponding areas, along with their size distribution.

Node Behavior — Every node in the topology represents a router that maintains state for all the TE-LSPs passing through it. Each node is a source for TE-LSPs to all the other nodes in the other areas. As in a real-life scenario, where routers boot up at random points in time, the nodes in the topologies also start sending traffic on the TE-LSPs originating from them at a random start time (to take into account the different boot-up times). All nodes are up within an hour of the start of the simulation. All nodes maintain a TED, which is updated using LSA updates as outlined in [1]. The LSA updates are restricted only to the domain in which they originate. The nodes have a path computation engine that operates on the routing information in the TED. In the per-domain approach, routing information is limited only to the domain of existence. In the PCE approach, routing information is passed on to the domains according to the methodology outlined in the BRPC procedure [6].

TE-LSP Setup — When a node boots up, it tries to set up all the TE-LSPs that originate from it in descending order of size. The network is dimensioned such that all TE-LSPs can find a path. Once set up, all TE-LSPs stay in the network for the complete duration of the simulation. The traffic matrix represents a full mesh where there is a TE-LSP between every pair of nodes that lie in different areas.




■ Figure 2. Topologies used for the simulations: a) MESH-CORE; b) SYM-CORE.

Inducing Failures
For thorough performance analysis and comparison, link failures are induced in all the areas. Each link in a domain can fail independently, with a mean failure time of 24 h, and be restored with a mean restore time of 15 min. Both interfailure and interrestore times are uniformly distributed. When a link fails, an LSA is flooded by the routers adjacent to the link, and all the TEDs of the routers lying in the same domain of existence are updated. When a link on the path of a TE-LSP fails, the reservation is removed along the path, and the head-end router tries to set up the TE-LSP around the failure. No attempt to reoptimize the path of a TE-LSP is made when a link is restored. The links that join two domains never fail. This step has been taken to concentrate only on how link failures within domains affect performance.
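A sketch of the per-link failure process follows; reading "uniformly distributed" as uniform(0, 2·mean), which yields the stated means of 24 h and 15 min, is our assumption:

    import random

    def failure_timeline(duration_s,
                         mean_fail_s=24 * 3600, mean_restore_s=15 * 60):
        # Alternate up/down periods with uniformly distributed
        # interfailure and interrestore times.
        t, events = 0.0, []
        while True:
            t += random.uniform(0, 2 * mean_fail_s)     # next failure
            if t >= duration_s:
                break
            events.append((t, "fail"))
            t += random.uniform(0, 2 * mean_restore_s)  # next restore
            if t >= duration_s:
                break
            events.append((t, "restore"))
        return events

    # One simulated week per intradomain link: failure_timeline(7 * 24 * 3600)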
Results and Analysis
Simulations were carried out on the two topologies previously described. The results are presented and discussed in this section. In the figures, PD-Setup and PCE-Setup represent results corresponding to the initial setting up of TE-LSPs on an empty network using the per-domain and PCE approaches, respectively. Similarly, PD-Failure and PCE-Failure denote the results under the link failure scenario. A period of one week was simulated, and results were collected after the expiry of a transient period. Figures 3 and 4 illustrate the behavior of the metrics for topologies MESH-CORE and SYM-CORE, respectively.
failures occurring. Similarly, the average path
    Domain name    # of nodes  # of links  OC-48 links  OC-192 links  TE-LSPs [0,20) Mb/s  TE-LSPs [20,100] Mb/s
    D1             17          24          18           6             125                  368
    D2             14          17          12           5             76                   186
    D3             19          26          20           6             14                   20
    D4             9           12          9            3             7                    18
    MESH backbone  83          167         132          35            0                    0
    SYM backbone   29          37          26           11            0                    0

■ Table 1. Details of all the areas used to create the two topologies.

Path Cost — Figures 3a and 4a show the distribution of the average path cost of the TE-LSPs for MESH-CORE and SYM-CORE, respectively. During initial setup, roughly 40 percent of TE-LSPs for MESH-CORE and 70 percent of TE-LSPs for SYM-CORE have higher path costs with the per-domain approach (PD-Setup) than with the PCE approach (PCE-Setup). This is due to the ability of the BRPC procedure to select the shortest available paths that satisfy the constraints. Since the per-domain approach to path computation is undertaken in stages, where every entry BN to a domain computes the path in the corresponding domain, the most optimal route is not always found. When failures start to take place in the network, TE-LSPs are rerouted over different paths, resulting in path costs that are different from the initial costs. PD-Failure and PCE-Failure in Figs. 3a and 4a show the distribution of the average path costs that the TE-LSPs have over the duration of the simulation with link failures occurring. Similarly, the average path costs with the per-domain approach are much higher than with the PCE approach when link failures occur. Figures 3b and 4b show similar trends and present the maximum path costs for a TE-LSP for the two topologies. It can be seen that with per-domain path computation, the maximum path costs are larger for 30 percent and 100 percent of the TE-LSPs for MESH-CORE and SYM-CORE, respectively.

Crankbacks/Setup Delay — Due to crankbacks that take place in the per-domain approach to path computation, TE-LSP setup time is significantly increased. This could lead to QoS requirements not being met, especially during failures, when rerouting needs to be quick in order to keep traffic disruption to a minimum. Since crankbacks do not take place during path computation with a PCE, setup delays are significantly reduced.




■ Figure 3. Results for MESH-CORE: a) distribution of average path costs across TE-LSPs; b) distribution of maximum path costs across TE-LSPs; c) distribution of number of crankbacks across TE-LSPs; d) distribution of proportional setup delay due to crankback across TE-LSPs; e) distribution of CAC failures across TE-LSPs; f) TE-LSPs and corresponding bandwidth that failed to find a route.




Figures 3c and 4c show the distributions of the number of crankbacks that took place during the setup of the corresponding TE-LSPs for MESH-CORE and SYM-CORE, respectively. It can be seen that all crankbacks occurred when failures were taking place in the networks. Information regarding failures never propagates across domains, since the corresponding LSAs are restricted to the domain in which they originate. As a result, BNs of one domain do not have information on a failure in the next domain, leading to a crankback when a route cannot be found in the next domain. Figures 3d and 4d illustrate the proportional setup delays experienced by the TE-LSPs due to crankbacks for the two topologies. It can be observed that for a large proportion of the TE-LSPs, the setup delays arising out of crankbacks are very large, possibly proving to be very detrimental to QoS requirements. The large delays arise out of the crankback signaling that needs to propagate back and forth from the exit BN of a domain to its entry BN. More crankbacks occur for SYM-CORE than for MESH-CORE, as it is a very restricted and constrained network in terms of connectivity. This causes a lack of routes, and often several cycles of crankback signaling are required to find a route.
CAC Failures — As discussed in the previous sections, CAC failures occur either due to an outdated TED or when a route cannot be found from the selected entry BN. Figures 3e and 4e show the distribution of the total number of CAC failures experienced by the TE-LSPs during setup. About 38 percent and 55 percent of TE-LSPs for MESH-CORE and SYM-CORE, respectively, experience CAC failures with per-domain path computation when link failures take place in the network. In contrast, only about 3 percent of the TE-LSPs experience CAC failures with the PCE method. It should be noted that the CAC failures experienced with the PCE correspond only to the TEDs being out of date. This is because with a PCE deployment, a BN that does not have a route to the destination is never selected by the BRPC procedure.
Failed TE-LSPs/Bandwidth on Link Failures — Figures 3f and 4f show the number of TE-LSPs and the associated required bandwidth that fail to find a route when link failures are taking place in the topologies. For MESH-CORE, with the per-domain approach, 395 TE-LSPs failed to find a path, corresponding to 1612 Mb/s of bandwidth. For PCE, this number is lower at 374, corresponding to 1546 Mb/s of bandwidth. For SYM-CORE, with the per-domain approach, 434 TE-LSPs fail to find a route, corresponding to 1893 Mb/s of bandwidth. With the PCE approach, only 192 TE-LSPs fail to find a route, corresponding to 895 Mb/s of bandwidth. It is clearly visible that the PCE allows more TE-LSPs to find a route, thus leading to better performance during link failures. This improvement in performance also arises from the difference in path computation procedures. BRPC always includes all links that satisfy the constraint in the VSPT and builds the shortest path from the destination. Since the per-domain approach does not have all information about links in other domains, after several TE-LSPs are set up, available bandwidth is left over in fragments insufficient to route all requests, thereby resulting in several setup failures.
TE-LSP/Bandwidth Setup Capacity — Since PCE and per-domain path computation differ in how path computation takes place, more bandwidth can be set up with PCE. This is primarily due to the way in which BRPC functions. To observe the extra bandwidth that can fit into the network, the traffic matrix was scaled. Scaling was stopped when the first TE-LSP failed to set up with PCE. This metric, like all the others discussed above, is topology-dependent (hence the choice of two topologies for this study). This metric highlights the ability of the PCE to fit more bandwidth into the network. For MESH-CORE, on scaling, 1556 Mb/s more bandwidth could be set up with PCE. In comparison, for SYM-CORE this value is 986 Mb/s. The amount of extra bandwidth that can be set up on SYM-CORE is smaller due to its restricted nature and limited capacity.
co-authored several IETF specifications. He is a regular speaker at various inter-
failed to set up with PCE. This metric, like all the others dis- national conference and is involved in various research projects in the area of IP
cussed above, is topology-dependent (hence the choice of two and MPLS. He has also filed several patents in the areas of IP and MPLS, and is
topologies for this study). This metric highlights the ability of the co-author of Network Recovery.




■ Figure 4. Results for SYM-CORE: a) distribution of average path costs across TE-LSPs; b) distribution of maximum path costs across TE-LSPs; c) distribution of number of crankbacks across TE-LSPs; d) distribution of proportional setup delay due to crankback across TE-LSPs; e) distribution of CAC failures across TE-LSPs; f) TE-LSPs and corresponding bandwidth that failed to find a route.

