Monday, December 10, 2001

Tutorial: Advanced MPLS Signaling

Rick Gallaher, CISSP, is owner of Dragonfly Associates LLC and author of Rick Gallaher's MPLS Training Guide.

In previous tutorials, we talked about data flow (Tutorial #1) and label distribution (Tutorial #2). This article discusses MPLS signaling and the ongoing conversations regarding signaling choices.

  • Soft State – A link, path, or call that needs to be refreshed to stay alive.
  • Hard State – A link, path, or call that will stay alive until it is specifically shut down.
  • Explicit Route – A path across the Internet wherein all routers are specified. Packets must follow this route, and they cannot detour.
  • CR-LDP – Constraint-based Routed Label Distribution Protocol; LDP extended to carry traffic-engineering constraints.
  • RSVP-TE – The Resource ReSerVation Protocol (RSVP), modified to handle MPLS traffic-engineering requirements.
  • IntServ – Integrated Services; allows traffic to be classified into three groups: guaranteed, controlled load, and best effort. IntServ works together with the RSVP protocol.
Your daily commute to work is a long one, and with all the congestion it seems to take forever. New lanes have been added to the highway, but they are reserved as express lanes: sure, they will cut your travel time in half, but you have to carry extra passengers to use them. You finally decide to try it, carrying four additional passengers so you can use the express lane. You are permitted to pass through the express-lane gate and scurry on your way to and from work.

The four passengers do not cost much more to transport than yourself alone, and they let you travel faster, insulated from the unpredictable and impossible-to-correct behavior of the routine traffic. (Figure 1)

Figure 1: Backed Up Express Lane
One day you enter the express lanes and find them in a state of bumper-to-bumper congestion. You look around and find routine traffic in the express lanes. You are angry, of course, because you were guaranteed express lanes, and routine traffic is required to stay off them unless it is carrying extra passengers. As you slowly progress down the road, you see that construction has closed the routine lanes and diverted their traffic onto your express lanes. So, what good is it to be special if regular traffic is diverted onto your express lanes?

Traffic Control in MPLS Networks
In networking, MPLS packets are the express traffic: each one carries four (4) additional bytes of header (the MPLS shim label), and for that effort it gets to travel the express lanes. But, as is too often the case on an actual freeway, your nice, smooth-running express lane is subjected to routine traffic being rerouted onto it, causing congestion and slowdowns.
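Those four extra bytes are the 32-bit MPLS shim header: a 20-bit label, three experimental (CoS) bits, a bottom-of-stack flag, and an 8-bit TTL. A minimal sketch of unpacking one (the field layout follows the MPLS label-stack encoding; the function name is ours):

```python
def parse_shim(word: int) -> dict:
    """Split a 32-bit MPLS shim header into its fields."""
    return {
        "label": (word >> 12) & 0xFFFFF,  # 20-bit label value
        "exp":   (word >> 9) & 0x7,       # 3 experimental/CoS bits
        "s":     (word >> 8) & 0x1,       # bottom-of-stack flag
        "ttl":   word & 0xFF,             # time to live
    }

# label 16, exp 0, bottom of stack, TTL 64
word = (16 << 12) | (0 << 9) | (1 << 8) | 64
print(parse_shim(word))  # {'label': 16, 'exp': 0, 's': 1, 'ttl': 64}
```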
Remember that MPLS is an overlay protocol that carries MPLS traffic over a routine IP network. The self-healing properties of IP may cause congestion on your express lanes: there is no accounting for unforeseen traffic accidents and the rerouting of routine traffic onto them. The Internet is self-healing, with rerouting capabilities, but the problem becomes this: how does one ensure that the paths and bandwidth reserved for one's packets do not get overrun by rerouted traffic? (Figures 2–4)

Figure 2: MPLS with Three Paths

In Figure 2, we see a standard MPLS network with three different paths across the wide-area network. Path A is engineered so that peak-busy-hour traffic consumes 90 percent of its capacity; Path B is engineered to 100 percent; and Path C to 125 percent (that is, oversubscribed). In theory, Path A will never have to contend with congestion, owing to sound network design (including traffic engineering). In other words, the road is engineered to take more traffic than it will receive during rush hour. Path C, however, will experience traffic jams during rush hour, because it is not designed to handle peak traffic conditions.

The Quality of Service (QoS) in Path C will have some level of unpredictability regarding both jitter and dropped packets, whereas the traffic on Path A should have consistent QoS measurements.
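The engineering targets in Figure 2 reduce to simple arithmetic: express peak-busy-hour load as a percentage of each path's capacity, and any path over 100 percent is oversubscribed. A quick sketch using the figures above:

```python
# Peak-busy-hour load as a percentage of each path's engineered
# capacity (values taken from Figure 2).
paths = {"A": 90, "B": 100, "C": 125}

def oversubscribed(load_pct: int) -> bool:
    """A path whose peak load exceeds its capacity will congest at rush hour."""
    return load_pct > 100

for name, load_pct in paths.items():
    status = "congested at peak" if oversubscribed(load_pct) else "handles peak load"
    print(f"Path {name}: {load_pct}% of capacity at peak -> {status}")
```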

Figure 3: MPLS with a Failed Path C

In Figure 3, we see a network failure in Path C, and the traffic is rerouted (Figure 4) onto an available path, Path A. Under these conditions, Path A can no longer meet its QoS criteria. To attain real QoS, there must be a method for controlling both the traffic on the paths and the percentage of traffic that is allowed onto each engineered path.

Figure 4: MPLS with Congestion Caused by a Reroute

To help overcome the problems of rerouting congestion, the Internet Engineering Task Force (IETF) and related working groups have looked at several possible solutions.  This problem had to be addressed both in protocols and in the software systems built into the routers.

In order to have full QoS, a system must be able to mark, classify, and police traffic. From previous articles, we have seen how MPLS can classify and mark packets with labels, but the policing function has been missing. Routing and label distribution establish the Label Switched Paths, but they do not police traffic or control the load factor on each link.
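The policing function that routing and label distribution lack is classically built from a token bucket: packets that fit within a configured rate and burst allowance conform, and the rest are dropped or marked down. A minimal sketch (the rate and packet sizes are illustrative, not from the article):

```python
class TokenBucketPolicer:
    """Polices a flow to `rate` bytes/sec with a burst allowance of `burst` bytes."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # token refill rate, bytes per second
        self.burst = burst    # bucket depth, bytes
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # timestamp of the last packet seen

    def conforms(self, now: float, packet_bytes: int) -> bool:
        # Refill tokens for the time elapsed, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True       # in profile: forward
        return False          # out of profile: drop or mark down

policer = TokenBucketPolicer(rate=1000, burst=1500)
print(policer.conforms(0.0, 1500))  # True: the burst allowance covers it
print(policer.conforms(0.1, 1500))  # False: only 100 tokens have refilled
```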

New software engines add management modules between the routing functions and the path selector, allowing bandwidth to be policed and managed. These functions, together with two new protocols, make traffic policing possible.

Figure 5: MPLS Routing State Machines
The two protocols that give MPLS the ability to police traffic and control loads are RSVP-TE and CR-LDP.


The concept of a call set-up process, wherein resources are reserved before calls are established, goes back to the signaling-theory days of telephony.  This concept was adapted for data networking when QoS became an issue.

An early method, the Resource ReSerVation Protocol (RSVP), standardized by the IETF in 1997, was designed for this very function: to request the required bandwidth and traffic conditions along a defined (explicit) path. If the bandwidth was available under the stated conditions, the link would be established.
The link could carry three types of traffic, similar to first-class, second-class, and standby air travel; the service classes were called, respectively, guaranteed service, controlled load, and best effort.

RSVP, with features added to accommodate MPLS traffic engineering, is called RSVP-TE. The traffic-engineering functions allow for the management of MPLS labels or colors.

Figure 6:  RSVP-TE Path Request

In Figures 6 and 7, we see how a call, or path, is set up between two endpoints. The ingress router sends a path-request (Path) message toward the egress, with detailed traffic conditions and treatment parameters included. When this message is received, a reservation (Resv) message, reserving bandwidth hop by hop along the route, is sent back toward the ingress. Once the first reservation message arrives back at the ingress, data can start to flow along the explicit path from end to end.

Figure 7: RSVP-TE Reservation
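The exchange in Figures 6 and 7 can be sketched as two message types: a Path message carrying the traffic specification (TSpec) along the explicit route, answered by a Resv message that reserves bandwidth back along the same hops. The field names below are illustrative, not the on-wire RSVP-TE format:

```python
from dataclasses import dataclass

@dataclass
class TSpec:
    """Traffic specification carried in the Path message (simplified)."""
    peak_rate_bps: int
    token_rate_bps: int
    burst_bytes: int

@dataclass
class PathMsg:
    src: str
    dst: str
    explicit_route: list   # every hop is pinned down
    tspec: TSpec

@dataclass
class ResvMsg:
    reserved_bps: int
    route: list            # reverse of the explicit route

def setup_lsp(path: PathMsg) -> ResvMsg:
    # The egress accepts the request and reserves the asked-for rate
    # back along the same hops (admission control omitted for brevity).
    return ResvMsg(reserved_bps=path.tspec.token_rate_bps,
                   route=list(reversed(path.explicit_route)))

path = PathMsg("ingress", "egress", ["R1", "R2", "R3"],
               TSpec(peak_rate_bps=2_000_000, token_rate_bps=1_000_000,
                     burst_bytes=16_000))
resv = setup_lsp(path)
print(resv.route)  # ['R3', 'R2', 'R1']
```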

This call set-up, or signaling, process is called “soft state,” because the call will be torn down if it is not refreshed in accordance with the refresh timers. In Figure 8, we see that the path-request and reservation messages continue for as long as the data is flowing.

Figure 8: RSVP-TE Path Set Up
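The soft-state behavior amounts to a table of reservations with expiry deadlines: each refresh pushes a deadline out, and a missed refresh lets the path time out and be torn down. A minimal sketch (the 30-second lifetime is illustrative, not RSVP's actual timer values):

```python
class SoftStateTable:
    """Reservations live only as long as they keep being refreshed."""

    def __init__(self, lifetime: float = 30.0):
        self.lifetime = lifetime  # seconds a reservation survives without refresh
        self.expires = {}         # lsp_id -> expiry timestamp

    def refresh(self, lsp_id: str, now: float):
        # A Path/Resv refresh pushes the expiry deadline out.
        self.expires[lsp_id] = now + self.lifetime

    def expire(self, now: float) -> list:
        # Tear down every reservation whose refresh never arrived.
        dead = [lsp for lsp, t in self.expires.items() if t <= now]
        for lsp in dead:
            del self.expires[lsp]
        return dead

table = SoftStateTable(lifetime=30.0)
table.refresh("LSP-1", now=0.0)
table.refresh("LSP-2", now=0.0)
table.refresh("LSP-1", now=25.0)  # LSP-1 keeps refreshing; LSP-2 goes quiet
print(table.expire(now=40.0))     # ['LSP-2'] is torn down; LSP-1 survives
```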

Some early arguments against RSVP included the problem of scalability: the more paths that were established, the more refresh messages would be created, and the network would soon become overloaded with refresh messages. Methods of addressing this problem include not allowing the traffic links and paths to become too granular, and aggregating paths.

To view an example of an RSVP-TE path request for yourself, you can download a protocol analyzer and a sample capture file:

Protocol Analyzer: Ethereal
Sample file: "MPLS-TE.cap" (sample 15)

After downloading, install Ethereal and open the MPLS-TE.cap file.

In the sample capture below (Figure 9), we can see the traffic specification (TSpec) for the controlled-load service.

Figure 9: RSVP-TE Details


With CR-LDP (Constraint-based Routed Label Distribution Protocol), modifications were made to the LDP protocol to allow for traffic specifications. The impetus for this design was to take an existing protocol, LDP, and give it traffic-engineering capabilities. A major effort to launch the CR-LDP protocol was made by Nortel Networks.

The CR-LDP protocol adds traffic-parameter fields to LDP: the peak, committed, and excess data rates, terms very similar to those used in ATM networks. The frame format is shown in Figure 10.

Figure 10: CR-LDP Frame Format

The call set-up procedure for CR-LDP is a very simple two-step process: a request and a map, as shown in Figure 11. The reason for the simple set-up is that CR-LDP is a hard-state protocol, meaning that the call, link, or path, once established, will not be torn down until a teardown is specifically requested.

Figure 11: CR-LDP Call Set Up
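The two-step exchange in Figure 11 can be sketched the same way: a label-request message carrying CR-LDP's traffic parameters, answered by a label mapping, after which the state simply persists with no refresh messages. Field names and the label-allocation scheme are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TrafficParams:
    """CR-LDP traffic parameters, simplified to the three ATM-style
    rates the article names."""
    peak_rate_bps: int
    committed_rate_bps: int
    excess_rate_bps: int

@dataclass
class LabelRequest:
    lsp_id: str
    explicit_route: list
    params: TrafficParams

def label_mapping(request: LabelRequest, installed: dict) -> int:
    # Step 2: the downstream LSR answers with a label mapping, and the
    # state is installed for good -- no refresh messages follow.
    label = 100 + len(installed)           # toy label allocation
    installed[request.lsp_id] = (label, request.params)
    return label

installed = {}                             # hard state: lives until torn down
req = LabelRequest("LSP-A", ["R1", "R2", "R3"],
                   TrafficParams(2_000_000, 1_000_000, 500_000))
print(label_mapping(req, installed))       # 100
print("LSP-A" in installed)                # True, and it stays there
```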

The major advantage of a hard-state protocol is that it should be more scalable, because less “chatter” is needed to keep the link active.

Comparing CR-LDP to RSVP-TE

The technical comparisons of these two protocols are listed in Figure 12. We see that CR-LDP uses the LDP protocol as its carrier, whereas RSVP-TE uses the RSVP protocol. RSVP is typically paired with IntServ's QoS classes, while CR-LDP uses ATM-style traffic-engineering terms to map QoS.

                      CR-LDP                RSVP-TE
Vendors               Nortel                Cisco, Juniper, Foundry
State                 Hard State            Soft State
QoS Type              ATM                   IntServ
Recovery Time         A little slower       Faster
Chat Overhead         Low                   High
Transported on        LDP over TCP          RSVP on IP
Path Modifications    Make before break     Make before break

Figure 12: CR-LDP vs. RSVP-TE
In the industry today, we find that while Cisco and Juniper favor the RSVP-TE model and Nortel favors the CR-LDP model, both signaling protocols are supported by most vendors.

The jury is still very much out as to the scalability, recovery, and interoperability of the two signaling protocols. However, it appears from the sidelines that RSVP-TE may be in the lead. This is not because it is the less “chatty” or the more robust of the two, but because RSVP was an established protocol, with most of its bugs removed, before the inception of MPLS. Both protocols remain topics of study at major universities and vendors. In the months to come, we will see test results and market forces shape these protocols. Stay tuned…

Special thanks to:

I would like to thank Ben Gallaher, Susan Gallaher, and Amy Quinn for their assistance, reviewing, and editing.

A special thank you to all those who assisted me with information and research on the MPLSRC-OP mail list, especially Senthil Kumar Ayyasamy and Javed A Syed.
