Chapter 4

Frame Relay

Introduction
l Packet-Switching Networks
– Switching Technique
– Routing
– X.25
l Frame Relay Networks
– Architecture
– User Data Transfer
– Call Control

Packet-Switching Networks
l Basic technology the same as in the 1970s
l One of the few effective technologies for long
distance data communications
l Frame relay and ATM are variants of packet-
switching
l Advantages:
– flexibility, resource sharing, robust, responsive
l Disadvantages:
– Time delays in distributed network, overhead
penalties
– Need for routing and congestion control

Circuit-Switching
l Long-haul telecom network designed for
voice
l Network resources dedicated to one call
l Shortcomings when used for data:
– Inefficient (high idle time)
– Constant data rate

Packet-Switching
l Data transmitted in short blocks, or packets
l Packet length < 1000 octets
l Each packet contains user data plus control
info (routing)
l Store and forward

Figure 4.1 The Use of Packets

Figure 4.2 Packet
Switching:
Datagram
Approach

Advantages over Circuit-Switching
l Greater line efficiency (many packets can
go over shared link)
l Data rate conversions
l Non-blocking under heavy traffic (but
increased delays)

Disadvantages relative to Circuit-Switching
l Packets incur additional delay with every
node they pass through
l Jitter: variation in packet delay
l Data overhead in every packet for routing
information, etc
l Processing overhead for every packet at
every node traversed

Figure 4.3 Simple Switching
Network

Switching Technique
l Large messages broken up into smaller packets
l Datagram
– Each packet sent independently of the others
– No call setup
– More reliable (can route around failed nodes or
congestion)
l Virtual circuit
– Fixed route established before any packets sent
– No need for routing decision for each packet at each
node

Figure 4.4 Packet Switching: Virtual-Circuit Approach

Routing
l Adaptive routing
l Node/trunk failure
l Congestion

X.25
l 3 levels
l Physical level (X.21)
l Link level (LAPB, a subset of HDLC)
l Packet level (provides virtual circuit
service)

Figure 4.5 The Use of Virtual
Circuits

Figure 4.6 User Data and X.25
Protocol Control Information

Frame Relay Networks
l Designed to eliminate much of the overhead in
X.25
l Call control signaling on separate logical
connection from user data
l Multiplexing/switching of logical connections at
layer 2 (not layer 3)
l No hop-by-hop flow control and error control
l Throughput an order of magnitude higher than
X.25

Figure 4.7 Comparison of X.25
and Frame Relay Protocol Stacks

Figure 4.8 Virtual Circuits and
Frame Relay Virtual Connections

Frame Relay Architecture
l X.25 has 3 layers: physical, link, network
l Frame Relay has 2 layers: physical and
data link (or LAPF)
l LAPF core: minimal data link control
– Preservation of order for frames
– Small probability of frame loss
l LAPF control: additional data link or
network layer end-to-end functions
LAPF Core
l Frame delimiting, alignment and
transparency
l Frame multiplexing/demultiplexing
l Inspection of frame for length constraints
l Detection of transmission errors
l Congestion control

Figure 4.9 LAPF-core Formats

User Data Transfer
l No control field, which is normally used
for:
– Identify frame type (data or control)
– Sequence numbers
l Implication:
– Connection setup/teardown carried on
separate channel
– Cannot do flow and error control
Frame Relay Call Control
l Data transfer involves:
– Establish logical connection and DLCI
– Exchange data frames
– Release logical connection

Frame Relay Call Control
4 message types needed
l SETUP
l CONNECT
l RELEASE
l RELEASE COMPLETE

Chapter 5

Asynchronous Transfer Mode


(ATM)

Introduction
l ATM Protocol Architecture
l Logical connections
l ATM Cells
l Service categories
l ATM Adaptation Layer (AAL)

ATM Protocol Architecture
l Fixed-size packets called cells
l Streamlined: minimal error and flow
control
l 2 protocol layers relate to ATM functions:
– ATM layer, common to all services, providing cell transfer
– Service dependent ATM adaptation layer
(AAL)
l AAL maps other protocols to ATM
Protocol Model has 3 planes
l User
l Control
l Management

Figure 5.1

Logical Connections
l VCC (Virtual Channel Connection): a
logical connection analogous to virtual
circuit in X.25
l VPC (Virtual Path Connection): a bundle
of VCCs with same endpoints

Figure 5.2

Advantages of Virtual Paths
l Simplified network architecture
l Increased network performance and
reliability
l Reduced processing and short connection
setup time
l Enhanced network services

Table 5.1

VCC Uses
l Between end users
l Between an end user and a network entity
l Between 2 network entities

Figure 5.3

VPC/VCC Characteristics
l Quality of Service (QoS)
l Switched and semi-permanent virtual
channel connections
l Cell sequence integrity
l Traffic parameter negotiation and usage
monitoring
l (VPC only) virtual channel identifier
restriction within a VPC
Control Signaling
l A mechanism to establish and release
VPCs and VCCs
l 4 methods for VCCs:
– Semi-permanent VCCs
– Meta-signaling channel
– User-to-network signaling virtual channel
– User-to-user signaling virtual channel

Control Signaling
l 3 methods for VPCs
– Semi-permanent
– Customer controlled
– Network controlled

ATM Cells
l Fixed size
l 5-octet header
l 48-octet information field
l Small cells reduce delay for high-priority
cells
l Fixed size facilitates switching in hardware
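The delay advantage of small cells can be made concrete with a little arithmetic. This sketch compares the serialization delay of one 53-octet cell against a large variable-size packet; the 155.52 Mbps link rate is an illustrative choice, not something fixed by the slide.

```python
def serialization_delay(bits: int, rate_bps: float) -> float:
    """Time to clock a block of bits onto the link at the given rate."""
    return bits / rate_bps

RATE = 155.52e6                                  # illustrative link rate, bps
cell = serialization_delay(53 * 8, RATE)         # one fixed-size ATM cell
packet = serialization_delay(1500 * 8, RATE)     # a large variable-size packet

# A high-priority cell waits at most one cell time (~2.7 us) behind the
# cell in service, versus ~77 us behind a maximum-size packet.
assert cell < packet
```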

Header Format
l Generic flow control
l Virtual path identifier (VPI)
l Virtual channel identifier (VCI)
l Payload type
l Cell loss priority
l Header error control

Figure 5.4

Generic Flow Control
l Control traffic flow at user-network interface
(UNI) to alleviate short-term overload
conditions
l When GFC enabled at UNI, 2 procedures
used:
– Uncontrolled transmission
– Controlled transmission

Table 5.3

Header Error Control
l 8-bit field calculated based on remaining
32 bits of header
l error detection
l in some cases, error correction of single-bit
errors in header
l 2 modes:
– Error detection
– Error correction
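A sketch of the HEC computation: a CRC-8 over the four leading header octets with generator x^8 + x^2 + x + 1, XORed with the pattern 01010101 per the ITU-T recommendation. The header bytes below are made up; the assertion illustrates why any single-bit header error is detectable.

```python
def atm_hec(header4: bytes) -> int:
    """CRC-8 over the four leading header octets, generator
    x^8 + x^2 + x + 1 (0x07), then XORed with 0x55 (01010101)."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

header = bytes([0x00, 0x00, 0x00, 0x28])    # made-up VPI/VCI/PT values
good = atm_hec(header)

# A CRC with this generator detects every single-bit error in the header,
# so flipping any one bit changes the expected HEC.
flipped = bytes([header[0] ^ 0x01]) + header[1:]
assert atm_hec(flipped) != good
```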
Figure 5.5

Figure 5.6

Figure 5.7

Service Categories
l Real-time service
– Constant bit rate (CBR)
– Real-time variable bit rate (rt-VBR)
l Non-real-time service
– Non-real-time variable bit rate (nrt-VBR)
– Available bit rate (ABR)
– Unspecified bit rate (UBR)
– Guaranteed frame rate (GFR)

Figure 5.8

ATM Adaptation Layer (AAL)
l Support non-ATM protocols
– e.g., PCM voice, LAPF
l AAL Services
– Handle transmission errors
– Segmentation/reassembly (SAR)
– Handle lost and misinserted cell conditions
– Flow control and timing control

Applications of AAL and ATM
l Circuit emulation (e.g., T-1 synchronous
TDM circuits)
l VBR voice and video
l General data services
l IP over ATM
l Multiprotocol encapsulation over ATM
(MPOA)
l LAN emulation (LANE)
AAL Protocols
l AAL layer has 2 sublayers:
– Convergence Sublayer (CS)
l Supports specific applications using AAL
– Segmentation and Reassembly Layer (SAR)
l Packages data from CS into cells and unpacks at
other end

Figure 5.9

Figure 5.10

AAL Type 1
l Constant-bit-rate source
l SAR simply packs bits into cells and
unpacks them at destination
l One-octet header contains 3-bit SC field to
provide an 8-cell frame structure
l No CS PDU since CS sublayer primarily
for clocking and synchronization

AAL Type 3/4
l May be connectionless or connection
oriented
l May be message mode or streaming mode

Figure 5.11

Figure 5.12

AAL Type 5
l Streamlined transport for connection
oriented protocols
– Reduce protocol processing overhead
– Reduce transmission overhead
– Ensure adaptability to existing transport
protocols

Figure 5.13

Chapter 6

High-Speed LANs

Introduction
l Fast Ethernet and Gigabit Ethernet
l Fibre Channel
l High-speed Wireless LANs

Table 6.1

Emergence of High-Speed LANs
l 2 Significant trends
– Computing power of PCs continues to grow
rapidly
– Network computing
l Examples of requirements
– Centralized server farms
– Power workgroups
– High-speed local backbone
Classical Ethernet
l Bus topology LAN
l 10 Mbps
l CSMA/CD medium access control
protocol
l 2 problems:
– A transmission from any station can be
received by all stations
– How to regulate transmission
Solution to First Problem
l Data transmitted in blocks called frames:
– User data
– Frame header containing unique address of
destination station

Figure 6.1

CSMA/CD
Carrier Sense Multiple Access with Collision Detection

1. If the medium is idle, transmit.


2. If the medium is busy, continue to listen until
the channel is idle, then transmit immediately.
3. If a collision is detected during transmission,
immediately cease transmitting.
4. After a collision, wait a random amount of
time, then attempt to transmit again (repeat
from step 1).
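Step 4's "random amount of time" is, in IEEE 802.3, a truncated binary exponential backoff. A minimal sketch of the slot-count rule; the truncation point of 10 is the 802.3 value.

```python
import random

def backoff_slots(n_collisions: int, limit: int = 10) -> int:
    """Truncated binary exponential backoff: after the n-th successive
    collision, wait a random number of slot times drawn uniformly from
    0 .. 2**min(n, limit) - 1."""
    k = min(n_collisions, limit)
    return random.randrange(2 ** k)

# The range of possible waits doubles after each successive collision,
# spreading the retransmissions of contending stations apart.
for n in (1, 2, 3):
    assert 0 <= backoff_slots(n) <= 2 ** n - 1
```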

Figure 6.2

Figure 6.3

Medium Options at 10Mbps
l <data rate> <signaling method> <max length>
l 10Base5
– 10 Mbps
– 50-ohm coaxial cable bus
– Maximum segment length 500 meters
l 10Base-T
– Twisted pair, maximum length 100 meters
– Star topology (hub or multipoint repeater at central point)

Figure 6.4

Hubs and Switches
Hub
l Transmission from a station received by central
hub and retransmitted on all outgoing lines
l Only one transmission at a time

Layer 2 Switch
l Incoming frame switched to one outgoing line
l Many transmissions at same time

Figure 6.5

Bridge vs. Layer 2 Switch
Bridge
l Frame handling done in software
l Analyzes and forwards one frame at a time
l Store-and-forward

Layer 2 Switch
l Frame handling done in hardware
l Multiple data paths; can handle multiple frames at a time
l Can do cut-through

Layer 2 Switches
l Flat address space
l Broadcast storm
l Only one path between any 2 devices

l Solution 1: subnetworks connected by routers
l Solution 2: layer 3 switching, packet-
forwarding logic in hardware
Figure 6.6

Figure 6.7

Figure 6.8

Figure 6.9

Figure 6.10

Figure 6.11

Benefits of 10 Gbps Ethernet over
ATM
l No expensive, bandwidth consuming
conversion between Ethernet packets and
ATM cells
l Network is Ethernet, end to end
l IP plus Ethernet offers QoS and traffic policing capabilities approaching those of ATM
l Wide variety of standard optical interfaces
for 10 Gbps Ethernet
Fibre Channel
l 2 methods of communication with
processor:
– I/O channel
– Network communications
l Fibre channel combines both
– Simplicity and speed of channel
communications
– Flexibility and interconnectivity of network
communications
Figure 6.12

I/O channel
l Hardware based, high-speed, short
distance
l Direct point-to-point or multipoint
communications link
l Data type qualifiers for routing payload
l Link-level constructs for individual I/O
operations
l Protocol specific specifications to support
e.g. SCSI
Fibre Channel Network-Oriented Facilities
l Full multiplexing between multiple
destinations
l Peer-to-peer connectivity between any pair
of ports
l Internetworking with other connection
technologies

Fibre Channel Requirements
l Full duplex links with 2 fibres/link
l 100 Mbps – 800 Mbps
l Distances up to 10 km
l Small connectors
l high-capacity
l Greater connectivity than existing multidrop
channels
l Broad availability
l Support for multiple cost/performance levels
l Support for multiple existing interface command
sets
Figure 6.13

Fibre Channel Protocol
Architecture
l FC-0 Physical Media
l FC-1 Transmission Protocol
l FC-2 Framing Protocol
l FC-3 Common Services
l FC-4 Mapping

Wireless LAN Requirements
l Throughput
l Number of nodes
l Connection to backbone
l Service area
l Battery power consumption
l Transmission robustness and security
l Collocated network operation
l License-free operation
l Handoff/roaming
l Dynamic configuration
Figure 6.14

IEEE 802.11 Services
l Association
l Reassociation
l Disassociation
l Authentication
l Privacy

Figure 6.15

Figure 6.16

Chapter 10

Congestion Control in Data


Networks and Internets

Introduction
l Congestion occurs when number of
packets transmitted approaches network
capacity
l Objective of congestion control:
– keep number of packets below level at which
performance drops off dramatically

Queuing Theory
l Data network is a network of queues
l If arrival rate > transmission rate
then queue size grows without bound and
packet delay goes to infinity
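A minimal fluid-model sketch of this claim: whenever the arrival rate exceeds the transmission (service) rate, the backlog grows without bound. The rates and time horizon are made-up numbers.

```python
def backlog(arrival_rate: float, service_rate: float, seconds: int) -> list:
    """Fluid approximation of a queue: each second, arrival_rate packets
    arrive and at most service_rate packets are transmitted."""
    q, history = 0.0, []
    for _ in range(seconds):
        q = max(0.0, q + arrival_rate - service_rate)
        history.append(q)
    return history

print(backlog(12, 10, 5))   # arrivals exceed service: [2.0, 4.0, 6.0, 8.0, 10.0]
print(backlog(8, 10, 5))    # arrivals below service: queue stays empty
```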

Figure 10.1

At Saturation Point, 2 Strategies
l Discard any incoming packet if no buffer
available
l Saturated node exercises flow control over
neighbors
– May cause congestion to propagate throughout
network

Figure 10.2

Ideal Performance
l Assumes infinite buffers and no overhead for packet transmission or congestion control
l Throughput increases with offered load
until full capacity
l Packet delay increases with offered load
approaching infinity at full capacity
l Power = throughput / delay
l Higher throughput results in higher delay

Figure 10.3

Practical Performance
l Assumes finite buffers and non-zero packet processing overhead
l With no congestion control, increased load
eventually causes moderate congestion:
throughput increases at slower rate than
load
l Further increased load causes packet
delays to increase and eventually
throughput to drop to zero

Figure 10.4

Congestion Control
l Backpressure
– Request from destination to source to reduce
rate
– Choke packet: ICMP Source Quench
l Implicit congestion signaling
– Source detects congestion from transmission
delays and discarded packets and reduces flow

Explicit congestion signaling
l Direction
– Backward
– Forward
l Categories
– Binary
– Credit-based
– rate-based

Traffic Management
l Fairness
– Last-in-first-discarded may not be fair
l Quality of Service
– Voice, video: delay sensitive, loss insensitive
– File transfer, mail: delay insensitive, loss sensitive
– Interactive computing: delay and loss sensitive
l Reservations
– Policing: excess traffic discarded or handled on best-
effort basis

Figure 10.5

Frame Relay Congestion Control
l Minimize frame discard
l Maintain QoS
l Minimize monopolization of network
l Simple to implement, little overhead
l Minimal additional network traffic
l Resources distributed fairly
l Limit spread of congestion
l Operate effectively regardless of flow
l Have minimum impact on other systems in the network
l Minimize variance in QoS

Table 10.1

Traffic Rate Management
l Committed Information Rate (CIR)
– Rate that network agrees to support
l Aggregate of CIRs < capacity
– For node and user-network interface (access)
l Committed Burst Size
– Maximum data over one interval agreed to by network
l Excess Burst Size
– Maximum data over one interval that network will
attempt
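The three parameters combine into a simple per-interval rule: cumulative data up to Bc is committed, data up to Bc + Be is carried if possible but marked discard-eligible (DE), and anything beyond is discarded. A sketch with hypothetical frame sizes:

```python
def classify_frames(frame_bits, bc, be):
    """Classify frames arriving within one measurement interval T:
    cumulative data <= Bc is committed; <= Bc + Be is forwarded but
    marked discard-eligible (DE); beyond that it is discarded."""
    total, decisions = 0, []
    for bits in frame_bits:
        total += bits
        if total <= bc:
            decisions.append("forward")
        elif total <= bc + be:
            decisions.append("forward, DE=1")
        else:
            decisions.append("discard")
    return decisions

print(classify_frames([4000, 4000, 4000, 4000], bc=8000, be=4000))
# ['forward', 'forward', 'forward, DE=1', 'discard']
```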

Figure 10.6

Figure 10.7

Congestion Avoidance with Explicit
Signaling
2 strategies
l Congestion always occurred slowly,
almost always at egress nodes
– forward explicit congestion avoidance
l Congestion grew very quickly in internal
nodes and required quick action
– backward explicit congestion avoidance

2 Bits for Explicit Signaling
l Forward Explicit Congestion Notification
– For traffic in same direction as received frame
– This frame has encountered congestion
l Backward Explicit Congestion Notification
– For traffic in opposite direction of received
frame
– Frames transmitted may encounter congestion

Chapter 2

Protocols and the TCP/IP Suite

Introduction
l Layered protocol architecture
l TCP/IP protocol suite
l OSI reference model
l Internetworking

The Need for a Protocol
Architecture
l Procedures to exchange data between
devices can be complex
l High degree of cooperation required
between communicating systems

Example: File transfer
l Requires a data path to exist
l Tasks:
– Activate data communication path
– Source determines that destination is ready
– Source file transfer app confirms that destination file management app is ready to store the file for this user
– File format conversion

Layered Protocol Architecture
l modules arranged in a vertical stack
l Each layer in stack:
– Performs related functions
– Relies on lower layer for more primitive
functions
– Provides services to next higher layer
– Communicates with corresponding peer layer
of neighboring system using a protocol
Key Features of a Protocol
l Set of rules or conventions to exchange
blocks of formatted data
l Syntax: data format
l Semantics: control information
(coordination, error handling)
l Timing: speed matching, sequencing

TCP/IP Layers
l Physical
l Network access
l Internet
l Transport
l Application

TCP and UDP
l TCP:
– connection-oriented
– Reliable packet delivery in sequence
l UDP:
– connectionless (datagram)
– Unreliable packet delivery
– Packets may arrive out of sequence or
duplicated
Figure 2.1

Figure 2.2

Operation of TCP and IP
l IP implemented in end systems and
routers, relaying data between hosts
l TCP implemented only in end systems,
assuring reliable delivery of blocks of data
l Each host on subnetwork has unique IP
address
l Each process on each host has a port number that is unique within that host
Figure 2-3

Figure 2-4

TCP Applications
l SMTP: Simple Mail Transfer Protocol
l FTP: File Transfer Protocol
l telnet: remote login

OSI Reference Model
l Application
l Presentation
l Session
l Transport
l Network
l Data link
l Physical

Figure 2.5

Internetworking Terms
l Communication network
l Internet
l Intranet
l Subnetwork
l End system
l Intermediate system
l Bridge
l Router

Routers
l Provide link between networks
l Accommodate network differences:
– Addressing schemes
– Maximum packet sizes
– Hardware and software interfaces
– Network reliability

Figure 2-7

Figure 2-8

Figure 2-9

Figure 2-10

Chapter 3
TCP and IP

Introduction
l Transmission Control Protocol (TCP)
l User Datagram Protocol (UDP)
l Internet Protocol (IP)
l IPv6

TCP
l RFC 793, RFC 1122
l Outgoing data is logically a stream of
octets from user
l Stream broken into blocks of data, or
segments
l TCP accumulates octets from user until
segment is large enough, or data marked
with PUSH flag
l User can mark data as URGENT

l Similarly, incoming data is a stream of
octets presented to user
l Data marked with PUSH flag triggers
delivery of data to user, otherwise TCP
decides when to deliver data
l Data marked with URGENT flag causes
user to be signaled

Checksum Field
l Computed over the data segment plus a pseudo-header containing addressing information
l Protects against bit errors in user data and
addressing information
l Filled in at source
l Checked at destination
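A sketch of the Internet checksum algorithm (RFC 1071) used for this field. The message bytes are arbitrary; the final assertion shows why the receiver's check works: summing the data together with the checksum field yields all ones when no bits were corrupted.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's complement of the one's-complement sum
    of the data taken as 16-bit words (zero-pad if the length is odd)."""
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)    # fold the carry back in
    s = (s & 0xFFFF) + (s >> 16)        # second fold covers the edge case
    return ~s & 0xFFFF

segment = b"example segment!"           # arbitrary even-length payload
csum = internet_checksum(segment)

# Receiver's check: the one's-complement sum over data plus checksum is
# all ones, so the function returns 0 when nothing was corrupted.
assert internet_checksum(segment + bytes([csum >> 8, csum & 0xFF])) == 0
```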

Options
l Maximum segment size
l Window scale factor
l Timestamp

Figure 2.1

UDP
l RFC 768
l Connectionless, unreliable
l Less overhead
l Simply adds port addressing to IP
l Checksum is optional

Appropriate Uses of UDP
l Inward data collection
l Outward data dissemination
l Request-response
l Real-time applications

IP
l RFC 791
l Field highlights:
– Type of service, defined in RFC 1349, see
Figure 3.1
– More bit
– Don’t fragment bit
– Time to live (similar to a hop count)

Figure 2.2

Figure 3.1

Fragmentation and Reassembly
l Networks may have different maximum
packet size
l Router may need to fragment datagrams
before sending to next network
l Fragments may need further fragmenting
in later networks
l Reassembly done only at final destination
since fragments may take different routes
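Fragment offsets are carried in units of 8 octets, so each fragment except the last must hold a multiple of 8 payload octets. A sketch of the split a router might perform; the 4000-octet payload and 1500-octet MTU are example values.

```python
def fragment(payload_len: int, mtu: int, header_len: int = 20):
    """Split an IP payload to fit a network's MTU. Offsets are carried
    in 8-octet units, so every fragment except the last must hold a
    multiple of 8 payload octets. Returns (offset_units, length, more)."""
    per_frag = (mtu - header_len) // 8 * 8   # largest multiple of 8 that fits
    frags, offset = [], 0
    while payload_len > 0:
        size = min(per_frag, payload_len)
        payload_len -= size
        frags.append((offset // 8, size, payload_len > 0))  # more-fragments flag
        offset += size
    return frags

print(fragment(4000, mtu=1500))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```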
Figure 3.2

Type of Service TOS Subfield
l Set by source system
l Routers may ignore TOS
l Router may respond to requested TOS
value through:
– Route selection
– Subnetwork service
– Queuing discipline

Table 3.1

Type of Service Precedence
Subfield
l Indicates degree of urgency or priority
l Like TOS subfield, may be ignored and
there are 3 approaches to responding
l Intended to affect queuing discipline at
router
– Queue service
– Congestion control

IPv4 Options
l Security
l Source routing
l Route recording
l Timestamping

IPv6
l Increase IP address from 32 bits to 128
l Accommodate higher network speeds, mix
of data streams (graphics, video, audio)
l Fixed size 40-octet header, followed by
optional extension headers
l Longer header but fewer fields (8 vs 12),
so routers should have less processing

IPv6 Header
l Version
l Traffic class
l Flow label
l Payload length
l Next header
l Hop limit
l Source address
l Destination address

IPv6 Addresses
l 128 bits
l Longer addresses can have structure that
assists routing
l 3 types:
– Unicast
– Anycast
– Multicast

Figure 3.3

Optional Extension Headers
l Hop-by-hop options
l Routing
l Fragment
l Authentication
l Encapsulating security payload
l Destination options

Figure 3.4

Chapter 11

Link-Level Flow and Error


Control

Introduction
l The need for flow and error control
l Link control mechanisms
l Performance of ARQ (Automatic Repeat
Request)

Flow Control and Error Control
l Fundamental mechanisms that determine
performance
l Can be implemented at different levels:
link, network, or application
l Difficult to model performance
l Simplest case: point-to-point link
– Constant propagation
– Constant data rate
– Probabilistic error rate
– Traffic characteristics
Flow Control
l Limits the amount or rate of data that is
sent
l Reasons:
– Source may send PDUs faster than destination
can process headers
– Higher-level protocol user at destination may
be slow in retrieving data
– Destination may need to limit incoming flow
to match outgoing flow for retransmission

Flow Control at Multiple Protocol
Layers
l X.25 virtual circuits (level 3) multiplexed
over a data link using LAPB (X.25 level 2)
l Multiple TCP connections over HDLC link
l Flow control at higher level applied to
each logical connection independently
l Flow control at lower level applied to total
traffic

Figure 11.1

Flow Control Scope
l Hop Scope
– Between intermediate systems that are directly
connected
l Network interface
– Between end system and network
l Entry-to-exit
– Between entry to network and exit from network
l End-to-end
– Between end user systems

Figure 11.2

Error Control
l Used to recover lost or damaged PDUs
l Involves error detection and PDU
retransmission
l Implemented together with flow control in
a single mechanism
l Performed at various protocol levels

Link Control Mechanisms
3 techniques at link level:
l Stop-and-wait
l Go-back-N
l Selective-reject

Latter 2 are special cases of sliding-window

Assume 2 end systems connected by direct link

Sequence of Frames
Source breaks up message into sequence of
frames
l Buffer size of receiver may be limited
l Longer transmissions are more likely to contain an error
l On a shared medium, avoids one station
monopolizing medium

Stop and Wait
l Source transmits frame
l After reception, destination indicates
willingness to accept another frame in
acknowledgement
l Source must wait for acknowledgement
before sending another frame
l 2 kinds of errors:
– Damaged frame at destination
– Damaged acknowledgement at source
ARQ
l Automatic Repeat Request
l Uses:
– Error detection
– Timers
– Acknowledgements
– Retransmissions

Figure 11.3

Figure 11.4

Stop-and-Wait Link Utilization
l If Tprop large relative to Tframe then
throughput reduced
l If propagation delay is long relative to
transmission time, line is mostly idle
l Problem is only one frame in transit at a
time
l Stop-and-Wait rarely used because of
inefficiency
Sliding Window Techniques
l Allow multiple frames to be in transit at
the same time
l Source can send n frames without waiting
for acknowledgements
l Destination can accept n frames
l Destination acknowledges a frame by
sending acknowledgement with sequence
number of next frame expected (and
implicitly ready for next n frames)
Figure 11.5

Figure 11.6

Go-back-N ARQ
l Most common form of error control based on
sliding window
l Number of un-acknowledged frames determined
by window size
l Upon receiving a frame in error, destination
discards that frame and all subsequent frames
until damaged frame received correctly
l Sender resends frame (and all subsequent frames)
either when it receives a Reject message or timer
expires

Figure 11.7

Figure 11.8

Error-Free Stop and Wait
T = Tframe + Tprop + Tproc + Tack + Tprop + Tproc

where
Tframe = time to transmit a frame
Tprop = propagation time
Tproc = processing time at a station
Tack = time to transmit an acknowledgement

Assume Tproc and Tack are relatively small.


T ≈ Tframe + 2Tprop

Throughput = 1/T = 1/(Tframe + 2Tprop) frames/sec

Normalizing by the link's maximum rate, 1/Tframe frames/sec:

S = [1/(Tframe + 2Tprop)] / [1/Tframe] = Tframe/(Tframe + 2Tprop) = 1/(1 + 2a)

where a = Tprop/Tframe

Stop-and-Wait ARQ with Errors
P = probability that a single frame is in error

Nx = 1/(1 - P)
= average number of times each frame must be transmitted due to errors

S = 1/(Nx(1 + 2a)) = (1 - P)/(1 + 2a)
The Parameter a
a = propagation time / transmission time = (d/V)/(L/R) = Rd/(VL)

where
d = distance between stations
V = velocity of signal propagation
L = length of frame in bits
R = data rate on link in bits per second
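Putting the two formulas together, a small calculation shows how a drives stop-and-wait efficiency. The link parameters below are hypothetical.

```python
def stop_and_wait_utilization(d, V, L, R):
    """Return (a, S) with a = R*d/(V*L) and S = 1/(1 + 2a)."""
    a = (R * d) / (V * L)
    return a, 1 / (1 + 2 * a)

# Hypothetical links: 4000-bit frames, 2e8 m/s propagation, 10 Mbps rate.
a1, s1 = stop_and_wait_utilization(d=1_000, V=2e8, L=4000, R=10e6)
a2, s2 = stop_and_wait_utilization(d=100_000, V=2e8, L=4000, R=10e6)

# Stretching the link from 1 km to 100 km raises a from 0.0125 to 1.25
# and drops utilization from about 0.98 to about 0.29.
assert s1 > s2
```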
Table 11.1

27
Chapter 11 Link-Level Flow and Error Control
Figure 11.9

28
Chapter 11 Link-Level Flow and Error Control
Error-Free Sliding Window ARQ
l Case 1: W ≥ 2a + 1
Ack for frame 1 reaches A before A has
exhausted its window
l Case 2: W < 2a +1
A exhausts its window at t = W and cannot send
additional frames until t = 2a + 1

29
Chapter 11 Link-Level Flow and Error Control
Figure 11.10

30
Chapter 11 Link-Level Flow and Error Control
Normalized Throughput
S = 1                  W ≥ 2a + 1

S = W/(2a + 1)         W < 2a + 1

31
Chapter 11 Link-Level Flow and Error Control
Selective Reject ARQ
S = 1 – P                    W ≥ 2a + 1

S = W(1 – P)/(2a + 1)        W < 2a + 1

32
Chapter 11 Link-Level Flow and Error Control
Go-Back-N ARQ

S = (1 – P)/(1 + 2aP)                       W ≥ 2a + 1

S = W(1 – P)/((2a + 1)(1 – P + WP))         W < 2a + 1

33
Chapter 11 Link-Level Flow and Error Control
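The three normalized-throughput formulas above can be compared side by side; this is an illustrative sketch, and the sample values of a, W, and P are arbitrary.

```python
def s_stop_and_wait(a, P):
    """Stop-and-wait ARQ: S = (1 - P)/(1 + 2a)."""
    return (1 - P) / (1 + 2 * a)

def s_selective_reject(a, W, P):
    """Selective-reject ARQ normalized throughput."""
    if W >= 2 * a + 1:
        return 1 - P
    return W * (1 - P) / (2 * a + 1)

def s_go_back_n(a, W, P):
    """Go-back-N ARQ normalized throughput."""
    if W >= 2 * a + 1:
        return (1 - P) / (1 + 2 * a * P)
    return W * (1 - P) / ((2 * a + 1) * (1 - P + W * P))

# With no errors and a large enough window, both window schemes reach S = 1
print(s_go_back_n(5, 20, 0.0))          # 1.0
# With errors, selective reject beats go-back-N (no wasted retransmissions)
print(s_selective_reject(5, 7, 1e-3))   # ≈ 0.6357
print(s_go_back_n(5, 7, 1e-3))          # ≈ 0.6319
```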
Figure 11.11

34
Chapter 11 Link-Level Flow and Error Control
Figure 11.12

35
Chapter 11 Link-Level Flow and Error Control
Figure 11.13

36
Chapter 11 Link-Level Flow and Error Control
High-Level Data Link Control
l HDLC is the most important data link
control protocol
l Widely used, and forms the basis of other
data link control protocols

37
Chapter 11 Link-Level Flow and Error Control
Figure 11.15

38
Chapter 11 Link-Level Flow and Error Control
HDLC Operation
l Initialization
l Data transfer
l Disconnect

39
Chapter 11 Link-Level Flow and Error Control
Figure 11.16

40
Chapter 11 Link-Level Flow and Error Control
Chapter 12
TCP Traffic Control

1
Chapter 12 TCP Traffic Control
Introduction
l TCP Flow Control
l TCP Congestion Control
l Performance of TCP over ATM

2
Chapter 12 TCP Traffic Control
TCP Flow Control
l Uses a form of sliding window
l Differs from mechanism used in LLC,
HDLC, X.25, and others:
l Decouples acknowledgement of received data units
from granting permission to send more
l TCP’s flow control is known as a credit
allocation scheme:
l Each transmitted octet is considered to have a
sequence number
3
Chapter 12 TCP Traffic Control
TCP Header Fields for Flow Control
l Sequence number (SN) of first octet in
data segment
l Acknowledgement number (AN)
l Window (W)
l Acknowledgement contains AN = i, W = j:
l Octets through SN = i - 1 acknowledged
l Permission is granted to send W = j more octets,

i.e., octets i through i + j - 1

4
Chapter 12 TCP Traffic Control
Figure 12.1 TCP Credit
Allocation Mechanism

5
Chapter 12 TCP Traffic Control
Credit Allocation is Flexible
Suppose last message B issued was AN = i, W = j
l To increase credit to k (k > j) when no new
data, B issues AN = i, W = k
l To acknowledge segment containing m octets
(m < j), B issues AN = i + m, W = j - m

6
Chapter 12 TCP Traffic Control
Figure 12.2 Flow Control
Perspectives

7
Chapter 12 TCP Traffic Control
Credit Policy
l Receiver needs a policy for how much
credit to give sender
l Conservative approach: grant credit up to
limit of available buffer space
l May limit throughput in long-delay
situations
l Optimistic approach: grant credit based on
expectation of freeing space before data
arrives
8
Chapter 12 TCP Traffic Control
Effect of Window Size
W = TCP window size (octets)
R = Data rate (bps) at TCP source
D = Propagation delay (seconds)
l After TCP source begins transmitting, it
takes D seconds for first octet to arrive,
and D seconds for acknowledgement to
return
l TCP source could transmit at most 2RD
bits, or RD/4 octets
9
Chapter 12 TCP Traffic Control
Normalized Throughput S

S = 1                 W ≥ RD/4

S = 4W/(RD)           W < RD/4

10
Chapter 12 TCP Traffic Control
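The window-size limit above can be expressed as a small function. The values of R, D, and W in the example are hypothetical; the key point is that the round trip holds 2RD bits = RD/4 octets.

```python
def tcp_normalized_throughput(W, R, D):
    """W = window size (octets), R = data rate (bps), D = one-way delay (s).
    S = 1 if W >= RD/4 octets, else S = 4W/(RD)."""
    limit = R * D / 4          # bandwidth-delay product in octets
    return 1.0 if W >= limit else 4 * W / (R * D)

# Hypothetical 1 Mbps link with 100 ms one-way delay: limit = 25,000 octets
print(tcp_normalized_throughput(65535, 1e6, 0.1))  # 1.0  (window is large enough)
print(tcp_normalized_throughput(8192, 1e6, 0.1))   # 0.32768  (window-limited)
```

This is why long fat pipes need the window scale option (Figure 12.3): 64 Kbyte windows can be far below RD/4.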
Figure 12.3 Window Scale
Parameter

11
Chapter 12 TCP Traffic Control
Complicating Factors
l Multiple TCP connections are multiplexed over
same network interface, reducing R and
efficiency
l For multi-hop connections, D is the sum of
delays across each network plus delays at each
router
l If source data rate R exceeds data rate on one of
the hops, that hop will be a bottleneck
l Lost segments are retransmitted, reducing
throughput. Impact depends on retransmission
policy
12
Chapter 12 TCP Traffic Control
Retransmission Strategy
l TCP relies exclusively on positive
acknowledgements and retransmission on
acknowledgement timeout
l There is no explicit negative
acknowledgement
l Retransmission required when:
1. Segment arrives damaged, as indicated by
checksum error, causing receiver to discard
segment
2. Segment fails to arrive
13
Chapter 12 TCP Traffic Control
Timers
l A timer is associated with each segment as it is
sent
l If timer expires before segment acknowledged,
sender must retransmit
l Key Design Issue:
value of retransmission timer
l Too small: many unnecessary retransmissions,
wasting network bandwidth
l Too large: delay in handling lost segment

14
Chapter 12 TCP Traffic Control
Two Strategies
l Timer should be longer than round-trip
delay (send segment, receive ack)
l Delay is variable

Strategies:
1. Fixed timer
2. Adaptive

15
Chapter 12 TCP Traffic Control
Problems with Adaptive Scheme
l Peer TCP entity may accumulate
acknowledgements and not acknowledge
immediately
l For retransmitted segments, can’t tell
whether acknowledgement is response to
original transmission or retransmission
l Network conditions may change suddenly

16
Chapter 12 TCP Traffic Control
Adaptive Retransmission Timer
l Average Round-Trip Time (ARTT)
ARTT(K + 1) = 1/(K + 1) × Σ RTT(i), for i = 1 to K + 1

            = K/(K + 1) × ARTT(K) + 1/(K + 1) × RTT(K + 1)
17
Chapter 12 TCP Traffic Control
RFC 793 Exponential Averaging
Smoothed Round-Trip Time (SRTT)

SRTT(K + 1) = α × SRTT(K) + (1 – α) × RTT(K + 1)

The older the observation, the less it is
counted in the average.

18
Chapter 12 TCP Traffic Control
Figure 12.4
Exponential
Smoothing
Coefficients

19
Chapter 12 TCP Traffic Control
Figure 12.5
Exponential
Averaging

20
Chapter 12 TCP Traffic Control
RFC 793 Retransmission Timeout
RTO(K + 1) =
Min(UB, Max(LB, β × SRTT(K + 1)))
UB, LB: prechosen fixed upper and lower
bounds
Example values for α, β:
0.8 < α < 0.9 1.3 < β < 2.0
21
Chapter 12 TCP Traffic Control
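A minimal sketch of the RFC 793 timer: exponential averaging of RTT samples, then a bounded RTO. The constants below (α = 0.875, β = 2, bounds of 1 s and 60 s) are example values within the stated ranges, not values fixed by the slides.

```python
ALPHA, BETA = 0.875, 2.0   # example values: 0.8 < alpha < 0.9, 1.3 < beta < 2.0
LB, UB = 1.0, 60.0         # assumed fixed lower/upper bounds, in seconds

def update_srtt(srtt, rtt):
    """SRTT(K+1) = alpha * SRTT(K) + (1 - alpha) * RTT(K+1)."""
    return ALPHA * srtt + (1 - ALPHA) * rtt

def rto(srtt):
    """RTO = Min(UB, Max(LB, beta * SRTT))."""
    return min(UB, max(LB, BETA * srtt))

srtt = update_srtt(1.0, 3.0)   # a 3 s sample pulls SRTT from 1.0 to 1.25
print(srtt)                    # 1.25
print(rto(srtt))               # 2.5
```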
Implementation Policy Options
l Send
l Deliver
l Accept
l In-order
l In-window
l Retransmit
l First-only
l Batch
l Individual
l Acknowledge
l Immediate
l Cumulative
22
Chapter 12 TCP Traffic Control
TCP Congestion Control
l Dynamic routing can alleviate congestion by
spreading load more evenly
l But only effective for unbalanced loads and brief
surges in traffic
l Congestion can only be controlled by limiting
total amount of data entering network
l ICMP Source Quench message is crude and not
effective
l RSVP may help but not widely implemented

23
Chapter 12 TCP Traffic Control
TCP Congestion Control is Difficult
l IP is connectionless and stateless, with
no provision for detecting or controlling
congestion
l TCP only provides end-to-end flow
control
l No cooperative, distributed algorithm to
bind together various TCP entities

24
Chapter 12 TCP Traffic Control
TCP Flow and Congestion Control
l The rate at which a TCP entity can transmit is
determined by rate of incoming ACKs to
previous segments with new credit
l Rate of Ack arrival determined by round-trip
path between source and destination
l Bottleneck may be destination or internet
l Sender cannot tell which
l Only the internet bottleneck can be due to
congestion

25
Chapter 12 TCP Traffic Control
Figure 12.6 TCP
Segment Pacing

26
Chapter 12 TCP Traffic Control
Figure 12.7 TCP Flow and
Congestion Control

27
Chapter 12 TCP Traffic Control
Retransmission Timer
Management
Three Techniques to calculate retransmission
timer (RTO):
1. RTT Variance Estimation
2. Exponential RTO Backoff
3. Karn’s Algorithm

28
Chapter 12 TCP Traffic Control
RTT Variance Estimation
(Jacobson’s Algorithm)
3 sources of high variance in RTT
l If data rate is relatively low, transmission
delay will be relatively large, with larger
variance due to variance in packet size
l Load may change abruptly due to other
sources
l Peer may not acknowledge segments
immediately
29
Chapter 12 TCP Traffic Control
Jacobson’s Algorithm
SRTT(K + 1) = (1 – g) × SRTT(K) + g × RTT(K + 1)

SERR(K + 1) = RTT(K + 1) – SRTT(K)

SDEV(K + 1) = (1 – h) × SDEV(K) + h ×|SERR(K + 1)|

RTO(K + 1) = SRTT(K + 1) + f × SDEV(K + 1)

g = 0.125
h = 0.25
f = 2 or f = 4 (most current implementations use f = 4)
30
Chapter 12 TCP Traffic Control
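Jacobson's update rules translate directly into code; this sketch uses the coefficient values listed above (g = 0.125, h = 0.25, f = 4).

```python
G, H, F = 0.125, 0.25, 4   # slide's coefficients; most implementations use f = 4

def jacobson_update(srtt, sdev, rtt):
    """One step of Jacobson's algorithm.
    Returns (new SRTT, new SDEV, RTO)."""
    serr = rtt - srtt                       # SERR(K+1) = RTT(K+1) - SRTT(K)
    srtt = (1 - G) * srtt + G * rtt         # smoothed RTT
    sdev = (1 - H) * sdev + H * abs(serr)   # smoothed deviation
    return srtt, sdev, srtt + F * sdev      # RTO = SRTT + f * SDEV

srtt, sdev, rto = jacobson_update(1.0, 0.5, 2.0)
print(srtt, sdev, rto)   # 1.125 0.625 3.625
```

Because RTO tracks the deviation as well as the mean, it widens automatically when RTT becomes volatile, unlike the fixed β multiplier of RFC 793.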
Figure 12.8
Jacobson’s RTO
Calculations

31
Chapter 12 TCP Traffic Control
Two Other Factors
Jacobson’s algorithm can significantly
improve TCP performance, but:
l What RTO to use for retransmitted
segments?
ANSWER: exponential RTO backoff algorithm
l Which round-trip samples to use as input
to Jacobson’s algorithm?
ANSWER: Karn’s algorithm
32
Chapter 12 TCP Traffic Control
Exponential RTO Backoff
l Increase RTO each time the same segment
retransmitted – backoff process
l Multiply RTO by constant:

RTO = q × RTO
l q = 2 is called binary exponential backoff

33
Chapter 12 TCP Traffic Control
Which Round-trip Samples?
l If an ack is received for retransmitted
segment, there are 2 possibilities:
1. Ack is for first transmission
2. Ack is for second transmission
l TCP source cannot distinguish 2 cases
l No valid way to calculate RTT:
– From first transmission to ack, or
– From second transmission to ack?
34
Chapter 12 TCP Traffic Control
Karn’s Algorithm
l Do not use measured RTT to update SRTT
and SDEV
l Calculate backoff RTO when a
retransmission occurs
l Use backoff RTO for segments until an
ack arrives for a segment that has not been
retransmitted
l Then use Jacobson’s algorithm to calculate
RTO
35
Chapter 12 TCP Traffic Control
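Karn's rule and exponential RTO backoff work together, and can be sketched as a small state machine. The `KarnTimer` class and the `smooth` callback below are illustrative inventions, not a real TCP implementation.

```python
class KarnTimer:
    """Sketch of Karn's rule with binary exponential backoff (q = 2).
    RTT samples update the smoothed estimate only when the acked
    segment was never retransmitted."""

    def __init__(self, rto=3.0, q=2.0):
        self.rto = rto
        self.q = q

    def on_retransmit(self):
        self.rto *= self.q   # backoff: RTO = q * RTO

    def on_ack(self, rtt_sample, was_retransmitted, smooth):
        if was_retransmitted:
            return           # ambiguous sample: discard it (Karn's rule)
        # ack for a never-retransmitted segment: resume normal RTO calculation
        self.rto = smooth(rtt_sample)

t = KarnTimer(rto=3.0)
t.on_retransmit()
print(t.rto)                                   # 6.0 (backed off)
t.on_ack(1.0, was_retransmitted=True, smooth=lambda r: 2 * r)
print(t.rto)                                   # 6.0 (sample discarded)
t.on_ack(1.0, was_retransmitted=False, smooth=lambda r: 2 * r)
print(t.rto)                                   # 2.0 (normal calculation resumes)
```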
Window Management
l Slow start
l Dynamic window sizing on congestion
l Fast retransmit
l Fast recovery
l Limited transmit

36
Chapter 12 TCP Traffic Control
Slow Start
awnd = MIN[ credit, cwnd]
where
awnd = allowed window in segments
cwnd = congestion window in segments
credit = amount of unused credit granted in most
recent ack
cwnd = 1 for a new connection and increased
by 1 for each ack received, up to a
maximum
37
Chapter 12 TCP Traffic Control
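The slow-start rules above can be sketched in a few lines. Note that "increase cwnd by 1 per ack" is exponential growth per round trip: a full window of acks roughly doubles cwnd each RTT.

```python
def awnd(credit, cwnd):
    """Allowed window: awnd = MIN[credit, cwnd], all in segments."""
    return min(credit, cwnd)

def slow_start(acks, cwnd_max):
    """cwnd starts at 1 segment for a new connection and grows by
    1 segment per ack received, up to a maximum."""
    cwnd = 1
    for _ in range(acks):
        cwnd = min(cwnd + 1, cwnd_max)
    return cwnd

print(slow_start(0, 64))   # 1  (new connection)
print(slow_start(5, 64))   # 6  (one segment per ack added)
print(awnd(4, 6))          # 4  (receiver credit is the binding limit here)
```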
Figure 12.9 Effect of
Slow Start

38
Chapter 12 TCP Traffic Control
Dynamic Window Sizing on Congestion
l A lost segment indicates congestion
l Prudent to reset cwnd = 1 and begin slow
start process
l May not be conservative enough: “easy to
drive a network into saturation but hard for
the net to recover” (Jacobson)
l Instead, use slow start with linear growth
in cwnd

39
Chapter 12 TCP Traffic Control
Figure 12.10 Slow
Start and
Congestion
Avoidance

40
Chapter 12 TCP Traffic Control
Figure 12.11 Illustration of Slow
Start and Congestion Avoidance

41
Chapter 12 TCP Traffic Control
Fast Retransmit
l RTO is generally noticeably longer than actual
RTT
l If a segment is lost, TCP may be slow to
retransmit
l TCP rule: if a segment is received out of order,
an ack must be issued immediately for the last in-
order segment
l Fast Retransmit rule: if 4 acks received for same
segment, highly likely it was lost, so retransmit
immediately, rather than waiting for timeout

42
Chapter 12 TCP Traffic Control
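The fast retransmit trigger is just duplicate-ack counting; this sketch (an illustration, not a full TCP) retransmits on the fourth ack carrying the same acknowledgement number, i.e. three duplicates.

```python
class FastRetransmit:
    """Count acks carrying the same AN; on the 4th (3 duplicates),
    assume the next expected segment was lost."""
    THRESHOLD = 4

    def __init__(self):
        self.last_an = None
        self.count = 0

    def on_ack(self, an):
        """Returns True when the segment should be retransmitted now."""
        if an == self.last_an:
            self.count += 1
        else:
            self.last_an, self.count = an, 1
        return self.count == self.THRESHOLD

fr = FastRetransmit()
print([fr.on_ack(100) for _ in range(4)])  # [False, False, False, True]
```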
Figure 12.12 Fast
Retransmit

43
Chapter 12 TCP Traffic Control
Fast Recovery
l When TCP retransmits a segment using Fast
Retransmit, a segment was assumed lost
l Congestion avoidance measures are appropriate
at this point
l E.g., slow-start/congestion avoidance procedure
l This may be unnecessarily conservative since
multiple acks indicate segments are getting
through
l Fast Recovery: retransmit lost segment, cut cwnd
in half, proceed with linear increase of cwnd
l This avoids initial exponential slow-start
44
Chapter 12 TCP Traffic Control
Figure 12.13 Fast
Recovery Example

45
Chapter 12 TCP Traffic Control
Limited Transmit
l If congestion window at sender is small,
fast retransmit may not get triggered,
e.g., cwnd = 3
1. Under what circumstances does sender have
small congestion window?
2. Is the problem common?
3. If the problem is common, why not reduce
number of duplicate acks needed to trigger
retransmit?

46
Chapter 12 TCP Traffic Control
Limited Transmit Algorithm
Sender can transmit new segment when 3
conditions are met:
1. Two consecutive duplicate acks are
received
2. Destination advertised window allows
transmission of segment
3. Amount of outstanding data after sending
is less than or equal to cwnd + 2
47
Chapter 12 TCP Traffic Control
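The three limited-transmit conditions can be checked with one predicate. Working in whole segments (an assumption made here for simplicity; the real rules are stated in octets of outstanding data):

```python
def limited_transmit_ok(dup_acks, window_segments, outstanding_segments, cwnd):
    """True when a new segment may be sent under limited transmit.
    outstanding_segments counts segments in flight before the new send."""
    after = outstanding_segments + 1
    return (dup_acks == 2                 # 1. two consecutive duplicate acks
            and after <= window_segments  # 2. advertised window permits it
            and after <= cwnd + 2)        # 3. outstanding data <= cwnd + 2

# cwnd = 3, 3 segments in flight, ample receiver window, 2 dup acks: send
print(limited_transmit_ok(2, 10, 3, 3))   # True
print(limited_transmit_ok(1, 10, 3, 3))   # False (only one duplicate ack)
```

This lets a sender with a tiny cwnd generate enough duplicate acks at the receiver to eventually trigger fast retransmit, rather than stalling until timeout.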
Performance of TCP over ATM
l How best to manage TCP’s segment size,
window management and congestion
control…
l …at the same time as ATM’s quality of
service and traffic control policies
l TCP may operate end-to-end over one
ATM network, or there may be multiple
ATM LANs or WANs with non-ATM
networks
48
Chapter 12 TCP Traffic Control
Figure 12.14 TCP/IP over
AAL5/ATM

49
Chapter 12 TCP Traffic Control
Performance of TCP over UBR
l Buffer capacity at ATM switches is a
critical parameter in assessing TCP
throughput performance
l Insufficient buffer capacity results in lost
TCP segments and retransmissions

50
Chapter 12 TCP Traffic Control
Effect of Switch Buffer Size
l Data rate of 141 Mbps
l End-to-end propagation delay of 6 μs
l IP packet sizes of 512 to 9180 octets
l TCP window sizes from 8 Kbytes to 64 Kbytes
l ATM switch buffer size per port from 256 to
8000 cells
l One-to-one mapping of TCP connections to
ATM virtual circuits
l TCP sources have infinite supply of data ready
51
Chapter 12 TCP Traffic Control
Figure 12.15
Performance of TCP
over UBR

52
Chapter 12 TCP Traffic Control
Observations
l If a single cell is dropped, other cells in the
same IP datagram are unusable, yet ATM
network forwards these useless cells to
destination
l Smaller buffers increase the probability of
dropped cells
l Larger segment size increases number of
useless cells transmitted if a single cell
dropped

53
Chapter 12 TCP Traffic Control
Partial Packet and Early Packet
Discard
l Reduce the transmission of useless cells
l Work on a per-virtual circuit basis
l Partial Packet Discard
– If a cell is dropped, then drop all subsequent
cells in that segment (i.e., look for cell with
SDU type bit set to one)
l Early Packet Discard
– When a switch buffer reaches a threshold
level, preemptively discard all cells in a
segment
54
Chapter 12 TCP Traffic Control
Selective Drop
l Ideally, N/V cells buffered for each of the
V virtual circuits
l W(i) = N(i)/(N/V) = N(i) × V/N
l If N > R and W(i) > Z
then drop next new packet on VC i
l Z is a parameter to be chosen

55
Chapter 12 TCP Traffic Control
Figure 12.16 ATM Switch Buffer
Layout

56
Chapter 12 TCP Traffic Control
Fair Buffer Allocation
l More aggressive dropping of packets as
congestion increases
l Drop new packet when:

N > R and W(i) > Z × (B – R)/(N – R)

57
Chapter 12 TCP Traffic Control
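The selective-drop weight and the fair buffer allocation condition combine into a short test. The numbers in the example are hypothetical; N is current buffer occupancy, R the threshold, B total buffer size, V the number of VCs, and Z a tuning parameter.

```python
def weight(n_i, n, v):
    """W(i) = N(i)/(N/V): VC i's buffer occupancy relative to a fair share."""
    return n_i * v / n

def fba_drop(n, r, n_i, v, z, b):
    """Fair buffer allocation: drop a new packet on VC i when the buffer
    is past threshold R AND VC i exceeds Z * (B - R)/(N - R) times its
    fair share -- the bound tightens as occupancy N grows."""
    return n > r and weight(n_i, n, v) > z * (b - r) / (n - r)

# 10 VCs, buffer 90/100 full, threshold 50; VC i holds 30 cells
print(weight(30, 90, 10))               # ≈ 3.33 (well over its fair share)
print(fba_drop(90, 50, 30, 10, 1.0, 100))  # True: drop its next packet
print(fba_drop(40, 50, 30, 10, 1.0, 100))  # False: below threshold R
```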
TCP over ABR
l Good performance of TCP over UBR can be
achieved with minor adjustments to switch
mechanisms
l This reduces the incentive to use the more
complex and more expensive ABR service
l Performance and fairness of ABR quite sensitive
to some ABR parameter settings
l Overall, ABR does not provide a significant
performance advantage over the simpler and less
expensive UBR-EPD or UBR-EPD-FBA

58
Chapter 12 TCP Traffic Control
Chapter 13
Traffic and Congestion Control
in ATM Networks

1
Chapter 13 Traffic and Congestion Control in ATM Networks
Introduction
l Control needed to prevent switch buffer
overflow
l High speed and small cell size gives
different problems from other networks
l Limited number of overhead bits
l ITU-T specified restricted initial set
– I.371
l ATM Forum Traffic Management
Specification 4.1
2
Chapter 13 Traffic and Congestion Control in ATM Networks
Overview
l Congestion problem
l Framework adopted by ITU-T and ATM forum
– Control schemes for delay sensitive traffic
l Voice & video
– Not suited to bursty traffic
– Traffic control
– Congestion control
l Bursty traffic
– Available Bit Rate (ABR)
– Guaranteed Frame Rate (GFR)
3
Chapter 13 Traffic and Congestion Control in ATM Networks
Requirements for ATM Traffic
and Congestion Control
l Most packet switched and frame relay
networks carry non-real-time bursty data
– No need to replicate timing at exit node
– Simple statistical multiplexing
– User Network Interface capacity slightly
greater than average of channels
l Congestion control tools from these
technologies do not work in ATM
4
Chapter 13 Traffic and Congestion Control in ATM Networks
Problems with ATM Congestion
Control
l Most traffic not amenable to flow control
– Voice & video can not stop generating
l Feedback slow
– Small cell transmission time vs. propagation delay
l Wide range of applications
– From few kbps to hundreds of Mbps
– Different traffic patterns
– Different network services
l High speed switching and transmission
– Volatile congestion and traffic control
5
Chapter 13 Traffic and Congestion Control in ATM Networks
Key Performance Issues: Latency/Speed Effects
l E.g. data rate 150 Mbps
l Takes (53 × 8 bits)/(150 × 10⁶ bps) = 2.8 × 10⁻⁶ seconds to
insert a cell
l Transfer time depends on number of intermediate
switches, switching time and propagation delay.
Assuming no switching delay and speed-of-light
propagation, round-trip delay is 48 × 10⁻³ sec across USA
l A dropped cell notified by return message will arrive
after source has transmitted N further cells
l N = (48 × 10⁻³ seconds)/(2.8 × 10⁻⁶ seconds per cell)
= 1.7 × 10⁴ cells = 7.2 × 10⁶ bits
l i.e. over 7 Mbits
6
Chapter 13 Traffic and Congestion Control in ATM Networks
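The arithmetic on the previous slide is easy to verify: at 150 Mbps with a 48 ms round trip, roughly 17,000 cells (over 7 Mbits) are already in flight before any loss notification can arrive.

```python
CELL_BITS = 53 * 8   # one ATM cell = 53 octets

def cells_in_flight(rate_bps, rtt_s):
    """Number of cells a source emits during one round-trip time."""
    insertion = CELL_BITS / rate_bps   # time to insert one cell on the link
    return rtt_s / insertion

n = cells_in_flight(150e6, 48e-3)
print(round(n))               # 16981 cells, i.e. ≈ 1.7 × 10^4
print(round(n * CELL_BITS))   # 7200000 bits, i.e. 7.2 × 10^6
```

This bandwidth-delay product is why per-cell feedback is hopeless in ATM and why rate-based schemes such as ABR were adopted instead.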
Key Performance Issues: Cell Delay Variation
l For digitized voice delay across network must be small
l Rate of delivery must be constant
l Variations will occur
l Dealt with by Time Reassembly of CBR cells (see next
slide)
l Results in cells delivered at CBR with occasional gaps
due to dropped cells
l Subscriber requests minimum cell delay variation from
network provider
– Increase data rate at UNI relative to load
– Increase resources within network

7
Chapter 13 Traffic and Congestion Control in ATM Networks
Time Reassembly of CBR Cells

8
Chapter 13 Traffic and Congestion Control in ATM Networks
Network Contribution to Cell
Delay Variation
l In packet switched network
– Queuing effects at each intermediate switch
– Processing time for header and routing
l Less for ATM networks
– Minimal processing overhead at switches
l Fixed cell size, header format
l No flow control or error control processing
– ATM switches have extremely high throughput
– Congestion can cause cell delay variation
l Build up of queuing effects at switches
l Total load accepted by network must be controlled
9
Chapter 13 Traffic and Congestion Control in ATM Networks
Cell Delay Variation at UNI
l Caused by processing in three layers of
ATM model
– See next slide for details
l None of these delays can be predicted
l None follow repetitive pattern
l So, random element exists in time interval
between reception by ATM stack and
transmission
10
Chapter 13 Traffic and Congestion Control in ATM Networks
Origins of Cell Delay Variation

11
Chapter 13 Traffic and Congestion Control in ATM Networks
ATM Traffic-Related Attributes
l Six service categories (see chapter 5)
– Constant bit rate (CBR)
– Real time variable bit rate (rt-VBR)
– Non-real-time variable bit rate (nrt-VBR)
– Unspecified bit rate (UBR)
– Available bit rate (ABR)
– Guaranteed frame rate (GFR)
l Characterized by ATM attributes in four categories
– Traffic descriptors
– QoS parameters
– Congestion
– Other

12
Chapter 13 Traffic and Congestion Control in ATM Networks
ATM Service Category
Attributes

13
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Parameters
l Traffic pattern of flow of cells
– Intrinsic nature of traffic
l Source traffic descriptor
– Modified inside network
l Connection traffic descriptor

14
Chapter 13 Traffic and Congestion Control in ATM Networks
Source Traffic Descriptor (1)
l Peak cell rate
– Upper bound on traffic that can be submitted
– Defined in terms of minimum spacing between cells T
– PCR = 1/T
– Mandatory for CBR and VBR services
l Sustainable cell rate
– Upper bound on average rate
– Calculated over large time scale relative to T
– Required for VBR
– Enables efficient allocation of network resources between VBR
sources
– Only useful if SCR < PCR

15
Chapter 13 Traffic and Congestion Control in ATM Networks
Source Traffic Descriptor (2)
l Maximum burst size
– Max number of cells that can be sent at PCR
– If bursts are at MBS, idle gaps must be enough to keep overall
rate below SCR
– Required for VBR
l Minimum cell rate
– Min commitment requested of network
– Can be zero
– Used with ABR and GFR
– ABR & GFR provide rapid access to spare network capacity up
to PCR
– PCR – MCR represents elastic component of data flow
– Shared among ABR and GFR flows
16
Chapter 13 Traffic and Congestion Control in ATM Networks
Source Traffic Descriptor (3)
l Maximum frame size
– Max number of cells in frame that can be
carried over GFR connection
– Only relevant in GFR

17
Chapter 13 Traffic and Congestion Control in ATM Networks
Connection Traffic Descriptor
l Includes source traffic descriptor plus:-
l Cell delay variation tolerance
– Amount of variation in cell delay introduced by
network interface and UNI
– Bound on delay variability due to slotted nature of
ATM, physical layer overhead and layer functions
(e.g. cell multiplexing)
– Represented by time variable τ
l Conformance definition
– Specify conforming cells of connection at UNI
– Enforced by dropping or marking cells over definition
18
Chapter 13 Traffic and Congestion Control in ATM Networks
Quality of Service Parameters: maxCTD
l Cell transfer delay (CTD)
– Time between transmission of first bit of cell at source
and reception of last bit at destination
– Typically has probability density function (see next
slide)
– Fixed delay due to propagation etc.
– Cell delay variation due to buffering and scheduling
– Maximum cell transfer delay (maxCTD) is the max
requested delay for connection
– Fraction α of cells exceed threshold
l Discarded or delivered late
19
Chapter 13 Traffic and Congestion Control in ATM Networks
Quality of Service Parameters: Peak-to-peak CDV & CLR
l Peak-to-peak Cell Delay Variation
– Remaining (1-α) cells within QoS
– Delay experienced by these cells is between
fixed delay and maxCTD
– This is peak-to-peak CDV
– CDVT is an upper bound on CDV
l Cell loss ratio
– Ratio of cells lost to cells transmitted
20
Chapter 13 Traffic and Congestion Control in ATM Networks
Cell Transfer Delay PDF

21
Chapter 13 Traffic and Congestion Control in ATM Networks
Congestion Control Attributes
l Only feedback is defined
– ABR and GFR
– Actions taken by network and end systems to
regulate traffic submitted
l ABR flow control
– Adaptively share available bandwidth

22
Chapter 13 Traffic and Congestion Control in ATM Networks
Other Attributes
l Behaviour class selector (BCS)
– Support for IP differentiated services (chapter
16)
– Provides different service levels among UBR
connections
– Associate each connection with a behaviour
class
– May include queuing and scheduling
l Minimum desired cell rate
23
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Management
Framework
l Objectives of ATM layer traffic and
congestion control
– Support QoS for all foreseeable services
– Not rely on network specific AAL protocols
nor higher layer application specific protocols
– Minimize network and end system complexity
– Maximize network utilization

24
Chapter 13 Traffic and Congestion Control in ATM Networks
Timing Levels
l Cell insertion time
l Round trip propagation time
l Connection duration
l Long term

25
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Control and Congestion
Functions

26
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Control Strategy
l Determine whether new ATM connection
can be accommodated
l Agree performance parameters with
subscriber
l Traffic contract between subscriber and
network
l This is congestion avoidance
l If it fails congestion may occur
– Invoke congestion control
27
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Control
l Resource management using virtual paths
l Connection admission control
l Usage parameter control
l Selective cell discard
l Traffic shaping
l Explicit forward congestion indication

28
Chapter 13 Traffic and Congestion Control in ATM Networks
Resource Management Using
Virtual Paths
l Allocate resources so that traffic is
separated according to service
characteristics
l Virtual path connection (VPC) are
groupings of virtual channel connections
(VCC)

29
Chapter 13 Traffic and Congestion Control in ATM Networks
Applications
l User-to-user applications
– VPC between UNI pair
– No knowledge of QoS for individual VCC
– User checks that VPC can take VCCs’ demands
l User-to-network applications
– VPC between UNI and network node
– Network aware of and accommodates QoS of VCCs
l Network-to-network applications
– VPC between two network nodes
– Network aware of and accommodates QoS of VCCs
30
Chapter 13 Traffic and Congestion Control in ATM Networks
Resource Management
Concerns
l Cell loss ratio
l Max cell transfer delay
l Peak to peak cell delay variation
l All affected by resources devoted to VPC
l If VCC goes through multiple VPCs,
performance depends on consecutive VPCs and
on node performance
– VPC performance depends on capacity of VPC and
traffic characteristics of VCCs
– VCC related function depends on
switching/processing speed and priority
31
Chapter 13 Traffic and Congestion Control in ATM Networks
VCCs and VPCs Configuration

32
Chapter 13 Traffic and Congestion Control in ATM Networks
Allocation of Capacity to VPC
l Aggregate peak demand
– May set VPC capacity (data rate) to total of VCC peak rates
l Each VCC can give QoS to accommodate peak demand
l VPC capacity may not be fully used
l Statistical multiplexing
– VPC capacity >= average data rate of VCCs but < aggregate
peak demand
– Greater CDV and CTD
– May have greater CLR
– More efficient use of capacity
– For VCCs requiring lower QoS
– Group VCCs of similar traffic together

33
Chapter 13 Traffic and Congestion Control in ATM Networks
Connection Admission Control

l User must specify service required in both
directions
– Category
– Connection traffic descriptor
l Source traffic descriptor
l CDVT
l Requested conformance definition
– QoS parameter requested and acceptable value
l Network accepts connection only if it can
commit resources to support requests

34
Chapter 13 Traffic and Congestion Control in ATM Networks
Procedures to Set Traffic
Control Parameters

35
Chapter 13 Traffic and Congestion Control in ATM Networks
Cell Loss Priority
l Two levels requested by user
– Priority for individual cell indicated by CLP
bit in header
– If two levels are used, traffic parameters for
both flows specified
l High priority CLP = 0
l All traffic CLP = 0 + 1

– May improve network resource allocation

36
Chapter 13 Traffic and Congestion Control in ATM Networks
Usage Parameter Control
l UPC
l Monitors connection for conformity to
traffic contract
l Protect network resources from overload
on one connection
l Done at VPC or VCC level
l VPC level more important
– Network resources allocated at this level
37
Chapter 13 Traffic and Congestion Control in ATM Networks
Location of UPC Function

38
Chapter 13 Traffic and Congestion Control in ATM Networks
Peak Cell Rate Algorithm
l How UPC determines whether user is
complying with contract
l Control of peak cell rate and CDVT
– Complies if peak does not exceed agreed peak
– Subject to CDV within agreed bounds
– Generic cell rate algorithm
– Leaky bucket algorithm

39
Chapter 13 Traffic and Congestion Control in ATM Networks
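The virtual scheduling form of the generic cell rate algorithm can be sketched directly: GCRA(T, τ), where T = 1/PCR is the increment and τ the tolerance (CDVT). Arrival times below are in arbitrary time units and the example values are illustrative.

```python
def gcra(arrival_times, T, tau):
    """Virtual scheduling form of GCRA(T, tau).
    Returns one True (conforming) / False (non-conforming) per arrival."""
    tat = None          # theoretical arrival time of next cell
    results = []
    for ta in arrival_times:
        if tat is None or ta >= tat:
            tat = ta + T            # cell on time or late: TAT = ta + T
            results.append(True)
        elif ta >= tat - tau:
            tat += T                # early but within tolerance: conforming
            results.append(True)
        else:
            results.append(False)   # too early: non-conforming, TAT unchanged
    return results

# T = 4.5, tau = 0.5: a cell at t = 1 is far too early, a cell at t = 9 is fine
print(gcra([0, 1, 9], 4.5, 0.5))    # [True, False, True]
print(gcra([0, 4.2], 4.5, 0.5))     # [True, True] (within tolerance)
```

The continuous leaky bucket form on the following slides is equivalent: the bucket level plays the role of TAT − ta.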
Generic Cell Rate Algorithm

40
Chapter 13 Traffic and Congestion Control in ATM Networks
Virtual Scheduling Algorithm

41
Chapter 13 Traffic and Congestion Control in ATM Networks
Cell Arrival at UNI (T = 4.5δ)

42
Chapter 13 Traffic and Congestion Control in ATM Networks
Leaky Bucket Algorithm

43
Chapter 13 Traffic and Congestion Control in ATM Networks
Continuous Leaky Bucket
Algorithm

44
Chapter 13 Traffic and Congestion Control in ATM Networks
Sustainable Cell Rate Algorithm
l Operational definition of relationship
between sustainable cell rate and burst
tolerance
l Used by UPC to monitor compliance
l Same algorithm as peak cell rate

45
Chapter 13 Traffic and Congestion Control in ATM Networks
UPC Actions
l Compliant cells pass, non-compliant cells discarded
l If no additional resources allocated to CLP=1 traffic,
CLP=0 cells are checked for conformance
l If two level cell loss priority cell with:
– CLP=0 and conforms passes
– CLP=0 non-compliant for CLP=0 traffic but compliant for
CLP=0+1 is tagged and passes
– CLP=0 non-compliant for CLP=0 and CLP=0+1 traffic discarded
– CLP=1 compliant for CLP=0+1 passes
– CLP=1 non-compliant for CLP=0+1 discarded

46
Chapter 13 Traffic and Congestion Control in ATM Networks
Possible Actions of UPC

47
Chapter 13 Traffic and Congestion Control in ATM Networks
Selective Cell Discard
l Starts when network, at point beyond
UPC, discards CLP=1 cells
l Discard low priority cells to protect high
priority cells
l No distinction between cells labelled low
priority by source and those tagged by
UPC

48
Chapter 13 Traffic and Congestion Control in ATM Networks
Traffic Shaping
l GCRA is a form of traffic policing
– Flow of cells regulated
– Cells exceeding performance level tagged or
discarded
l Traffic shaping used to smooth traffic flow
– Reduce cell clumping
– Fairer allocation of resources
– Reduced average delay
49
Chapter 13 Traffic and Congestion Control in ATM Networks
Token Bucket for Traffic
Shaping

50
Chapter 13 Traffic and Congestion Control in ATM Networks
Explicit Forward Congestion
Indication
l Essentially same as frame relay
l If a node is experiencing congestion, it sets
the explicit forward congestion indication in
cell headers
– Tells users that congestion avoidance should
be initiated in this direction
– User may take action at higher level

51
Chapter 13 Traffic and Congestion Control in ATM Networks
ABR Traffic Management
l QoS for CBR, VBR based on traffic contract and
UPC described previously
l No congestion feedback to source
l Open-loop control
l Not suited to non-real-time applications
– File transfer, web access, RPC, distributed file
systems
– No well defined traffic characteristics except PCR
– PCR not enough to allocate resources
l Use best efforts or closed-loop control
52
Chapter 13 Traffic and Congestion Control in ATM Networks
Best Efforts
l Share unused capacity between
applications
l As congestion goes up:
– Cells are lost
– Sources back off and reduce rate
– Fits well with TCP techniques (chapter 12)
– Inefficient
l Cells dropped causing re-transmission

53
Chapter 13 Traffic and Congestion Control in ATM Networks
Closed-Loop Control
l Sources share capacity not used by CBR
and VBR
l Provide feedback to sources to adjust load
l Avoid cell loss
l Share capacity fairly
l Used for ABR

54
Chapter 13 Traffic and Congestion Control in ATM Networks
Characteristics of ABR
l ABR connections share available capacity
– Access instantaneous capacity unused by CBR/VBR
– Increases utilization without affecting CBR/VBR QoS
l Share used by single ABR connection is dynamic
– Varies between agreed MCR and PCR
l Network gives feedback to ABR sources
– ABR flow limited to available capacity
– Buffers absorb excess traffic prior to arrival of
feedback
l Low cell loss
– Major distinction from UBR
55
Chapter 13 Traffic and Congestion Control in ATM Networks
Feedback Mechanisms (1)
l Cell transmission rate characterized by:
– Allowable cell rate
l Current rate
– Minimum cell rate
l Min for ACR
l May be zero

– Peak cell rate


l Max for ACR
– Initial cell rate
56
Chapter 13 Traffic and Congestion Control in ATM Networks
Feedback Mechanisms (2)
l Start with ACR=ICR
l Adjust ACR based on feedback
l Feedback in resource management (RM)
cells
– Cell contains three fields for feedback
l Congestion indicator bit (CI)
l No increase bit (NI)

l Explicit cell rate field (ER)

57
Chapter 13 Traffic and Congestion Control in ATM Networks
Source Reaction to Feedback
l If CI=1
– Reduce ACR by amount proportional to
current ACR but not less than MCR
l Else if NI=0
– Increase ACR by amount proportional to PCR
but not more than PCR
l If ACR>ER, set ACR ← max[ER, MCR]
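These three rules can be sketched in Python. `rif` and `rdf` stand in for the ATM Rate Increase/Decrease Factors; the default values here are illustrative, not mandated:

```python
def update_acr(acr, ci, ni, er, pcr, mcr, rif=1/16, rdf=1/16):
    """Adjust the Allowed Cell Rate from one backward RM cell (sketch).

    ci/ni are the Congestion Indicator and No Increase bits;
    er is the Explicit Rate field; rif/rdf are illustrative factors.
    """
    if ci:                               # congestion: decrease proportional
        acr = max(acr - rdf * acr, mcr)  # to current ACR, never below MCR
    elif not ni:                         # no congestion, increase allowed:
        acr = min(acr + rif * pcr, pcr)  # proportional to PCR, capped at PCR
    if acr > er:                         # obey explicit rate,
        acr = max(er, mcr)               # but never drop below MCR
    return acr
```

For example, with CI=1 the rate falls multiplicatively; with CI=0 and NI=0 it rises additively toward PCR, which gives the familiar sawtooth of Figure "Variations in ACR".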

Variations in ACR

Cell Flow on ABR
l Two types of cell
– Data & resource management (RM)
l Source receives regular RM cells
– Feedback
l Bulk of RM cells initiated by source
– One forward RM cell (FRM) per (Nrm-1) data cells
l Nrm preset – usually 32
– Each FRM is returned by destination as backwards RM (BRM)
cell
– FRM typically has CI=0, NI=0 or 1, and ER set to the desired
transmission rate in the range ICR ≤ ER ≤ PCR
– Any field may be changed by switch or destination before return

ATM Switch Rate Control
Feedback
l EFCI marking
– Explicit forward congestion indication
– Causes destination to set CI bit in BRM
l Relative rate marking
– Switch directly sets CI or NI bit of RM
– If set in FRM, remains set in BRM
– Faster response by setting bit in passing BRM
– Fastest by generating new BRM with bit set
l Explicit rate marking
– Switch reduces value of ER in FRM or BRM
Flow of Data and RM Cells

ABR Feedback v TCP ACK
l ABR feedback controls rate of
transmission
– Rate control
l TCP feedback controls window size
– Credit control
l ABR feedback from switches or
destination
l TCP feedback from destination only
RM Cell
Format

RM Cell Format Notes
l ATM header has PT=110 to indicate RM cell
l On virtual channel VPI and VCI same as data cells on
connection
l On virtual path VPI same, VCI=6
l Protocol id identifies service using RM (ABR=1)
l Message type
– Direction FRM=0, BRM=1
– BECN cell. Source (BN=0) or switch/destination (BN=1)
– CI (=1 for congestion)
– NI (=1 for no increase)
– Request/Acknowledge (not used in ATM forum spec)

Initial Values of RM Cell Fields

ABR Parameters

ABR Capacity Allocation
l ATM switch must perform:
– Congestion control
l Monitor queue length
– Fair capacity allocation
l Throttle back connections using more than fair
share
l ATM rate control signals are explicit
l TCP congestion signals are implicit
– Increasing delay and cell loss
Congestion Control Algorithms – Binary Feedback
l Use only EFCI, CI and NI bits
l Switch monitors buffer utilization
l When congestion approaches, binary notification
– Set EFCI on forward data cells or CI or NI on FRM or
BRM
l Three approaches to which to notify
– Single FIFO queue
– Multiple queues
– Fair share notification

Single FIFO Queue
l When buffer use exceeds threshold (e.g. 80%)
– Switch starts issuing binary notifications
– Continues until buffer use falls below threshold
– Can have two thresholds
l One for start and one for stop
l Stops continuous on/off switching
– Biased against connections passing through more
switches
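The two-threshold variant can be sketched as a tiny state machine; the 80%/60% thresholds below are illustrative values, not part of any spec:

```python
class BinaryNotifier:
    """Single-FIFO congestion notification with separate start/stop
    thresholds (hysteresis avoids continuous on/off switching) -- sketch."""

    def __init__(self, start=0.8, stop=0.6):
        self.start, self.stop = start, stop
        self.notifying = False

    def update(self, occupancy):
        """occupancy is buffer use as a fraction; returns notification state."""
        if not self.notifying and occupancy >= self.start:
            self.notifying = True            # begin issuing EFCI/CI notifications
        elif self.notifying and occupancy < self.stop:
            self.notifying = False           # stop only once clearly below start
        return self.notifying
```

Note that occupancy between the two thresholds keeps whatever state the switch is already in, which is exactly what suppresses the on/off flapping of a single-threshold scheme.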

Multiple Queues
l Separate queue for each VC or group of VCs
l Separate threshold on each queue
l Only connections with long queues get binary
notifications
– Fair
– Badly behaved source does not affect other VCs
– Delay and loss behaviour of individual VCs separated
l Can have different QoS on different VCs

Fair Share
l Selective feedback or intelligent marking
l Try to allocate capacity dynamically
l E.g.
l fairshare =(target rate)/(number of connections)
l Mark any cells where CCR>fairshare

Explicit Rate Feedback
Schemes
l Compute fair share of capacity for each VC
l Determine current load or congestion
l Compute explicit rate (ER) for each connection
and send to source
l Three algorithms
– Enhanced proportional rate control algorithm
l EPRCA
– Explicit rate indication for congestion avoidance
l ERICA
– Congestion avoidance using proportional control
l CAPC
Enhanced Proportional Rate
Control Algorithm(EPRCA)
l Switch tracks average value of current load on
each connection
– Mean allowed cell rate (MACR)
– MACR(I) = (1-α)*MACR(I-1) + α*CCR(I)
– CCR(I) is CCR field in Ith FRM
– Typically α=1/16
– Bias to past values of CCR over current
– Gives estimated average load passing through switch
– If congestion, switch reduces each VC to no more
than DPF*MACR
l DPF=down pressure factor, typically 7/8
l ER ← min[ER, DPF*MACR]
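The EPRCA average and the congestion-time rate cap can be sketched directly from these formulas (α=1/16 and DPF=7/8 are the typical values quoted above):

```python
ALPHA = 1 / 16      # exponential-average weight, biased toward past CCR values
DPF = 7 / 8         # down pressure factor

def macr_update(macr, ccr, alpha=ALPHA):
    """MACR(I) = (1 - alpha) * MACR(I-1) + alpha * CCR(I)."""
    return (1 - alpha) * macr + alpha * ccr

def eprca_er(er, macr, congested, dpf=DPF):
    """Under congestion, cap each VC's explicit rate at DPF * MACR."""
    return min(er, dpf * macr) if congested else er
```

Because α is small, a single FRM barely moves the estimate, giving the bias toward past CCR values that the slide describes.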
Load Factor
l Adjustments based on load factor
l LF=Input rate/target rate
– Input rate measured over fixed averaging
interval
– Target rate slightly below link bandwidth (85
to 90%)
– LF>1 congestion threatened
l VCs will have to reduce rate

Explicit Rate Indication for
Congestion Avoidance (ERICA)
l Attempt to keep LF close to 1
l Define:
fairshare = (target rate)/(number of connections)
VCshare = CCR/LF
= (CCR/(Input Rate)) *(Target Rate)
l ERICA selectively adjusts VC rates
– Total ER allocated to connections matches target rate
– Allocation is fair
– ER = max[fairshare, VCshare]
– VCs whose VCshare is less than their fairshare get
greater increase
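A minimal sketch of the ERICA allocation, combining the load factor from the previous slide with the two shares defined here:

```python
def erica_er(ccr, input_rate, target_rate, n_connections):
    """ER = max(fairshare, VCshare), with LF = input_rate / target_rate."""
    fairshare = target_rate / n_connections
    lf = input_rate / target_rate        # load factor
    vcshare = ccr / lf                   # = ccr * target_rate / input_rate
    return max(fairshare, vcshare)       # underusers lifted to fairshare
```

A VC sending below its fair share is allocated the full fairshare (a large relative increase), while a heavy VC is simply scaled by 1/LF, so the total converges toward the target rate.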
Congestion Avoidance Using
Proportional Control (CAPC)
l If LF<1, fairshare ← fairshare × min[ERU, 1+(1-LF)*Rup]
l If LF>1, fairshare ← fairshare × max[ERF, 1-(LF-1)*Rdn]
l ERU>1, determines max increase
l Rup between 0.025 and 0.1, slope parameter
l Rdn, between 0.2 and 0.8, slope parameter
l ERF typically 0.5, max decrease in allotment of fair share
l If fairshare < ER value in RM cells, ER ← fairshare
l Simpler than ERICA
l Can show large rate oscillations if RIF (Rate increase factor) too high
l Can lead to unfairness
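The CAPC update can be sketched as follows; the defaults are picked from the parameter ranges given above (ERU=1.5, Rup=0.1, Rdn=0.8, ERF=0.5) purely for illustration:

```python
def capc_fairshare(fairshare, lf, eru=1.5, erf=0.5, rup=0.1, rdn=0.8):
    """One CAPC adjustment of fairshare given load factor LF (sketch)."""
    if lf < 1:                                    # underload: allow increase,
        fairshare *= min(eru, 1 + (1 - lf) * rup) # capped by ERU
    elif lf > 1:                                  # overload: force decrease,
        fairshare *= max(erf, 1 - (lf - 1) * rdn) # floored by ERF
    return fairshare
```

The slopes Rup and Rdn make the adjustment proportional to how far LF is from 1, which is what produces the oscillations mentioned above when the increase factor is set too high.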

GFR Overview
l As simple as UBR from end system view
– End system does no policing or traffic shaping
– May transmit at line rate of ATM adaptor
l Modest requirements on ATM network
l No guarantee of frame delivery
l Higher layer (e.g. TCP) reacts to congestion signalled by
dropped frames
l User can reserve cell rate capacity for each VC
– Application can send at min rate without loss
l Network must recognise frames as well as cells
l If congested, network discards entire frame
l All cells of a frame have same CLP setting
– CLP=0 guaranteed delivery, CLP=1 best efforts
GFR Traffic Contract
l Peak cell rate PCR
l Minimum cell rate MCR
l Maximum burst size MBS
l Maximum frame size MFS
l Cell delay variation tolerance CDVT

Mechanisms for supporting
Rate Guarantees
l Tagging and policing
l Buffer management
l Scheduling

Tagging and Policing
l Tagging identifies frames that conform to
contract and those that don’t
– CLP=1 for those that don’t
l Set by network element doing conformance check
l May also be set by source to mark less important frames
– Get lower QoS in buffer management and scheduling
– Tagged cells can be discarded at ingress to ATM
network or subsequent switch
– Discarding is a policing function
Buffer Management
l Treatment of cells in buffers or when arriving
and requiring buffering
l If congested (high buffer occupancy) tagged cells
discarded in preference to untagged
l Discard tagged cell to make room for untagged
cell
l May buffer per-VC
l Discards may be based on per queue thresholds

Scheduling
l Give preferential treatment to untagged cells
l Separate queues for each VC
– Per VC scheduling decisions
– E.g. FIFO modified to give CLP=0 cells higher
priority
l Scheduling between queues controls outgoing
rate of VCs
– Individual cells get fair allocation while meeting
traffic contract

Components of GFR
Mechanism

GFR Conformance Definition
l UPC function
– UPC monitors VC for traffic conformance
– Tag or discard non-conforming cells
l Frame conforms if all cells in frame conform
– Rate of cells within contract
l Generic cell rate algorithm (GCRA) with PCR and CDVT specified
for the connection
– All cells have same CLP
– Within maximum frame size (MFS)

QoS Eligibility Test
l Test for contract conformance
– Discard or tag non-conforming cells
l Looking at upper bound on traffic
– Determine frames eligible for QoS guarantee
l Under GFR contract for VC
l Looking at lower bound for traffic
l Frames are one of:
– Nonconforming: cells tagged or discarded
– Conforming ineligible: best efforts
– Conforming eligible: guaranteed delivery

Simplified Frame Based GCRA

Chapter 17
Integrated and Differentiated
Services

1
Chapter 17 Integrated and Differentiated Services
Introduction
l New additions to Internet increasing traffic
– High volume client/server application
– Web
l Graphics

– Real time voice and video


l Need to manage traffic and control congestion
l IETF standards
– Integrated services
l Collective service to set of traffic demands in domain

– Limit demand & reserve resources


– Differentiated services
l Classify traffic in groups

l Different group traffic handled differently


Integrated Services
Architecture (ISA)
l IPv4 header fields for precedence and type
of service usually ignored
l ATM is the only network designed to support
TCP, UDP and real-time traffic
– May need new installation
l Need to support Quality of Service (QoS)
within TCP/IP
– Add functionality to routers
– Means of requesting QoS
Internet Traffic – Elastic
l Can adjust to changes in delay and throughput
l E.g. common TCP and UDP application
– E-Mail – insensitive to delay changes
– FTP – Users expect delay proportional to file size
l Sensitive to changes in throughput
– SNMP – delay not a problem, except when caused by
congestion
– Web (HTTP), TELNET – sensitive to delay
l Not per packet delay – total elapsed time
– E.g. web page loading time
– For small items, delay across internet dominates
– For large items it is throughput over connection
l Need some QoS control to match to demand
Internet Traffic – Inelastic
l Does not easily adapt to changes in delay and
throughput
– Real time traffic
l Throughput
– Minimum may be required
l Delay
– E.g. stock trading
l Jitter - Delay variation
– More jitter requires a bigger buffer
– E.g. teleconferencing requires reasonable upper bound
l Packet loss
Inelastic Traffic Problems
l Difficult to meet requirements on network with
variable queuing delays and congestion
l Need preferential treatment
l Applications need to state requirements
– Ahead of time (preferably) or on the fly
– Using fields in IP header
– Resource reservation protocol
l Must still support elastic traffic
– Deny service requests that leave too few resources to
handle elastic traffic demands

ISA Approach
l Provision of QoS over IP
l Sharing available capacity when congested
l Router mechanisms
– Routing Algorithms
l Select to minimize delay
– Packet discard
l Causes TCP sender to back off and reduce load
l Enhanced by ISA
Flow
l IP packet can be associated with a flow
– Distinguishable stream of related IP packets
– From single user activity
– Requiring same QoS
– E.g. one transport connection or one video stream
– Unidirectional
– Can be more than one recipient
l Multicast

– Membership of flow identified by source and destination IP
address, port numbers, protocol type
– IPv6 header flow identifier can be used but is not necessarily
equivalent to ISA flow
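The identification rule above is just a lookup on the classic 5-tuple; a minimal sketch, with a hypothetical packet layout (`src`/`dst`/`proto`/`sport`/`dport` keys are assumptions for illustration):

```python
from collections import namedtuple

# An ISA flow identified by source/destination address, protocol, ports.
Flow = namedtuple("Flow", "src_ip dst_ip protocol src_port dst_port")

def classify(packet, reservations):
    """Map a packet's 5-tuple to its reserved QoS class, else best effort."""
    key = Flow(packet["src"], packet["dst"], packet["proto"],
               packet["sport"], packet["dport"])
    return reservations.get(key, "best-effort")
```

Every packet of one transport connection hashes to the same `Flow` key, which is what lets a router apply one reservation to the whole stream.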

ISA Functions
l Admission control
– For QoS, reservation required for new flow
– RSVP used
l Routing algorithm
– Base decision on QoS parameters
l Queuing discipline
– Take account of different flow requirements
l Discard policy
– Manage congestion
– Meet QoS
ISA Implementation in Router
l Background
Functions

l Forwarding
functions

ISA Components – Background
Functions
l Reservation Protocol
– RSVP
l Admission control
l Management agent
– Can use agent to modify traffic control
database and direct admission control
l Routing protocol

ISA Components – Forwarding
l Classifier and route selection
– Incoming packets mapped to classes
l Single flow or set of flows with same QoS
– E.g. all video flows
l Based on IP header fields
– Determines next hop
l Packet scheduler
– Manages one or more queues for each output
– Order queued packets sent
l Based on class, traffic control database, current and past
activity on outgoing port
– Policing
ISA Services
l Traffic specification (TSpec) defined as
service for flow
l On two levels
– General categories of service
l Guaranteed
l Controlled load

l Best effort (default)

– Particular flow within category


l TSpec is part of contract
Token Bucket
l Many traffic sources can be defined by
token bucket scheme
l Provides concise description of load
imposed by flow
– Easy to determine resource requirements
l Provides input parameters to policing
function
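The scheme can be sketched as a conformance check: tokens accumulate at the agreed rate up to the bucket depth, and a packet conforms only if enough tokens are available (parameter values below are illustrative):

```python
def conforms(arrivals, rate, bucket_size):
    """Check a sequence of (time, packet_size) pairs against a token
    bucket with token rate `rate` and depth `bucket_size` (sketch)."""
    tokens, last = bucket_size, 0.0          # bucket starts full
    for t, size in arrivals:
        tokens = min(bucket_size, tokens + (t - last) * rate)  # refill
        last = t
        if size > tokens:
            return False                     # burst exceeds accumulated tokens
        tokens -= size                       # spend tokens for this packet
    return True
```

The two parameters are exactly the concise load description the slide mentions: `rate` bounds the long-term average, `bucket_size` bounds the burst.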

Token Bucket Diagram

ISA Services –
Guaranteed Service
l Assured capacity level or data rate
l Specific upper bound on queuing delay through
network
– Must be added to propagation delay or latency to get
total delay
– Set high to accommodate rare long queue delays
l No queuing losses
– I.e. no buffer overflow
l E.g. Real time play back of incoming signal can
use delay buffer for incoming signal but will not
tolerate packet loss
ISA Services –
Controlled Load
l Tightly approximates best-effort service under unloaded
conditions
l No upper bound on queuing delay
– High percentage of packets do not experience delay over
minimum transit delay
l Propagation plus router processing with no queuing delay

l Very high percentage delivered


– Almost no queuing loss
l Adaptive real time applications
– Receiver measures jitter and sets playback point
– Video can drop a frame or delay output slightly
– Voice can adjust silence periods

Queuing Discipline
l Traditionally first in first out (FIFO) or first
come first served (FCFS) at each router port
l No special treatment to high priority packets
(flows)
l Small packets held up by large packets ahead of
them in queue
– Larger average delay for smaller packets
– Flows of larger packets get better service
l Greedy TCP connection can crowd out altruistic
connections
– If one connection does not back off, others may back
off more
Fair Queuing (FQ)
l Multiple queues for each port
– One for each source or flow
– Queues serviced round robin
– Each busy queue (flow) gets exactly one packet per
cycle
– Load balancing among flows
– No advantage to being greedy
l Your queue gets longer, increasing your delay
– Short packets penalized as each queue sends one
packet per cycle

FIFO and FQ

Processor Sharing
l Multiple queues as in FQ
l Send one bit from each queue per round
– Longer packets no longer get an advantage
l Can work out virtual (measured in rounds)
start and finish time for a given packet
l However, we wish to send packets, not bits

Bit-Round Fair Queuing (BRFQ)
l Compute virtual start and finish time as
before
l When a packet finished, the next packet
sent is the one with the earliest virtual
finish time
l Good approximation to performance of PS
– Throughput and delay converge as time
increases

Examples
of PS and
BRFQ

Comparison
of FIFO, FQ
and BRFQ

Generalized Processor Sharing
(GPS)
l BRFQ cannot provide different capacities to
different flows
l Enhancement called Weighted fair queue (WFQ)
l From PS, allocate weighting to each flow that
determines how many bits are sent during each
round
– If weighted 5, then 5 bits are sent per round
l Gives means of responding to different service
requests
l Guarantees that delays do not exceed bounds
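The weighted finish-time idea can be sketched per flow; this is a simplification (a real WFQ scheduler tracks a global virtual clock rather than per-flow clocks), with illustrative weights:

```python
def wfq_order(packets, weights):
    """Order packets by virtual finish time:
    F = max(previous finish for flow, arrival) + size / weight  (sketch)."""
    finish = {}                              # last virtual finish per flow
    tagged = []
    for arrival, flow, size in packets:      # assume arrival-time-sorted input
        start = max(finish.get(flow, 0.0), arrival)
        finish[flow] = start + size / weights[flow]
        tagged.append((finish[flow], flow, size))
    # transmit whole packets in order of earliest virtual finish time
    return [(flow, size) for _, flow, size in sorted(tagged)]
```

A flow with weight 2 accrues finish times half as fast as a weight-1 flow, so its packets are scheduled first — the mechanism behind the differentiated service requests mentioned above.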

Weighted Fair Queue
l Emulates bit by bit GPS
l Same strategy as BRFQ

FIFO v
WFQ

Proactive Packet Discard
l Congestion management by proactive
packet discard
– Before buffer full
– Used on single FIFO queue or multiple queues
for elastic traffic
– E.g. Random Early Detection (RED)

Random Early Detection (RED)
Motivation
l Surges fill buffers and cause discards
l On TCP this is a signal to enter slow start phase, reducing
load
– Lost packets need to be resent
l Adds to load and delay

– Global synchronization
l Traffic burst fills queues so packets lost

l Many TCP connections enter slow start

l Traffic drops so network under utilized

l Connections leave slow start at same time causing burst

l Bigger buffers do not help


l Try to anticipate onset of congestion and tell one
connection to slow down
RED Design Goals
l Congestion avoidance
l Global synchronization avoidance
– Current systems inform connections to back
off implicitly by dropping packets
l Avoidance of bias to bursty traffic
– Discarding only arriving packets (tail drop) would do this
l Bound on average queue length
– Hence control on average delay
RED Algorithm – Overview
Calculate average queue size avg
if avg < THmin
    queue packet
else if THmin ≤ avg < THmax
    calculate probability Pa
    with probability Pa: discard packet
    with probability 1−Pa: queue packet
else if avg ≥ THmax
    discard packet
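The decision step can be made concrete in Python. This is a simplified sketch: `avg` is taken as given, the drop probability `Pa` grows linearly from 0 to `p_max` between the thresholds, and the count-based correction of the full RED algorithm is omitted:

```python
import random

def red_decide(avg, th_min, th_max, p_max=0.02, rng=random.random):
    """RED drop decision from the average queue size (simplified sketch)."""
    if avg < th_min:
        return "queue"                       # no congestion threat
    if avg >= th_max:
        return "discard"                     # hard limit reached
    pa = p_max * (avg - th_min) / (th_max - th_min)   # linear ramp
    return "discard" if rng() < pa else "queue"
```

Randomizing the drop is what avoids global synchronization: different TCP connections see drops at different times instead of all backing off at once.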
RED Buffer

RED Algorithm Detail

Differentiated Services (DS)
l ISA and RSVP complex to deploy
l May not scale well for large volumes of traffic
– Amount of control signals
– Maintenance of state information at routers
l DS architecture designed to provide simple, easy
to implement, low overhead tool
– Support range of network services
l Differentiated on basis of performance

Characteristics of DS
l Use IPv4 header Type of Service or IPv6 Traffic Class
field
– No change to IP
l Service level agreement (SLA) established between
provider (internet domain) and customer prior to use of
DS
– DS mechanisms not needed in applications
l Build in aggregation
– All traffic with same DS field treated same
l E.g. multiple voice connections

– DS implemented in individual routers by queuing and forwarding


based on DS field
l State information on flows not saved by routers

DS Terminology

Services
l Provided within DS domain
– Contiguous portion of Internet over which consistent set of DS
policies administered
– Typically under control of one administrative entity
l Defined in SLA
– Customer may be user organization or other DS domain
– Packet class marked in DS field
l Service provider configures forwarding policies in routers
– Ongoing measure of performance provided for each class
l DS domain expected to provide agreed service internally
l If destination in another domain, DS domain attempts to
forward packets through other domains
– Appropriate service level requested from each domain
SLA Parameters
l Detailed service performance parameters
– Throughput, drop probability, latency
l Constraints on ingress and egress points
– Indicate scope of service
l Traffic profiles to be adhered to
– Token bucket
l Disposition of traffic in excess of profile

Example Services
l Qualitative
– A: Low latency
– B: Low loss
l Quantitative
– C: 90% in-profile traffic delivered with no more than
50ms latency
– D: 95% in-profile traffic delivered
l Mixed
– E: Twice bandwidth of F
– F: Traffic with drop precedence X has higher delivery
probability than that with drop precedence Y
DS Field v IPv4 Type of Service

DS Field Detail
l Leftmost 6 bits are DS codepoint
– 64 different classes available
– 3 pools
l xxxxx0 : reserved for standards
– 000000 : default packet class
– xxx000 : reserved for backwards compatibility with IPv4 TOS
l xxxx11 : reserved for experimental or local use
l xxxx01 : reserved for experimental or local use but may be
allocated for future standards if needed
l Rightmost 2 bits unused
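The bit layout above can be checked with a couple of one-liners; a small sketch of extracting the codepoint from the TOS/Traffic Class byte and sorting it into the three pools:

```python
def dscp(tos_byte):
    """Extract the 6-bit DS codepoint: the leftmost 6 bits of the
    IPv4 TOS / IPv6 Traffic Class octet (rightmost 2 bits unused by DS)."""
    return tos_byte >> 2

def pool(cp):
    """Classify a 6-bit codepoint into the three assignment pools."""
    if cp & 0b1 == 0:
        return "standards"                    # xxxxx0
    if cp & 0b11 == 0b11:
        return "experimental/local"           # xxxx11
    return "experimental/local, may be standardized"  # xxxx01
```

For example, a codepoint ending in 0 always falls in the standards pool, which is how 000000 (default) and the xxx000 IPv4-precedence compatibility values are accommodated.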

Configuration Diagram

Configuration – Interior Routers
l Domain consists of set of contiguous routers
l Interpretation of DS codepoints within domain is
consistent
l Interior nodes (routers) have simple mechanisms
to handle packets based on codepoints
– Queuing gives preferential treatment depending on
codepoint
l Per Hop behaviour (PHB)
l Must be available to all routers
l Typically the only part implemented in interior routers
– Packet dropping rule dictates which to drop when
buffer saturated
Configuration – Boundary
Routers
l Include PHB rules
l Also traffic conditioning to provide desired
service
– Classifier
l Separate packets into classes
– Meter
l Measure traffic for conformance to profile
– Marker
l Policing by remarking codepoints if required
– Shaper
– Dropper
DS Traffic Conditioner

Per Hop Behaviour –
Expedited forwarding
l Premium service
– Low loss, delay, jitter; assured bandwidth end-to-end
service through domains
– Looks like point to point or leased line
– Difficult to achieve
– Configure nodes so traffic aggregate has well defined
minimum departure rate
l EF PHB
– Condition aggregate so arrival rate at any node is
always less than minimum departure rate
l Boundary conditioners

Per Hop Behaviour –
Explicit Allocation
l Superior to best efforts
l Does not require reservation of resources
l Does not require detailed discrimination among flows
l Users offered choice of number of classes
l Monitored at boundary node
– In or out depending on matching profile or not
l Inside network all traffic treated as single pool of
packets, distinguished only as in or out
l Drop out packets before in packets if necessary
l Different levels of service because different number of in
packets for each user

PHB - Assured Forwarding
l Four classes defined
– Select one or more to meet requirements
l Within class, packets marked by customer
or provider with one of three drop
precedence values
– Used to determine importance when dropping
packets as result of congestion

Codepoints for AF PHB

Chapter 18
Protocols for QoS Support

1
Chapter 18 Protocols for QoS Support
Increased Demands
l Need to incorporate bursty and stream traffic in
TCP/IP architecture
l Increase capacity
– Faster links, switches, routers
– Intelligent routing policies
– End-to-end flow control
l Multicasting
l Quality of Service (QoS) capability
l Transport protocol for streaming
Resource Reservation - Unicast
l Prevention as well as reaction to congestion
required
l Can do this by resource reservation
l Unicast
– End users agree on QoS for task and request from
network
– May reserve resources
– Routers pre-allocate resources
– If QoS not available, may wait or try at reduced QoS

Resource Reservation –
Multicast
l Generate vast traffic
– High volume application like video
– Lots of destinations
l Can reduce load
– Some members of group may not want current
transmission
l “Channels” of video
– Some members may only be able to handle part of
transmission
l Basic and enhanced video components of video stream
l Routers can decide if they can meet demand
Resource Reservation
Problems on an Internet
l Must interact with dynamic routing
– Reservations must follow changes in route
l Soft state – a set of state information at a
router that expires unless refreshed
– End users periodically renew resource
requests

Resource ReSerVation
Protocol (RSVP) Design Goals
l Enable receivers to make reservations
– Different reservations among members of same multicast group
allowed
l Deal gracefully with changes in group membership
– Dynamic reservations, separate for each member of group
l Aggregate for group should reflect resources needed
– Take into account common path to different members of group
l Receivers can select one of multiple sources (channel
selection)
l Deal gracefully with changes in routes
– Re-establish reservations
l Control protocol overhead
l Independent of routing protocol
RSVP Characteristics
l Unicast and Multicast
l Simplex
– Unidirectional data flow
– Separate reservations in two directions
l Receiver initiated
– Receiver knows which subset of source transmissions it wants
l Maintain soft state in internet
– Responsibility of end users
l Providing different reservation styles
– Users specify how reservations for groups are aggregated
l Transparent operation through non-RSVP routers
l Support IPv4 (ToS field) and IPv6 (Flow label field)
Data Flows - Session
l Data flow identified by destination
l Resources allocated by router for duration
of session
l Defined by
– Destination IP address
l Unicast or multicast
– IP protocol identifier
l TCP, UDP etc.
– Destination port
l May not be used in multicast
Flow Descriptor
l Reservation Request
– Flow spec
l Desired QoS
l Used to set parameters in node’s packet scheduler

l Service class, Rspec (reserve), Tspec (traffic)

– Filter spec
l Set of packets for this reservation
l Source address, source port

Treatment of Packets of One
Session at One Router

RSVP Operation Diagram

RSVP Operation
l G1, G2, G3 members of multicast group
l S1, S2 sources transmitting to that group
l Heavy black line is routing tree for S1, heavy
grey line for S2
l Arrowed lines are packet transmission from S1
(black) and S2 (grey)
l All four routers need to know reservations for
each multicast address
– Resource requests must propagate back through
routing tree

Filtering
l G3 has reservation filter spec including S1 and S2
l G1, G2 from S1 only
l R3 delivers from S2 to G3 but does not forward to R4
l G1, G2 send RSVP request with filter excluding S2
l G1, G2 only members of group reached through R4
– R4 doesn’t need to forward packets from this session
– R4 merges filter spec requests and sends to R3
l R3 no longer forwards this session’s packets to R4
– Handling of filtered packets not specified
– Here they are dropped but could be best efforts delivery
l R3 needs to forward to G3
– Stores filter spec but doesn’t propagate it
Reservation Styles
l Determines manner in which resource
requirements from members of group are
aggregated
l Reservation attribute
– Reservation shared among senders (shared)
l Characterizing entire flow received on multicast address
– Allocated to each sender (distinct)
l Simultaneously capable of receiving data flow from each
sender
l Sender selection
– List of sources (explicit)
– All sources, no filter spec (wild card)
Reservation Attributes and
Styles
l Reservation Attribute
– Distinct
l Sender selection explicit = Fixed filter (FF)
l Sender selection wild card = none

– Shared
l Sender selection explicit= Shared-explicit (SE)
l Sender selection wild card = Wild card filter (WF)

Wild Card Filter Style
l Single resource reservation shared by all senders
to this address
l If used by all receivers: shared pipe whose
capacity is largest of resource requests from
receivers downstream from any point on tree
l Independent of number of senders using it
l Propagated upstream to all senders
l WF(*{Q})
– * = wild card sender
– Q = flowspec
l Audio teleconferencing with multiple sites
Fixed Filter Style
l Distinct reservation for each sender
l Explicit list of senders
l FF(S1{Q1}, S2{Q2}, …)
l Video distribution

Shared Explicit Style
l Single reservation shared among specific
list of senders
l SE(S1, S2, S3, …{Q})
l Multicast applications with multiple data
sources but unlikely to transmit
simultaneously

Reservation Style Examples

RSVP Protocol Mechanisms
l Two message types
– Resv
l Originate at multicast group receivers
l Propagate upstream
l Merged when appropriate
l Create soft states
l Reach sender
– Allow host to set up traffic control for first hop
– Path
l Provide upstream routing information
l Issued by sending hosts
l Transmitted through distribution tree to all destinations
RSVP Host Model

Multiprotocol Label Switching
(MPLS)
l Routing algorithms provide support for
performance goals
– Distributed and dynamic
l React to congestion
l Load balance across network
– Based on metrics
l Develop information that can be used in handling different
service needs
l Enhancements provide direct support
– IS, DS, RSVP
l Nothing directly improves throughput or delay
l MPLS tries to match ATM QoS support
Background
l Efforts to marry IP and ATM
l IP switching (Ipsilon)
l Tag switching (Cisco)
l Aggregate route based IP switching (IBM)
l Cascade (IP navigator)
l All use standard routing protocols to define paths
between end points
l Assign packets to path as they enter network
l Use ATM switches to move packets along paths
– ATM switching (was) much faster than IP routers
– Use faster technology 23
Chapter 18 Protocols for QoS Support
Developments
l IETF working group in 1997, proposed standard
2001
l Routers developed to be as fast as ATM switches
– Remove the need to provide both technologies in
same network
l MPLS does provide new capabilities
– QoS support
– Traffic engineering
– Virtual private networks
– Multiprotocol support

24
Chapter 18 Protocols for QoS Support
Connection Oriented QoS
Support
l Guarantee fixed capacity for specific applications
l Control latency/jitter
l Ensure capacity for voice
l Provide specific, guaranteed quantifiable SLAs
l Configure varying degrees of QoS for multiple
customers
l MPLS imposes connection oriented framework
on IP based internets

25
Chapter 18 Protocols for QoS Support
Traffic Engineering
l Ability to dynamically define routes, plan resource
commitments based on known demands and optimize
network utilization
l Basic IP allows primitive traffic engineering
– E.g. dynamic routing
l MPLS makes network resource commitment easy
– Able to balance load in face of demand
– Able to commit to different levels of support to meet user traffic
requirements
– Aware of traffic flows with QoS requirements and predicted
demand
– Intelligent re-routing when congested

26
Chapter 18 Protocols for QoS Support
VPN Support
l Traffic from a given enterprise or group
passes transparently through an internet
l Segregated from other traffic on internet
l Performance guarantees
l Security

27
Chapter 18 Protocols for QoS Support
Multiprotocol Support
l MPLS can be used on different network
technologies
l IP
– Requires router upgrades
l Coexist with ordinary routers
l ATM
– MPLS-capable and ordinary switches co-exist
l Frame relay
– MPLS-capable and ordinary switches co-exist
l Mixed network
28
Chapter 18 Protocols for QoS Support
MPLS Terminology

29
Chapter 18 Protocols for QoS Support
MPLS Operation
l Label switched routers capable of switching and
routing packets based on label appended to
packet
l Labels define a flow of packets between end
points or multicast destinations
l Each distinct flow (forwarding equivalence class –
FEC) has specific path through LSRs defined
– Connection oriented
l Each FEC has QoS requirements
l IP header not examined
– Forward based on label value
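Forwarding by label value only can be sketched as a single table lookup per LSR (table contents here are invented for illustration):

```python
# Core of MPLS forwarding: a per-LSR table maps an incoming label to
# (outgoing label, next hop); the IP header is never examined.

lfib = {                 # incoming label -> (outgoing label, next hop)
    17: (22, "LSR-B"),
    18: (40, "LSR-C"),
}

def forward(incoming_label):
    # Simple exact-match lookup — no longest-prefix match needed.
    out_label, next_hop = lfib[incoming_label]
    return out_label, next_hop

print(forward(17))   # (22, 'LSR-B')
```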
30
Chapter 18 Protocols for QoS Support
MPLS Operation Diagram

31
Chapter 18 Protocols for QoS Support
Explanation - Setup
l Labelled switched path established prior to
routing and delivery of packets
l QoS parameters established along path
– Resource commitment
– Queuing and discard policy at LSR
– Interior routing protocol e.g. OSPF used
– Labels assigned
l Local significance only
l Manually or using Label distribution protocol (LDP) or
enhanced version of RSVP

32
Chapter 18 Protocols for QoS Support
Explanation – Packet Handling
l Packet enters domain through edge LSR
– Processed to determine QoS
l LSR assigns packet to FEC and hence LSP
– May need co-operation to set up new LSP
l Append label
l Forward packet
l Within domain LSR receives packet
l Remove incoming label, attach outgoing label
and forward
l Egress edge strips label, reads IP header and
forwards
33
Chapter 18 Protocols for QoS Support
Notes
l MPLS domain is contiguous set of MPLS enabled routers
l Traffic may enter or exit via direct connection to MPLS router or
from non-MPLS router
l FEC determined by parameters, e.g.
– Source/destination IP address or network IP address
– Port numbers
– IP protocol id
– Differentiated services codepoint
– IPv6 flow label
l Forwarding is simple lookup in predefined table
– Map label to next hop
l Can define PHB at an LSR for given FEC
l Packets between same end points may belong to different FEC
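An ingress classifier over those parameters might look like the following sketch (the rules, FEC names, and label values are invented):

```python
# Ingress LSR: classify a packet into a FEC from header fields such as
# destination network, DSCP, and ports, then map the FEC to a label.

def classify(pkt):
    # pkt is a dict of header fields; rules are checked in order.
    if pkt["dst_net"] == "10.1.0.0/16" and pkt["dscp"] == 46:
        return "FEC-voice"          # expedited-forwarding traffic
    if pkt["dst_net"] == "10.1.0.0/16":
        return "FEC-bulk"
    return "FEC-default"

fec_to_label = {"FEC-voice": 17, "FEC-bulk": 18, "FEC-default": 19}

pkt = {"dst_net": "10.1.0.0/16", "dscp": 46, "dst_port": 5004}
print(fec_to_label[classify(pkt)])   # 17
```

This is also why packets between the same end points may belong to different FECs: the DSCP rule above splits them.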
34
Chapter 18 Protocols for QoS Support
MPLS Packet Forwarding

35
Chapter 18 Protocols for QoS Support
Label Stacking
l Packet may carry number of labels
l LIFO (stack)
– Processing based on top label
– Any LSR may push or pop label
l Unlimited levels
– Allows aggregation of LSPs into single LSP for part
of route
– C.f. ATM virtual channels inside virtual paths
– E.g. aggregate all enterprise traffic into one LSP for
access provider to handle
– Reduces size of tables
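The LIFO behaviour above, with invented label values, in a few lines:

```python
# Label stacking as LIFO: an LSR pushes a label to tunnel an LSP
# through an aggregate LSP, and the tunnel's far end pops it again.

stack = []
stack.append(17)        # ingress pushes the enterprise LSP's label
stack.append(99)        # access provider pushes the aggregate label
# Inside the aggregate LSP, forwarding looks only at the top label.
print(stack[-1])        # 99
stack.pop()             # aggregate tunnel egress pops its label
print(stack[-1])        # 17 — original LSP label visible again
```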
36
Chapter 18 Protocols for QoS Support
Label Format Diagram

l Label value: Locally significant 20 bit


l Exp: 3 bit reserved for experimental use
– E.g. DS information or PHB guidance
l S: 1 for oldest entry in stack, zero otherwise
l Time to live (TTL): hop count or TTL value
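The 32-bit label stack entry described above (20-bit label, 3-bit Exp, 1-bit S, 8-bit TTL) can be packed and unpacked with bit operations:

```python
# MPLS label stack entry layout: label(20) | Exp(3) | S(1) | TTL(8).

def pack_entry(label, exp, s, ttl):
    assert label < 2**20 and exp < 8 and s < 2 and ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_entry(word):
    return ((word >> 12) & 0xFFFFF,   # label value
            (word >> 9) & 0x7,        # Exp bits
            (word >> 8) & 0x1,        # S (bottom-of-stack) bit
            word & 0xFF)              # TTL

w = pack_entry(label=17, exp=5, s=1, ttl=64)
print(hex(w))               # 0x11b40
print(unpack_entry(w))      # (17, 5, 1, 64)
```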

37
Chapter 18 Protocols for QoS Support
Time to Live Processing
l Needed to support TTL since IP header not read
l First label TTL set to IP header TTL on entry to MPLS
domain
l TTL of top entry on stack decremented at internal LSR
– If zero, packet dropped or passed to ordinary error processing
(e.g. ICMP)
– If positive, value placed in TTL of top label on stack and packet
forwarded
l At exit from domain (single stack entry), TTL
decremented
– If zero, as above
– If positive, placed in TTL field of IP header and forwarded
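The TTL rules above reduce to three small steps (function names are invented for this sketch):

```python
# MPLS TTL processing: copy the IP TTL into the first label on entry,
# decrement the top label's TTL at each LSR, copy it back on exit.

def at_ingress(ip_ttl):
    return ip_ttl                  # first label's TTL := IP header TTL

def at_internal_lsr(label_ttl):
    label_ttl -= 1
    if label_ttl == 0:
        return None                # drop, or hand to error processing
    return label_ttl               # placed in outgoing top label

def at_egress(label_ttl):
    label_ttl -= 1                 # final decrement at domain exit
    return None if label_ttl == 0 else label_ttl  # back into IP header

ttl = at_ingress(64)
ttl = at_internal_lsr(ttl)     # 63
ttl = at_egress(ttl)           # 62 — written to the IP header
print(ttl)                     # 62
```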

38
Chapter 18 Protocols for QoS Support
Label Stack
l Appear after data link layer header, before network layer
header
l Top of stack is earliest (closest to data link layer header)
l Network layer packet follows label stack entry with S=1
l Over connection oriented services
– Topmost label value in ATM header VPI/VCI field
l Facilitates ATM switching

– Top label inserted between cell header and IP header


– In DLCI field of Frame Relay
– Note: TTL problem

39
Chapter 18 Protocols for QoS Support
Position of MPLS Label Stack

40
Chapter 18 Protocols for QoS Support
FECs, LSPs, and Labels
l Traffic grouped into FECs
l Traffic in a FEC transits an MPLS domain along an LSP
l Packets identified by locally significant label
l At each LSR, labelled packets forwarded on basis of
label.
– LSR replaces incoming label with outgoing label
l Each flow must be assigned to a FEC
l Routing protocol must determine topology and current
conditions so LSP can be assigned to FEC
– Must be able to gather and use information to support QoS
l LSRs must be aware of LSP for given FEC, assign
incoming label to LSP, communicate label to other LSRs
41
Chapter 18 Protocols for QoS Support
Topology of LSPs
l Unique ingress and egress LSR
– Single path through domain
l Unique egress, multiple ingress LSRs
– Multiple paths, possibly sharing final few
hops
l Multiple egress LSRs for unicast traffic
l Multicast

42
Chapter 18 Protocols for QoS Support
Route Selection
l Selection of LSP for particular FEC
l Hop-by-hop
– LSR independently chooses next hop
– Ordinary routing protocols e.g. OSPF
– Doesn’t support traffic engineering or policy routing
l Explicit
– LSR (usually ingress or egress) specifies some or all
LSRs in LSP for given FEC
– Selected by configuration, or dynamically

43
Chapter 18 Protocols for QoS Support
Constraint Based Routing
Algorithm
l Take into account traffic requirements of flows
and resources available along hops
– Current utilization, existing capacity, committed
services
– Additional metrics over and above traditional routing
protocols (OSPF)
l Max link data rate
l Current capacity reservation
l Packet loss ratio
l Link propagation delay
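One common shape for such an algorithm — a sketch under invented topology and numbers, not a specific standard — is to prune links that cannot meet the flow's bandwidth demand and run ordinary shortest path on what remains:

```python
# Constraint-based route selection: filter links by spare capacity,
# then Dijkstra on propagation delay over the pruned graph.
import heapq

links = {  # (u, v): (available Mb/s, propagation delay ms)
    ("A", "B"): (10, 5), ("B", "D"): (2, 5),
    ("A", "C"): (10, 8), ("C", "D"): (10, 8),
}

def constrained_path(src, dst, demand):
    # Keep only links with enough spare capacity for this flow.
    graph = {}
    for (u, v), (cap, delay) in links.items():
        if cap >= demand:
            graph.setdefault(u, []).append((v, delay))
    # Dijkstra on delay.
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

print(constrained_path("A", "D", demand=5))   # (16, ['A', 'C', 'D'])
```

A best-effort flow (demand 1) would take the shorter A–B–D path; the 5 Mb/s flow is steered around the thin B–D link.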

44
Chapter 18 Protocols for QoS Support
Label Distribution
l Setting up LSP
l Assign label to LSP
l Inform all potential upstream nodes of
label assigned by LSR to FEC
– Allows proper packet labelling
– Learn next hop for LSP and label that
downstream node has assigned to FEC
l Allow LSR to map incoming to outgoing label

45
Chapter 18 Protocols for QoS Support
Real Time Transport Protocol
l TCP not suited to real time distributed
application
– Point to point so not suitable for multicast
– Retransmitted segments arrive out of order
– No way to associate timing with segments
l UDP does not include timing information
nor any support for real time applications
l Solution is real-time transport protocol
RTP
46
Chapter 18 Protocols for QoS Support
RTP Architecture
l Close coupling between protocol and
application layer functionality
– Framework for application to implement
single protocol
l Application level framing
l Integrated layer processing

47
Chapter 18 Protocols for QoS Support
Application Level Framing
l Recovery of lost data done by application rather than
transport layer
– Application may accept less than perfect delivery
l Real time audio and video
l Inform source about quality of delivery rather than retransmit
l Source can switch to lower quality
– Application may provide data for retransmission
l Sending application may recompute lost values rather than storing
them
l Sending application can provide revised values
l Can send new data to “fix” consequences of loss
l Lower layers deal with data in units provided by
application
– Application data units (ADU)
48
Chapter 18 Protocols for QoS Support
Integrated Layer Processing
l Adjacent layers in protocol stack tightly
coupled
l Allows out of order or parallel functions
from different layers

49
Chapter 18 Protocols for QoS Support
RTP Architecture Diagram

50
Chapter 18 Protocols for QoS Support
RTP Data Transfer Protocol
l Transport of real time data among number
of participants in a session, defined by:
– RTP Port number
l UDP destination port number if using UDP
– RTP Control Protocol (RTCP) port number
l Destination port address used by all participants
for RTCP transfer
– IP addresses
l Multicast or set of unicast

51
Chapter 18 Protocols for QoS Support
Multicast Support
l Each RTP data unit includes:
l Source identifier
l Timestamp
l Payload format
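These fields (plus a sequence number) sit in the 12-byte fixed RTP header defined by RFC 3550; a minimal packing sketch:

```python
# RFC 3550 fixed RTP header: V=2,P,X,CC | M,PT | seq | timestamp | SSRC.
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    byte0 = (2 << 6)                      # version 2; no padding/ext/CSRC
    byte1 = (marker << 7) | payload_type  # payload format identifier
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0xDEADBEEF)
print(len(hdr))        # 12
print(hdr[:2].hex())   # '8000' — V=2, PCMU payload type 0
```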

52
Chapter 18 Protocols for QoS Support
Relays
l Intermediate system acting as receiver and
transmitter for given protocol layer
l Mixers
– Receives streams of RTP packets from one or more
sources
– Combines streams
– Forwards new stream
l Translators
– Produce one or more outgoing RTP packets for each
incoming packet
– E.g. convert video to lower quality
53
Chapter 18 Protocols for QoS Support
RTP Header

54
Chapter 18 Protocols for QoS Support
RTP Control Protocol (RTCP)
l RTP is for user data
l RTCP is multicast provision of feedback to
sources and session participants
l Uses same underlying transport protocol
(usually UDP) and different port number
l RTCP packet issued periodically by each
participant to other session members

55
Chapter 18 Protocols for QoS Support
RTCP Functions
l QoS and congestion control
l Identification
l Session size estimation and scaling
l Session control

56
Chapter 18 Protocols for QoS Support
RTCP Transmission
l Number of separate RTCP packets bundled
in single UDP datagram
– Sender report
– Receiver report
– Source description
– Goodbye
– Application specific

57
Chapter 18 Protocols for QoS Support
RTCP Packet Formats

58
Chapter 18 Protocols for QoS Support
Packet Fields (All Packets)
l Version (2 bit) currently version 2
l Padding (1 bit) indicates padding bits at end of
control information, with number of octets as last
octet of padding
l Count (5 bit) of reception report blocks in SR or
RR, or source items in SDES or BYE
l Packet type (8 bit)
l Length (16 bit) in 32 bit words minus 1
l In addition, sender and receiver reports have:
– Synchronization Source Identifier
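The common fields above pack into a 4-byte RTCP header; a sketch (the length value below is illustrative only):

```python
# RTCP common header: V(2) P(1) count(5) | packet type(8) |
# length(16, in 32-bit words minus one).
import struct

def rtcp_header(count, packet_type, length_words, padding=0):
    byte0 = (2 << 6) | (padding << 5) | count    # version 2
    return struct.pack("!BBH", byte0, packet_type, length_words - 1)

# e.g. a sender report (PT = 200) occupying 13 words in total:
hdr = rtcp_header(count=1, packet_type=200, length_words=13)
print(hdr.hex())   # '81c8000c'
```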

59
Chapter 18 Protocols for QoS Support
Packet Fields (Sender Report)
Sender Information Block
l NTP timestamp: absolute wall clock time
when report sent
l RTP Timestamp: Relative time used to
create timestamps in RTP packets
l Sender’s packet count (for this session)
l Sender’s octet count (for this session)

60
Chapter 18 Protocols for QoS Support
Packet Fields (Sender Report)
Reception Report Block
l SSRC_n (32 bit) identifies source referred to by this report
block
l Fraction lost (8 bits) since previous SR or RR
l Cumulative number of packets lost (24 bit) during this
session
l Extended highest sequence number received (32 bit)
– Least significant 16 bits is highest RTP data sequence number
received from SSRC_n
– Most significant 16 bits is number of times sequence number has
wrapped to zero
l Interarrival jitter (32 bit)
l Last SR timestamp (32 bit)
l Delay since last SR (32 bit)
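Two of these quantities are easy to sketch: the extended highest sequence number (wrap count in the high 16 bits) and the interarrival jitter estimator J += (|D| − J)/16 from RFC 3550:

```python
# Reception-report arithmetic: extended sequence number and the
# RFC 3550 interarrival jitter estimate.

def extended_seq(wrap_count, highest_seq):
    # High 16 bits: times the 16-bit sequence number wrapped to zero.
    return (wrap_count << 16) | highest_seq

def update_jitter(jitter, transit_prev, transit_now):
    d = abs(transit_now - transit_prev)   # |D(i-1, i)| transit difference
    return jitter + (d - jitter) / 16.0   # smoothed running estimate

print(extended_seq(wrap_count=2, highest_seq=7))   # 131079
j = update_jitter(0.0, transit_prev=100, transit_now=104)
print(round(j, 3))   # 0.25
```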
61
Chapter 18 Protocols for QoS Support
Receiver Report
l Same as sender report except:
– Packet type field has different value
– No sender information block

62
Chapter 18 Protocols for QoS Support
Source Description Packet
l Used by source to give more information
l 32 bit header followed by zero or more
additional information chunks
l E.g.:
l 0 END End of SDES list
l 1 CNAME Canonical name
l 2 NAME Real user name of source
l 3 EMAIL Email address

63
Chapter 18 Protocols for QoS Support
Goodbye (BYE)
l Indicates one or more sources no longer
active
– Confirms departure rather than failure of
network

64
Chapter 18 Protocols for QoS Support
Application Defined Packet
l Experimental use
l For functions & features that are
application specific

65
Chapter 18 Protocols for QoS Support
