
Improving TCP Performance Based on Packet Loss Identification

Gaurish Kadam and Kaushik Kumar Balasankula


Department of Electrical and Computer Engineering
The Volgenau School of Information Technology and Engineering
George Mason University, Fairfax, VA 22031

Abstract-- This paper reviews and illustrates the concepts put forth in the original paper by Joel Sing and Ben Soh. During data communication over TCP networks, performance is evaluated mainly in terms of packet loss. Most TCP performance improvement schemes assume that packet loss is due to network congestion. Apart from congestion, however, packets may also be lost due to a high bit error rate on the link. It is possible to improve TCP performance if we can identify the cause of packet loss and act accordingly. An algorithm that determines the cause of packet loss on a probabilistic basis is discussed and explained using simulations in MATLAB and NS 2.

I. INTRODUCTION

TCP performance is affected mainly by packet loss during transmission. Current TCP variants assume that network congestion is the major cause of packet loss and that corruption of packets accounts for much less than 1%. TCP is also used over wireless links such as Wireless Local Area Networks and geostationary satellite links, and the demand for such services is increasing significantly. Loss of signal strength, channel fading, noise and interference are quite high in wireless communication systems, and poor quality of the transmission media in wired networks also accounts for a high BER. In effect, performance is degraded each time a packet is lost due to corruption, since the transmission rate is reduced in order to avoid incorrectly identified congestion. Additionally, as networks increase in speed, the Bandwidth Delay Product (BDP) continues to grow, making TCP far more sensitive to non-congestion based packet loss.

II. REVIEW

Various proposals and designs have been introduced to improve TCP's ability to function with non-congestion based loss, each with its own benefits and drawbacks. For example, Snoop, a TCP-aware link level protocol, has been shown to provide significant performance gains in both wired and wireless networks, but it fails to work when network level encryption is deployed. TCP Vegas utilizes per packet Round Trip Time (RTT) measurements to identify impending congestion; though it has been shown to increase TCP performance, it suffers from multiple fairness issues. The algorithm proposed in the original paper by Joel Sing and Ben Soh considers a packet loss metric supplemented with RTT for the detection of corruption based packet loss.

Packet loss is commonly attributed to network congestion, but apart from congestion, packets may also be lost due to corruption:

Loss = corruption + congestion

For multiple hop networks we can generalize the above equation as:

Loss = crpt1 + cgst1 + … + crptn + cgstn

The per packet RTT is effectively twice the sum of three delays – propagation delay, equipment delay
and queuing delay:

rtt = 2 * (pd + ed + qd)

A detailed analysis of network state and packet loss, including the relationships between the two, can be seen in Figure 1.

Figure 1

In the 'normal' state, there is no packet loss and the queuing delay is zero. When the queuing delay exceeds zero, the network moves to the 'impending congestion' state. When packets are dropped in this state, the network has become congested and moves to the 'congestion' state; it returns to the 'impending congestion' state when packets are no longer being dropped by the network. When the network is in the 'normal' state and packets are lost without queuing delay, it moves to the 'error' state.

During the lifetime of a packet - from creation to destruction - it will exist in one of the six states shown in Figure 2. The transmission of a packet will result in it reaching one of three final states: it will either be successfully delivered (state 3), lost due to congestion (state 5) or lost due to corruption (state 4). The final state for a given packet depends on the current state of the network, which in turn defines a nondeterministic probability for each state. Currently, Jacobson based TCP variants are only capable of identifying two of these final states (state 3 and state 5), with any loss being assumed to be a result of congestion.

Figure 2

Based on this probabilistic model, the pseudo code for the algorithm can be presented as:

if loss then
    if (total Q delay < Q delay threshold) then
        packet loss due to corruption
    else
        packet loss due to congestion
    end if
end if

In reality, it is difficult to accurately measure the total queuing delay of the network. As a result, an approximation needs to be made utilizing per packet RTT measurements, in a similar way to the TCP Vegas congestion control algorithms.

A practical implementation of this algorithm requires the following steps:
1) The lowest RTT measured during the lifetime of the flow is recorded as baseRTT.
2) For each RTT, an average RTT (avgRTT) is calculated from the RTT measured for each packet sent and acknowledged.
3) The lowest non-zero value observed for
avgRTT - baseRTT is recorded as baseQD. This value is used as a queuing delay threshold (QDthresh), which is dynamically calculated depending on the current network.
4) When three duplicate acknowledgments are observed, the current queuing delay (QD) is calculated as avgRTT - baseRTT. If QD is less than or equal to QDthresh, the normal TCP packet recovery methods are invoked; however, the congestion window size is not reduced.

A threshold value of baseQD < QDthresh < 2 * baseQD adjusts the sensitivity of the algorithm, with values closer to baseQD being less susceptible to skewed RTT measurements.

By using this algorithm we are able to distinguish packet loss due to corruption from packet loss due to congestion. Hence TCP performance can be improved by retransmitting the corrupted packets instead of unnecessarily reducing the window size.

III. ANALYSIS

In order to observe the effect of packet loss due to corruption on TCP's performance, we simulated a network of two nodes with a link speed of 2 Mbps, utilizing drop tail queues with a buffer size of 128 packets. A uniformly distributed BER of 0.001 and 0.01 was applied to the link. The simulation was based on existing implementations of TCP Reno and TCP Vegas.

To validate the 'Packet Loss Identification' algorithm, we simulated in MATLAB a finite M/M/1 queue with a maximum buffer size of 20 packets, a link with a capacity of 155 Mbps, exponentially distributed packet lengths with a mean packet size of 2325 bits, and a link utilization of 0.95.

IV. RESULTS

In Figure 3, we can observe the slow start phase and the congestion avoidance phase. The congestion window increases exponentially, as the BER was considered to be zero and the link is congestion free.

Figure 3

In Figure 4, we can observe the fluctuation in the congestion window as a BER of 0.001 was applied to the link. This validates the effect of packet loss due to corruption on TCP's performance; e.g. at 59.99 ms there is a sudden decrease in window size due to falsely identified congestion.

Figure 4
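To make the Packet Loss Identification bookkeeping concrete, the steps described above (record baseRTT, track avgRTT, derive baseQD, compare QD against QDthresh on loss) can be sketched in Python. This is a minimal illustration; the class name, the sensitivity parameter and the sample RTT values are our own, not part of the original paper.

```python
# Minimal sketch of the PLI bookkeeping described in the paper.
# Names (PLI, sensitivity, observe_rtt, classify_loss) are ours.

class PLI:
    def __init__(self, sensitivity=1.5):
        # sensitivity in [1.0, 2.0): QDthresh = sensitivity * baseQD,
        # matching the baseQD < QDthresh < 2 * baseQD range in the text.
        self.base_rtt = None   # lowest RTT seen over the flow lifetime
        self.base_qd = None    # lowest non-zero (avgRTT - baseRTT)
        self.sensitivity = sensitivity

    def observe_rtt(self, avg_rtt):
        """Feed the running average RTT of acknowledged packets."""
        if self.base_rtt is None or avg_rtt < self.base_rtt:
            self.base_rtt = avg_rtt
        qd = avg_rtt - self.base_rtt
        if qd > 0 and (self.base_qd is None or qd < self.base_qd):
            self.base_qd = qd

    def classify_loss(self, avg_rtt):
        """On three duplicate ACKs: corruption vs. congestion."""
        qd = avg_rtt - self.base_rtt
        qd_thresh = self.sensitivity * self.base_qd
        return "corruption" if qd <= qd_thresh else "congestion"

pli = PLI(sensitivity=1.5)
for rtt in [100.0, 100.5, 102.0, 110.0]:  # hypothetical RTT samples, ms
    pli.observe_rtt(rtt)
print(pli.classify_loss(100.3))  # small queuing delay -> "corruption"
print(pli.classify_loss(112.0))  # large queuing delay -> "congestion"
```

With these samples, baseRTT is 100.0 ms and baseQD is 0.5 ms, so a loss observed at a near-baseline RTT is classified as corruption while one at an inflated RTT is classified as congestion.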
In Figures 5 and 6, we can observe a significant improvement in TCP performance for Vegas over Reno; e.g. within 300 ms, with a BER of 0.001, Reno could transmit 10522 packets while Vegas could transmit 11237 packets.

Figure 5: Time vs. Sequence Number for Reno

Figure 6: Time vs. Sequence Number for Vegas

Figure 7 shows the queuing delay for all the lost packets. As per 'PLI', if the queuing delay is less than the queuing delay threshold we say that the packets were lost due to corruption; otherwise the packets were lost due to congestion. E.g. the queuing delay threshold is set to 0.2, and the queuing delay for the packets with sequence numbers 50, 100, 150, 200 and 250 is much less than the threshold, in contrast to the other packets. Thereby we can identify the cause of each packet loss.

Total number of blocked packets = 25
Packets dropped due to corruption = 5
Packets dropped due to congestion = 20

Figure 7

V. FURTHER RESEARCH

Since the proposed algorithm is based on queuing delay, and in reality it is very difficult to calculate queuing delay on a per packet basis, more accurate ways of identifying and calculating this metric would be highly beneficial.

VI. CONCLUSION

An algorithm used to identify non-congestion based packet loss, through a probabilistic approach based on the network queuing delay, has been discussed. We have simulated the Packet Loss Identification algorithm in MATLAB to explain the main idea of the original paper. This method is highly effective in preventing unnecessary reductions in congestion window size, resulting in increased TCP performance.
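For readers who wish to reproduce the validation setup of Section III, the finite M/M/1 queue can be approximated with a short simulation. The parameters (buffer of 20 packets, 155 Mbps link, exponentially distributed packet lengths with mean 2325 bits, utilization 0.95) come from the text; the Python event loop itself is our own sketch, not the paper's MATLAB implementation.

```python
import random

# Sketch of a finite M/M/1 queue with the parameters from Section III.
# This is an illustrative reconstruction, not the original MATLAB code.

def simulate_mm1(n_packets=100_000, buffer_size=20,
                 link_bps=155e6, mean_pkt_bits=2325, rho=0.95, seed=1):
    random.seed(seed)
    mu = link_bps / mean_pkt_bits        # service rate (packets/s)
    lam = rho * mu                       # arrival rate (packets/s)
    t = 0.0                              # current arrival time
    departures = []                      # departure times of queued packets
    blocked = 0
    for _ in range(n_packets):
        t += random.expovariate(lam)     # next Poisson arrival
        # discard packets that have already left the system
        departures = [d for d in departures if d > t]
        if len(departures) >= buffer_size:
            blocked += 1                 # buffer full: packet is blocked
            continue
        start = max(t, departures[-1] if departures else t)
        service = random.expovariate(mu) # exponential packet length
        departures.append(start + service)
    return blocked / n_packets           # empirical blocking probability

print(f"blocking probability ~ {simulate_mm1():.4f}")
```

At utilization 0.95 with a 20-packet buffer, the blocking probability should come out at a few percent, consistent with the small fraction of blocked packets reported in the results.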
VII. REFERENCES

[1] J. Sing and B. Soh, "Improving TCP Performance: Identifying Corruption Based Packet Loss," in Proceedings of the 2007 15th IEEE International Conference on Networks, pp. 400-405.

[2] W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," Standards Track, RFC 2001, Internet Engineering Task Force, 1997.

[3] J. Cobb and P. Agrawal, "Congestion or corruption? A strategy for efficient wireless TCP sessions," in Proceedings of IEEE Symposium on Computers and Communications 1995, pp. 265-268, IEEE, June 1995.

[4] The Network Simulator 2 (ns2). http://www.isi.edu/nsnam/ns/
