
CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION ABOUT THE MANET

Recent advancements in wireless communication and the miniaturization of computers have led to a new concept called the mobile ad hoc network (MANET), where two or more mobile nodes can form a temporary network without the need for any existing network infrastructure or centralized administration [3]. Even if the source and destination mobile hosts are not within communication range of each other, data packets are forwarded to the destination by relaying the transmission through intermediate mobile hosts between the two.
Since no special infrastructure is required, many applications are expected to be developed for ad hoc networks in fields such as military operations and rescue work. Because mobile hosts in ad hoc networks move freely, disconnections occur frequently, causing frequent network partitions. If a network is partitioned into two due to the migration of mobile hosts, mobile hosts in one partition cannot access data items held by hosts in the other. Thus, data accessibility in ad hoc networks is lower than that in conventional fixed networks, and it is very important to prevent the deterioration of data accessibility when the network partitions [6]. A possible and promising solution is the replication of data items at mobile hosts that are not the owners of the original data.
Since mobile hosts generally have poor resources, it is usually impossible for them to hold replicas of all data items in the network. For example, suppose a research project team engaged in excavation work constructs an ad hoc network on a mountain. The results of the investigation may consist of various types of data such as numerical data, photographs, sounds, and videos. In this case, although it is useful to have the data that other members obtained, it is difficult for a single mobile host to hold replicas of all the data.
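The storage constraint above suggests that each host must choose which replicas to keep. The following is a minimal sketch of one possible policy, keeping the most frequently accessed items that fit in local storage; the item names, sizes, and access frequencies are invented for illustration, and the chapter does not prescribe a specific replication algorithm.

```python
# Hypothetical sketch of replica allocation at a resource-poor mobile host:
# keep the most frequently accessed items that fit in local storage.

def allocate_replicas(items, capacity):
    """Greedily pick replicas that fit in `capacity`, most-accessed first.

    items: list of (name, size, access_frequency) tuples.
    Returns the names chosen for replication.
    """
    chosen, used = [], 0
    for name, size, freq in sorted(items, key=lambda t: -t[2]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

survey_data = [
    ("soil_readings.csv", 2, 9),   # numerical data, accessed often
    ("site_photo.jpg", 5, 7),
    ("audio_log.wav", 10, 3),
    ("dig_video.mp4", 40, 5),      # too large for this host's storage
]
print(allocate_replicas(survey_data, capacity=20))
# ['soil_readings.csv', 'site_photo.jpg', 'audio_log.wav']
```

The large video is skipped despite its high access frequency because it alone would exceed the host's capacity, illustrating why hosts cannot simply replicate everything.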
MANETs are also commonly used with collaborative and distributed computing, wireless mesh networks, wireless sensor networks and hybrid wireless networks. Figure 1.1 shows the difference between infrastructure-based wireless cellular network operation and the multi-hop, infrastructure-less MANET. Unlike wireless cellular networks, multi-hop MANETs lack a Base Station (BS) to coordinate packet transmission and reception among the mobile nodes and to control the exchange of state information.
As wide-scale deployment of MANETs increases, extensive research efforts aim to enhance the operation and management of such networks.

1.2 CHARACTERISTICS OF AD-HOC NETWORKS


A MANET consists of mobile platforms (e.g., a router with multiple hosts and wireless
communications devices)--herein simply referred to as "nodes"--which are free to move about
arbitrarily. The nodes may be located in or on airplanes, ships, trucks, cars, perhaps even on
people or very small devices, and there may be multiple hosts per router. A MANET is an
autonomous system of mobile nodes. The system may operate in isolation, or may have gateways
to and interface with a fixed network. In the latter operational mode, it is typically envisioned to
operate as a "stub" network connecting to a fixed internetwork. Stub networks carry traffic
originating at and/or destined for internal nodes, but do not permit exogenous traffic to "transit"
through the stub network.

MANET nodes are equipped with wireless transmitters and receivers using antennas which may be omnidirectional (broadcast), highly directional (point-to-point), possibly steerable, or some combination thereof. At a given point in time, depending on the nodes' positions, their transmitter and receiver coverage patterns, transmission power levels and co-channel interference levels, wireless connectivity in the form of a random, multihop graph or "ad hoc" network exists between the nodes. This ad hoc topology may change over time as the nodes move or adjust their transmission and reception parameters.

MANETs have several salient characteristics:

 Dynamic topologies: Nodes are free to move arbitrarily; thus, the network topology--
which is typically multihop--may change randomly and rapidly at unpredictable times, and
may consist of both bidirectional and unidirectional links.
 Bandwidth-constrained, variable capacity links: Wireless links will continue to have significantly lower capacity than their hardwired counterparts. In addition, the realized throughput of wireless communications--after accounting for the effects of multiple access, fading, noise, and interference conditions, etc.--is often much less than a radio's maximum transmission rate.

 Energy-constrained operation: Some or all of the nodes in a MANET may rely on batteries or other exhaustible means for their energy. For these nodes, the most important system design criterion for optimization may be energy conservation.

 Limited physical security: Mobile wireless networks are generally more prone to physical security threats than fixed-cable networks. The increased possibility of eavesdropping, spoofing, and denial-of-service attacks should be carefully considered. Existing link security techniques are often applied within wireless networks to reduce these threats. As a benefit, the decentralized nature of network control in MANETs provides additional robustness against the single points of failure of more centralized approaches.

1.3 APPLICATIONS OF AD-HOC NETWORKS


There are a number of possible application areas for MANETs, ranging from simple civil and commercial applications to complicated high-risk emergency services and battlefield operations. Below are some significant examples from the civil, emergency and military domains; the interested reader can refer to the literature for further details and other examples.
Civil and Commercial Applications:
Two emerging wireless network scenarios that are likely to soon become part of daily routine are vehicular communication in an urban environment and personal area networking. In the vehicular communication scenario, short-range wireless communication will be used within the car for monitoring and controlling the vehicle's mechanical components. Another application scenario is communication with other vehicles on the road; potential applications include road safety messages, coordinated navigation and other peer-to-peer interactions.
Emergency Services:
MANETs can be very useful in emergency search and rescue operations, such as in environments where conventional infrastructure-based communication facilities have been destroyed by natural calamities such as earthquakes, or simply do not exist. Immediate deployment of MANETs in these scenarios can assist rapid activity coordination. For instance, police squad vehicles and fire brigades can remain connected and exchange information more quickly if they cooperate to form ad hoc networks.
Battlefield Operations:
In future battlefield operations, autonomous agents such as unmanned ground vehicles and unmanned airborne vehicles will be projected to the front line for intelligence, surveillance, enemy anti-aircraft suppression, damage assessment and other tactical operations. It is envisaged that these agents, acting as mobile nodes, will organise into groups of small unmanned ground, sea and airborne vehicles in order to provide fast wireless communication, perhaps participating in complex missions involving several such groups.

1.5 CROSS LAYER DESIGN


Cross-layer design (CLD) is broadly defined as "actively exploiting the dependence between protocol layers to obtain performance gains." CLD is a way of achieving information sharing between all the layers in order to obtain the highest possible adaptability of any network.

1.5.1 Data Link Layer


The data link layer (DLL) performs several important functions such as error control, flow control, addressing, framing, and communication medium access control. The DLL consists of two sublayers: the logical link control (LLC) sublayer, which is responsible for error control and flow control, and the medium access control (MAC) sublayer, which takes care of addressing, framing, and medium access control. Since nodes in MANETs share the same communication channel, collisions may occur if more than one node transmits at the same time. As a consequence, the MAC sublayer is tasked with efficiently controlling access to the shared channel among nodes in a MANET.
The data link layer is divided into two sublayers:

1. The media access control (MAC) layer

2. The logical link control (LLC) layer

The major challenge of the MAC sub layer is the hidden terminal problem.
In the case of the hidden terminal problem, a packet collision happens at the
intended receiver if there is transmission from a hidden terminal. As shown in
Figure 1.4, when node A transmits a frame to node B, node C (a hidden terminal) is
not aware of the transmission due to its distance from node A. If node C
simultaneously transmits a frame to node B, a collision occurs at node B.

Figure 1.4 The hidden terminal problem in a MANET

Many MAC protocols have been proposed to mitigate the adverse effects of the hidden terminal problem through collision avoidance. Most collision avoidance schemes (such as carrier sense multiple access with collision avoidance (CSMA/CA), employed by the distributed coordination function (DCF) component of the MAC sublayer of the IEEE 802.11 standard) are sender-initiated, involving an exchange of channel reservation control frames between the communicating nodes prior to data transmission. In this case, all the neighbouring nodes of a given communicating node need to be informed that the channel will be occupied for a period of time. As shown in Figure 1.5, node A, wishing to transmit a data frame to node B, first broadcasts an RTS (request-to-send) frame containing the length of the data and the address of node B. Upon receiving the RTS, node B responds by broadcasting a CTS (clear-to-send) frame containing the length of the data and the address of node A.
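The hidden terminal condition and the RTS/CTS remedy can be illustrated with a small sketch. The node positions, the radio range, and the helper names below are invented for illustration; this is a simplified geometric model, not the 802.11 state machine.

```python
# Simplified model of the hidden terminal problem and the RTS/CTS remedy.
# Node positions and the radio range are invented example values.

RANGE = 10
nodes = {"A": 0, "B": 8, "C": 16}  # positions on a line; C cannot hear A

def hears(x, y):
    """True if node x is within radio range of node y."""
    return abs(nodes[x] - nodes[y]) <= RANGE

# Hidden terminal: C cannot carrier-sense A's frame to B, so plain
# carrier sensing does not stop C from colliding at B.
assert not hears("C", "A") and hears("C", "B")

def silenced_by(sender, receiver):
    """Third parties that overhear the RTS or the CTS and must defer."""
    return {n for n in nodes
            if n not in (sender, receiver)
            and (hears(n, sender) or hears(n, receiver))}

# B's CTS reaches C, so C defers and the collision at B is avoided.
print(sorted(silenced_by("A", "B")))  # ['C']
```

Even though C never hears A's RTS, it hears B's CTS, which is exactly how the handshake protects the receiver from hidden terminals.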
Overhearing an RTS or CTS from neighbouring nodes can inhibit a node from transmitting to other nodes even when that transmission would not interfere with the ongoing communication. For example, in Figure 1.5, the communication between nodes A and B will inhibit node D from initiating communication with node C. This is known as the exposed terminal problem, and it can lead to inefficient utilisation of the communication channel. One suggested solution to mitigate the exposed terminal problem is the use of smart or directional antennas, where the propagation of the RTS, CTS and DATA frames is directed towards the intended nodes (Figure 1.6).

Figure 1.5 The exposed terminal problem in a MANET



Figure 1.6 An ad hoc network with directional antennas

1.5.2 Network Layer


The network layer provides end-to-end transmission service. This includes the exchange of routing information, finding a feasible route to a destination, repairing broken links and providing efficient utilization of the available communication bandwidth. One of the most important properties of MANETs is the mobility of the nodes; this mobility results in frequent route breaks, packet collisions, transient loops, stale routing information and difficulty in resource reservation. As a consequence, a good routing protocol should be able to address these issues with a low communication overhead.
Due to the bandwidth and battery life limitations in MANETs, the use of a routing protocol with a low communication overhead is critical to overall system performance. The routing control packets exchanged for finding new routes and maintaining existing ones should be minimised: control packets consume the limited bandwidth and can also collide with data packets, especially as the network scales in number of nodes. Therefore, an efficient routing protocol that can cope with high network density while using a small number of routing control packets is highly desirable.
Other responsibilities of the network layer include the following:
 Logical addressing. The physical addressing implemented by the data link layer handles the addressing problem locally. If a packet crosses the network boundary, we need another addressing system to distinguish the source and destination systems. The network layer adds a header to the packet coming from the upper layer that, among other things, includes the logical addresses of the sender and receiver.
 Routing. When independent networks or links are connected to create internetworks (networks of networks) or a large network, the connecting devices (called routers or switches) route or switch the packets to their final destination. One of the functions of the network layer is to provide this mechanism.
Figure 1.7 illustrates end-to-end delivery by the network layer.
As Figure 1.7 shows, we now need source-to-destination delivery. The network layer at A sends the packet to the network layer at B. When the packet arrives at router B, the router makes a decision based on the final destination (F) of the packet. As we will see in later chapters, router B uses its routing table to find that the next hop is router E. The network layer at B therefore sends the packet to the network layer at E. The network layer at E, in turn, sends the packet to the network layer at F.
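The hop-by-hop decision just described can be sketched with per-router next-hop tables. The A-B-E-F topology follows the figure; the table contents themselves are invented for illustration.

```python
# Hypothetical next-hop tables for the A -> B -> E -> F delivery of
# Figure 1.7. Each router looks up only the final destination and
# forwards the packet one hop.

routing_tables = {
    "A": {"F": "B"},   # A's next hop toward F is B
    "B": {"F": "E"},   # B consults its table: next hop is E
    "E": {"F": "F"},   # E delivers directly to F
}

def deliver(src, dst):
    """Follow next-hop entries from src until the packet reaches dst."""
    path, node = [src], src
    while node != dst:
        node = routing_tables[node][dst]  # per-router forwarding decision
        path.append(node)
    return path

print(deliver("A", "F"))  # ['A', 'B', 'E', 'F']
```

Note that no router knows the whole path; each holds only the next hop for the final destination, which is the essence of network-layer routing.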

LIST OF ABBREVIATIONS:

1. UWB ULTRA WIDE BAND

2. ARF AUTOMATIC RATE FALLBACK

3. DCF DISTRIBUTED COORDINATION FUNCTION

4. TCP TRANSMISSION CONTROL PROTOCOL

5. RTS REQUEST TO SEND

6. CTS CLEAR TO SEND

7. WLAN WIRELESS LAN

8. MAC MEDIUM ACCESS CONTROL

9. ACK ACKNOWLEDGEMENT

10. FER FRAME ERROR RATE

11. AP ACCESS POINT

12. BDP BANDWIDTH DELAY PRODUCT

13. RTT ROUND TRIP TIME

14. NS2 NETWORK SIMULATOR 2

15. OTCL OBJECT-ORIENTED VARIANT OF TCL

16. ARP ADDRESS RESOLUTION PROTOCOL

EXISTING WORK:

TCP Fairness in 802.11e WLANs addresses transport layer unfairness in WLANs. A simple solution is developed that uses the 802.11e AIFS, TXOP and CWmin parameters to ensure fairness between competing TCP uploads and downloads. Work on 802.11e tuning algorithms is largely informed by the quality of service requirements of newer applications such as voice over IP. However, network traffic is currently dominated by data traffic (web, email, media downloads, etc.) carried via the TCP reliable transport protocol, and this situation is likely to continue for some time. Although lacking the time-critical aspect of voice traffic, there is a real requirement for efficient and reasonably fair sharing of the wireless capacity between competing data flows. Unfortunately, cross-layer interactions between the 802.11 MAC and the flow/congestion control mechanisms employed by TCP typically lead to gross unfairness between competing flows, and indeed sustained lockout of flows. While the literature relating to WLAN fairness at the MAC layer is extensive, the issue of transport layer TCP fairness has received far less attention, and the authors who have studied it seek to work within the constraints of the basic 802.11 MAC, focusing solely on approaches that avoid changes at the MAC layer. However, as we shall see, the roots of the problem lie in the MAC layer's enforcement of per-station fairness, so it seems most natural to resolve this issue at the MAC layer itself. In this work we investigate how we might use the flexibility provided by the new 802.11e MAC to resolve the transport layer unfairness in infrastructure WLANs, considering TCP uploads, downloads, and mixtures of both.

Recently, hardware supporting a useful subset of the 802.11e functionality has become available, and so we investigate the behaviour of TCP traffic in a realistic network rather than via simulations.


All systems are equipped with an Atheros 802.11a/b/g PCI card with an external antenna. The system hardware configuration is summarised in Table I. All nodes, including the AP, use a Linux 2.6.8.1 kernel and a version of the MADWiFi [6] wireless driver modified to allow us to adjust the 802.11e CWmin, AIFS and TXOP parameters. Specific vendor features on the wireless card, such as turbo mode, are disabled. All of the tests are performed using the 802.11b physical maximal transmission rate of 11 Mbit/sec with RTS/CTS disabled. The configuration of the various network buffers is also detailed in Table I. In particular, we have increased the size of the TCP buffers to ensure that we see true AIMD behaviour (with small TCP buffers, TCP congestion control is effectively disabled as the TCP congestion window is determined by the buffer size rather than the network capacity). We have also carried out tests investigating the impact of the size of the interface and driver queues and obtain similar results for a range of settings. The measurements demonstrate the unfairness between competing TCP upload flows and between competing upload and download flows in 802.11 WLANs.

A. Gross unfairness between the throughput achieved by competing flows is evident. The source of this highly undesirable behaviour is rooted in the interaction between the MAC layer contention mechanism (which enforces fair access to the wireless channel) and the TCP transport layer flow and congestion control mechanisms (which ensure reliable transfer and match source send rates to network capacity). At the transport layer, to achieve reliable data transfers, TCP receivers return acknowledgement (ACK) packets to the data sender confirming safe arrival of data packets. During TCP uploads, the wireless stations queue data packets to be sent over the wireless channel to their destination, and the returning TCP ACK packets are queued at the wireless access point (AP) to be sent back to the source station. TCP's operation implicitly assumes that the forward (data) and reverse (ACK) paths between a source and destination have similar packet transmission rates. The basic 802.11 MAC layer, however, enforces station-level fair access to the wireless channel. That is, n stations competing for access to the wireless channel are each able to secure approximately a 1/n share of the total available transmission opportunities. Hence, if we have n wireless stations and one AP, each station (including the AP) is able to gain only a 1/(n + 1) share of transmission opportunities. By allocating an equal share of packet transmissions to each wireless node, with TCP uploads the 802.11 MAC allows n/(n + 1) of transmissions to be TCP data packets yet only 1/(n + 1) (the AP's share of medium access) to be TCP ACK packets. For larger numbers of stations n, this MAC layer action leads to substantial forward/reverse path asymmetry at the transport layer. Asymmetry in the forward and reverse path packet transmission rates is a known source of poor TCP performance in wired networks. Asymmetry that leads to significant queueing and dropping of TCP ACKs can disrupt the TCP ACK clocking mechanism, hinder congestion window growth and induce repeated timeouts. With regard to the latter, a timeout is invoked at a TCP sender when no progress is detected in the arrival of data packets at the destination; this may be due to data packet loss (no data packets arrive at the destination), TCP ACK packet loss (safe receipt of data packets is not reported back to the sender), or both. TCP flows with only a small number of packets in flight (e.g. flows which have recently started or which are recovering from a timeout) are much more susceptible to timeouts than flows with large numbers of packets in flight, since the loss of a small number of data or ACK packets is then sufficient to induce a timeout. Hence, when ACK losses are frequent, a situation can easily occur where a newly started TCP flow loses the ACK packets associated with its first few data transmissions, inducing a timeout. The ACK packets associated with the data packets retransmitted following the timeout can also be lost, leading to further timeouts (with associated doubling of the retransmit timer), creating a persistent situation where the flow is completely starved for long periods.

B. Unfairness between TCP upload and download flows. Asymmetry also exists between competing upload and download TCP flows, and it can create unfairness. This is illustrated where it can be seen that upload flows achieve significantly greater throughput than competing download flows. Suppose we have nu upload flows and nd download flows. Since download flows must all be transmitted via the AP, the download flows (regardless of the number nd of download flows) gain transmission opportunities at roughly the same rate as a single TCP upload flow. That is, roughly 1/(nu + 1) of the channel bandwidth is allocated to the download flows and nu/(nu + 1) to the uploads. As the number nu of upload flows increases, gross unfairness between uploads and downloads can result.
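The share arithmetic above can be checked numerically. The station counts below are arbitrary examples; `Fraction` just keeps the arithmetic exact.

```python
# Per-station MAC fairness translated into transport-layer shares.
from fractions import Fraction

def upload_shares(n):
    """n uploading stations plus one AP: (aggregate data share, ACK share)."""
    return Fraction(n, n + 1), Fraction(1, n + 1)

def download_share(nu):
    """All downloads together ride the AP's single 1/(nu + 1) share."""
    return Fraction(1, nu + 1)

data, acks = upload_shares(10)
print(data, acks)          # 10/11 1/11: heavy data/ACK asymmetry
print(download_share(10))  # 1/11, shared among every download flow
```

With ten uploading stations, data packets get ten times the channel share of the returning ACKs, and all downloads combined get no more than one upload's share, which is exactly the asymmetry described above.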

Existing approaches to alleviating the gross unfairness between TCP flows competing over 802.11 WLANs work within the constraints of the current 802.11 MAC, resulting in complex adaptive schemes requiring online measurements and, perhaps, per-packet processing. We instead consider how the additional flexibility present in the new 802.11e MAC might be employed to alleviate transport layer unfairness. To address TCP's performance problems, two issues must be addressed: asymmetry between the TCP data and TCP ACK paths, which disrupts the TCP congestion control mechanism, and network-level asymmetry between TCP upload and download flows.

Symmetry can be restored between the TCP data and TCP ACK paths by configuring the MAC such that TCP ACKs effectively have unrestricted access to the wireless medium. Recall that in 802.11e the MAC parameter settings are made on a per-class basis. Hence, we collect TCP ACKs into a single class (i.e. queue them together in a separate queue; in our tests packet classification is carried out based on packet size) and confine prioritisation to this class. The rationale for this approach makes use of the transport layer behaviour: allowing TCP ACKs unrestricted access to the wireless channel does not lead to the channel being flooded. Instead, it ensures that the volume of TCP ACKs is regulated by the transport layer rather than the MAC layer. In this way the volume of TCP ACKs will be matched to the volume of TCP data packets, thereby restoring forward/reverse path symmetry at the transport layer. When the wireless hop is the bottleneck, data packets will be queued at wireless stations for transmission and packet drops will occur there, while TCP ACKs will pass freely with minimal queueing, i.e. the standard TCP semantics are recovered.

In the case of competing TCP upload and download flows, recall that the primary source of unfairness arises from the fact that if we have nu uploads and nd downloads, then the download flows win only roughly a 1/(nu + 1) share of the available transmission opportunities. This suggests that to restore fairness we need to prioritise the download data packets at the AP so as to achieve an nd/(nu + nd) share. We therefore consider using the 802.11e MAC parameters detailed in Table II. Here, the 802.11e AIFS and CWmin parameters are used to prioritise TCP ACKs: a small value of AIFS and CWmin yields near-strict prioritisation of TCP ACKs at the AP, while a larger value of CWmin is used at the wireless stations in order to reduce contention between competing TCP ACKs. The TXOP packet bursting mechanism in 802.11e provides a straightforward and fine-grained mechanism for prioritising TCP download data packets. By transmitting nd packets (one packet to each of the nd download destination stations) at each transmission opportunity, it can be immediately seen that we restore the nd/(nu + nd) fair share to the TCP download traffic. Note that the number nd of distinct destination stations can be readily determined by inspection of the AP interface queue in real time, with no requirement for monitoring of the wireless medium activity itself. The effect is to dynamically track the number of active TCP download stations and always ensure the appropriate prioritisation of TCP download traffic. Hence, this approach accommodates both bursty, short-lived traffic such as HTTP and long-lived traffic such as FTP in a straightforward and consistent manner.

Revisiting the earlier measurements, the impact of the proposed prioritisation approach can be seen: fairness is restored between the competing TCP flows. The 802.11e MAC parameter settings used in this example (with an 11 Mbps PHY) for both TCP uploads and downloads are summarised in Table II. Although space restrictions prevent us from including the additional results, we have measured similar levels of fairness across a range of network conditions, including varying numbers of upload and download stations and situations where the number of uploads differs from the number of downloads, confirming the effectiveness of the proposed solution.

The performance of the proposed approach with short-lived TCP flows is illustrated by a client-server application model where each user opens TCP upload flows (client "requests") and, in response, corresponding downloads are initiated. Since, as observed previously, lockout of TCP flows is common in 802.11b WLANs, we model user impatience by restarting a client-server session if it fails to complete within a period of 10 seconds. The average time to completion of a client-server session is plotted versus the number of wireless stations. As we would expect the MAC load to increase linearly with the number of users, we normalise by dividing by the number of users. It can be seen that in 802.11b the normalised completion time remains constant until about 15 users and then increases rapidly.
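The TXOP bursting rule described above (one packet per distinct download destination at each AP transmission opportunity) is simple enough to sketch. The queue contents and station names are invented; a real driver would apply this to the 802.11e hardware queues rather than a Python list.

```python
# Hypothetical sketch of the TXOP bursting rule: at each AP transmission
# opportunity, send one queued packet to each distinct download destination.

ap_queue = ["sta1", "sta2", "sta1", "sta3", "sta2"]  # destinations of the
                                                     # AP's queued data packets

def txop_burst(queue):
    """One packet per distinct destination, preserving first-seen order."""
    burst, seen = [], set()
    for dst in queue:
        if dst not in seen:
            seen.add(dst)
            burst.append(dst)
    return burst

burst = txop_burst(ap_queue)
nu = 10  # number of upload flows, an arbitrary example
print(burst)                                              # ['sta1', 'sta2', 'sta3']
print(f"download share restored: {len(burst)}/{nu + len(burst)}")  # 3/13
```

Because nd is read directly off the interface queue, the burst length tracks the number of active download stations without monitoring the medium, as the text notes.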
17

VARIABLE BUFFER LENGTH FOR TCP FLOW IN IEEE 802.11E WLANs

New issues in buffer sizing for WLANs include that the mean service rate is dependent on the level of channel contention and that packet inter-service times vary stochastically due to the random nature of CSMA/CA operation. We consider the typical deployment scenario where an infrastructure mode WLAN is configured with the Access Point (AP) acting as a wireless router between the WLAN and the Internet. TCP traffic is of particular importance in such WLANs as it currently carries the great majority (more than 90% [8]) of network traffic. The effects of buffer-related issues in WLANs have received little attention in the literature. Exceptions include work showing that appropriate buffer sizing can restore TCP upload/download fairness, and work in which TCP performance with fixed AP buffer sizes and 802.11e is investigated. The present work, which extends previous work on buffer sizing for voice traffic, is to our knowledge the first to focus on how to tune buffer sizes for TCP traffic in 802.11e WLANs.

Router buffers are traditionally sized with two primary objectives in mind. First, due to the nature of TCP, Internet traffic tends to be bursty. Should too many packets arrive in a sufficiently short interval of time, a router may lack the capacity to process all of them immediately; the first job of the router buffer is to mitigate packet losses due to bursts by accommodating these packets until they can be serviced. Second, the AIMD congestion control algorithm used by TCP halves the number of packets in flight on detecting network congestion. If router buffers are too small, this backoff action will cause them to empty, with a corresponding reduction in link utilisation. The classical rule of thumb is to provision buffers equal to the bandwidth of the link multiplied by the average round-trip time: the Bandwidth-Delay Product (BDP).
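The rule of thumb reduces to one line of arithmetic. The 1.25 Mbps rate and 300 ms round-trip time below reproduce the 31-packet BDP quoted later in the text; the 1500-byte packet size is an assumption.

```python
# Classical BDP buffer sizing: buffer = bandwidth x mean RTT, in packets.

def bdp_packets(rate_bps, rtt_s, pkt_bytes=1500):
    """Bandwidth-delay product expressed as a whole number of packets."""
    return round(rate_bps * rtt_s / (8 * pkt_bytes))

# 1.25 Mbps download service rate with a 300 ms round trip:
print(bdp_packets(1.25e6, 0.3))  # 31
```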

A number of fundamental new issues arise in 802.11e WLANs. Firstly, the mean service rate at a wireless station is strongly dependent on the level of channel contention, and thus on the number of active stations and their load. Secondly, even when the network load is fixed, the packet inter-service times at a station are not fixed but vary stochastically due to the random nature of the CSMA/CA operation. These facts affect statistical multiplexing and buffer backlog behaviour, and thus the choice of buffer sizes. We study the impact of these differences and find that they lead naturally to a requirement for adaptation of buffer size in response to changing network conditions. We propose an adaptive algorithm for 802.11e WLANs and demonstrate its efficacy via simulations.

We consider scenarios where the Access Point (AP) acts as a wireless router between the WLAN and the wired Internet. Upload flows are from wireless stations in the WLAN to server(s) in the Internet, while downloads are from wired server(s) to stations in the WLAN. At the MAC layer, we use IEEE 802.11g parameters as shown in Table I. The bandwidth between the AP and server(s) is 100 Mbps, and TCP Reno with SACK is used. We note that in WLANs, TCP ACK packets without any prioritisation can easily be queued or dropped, because the basic 802.11 DCF ensures that stations win a roughly equal number of transmission opportunities. For example, consider n stations each carrying one TCP upload flow, with the TCP ACKs transmitted by the AP. While the data packets for the n flows have an aggregate n/(n + 1) share of the transmission opportunities, the TCP ACKs for the n flows have only a 1/(n + 1) share. Issues of this sort are known to degrade TCP performance significantly, as queueing and dropping of TCP ACKs disrupt the TCP ACK clocking mechanism. (802.11e has been approved as an IEEE standard and much of the 802.11e functionality is already available in WLAN devices. The proposed algorithm is also applicable to legacy 802.11 DCF, although without TCP ACK prioritisation, fairness between competing TCP flows cannot be guaranteed.)

As we address this problem using 802.11e. At the AP and each station we treat

TCP ACKs as a separate traffic class, collecting them into a queue which is assigned high

priority.In contrast to wired networks, the mean service rate at a wireless station is not

fixed but instead depends upon the level of channel contention and the network load. when

the number of competing upload flows (with one upload flow per wireless station) is

varied. Similarly to wired networks, the throughput always increases monotonically with

the buffer size, reaching a maximum above a threshold buffer size. It can also be seen that

the download throughput falls as the number of competing uploads increases. The variation

in throughput can be substantial, e.g., the maximum throughput changes from 14Mbps to

1.25Mbps as the number of competing uploads changes from 0 to while ensuring high

throughput, this comes at the cost of high latency. For example, it can be seen from Fig. 1

that when a fixed buffer size of 338 packets is used (which in this example ensures

maximum throughput regardless of the number of contending stations), the roundtrip

latency experienced by the download flow is about 300ms with no uploads but rises to

around 2s with 10 contending upload stations. This occurs because TCP’s congestion

control algorithm probes for bandwidth until packet loss occurs and so download flows

will tend to fill buffers with any sizes. Moreover, the mean queueing delay of a buffer of
20

size Q with mean service rate B is Q/B. Hence, the queueing delay at the AP depends on the service rate, which in turn depends on the number of contending wireless stations and their offered load. For a fixed-size buffer, a decrease in the service rate by a factor b therefore increases the queueing delay by the same factor b. Conversely, sizing the buffer to achieve lower latency across all network conditions comes at the cost of reduced throughput, e.g., a buffer size of 30 packets ensures latency of 200-300ms for up to 10 contending upload stations, but when there are no contending uploads the throughput of a download flow is only about 75% of the maximum achievable. In addition to variations in the mean service rate, we also note that the random nature of 802.11 operations leads to short time-scale stochastic fluctuations in the service rate. This is fundamentally different from wired networks and directly impacts buffering behaviour.
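As a quick numeric check of the Q/B relationship, the following sketch reproduces the buffer-delay figures used above. The 1500-byte packet size is an assumption (the text does not state it), and Q/B is the delay of a continuously full buffer, so it upper-bounds the roughly 2s actually measured with 10 uploads:

```python
PKT_BITS = 1500 * 8  # assumed MTU-sized packets; not stated in the text

def queueing_delay_s(buf_pkts, service_rate_bps):
    """Mean delay of a full buffer of buf_pkts packets: Q/B seconds."""
    return buf_pkts * PKT_BITS / service_rate_bps

# The fixed 338-packet buffer from the example, at the two service rates
d_fast = queueing_delay_s(338, 14e6)    # no competing uploads
d_slow = queueing_delay_s(338, 1.25e6)  # 10 competing uploads
print(round(d_fast, 2), round(d_slow, 2))  # 0.29 3.24
# A drop in service rate by a factor b raises the delay by the same factor b
print(round(d_slow / d_fast, 1))  # 11.2 (= 14/1.25)
```

The 0.29s figure matches the roughly 300ms latency reported with no uploads.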

Stochastic fluctuations in service rate can lead to early queue overflow and reduced link utilisation. With 10 uploads the maximum download throughput is 1.25Mbps, yielding a BDP of 31 packets. However, at this buffer size the achieved download throughput is only about 60% of the maximum; a buffer size of at least 70 packets is required to achieve 100% throughput. The stochastic fluctuations in service rate thus require the buffer size to be increased above the BDP in order to accommodate their impact. The amount of over-provisioning required may be bounded using statistical arguments, but we do not pursue this further here due to space constraints. We note, however, that a simple but effective approach is to over-provision by a fixed number of packets above the BDP.
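To make the numbers concrete, the 31-packet BDP quoted above can be reproduced from the 1.25Mbps service rate and a nominal 300ms round-trip time; the RTT and 1500-byte packet size are assumptions chosen to be consistent with the figures in the text:

```python
def bdp_pkts(rate_bps, rtt_s, pkt_bytes=1500):
    # Bandwidth-delay product, expressed in packets
    return rate_bps * rtt_s / (8 * pkt_bytes)

bdp = int(bdp_pkts(1.25e6, 0.3))
print(bdp)       # 31, the BDP quoted in the text for 10 uploads
print(bdp + 40)  # 71 with a fixed 40-packet over-provision
```

A fixed over-provision of 40 packets (the value the text later reports as working well) brings the buffer to 71 packets, in line with the at-least-70-packet requirement observed above.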

Motivated by the foregoing observations and the difficulty of selecting a fixed buffer size suited to a range of network conditions, we consider the use of an adaptive buffer sizing strategy. We note that a wireless station can readily measure its own service rate by observing the inter-service time, i.e., the interval between a packet arriving at the head of the network interface queue at time ts and being successfully transmitted at time te (as indicated by correct receipt of the corresponding MAC ACK). This measurement can be readily implemented in real devices and incurs only a minor computational burden.
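A minimal sketch of such a measurement, assuming exponential smoothing of the per-packet interval te - ts; the smoothing weight and class interface are illustrative, not taken from the text:

```python
class ServiceRateEstimator:
    """Estimate mean service rate from head-of-queue to MAC-ACK intervals."""
    def __init__(self, weight=0.1):
        self.weight = weight       # EWMA smoothing weight (assumed value)
        self.mean_interval = None  # smoothed inter-service time, seconds

    def on_packet_served(self, ts, te):
        """ts: packet reached queue head; te: MAC ACK received."""
        interval = te - ts
        if self.mean_interval is None:
            self.mean_interval = interval
        else:
            # Exponentially weighted moving average of the interval
            self.mean_interval += self.weight * (interval - self.mean_interval)

    def rate_pkts_per_s(self):
        return 1.0 / self.mean_interval if self.mean_interval else 0.0

# e.g. a packet reaching the queue head at t=0.0s and ACKed at t=0.01s
est = ServiceRateEstimator()
est.on_packet_served(0.0, 0.01)
print(est.rate_pkts_per_s())  # 100.0 packets/s
```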

This will decrease the buffer size when the service rate falls and increase the buffer size when the service rate rises, so as to maintain an approximately constant queueing delay of T seconds. This effectively regulates the buffer size to remain equal to the BDP as the mean service rate varies. To account for the impact of the stochastic nature of the service rate on buffer size requirements (see comments at the end of the previous section), we modify this update rule to include a fixed over-provisioning term a. Based on our measurements and those of others, we have found that a value of a equal to 40 packets works well across a wide range of network conditions. The effectiveness of this simple adaptive algorithm can be seen by plotting the throughput percentage and smoothed RTT of download flows as the number of download and upload flows is varied: the adaptive algorithm maintains high throughput efficiency across the entire range of operating conditions.
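The update rule described above can be sketched as follows. The delay target T, packet size, and clamping bounds are assumed for illustration; only the fixed over-provisioning a = 40 packets is the value reported in the text:

```python
def adapt_buffer_size(service_rate_pkts_s, T=0.3, a=40,
                      min_pkts=2, max_pkts=400):
    """Buffer = estimated BDP (rate * delay target T) plus a fixed
    over-provision of a packets to absorb service-rate fluctuations.
    T and the clamping bounds are assumed, not from the text."""
    bdp = service_rate_pkts_s * T  # packets in flight at the current rate
    return max(min_pkts, min(max_pkts, round(bdp) + a))

# Buffer shrinks as contention lowers the service rate (1500-byte packets)
print(adapt_buffer_size(14e6 / 12000))    # no uploads -> 390 packets
print(adapt_buffer_size(1.25e6 / 12000))  # 10 uploads -> 71 packets
```

Feeding the measured service rate into this rule each sampling interval keeps the queueing delay roughly constant as contention varies.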

# =====================================================
# Define Node Configuration parameters
# =====================================================
set val(chan) Channel/WirelessChannel ;# channel type
set val(prop) Propagation/TwoRayGround ;# radio-propagation model
set val(netif) Phy/WirelessPhy ;# network interface type
set val(mac) Mac/802_11 ;# MAC type
set val(ifq) Queue/DropTail/PriQueue ;# interface queue type
set val(ll) LL ;# link layer type
set val(ant) Antenna/OmniAntenna ;# antenna model
set val(ifqlen) 300 ;# max packets in ifq
set val(nn) 50 ;# number of mobile nodes
set val(rp) AODV ;# routing protocol
set val(x) 1600 ;# X axis distance
set val(y) 1100 ;# Y axis distance
set opt(energymodel) EnergyModel ;# energy model
set opt(initialenergy) 100 ;# initial energy in Joules

# Creating Simulator Object
set ns_ [new Simulator]

# Creating NAM File
set namTracefile [open pro.nam w]
$ns_ namtrace-all-wireless $namTracefile $val(x) $val(y)

# Creating Trace File
set traceFile [open pro.tr w]
$ns_ trace-all $traceFile
$ns_ use-newtrace

# Creating Topology
set topo [new Topography]
$topo load_flatgrid $val(x) $val(y)

# Creating GOD (General Operations Director) Object
set god_ [create-god $val(nn)]

# To vary range values
Phy/WirelessPhy set Pt_ 0.45

# Node Creation
for { set i 0 } { $i < $val(nn)} { incr i } {
    set node_($i) [$ns_ node]
    $node_($i) random-motion 0
    $god_ new_node $node_($i)
}

# Assigning node positions
source setdest50.tcl

for { set i 0 } { $i < $val(nn)} { incr i } {
    $node_($i) color black
    $ns_ initial_node_pos $node_($i) 50
    $ns_ at 0.0 "$node_($i) color black"
}

# For mobility
proc mobility {tm} {
    global ns_ node_ val
    for {set i 0} {$i<$val(nn)} {incr i} {
        set a [expr int(rand()*2)]
        # Pick a new X position within the 1600m field
        set f 1
        while {$f} {
            if {$a==0} {
                set x [expr int(rand()*100)+[$node_($i) set X_]]
            } else {
                set x [expr [$node_($i) set X_]-int(rand()*100)]
            }
            if {$x>0 && $x<1600} {
                set f 0
            }
        }
        # Pick a new Y position within the 1100m field
        set f 1
        while {$f} {
            if {$a==0} {
                set y [expr int(rand()*100)+[$node_($i) set Y_]]
            } else {
                set y [expr [$node_($i) set Y_]-int(rand()*100)]
            }
            if {$y>0 && $y<1100} {
                set f 0
            }
        }
        # For NodeSpeed
        set z [expr int(rand()*25)]
        $ns_ at $tm "$node_($i) setdest $x $y $z"
    }
}

# Schedule mobility updates every 5 seconds
for {set i 1} {$i<100} {set i [expr $i+5]} {
    $ns_ at $i "mobility $i"
}

# For communication (agent creation)
for {set i 0} {$i<$val(nn)} {incr i} {
    set sink($i) [new Agent/LossMonitor]
    $ns_ attach-agent $node_($i) $sink($i)
}

proc attach-CBR-traffic {node sink size itval} {
    # Get an instance of the simulator
    set ns_ [Simulator instance]
    set udp [new Agent/UDP]
    $ns_ attach-agent $node $udp
    # Create a CBR application and attach it to the UDP agent
    set cbr [new Application/Traffic/CBR]
    $cbr attach-agent $udp
    $cbr set packetSize_ $size ;# packet size in bytes
    $cbr set interval_ $itval  ;# inter-packet interval in seconds
    # Connect the CBR source to the sink
    $ns_ connect $udp $sink
    return $cbr
}

# For sample communication
set cbr001 [attach-CBR-traffic $node_(0) $sink(1) 256 0.082]
$ns_ at 1.0 "$cbr001 start"
$ns_ at 1.001 "$cbr001 stop"

#~~~~~~~~~~~~~~~~ Calculation of neighbor nodes ~~~~~~~~~~~~~~~~~~~
proc distance { n1 n2 nd1 nd2} {
    global sink
    set nbr [open Neighbor a]
    set x1 [expr int([$n1 set X_])]
    set y1 [expr int([$n1 set Y_])]
    set x2 [expr int([$n2 set X_])]
    set y2 [expr int([$n2 set Y_])]
    # Euclidean distance between the two nodes
    set d [expr int(sqrt(pow(($x2-$x1),2)+pow(($y2-$y1),2)))]
    # Nodes within the 250m transmission range are neighbors
    if {$d<=250 && $nd2!=$nd1} {
        puts $nbr "\t$nd1\t\t$nd2\t\t$x1\t$y1\t\t$d"
    }
    close $nbr
}

proc Routing {stnd ednd tm itval itvl src dst clr} {
    global node_ sink ns_ val
    $ns_ at $tm "$node_($src) label Source"
    $ns_ at $tm "$node_($dst) label Destination"
    $ns_ at $tm "$node_($src) color black"
    $ns_ at $tm "$node_($dst) color black"
    $ns_ at $tm "$node_($src) add-mark C4 red hexagon"
    $ns_ at $tm "$node_($dst) add-mark C4 red hexagon"
    set btp [open btemp w]
    puts $btp "$stnd $ednd $tm $itval $src $dst"
    close $btp
    # Generate route requests from the neighbor table
    exec awk -f rreq.awk btemp Neighbor
    source bcast.tcl
    set tmp [open Time r]
    set tm [gets $tmp]
    close $tmp
}

#-----------------------Graph-----------------------------#
set pr1 [open PDR.xg w]
puts $pr1 "Markers: true"
puts $pr1 "LabelFont: Monospace"
puts $pr1 "TitleFont: Monospace"
#puts $pr1 "0 0"

set tp1 [open Throughput.xg w]
puts $tp1 "Markers: true"
puts $tp1 "LabelFont: Monospace"
puts $tp1 "TitleFont: Monospace"
puts $tp1 "0 0"

set dp1 [open Drop.xg w]
puts $dp1 "Markers: true"
puts $dp1 "LabelFont: Monospace"
puts $dp1 "TitleFont: Monospace"
puts $dp1 "0 0"



proc record {sink} {
    global ns_ dp1 tp1 pr1
    set tm [$ns_ now]
    set rec1 [$sink set npkts_]
    set lst1 [$sink set nlost_]
    set byt1 [$sink set bytes_]
    if {$rec1!=0} {
        # Packet delivery ratio
        set pdr1 [expr ($rec1+0.0)/($rec1+$lst1)]
        puts $pr1 "$tm $pdr1"
    }
    # Throughput in kbps per 5-second sampling interval
    set tput1 [expr ($byt1*8.0)/(5.0*1000)]
    puts $tp1 "$tm $tput1"
    puts $dp1 "$tm $lst1"
    # Re-sample every 5 seconds
    $ns_ at [expr $tm+5.0] "record $sink"
    $sink set nlost_ 0
}

# Start recording on the sample flow's sink
$ns_ at 3.0 "record $sink(1)"

# Finish Procedure
proc finish {} {
    global ns_ namTracefile tp1 dp1 pr1
    close $tp1
    close $dp1
    close $pr1
    $ns_ flush-trace
    close $namTracefile
    # Post-process the trace for routing overhead
    exec awk -f ovrhead.awk pro.tr
    exec xgraph -m -P Energy &
    exec xgraph -m -P delay &
    exec xgraph -m -P pdr &
    exec xgraph -m -P thr &
    exec xgraph -m -P drop &
    exec nam pro.nam &
    exit 0
}

# Calling Finish Procedure
$ns_ at 30.0 "finish"

puts "Start of simulation..."

# Running the simulation
$ns_ run
