for scientific exploration, commercial exploitation and coastline protection. The ideal
vehicle for this type of extensive monitoring is a mobile underwater sensor network
(M-UWSN), consisting of a large number of low cost underwater sensors that can move
with water currents and dispersion. M-UWSNs are significantly different from terres-
trial sensor networks: (1) Radio channels do not work well under water. They must
be replaced by acoustic channels, which feature long propagation delays, low commu-
nication bandwidth and high channel error rates; (2) While most ground sensors are
static, underwater sensor nodes may move with water currents (and other underwater
activities), which introduces passive sensor mobility. Due to the very different environmental
properties and the unique characteristics of acoustic channels, the protocols developed
for terrestrial sensor networks are not applicable to M-UWSNs; new research is needed in three
fundamental areas of M-UWSN design: medium access control, multi-hop routing and reliable data transfer.
(1) Medium access control (MAC): the long propagation delays and narrow communi-
cation bandwidth of acoustic channels pose the major challenges to the energy-efficient
MAC design in M-UWSNs. For the first time, we formally investigate the random
access and RTS/CTS techniques in networks with long propagation delays and low
communication bandwidth. Based on this study, we propose a novel
reservation-based MAC approach, called R-MAC, for dense underwater sensor networks
with unevenly distributed (spatially and temporally) traffic. Simulation results show
that R-MAC is not only energy efficient but also fair. (2) Multi-hop rout-
ing: In M-UWSNs, energy efficiency and mobility handling are the two major concerns
for multi-hop routing, which have been rarely investigated simultaneously in the same
network context. We design the first routing protocol, called Vector-Based Forwarding
(VBF), for M-UWSNs. VBF is shown to be energy efficient, and at the same time can
handle node mobility effectively. To improve robustness in sparse networks, we further
design a hop-by-hop version of VBF, called HH-VBF; our
simulation results show that HH-VBF is more robust in sparse networks. Further,
to solve the challenging routing void problem in M-UWSNs, we design a smart void
avoidance algorithm, called Vector-Based Void Avoidance (VBVA). VBVA can effec-
tively bypass different types of voids such as convex voids, concave voids and mobile
voids. VBVA is the first algorithm to address 3-dimensional voids and mobile voids in
mobile sensor networks. Our simulation results show that VBVA can achieve almost
the same success delivery ratio as flooding, while it saves much more energy. (3) Reli-
able data transfer: In M-UWSNs, the long propagation delays, the low communication
bandwidth and the high channel error rates all pose grand challenges to reliable
data transfer. We first tackle this problem by proposing a novel hop-by-hop erasure
coding approach. Our results indicate a significant performance improvement over
the most advanced approaches explored in underwater acoustic networks. We also de-
velop a mathematical model for the erasure coding scheme, providing useful guidelines
to handle node mobility in the network. The second approach we explore is a network
coding scheme, in which we carefully couple network coding with multi-path routing for
efficient error recovery. We evaluate the performance of this scheme using simulations.
The results show that our network coding scheme is efficient in error recovery.
In this dissertation, we present these three strands of research work for M-UWSNs in
detail, and at the end we point out some potential directions worth future investigation.
Underwater Acoustic Sensor Networks: Medium Access Control, Routing and Reliable Data Transfer
Peng Xie
A Dissertation
Submitted in Partial Fulfillment of the
Requirements for the Degree of
Doctor of Philosophy
at the
University of Connecticut
2007
Copyright by
Peng Xie
2007
APPROVAL PAGE
Presented by
Peng Xie
Major Advisor
Jun-Hong Cui
Associate Advisor
Reda Ammar
Associate Advisor
Sanguthevar Rajasekaran
Associate Advisor
Bing Wang
University of Connecticut
2008
ACKNOWLEDGEMENTS
First, I would like to thank my major advisor Dr. Jun-Hong Cui for her guidance.
I would also like to thank Dr. Shengli Zhou from the Department of Electrical and Computer Engineering.
Many thanks to my office mates in Room 221 ITEB. Their kindness and enthusiasm are deeply appreciated.
Last but not least, I would like to thank my family. Their love and support are
truly important to me. For them, I know thanks will never suffice.
TABLE OF CONTENTS
Chapter 1: An Overview 1
Chapter 2: Background 15
3.4.2 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.1 Challenges and Contributions . . . . . . . . . . . . . . . . . . . . . . . . 130
A.2.2 Components of Aqua-Sim . . . . . . . . . . . . . . . . . . . . . . 188
Bibliography 191
Chapter 1
An Overview
Sensor networks have been envisioned as powerful solutions for applications such
as monitoring, surveillance, measurement, control and health care [5, 17, 33, 34, 41, 53,
56, 72, 76, 96]. Recently, application of sensor networks in aquatic environments (i.e.,
building underwater sensor networks (UWSNs)) has received growing interest [3, 16,
30, 77, 104]. Pioneering example projects are SNUSE (Sensor Networks for Undersea
Seismic Experimentation) for undersea seismic monitoring at USC/ISI [88] and NIMS
(Networked Infomechanical Systems) at UCLA. In these systems, most sensors are
either anchored at the sea floor or mounted on surface buoys. This type of static
network architecture satisfies the requirements of many applications; other applications,
however, demand mobile underwater sensors that can move with water current and dispersion.
M-UWSN is demanded by a wide range of applications, such as estuary (river, lake,
or other water resource) monitoring, submarine detection, and harbor protection [16].
Depending on the applications, M-UWSN mobility can bring two major benefits:
Mobile sensors injected into the current in relatively large numbers can help to track
changes in the water mass, thus providing 4D (space and time) environmental
sampling. This capability is central to estuary monitoring [110]; the alternative is to
tow the sensors on boats or on wires (like in NIMS) and carry out a large number of
repeated experiments. The latter approach would take more time and money. In
addition, the multitude of sensors provides much denser spatial coverage.
Floating sensors can form dynamic monitoring coverage and increase system
reusability. In fact, through a buoyancy engine one can dynamically control the
depth of a sensor and force it to surface for recycling when its battery is low or its
task is finished. In traditional underwater systems, by contrast, sensors are usually
fixed to the sea floor or attached to pilings or surface buoys. In these cases, the
sensor replacement and recovery cost is very high and system reusability is low.
An M-UWSN equips each sensor with the capabilities of dynamically and locally
monitoring, detecting, and reporting events (data sampling or intruder detection).
In the rest of this
section, we will first present some examples to highlight the use of and the system re-
quirements of M-UWSNs. Then we will discuss the unique features and the new design
issues of M-UWSNs.
[Figure 1.1: Sensor buoyancy control in an estuary, with a fresh water current near the surface and a salty water current at depth.]
Estuaries are regions of complex mixing dynamics that require observation of
processes in an advecting system [36, 44, 68]. Estuaries also represent the sites of many
of the world's great cities and naval stations, and thus potential targets. Estuaries,
in a simplistic sense, export fresh water at the surface and import salt water at depth,
thus creating a complex circulation system that transports nutrients, sediments and
toxic metals in a process that is poorly observable from ships or fixed stations (see
Figure 1.1). It is difficult to quantify these processes and accurately account for the
various inputs and outputs
via fixed and shipboard measurement approaches [39]. The use of mobile, acoustically-
linked sensors deployed as a network (i.e., forming an M-UWSN) will allow estuaries
to be examined in detail and will provide a major improvement in our ability to assess
estuaries. M-UWSNs will thus be invaluable for quantifying the intrinsic variation of
properties along the estuarine gradient, as well as providing critical inputs for data as-
similative models [110]. The challenge for estuarine M-UWSN deployments will be to
achieve both vertical resolution (favoring continuous vertical profiling) and horizontal
water mass tracking (favoring remaining at a particular depth). In order to meet the
observational needs and to observe processes in situ, sensors must be maintained in the
area, which is subject to both upstream and downstream transport due to the estuarine
flows. This can be achieved by sensor buoyancy control as illustrated in Figure 1.1.
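The depth-keeping idea above can be sketched as a simple feedback loop. The following toy controller is our own illustration (the function name, gain and step limit are hypothetical, not taken from any real buoyancy engine): the node compares its measured depth to a target depth and nudges its buoyancy accordingly.

```python
# Toy depth-keeping loop; all names and constants are illustrative.

def buoyancy_adjust(depth_m, target_m, gain=0.5, max_step=1.0):
    """Return a depth correction in meters (positive = move up)."""
    error = depth_m - target_m            # positive when too deep
    step = gain * error
    return max(-max_step, min(max_step, step))

# A node drifting at 60 m settles onto the 50 m target depth.
depth = 60.0
for _ in range(20):
    depth -= buoyancy_adjust(depth, target_m=50.0)
print(round(depth, 2))   # -> 50.0
```

In practice the controller would also surface the node for recovery when the battery runs low, as the text describes.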
The main requirements of this type of M-UWSN system are: (1) energy
efficiency: this type of network is designed for relatively long-term
monitoring tasks, lasting several days or months. The longer the deployment, the more
important the need for command and control communication to, e.g., adjust buoyancy
and/or surface for recovery. Energy saving is thus a central issue in the
protocol design, such as medium access control and data forwarding. (2) sensor
localization: it is a challenging task to locate mobile sensors, since usually only
location-aware data is useful in aquatic
monitoring.
Modern submarines employ low probability of detection (LPD) technology. To reduce
a submarine's acoustic signature, the hull of the
submarine is covered with rubber anti-sonar protection tiles to reduce acoustic detec-
tion. The LPD technology hides the submarine's acoustic signature to thwart active
SONAR probing and also reduces the intra-submarine noise to foil passive SONAR
listening. A legible acoustic signature may thus only be collected within a very short
distance, which makes a dense network of underwater sensors
an ideal solution for submarine detection. A number of underwater robots and a large
number of underwater sensor nodes can be air-dropped to the venue. In real time, each
sensor node monitors local underwater activities and reports sensed data via multi-hop
acoustic routes to a distant command center. Figure 1.2 illustrates a scenario of sub-
marine detection. The new requirements of this type of M-UWSN systems (compared
with previous M-UWSN systems for estuary monitoring) are: (1) reliable transfer:
To avoid false alarms, reliable data report delivery is required from the data source to
the data sink. (2) resilience: For surveillance applications, the network resilience is
clearly demanded.
Compared with terrestrial sensor networks, M-UWSNs have the following unique
features: First, radio does not work well in underwater environments. Instead, sound
waves are usually used as the communication carrier. Unlike RF (Radio Frequency)
channels, underwater acoustic channels feature long propagation delays, low commu-
nication bandwidth and high channel error rates. These characteristics create grand
challenges in network protocols and algorithms, such as reliable data transfer and
medium access control; Second, most sensor nodes in terrestrial sensor networks are
typically static. In contrast, the majority of underwater sensor nodes (except some fixed
nodes mounted on surface-level buoys) are mobile due to water currents, which introduces
passive sensor mobility. Such sensor node mobility poses big challenges to network
protocol and algorithm design, such as medium access control, multi-hop data routing,
and reliable data transfer. Most existing underwater networking work targets underwater acoustic networks
(UANs) [77, 89, 104], which are small-scale networks relying on remote telemetry or
point-to-point links. Extending such designs to wide-area monitoring is
unreasonable because of the density of nodes that would be necessary [37]. In addition,
underwater sensor nodes are redistributed by advection and dispersion, and thus the
network topology changes continually.
The protocols used in UANs (usually inspired by terrestrial wireless ad hoc networks)
cannot cope with such high node mobility
rates and high random dispersion rates. Thus an M-UWSN must be a scalable sensor
network, which relies on localized sensing and coordinated networking among large
numbers of sensors.
No existing network technology meets the needs of M-UWSNs. The design elements necessary for M-UWSNs are:
Self-localization and time-synchronization: When the majority of the sensor
nodes are mobile, an M-UWSN requires (a) precision in the relative locations of the
mobile sensor nodes and (b) accuracy in the absolute location of the network.
Handling mobility, and at the same time improving precision and accuracy in
localization, is an open problem. Moreover, synchronizing the time clocks among many
coordinating mobile sensor nodes over long-delay acoustic links is difficult.

Routing and reliable data transfer: Delivering data across a mobile underwater network
(without loss or error) is challenging. (a) Data flooding consumes too much
energy; but pre-configured routes are not workable because the network is too
dynamic. (b) The long propagation delay of
sound in water and mobility/loss at the same time make the TCP-like end-to-end
approach perform poorly, and an efficient alternative is
not known.

Security and resilience: Authentication is essential for military ap-
plications because attackers can easily inject fake traffic to disturb normal com-
munication among sensors. This kind of attack is even more severe in M-UWSNs
due to the long propagation delay of acoustic signals [49]. Moreover, intermittent
network partitioning (that is, some nodes are disconnected from the other nodes)
within the M-UWSN may occur due to water turbulence, currents, obstacles (e.g.,
ships), etc., and there may be situations where no connected path exists at any
given time between the source and the destination. The DTN technique [20] shows
promise here, but how to apply it in M-UWSNs remains to be investigated.

Energy management: The energy budget at each node is critical. A good design
should account for the impact of communications on the overall system design.

In this dissertation work, we will focus on three fundamental networking problems
in M-UWSN design, covering medium access control, multi-hop routing, and reliable
data transfer. However, the dissertation work will leverage other research efforts
from the UConn Underwater Sensor Network (UWSN) lab, which collectively addresses
a broader range of issues. The ultimate goal of the UWSN lab is to make M-UWSNs
practical and useful for scientific research as well as national security and defense.
M-UWSNs face one of the hardest physical environments encountered in data com-
munications: long propagation delays, low communication bandwidth, high channel
error rates, and sensor node mobility. This makes it very
difficult to achieve high network performance (e.g., channel utilization and throughput)
while requiring low energy consumption at the same time. On the other hand, since
the transmitting and receiving power is much larger in comparison to the computation
power (in the WHOI Micro-Modem [24], the transmit power is 10 watts, far above the
receive power), it pays to trade computation for communication, i.e., to attack
networking problems with the most sophisticated coding and protocol techniques. In
fact, this philosophy provides us very useful guidelines to tackle the following funda-
mental problems.
(1) Medium access control (MAC): The long propagation delays and nar-
row communication bandwidth of acoustic channels pose the major challenges for the
energy-efficient MAC design in M-UWSNs. For the first time, we formally investigate
the random access and RTS/CTS techniques in networks
with long propagation delays and low communication bandwidth (as in M-UWSNs).
Our analysis shows that the RTS/CTS-based technique outperforms the random ac-
cess method in networks with dense deployment, short transmission range, large packet
size and high data traffic. Based on this study, we propose a novel reservation-based
MAC protocol, called R-MAC. R-MAC can avoid data packet collisions efficiently and
is well suited to dense underwater sensor networks. In R-MAC, each node powers on and off periodically and loosely
synchronizes with its neighbors. When a node (sender) wants to send data to another
node (receiver), the sender first sends a reservation request to the receiver. The re-
ceiver needs to reserve the channel and notify the sender of the reservation. Afterwards
the sender transmits the data in the reserved time slot. In order to guarantee that the
reservation request, data and reservation notification are delivered to the intended node
in its working time, both the sender and receiver in R-MAC have to schedule the trans-
missions of all the control packets and data packets. We conduct extensive simulations
to evaluate the performance of R-MAC. Our results show that R-MAC resolves data
packet collisions more efficiently and effectively than widely accepted energy-efficient MAC protocols.
The problem of medium access control for M-UWSNs is extremely challenging. Our
work in this dissertation is just the beginning of a long journey. R-MAC is proposed
for static underwater sensor networks. However, through this work, we identify the
difficulties to design an energy-efficient MAC for M-UWSNs and set a solid foundation
for future work in this direction. We expect that some algorithms and ideas in R-MAC
can be extended to mobile underwater networks.
(2) Multi-hop routing: In M-UWSNs, energy efficiency and mobility handling
are the two major concerns for multi-hop routing, which have been rarely investigated
simultaneously in the same network context. We design the first routing protocol
for M-UWSNs, called Vector-Based Forwarding (VBF). VBF is essentially a geographic
routing protocol [65]. It employs a novel concept of routing vector, which is defined
as the vector connecting the source to the sink. In VBF, the information of the routing
vector is carried in each data packet. All nodes that are close to the vector are qualified
to forward data packets. In order to improve the robustness of VBF in sparse networks,
we further propose a hop-by-hop version of VBF (HH-VBF). In HH-VBF, the routing vector is no longer global. Instead, each forwarding
node has its own routing vector, represented by the vector from the current node
to the sink. To help each node choose good forwarding
paths, we design self-adaptation algorithms in VBF and HH-VBF to enable each node to
weigh the benefit to forward a packet and make the forwarding decision accordingly.
We evaluate the performance of VBF and HH-VBF by analysis and simulations. The
results show that VBF is energy efficient, and at the same time can handle node mobility
effectively, while HH-VBF significantly improves network robustness in sparse networks.
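The VBF forwarding rule described above can be made concrete with a little geometry. The sketch below is our own illustration (the coordinates and the pipe radius W are invented): a node elects itself as a forwarder only if its perpendicular distance to the source-to-sink routing vector is within W.

```python
# Illustrative VBF-style forwarding check: is a node inside the
# "routing pipe" of radius W around the source->sink vector?
import math

def dist_to_vector(node, source, sink):
    """Perpendicular distance from `node` to the source->sink line (3-D)."""
    sx, sy, sz = (b - a for a, b in zip(source, sink))   # routing vector
    nx, ny, nz = (b - a for a, b in zip(source, node))   # source->node
    # |cross product| / |routing vector| = perpendicular distance
    cx = sy * nz - sz * ny
    cy = sz * nx - sx * nz
    cz = sx * ny - sy * nx
    return math.sqrt(cx*cx + cy*cy + cz*cz) / math.sqrt(sx*sx + sy*sy + sz*sz)

def is_forwarder(node, source, sink, w):
    return dist_to_vector(node, source, sink) <= w

source, sink = (0, 0, 0), (100, 0, 0)
print(is_forwarder((50, 5, 0), source, sink, w=10))   # -> True  (inside the pipe)
print(is_forwarder((50, 30, 0), source, sink, w=10))  # -> False (outside the pipe)
```

In HH-VBF the same test would be applied per hop, with `source` replaced by the current forwarder.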
Since both VBF and HH-VBF are geographic routing protocols, they may suffer
from the routing void problem caused by their greedy policy to select next hop. In M-
UWSNs, the routing void problem is even more challenging since the voids in M-UWSNs
are usually 3-dimensional, volatile and mobile. To address this tough issue, we propose
a Vector-Based Void Avoidance (VBVA) algorithm, which is the first void avoidance al-
gorithm addressing 3-dimensional, volatile and mobile voids in mobile sensor networks.
In VBVA, there are two key mechanisms: vector-shift and back-pressure. Once de-
tecting a void, VBVA first attempts to avoid the void by the vector-shift method, i.e.,
changes the forwarding vector of the packet. When the direction defined by the for-
warding vector leads to a dead end, VBVA withdraws the forwarding packet from the
wrong direction by the back-pressure method, i.e., routes the packet backward along the
forwarding vector. Simulation results show VBVA can effectively bypass different types
of voids such as convex voids, concave voids and mobile voids. The results also show
that VBVA can achieve almost the same successful delivery ratio as flooding, but with much
lower energy consumption.
(3) Reliable data transfer: The long propagation delays, the low communication
bandwidth, the high channel error rates and the dynamic network topology in M-
UWSNs bring significant challenges to reliable data transfer. We first tackle this
problem with a protocol for segmented data reliable transfer (SDRT), which is essentially a hybrid approach of ARQ and
FEC. SDRT groups the original data packets into blocks and encodes each block by
efficient erasure codes. Then encoded packets are transferred block by block and hop by
hop. Compared with traditional reliable data transport protocols explored in terrestrial
sensor networks, SDRT can reduce the total number of transmitted packets and improve
channel utilization. We also develop a
mathematical model to estimate the expected number of packets actually needed. Based
on this model, we can reduce the energy
consumption caused by the large propagation delays. Moreover, this model provides
useful guidelines to handle node mobility in the network. The second approach we
explore is a network coding scheme, in which
packets are first encoded by the source using randomized linear codes. Upon receiving
the packets, all intermediate nodes linearly combine the incoming packets indepen-
dently and forward these newly combined packets. If the sink (i.e., the destination)
receives a sufficient number of linearly independent encoded packets, it can recover the
original data packets. We evaluate this network coding approach by analysis and sim-
ulations. Our results show that the proposed scheme significantly enhances network
performance. Further, when we started this research there was no easy-to-
use simulation tool for underwater sensor networks. Thus, in this dissertation work, we
developed a simulator for UWSNs, called Aqua-Sim, in which we
implemented several MAC protocols, routing protocols and reliable transfer protocols.
Since Aqua-Sim is built on top of the NS-2 network
simulator, it can easily be integrated with existing work of NS-2, directly benefiting from it.
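The randomized linear coding idea sketched above can be illustrated with a toy example over GF(2). This is our own minimal illustration, not the dissertation's actual scheme: the source emits random XOR combinations of k packets, a relay blindly recombines what it overhears, and the sink decodes by Gaussian elimination once it holds k linearly independent combinations.

```python
# Toy random linear network coding over GF(2); payloads are small ints,
# coefficient vectors are bitmasks. Purely illustrative.
import random

def encode(originals, k):
    """Source side: one random nonzero GF(2) combination of the originals."""
    vec = random.randrange(1, 1 << k)          # coefficient bitmask
    pay = 0
    for i in range(k):
        if (vec >> i) & 1:
            pay ^= originals[i]                # XOR = addition over GF(2)
    return vec, pay

def recombine(p, q):
    """Relay side: XOR two overheard coded packets into a new one."""
    return p[0] ^ q[0], p[1] ^ q[1]

def decode(received, k):
    """Sink side: Gaussian elimination over GF(2); None until rank == k."""
    basis = {}                                  # pivot bit -> (vec, pay)
    for vec, pay in received:
        while vec:
            top = vec.bit_length() - 1
            if top not in basis:
                basis[top] = (vec, pay)
                break
            bv, bp = basis[top]
            vec ^= bv
            pay ^= bp
    if len(basis) < k:
        return None
    for bit in range(k):                        # back-substitute to unit rows
        vec, pay = basis[bit]
        low = vec & ((1 << bit) - 1)
        while low:
            t = low.bit_length() - 1
            bv, bp = basis[t]
            vec ^= bv
            pay ^= bp
            low = vec & ((1 << bit) - 1)
        basis[bit] = (vec, pay)
    return [basis[b][1] for b in range(k)]

random.seed(7)
k, originals = 4, [3, 5, 9, 14]
relay_pkt = recombine(encode(originals, k), encode(originals, k))
received = [relay_pkt]
while (data := decode(received, k)) is None:    # sink waits for rank k
    received.append(encode(originals, k))
print(data == originals)                        # -> True
```

The relay never needs to know the originals, which is the property that makes such schemes attractive with multi-path routing: independent paths deliver independent combinations.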
In summary, this dissertation work has the following contributions: (1) A set of
protocols and algorithms for three fundamental networking problems (MAC, routing
and reliable transfer) in M-UWSNs; (2) A simulator for UWSNs, providing a useful
tool for the research community. The rest of this dissertation is organized as follows. In Chapter 2,
we first review the characteristics of underwater acoustic channels. We then discuss the
distinctions between M-UWSNs and terrestrial sensor networks. After that, we present
several state-of-the-art underwater network systems and point out their limitations.
In Chapter 3, we present our work on medium access control. We first review the
challenges for medium access control in underwater sensor networks and summarize our
contributions in Section 3.1, then we describe the related work on medium access control
in Section 3.2. After that, we present our study on random access and RTS/CTS-based
methods in underwater environments in Section 3.3 and describe our protocol, R-MAC,
in Section 3.4. Finally, we summarize our work on medium access control in Section 3.5.
In Chapter 4, we present our work on multi-hop routing. We first discuss the design
challenges in Section 4.1. Then we review the related work in Section 4.2. After that,
we present our work, VBF and HH-VBF, in Section 4.3 and VBVA in Section 4.4. In Chapter 5, we
first discuss the design challenges on reliable data transfer for M-UWSNs in Section 5.1.
We then review the related work in Section 5.2. Following that, we present our protocol,
SDRT in Section 5.3 and our network coding scheme in Section 5.4. Finally, we summarize our work on reliable data transfer.
We conclude this dissertation and lay out future research directions in Chapter 6.
Chapter 2
Background
The most distinctive feature of underwater communication is that sound waves are usually used as the communication carrier. Other possible
physical waves for wireless communication in underwater environments are radio and
optical. However, neither of these waves are appropriate for M-UWSNs. Firstly, radio
suffers from high attenuation in salty water. As surveyed in [4], RF (Radio Frequency)
signals can propagate at a long distance through conductive water only at extra low fre-
quencies, 30-300 Hz, which require large antennae and high transmission power. Thus,
the high attenuation of radio in water makes it infeasible for M-UWSNs. Secondly,
optical signals usually suffer from short-range and line-of-sight problems. As reported,
optical signals travel only short distances even in clear water, and as little as 1.4 meters
in highly turbid water. Moreover, optical signals
are severely impaired by scattering when used underwater, and the transmission
of optical signals requires high precision in communication. In short, it is widely ac-
cepted that acoustic channels are the most appropriate physical communication links
for M-UWSNs. However, underwater acoustic channels have their own unique features.
1. Long propagation delays: The propagation speed of sound in water is about 1500
m/s, five orders of magnitude lower than that of radio (3×10^8 m/s). More-
over, the propagation speed varies with properties of
the water medium such as temperature and salinity. Such low propagation speed
results in very long end-to-end delays.
2. Low communication bandwidth: The available bandwidth of acoustic chan-
nels is very limited due to absorption and most acoustic systems operate below
30 kHz, which is extremely low compared with radio networks. Moreover, no
existing research or commercial system can exceed 40 km·kbps as the maximum at-
tainable range-rate product.
3. High channel error rates: The quality of acoustic channels is affected by many fac-
tors such as signal energy loss, noise, multi-path and Doppler spread. Energy loss
comes from signal attenuation and geometric spreading, where signal attenuation
is caused by the absorption of acoustic energy, which increases with distance and
frequency.
Noise, whether man-made or ambient, clearly disturbs the channel; multi-path
possibly introduces inter-symbol interference (ISI), and the Doppler effect may cause
adjacent symbols to interfere at the receiver. All these factors contribute to the
high error rates of underwater acoustic channels.
In short, underwater acoustic channels feature large propagation de-
lays, limited available bandwidth and high error rates. Furthermore, the available bandwidth
depends on both the communication range and the
frequency of acoustic signals: the bigger the communication range, the lower the
available bandwidth.
M-UWSNs share many common properties with terrestrial sensor networks. For
example, similar to ground sensor nodes, underwater sensor nodes are usually powered
by batteries; thus M-UWSNs are also energy-constrained networks. However, M-UWSNs
are significantly different in the following aspects.
1. Acoustic channels: Sound is the communication carrier
for M-UWSNs. The unique features of underwater acoustic channels (long propa-
gation delays, low communication bandwidth and high error rates) make the pro-
tocols proposed for terrestrial sensor networks unsuitable for M-UWSNs. Specif-
ically, long propagation delays and low available bandwidth directly affect the
design of medium access control and data transfer protocols.
2. Node Mobility: Most sensor nodes in terrestrial sensor networks are typi-
cally static; such networks usually contain a large number of static
sensor nodes and a limited number of mobile nodes (e.g., mobile data collect-
ing entities like mules, which may or may not be sensor nodes). In con-
trast, the majority of underwater sensor nodes, except some fixed nodes equipped
on surface-level buoys, exhibit low or medium mobility due to water currents.
Empirical observations suggest that underwater
objects may move at a speed of 2-3 knots (or 3-6 kilometers per hour) in a
typical underwater condition. Since protocol design for ter-
restrial sensor networks does not consider mobility for the majority of sensor nodes,
it would likely fail when directly cloned for aquatic applications. In M-UWSNs,
the dynamic network topology resulting from node mobility makes medium access
control, multi-hop routing and reliable data transfer even more challenging.
Node mobility thus renders many protocols (e.g., MAC
and routing) designed for terrestrial sensor networks unsuitable for M-UWSNs.
In short, due to the unique features of M-UWSNs, new solutions and ideas are
demanded.
2.3 Current Underwater Network Systems and Their Limitations
Despite the young age of the underwater networking area, it has attracted significant research effort.
Seaweb [80] is a wireless sensor grid interconnected by acoustic links. Data is for-
warded through the network over multi-hop communication paths. The ultimate goal
of Seaweb is wide-area undersea surveillance and telemetry. However, Seaweb is de-
signed for sparse networks with long transmission distances and the scale is usually
very small. For example, in one application project of Seaweb, the front-resolving ob-
servational network with telemetry (FRONT) led by UConn, the network only consists
of 17 nodes in total.
The NEPTUNE project [63], led by the University of Washington, aims to build a wired
ocean observatory network on the sea floor for long-term data collection and
transfer. It hosts 30 nodes with 3300 km of fiber optic cable and substantial aggregate back-
bone bandwidth. The AOSN (Autonomous Ocean Sampling Network) program was launched
to test the idea of using underwater vehicles for data acquisition in oceanographic
studies; it combines underwater autonomous vehicles
(UAVs) with static sensor networks. The deployment of UAVs improves data fidelity
and extends the coverage of sensor networks. There are three projects under the AOSN
program, namely Slocum [102], Seaglider [19] and Spray [86]. All these projects focus
on the underwater vehicles themselves rather than on networking.
More recently, there have been efforts to apply underwater sensor networks to real ap-
plications. SNUSE, led by USC/ISI, is a project targeted at seismic monitoring applications. The final goal
is to support ultra-low duty cycle applications. SNUSE is currently at a very early stage of
development.
To summarize, none of the current underwater network systems addresses the fol-
lowing problems.
1. Large network scale: Most current underwater network systems target small-
scale networks, while many aquatic applications demand large numbers of nodes.
2. Energy efficiency: Underwater
sensor nodes are usually powered by batteries. Thus, improving energy efficiency
is a central design concern in M-UWSNs.
3. Node mobility: In M-UWSNs, the network topology is highly dy-
namic due to node mobility. Thus, the algorithms and protocols designed for
static networks cannot be applied directly.
4. Robustness: Underwater acoustic channels are
error-prone. Moreover, underwater nodes have high failure rates. Thus, robust-
ness must be a first-class design goal.
To make M-UWSNs a reality, all the above problems must be well solved. In
this dissertation, we address these issues from three aspects: medium access control,
multi-hop routing and reliable data transfer.
Chapter 3
Medium Access Control

In M-UWSNs, multiple sensor nodes share the acoustic channel, demanding an energy-effi-
cient medium access control (MAC) protocol to coordinate the communication among
them. However, MAC design for underwater networks has received little attention from the research com-
munity. On the one hand, there is no need for MAC protocols in existing small-scale
acoustic networks (i.e., UANs), since in UANs, sensors are sparsely separated from
each other, and point-to-point communication is sufficient. On the other hand, most
existing MAC protocols in radio-based networks assume that the signal propagation
delay between neighbor nodes is negligible, which is significantly different from the scenario
in underwater networks: the propagation speed of sound is five orders of magnitude lower
than that of radio in air. Moreover, the bandwidth capacities of acoustic channels are
very low compared with those of RF channels. While ALOHA-type protocols used in
satellite networks address the long delay issue to some extent, medium access control
handling both long propagation delay and low bandwidth is fairly unexplored. Further-
more, energy efficiency of MAC protocols in satellite networks usually is not a major
concern. In this dissertation work, we aim to develop an energy-efficient MAC solution
for M-UWSNs, taking long propagation delay, low bandwidth and node mobility into
account.
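A back-of-envelope comparison shows why the propagation delay dominates the design space. Only the two propagation speeds below come from the text; the distance, packet size and bit rates are illustrative:

```python
# Speeds from the text; distance, packet size and rates are illustrative.
# Compare a 200-byte packet over 1 km on a 10 kbps acoustic link vs. a
# 1 Mbps radio link.
SOUND_M_S = 1500.0      # speed of sound in water
RADIO_M_S = 3.0e8       # propagation speed of radio

def delays(dist_m, bits, rate_bps, speed_m_s):
    return dist_m / speed_m_s, bits / rate_bps   # (propagation, transmission)

a_prop, a_tx = delays(1000, 200 * 8, 10e3, SOUND_M_S)
r_prop, r_tx = delays(1000, 200 * 8, 1e6, RADIO_M_S)
print(f"acoustic: prop={a_prop:.3f} s, tx={a_tx:.3f} s")   # prop=0.667 s, tx=0.160 s
print(f"radio:    prop={r_prop:.2e} s, tx={r_tx:.4f} s")   # prop=3.33e-06 s, tx=0.0016 s
```

Under water, the 1 km propagation delay (about 0.667 s) is roughly four times the transmission time itself, so any handshake that waits for a round trip idles the channel for seconds; over radio both terms are negligible.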
MAC protocols can be roughly divided into two main categories: (1) scheduled pro-
tocols that avoid collision among transmission nodes, including time-division multiple
access (TDMA), frequency division multiple access (FDMA) and code division mul-
tiple access (CDMA), etc.; and (2) contention-based protocols where nodes compete
for the shared channel, including
ALOHA, carrier sense multiple access (CSMA), and collision avoidance with handshaking access
(MACA, MACAW), etc. In general, scheduled protocols are not suitable for large-scale
underwater sensor networks: TDMA requires centralized control which is not scalable
in networks with a large number of nodes; FDMA is not suitable due to the narrow
bandwidth of acoustic channels; and for CDMA, it is
difficult to assign pseudo-random codes among large numbers of sensor nodes. More-
over, the near-far problem inherent in CDMA is not well addressed in underwater
environments. The question is then whether contention-based protocols are suitable for large-scale
underwater sensor networks. To answer this question, let us first examine the basic
mechanisms of contention-
based protocols. Due to the long propagation delay of sound, carrier sensing is almost
meaningless in acoustic networks. It has also been argued that contention-based proto-
cols with RTS/CTS handshaking are
not appropriate in underwater communications [3, 77]. The commonly cited reason is
that RTS/CTS involves large end-to-end delays, thus increasing energy consumption.
Based on a similar argument, in [81], Rodoplu et al. proposed a random access based
MAC protocol for underwater sensor networks, focusing on low duty cycle applications
with relatively sparse sensor deployment. However, there are several critical questions
yet to be answered: (1) Is random access an absolute winner? (2) Can RTS/CTS yield
better performance than random access in some network conditions? (3) Is it possible
to design an energy-efficient MAC based on these techniques?
In this chapter, we explore the random access and handshaking (i.e., RTS/CTS)
techniques. We first formally model the two approaches, and then conduct extensive
analysis and simulations. Based
on our results, we observe that the performance of random access and RTS/CTS are
affected by many factors such as data rate, transmission range, network topology,
packet size, and traffic pattern. And our results show that RTS/CTS is more suitable
for dense networks with high data rate, low/medium transmission range and bursty
traffic, whereas random access is preferred in sparse networks with low data rate and
non-bursty traffic.
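A toy cost model (our own illustration, not the dissertation's formal analysis) shows the same tilt: RTS/CTS pays a fixed handshake overhead per packet, while random access pays a collision-dependent retransmission penalty, so large packets and high collision rates favor RTS/CTS.

```python
# Expected channel time per delivered packet; all numbers illustrative.

def rts_cts_time(t_data, t_prop, t_ctrl):
    # RTS + CTS + DATA each cross the channel once; data is collision-free
    return 2 * (t_ctrl + t_prop) + t_data + t_prop

def random_access_time(t_data, t_prop, p_collision):
    # geometric number of attempts, each costing a full data transmission
    return (t_data + t_prop) / (1.0 - p_collision)

t_prop = 0.667           # 1 km of acoustic propagation (s)
t_ctrl = 0.04            # short control packet (s)
for t_data, p in [(0.16, 0.1), (1.6, 0.5)]:   # small/quiet vs. large/busy
    print(round(rts_cts_time(t_data, t_prop, t_ctrl), 3),
          round(random_access_time(t_data, t_prop, p), 3))
```

With these invented numbers, random access wins the small, quiet case (about 0.92 s vs. 2.24 s per packet) and loses the large, busy case (about 4.53 s vs. 3.68 s), mirroring the observation above.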
Based on this study, we further propose a novel reservation-based MAC protocol,
called R-MAC, for large-scale dense underwater sensor networks with high data rate
(required by the monitoring type of applications). Our results suggest that RTS/CTS-
based techniques work better in this type of network scenarios. The major design goals
of R-MAC are energy efficiency and fairness. R-MAC schedules the transmissions
of control packets and data packets to avoid data packet collision completely. The
scheduling algorithms not only save energy but also solve the exposed terminal problem.
Moreover, R-MAC allows the
nodes in the network to select their own schedules, thus loosening the synchronization
requirement among neighbors. Our simulation results
show that R-MAC is an energy efficient and fair MAC solution for underwater sensor
networks.
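The scheduling constraint at the heart of R-MAC, that a transmission must arrive inside the receiver's periodic listen window despite the propagation delay, can be sketched as follows. This is a simplified illustration with invented numbers, not R-MAC's actual algorithm:

```python
# When should a sender transmit so the packet arrives while the
# receiver is awake? Duty-cycle parameters are illustrative.

def next_send_time(now, t_prop, rx_offset, period, listen):
    """Earliest time >= now to transmit so arrival falls in a listen window."""
    arrival = now + t_prop
    # position of the arrival within the receiver's duty cycle
    phase = (arrival - rx_offset) % period
    if phase <= listen:
        return now                      # already lands in a listen window
    return now + (period - phase)       # wait for the next window to open

# receiver wakes at t = 2.0, 12.0, 22.0, ... for 1 s each
send_at = next_send_time(now=5.0, t_prop=0.7, rx_offset=2.0, period=10.0, listen=1.0)
print(round(send_at, 3))   # -> 11.3  (arrives at t = 12.0, as the window opens)
```

R-MAC's real schedules must additionally keep the control packets (reservation request and notification) inside the peers' windows, which is why both ends schedule all transmissions, as described above.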
The rest of this chapter is organized as follows. In Section 3.2, we briefly review
some related work on MAC. In Section 3.3, we present our study on random access and
RTS/CTS-based approaches. We then describe R-MAC in Section 3.4 and summarize the chapter in Section 3.5.
Many widely used MAC protocols, such as IEEE 802.11, for radio networks are
based on the RTS/CTS approach. The RTS/CTS approach was first proposed in
MACA [42]. Then a variant protocol, MACAW, was proposed in [9]. This protocol
adopts backoff and ARQ techniques in addition to the RTS/CTS control message ex-
change. Later, carrier sensing was combined with RTS/CTS in a new protocol, called
FAMA, in [25].
A major problem with these RTS/CTS based protocols is energy efficiency: too
much energy is wasted in idle state since all nodes keep powered on all the time.
With the emergence of sensor networks, energy-efficient MAC has become a hot topic.
PAMAS proposed in [87] makes an improvement to save energy by putting nodes into
sleep state when the nodes are prohibited from sending any packet. PAMAS improves
MACA without sacrificing the throughput and end-to-end delay. However, in order
to turn off/on the nodes intelligently, PAMAS uses a separate signaling channel as a
control channel. S-MAC proposed in [114] instead uses in-channel signaling to control
the nodes to listen and sleep periodically. However, the fixed duty cycle in S-MAC is
undesirable since the traffic in sensor networks
varies with locations and time. T-MAC proposed in [98] improves S-MAC by adopting
an adaptive duty cycle scheme to reduce the energy consumption while maintaining a
reasonable throughput.
[75] proposed a lightweight MAC protocol, called B-MAC, which is the default
MAC for Mica2. B-MAC provides a well-defined interface to allow end users to tailor
the protocol to their application needs.
[79] proposed a hybrid MAC protocol, called Z-MAC, which combines TDMA and
CSMA. In Z-MAC, time is divided into slots and each node is assigned with one slot
as in TDMA. The owner of a slot has the highest priority to use the slot, but other
nodes can access the medium at this slot as long as the owner does not send data.
Our R-MAC design has benefited from the previous techniques in various aspects.
However, due to the unique characteristics of acoustic channels, we cannot directly ap-
ply radio-based ideas into underwater network scenarios. Instead, new MAC solutions
are required for underwater sensor networks. In the literature, there are only a few MAC
protocols proposed for underwater sensor networks, which are briefly reviewed as follows.
A random access based MAC protocol for underwater sensor networks was proposed
in [81]. Due to the fundamental limitation of random access, this protocol only works
well for networks with very low and evenly distributed traffic. Moreover, this
protocol incurs high overhearing and idle-listening waste, since all the neighbors of a
given node have to wake up for possible traffic even when this node sends nothing.
A modified FAMA, called slotted FAMA, was proposed in [60] for underwater
acoustic networks. In slotted FAMA, time is divided into slots, and all packets in-
cluding control packets and data packets are sent at the beginning of a slot. In this
way, the lengths of RTS and CTS are not determined by the propagation delay as in
the original FAMA, thus making the protocol feasible for underwater acoustic networks.
It should be noted that energy efficiency is not the design goal of slotted FAMA.
A hybrid MAC protocol was proposed in [50] for underwater networks. In this
protocol, nodes are well synchronized. Time is divided into frames, and each frame
is further divided into scheduled slots and unscheduled slots. In the scheduled slots,
TDMA is adopted, while in unscheduled slots, random access protocols may be used.
This hybrid MAC protocol attempts to gain the benefits of both scheduled and random
access approaches.
More recently, [93] proposed a new MAC protocol, called T-Lohi, for underwater
sensor networks. In T-Lohi, nodes contend to capture the medium by sending a short
tone. After successfully winning the right to access the medium, a node then sends
data packets. T-Lohi precedes any data packet with a wake-up tone to wake up the
receiver for the subsequent data transmission. This MAC solution is mainly designed
to improve channel throughput. The major hurdle to the wide application of T-Lohi is
that tone-receivers require special hardware. Moreover, the current design of T-Lohi is
only for single-hop networks, i.e., all nodes in the network can overhear each other.
In this section, we present our study on random access and RTS/CTS. The work
is published in [106].
We use a simple network model, which is illustrated in Figure 3.1. In this model,
there is only one receiver, node B, located at the center, and all other nodes such as A,
[Figure 3.1: The network model: receiver B at the center, with senders A, C, D, and E within its transmission range R.]
C, D, and E in its transmission range (denoted by R) compete for the channel to send
data. We assume the data generation in each node is independent and follows a Poisson
process with a rate λ. We use Sp to denote the packet size and Tp to denote the packet
duration. Then we have Tp = Sp/bw, where bw is the bandwidth of each node. Further, we
use n to denote the number of nodes in the network, L to denote the average distance
from a sender to the receiver, and v to denote the sound propagation speed in water.
Throughput is defined as the number of effective bits, i.e., the size of all the data packets
successfully received by the receiver (node B) in one second. In this study, we call
all the data packets that a node can send at one time a message, and we use m
to denote the number of data packets in one message. When a message consists of
multiple data packets, we ignore the interval between two consecutive packets since it
is usually very small compared with the data transmission time and propagation delay.
Communication overhead is defined as the ratio of the total number of bits sent by all
the nodes to the number of effective bits received by the receiver.
3.3.2 Modelling Random Access
In the random access approach, a sender simply starts sending whenever it has data
ready for delivery. When a data packet arrives at the receiver, if the receiver is not
receiving any other packet and, during the time of receiving this data packet, there is
no incoming data packet (i.e., in a time period of 2Tp, there are no other arriving
packets), then the receiver can receive this data packet successfully.
We first evaluate the success probability (denoted by P) of one packet sent from one sender (e.g.,
node A) to the receiver (node B). In fact, P is equivalent to the probability that all the
nodes (including node A) in the transmission range of node B do not send any packet
during the vulnerable period of 2Tp around this packet, where Di is the propagation
delay from node i to node B. Please note that the Di's do not affect this probability,
since the Poisson arrival processes are stationary. This yields

P_random = e^(-2λ(n-1)Tp).  (3.3)
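Equation 3.3 can be checked numerically. A minimal sketch, using the parameter values from the experimental settings later in the section (bandwidth 10 kbps, 40-Byte packets); the node count n = 10 is an illustrative assumption:

```python
import math

def p_random(lam, n, t_p):
    """Success probability of one packet under random access (Eq. 3.3):
    no packet from the competing senders arrives within the 2*Tp
    vulnerable period around this packet (Poisson arrivals, rate lam)."""
    return math.exp(-2.0 * lam * (n - 1) * t_p)

bw = 10_000          # bandwidth, bits/sec (Section 3.3.4 settings)
s_p = 40 * 8         # data packet size, bits (40 Bytes)
t_p = s_p / bw       # packet duration Tp = Sp / bw, seconds
n = 10               # nodes in the receiver's range (assumed value)

for lam in (0.5, 1.0, 5.0):                 # data rate, pkts/sec
    p = p_random(lam, n, t_p)
    thr = lam * (n - 1) * s_p * p           # delivered bits/sec
    print(f"rate={lam}: P={p:.3f}, throughput={thr:.0f} bps")
```

The exponential factor is what makes the random access throughput rise, peak, and then fall as the data rate grows.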
When there is bursty traffic in the network, i.e., m > 1 for each node, the through-
put of random access is different from the case without bursty traffic. We assume that
all the sender nodes have the same pattern of bursty traffic, i.e., m is the same for all
the sender nodes. Then for any sender, the probability that all m data packets are
successfully received by the receiver is

Pm = e^(-2λ(n-1)Tp·m),  (3.4)
and the probability that exactly i (0 ≤ i < m) packets are successfully received by the
receiver is calculated as
In Equation 3.5, the first term represents the probability that the first m i packets
(of m packets) collide with preceding burst traffic, and the second term denotes the
probability that the last m i packets (of m packets) collide with subsequent burst
traffic. After obtaining Pm and Pi, the average number of packets that are successfully
received by the receiver per message is

∑_{i=0}^{m} i·Pi.  (3.6)

Then, the throughput of random access with bursty traffic can be calculated as

t_random(m) = λ(n-1)·Sp·∑_{i=0}^{m} i·Pi,  (3.7)

and the communication overhead of random access with bursty traffic is

o_random(m) = m / (∑_{i=0}^{m} i·Pi).  (3.8)
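Given the per-message success probabilities Pi (from Equations 3.4 and 3.5), the quantities in (3.6)-(3.8) are simple summations. A minimal sketch, with the Pi values supplied as inputs rather than derived:

```python
def burst_metrics(lam, n, s_p, p_i):
    """Random access with bursty traffic.  p_i[i] is the probability that
    exactly i of the m packets in a message are delivered (i = 0..m,
    from Eqs. 3.4-3.5).  Returns the expected number of delivered
    packets (Eq. 3.6), the throughput (Eq. 3.7, bits/sec) and the
    communication overhead (Eq. 3.8)."""
    m = len(p_i) - 1
    expected = sum(i * p for i, p in enumerate(p_i))           # Eq. 3.6
    throughput = lam * (n - 1) * s_p * expected                # Eq. 3.7
    overhead = m / expected if expected > 0 else float("inf")  # Eq. 3.8
    return expected, throughput, overhead
```

For instance, with m = 2 and a hypothetical distribution p_i = [0.1, 0.3, 0.6], the expected number of delivered packets per message is 1.5.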
The basic idea of the RTS/CTS scheme is that a sender has to capture the channel
(by handshaking) before sending any data. In underwater sensor networks, due to
the long propagation delay, the traditional RTS/CTS model for radio-based networks
should be modified. In the following, we use two examples to show why it is infeasible
We first examine the example illustrated in Figure 3.2, where node A and node
B are 60 meters apart and node B and node C are 90 meters apart. Assuming node
A sends an RTS to node B, and then node B replies with a CTS. Considering a sound
propagation speed of 1500 m/s, it takes 40 ms for the CTS to arrive at node A and
60 ms to arrive at node C. After node A receives the
CTS, node A can not send data immediately since node C has not received the CTS
from node B yet and possibly sends an RTS to node B, which will collide with the data
packets sent from node A. Thus, in underwater sensor networks, to make RTS/CTS
work, we devise the following change: when a node receives a CTS, it cannot send data
immediately. Instead, it has to wait for the CTS to propagate through the whole
transmission range of the receiver before sending data.
[Figure 3.2: An RTS/CTS exchange among nodes A, B, and C with unequal propagation delays.]
While the long propagation delay damages the effectiveness of the traditional
RTS/CTS mechanism, this network feature can also be exploited. As shown
in Figure 3.3, while node B is receiving a data packet from node A, node C can schedule
to send an RTS to node B. When this RTS propagates to node B, node B just finishes
receiving data from node A and is then ready for the RTS from node C.
[Figure 3.3: Overlapping an RTS from node C with the data transmission from node A to node B.]
We now evaluate the throughput of the revised RTS/CTS approach. In this ap-
proach, the time for one transmission, T , is calculated as Tcts + Tprop + Ttrans , where
Tcts is the time that the CTS propagates through the transmission range of the receiver
(i.e., Tcts = R/v), Tprop is the time that data propagates from the sender to the receiver
(i.e., Tprop = L/v) and Ttrans is the data transmission time (i.e., Ttrans = Sp/bw). Due
to the effective collision avoidance of RTS/CTS, when the data rate is higher than the
channel capacity, the effective data rate for the receiver reaches the limit, and can not
increase any more. Thus, the effective data rate, λt, for the receiver is capped at 1/T
messages per second. The throughput of the revised RTS/CTS approach is then

t_rts/cts = m·Sp·λt,  (3.9)

and the communication overhead of RTS/CTS is

o_rts/cts = (2·Sc + m·Sp) / (m·Sp),  (3.10)

where Sc denotes the size of the control packets (i.e., RTS/CTS packets). Please note
that in this computation, we ignore the collision of RTS/CTS packets, since when Sc
is very small, such collisions have little impact.
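Equations 3.9 and 3.10 can be sketched directly. The control packet size Sc was not preserved in the settings above, so the 10-Byte value used below is an assumption, as is interpreting λ as the aggregate message rate offered to the receiver:

```python
def rts_cts_metrics(lam, m, s_p, s_c, bw, r, l, v=1500.0):
    """Revised RTS/CTS handshake.  One transaction takes
    T = Tcts + Tprop + Ttrans = R/v + L/v + Sp/bw seconds, so the
    receiver serves at most 1/T messages per second; the effective
    rate lam_t is the offered rate capped at that limit."""
    t = r / v + l / v + s_p / bw                # seconds per transaction
    lam_t = min(lam, 1.0 / t)                   # effective message rate
    throughput = m * s_p * lam_t                # Eq. 3.9, bits/sec
    overhead = (2 * s_c + m * s_p) / (m * s_p)  # Eq. 3.10
    return throughput, overhead
```

With the default settings (bw = 10 kbps, R = 100 m, L = 50 m, Sp = 40 Bytes, v = 1500 m/s), T ≈ 0.132 s, giving a saturation throughput of roughly 2.4 kbps, consistent with the plateau of the RTS/CTS curve in Figure 3.4.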
3.3.4 Numerical Results
In this section, we evaluate the throughput and communication overhead of random
access and RTS/CTS under various network conditions. We first explore their
performance in networks without bursty traffic, varying the data rate, transmission
range, network topology, and packet size. Then we investigate the performance of the
two approaches under bursty traffic.
Unless specified otherwise, in all the numerical experiments studied in this section,
we set the bandwidth of each node, bw, to 10 kbps, and the distance between a sender
and the receiver, L, to 50 meters. In addition, the transmission range of each node, R,
is set to 100 meters, the data packet size, Sp, to 40 Bytes, and the control packet size,
We first check the performance of random access and RTS/CTS with varying data
rates. In this set of experiments, we change the data rate from 0 to 10 pkts/sec,
and fix all other parameters as in the general experimental settings. We measure the
throughput and communication overhead of random access (Random for short in all
the result figures) and RTS/CTS, and the results are plotted in Figure 3.4 and
Figure 3.5, respectively.
From Figure 3.4, we can observe that when the data rate is low, the throughput of
both random access and RTS/CTS increases with the data rate. When the data
rate exceeds some threshold, the throughput of both approaches reaches the maximum
[Figure 3.4: Throughput (bps) of Random and RTS/CTS vs. data rate (pkts/sec).]
[Figure 3.5: Communication overhead of Random and RTS/CTS vs. data rate (pkts/sec).]
value. For the RTS/CTS approach, due to the effective collision avoidance, throughput
keeps stable at the maximum value when the data rate continues to increase. However,
for the random access approach, throughput starts to drop after the data rate exceeds
some limit (and will eventually approach 0 when the data rate is extremely large).
This is because in random access, with no collision avoidance, a very high data rate
means a very high chance of collision, translating into very low throughput.
The effect of data rate on the communication overhead of both approaches is illus-
trated in Figure 3.5. From this figure, we can see that the communication overhead
of random access increases dramatically as the data rate becomes larger, which can be
explained as follows: when the data rate is larger, there is more chance for collision,
thus more packets will be wasted, i.e., more communication overhead is introduced.
On the other hand, the communication overhead of RTS/CTS does not change with
the data rate.
Comparing the performance of random access and RTS/CTS in Figure 3.4 and
Figure 3.5, we can observe that when the data rate is low (less than 0.7 pkts/sec), the
communication overhead of random access is lower than that of RTS/CTS, while the
throughput of random access is almost the same as that of RTS/CTS. This indicates
that random access is preferred in networks with very low data rate.
We next study the effect of transmission range on the performance of random access
and RTS/CTS. We vary the data rate and fix other parameters, and plot the
throughput of the two approaches for various transmission ranges in Figure 3.6.
[Figure 3.6: Throughput vs. data rate for Random and RTS/CTS with transmission ranges R = 50, 100, and 1000 meters.]
From Figure 3.6, we observe that the transmission range affects the throughput
of RTS/CTS significantly. When the transmission range is very large, for example,
1000 meters, the throughput of RTS/CTS is lower than that of random access for a
wide range of data rates. This is because when the transmission range is large, the
time wasted for a CTS to propagate through the whole transmission area is long, thus
degrading throughput. For random access, in contrast, the transmission range has no
effect on throughput. Based on Equation 3.3 and Equation 3.10, we can easily see
that the transmission range has no effect on the communication overhead of either
approach. To summarize, this study suggests that RTS/CTS does not perform well
in networks with large transmission ranges.
We now check the effect of network topology on the throughput and communication
overhead of random access and RTS/CTS. We study two topological metrics: the
number of nodes, n, in the transmission range of the receiver (representing the network
density) and the distance, L, between a sender and the receiver. We first vary
n from 5 to 15, and plot the throughput of random access and RTS/CTS in Figure 3.7.
From this figure, we can observe that random access performs worse in a denser network.
This can be explained as follows: in a denser network, the random access approach
reaches its limit more quickly, thus degrading faster. As for RTS/CTS, the network
density has some effect on throughput when the data rate is low, since in such network
conditions, throughput is determined by the data rate, and more neighbors means more
incoming data, thus higher throughput. However, when the throughput of RTS/CTS
climbs to its limit, due to the effective collision avoidance, the network density has no
further effect.
[Figure 3.7: Throughput vs. data rate for Random and RTS/CTS with n = 5, 10, and 15.]
[Figure 3.8: Communication overhead vs. data rate for Random and RTS/CTS with n = 5, 10, and 15.]
The communication overhead of the two approaches is plotted in Figure 3.8. This figure shows that the network density has no effect on the communication
overhead of RTS/CTS. However, it affects the communication overhead of random
access significantly: the denser the network, the higher the communication overhead.
Now, we investigate the effect of distance between a sender and the receiver (i.e.,
L). From Equations 3.3 and 3.10, we know that L has no effect on the communication
overhead of both random access and RTS/CTS approaches. However, it affects the
throughput. We vary L and plot the results of throughput in Figure 3.9. From this
figure, we can see that when the
distance between a sender and the receiver is shorter, the throughput of RTS/CTS
has a higher maximum value. This, from another aspect, indicates that RTS/CTS
performs better in denser networks. On the other hand, the distance L has no effect
on the throughput of random access.
[Figure 3.9: Throughput vs. data rate for Random and RTS/CTS with sender-receiver distances L = 20, 60, and 80 meters.]
Comparing the performance of random access and RTS/CTS with different topo-
logical parameters in Figure 3.7, Figure 3.8, and Figure 3.9, we can conclude that when
the data rate is high, RTS/CTS works better in a dense network than random access.
Effect of Packet Size
We now investigate the effect of packet size on the throughput of random access
and RTS/CTS. We vary the data rate in three cases, setting the packet size to 40 Bytes,
80 Bytes and 160 Bytes respectively. The results are plotted in Figure 3.10. From this
figure, we can see that the packet size has significant impact on both random access
and RTS/CTS approaches. For RTS/CTS, as the packet size increases, the maximum
throughput increases as well. On the other hand, an increasing packet size damages
the throughput of the random access approach: when the packet size becomes larger,
the throughput of random access reaches its limit earlier, and then decreases more
dramatically as the data rate continues to increase. Another point worth noting is that
when the packet size increases, the maximum throughput of random access does not
change much.
[Figure 3.10: Throughput vs. data rate for Random and RTS/CTS with packet sizes Sp = 40, 80, and 160 Bytes.]
Figure 3.11 shows the effect of packet size on the communication overhead of random
access and RTS/CTS. We observe that a larger packet size reduces the communication
overhead of RTS/CTS, but increases that of random access.
[Figure 3.11: Communication overhead vs. data rate for Random and RTS/CTS with packet sizes Sp = 40, 80, and 160 Bytes.]
In short, from this set of experiments, we can conclude that random access works
better in a small-packet-size network, while RTS/CTS favors networks with a large
packet size.
We now study the performance of random access and RTS/CTS under bursty traffic.
Since sensor networks are widely used in the applications of measurement and
surveillance, data traffic in such networks is on average low over long time periods,
but may be heavy in some very short time periods when sensor nodes are triggered
by some events. We believe bursty traffic would also be typical in underwater sensor
networks. We use m, the number of data packets in one message, to measure the
burstiness of data traffic. We set m to 1, 10, and 20, and plot
the results of throughput and communication overhead in Figure 3.12 and Figure 3.13
respectively.
Figure 3.12 shows that bursty traffic significantly improves the maximum through-
put of random access and RTS/CTS. Moreover, heavier bursty traffic makes random
[Figure 3.12: Throughput vs. data rate (msgs/sec) for Random and RTS/CTS with m = 1, 10, and 20.]
[Figure 3.13: Communication overhead vs. data rate (msgs/sec) for Random and RTS/CTS with m = 1, 10, and 20.]
access achieve its maximum throughput more quickly and then degrade more sharply
as the data rate increases. As for RTS/CTS, with bursty traffic, the system can reach
its maximum throughput when the data rate is very low, and the maximum throughput
is much higher with heavier bursty traffic.
The communication overhead of random access and RTS/CTS under different lev-
els of bursty traffic is plotted in Figure 3.13. From this figure, we observe that as the
data traffic gets more bursty, the communication overhead of random access increases,
while that of RTS/CTS decreases as there are more packets in one data burst (i.e.,
one message). This is due to the
fact that RTS/CTS can avoid packet collision effectively, and more packets sent mean
low overhead per packet for collision avoidance, while random access has no collision
avoidance, and more packets sent mean high probability of collision, thus high commu-
nication overhead.
This study indicates that though bursty traffic improves the throughput of both
random access and RTS/CTS approaches, random access improves its throughput at
the cost of more packet collision, and RTS/CTS improves its throughput more effi-
ciently.
3.3.4.4 Summary
We summarize our observations as follows: (1) Random access has almost the same
performance as or better performance than RTS/CTS under very low traffic and sparse
deployment (which is consistent with the argument in [81]), while when the data rate
increases and the network gets denser, the channel is saturated quickly, resulting in low
throughput for random access; (2) RTS/CTS has no significant advantages at low data
rate and sparse deployment, but provides more room for performance improvement in
dense networks with high data rate; (3) The transmission range significantly affects
RTS/CTS: when the transmission range is large, RTS/CTS has a very low throughput.
On the other
hand, the transmission range has no effect on random access; (4) The packet size has
significant impact on the performance of both random access and RTS/CTS. In general,
random access works better in networks with a small packet size, while RTS/CTS
favors networks with a large packet size; (5) Bursty traffic improves the throughput
of random access, but at the cost of more energy waste on packet collision. As for
RTS/CTS, bursty traffic not only improves throughput but also reduces communication
overhead.
In the following, we first sketch the basic ideas of R-MAC, and then describe the three
phases of R-MAC in detail. After that, we focus on the scheduling algorithms at both
sender and receiver. Finally, we discuss several critical issues in R-MAC design.
In R-MAC, to reduce the energy waste on idle state and overhearing, each node
works in listen and sleep modes periodically. The durations for listen and sleep are the
same for all nodes, and each node randomly selects its own schedule, which means that no
centralized scheduling and synchronization are required in R-MAC. For any node, if
there is no traffic in its neighborhood, it simply listens and sleeps periodically. When a
node (i.e., sender) wants to send data to another node (i.e., receiver), R-MAC employs
a reservation mechanism to schedule the transmission.
R-MAC has three phases, namely, latency detection, period announcement, and
periodic operation. The first two phases are used to synchronize nodes in the neighbor-
hood and the third one is for listen/sleep operations. A node in the latency detection
phase detects the propagation latency to all its neighbors. In the period announcement
phase, each node randomly selects its own listen/sleep schedule and broadcasts this
schedule. The data (if there are any) are transmitted in the periodic operation phase.
3.4.1.2 Phase One: Latency Detection
In this phase, all nodes power on, and each node in R-MAC detects the propagation
latency to all its neighbors. The algorithm used is essentially the sender-receiver syn-
chronization algorithm [26, 58]. Each node randomly selects a time to broadcast a
control packet, called Neighbor Discovery packet, denoted as ND. Upon receiving NDs
from its neighbors, a node records the arrival times of these NDs, then randomly
selects a time to send back an acknowledgment packet, called ACK-ND, which has
the same packet size as ND, for each of the NDs it receives. In each ACK-ND, the
node specifies the duration from the arrival time of the ND packet to the transmission
time of this ACK-ND packet, I2 . After receiving an ACK-ND, a node computes the
interval from the time that the corresponding ND packet is transmitted to the arrival
time of the ACK-ND, I1. Then the propagation latency, L, between the two nodes can
be calculated as L = (I1 - I2)/2. Here, the propagation latency L between two nodes
is the interval from the time the first node sends the first bit of a packet to the time
the second node receives this bit. Figure 3.14 gives an example: node A broadcasts an
ND packet and records the transmission time. Upon receiving this packet, node B
randomly delays some time period IB and
sends an ACK-ND packet back to node A. Node B specifies in the ACK-ND packet the
time interval IB, its ID and the ID of the ND packet. Upon receiving this packet, node A
computes the time interval from the transmission time of its ND packet to the arrival
time of the ACK-ND packet, IA. Then node A calculates the propagation latency to
node B as LAB = (IA - IB)/2.
[Figure 3.14: Latency detection: node A broadcasts an ND, node B replies with an ACK-ND after delay IB, and node A computes LAB = (IA - IB)/2.]
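The latency computation reduces to a one-line formula. A sketch with hypothetical timestamps (all concrete values are illustrative, not from the text):

```python
def propagation_latency(nd_tx_time, acknd_rx_time, i_b):
    """LAB from the ND/ACK-ND exchange: node A measures
    IA = acknd_rx_time - nd_tx_time on its own clock, node B reports
    its local delay IB inside the ACK-ND, and LAB = (IA - IB) / 2."""
    i_a = acknd_rx_time - nd_tx_time
    return (i_a - i_b) / 2.0

# Hypothetical numbers: a 100 m link at 1500 m/s has a one-way
# delay of about 66.7 ms; here IA = 183 ms and IB = 50 ms.
lab = propagation_latency(0.000, 0.183, 0.050)
```

Note that the two nodes' clocks never need to be synchronized: IA and IB are each measured on a single local clock.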
Propagation latency L includes transmission delay and propagation delay. Since the
system and network architecture in sensor nodes are relatively simple, the transmission
delay is mainly determined by the hardware and the size of the packet. Thus its
variance is negligible. The propagation delay mainly depends on the sound speed
in water, which might be affected by many factors such as temperature and pressure.
However, for a short time period, it is reasonable to assume that the sound speed
in water is constant, i.e., propagation delay does not change in a short time period.
Therefore, the propagation latency L is accurate and stable over a short time period. As
time goes on, latency measurements become more and more inaccurate due to clock drift
and varying sound speed. Thus, these measurements should be updated
after a period of time, which can be done through the message exchange in the third phase.
After the latency detection phase, each node records the propagation latencies to
all its neighbors.
3.4.1.3 Phase Two: Period Announcement
In this phase, each node randomly selects its own start time of the listen/sleep
periodic operations (i.e., the third phase) and broadcasts this time (we also call it
schedule). After receiving broadcast packets, each node converts the received schedules
into its own time line.
As shown in Figure 3.15, node A randomly selects its listen/sleep schedule, and
broadcasts a SYN packet. There are two fields in this packet: node A's ID and a time
interval IA, which specifies
the interval from the time to send SYN to the beginning time of its third phase. Upon
receiving a SYN packet from node A, node B calculates the time interval from the
arrival time of this SYN packet to the starting time of its third phase, IB. Then node
B converts the schedule of node A relative to its own schedule by IB - IA + LAB, where
LAB is the propagation latency between node A and node B.
[Figure 3.15: Period announcement: node A broadcasts a SYN carrying IA; node B measures IB and converts node A's schedule by IB - IA + LAB.]
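The schedule conversion is likewise a one-liner. A sketch with hypothetical values:

```python
def neighbor_schedule_offset(i_a, i_b, l_ab):
    """Convert a neighbor's schedule into the local time line
    (period-announcement phase): the neighbor advertises IA in its
    SYN, the local node measures IB on SYN arrival, and the
    conversion is IB - IA + LAB."""
    return i_b - i_a + l_ab

# Hypothetical values: IA = 0.2 s, IB = 0.5 s, LAB = 66.5 ms.
offset = neighbor_schedule_offset(0.2, 0.5, 0.0665)
```

Together with the measured latencies from phase one, this is all a node needs to map any neighbor's listen/sleep windows onto its own clock.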
In the second phase of R-MAC, each node records the schedules of its neighbors
relative to its own schedule. In the real implementation, the first two phases can
run multiple rounds to make sure that all the nodes in the network have complete
latency and schedule information about their neighbors.
3.4.1.4 Phase Three: Periodic Operation
In this phase, nodes wake up and sleep periodically. We call one listen/sleep cycle
one period. All nodes have the same period, denoted as T0, which consists of a listen
window and a sleep window. Data transmission in R-MAC follows the
REV/ACK-REV/DATA/ACK-DATA message exchange, where REV denotes the
reservation packet, ACK-REV is the acknowledgment packet for REV, and ACK-DATA
is the acknowledgment packet for the data packets (DATA). All the control packets in
R-MAC, namely REV, ACK-REV and ACK-DATA,
have the same size, which is much smaller than that of data packets. When a node has
data to send, it first sends a REV to reserve a time slot at the receiver. If the receiver is
ready for data transmission, it will notify all its neighbors about the reserved time slot
by ACK-REVs. Upon receiving ACK-REVs, all the nodes other than the sender keep
silent in their corresponding time slots, and the sender can send data at the reserved
time slot. In R-MAC, data are transmitted in a burst. A node queues its data for
the same receiver until it captures the channel, then injects all the queued data. The
receiver sends back an ACK-DATA to the sender at the end of the burst transmission.
In other words, in order to reduce the control packet overhead and improve the channel
utilization, the receiver acknowledges per burst instead of per packet. We refer to this
scheme as burst acknowledgment.
R-MAC treats ACK-REVs as the highest priority packets and reserves the first
part of the listen window exclusively for ACK-REV packets. We call this reserved part
the R-window. The rationale of this design is the following: ACK-REV is used by the
receiver to notify
its neighbors not to interrupt the subsequent data transmission. If a node misses
an ACK-REV, it may interfere with the subsequent data transmission. As for the case
of missing a REV or ACK-DATA, no data collisions will be caused. In R-MAC, nodes
only have to sense the channel in their R-windows to get the information about the
channel usage in their neighborhood. If a node receives an ACK-REV clearly,
it knows the duration of the subsequent data transmission and keeps silent
during that time period. However, when a node senses the channel busy in its R-
window, but cannot receive an ACK-REV clearly (i.e., there is an ACK-REV collision),
it will back off. When a node is in backoff state, it still needs to sense the channel in
its R-window and updates the usage information of the channel in its neighborhood.
Since R-windows are designed for receiving ACK-REVs, all other types of pack-
ets (including REV, DATA, ACK-DATA) have to avoid R-windows. To achieve this
purpose, in R-MAC, all nodes have to carefully schedule the transmission of control
and data packets. The scheduling algorithms at both the sender and receiver should
guarantee that only ACK-REVs can propagate to any node in its R-window, and all
other control packets such as REVs and ACK-DATAs are scheduled to arrive at the
target in its listen window and data packets are scheduled to arrive at the intended
receiver in its reserved time slot. We discuss the scheduling algorithms next.
When a node has queued data packets and is in idle state, it then schedules to send
a REV to the intended receiver so that the REV arrives in the receiver's listen window
and, at the same time, avoids the R-windows of all its neighbors.
The sender first maps the whole listen window of the intended receiver and the R-
windows of its neighbors into its own time line, then marks all the mapped R-windows
which fall in the mapped listen window. After that, it divides the unmarked part
(i.e., the available part) of the mapped listen window into slots by the duration of one
control packet and randomly selects one slot as the time to transmit the REV. The
REV specifies the required data duration and the offset of the to-be-reserved time slot
to the beginning of its current period. When the sender calculates the duration needed
to transmit the queued data packets, it has to count the time to skip the mapped
R-windows of its neighbors.
[Figure 3.16: Scheduling at the sender: node A (sender) maps the listen window of node C (receiver) and the R-windows MB, MC, MD onto its own time line, picks slot WR for the REV, and specifies offset IO and duration ID for data packets D1, D2, D3.]
Figure 3.16 illustrates the scheduling algorithm at the sender. In this figure, node
A is the sender and node C is the intended receiver. Node A maps the listen window
of node C into its own time line, denoted as TCA, marks the mapped R-windows, and
selects a slot from the unmarked part of TCA, denoted as WR, to send the REV packet.
The REV packet will arrive at node C during its listen window without interfering
with other nodes in their R-windows. In the REV packet, node A specifies the offset of
the reserved time slot to the beginning of its current period, denoted as IO, and the
duration of the reserved time slot, denoted as ID, which includes the transmission time
of all the data packets (D1, D2 and D3 in the figure) and the time to skip the mapped
R-window of node D, MD .
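The sender-side slot selection described above can be sketched as follows. The interval representation, millisecond time units, and concrete numbers are assumptions for illustration; in R-MAC all windows would first be mapped onto the sender's time line using the measured latencies:

```python
import random

def candidate_rev_slots(listen_win, r_windows, t_ctrl):
    """Carve the receiver's mapped listen window into slots of one
    control-packet duration and keep only the slots that do not
    overlap any mapped R-window.  Times are integers (milliseconds)
    to keep the sketch exact."""
    start, end = listen_win
    slots = []
    for t in range(start, end - t_ctrl + 1, t_ctrl):
        if all(t + t_ctrl <= lo or t >= hi for lo, hi in r_windows):
            slots.append(t)
    return slots

# Listen window [0, 100) ms, two mapped R-windows, 10 ms control packets.
free = candidate_rev_slots((0, 100), [(20, 30), (60, 70)], 10)
rev_time = random.choice(free)   # the sender picks one free slot at random
```

The random choice among the surviving slots mirrors the random slot selection in the text, which spreads contending REVs across the listen window.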
The scheduling algorithm at the sender guarantees that no REV arrives in any node's
R-window, and no data packet arrives in any node's R-window, since the sender avoids
all the mapped R-windows when scheduling its transmissions.
Once a reservation is selected, the receiver first schedules the transmission of ACK-
REVs, and based on this schedule, arranges a time slot for the selected reservation.
During the reserved time slot, the receiver powers on and waits for incoming data pack-
ets. After a pre-defined interval, if it does not receive any data packets as scheduled,¹
it simply quits the receiving state and goes back to the periodic listen/sleep operation.
To avoid data transmission interference, the receiver should guarantee that before
the sender receives its ACK-REV, all the receiver's neighbors have already been notified
of the reserved time slot. Therefore, when the sender receives the ACK-REV from the
receiver, it is already granted the channel for the subsequent data transmission.
The receiver sends ACK-REV packets to its neighbors in their mapped R-windows
to guarantee that these ACK-REVs arrive during their R-windows. However, the
mapped R-window to send ACK-REV to the sender is the earliest mapped R-window
the sender, where Mi is the mapped R-window of node i at the receiver and Li is the
¹It is possible that the sender receives another ACK-REV (from another transmission
pair) before the ACK-REV from this receiver, and thus cannot speak. In this case, the
sender has to back off and resend a REV later.
propagation latency from the receiver to node i. Here we introduce an additional one-
way delay to handle the following case: after the receiver sends out an ACK-REV to one
neighbor, it is possible that the neighbor transmits an ACK-REV (for another
transmission pair) to the receiver. In such a case, the receiver checks if the reserved
time slot conflicts with the newly announced one. If it does, the receiver stops
scheduling to send ACK-REVs. Otherwise, the receiver records
the reserved time slot, continues to transmit its ACK-REVs and prepares for incoming
data packets.
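As a rough illustration, the mapped R-window bookkeeping and the ACK-REV ordering described above might be sketched as follows; the function names and the timing values below are hypothetical, not taken from R-MAC's implementation:

```python
def mapped_r_window(rwin_start, latency, t_r):
    """Mapped R-window of a neighbor at the receiver: the interval in the
    receiver's timeline during which a packet must be sent so that it
    arrives inside the neighbor's R-window [rwin_start, rwin_start + t_r],
    given the one-way propagation latency to that neighbor."""
    return (rwin_start - latency, rwin_start + t_r - latency)

def ack_rev_order(sender, mapped):
    """Order ACK-REV transmissions: notify all other neighbors in their
    mapped R-windows first, then the sender last, as the text requires."""
    others = sorted((n for n in mapped if n != sender),
                    key=lambda n: mapped[n][0])
    return others + [sender]
```

With illustrative R-window starts and latencies for the neighbors A, B, and D of Figure 3.17, `ack_rev_order('A', ...)` yields the order B, D, A used in the example below.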
In some cases, the mapped R-windows at the receiver may overlap each other. In an even rarer case, the receiver's own R-window overlaps with the mapped R-window of some neighbor. These special cases can be effectively handled in the second phase of R-MAC: period announcement. In the second phase, if a node finds that either one of these cases occurs, the node re-schedules its periodic listen/sleep. Since the duration of an R-window is very short compared with the duration of one period (for example, in our implementation of R-MAC, TR = 10 ms and T = 1 s), it only takes a few rounds for a node to find a conflict-free schedule.
When the receiver determines the reserved time slot, it has to leave enough time
for the ACK-REV to reach the sender and for the data packets to propagate from the
sender to the receiver. Since the offset of the reserved time slot within a period of the
sender is already specified in the REV packet, the receiver computes the offset of the
reserved time slot to its own period and arranges the reserved time slot according to
the transmission time of its ACK-REVs. The reserved time slot is the earliest time slot after the mapped R-window of the sender on the receiver, where Ts is the propagation latency from the receiver to the sender. Therefore, when the sender receives the ACK-REV, it has enough time to deliver its data packets in the reserved time slot.
In each ACK-REV packet, the receiver specifies the interval from the transmission
time of this ACK-REV to the reserved time slot and the duration of the reserved time
slot.
When the receiver receives all the data packets, it schedules to acknowledge the data
burst. R-MAC treats REV and ACK-DATA in the same way. That is, the receiver uses
the same scheduling algorithm for REVs to schedule an ACK-DATA. It needs to make
sure that the ACK-DATA arrives at the sender in its listen window. In the ACK-DATA
packet, the receiver indicates whether a packet is received or corrupted by a bit vector.
An Example
Now, we use an example to illustrate the scheduling at the receiver side.

[Figure 3.17: Scheduling at the receiver side. Node A is the sender (S) and node C the receiver (R); MA, MB, and MD are the mapped R-windows at node C; SA, SB, and SD mark the corresponding ACK-REV transmission times, followed by the reserved time slot.]

Referring to Figure 3.17, again, node A is the sender and node C is the receiver. MA, MB, and
MD are the mapped R-windows at node C for nodes A, B, and D respectively. Node
C transmits ACK-REVs for node B and D first, then transmits ACK-REV for node
A. SD is the earliest time that MA could be scheduled, and SA is the lower bound for the reserved time slot.
3.4.2 Discussions
3.4.2.1 Fairness
R-MAC supports fairness in two aspects. First, an intended receiver provides equal opportunity to all its neighbors.
However, in some network settings, for a given receiver, some nodes have advan-
tage over others because of their locations and schedules. An example is shown in
Figure 3.18, where node A is the intended receiver and nodes B and C have smaller propagation latencies to node A than node D does. When they compete to make reservations, nodes B and C can always schedule their REVs in the first period, while node D can only schedule its REV to arrive at node A in the second period. Therefore, node B or
C always wins the competition. In order to give equal opportunity to all neighbors,
a node needs to collect reservations for several periods in a row. We call the required time interval the collecting interval.

[Figure 3.18: An example of unequal reservation opportunities. Nodes B, C, and D send REVs to the intended receiver, node A; T is the period length.]

Clearly, the collecting interval is mainly determined by the period length. When the period length of nodes is too small, the collecting interval will most probably cover multiple periods. On the other hand, when the period length is big enough, say, greater than the time for a node to notify its neighbors of its reserved time slot (by scheduling ACK-REVs), which is bounded by TL + RTT, then any node that misses the first period for sending its reservation can definitely catch the second period. Thus, we have the following lemma.

Lemma 3.1 The collecting interval only needs two periods if the period length T0 > TL + RTTMAX, where TL is the listen window, RTTMAX = 2R/c is the maximum round-trip time, c is the sound speed in water, and R is the maximum transmission range of the nodes in the network.
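Under the reading that Lemma 3.1's condition is T0 > TL + RTTMAX with RTTMAX = 2R/c, as the surrounding text suggests, a quick parameter check can be sketched in Python; the nominal sound speed of 1500 m/s is an assumption:

```python
SOUND_SPEED = 1500.0  # m/s, assumed nominal speed of sound in water

def two_period_collection_ok(t0, t_l, max_range):
    """Check the Lemma 3.1 condition T0 > TL + RTT_MAX,
    with RTT_MAX = 2R/c (a sketch)."""
    rtt_max = 2.0 * max_range / SOUND_SPEED  # worst-case round trip
    return t0 > t_l + rtt_max
```

With the simulation settings used later (T0 = 1 s, TL = 100 ms, R = 90 m), RTTMAX is only 0.12 s and the condition holds comfortably.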
Since TL and RTTMAX are usually small compared with T0 (we will discuss how to
set T0 and TL appropriately in Section 3.4.3), it is easy to satisfy the condition. Thus,
in R-MAC, each node collects reservations for two consecutive periods. Once a node
collects all the REVs, it randomly picks one of them and reserves a time slot for it. As
we can see, R-MAC supports fairness at the cost of sacrificing channel utilization. In
our targeted applications, however, channel utilization is not as important as energy efficiency and fairness.
Acknowledging each received packet may result in low channel utilization, high
energy waste and high control packet overhead. Thus, in R-MAC, the receiver sends
one acknowledgment packet (ACK-DATA) per data burst instead of per packet. In the
ACK-DATA packet, the receiver indicates the lost packet by a bit vector. The length
of the bit vector is the same as the maximum number of data packets in the burst. The
value of a bit indicates whether the corresponding data packet is lost or received.
It is possible that an ACK-DATA packet collides with other control packets. Con-
sequently, the sender will retransmit the data packets. In order to avoid such waste,
in R-MAC, the sender specifies the burst identification in the REV packet. Each node
records the burst identifications and the most recent ACK-DATAs for all its neighbors.
When a node receives a REV, it first checks the burst identification. If it is the same
as the one recorded, it means that the sender did not receive the ACK-DATA packet, and the node simply replies with an ACK-DATA. If the burst identification is different from
the recorded one, it means that this is a new reservation. When the sender receives
the ACK-DATA, it updates its burst identification and clears the received data packets
from its data queue. Since two values are enough for the burst identification per node, only one bit in a REV is needed for this purpose.
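The burst-based ARQ state just described, with the one-bit burst identification and the ACK-DATA bit vector, can be sketched as follows; the class and method names are illustrative, not from the R-MAC implementation:

```python
class ReceiverARQ:
    """Receiver-side burst ARQ state for one neighbor (a sketch)."""

    def __init__(self):
        self.burst_id = None   # last seen 1-bit burst identification
        self.last_ack = None   # last ACK-DATA bit vector sent
        self.received = []

    def on_rev(self, burst_id, max_burst):
        """Handle a REV. Returns the old ACK-DATA bit vector to resend if
        the REV repeats the recorded burst id (the previous ACK-DATA was
        lost); returns None for a new reservation."""
        if burst_id == self.burst_id:
            return self.last_ack
        self.burst_id = burst_id
        self.received = [False] * max_burst
        return None

    def on_data(self, seq):
        self.received[seq] = True  # mark packet seq as received intact

    def ack_data(self):
        """Build the per-burst ACK-DATA: bit i set iff packet i arrived."""
        self.last_ack = list(self.received)
        return self.last_ack
```

Because only the "same as recorded" versus "different" distinction matters, the burst id alternates between two values, which is why one bit suffices.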
In R-MAC, nodes measure the propagation latencies to and the schedules of their
neighbors in the first two phases. However, after a long time period, these measurements become more and more inaccurate because of clock drift and the varying sound speed, so they should be updated periodically.
There are two ways to keep these measurements accurate. One way is to piggyback
a time stamp on each packet transmitted such as REV, DATA and ACK-REV. When
the sender and the receiver exchange these packets, they update these measurements.
Another way is to send control packets explicitly to update these measurements after a certain time interval.
Note that all the timestamps used in R-MAC are relative values, not absolute values.
Thus the accuracy and stability of the synchronization depend on the frequencies of
clocks. This greatly loosens the accuracy requirement on the node clocks. The typical
relative drift of crystal-based clocks is on the order of 15-25 parts per million [6]. Such a small drift can be easily tolerated by R-MAC.
3.4.3 Analysis
In this section, we first analyze the properties of R-MAC. Then we explore several
design parameters to get some guidelines for the real implementation of R-MAC.
From the scheduling algorithms for REV, ACK-DATA, and DATA packets, we can see that only ACK-REV packets can arrive in the R-windows of nodes. When a node receives an ACK-REV in its R-window, it
knows there exists a transmission in its neighborhood as well as the time duration of
the transmission. In this way, the node could keep silent in that time period to avoid
interrupting the transmission. When more than one ACK-REV arrives at a node, i.e., when ACK-REV packets collide, the node can sense the collision in its R-window
and conclude that there are transmissions in the neighborhood, but it has no clue
about time duration of the transmission. In this case, the node should back off for
the worst case. Moreover, in R-MAC, all nodes keep listening in their R-windows to
get the updated usage information of the channel in the neighborhood. Therefore, R-
MAC can well synchronize control and data packet transmissions so that even if control packets collide, data packet collisions are still effectively prevented. Thus, we have the
following theorem.
In R-MAC, since all data packets are scheduled to skip R-windows of all nodes,
if a node is exposed to a data sender, the R-window of this node will be skipped by
the data sender. This means that this node can still send REVs and receive ACK-REVs
without collision. Thus, the node is free to send data as long as the transmission
does not interfere with the on-going data transmission. Therefore, we get the following
theorem.
In R-MAC, there are several critical time intervals to be defined, such as the back-
off time interval and the time interval that a sender needs to wait for an ACK-REV
(referred to as ACK-REV time interval for short). These time parameters are deter-
mined by many factors such as the period length, the listen window, the maximum
propagation latency and the maximum number of neighbors. In the following, we first
discuss how to set the period length and listen window appropriately, then we discuss how to derive the other time intervals.
Setting Period Length T0
Since the data transmission in R-MAC has to avoid all R-windows, there is a minimum requirement on the period length. A very short period may make the interval between two mapped R-windows at a node too small, even less than the duration of one data packet, making R-MAC infeasible. We can give a lower bound for the period length, where DP is the duration of one data packet and TR is the duration of the R-window. This lower bound gives a sufficient condition to guarantee that there exists at least one interval between any two consecutive mapped R-windows at a node that allows the transmission of one data packet; C is the maximum number of data packets allowed in a burst.
First, we define the minimum time period for any node to collect REVs and ACK-DATAs as the contention window. The probability that control packets collide is determined by the durations of the control packets and the contention window. A large contention window reduces the collision probability, but increases the energy consumption in the idle state. We show the relation between the collision probability and the contention window as follows.
Let n be the number of REV or ACK-DATA packets a node receives in one period, DC be the duration of a control packet, and TC be the duration of the contention window. We define pi as the probability that i control packets are delivered successfully:

p1 = 1
p2 = 1 - DC/TC
p3 = p2 (1 - 2DC/TC)
...
pn = pn-1 (1 - (n-1)DC/TC)        (3.11)

which gives

pn = (1 - DC/TC)(1 - 2DC/TC) ... (1 - (n-1)DC/TC)        (3.12)
Equation 3.12 gives the probability that n control packets can be delivered success-
fully in the contention window TC . Equation 3.12 assumes that the contention window
at all nodes is slotted in the same way. This does not strictly hold in a real implementation, since the listen windows (and therefore the contention windows) are slotted differently at different nodes because of the different R-window mappings at these nodes. But it is still a
close estimation of the relation between the REV/ACK-DATA success probability and
contention window.
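The recursion in Equation (3.11) is easy to evaluate numerically; a minimal sketch (the helper name is hypothetical):

```python
def control_success_prob(n, d_c, t_c):
    """p_n from Eq. (3.11)/(3.12): probability that all n control packets
    are delivered without collision in a contention window of length t_c,
    with control packet duration d_c (a sketch)."""
    p = 1.0
    for i in range(1, n):
        # p_{i+1} = p_i * (1 - i * D_C / T_C)
        p *= max(0.0, 1.0 - i * d_c / t_c)
    return p
```

With n = 3 and DC = 0.005 s, for instance, a contention window of 0.15 s already gives a success probability just above 90%.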
Given T0, DC, n and the required probability that n control packets are successfully delivered, we can set the listen window accordingly. For example, if n = 3 and DC = 0.005 s, and we want the success rate of control packets to be greater than 90%, we can set TC accordingly. Let Nr be the number of mapped R-windows in a node's listen window. Then the listen window TL should satisfy TL > (1 + Nr)·TR + TC. Nr is statistically equal to N·TL/T0, where N is the maximum number of neighbors and T0 is the period length. Thus, we have TL > (1 + N·TL/T0)·TR + TC, from which we can set TL appropriately.
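The inequality TL > (1 + N·TL/T0)·TR + TC can be solved in closed form for TL, since rearranging gives TL·(1 - N·TR/T0) > TR + TC; a sketch, valid only when N·TR < T0:

```python
def min_listen_window(t_r, t_c, n_neighbors, t0):
    """Closed-form lower bound on TL from
    TL > (1 + N*TL/T0)*TR + TC  =>  TL > (TR + TC) / (1 - N*TR/T0),
    valid only when N*TR < T0 (a sketch)."""
    denom = 1.0 - n_neighbors * t_r / t0
    if denom <= 0:
        raise ValueError("period too short: need N*TR < T0")
    return (t_r + t_c) / denom
```

For example, with TR = 0.01 s, TC = 0.06 s, N = 10, and T0 = 1 s, the bound is about 0.078 s, i.e., a listen window of under 100 ms suffices.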
When a node senses a collision in its R-window, this node knows that the channel
is reserved but does not know how long the reservation takes. Therefore, the node has
to prepare for the worst case: it keeps silent for a time interval (i.e., the backoff time interval) that is long enough for any data transmission. If the period length is great enough to satisfy the basic fairness requirement, the maximum number of periods a transmission takes is five: two periods for collecting REVs, two periods for ACK-REV transmissions, and one period for data transmission. Then we have the following lemma, where RTTMAX is the maximum RTT in R-MAC, TL is the listen window, N is the maximum number of neighbors, DP is the duration of one data packet, and C is the maximum number of packets allowed in a burst. Therefore, the upper bound of the backoff time interval is 5 periods. Similarly, we can get the upper bound of the ACK-REV time interval.
Comparing with Tu-MAC (a revised version of T-MAC for underwater sensor networks), we demonstrate the energy efficiency of R-MAC. We also conduct experiments to explore the impact of the design parameters. We use the following parameters in the simulations. We set the size of control packets
to 5 Bytes and the data packet size to 60 Bytes. The period length is 1 second and
the listen window is 100 ms. The bit rate is 10 Kbps. The data generation follows a Poisson process.
The maximum transmission range is 90 meters. The interference range is the same
as the transmission range. The maximum number of data packets allowed in a burst is fixed in each simulation. We adopt the energy parameters of a commercial underwater acoustic modem, UWM1000, from LinkQuest [55]: the power consumption in transmission mode is 2 Watts and the power consumption in receive mode is 0.75 Watts. We measure the energy consumption for each successfully delivered packet. Goodput is the number of bytes successfully delivered to the receiver per second. End-to-end delay is the average time interval from the generation of a packet at the source to its delivery at the receiver.
As we mentioned earlier, there are no energy efficient MAC solutions for underwater
sensor networks with unevenly distributed traffic in the literature. We choose to im-
plement T-MAC [98] as a reference because both T-MAC and R-MAC can adapt to
unevenly distributed traffic with high energy efficiency, though T-MAC is designed
for radio-based sensor networks. We apply the idea of T-MAC in underwater sensor
networks to show that it is not efficient to simply adapt radio-based network protocols to underwater environments. The resulting protocol, Tu-MAC, differs from T-MAC in three aspects. First, we modify the active time, TA, to incorporate the propagation delay, which is non-negligible in underwater networks. Second, since the original RTS/CTS method is not suitable for underwater sensor networks due to the long propagation delay, we make the following change: when the sender receives a CTS from the intended receiver, it has to wait until the CTS propagates through the whole transmission range of the receiver. Third, since carrier sensing does not make much sense in underwater sensor networks, Tu-MAC does not adopt this technique. Instead, it has the following design:
if a node senses the channel busy when it starts to transmit a packet, it will back off
for a random time interval between one data packet duration and two data packets
durations.
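This backoff rule is straightforward to express; a sketch with an illustrative packet duration:

```python
import random

def tu_mac_backoff(d_p):
    """Tu-MAC busy-channel backoff: a random interval between one and
    two data packet durations, per the rule described above."""
    return random.uniform(d_p, 2.0 * d_p)
```

For a 60-byte data packet at 10 Kbps (a duration of 48 ms), this yields a backoff between 48 and 96 ms.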
In the first set of simulations, multiple senders contend for the shared medium. We check the goodputs of all the contenders under different traffic rates.
We use the topology shown in Fig. 3.19. In the simulations, nodes 1, 2, 3 and 4
send data to node 0. The distances between the senders and the receiver are different.
We vary the data sending rate from 0.02 to 0.24 packets per second. We measure the
goodput and energy consumption of each sender. The results are shown in Fig. 3.20
[Figure 3.19: Simulation topology. Nodes 1-4 send data to node 0; the sender-receiver distances range from 20 m to 80 m.]
As shown in Fig. 3.20, all the senders have a very close share of the channel re-
gardless of their positions. The goodputs of the senders increase along with the traffic
rate. However, when the traffic rate reaches some threshold (0.2 packet/s in these sim-
ulations), the senders saturate the channel, and the goodputs become flat and do not
change with the traffic rate thereafter. The curves in Fig. 3.20 also show the effect of
collision avoidance provided by R-MAC, i.e., when the traffic rate is low, all the data
packets generated get through. However, when the traffic rate exceeds the capacity of the channel, only some of the data packets get through, and the traffic rate no longer affects the goodput.

[Figure 3.20: Goodput of each sender (nodes 1-4) vs. average data rate (pkts/s).]

[Figure 3.21: Energy consumption (Joule) of each sender (nodes 1-4) vs. average data rate (pkts/s).]
The energy consumption of each sender with the standard deviation is shown in
Fig. 3.21. As shown in this figure, these senders also have nearly the same energy consumption.
From these figures, we can see that all the senders in R-MAC achieve close goodputs at close energy consumption when they contend for the channel. That is, R-MAC supports fairness well.
We evaluate the energy efficiency of R-MAC and T u -MAC using a star topology as
shown in Figure 3.22. In this network, node 0 is the only receiver and all other nodes
are senders. All the senders can hear each other. We measure the energy consumption
of R-MAC and Tu-MAC at different traffic rates. To make the comparison fair, we choose the same parameters for R-MAC and Tu-MAC wherever applicable. It should also be noted that the ARQ technique adopted in the original T-MAC is different from that used in R-MAC. We thus also implement a variant with ARQ per burst, i.e., burst-based acknowledgment, denoted T-MAC(B).
[Figure 3.22: Star topology for the energy-efficiency comparison. Node 0 is the receiver; the senders are 20-40 m away.]
To analyze how R-MAC achieves energy efficiency, in the following we first measure the average time spent in the transmitting, receiving, and idle states per successfully delivered packet, which we call the transmitting overhead, receiving overhead, and idle overhead, respectively. The results for the transmitting overhead are shown in Figure 3.23.
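Converting these per-packet time overheads into energy uses the modem power draws quoted earlier; in this sketch, the idle power value is an assumed placeholder, since the text does not reproduce the modem's idle draw:

```python
# Power draws in Watts. P_TX and P_RX follow the UWM1000 figures quoted
# in the text; P_IDLE is an assumed placeholder value.
P_TX, P_RX, P_IDLE = 2.0, 0.75, 0.008

def energy_per_packet(tx_ms, rx_ms, idle_ms):
    """Convert per-packet time overheads (in ms) into Joules per packet."""
    return (tx_ms * P_TX + rx_ms * P_RX + idle_ms * P_IDLE) / 1000.0
```

This is the conversion applied later to Figures 3.23-3.25 to obtain the energy-per-packet comparison.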
From this figure, we can see that the transmitting overhead of R-MAC is higher
than that of Tu -MAC when the traffic rate is low, but it becomes less as the traffic rate
continues to increase. This is because the receiver in R-MAC has to transmit an
ACK-REV to each of its neighbors, while the receiver in T u -MAC just needs to send one
CTS for all of its neighbors. Thus when the traffic rate is low, the number of packets
in a burst is small, which means that the probability that data packets in T u -MAC
collide is very low. Thus the transmitting overhead of R-MAC is higher than that of
Tu-MAC. As the traffic rate keeps increasing, the number of data packets in a burst increases, which amortizes the control overhead of R-MAC over more data packets. On the other hand, when the data burst size is lifted, data packet collisions in Tu-MAC become more and more significant, which causes the increase of the transmitting overhead in Tu-MAC. When the traffic rate is relatively high (greater than 0.14 pkts/s as shown in the figure), data packet collisions are the dominating source of the transmitting overhead.
The results for receiving overhead are shown in Figure 3.24. As we can see, the
receiving overhead for R-MAC is only half that of Tu-MAC. Such a significant difference comes from the schedule design: in Tu-MAC, all nodes listen at the same time; therefore, when a node sends packets, all its neighbors overhear these packets, resulting in higher receiving overhead. On the other hand, in
[Figure 3.23: Transmitting overhead (ms/packet) of R-MAC, T-MAC(B), and T-MAC vs. average data rate (pkts/s).]

[Figure 3.24: Receiving overhead (ms/packet) of R-MAC, T-MAC(B), and T-MAC vs. average data rate (pkts/s).]

[Figure 3.25: Idle-time overhead (ms/packet) of R-MAC, T-MAC(B), and T-MAC vs. average data rate (pkts/s).]
R-MAC, nodes have different schedules. When a node sends packets, it is most likely that only the intended receiver is listening.
Figure 3.25 shows the results of idle overhead. From this figure, we can observe
that idle overhead decreases with the growth of traffic rate and the idle overhead of
R-MAC is only half of that of Tu -MAC. This is mainly caused by two factors. First,
the method adopted in Tu -MAC to reduce end-to-end delay results in longer idle time
on each node. In Tu -MAC, nodes have to stay idle for possible future traffic when they
find the channel is captured by other nodes. Second, the long active time, TA, of each node in Tu-MAC increases the idle time.
We convert the overhead shown in Figure 3.23, Figure 3.24 and Figure 3.25 into
energy consumption per data packet based on the parameters of UMW1000 modems,
and the results are plotted in Figure 3.26. From this figure, we can observe that under
a low traffic rate such as 0.02 pkts/s, even though R-MAC has higher transmitting overhead (per data packet), it is still more energy efficient than Tu-MAC due to its lower receiving overhead and lower idle overhead per data packet. As the traffic rate increases, the difference between R-MAC and Tu-MAC is enlarged, mainly due to the rapidly growing transmitting overhead of Tu-MAC.
From this set of simulations, we can conclude that R-MAC is much more energy
efficient than Tu-MAC under various traffic rates. This tells us that we cannot directly apply T-MAC, a well-designed MAC solution for radio-based sensor networks, in underwater network scenarios. The results also indicate that R-MAC can achieve high energy efficiency.
3.4.4.5 End-to-End Delay
We evaluate the end-to-end delay of R-MAC and Tu-MAC in a multi-hop network. The topology is shown in Figure 3.27, where node 0 is
the receiver and node n is the only sender. The distance between any two adjacent
nodes is 80 meters. The sender sends data at the average rate of 0.1 packet per second
(a light traffic rate). We vary the number of hops by adding different number of nodes
between the sender and the receiver. For each number of hops, we compute the average end-to-end delay.
The end-to-end delay results are plotted in Figure 3.28. From this figure, we can
see that the end-to-end delay of both R-MAC and T u -MAC increases linearly as the
number of hops increases. At light traffic, R-MAC takes at most 5T0 to transmit one packet per hop, while Tu-MAC takes about T0. This is mainly due to two factors: 1) Tu-MAC reduces the per-hop delay (by the method mentioned earlier) at the cost of sacrificing energy efficiency; 2) R-MAC introduces additional delay (one period) to guarantee fairness. In other words, R-MAC trades end-to-end delay for energy efficiency and fairness.
[Figure 3.27: Linear multi-hop topology. Node 0 is the receiver and node n the sender; adjacent nodes are 80 m apart.]
We also run simulations for high traffic rate. In fact, due to the good scheduling of
R-MAC, the increase of end-to-end delay in R-MAC is much smaller than the increase
in Tu -MAC. For example, for a traffic rate of 1 packet per second, the end-to-end
delay of Tu -MAC for 10-hops is around 100 seconds, while the result of R-MAC is
only about 200 seconds. As we increase the traffic rate to very high, the end-to-end
[Figure 3.28: End-to-end delay of R-MAC and Tu-MAC vs. number of hops (1-10).]
delays of R-MAC and Tu-MAC are very close. This is because the packet queuing delay dominates at very high traffic. Note, however, that the delivery ratio of Tu-MAC drops much more significantly at high traffic than that of
R-MAC. Thus, if we combine the delay results with the energy results, we can conclude
that R-MAC has more advantages in the networks with high data traffic.
3.5 Summary
In this chapter, we present our work on MAC for underwater sensor networks. We
first review the challenges for efficient MAC design in underwater sensor networks,
then we analyze the performance of two widely used MAC techniques, random access and RTS/CTS, and finally present our new MAC solution, R-MAC.
Our study on random access and RTS/CTS techniques shows that there is no
absolute winner between these two approaches. Random access approach is suitable for
sparse networks with low data traffic, small data packets, and long transmission range.
On the other hand, RTS/CTS-based approach performs better in dense networks with
high data traffic, large data packet size and short transmission range.
Based on these results, we propose an energy efficient MAC protocol, R-MAC, for
dense underwater sensor networks where data traffic is unevenly distributed spatially
and temporally. R-MAC carefully schedules the transmissions of control and data
packets to avoid data packet collisions. The scheduling algorithms not only avoid data
packet collisions completely, but also solve the exposed terminal problem. In R-MAC,
each node adopts periodic listen/sleep to reduce the energy waste in idle state and
each node randomly selects its own schedule. Additionally, R-MAC supports fairness.
Finally, the burst-based acknowledgment technique reduces the control packet overhead.
Chapter 4
Multi-Hop Routing
Energy efficiency is a primary concern for multi-hop routing in M-UWSNs. At the same time, M-UWSN routing should be able to handle node mobility.
This requirement makes most existing energy-efficient routing protocols unsuitable for
M-UWSNs. There are many routing protocols proposed for terrestrial sensor networks,
such as Directed Diffusion [37], and TTDD (Two-Tier Data Dissemination) [112]. These
protocols are mainly designed for stationary networks. They usually employ query flooding to establish and maintain forwarding paths. In M-UWSNs, however, most sensor nodes are mobile, and the network topology changes very rapidly even with small displacements. The frequent maintenance and recovery of forwarding paths is very expensive in highly dynamic networks, and even more expensive in dense 3-dimensional M-UWSNs.
The multi-hop routing protocols in terrestrial mobile ad hoc networks fall into
two categories: proactive routing and reactive routing (aka., on-demand routing). In
proactive routing protocols such as OLSR [38], TBRPF [70] and DSDV [73], the cost of proactive neighbor detection could be very expensive because of the highly dynamic topology and large scale of M-UWSNs. In reactive protocols such as AODV [74] and DSR [40], on the other hand, routing is triggered by the communication demand at sources. In the phase of route discovery, the source seeks to establish a route towards the destination by flooding a route request message, which would be very costly in large-scale M-UWSNs.
Thus, to provide scalable and efficient routing in M-UWSNs, we have to seek new solutions.
We propose a novel routing protocol, Vector-Based Forwarding (VBF), aiming to provide robust, scalable and energy efficient routing in M-UWSNs. In VBF, the forwarding path is guided by a routing vector from the source to the sink. Intuitively, a virtual pipe with the source-
to-sink vector as the axis is used as the abstract route for data delivery. If the pipe is
populated by nodes then the data packets can be forwarded to the sink. The radius of
the virtual pipe is a predefined distance threshold. For any sensor node which receives
data, it first computes its distance to the routing vector. If this distance is smaller
than the threshold, then the node is considered as a candidate to forward the data.
Otherwise, the node simply discards the data. To reduce the traffic in dense networks,
VBF adopts a distributed self-adaptation algorithm, in which all the candidate nodes
are coordinated and finally only the most desirable ones forward the data packets. Compared with naive flooding, VBF can significantly reduce network traffic and save energy.
However, there are two major drawbacks with VBF: (1) Because of the use of the
unique source-to-sink vector, the creation of a single virtual pipe may significantly
affect the routing efficiency in different node density areas. If nodes in one area are too
sparsely distributed, then it is quite possible that very few or even no nodes lie within
the virtual pipe eligible for data forwarding, which may lead to network disconnection and hence a degraded data delivery ratio; (2) Again because of the single source-to-sink vector design, VBF is too sensitive to the routing pipe radius threshold, which significantly affects the routing performance and may not be easy to set well for all network conditions.
To address these problems, we propose a hop-by-hop version of VBF, called HH-VBF. Instead of using a single virtual pipe from the source to the sink, HH-
VBF defines a virtual pipe around the per-hop vector from each forwarder to the sink.
In this way, each node can adaptively make packet forwarding decisions based on its
current location. This design can directly bring the following benefits: (1) Since each
node has its own routing pipe, the maximum pipe radius is the transmission range. In
other words, there is no necessity to increase the pipe radius beyond the transmission
range in order to enhance routing performance; (2) In sparse networks, though the
number of eligible nodes may be small, HH-VBF can find a data delivery path as long
as there exists one in the network. Thus, HH-VBF enhances the data delivery ratio in sparse networks.
Geographic routing protocols, such as VBF and HH-VBF, are shown to be suitable
to handle the node mobility in M-UWSNs. These protocols usually rely on the geo-
graphical information and exploit greedy policies to optimize the selection of nodes in
the next hop. However, the greedy policies are not always feasible. For example, a node
cannot forward data further when none of its neighbors is qualified for the next hop, i.e., when the packet encounters a void. To make routing possible in such cases, some measures should be taken to avoid routing
voids. In M-UWSNs, many network characteristics make this void avoidance problem
very challenging. First of all, the voids in our targeted networks are three-dimensional.
Secondly, the mobility of most nodes makes the voids in the forwarding path volatile.
Sometimes the voids are convex, and sometimes the voids could be concave. Thirdly,
the voids themselves might be mobile. For example, when a ship passes an underwater
sensor network, the communication around the ship will be interrupted, thus generating
a mobile void.
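For reference, the routing-vector distance test that VBF applies (and that HH-VBF applies per hop) can be sketched in 3-D as follows; the function names are illustrative:

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def dist_to_vector(p, s, t):
    """Distance from node position p to the line through source s and
    sink t (all 3-D points), computed as |SP x ST| / |ST|."""
    sp = [p[i] - s[i] for i in range(3)]
    st = [t[i] - s[i] for i in range(3)]
    cross = [sp[1] * st[2] - sp[2] * st[1],
             sp[2] * st[0] - sp[0] * st[2],
             sp[0] * st[1] - sp[1] * st[0]]
    return _norm(cross) / _norm(st)

def is_forwarder(p, s, t, radius):
    """VBF eligibility test: a node is a candidate forwarder iff it lies
    within the routing pipe of the given radius around the vector."""
    return dist_to_vector(p, s, t) <= radius
```

HH-VBF differs only in that `s` is replaced by the current forwarder's position at each hop, so the pipe is re-anchored hop by hop.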
To handle voids, we propose the Vector-Based Void Avoidance (VBVA) protocol. In VBVA, the forwarding path is initially represented by a vector from the source to the destination
of the data packet. If there is no presence of void in the forwarding path, VBVA is
essentially the same as VBF [109]. When there is a void in the forwarding path, VBVA
adopts two methods to handle the void, namely, vector-shift and back-pressure. In the
vector-shift method, VBVA attempts to route the packet along the boundary of the
void to the target by shifting the forwarding vector of the data packet. If the void is
convex, the vector-shift method can successfully route the packet around the void to the
target. However, if the void is concave, the vector-shift method may fail. In this case,
VBVA resorts to the back-pressure method to retreat the packet to some nodes suitable
for shifting the forwarding vector of the packet. We can prove that if the network is
stable in the time period of one data packet delivery, and if there exist paths between
the source and the target, VBVA can always find a path from the source to the target.
On the other hand, since VBVA avoids voids on demand, i.e., VBVA does not rely on
the topology information of the network, VBVA can handle mobile networks and mobile voids.
The rest of this chapter is organized as follows. We first review some related work
in Section 4.2, then we present VBF and HH-VBF in Section 4.3. After that, we
describe VBVA in Section 4.4. Finally, we conclude our work on multi-hop routing in
Section 4.5.
In this section, we review some related work on routing for terrestrial sensor networks and geographic routing.
In the last few years, many energy efficient routing protocols have been proposed for
terrestrial sensor networks, such as Directed Diffusion [37], Two-Tier Data Dissemi-
nation [112], GRAdient [113], Rumor routing [10], and SPIN [31]. In the following,
we briefly review these protocols and discuss why they are unsuitable for underwater sensor
network environments.
Directed Diffusion is proposed in [37]. In the target application scenario, the sink
floods its interest into the network, and the source node responds with data. The
data are first forwarded to the sink along all possible paths. Then, an optimal path is
enforced from the sink to the source recursively based on the quality of the received
data. Directed Diffusion works well in low dynamic networks, where most nodes are
stationary and forwarding paths are relatively stable. However, if we apply Directed Diffusion to M-UWSNs, the frequent re-establishment of forwarding paths caused by node mobility will be very costly.
In the Rumor routing algorithm [10], both event notifications and data queries are
forwarded randomly. The successful data delivery depends on the chance that these
two types of forwarding paths interleave. In stationary networks, it is most likely that
these paths will meet since they are relatively stable. However, in underwater sensor
networks, this is unlikely to happen. Even in networks with low mobility, the instability of the forwarding paths makes it hard for the two types of paths to meet.
SPIN (Sensor Protocol Information via Negotiation) is proposed for low data rate
networks [31]. When a node wants to send data, it broadcasts a description message
of the data, and each neighbor decides whether to accept the data based on its local
resource condition. Once again, the high propagation delay in mobile underwater
sensor networks makes the throughput of this protocol low. Moreover, flooding in
SPIN depletes the energy of the network, especially for medium or high data rate
networks.
TTDD (Two-Tier Data Dissemination) is designed for stationary networks with mobile sinks [112]. In this protocol, the source sensor initiates the process to construct a grid
covering the whole field. Data and queries are forwarded along the cross points in
the grid. The impact of the sink mobility is confined within each cell. When most
nodes in the network are fixed, it costs less energy to maintain the grid. However, the
overhead to maintain the grid will increase significantly when most nodes are mobile. In GRAdient [113], a cost field is built over the whole network by the sink, which has the lowest cost. Data packets are forwarded along the direction from higher-cost nodes to lower-cost nodes. The width of the path is controlled by the credits carried in each packet. In networks where sensor nodes are mobile, the protocol will consume a large amount of scarce en-
ergy to update the cost field in order to keep relatively accurate paths from the source
to the sink.
In short, the existing protocols for terrestrial sensor networks are unsuitable for
mobile underwater sensor networks. All these protocols depend on relatively stable
network topologies; in M-UWSNs, it would be costly to maintain and recover the frequently broken routing paths due to
node mobility.
Geographic routing protocols leverage the position information of each node to de-
termine forwarding paths. Since geographic routing does not rely on stable neighbor-
hood to find forwarding paths, it is a very promising technique for routing in networks with mobile nodes.
Most geographic routing protocols adopt some greedy policies to select the next hop.
However, these greedy policies can not be directly applied to M-UWSNs. First, all the
existing geographic routing protocols are proposed for 2-dimensional networks, and so
are the policies to select the next hop. Second, most geographic routing protocols do
not consider the reliability issue. They usually adopt a single forwarding path, and thus
are vulnerable to node failure. Third, many greedy policies are still based on relatively stable network topologies. We briefly review some representative geographic routing protocols as follows.
GPSR [43] always selects the node geographically closest to the destination of
the packet as the next hop. If GPSR cannot find any node closer to the destination of the packet
than the forwarder itself, it adopts the right-hand rule to forward the packet. GPSR requires
that the underlying network topology is a planar graph. GPSR works well in
stationary networks, but is unsuitable for highly dynamic networks: it is expensive
to generate a planar graph for a highly dynamic network. Moreover, the nodes in GPSR
have to keep the position information of their neighbors, which also results in energy
overhead for beacon exchanges.
Beacon-less routing algorithm (BLR) [32] selects the next hop through Dynamic
forwarding delay (DFD). Upon receiving a packet, each node computes its DFD value
determined by its position. The node with the least DFD value forwards the packet.
DFD is similar to the self-adaptation algorithms in our work. However, they differ in
several respects: 1) The purposes are different. DFD is used to eliminate duplicate
packets, but the self-adaptation algorithms are to eliminate unnecessary duplicate pack-
ets; 2) The position of the source is ignored in DFD, and thus the forwarding path might
deviate from the direction from the source to the destination. In contrast, the self-adaptation
algorithm in VBF attempts to keep the forwarding paths as close to the source-to-sink direction as possible. In Contention-Based Forwarding (CBF), the nodes in a suppression
area contend to forward the packet. The forwarding node first broadcasts a request
packet, and the nodes in the suppression area reply with another control packet. The
forwarding node then selects the node with the largest forwarding progress as the
next hop. If there is no node in the suppression area, the forwarding node has to try
other areas. The suppression area is defined by a Reuleaux triangle in CBF. CBF works well in static networks. In
dynamic networks, however, the selected next hop cannot be guaranteed to be the node with the largest
progress due to the delay caused by the contention. Moreover, the single forwarding path
makes CBF vulnerable to node mobility and node failures in M-UWSNs.
The protocols proposed in both [85] and [52] take not only the position but also
the quality of the link into consideration when selecting the next hop. In [85], distance
is not the only concern in selection of the next hop. The quality of the link to the
next hop is also an equally important factor. The forwarding node selects the node
with the largest progress to the destination whose link quality must be greater than
some minimum threshold. The forwarding node keeps a blacklist for all the neighbors
with poor links. In the next hop selection, all the nodes in the blacklist are excluded.
In [52], a new metric, called normalized advance (NADV), is used in the greedy policy
to select the next hop. NADV is essentially the advance toward the destination achieved per unit cost.
The forwarding node selects the node with the largest NADV.
Both the blacklist scheme and the NADV metric work well for static networks where the
node can have a long enough time to statistically estimate the quality of the link. However, in M-UWSNs, links change too quickly for such statistics to remain valid.
Trajectory-Based Forwarding (TBF) [66] combines source routing and Cartesian forwarding: the forwarding path is specified by a trajectory carried in data packets. The trajectory can be represented as either a function or an equation. The idea of TBF looks promising for large scale sensor networks,
since it neither involves query flooding nor employs route discovery and maintenance.
It can also serve as a building block for other protocols. The major concern in [66] is how to use the trajectory-based forwarding algorithm to implement other routing protocols such as unicast, multicast and broadcast. Many implementation issues remain unaddressed, for example, the efficient and effective representation of the trajectory and the selection of the next hop.
To some extent, our two routing protocols, VBF and HH-VBF, are simplified ver-
sions of TBF. To the best of our knowledge, VBF and HH-VBF are the first routing protocols
proposed for 3-dimensional network deployment. The forwarding paths in VBF and
HH-VBF are interleaved and very robust against link failures and node failures. In
VBF and HH-VBF, the forwarding path is specified by a vector, an extremely simple representation.
In the literature, there are numerous studies addressing the void problem in radio sensor networks.
The planar-based void avoidance technique is widely used for the static networks.
These algorithms first extract a planar subgraph from the original network, then ap-
ply some planar graph traversal algorithm to bypass the void. The typical traver-
sal algorithm is right-hand rule, which is adopted in many geographic routing proto-
cols [43, 111]. In this algorithm, a node bypasses a void by forwarding the packet along
the first edge counterclockwise from the edge on which the packet arrived. The right-hand
rule traverses the interior of a closed polygonal region in clockwise order. However,
this rule requires that the underlying graph is planar; otherwise, it potentially results
in routing loops. Thus, planarization algorithms are needed to extract a planar subgraph from the network topology. The right-hand rule is unsuitable for
underwater sensor networks since underwater sensor networks are usually deployed in
3-dimensional space. Moreover, the right-hand rule requires that the network is static.
Flooding is the simplest and most robust method to overcome the voids in a network. How-
ever, it is also the most expensive method in terms of energy consumption. Some
routing protocols restrict the scale of flooding to handle the voids. In the algorithm
proposed in [92], when a node cannot forward a packet further, this node floods the
packet just one hop, and then the neighbors of this node forward this packet in a greedy
way. However, in this algorithm, the node needs the position information of its neighbors
and only the node with the best position is selected as the next hop. This is infeasible for
M-UWSNs where most nodes are mobile and the propagation delays are high.
Some studies proposed to use preventive methods to overcome the void problem
in networks. In these methods, the void is avoided by preventing the packet from
being routed into the void in the first place. The void avoidance algorithm called distance
upgrading algorithm (DUA) is a preventive way to handle voids [15]. In this algorithm,
each node has a virtual distance to the target, and the packets are forwarded from the
nodes with larger virtual distances to the nodes with lower virtual distances. When
a node finds out that it is a dead end, this node upgrades its virtual distance, which
possibly triggers other nodes to update their virtual distances. The update of virtual
distances makes the dead-end node have a higher virtual distance. Therefore, the data
are prevented from being forwarded to the dead-end nodes. DUA is proposed for static
networks and static voids. When the occurring frequency of voids is high or the void
itself is mobile, DUA has to update the virtual distances of all the nodes in the network frequently, which is very costly.
In [21], the void problem is addressed based on geometric properties. The authors
first propose a localized rule, TENT, to find the stuck node. In TENT, a node only
needs the positions of its neighbors to determine if it is a stuck node. The stuck
node initializes a distributed algorithm, BOUNDHOLE, which uses the right-hand rule
to connect a node with its next neighbor, to form a closed region. The algorithms,
TENT and BOUNDHOLE, only involve the nodes around the holes. The algorithm
BOUNDHOLE can be used in geographic routing to avoid voids. For example, if the
target is outside the hole, the packet can be forwarded along the nodes surrounding
the hole until a node closer to the target is reached, then this node can use any greedy
algorithm to forward this packet to the target. If the target node is inside a hole,
then restricted flooding can be used to forward the packet, i.e., all the nodes along the
hole forward this packet to their one-hop neighbors, which then flood the packet in
the hole. The algorithms TENT and BOUNDHOLE exploit the geometric information
of the network to find a hole; they are feasible only for static networks, where geometric
information is stable for a long time period. Moreover, these two algorithms work only
in 2-dimensional networks. A boundary recognition algorithm is proposed in [101],
where flooding is used to build a shortest path tree. When a void is present, the flow of
the shortest path tree forks near the void and merges again past the void. The so-called
cut pair, two adjacent nodes that are in separate branches of the shortest path tree, is used
to construct a coarse inner boundary. Then, all the nodes on the inner boundary
are synchronized to flood the network to find the outer boundary. Finally, the coarse
inner boundary is refined to form the final boundary of the voids. The boundary recognition
algorithm [101] only uses the connectivity information of the sensor network to identify
the void in the network. The boundary recognition algorithm only makes sense for
static networks.
In short, all the existing void-avoidance protocols consider static and 2-dimensional
networks. They are not suitable for M-UWSNs where networks are deployed in 3-
dimensional space, most nodes are mobile, and the void itself is probably mobile.
In this section, we present VBF and HH-VBF. A preliminary version of this work has been published previously.
We assume that each node in VBF knows its position information, which can be obtained
by some localization algorithms [7, 27, 51, 116-118]. If there is no such localization service
available, a sensor node can still estimate its relative position to the forwarding
node by measuring the angle of arrival (AOA) and the strength of the signal through
the hardware device. This assumption is justified by the fact that acoustic directional
antennae are of much smaller size than RF directional antennae due to the extremely
small wavelength of sound. Moreover, underwater sensor nodes are usually larger than
land-based sensors, and they have room for such devices. In this work, we assume
the position information is calculated by measuring the AOA and the strength of the
signal.
In VBF, each packet carries the position information of the sender, the target and
the forwarder (i.e., the node which transmits this packet). The forwarding path is
specified by the routing vector from the sender to the target. Upon receiving a packet,
a node computes its relative position to the forwarder. Recursively, all the nodes
receiving the packet compute their positions. If a node determines that it is close enough
to the routing vector (e.g., less than a predefined distance threshold), it puts its
own computed position in the packet and continues forwarding the packet; otherwise, it
simply discards the packet. In this way, all the packet forwarders in the sensor network
form a routing pipe: the sensor nodes in this pipe are eligible for packet forwarding,
and those which are not close to the routing vector (i.e., the axis of the pipe) do not
forward. Figure 4.1 illustrates the basic idea of VBF. In the figure, node S1 is the
source, and node S0 is the sink. The routing vector is specified by S1S0. Data packets
are forwarded from S1 to S0 . Forwarders along the routing vector form a routing pipe
[Figure 4.1: The routing pipe in VBF. Nodes that are not close enough to the routing vector do not forward.]
As we can see, like all other source routing protocols, VBF requires no state infor-
mation at each node. Therefore, it is scalable to the size of the network. Moreover, in
VBF, only the nodes along the forwarding path (specified by the routing vector) are involved in packet routing, which saves energy.
VBF is a source routing protocol where each packet carries simple routing informa-
tion. In a packet, there are three position fields, OP, TP and FP, i.e., the coordinates
of the sender, the target and the forwarder. In order to handle node mobility, each
packet contains a RANGE field. When a packet reaches the area specified by its TP,
this packet is flooded in an area controlled by the RANGE field. The forwarding path
is specified by the routing vector from the sender to the target. Each packet also has
a RADIUS field, i.e., the distance threshold used by the receiving nodes to determine if they are close enough to the routing vector and eligible for packet forwarding.
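The forwarding rule above reduces to a point-to-line distance test in 3-D. A minimal sketch (the helper names are ours, not from the thesis):

```python
import math

def sub(a, b):
    """Component-wise difference of two 3-D points."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)

def distance_to_vector(node, op, tp):
    """Distance from `node` to the line through OP (sender) and TP (target)."""
    axis = sub(tp, op)
    return norm(cross(sub(node, op), axis)) / norm(axis)

def eligible(node, op, tp, radius):
    """A node is eligible to forward only if it lies inside the routing pipe."""
    return distance_to_vector(node, op, tp) <= radius
```

A node on the axis has distance 0 and is always eligible; the RADIUS value carried in the packet plays the role of `radius` here.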
There are two types of queries. One is location-dependent query. In this case, the
sink is interested in some specific area and knows the location of the area. The other
type is location independent query, when the sink wants to know some specific type
of data regardless of its location. For example, the sink wants to know if there exist
abnormal high temperatures in the network. Both of these two types of queries can be handled by VBF as follows.
1. Query Forwarding For location dependent queries, the sink is interested in some
specific area, so it issues an INTEREST query packet, which carries the coordinates of
the sink and the target in the sink-based coordinate system. Each node which receives
this packet calculates its own position and the distance to the routing vector. If the
distance is less than RADIUS (i.e., the distance threshold), then this node updates the
FP field of the packet and forwards it; otherwise, it discards this packet. For location-independent queries, the INTEREST packet may carry some invalid positions for the
target. Upon receiving such packets, a node first checks if it has the data which the
sink is interested in. If so, the node computes its position in the sink-based coordinate
system, generates data packets and sends them back to the sink. Otherwise, it updates the FP field and continues forwarding the INTEREST packet.
2. Source initiated Query In some application scenarios, the source can initiate
the query process. VBF also supports such source initiated query. If a source senses
some events and wants to inform the sink, it first broadcasts a DATA READY packet.
Upon receiving such packets, each node computes its own position in the source-based
coordinate system, updates the FP field and forwards the packet. Once the sink re-
ceives this packet, it calculates its position in the source-based coordinate system and
transforms the position of the source into its own coordinate system. Then the sink
can decide if it is interested in such data. If so, it may send out an INTEREST packet as described above.
Handling Source Mobility Since the source node keeps moving, its location cal-
culated based on the old INTEREST packet might not be accurate any more. If no
measure is taken to correct the source location, the actual forwarding path might get
far away from the expected one, i.e., the destination of the data forwarding path most
probably misses the sink. We propose the following sink-assisted approach to solve this
problem.
The source keeps sending packets to the sink, and the sink can utilize the source
location information carried in the packets to determine if the source moves out of
the targeted scope. For example, if the sink calculates its position as P_c = (x_c, y_c, z_c)
based on the coordinates of the source, P_source = (x_source, y_source, z_source), carried in the packets, and its real
position is P = (x, y, z), then the sink can estimate the relative
position of the source as P'_source = (x − x_c, y − y_c, z − z_c). By comparing P_source and
P'_source, the sink can decide if the source has moved out of the scope of the interested area.
If so, the sink sends a SOURCE_DENY packet to the source using P'_source. Once the
source gets such packets, it stops sending data. At the same time, the sink initiates a new query toward the updated source location.
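One way to operationalize this check, under the assumption that the discrepancy between the sink's computed and real positions reflects the source's displacement (the function name and threshold are illustrative, not from the thesis):

```python
import math

def source_moved_out(real_sink_pos, computed_sink_pos, scope_radius):
    """If the sink's position computed from the source's stale coordinates
    differs from its real position by more than the scope radius, the
    source is assumed to have left the interested area."""
    offset = math.dist(real_sink_pos, computed_sink_pos)
    return offset > scope_radius
```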
In the basic VBF protocol, all the nodes close enough to the routing vector are
qualified to forward packets. The protocol is simple and introduces little computa-
tion overhead. However, when sensor nodes are densely deployed, VBF may involve
too many nodes in data forwarding, which in turn increases the energy consumption.
Thus, it is desirable to adjust the forwarding policy based on the node density. Due to
the mobility of the nodes in the network, it is infeasible to determine the global node
density. On the other hand, it is inappropriate to measure the density at the trans-
mission ends (i.e., the sender and the target) because of the low propagation speed of
acoustic signals. We propose a self-adaptation algorithm for VBF to allow each node
to estimate the density in its neighborhood (based on local information) and forward
packets adaptively.
Definition 4.1 Given a routing vector S1S0, where S1 is the source and S0 is the sink, let F be a forwarder and A a node within the transmission range of F. The desirableness factor of A is defined as

α = p/W + (R − d·cosθ)/R,

where p is the distance of A to the routing vector, d is the distance between A and F, and θ is the angle between the vector FS0 and the vector FA. R is the transmission range and W is the radius of the routing pipe (i.e., the distance threshold).

[Figure 4.2: The parameters used in the definition of the desirableness factor.]
Figure 4.2 depicts the various parameters used in the definition of desirableness factor.
From the definition, we see that for any node close enough to the routing vector, i.e.,
0 ≤ p ≤ W, the desirableness factor of this node is in the range of [0, 3]. For a node, if
its desirableness factor is large, it means that either its distance to the routing vector
is large or it is not far away from the forwarder. In other words, it is not desirable for
this node to continue forwarding the packet. On the other hand, if the desirableness
factor of a node is 0, then this node is on both the routing vector and the edge of the
transmission range of the forwarder. We call this node as the optimal node, and its
position as the best position. For any forwarder, there is at most one optimal node
and one best position. If the desirableness factor of a node is close to 0, it means this node is close to the best position. We design a self-adaptation algorithm based on the desirableness factor. This algorithm aims to select the most desirable nodes as forwarders.
In this algorithm, when a node receives a packet, it first determines if it is close enough
to the routing vector. If yes, the node then holds the packet for a time period related
to its desirableness factor. In other words, each qualified node delays forwarding the packet by

T_adaptation = √α · T_delay + (R − d)/v_0,    (4.1)
where T_delay is a predefined maximum delay, called the maximum delay window, v_0
is the propagation speed of acoustic signals in water, i.e., about 1500 m/s, and d is the
distance between this node and the forwarder. In the equation, the first term reflects
the waiting time based on the node's desirableness factor: the more desirable (i.e., the
smaller the desirableness factor), the less time to wait. The second term represents
the additional time needed for all the nodes in the forwarder's transmission range to receive this packet.
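A sketch of the holding-time computation, assuming Equation 4.1 has the form √α·T_delay + (R − d)/v_0 (the square-root dependence on the desirableness factor is stated later in the analysis):

```python
import math

V0 = 1500.0  # propagation speed of sound in water, m/s

def holding_time(alpha, d, R, t_delay):
    """Eq. 4.1: sqrt(alpha) * T_delay + (R - d) / v0.
    alpha: desirableness factor; d: distance to the forwarder; R: tx range."""
    return math.sqrt(alpha) * t_delay + (R - d) / V0
```

The optimal node (α = 0, d = R) gets a holding time of zero and forwards immediately.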
During the delayed time period Tadaptation , if a node receives duplicate packets from
n other nodes, then this node has to compute its desirableness factors relative to these n nodes and decide whether to forward the packet, with a forwarding probability that decreases exponentially with n.
Analysis Essentially, the above self-adaptation algorithm gives higher priority to the
more desirable nodes to continue forwarding the packet, and it also allows a less desirable
node to have a chance to re-evaluate its importance in the neighborhood. After re-
ceiving the same packets from its neighbors, the less desirable node can measure its
importance by computing its desirableness factor relative to its neighbors. If there are
many more desirable nodes in the neighborhood, we exponentially reduce the proba-
bility of this node to forward the packet. That is, it is useless for this node to forward
the packet anymore since many other more desirable nodes have forwarded the packet.
In fact, if a node receives more than two duplicate packets during its waiting time,
it is most likely that this node will not forward the packet no matter what initial
value c takes. In this way, we can reduce the computation overhead by skipping the desirableness factor computation for such nodes.
From Equation 4.1, we can see that the optimal node does not defer forwarding at all: its desirableness factor is 0 and its distance to the forwarder is R, so T_adaptation = 0.
Lemma 4.1 If there exists an optimal path from the sender to the target, i.e., each
node in the path is the optimal node for its upstream node, then the self-adaptation algorithm selects exactly this optimal path.
An Example We illustrate VBF with self-adaptation in Figure 4.3. In this figure, the
forwarding path is specified as the routing vector S1 S0 from the source S1 to the sink
S0 . The node F is the current forwarder. There are three nodes namely, A, B and D
in its transmission range. Node A has the smallest desirableness factor among these
nodes. Therefore, A has the shortest delay time and sends out the packet first. As
shown in this figure, node B is most likely to discard the packet because it is in the
transmission range of A and has to re-evaluate the benefit of sending the packet. Node D, in contrast, is outside the transmission range of A and forwards the packet when its own timer expires.
The performance of VBF is sensitive to the forwarding pipe radius. If the radius
is set too small, it is most likely that there is no sensor node in some parts of the
forwarding pipe, in which case VBF cannot deliver the packets. On the other hand, if the radius
is set too large, VBF will consume much unnecessary energy on packet forwarding.
However, it is unrealistic to set a single radius that is appropriate for the whole network.
4.3.2 Hop-by-Hop Vector-Based Forwarding (HH-VBF)
In HH-VBF, we redefine the routing virtual pipe to be a per-hop virtual pipe, one per forwarder, instead of a unique pipe from the source to the sink. This hop-by-hop approach
increases the probability of finding a routing path in comparison with
VBF. Consider a node Ni which receives a packet from the source or a forwarder node
Sj . Upon receipt of the packet, the node computes the vector from the sender S j to
the sink. In this way, the forwarding pipe changes at each hop in the network, hence
the name hop-by-hop vector-based forwarding (HH-VBF). After a receiver com-
putes the vector from its sender to the sink, it calculates its distance to that vector. If
this distance is smaller than the predefined threshold then it is eligible to forward the
packet, and we refer to such a node as a candidate forwarder for the packet.
Each candidate forwarder sets a timer whose duration depends on the desirableness factor. The timer represents the time the node holds the
packet before forwarding it. We modify Definition 4.1 and get a new definition of the
desirableness factor for HH-VBF: the desirableness factor α′ of a node A is defined as

α′ = (R − d·cosθ)/R,

where d is the distance between node A and node F, θ is the angle between the vector FS0 and the vector FA, R is the transmission range, and S0 is the sink.
Recall that, due to the effective packet suppression strategy adopted in VBF, only a few paths
could be selected to forward packets. This may cause problems in sparse networks. To
enhance the packet delivery ratio in sparse networks, we introduce some controlled redundancy in HH-VBF.
In HH-VBF, when a node receives a packet, it first holds the packet for some time
period proportional to its desirableness factor (this is similar to VBF). Therefore, the
node with the smallest desirableness factor will send the packet first. In this
way, each node in the neighborhood may hear the same packet multiple times. HH-
VBF allows each node overhearing the duplicate packet transmissions to control the
forwarding of this packet as follows: the node calculates its distances to the various
vectors from the packet forwarders to the sink. If the minimum of these distances
is still larger than a predefined minimum distance threshold δ, this node will forward
the packet; otherwise, it simply drops the packet. Obviously, the bigger δ is, the more
nodes will be allowed to forward the packet. Thus, HH-VBF can control forwarding
redundancy by adjusting δ.
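The suppression rule can be sketched as follows (δ and the helper names are illustrative; the original symbol is lost in this copy):

```python
import math

def dist_to_vector(node, a, b):
    """Distance from `node` to the line through points a and b (3-D)."""
    ab = tuple(x - y for x, y in zip(b, a))
    an = tuple(x - y for x, y in zip(node, a))
    cr = (ab[1]*an[2] - ab[2]*an[1],
          ab[2]*an[0] - ab[0]*an[2],
          ab[0]*an[1] - ab[1]*an[0])
    return math.sqrt(sum(c * c for c in cr)) / math.sqrt(sum(c * c for c in ab))

def should_forward(node, heard_forwarders, sink, delta):
    """Forward only if the node is more than delta away from every
    forwarder-to-sink vector it has already overheard."""
    dists = [dist_to_vector(node, f, sink) for f in heard_forwarders]
    return min(dists) > delta
```

A larger δ suppresses fewer nodes and thus yields more redundant forwarding paths.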
Each node that qualifies as a candidate forwarder delays the packet forwarding by
an interval T_adaptation, which is computed in the same way as in VBF. Then each node still applies the suppression mechanism described above upon overhearing duplicate packets.
Compared with VBF, the major innovation of HH-VBF is the hop-by-hop approach.
Though the basic idea is simple, it can bring two significant benefits: (1) HH-VBF can
find more paths for data delivery in sparse networks; (2) HH-VBF is less sensitive to
the routing pipe radius (i.e., the distance threshold). Correspondingly, we have the following lemmas.
Lemma 4.2 Given the same routing pipe radius, if a packet is routable in VBF, then it is also routable in HH-VBF.
Proof 1 If we can show that any routing-involved node in VBF is also involved in
routing in HH-VBF, then we prove the lemma. Now, we assume that in HH-VBF a
node Ni is not involved in routing. This implies that in the network there is no path
leading from the source to Ni given the distance threshold. Thus, the source-to-sink
routing pipe does not cover node Ni, that is, Ni is not involved in routing in VBF either. Using the
contrapositive, any node involved in routing in VBF is also involved in routing in HH-VBF, which proves the lemma.
Lemma 4.3 The valid range of routing pipe radius of HH-VBF is [0, R], while the
valid range of VBF is [0, D], where R is the node transmission range, and D is the
network diameter (here we assume all nodes have the same transmission range).
Proof 2 In HH-VBF, each node makes packet forwarding decisions based on its dis-
tance to the vector from its forwarder to the sink. If the distance is smaller than the
predefined pipe radius, the node will forward the packet; otherwise it will discard the
packet. In this way, when the pipe radius is bigger than the transmission range of the
forwarder, those nodes which are outside the transmission range while still lie in the
routing pipe are useless since they cannot hear the packets from the forwarder. Thus,
the valid range of routing pipe radius of HH-VBF is [0, R], where R is the transmission
range.
In VBF, each node makes packet forwarding decisions based on its distance to the
vector from the source to the sink. When the pipe radius is bigger than the transmission
range, those nodes which are outside the transmission range of one forwarder while still
lie in the routing pipe may hear packets from other forwarders. This means that they
may be still eligible for packet forwarding. Thus, theoretically there is no upper limit
for the pipe radius of VBF, while in practice, the valid range of the routing pipe radius of VBF is [0, D], where D is the network diameter.
In VBF, the bigger the pipe radius, the higher successful data delivery ratio VBF
can achieve, and the more optimal the paths VBF can select. Thus, for networks with
different densities, a proper pipe radius should be carefully chosen. For HH-VBF,
from Lemma 4.3, we can see that the biggest value of the pipe radius is R, which
will clearly yield the highest successful data delivery ratio. Thus, in HH-VBF, we can
eliminate the trouble of tuning the pipe radius by simply choosing the transmission
range R.
4.3.3 Analysis
VBF is robust against packet loss and node failure in that VBF uses redundant paths to forward the data packets. Some of these paths are interleaved, some
are parallel. The forwarding paths are determined by the deployment of nodes. We use R
to denote the transmission range and W to denote the radius of the routing pipe. We assume that all the
nodes are deployed in layers, and adjacent layers are separated by R/2. For one layer,
if one node receives a packet, we assume that all the normal nodes will eventually
receive the data packet. This assumption is justified by the fact that all these nodes
can receive the data packets from nodes in lower layer or from the nodes in the same
layer. The probability of such an event is much higher than that of a successful delivery from
layer to layer. The routing pipe is a cylinder from the source to the sink dimensioned by
W and R. All the nodes inside the routing pipe are qualified forwarders. We use d
to denote the density of nodes, p_l to denote the loss probability of packets and p_e to
denote the failure probability of nodes. We use h to denote the number of hops. Since
there are h layers in the cylinder, the number of nodes in each layer is estimated as

N_l = πW²(R/2)hd / h = πW²Rd/2.

Therefore, the number of normal nodes at each layer is (1 − p_e)N_l. The transmission range of a node is a sphere
with radius R. When this node transmits a packet, all the nodes inside the sphere can
hear that packet. There are three layers of nodes inside the sphere. We estimate the
number of nodes in each such layer as n_t = (1/3)·(4/3)πR³d. For a normal sender, the
probability that its packet is received by the nodes in the upper layer can then be estimated.
The transmission fails only if all of its neighbors fail or the packet is lost on all the
links to these neighbors. Denoting by P_L the probability that a packet is transmitted successfully over one hop, the probability that it is delivered over h hops is

P = P_L^h.    (4.4)
From Equations 4.2, 4.3 and 4.4, we can see that when the density d increases, the
success probability increases. From these equations, we can also see that VBF is robust against the failures of links and nodes. For
example, if d = 1/1000 m^-3 (one node per 1000 m^3) and the pipe radius and transmission range are both 20 m, that
means the average number of nodes in the range of the sender is about 17, one-third
of which are valid forwarders, i.e., about 5. In such a case, if there is no packet loss, when the
node failure is 65%, the probability that a packet is forwarded 20 hops successfully is
90%. If there is no failed node, when the packet loss probability is 77%, the probability that a packet is forwarded 20 hops successfully is also about 90%.
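A simple instantiation of this layered model, assuming each of the n in-pipe neighbors independently either fails with probability p_e or loses the packet on its link with probability p_l (the thesis's exact Equations 4.2-4.3 are not reproduced here):

```python
def per_hop_success(n, p_e, p_l):
    """A hop fails only if every one of the n candidate forwarders either
    has failed (p_e) or loses the packet on its link (p_l)."""
    per_neighbor_fail = p_e + (1.0 - p_e) * p_l
    return 1.0 - per_neighbor_fail ** n

def multi_hop_success(h, n, p_e, p_l):
    """Eq. 4.4: P = P_L ** h."""
    return per_hop_success(n, p_e, p_l) ** h
```

Increasing either the number of candidate forwarders n or the density (which increases n) raises the end-to-end delivery probability.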
The self-adaptation algorithms in VBF and HH-VBF introduce extra delay in data
forwarding. The purpose of the delay is to differentiate the importance of the nodes in the neighborhood. By reducing
T_delay, we can reduce the end-to-end delay. However, given the purpose of the delay
time used by VBF and HH-VBF, T_delay must be set large enough. We give the lower bound for T_delay below.
Let N be the total number of nodes in the network and the deployment space be X × Y × Z.
The average inter-node distances along the three dimensions are x̄ = X/N^(1/3), ȳ = Y/N^(1/3) and z̄ = Z/N^(1/3). Let W be the radius of the routing pipe and R be
the transmission range. The average time for an acoustic signal to travel between two
neighbor nodes is T̄ = d̄/v_0, where d̄ is the average distance between two neighbor nodes and v_0 is the propagation speed of acoustic signals in
water. The delay time T_adaptation in the self-adaptation algorithm should be greater
than T̄. Let D = min{W, R}, and let Δα be the difference of the desirableness factors
of two adjacent nodes. It is easy to get Δα ≥ d̄/(2D). Therefore, we obtain the following
lemma.

Lemma 4.4 The lower bound for T_delay is (D − d̄)/(2v_0).
Even though the distance between neighbor nodes is not always exactly equal to
the average distance d̄, the probability that the distance d between two adjacent nodes
deviates far from d̄ is bounded by

Pr[d ≥ (1 + ε)d̄] ≤ e^(−ε²d̄/2).    (4.5)

That is, the probability that there is a large difference between d and d̄ is very small.
This justifies the lower bound for T_delay. In other words, it is useless to set T_delay smaller
than (D − d̄)/(2v_0) in the self-adaptation algorithms in VBF and HH-VBF.
We now discuss how the self-adaptation algorithms differentiate the importance of two adjacent nodes. For a forwarder, if the desirableness factors
of these two nodes approach 0, then these nodes are close to the best position of the
forwarder. This implies that these nodes are close to each other, and most of their
transmission spaces overlap; thus, the benefit for both nodes to forward the packet is
not justified. The self-adaptation algorithms attempt to differentiate the delay time of
these nodes to the extent such that the difference between their delay time periods is
large enough to allow the optimal node to suppress the other node. If the difference
of their desirableness factors is very small, we want such a small difference to cause a
large difference in their delay times. Suppose the delay time of a node is set as T(α) = Cα^K for a constant C. Then
dT(α)/dα = CKα^(K−1). (4.6)
It is easy to see that if K < 1, then lim α→0 dT(α)/dα = ∞. Therefore, any
0 < K < 1 satisfies our purpose. In our algorithms, we set K = 1/2 (please refer
to Equation 4.1). When we set the delay time for a node proportional to the square
root of the desirableness factor, we actually distinguish the nodes with slightly different desirableness factors effectively.
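A small sketch of why the square-root exponent (K = 1/2) separates nearly equal small factors better than a linear mapping; the constant C and the factor values are illustrative assumptions.

```python
import math

# Compare delay gaps under a linear (K = 1) and a square-root (K = 1/2)
# mapping of the desirableness factor; values are illustrative.
C = 1.0
a1, a2 = 0.01, 0.02  # two small, nearly equal desirableness factors

linear_gap = C * (a2 - a1)                      # K = 1
sqrt_gap = C * (math.sqrt(a2) - math.sqrt(a1))  # K = 1/2

# Near 0 the square-root curve is steep, so the gap is amplified and the
# better-placed node's shorter delay can suppress the other node.
print(round(linear_gap, 4), round(sqrt_gap, 4))
```

Here the linear gap is 0.01 while the square-root gap is roughly four times larger, which is exactly the amplification near zero that the unbounded derivative predicts.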
In this section, we evaluate the performance of VBF and HH-VBF through extensive
simulations. The underlying MAC is a broadcast MAC protocol: when a sender has packets to send, it first senses the channel; if
the channel is free, the sender broadcasts its packets; if the channel is busy, it backs off.
We first define the performance metrics and describe the simulation methodology.
We then evaluate how network parameters such as node density, node mobility, and routing pipe radius affect the performance.
Performance Metrics
We propose three metrics: success rate, energy cost and energy tax. Success rate
is defined as the ratio of the number of packets successfully received by the sink to
the number of packets generated by the source. Energy cost is measured by the total
energy consumption of all the nodes in the network. Energy tax is the energy cost per packet successfully delivered to the sink.
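The three metrics can be computed from raw simulation counters as below; the packet counts and energy total are made-up numbers for illustration.

```python
# Compute the three metrics from raw simulation counters (made-up values).
generated = 100           # packets generated by the source
received = 92             # packets received by the sink
total_energy_j = 20000.0  # total energy consumed by all nodes (J)

success_rate = received / generated
energy_cost = total_energy_j           # total energy consumption
energy_tax = energy_cost / received    # J per successfully delivered packet

print(success_rate, round(energy_tax, 2))
```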
Experimental Methodology
In all the simulation experiments described in this section, sensor nodes are ran-
domly distributed in a 3D field of 1000 m × 1000 m × 500 m. There are one data source
and one sink. The source is fixed at location (900, 900, 500) near one corner of the field
at the floor, while the sink is at location (100, 100, 0) near the opposite corner at the
surface. Besides the source and the sink, all other nodes are mobile as follows: they
can move in horizontal two-dimensional space, i.e., in the X-Y plane (which is the most
common mobility pattern for underwater sensor nodes): each node randomly selects a
destination and moves toward that destination. Once the node arrives at the desti-
nation, it randomly selects a new destination and moves in a new direction. In order
to avoid packet transmission interference, we limit the sending rate to one packet per
10 seconds, which is low enough to avoid interference caused by two continuous pack-
ets. For each test, the results are averaged over 100 runs, with a randomly generated
topology in each run. The total simulation time for each run is 1000 seconds.
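The horizontal random-waypoint motion used in these experiments can be sketched as follows; the field size and speed match the setup above, while the helper function and step interval are assumptions for illustration.

```python
import math
import random

# Sketch of the horizontal (X-Y plane) random-waypoint motion described
# above; a node moves toward its destination and, on arrival, picks a new one.
FIELD_X, FIELD_Y = 1000.0, 1000.0

def step(pos, dest, speed, dt, rng=random):
    """Advance `pos` toward `dest` by speed*dt meters; new waypoint on arrival."""
    dx, dy = dest[0] - pos[0], dest[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:  # arrived: choose a new random destination
        return dest, (rng.uniform(0.0, FIELD_X), rng.uniform(0.0, FIELD_Y))
    f = speed * dt / dist
    return (pos[0] + f * dx, pos[1] + f * dy), dest

# One 10 s step at 1.5 m/s moves a node 15 m toward its destination,
# so pos ends up approximately at (515.0, 500.0).
pos, dest = step((500.0, 500.0), (600.0, 500.0), speed=1.5, dt=10.0)
print(pos, dest)
```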
We first investigate the impact of node density and mobility. In this set of experi-
ments, all the mobile nodes have the same speed. We vary the mobility speed of each
node from 0 m/s to 3 m/s and the number of nodes from 500 to 4000. The simulation results are plotted in Figure 4.4 and Figure 4.5.
Figure 4.4 shows the success rate as the function of the number of nodes and the
speed of nodes. When the node density is low, the success rate increases with density.
However, when more than 4000 nodes are deployed in the space, the success rate
remains above 90%. The success rate decreases slightly when the nodes are mobile.
Figure 4.5 depicts the energy cost as the number of nodes and the speed of nodes
vary. The energy cost increases when the number of nodes increases since more nodes
are involved in packet forwarding. For the same number of nodes in the network, this
figure also shows that the energy cost in static networks is slightly less than that in
Figure 4.4: Impact of density & mobility on success rate. Figure 4.5: Impact of density & mobility on energy cost.
dynamic networks. However, the energy cost remains relatively stable as we vary the node speed.
This set of simulation experiments has shown that in VBF, node speed has some
impact on success rate and energy cost, but not significantly. It demonstrates that VBF handles node mobility effectively.
We test the impact of the routing pipe radius (i.e., the distance threshold) in this
set of simulations. There are 2000 nodes in the network, and their speed is fixed at
1.5 m/s. We vary the radius from 0 meters to 200 meters. The results are shown in Figure 4.6 and Figure 4.7.
From Figure 4.6, we can see that the success rate increases as the radius increases;
meanwhile, as shown in Figure 4.7, more energy is consumed because more qualified
nodes forward the packets. The curve in Figure 4.6 becomes flat when the radius
exceeds 150 meters. This is caused by the topology of the network and the position
of the sink. The sink is located at the corner of a cube. It does not help to improve
Figure 4.6: Success rate vs. routing pipe radius. Figure 4.7: Energy consumption vs. routing pipe radius.
the success rate further once the radius exceeds some threshold, since there are no nodes in the part of the enlarged pipe that lies outside the deployment field.
As shown in the above figures, the routing pipe radius does affect the given metrics
greatly. In short, the bigger the radius, the higher the success rate VBF can achieve, the
more energy VBF consumes, and the more likely VBF selects the optimal path.
In this set of simulations, we compare two versions of VBF: one armed with the self-adaptation algorithm, and the other without it.
In these experiments, the speed of each node is fixed at 1.5 m/s, and the routing pipe radius is
fixed at 100 m. The results are shown in Figure 4.8, and Figure 4.9.
From Figure 4.8, we can see that even in a sparse network, VBF with the self-adaptation
algorithm spends only half as much energy as the one without self-adaptation algorithm.
When the number of nodes increases, the difference between these two curves tends to
increase, indicating that the self-adaptation algorithm can save more energy when the network gets denser.
Figure 4.8: The effect of the self-adaptation algorithm on energy consumption. Figure 4.9: The effect of the self-adaptation algorithm on success rate.
As shown in Figure 4.9, the success rate of VBF with self-adaptation is slightly
less than the one without self-adaptation. However, the difference between these two
curves tends to dwindle as the number of nodes increases. With more than 1000 nodes
in the network, the difference is less than 5%. This result shows that the side effect of the self-adaptation algorithm on the success rate is small.
The results from this set of simulations show that the self-adaptation algorithm can
save energy effectively, especially for dense networks. Even though the self-adaptation
algorithm achieves this goal by introducing extra end-to-end delay and slightly reducing
success rate, the success rate reduction is less than 10% in the sparse network case and
the extra end-to-end delay is also limited. Furthermore, these side effects tend to diminish as the network gets denser.
In this set of simulations, we evaluate the robustness of VBF against packet loss (or
channel error) and node failure. In the experiments, the number of nodes is fixed at
1000, the radius is set to 100 m and the speed of nodes is set to 0 m/s. In order to increase
the density of the node deployment, we set the space to 500 m × 500 m × 500 m. The
source and the sink are located at (250, 250, 0) and (250, 250, 500) respectively.
The simulation results are shown in Figure 4.10. The x-axis is the error probability,
which has different meanings for the two curves. For the packet loss curve, the node failure probability is set to 0 and the x-axis
is the packet loss probability. For the node failure curve, the packet loss is fixed at 0 and the
x-axis is node failure probability. From this figure, we can see that VBF is robust
against both packet loss and node failure. When the packet loss is as high as 50%,
the success rate can still reach 90%. We also observe that VBF is more robust against
packet loss since the packet in VBF is forwarded in interleaved forward paths. If a
node does not receive a packet from one forwarding node, this node still has chance to
receive the same packet from another forwarding node since the forwarding paths in VBF interleave.
Figure 4.10: Success rate of VBF vs. error probability under packet loss and node failure.
Impact of Node Density
In this set of simulations, we examine the impact of node density. We fix the node
speed at 0 (i.e., static networks), and change node density by varying the number of
nodes deployed in the field from 500 to 3000. The results for success rate, energy cost
and energy tax are plotted in Figure 4.11, Figure 4.12, and Figure 4.13 respectively.
From Figure 4.11, we can clearly observe the general trend of success rate for both
VBF and HH-VBF: with the increasing node density, the success rate is enhanced.
This is intuitive: for any node in the network, as the network density becomes larger,
more nodes will fall in its routing pipe (with the radius fixed at the transmission range).
In other words, more nodes are qualified for packet forwarding, which naturally leads to a
higher success rate. Further, we can see that the success rate of HH-VBF is significantly
improved upon VBF, especially when the network is sparse. This observation is consis-
tent with our early analysis: HH-VBF can find more paths for data delivery in sparse
networks.
Figure 4.12 shows us that the energy cost of HH-VBF is higher than that of VBF,
and the gap becomes more significant as the network gets denser. This is reasonable
as the higher the node density, the more paths HH-VBF can find. We normalize the
energy consumption, i.e., compute the energy tax, and the results are illustrated in
Figure 4.13. From this figure, we can observe that when the network is sparse, the
normalized energy cost of HH-VBF is much lower than that of VBF. For example,
when the number of nodes is 1000, the energy tax of HH-VBF is 226 J/pkt, while
the energy overhead of VBF is as high as 4919 J/pkt. This is mainly because the
data delivery ratio of VBF is extremely low (2% when the network size is 1000). This
further confirms that VBF is not good for sparse networks. On the other hand, when
the network gets denser, VBF shows its advantage over HH-VBF: HH-VBF still tends
to find more paths, while the delivery ratio has reached the maximum. In this case,
more paths do not help to increase the success rate, but more energy cost will be
introduced.
Figure 4.11: Success rate vs. node density. Figure 4.12: Energy cost vs. node density.
Figure 4.13: Energy tax vs. node density. Figure 4.14: Success rate vs. node speed.
Figure 4.15: Energy cost vs. node speed. Figure 4.16: Energy tax vs. node speed.
Impact of Node Mobility
In this set of simulations, we explore how node mobility impacts the performance
of HH-VBF. We fix the network size at 1000 (a relatively sparse network), and vary
the node speed from 0 to 3 m/s. Figure 4.14, Figure 4.15, and Figure 4.16 plot the results for success rate, energy cost and energy tax, respectively.
From Figure 4.14, we can observe that the node mobility has different effects on
the success rate of VBF and HH-VBF when the node speed is low. By conducting
many additional simulation experiments, we find this is mainly due to the randomness
of network topology generation. For VBF, when the node pattern changes from static
to mobile, the mobility actually helps to increase the chance that non-connected
paths become connected; for HH-VBF, since there are more routing pipes in
the network, light node mobility reduces the chance that non-connected paths become
connected. Furthermore, from Figure 4.14, we can see that as the node speed
gets higher, the success rate of both VBF and HH-VBF becomes stable. This indirectly
confirms that experiencing more topologies helps eliminate the difference caused by the randomness of topology generation.
Figure 4.14, Figure 4.15 and Figure 4.16 together convey the major information:
both HH-VBF and VBF are robust to node mobility, while HH-VBF has much bet-
ter performance (in terms of both success rate and energy tax) than VBF in sparse
networks.
4.3.4.7 Summary
We have evaluated VBF and HH-VBF through extensive simulations of networks where almost all the nodes move at small or medium speed (1-3 m/s). The results show
that VBF addresses the node mobility issue effectively and efficiently. In addition, these
results also show that the self-adaptation algorithm contributes significantly to saving energy.
Moreover, the simulation results show that VBF is robust against node failure and
channel error. Additionally, our simulation results also show that HH-VBF significantly improves upon VBF in sparse networks.
Both VBF and HH-VBF are geographic routing protocols which exploit the position
information of the sensor nodes to determine the forwarding paths for the data packets.
However, the greedy policy used to select next hop in geographic routing protocols is not
always feasible. For example, in VBF and HH-VBF, a forwarding node always forwards
the packets to nodes with smaller desirableness factors. However, the forwarding node
possibly has the smallest desirableness factor in its neighborhood. This phenomenon
is called the void problem. As introduced in Section 4.1, the void problem in M-UWSNs
is most challenging in that the voids are 3-dimensional objects, volatile and mobile.
VBVA has the same assumptions as VBF, i.e., the node can overhear the transmis-
sion of its neighbors. By the nature of broadcast in underwater sensor networks, this is
easily satisfied. Same as VBF, in VBVA, the information about the forwarding vector
of a packet is carried in the packet. In VBVA, the radius of the forwarding pipe is set
to the transmission range of the sensor nodes in the network. If there is no void in
the forwarding path, VBVA behaves the same as VBF. However, VBVA significantly
differs from VBF in that VBVA potentially has multiple forwarding vectors for one data packet.
4.4.1 VBVA Protocol Design
Like VBF, a node in VBVA needs to estimate its local topology by overhearing
the transmission of a data packet. However, due to the long propagation delays of
acoustic signal and the extra delay introduced by VBF, the time period that a node
has to listen to the channel must be long enough for the node to get an accurate
estimation of its local topology. We define this time period as void-avoidance delay.
The void-avoidance delay should be set long enough to leave room for the propagation
of the packet back and forth from the next hop and the time needed by the nodes
to run the self-center adaptation algorithm (described in Section 4.4.1.4). The lower bound for void-avoidance delay
is 2 Tm + 3Tdelay , where Tm is the maximum possible propagation delay, i.e., the
transmission range divided by the propagation speed of acoustic signal in water (1500
m/s) and Tdelay is the maximum delay window defined in Section 4.3.1.3. In VBVA, the void-avoidance delay is set to this lower bound.
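For concreteness, the bound 2·Tm + 3·Tdelay can be evaluated with the acoustic parameters of this chapter; the Tdelay value below is an assumed maximum delay window, not a value from the thesis.

```python
# Evaluate the void-avoidance delay lower bound 2*Tm + 3*Tdelay.
R = 100.0      # transmission range (m)
v0 = 1500.0    # propagation speed of acoustic signal in water (m/s)
Tm = R / v0    # maximum one-hop propagation delay (s)
Tdelay = 0.5   # assumed maximum delay window (s)

void_avoidance_delay = 2.0 * Tm + 3.0 * Tdelay
print(round(Tm, 4), round(void_avoidance_delay, 4))
```

Even with a modest delay window, the bound is dominated by the 3·Tdelay term rather than by acoustic propagation.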
VBVA detects the voids by overhearing the transmission of the packet from the
node in next hop. When a node overhears the transmission of a data packet, the node
records the position information of the forwarding nodes. In the following, we denote
the start point and end point of the forwarding vector as S and D respectively. For
any node N, we define the advance of node N on the forwarding vector of the packet
as the projection of the vector SN on the forwarding vector SD. We call a node a
void node if all the advances of its neighbors on the forwarding vector of some packet
are less than its own advance. An example is shown in Figure 4.17: the forwarding
vector of a packet is SD, the advances of nodes B, C and F on the forwarding vector
are denoted as AB , AC and AF , respectively. As shown in Figure 4.17, all the neighbors
of node F have less advances than F on the forwarding vector SD, thus node F is a
void node. In VBVA, if a node finds out that it has the largest advance on the current
forwarding vector among all the nodes it can overhear, the node concludes that it is a
A AF
C
S D
A
B
A void node is essentially the edge node surrounding a void in the forwarding
direction of a packet and cannot forward the packet any further in that direction.
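The advance-based void-node test described above can be sketched as follows; the coordinates are illustrative, and the helper names are assumptions rather than the thesis's implementation.

```python
# Sketch of the void-node test: a node is a void node if every neighbor's
# advance (projection on the forwarding vector SD) is less than its own.

def advance(S, D, N):
    """Projection length of vector SN onto the forwarding vector SD."""
    sd = tuple(d - s for s, d in zip(S, D))
    sn = tuple(n - s for s, n in zip(S, N))
    sd_len = sum(c * c for c in sd) ** 0.5
    return sum(a * b for a, b in zip(sn, sd)) / sd_len

def is_void_node(S, D, node, neighbors):
    a = advance(S, D, node)
    return all(advance(S, D, nb) < a for nb in neighbors)

S, D = (0.0, 0.0, 0.0), (1000.0, 0.0, 0.0)
F = (400.0, 50.0, 0.0)                              # candidate forwarder
neighbors = [(350.0, -60.0, 0.0), (320.0, 80.0, 0.0)]
print(is_void_node(S, D, F, neighbors))  # prints: True (all neighbors behind F)
```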
When a node determines it is a void node, the node bypasses the void by changing
the forwarding vector to find a detour. In VBVA, the node broadcasts a control packet
to all its neighbors. Upon receiving this type of control packet, all the nodes outside
the current forwarding pipe forward the packet based on a new forwarding vector from
themselves to the target. This process is called vector-shift. In this case, we say the
void nodes shift the forwarding vector. All the control packets related to vector-shift are called shift control packets.
Due to the lack of global topology information, the direction defined by the for-
warding vector of a packet in VBVA is possibly a wrong direction. VBVA corrects the
wrong direction by allowing the data packet to be routed in the direction opposite to
the forwarding vector of the data packet, thus, routing the packet backward. We call
this process back-pressure. All the control packets used in back-pressure are called backward control packets.
The examples for vector-shift and back-pressure are shown in Figure 4.18 and Fig-
ure 4.19. In Figure 4.18, the dashed area is the void area, node S is the sender and
node T is the target node. When node S forwards the packet with forwarding vector
ST , node S keeps listening the channel for some time period. If node S does not over-
hear any transmission of the same packet, node S shifts the current forwarding vector
of the packet to DT and AT as shown in Figure 4.18. Nodes D and A repeat the
same process. The arrowed lines in Figure 4.18 are the forwarding vectors used by the
forwarding nodes. From this figure, we can see that if the void area is convex, it can
be bypassed by the vector-shift mechanism alone.
Figure 4.19 shows an example of back-pressure. The dashed area is a concave
void. Node S is the sender and node T is the target. Node S forwards the
packet with forwarding vector ST to node C. If node C cannot forward the packet
along the vector ST , node C first shifts the forwarding vector of the packet. If node
C still can not overhear the transmission of the packet, then node C forwards the
data packet back to node B. Node B first shifts the forwarding vector of the packet,
if it fails, node B forwards the data packet back to node A. Finally, the data packet
is routed from node A to the source S. Node S then shifts the forwarding vector of
the packet. The packet is then forwarded to the target by vector-shift method from
nodes H and D as shown in Figure 4.19. In this figure, the dashed arrowed lines are
the forwarding paths used by VBF, the thick black arrowed lines are the paths used
by back-pressure mechanism and the arrowed lines are the forwarding vectors used by
vector-shift mechanism.
Figure 4.18: An example of vector-shift. Figure 4.19: An example of back-pressure.
In VBVA, each node keeps two tables, transmission-status table and next-hop-
status table. The transmission-status table is used to record the transmission status of
the packets, and the next-hop-status table is used to record the status of the nodes in
the next hop. All the items in these two tables are kept only for a short time period and then deleted.
In VBVA, in addition to data packet, there are two types of control packets, shift
control packets and backward control packets. The shift control packets include VS
and VSD; the backward control packets include EPN, EPND, and BP. VS and
EPN packets are small control packets, which carry the id of the forwarding node and
the sequence number of the data packet. VSD, EPND and BP are the large control packets, which also carry the data packet itself.
VS and VSD are used to shift the forwarding vector of a data packet. The only
difference between these two types of packets is the scenario in which they are used. If the
vector-shift of a packet happens immediately after the forwarding of the packet, the
void node broadcasts a VS to shift the forwarding vector of the data packet, which reduces
the transmission energy, because all the nodes in VBVA are required to keep the data
packet for a short time period when they receive it. Upon receiving a VS, the neighbors
of the void node can retrieve the original data packet based on the sequence number
carried in VS, change its forwarding vector and forward it. However, in back-pressure,
when a node needs to shift the forwarding vector, i.e., the node receives a BP and never
shifts the forwarding vector of the corresponding data packet, the node uses VSD to
shift the forwarding vector. Since the neighbors of the node possibly do not keep the
copies of the corresponding data packet at this time, the node needs to provide the
data packet.
EPN and EPND are used in the back-pressure mechanism. When the direction from the source to the target leads to a dead end, the data packet has
to be withdrawn back toward the source in the direction opposite to the target; at each step of the
retreat, the nodes attempt to forward the data packet to the target. We call these
nodes (including the source) the first forwarding nodes. The first forwarding nodes
use EPN or EPND in back-pressure mechanism, whereas, the nodes located between
the first forwarding node and the target use BP to route the data packet back to the
first forwarding node. The difference between EPN and EPND is similar to that between VS and
VSD: they are used in different scenarios, i.e., when a first forwarding node finds out
that it is a void node immediately after forwarding the data packet, it uses an EPN; otherwise, it uses an EPND.
Each node records the transmission status of each packet in its transmission-status table. If a node forwards a packet normally, the transmission status of the packet on this node
is FORWARDED. However, if the node forwards a packet upon receiving the cor-
responding EPN or EPND, the transmission status of the packet on this node is
CENTER-FORWARDED. Only the first forwarding nodes can possibly have the CENTER-FORWARDED
transmission status; these nodes can potentially route the packet from
the source along the direction opposite to the target. If a node drops a packet due
to the duplication suppression, this node marks the transmission status of the packet
as SUPPRESSED. If a node floods a packet, the transmission status of the packet in this node is FLOODED. If a node broadcasts a backward
control packet, then the transmission status of the corresponding data packet is TERMINATED,
which means that this node will not process any packets related to this
data packet any longer. When the source sends a data packet, it proceeds as follows:
1. The source first broadcasts the packet, and sets the timer to the void-avoidance
delay.
2. The source listens to the channel and records the position information and id of the forwarding nodes in its next-hop-status table.
3. After the timer expires, the source determines if it is a void node by checking its
next-hop-status table. If so, the source broadcasts an EPN, and marks the transmission
status of the packet as TERMINATED. Otherwise, the source sets the
transmission status of the packet as FORWARDED in its transmission-status table.
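The per-packet transmission statuses described above can be sketched as a small state table; the Python types and helper below are illustrative, not the thesis's implementation.

```python
from enum import Enum

# Per-packet transmission statuses kept in the transmission-status table.
class TxStatus(Enum):
    FORWARDED = 1          # forwarded normally
    CENTER_FORWARDED = 2   # forwarded upon receiving an EPN/EPND
    SUPPRESSED = 3         # dropped by duplication suppression
    FLOODED = 4            # handled by flooding
    TERMINATED = 5         # a backward control packet was broadcast

transmission_status = {}  # sequence number -> TxStatus

def should_ignore(seq):
    """A TERMINATED packet is not processed any further on this node."""
    return transmission_status.get(seq) is TxStatus.TERMINATED

transmission_status[7] = TxStatus.TERMINATED
print(should_ignore(7), should_ignore(8))  # prints: True False
```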
Upon receiving a data packet, the node checks whether it has received this packet
before, i.e., whether the next-hop-status table and transmission-status table are not
empty for the packet. If so, this node ignores the packet. Otherwise, the node treats the packet as a new one and processes it as follows:
1. The node first copies the packet into its buffer, keeps it for the void-avoidance delay period, and calculates its distance to the forwarding vector of the packet.
2. If the node is inside the forwarding pipe, the node runs the self-adaptation algorithm
in VBF [109] to determine whether to forward the data packet. If the node
decides to drop the packet, it marks the transmission status of the packet as
SUPPRESSED.
3. If the node forwards the packet, it marks the transmission status of the data
packet as FORWARDED. The node then sets the timer to the void-avoidance delay and listens to the channel.
4. During this time period, if the node overhears the transmission of the data packet,
this node records the position information and id of the forwarding nodes in the
next-hop-status table.
5. After the timer expires, the node checks its next-hop-status table to determine
if it is a void node. If not, this node deletes the data packet from its buffer.
Otherwise,
6. The node broadcasts a VS, updates the transmission status of the packet, and sets the timer to the void-avoidance delay again.
7. After the timer expires, this node checks if it shifts the forwarding vector suc-
cessfully, i.e., it checks if it overhears the transmission of the data packet where
the forwarding vector of the data packet starts from the forwarding node itself.
If not, the node concludes that it cannot shift the forwarding vector of the data
packet, it then broadcasts a BP and marks the transmission status of the data
packet as TERMINATED.
Upon receiving a VS, the node processes it as follows:
1. The node checks if there is a corresponding data packet in its buffer. If the
node can not find the corresponding data packet, the node ignores the VS packet.
Otherwise,
2. The node checks if it is outside of the current forwarding pipe. If not, the node ignores the VS. Otherwise,
3. The node changes the forwarding vector of the data packet to the one from itself
to the target, records the transmission status of the packet as FORWARDED, and forwards the data packet.
VBVA processes VSD the same way as VS except that the receiving node does not
need to check its buffer for the data packet; instead, the receiving node can extract the data packet directly from the VSD.
Upon receiving an EPN, the node:
1. checks if there is a corresponding data packet in its data buffer. If not, drops the
packet, otherwise,
2. changes the forwarding vector to the one from itself to the target, updates the transmission status of the packet, and forwards the packet.
3. sets the timer to the void-avoidance delay, then listens to the channel and records the position information and id of the forwarding nodes in its next-hop-status table.
4. After the void-avoidance delay, if the node finds out that it is a void node, it broadcasts
an EPN. Otherwise, the node sets the transmission status of the packet as CENTER-FORWARDED.
VBVA processes an EPND the same way as an EPN except that the receiving node extracts the data packet directly from the EPND. Upon receiving a BP, the node processes it as follows:
1. If the transmission status of the packet is FLOODED, this node generates and
broadcasts a BP packet and marks the transmission status of the data packet as
TERMINATED.
2. If the transmission status of the packet is TERMINATED, the node ignores the
packet.
3. If the transmission status of the packet is FORWARDED or CENTER-FORWARDED, the node first marks the sender of the BP in its next-hop-status table as DEAD.
4. If all the nodes in the next hop are marked as DEAD, then the node
(a) generates and broadcasts a VSD packet if the transmission status of the
packet is FORWARDED.
(b) generates and broadcasts an EPND packet if the transmission status of the
packet is CENTER-FORWARDED.
From the protocol, we can see that if a node receives a VS, VSD, EPN, or an
EPND packet, then the node potentially can forward the data packet with a new
forwarding vector from itself to the target. Therefore, it probably results in several
forwarding vectors for one data packet and thus wastes energy. We propose a self-center
adaptation algorithm for VBVA to enable each node to weigh the gain of initiating a new
forwarding vector and to drop the low-benefit ones. We call the sender of VSs, VSDs, EPNs and EPNDs the requester.
In VBVA, when a node is qualified to initiate a new vector from itself to the target, the node delays the process for some time period, called self-center delay, which is
determined by the positions of the receiving node and the requester. After self-center
delay, if the node does not overhear any transmission of the same packet, it means that
the node is at the best position to initiate a new forwarding vector. Then this node
forwards the packet with a new forwarding vector from itself to the target. Upon over-
hearing the transmission, other nodes stop their processes to initiate a new forwarding
vector and mark the transmission status of the data packet as SUPPRESSED. We
define the self-center factor β of a node as
β = (R − D)/R + (R − R cos θ)/R. (4.7)
where R is the maximum transmission range and D is the distance between the node
and the vector from the requester to the target, and θ is the angle between the vector from
the requester to the target and the vector from the requester to the node. Basically, the self-
center factor is used to evaluate the position of the receiving node, i.e., if the receiving
node is closer to the target and farther from the requester, then the self-center factor
of the receiving node is smaller. When a node is a void node, there exists a void
between the node and the target. VBVA attempts to avoid the void by shifting the
forwarding vector away from the void node as far as possible, meanwhile, close to the
target as much as possible. The self-center delay is defined as √β · Tdelay , where Tdelay
is the maximum delay window. The mathematical rationale to use square root is to
maximize the difference of self-center delays between two nodes when their self-center
factors are close, and minimize the difference when these two nodes have significantly
different self-center factors, which is the same as the analysis discussed in Section 4.3.3.
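A sketch of the self-center computation, under the assumption that the factor combines a distance term (R − D)/R with an angular term (R − R cos θ)/R and that the delay scales with its square root; the node geometries and Tdelay value are made up for illustration.

```python
import math

# Illustrative self-center factor and delay; R is the transmission range,
# D the node's distance to the requester-to-target vector, theta the angle
# between that vector and the requester-to-node vector.
def self_center_factor(R, D, theta):
    return (R - D) / R + (R - R * math.cos(theta)) / R

def self_center_delay(factor, t_delay):
    return math.sqrt(factor) * t_delay  # square root, as in Section 4.3.3

R, t_delay = 100.0, 0.5
# Node A: far from the old vector, nearly aligned with the target direction.
beta_a = self_center_factor(R, D=80.0, theta=math.radians(10))
# Node B: close to the old vector, well off the target direction.
beta_b = self_center_factor(R, D=20.0, theta=math.radians(60))
assert beta_a < beta_b  # A is better placed, so A gets the shorter delay
print(round(self_center_delay(beta_a, t_delay), 3),
      round(self_center_delay(beta_b, t_delay), 3))
```

The better-placed node times out first and forwards; the other node overhears the transmission and suppresses its own attempt.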
As shown in Figure 4.20, node A and node B receive VSs from forwarding node F, and
node T is the target node. Since the self-center factor of node A is smaller than that
of node B, node A would forward the data packet with a new forwarding vector AT ,
once node B overhears the transmission of the data packet, node B just drops its VS.
Figure 4.20: An example of the self-center adaptation algorithm.
In VBVA, once a node finds a void, the node first avoids the void by vector-shift
method. If the vector-shift method fails, then the node adopts back-pressure to route
the packet backward. If the void is convex, the packet is actually forwarded by the nodes
on the boundary of the void to the target. However, if the void is concave, vector-shift
can not always forward the packet in the direction defined by the forwarding vector.
In this case, the back-pressure allows the data packet to be routed backward several
hops until it can be forwarded again. VBVA handles the void problem on demand. It
means that VBVA is robust against mobile and dynamic voids since VBVA does not
require any topology information before forwarding a packet. On the other hand, the
mobility of the nodes in the networks affects the performance of VBVA. If the void can
be bypassed by vector-shift, a stable topology is not necessary in VBVA. However, when VBVA has to retreat to avoid a concave void,
BP packet can only be routed in the reverse order of the corresponding data packet.
This means that if the forwarding nodes of the data packet keep their relative positions
the same as when they forwarded the data packet, BP packet can be routed back to
one of the forwarding nodes, which then forwards the corresponding data packet with
a new forwarding vector to the target. Since the nodes in the networks move at low or
medium speed (0-3 m/s), it takes a longer time for the forwarding nodes to change their
relative positions than to route the BP packet back; hence, back-pressure works in
most cases. The simulation results in Section 4.4.3 confirm our conjecture.
4.4.2 Analysis
VBVA is very capable of delivering packets from the source to the target. If the
underlying MAC is collision free and the topology of the network is stable during the
time period of the packet delivery, VBVA is guaranteed to find a path from the source to the target, if one exists.
Theorem 4.1 If the underlying MAC is collision free and the topology of the networks
is stable during the data delivery, VBVA without any duplication suppression can deliver the packet from the source to the target if there exists a path between them.
Before we prove Theorem 4.1, we define the segment of a forwarding path. A
forwarding path is an acyclic sequence of nodes: the first node is the data source and the
last one is the target. A segment is a maximal part of the sequence of nodes such that
all the nodes are in the forwarding pipe defined by the first node of the sequence and
the target node. For example, as shown in Figure 4.21, S1 S2 . . . S8 is a path from the
source S1 to the target S8 , where only adjacent nodes are in the transmission range
of each other. It is easy to see that the minimum number of segments is 1 and the maximum number of segments in a
path is bounded by the number of nodes in the path.
Figure 4.21: An example of segments of a forwarding path.
We prove, by induction on the number of segments, that if there exists only one path from the source to the target in the
network, VBVA can find this path.
We denote the acyclic path between the source and the target as P 1 P2 . . . Pn , where
P1 and Pn are the source and target, respectively. First, it is easy to see that if the
number of segments is 1, then the path lies in the forwarding pipe defined by the vector
from the source to the target, and VBVA definitely finds this path. Assume that when the number of segments in the
forwarding path is less than k, VBVA will find the path. We now assume the path,
P1 P2 . . . Pn , has k segments. Let P1 P2 . . . Pi be the nodes in the first segment. There are two cases:
1. P1 , P2 , . . . , Pi are the nodes in the forwarding pipe defined by the vector from
the source to the target. Therefore, P i will receives the packet. There are two
possibilities.
(a) Node Pi is a void node. If node Pi is the void node, by VBVA, Pi sends a
VS or an EPN. Therefore, node Pi+1 forwards the packet with the forward-
ing vector started from itself to the target. Since the number of segments
Pi+1 Pi+2 . . . Pn is k 1, by our hypothesis, VBVA can route the packet from
(b) Node Pi is not a void node, that means, there are still nodes in the forwarding
pipe defined by P1 Pn . Since there is no path to the target in the direction,
forwarding vector to node Pi+1 . Node Pi+1 then forwards the packet with
the forwarding vector started from itself to the target. Since the number of
2. P1, P2, . . . , Pi are the nodes in the half space opposite to the vector from the source to the target. By VBVA, the packet is first forwarded along the back-pressure vectors until it reaches node Pi+1. Node Pi+1 then forwards the packet with the forwarding vector started from itself to the target. Since the number of segments in Pi+1 Pi+2 . . . Pn is k - 1, by our hypothesis, VBVA can route the packet from Pi+1 to the target.
Therefore, we can see that if there exists only one path in the topology, VBVA can find
this path.
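The segment notion used in the proof can be made concrete. Below is a minimal 2-D sketch (the helper names are ours, and the forwarding pipe is simplified to a band of the given radius around the line from a segment's first node to the target):

```python
import math

def dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2-D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def count_segments(path, target, radius):
    """Split a forward path into maximal runs ("segments") whose nodes all lie
    in the pipe defined by the run's first node and the target."""
    segments, i = 0, 0
    while i < len(path):
        start = path[i]  # first node of the current segment
        j = i
        while j < len(path) and dist_to_line(path[j], start, target) <= radius:
            j += 1
        segments += 1
        i = j
    return segments
```

A straight path inside one pipe yields a single segment, while a path that detours out of the pipe (as in the induction step of the proof) yields several.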
The underlying MAC used by flooding, VBF and VBVA is a broadcast MAC protocol. In this broadcast MAC, if a node has data to send, it first senses the channel. If the channel is free, the node broadcasts the packet; otherwise, it backs off.
Parameter Settings
In all our simulations, we set the parameters similar to UWM1000 [55]. The bit rate is 10 kbps, and the transmission range is 100 meters. The energy consumption in sending mode, receiving mode and idle mode is 2 W, 0.75 W and 8 mW, respectively. The size of the data packet and of the large control packet for VBF and VBVA in the simulation is set to 50 bytes. The size of the small control packet for VBVA is set to 5 bytes. The pipe radius in both VBVA and VBF is set to 100 meters.
In all the simulation experiments described in this section, sensor nodes are randomly distributed in a space of 1000 m × 1000 m × 500 m. There are one data source and one sink. The source sends one data packet per 10 seconds. For each setting, the results are averaged over 100 runs with randomly generated topologies.
Performance Metrics
We examine three metrics: success rate, energy cost and energy tax. The success
rate is the ratio of the number of packets successfully received by the sink to the number
of packets generated by the source. The energy cost is the total energy consumption
of the whole network. The energy tax is the average energy cost for each successfully
received packet.
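The three metrics can be stated directly in code; a trivial sketch with a hypothetical helper name:

```python
def metrics(received, generated, total_energy_j):
    """Compute the three evaluation metrics used in this section.
    received:  packets successfully received by the sink
    generated: packets generated by the source
    total_energy_j: total energy consumed by the whole network (Joules)"""
    success_rate = received / generated
    energy_cost = total_energy_j                # total network energy (J)
    energy_tax = total_energy_j / received      # J per successfully delivered packet
    return success_rate, energy_cost, energy_tax
```

For example, 90 of 100 packets delivered at a total cost of 1800 J gives a success rate of 0.9 and an energy tax of 20 J/pkt.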
In this simulation setting, the source is fixed at (500, 1000, 250), while the sink is located at (500, 0, 250). Besides the source and the sink, all other nodes are mobile as follows: they move in horizontal two-dimensional space, i.e., in the X-Y plane (which is the most common mobility pattern in underwater applications). Each node randomly selects a destination and a speed in the range of 0-3 m/s, and moves toward that destination. Once the node arrives at the destination, it randomly selects a new destination and speed, and moves in the new direction. We compare the success rates of flooding, VBF and VBVA under this mobility pattern.
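The mobility model just described is a 2-D random-waypoint scheme; a minimal per-step sketch (function and parameter names are ours):

```python
import math
import random

def random_waypoint_step(pos, dest, speed, dt, area=(1000.0, 1000.0), vmax=3.0):
    """One step of the 2-D random-waypoint model: move toward dest at the given
    speed; on arrival, pick a new destination and a new speed in [0, vmax]."""
    x, y = pos
    dx, dy = dest[0] - x, dest[1] - y
    d = math.hypot(dx, dy)
    if d <= speed * dt:                     # arrived: choose a new waypoint
        pos = dest
        dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        speed = random.uniform(0.0, vmax)
    else:                                   # move dt seconds toward dest
        pos = (x + dx / d * speed * dt, y + dy / d * speed * dt)
    return pos, dest, speed
```

A node at (0, 0) heading to (10, 0) at 1 m/s advances to (1, 0) after one 1-second step.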
Intuitively, flooding is the most powerful and naive method to avoid voids in the
networks. We compare the success rate of VBVA with that of flooding to show that
VBVA is very robust against voids. On the other hand, we present the success rate of
VBF in the same topology and mobility pattern to show the existence of voids in the
forwarding path, since VBF cannot deliver the packet to the target if there exists a void on the path.
From Figure 4.22, we can see that the success rate of VBVA is very close to that of
flooding and much higher than that of VBF when the number of nodes is low. When
Figure 4.22: The comparison of success rate: flooding, VBF and VBVA.
Figure 4.23: The comparison of energy cost: flooding, VBF and VBVA.
Figure 4.24: The comparison of energy tax (Joule/pkt): flooding, VBF and VBVA.
the number of nodes in the network increases, the probability of the presence of voids decreases, and the difference among the three protocols becomes smaller and smaller. When the network is very sparse, the flooding algorithm and VBVA outperform VBF. Moreover, VBVA shows almost the same capability as flooding to overcome the voids in the network. The success rate of flooding is roughly the upper bound for all the routing protocols.
As shown in Figure 4.23, VBF is the most energy efficient among the three protocols since VBF never attempts to consume more energy to overcome the voids in the network. VBVA is more energy efficient than flooding under all the network deployments.
Notice that the difference between the energy costs of VBVA and the flooding algorithm increases as the network becomes denser. This is attributed to the fact that VBVA overcomes voids in an energy-efficient way. On the other hand, VBVA consumes more energy than VBF in order to bypass the voids.
The energy taxes of flooding, VBF and VBVA are shown in Figure 4.24. From
this figure, we can see that when the network is sparse, both flooding and VBVA
consume less energy to successfully deliver one packet than does VBF. However, when
the number of nodes exceeds 1200, flooding algorithm consumes more energy per packet
than VBF and VBVA. VBVA consumes more energy per packet than VBF when the
number of nodes in the network exceeds 1400. VBVA has a lower energy tax than the flooding algorithm under all the network topologies. Notice that although beyond some point (1400 nodes) VBVA has a higher energy tax than VBF, VBVA is still preferable over VBF for application scenarios where the success rate is important. Since VBVA is based on VBF, it is easy to integrate VBVA as an option for VBF such that the application can determine whether to invoke the void avoidance mechanism.
From this simulation setting, we can see that VBVA approximates the flooding algorithm in success rate, but at a much lower energy cost. VBVA can avoid voids in the network effectively.
In this simulation setting, the target is fixed at location (500, 0, 250) and the source is fixed at location (500, 1000, 250). We generate two different voids, a concave void and a convex void, for the network. To do so, we divide the whole space into smaller cubes (50 m × 50 m × 50 m). A node is randomly deployed in each cube. We generate an ellipsoid centered at (500, 500, 250) with radii
void          | success rate | energy tax (Joule/pkt)
concave void  | 0.992600     | 606.450097
convex void   | 0.99700      | 603.503211
Table 1: Concave void vs. convex void
(300, 300, 150). The larger ellipsoid is the convex void, in which no nodes are deployed. We also generate a smaller ellipsoid centered at (500, 300, 250) with radii (200, 200, 150). These two ellipsoids overlap each other. The cubes in the overlapped part within the larger ellipsoid are deployed with sensor nodes, while the other cubes inside the large ellipsoid are empty. Thus, we create the concave void.
From Table 1, we can see that VBVA can effectively bypass both the concave and the convex voids. We can also see that it costs more energy to bypass the concave void than the convex void, since the back-pressure mechanism of VBVA is involved when it
In this simulation setting, the target is fixed at location (500, 0, 250) and the source is fixed at location (500, 1000, 250). In order to make sure there exists a path from the source to the target, we divide the whole space into small cubes (50 m × 50 m × 50 m). A sensor node is randomly deployed in each cube to guarantee that a path from the source to the target exists. We then generate a sphere with a radius of 120 meters. The sphere moves back and forth along the straight line from the source to the target. All the nodes in the sphere are blanked out, i.e., they can neither receive nor transmit any packets. The simulation results are shown in Figure 4.25 and Figure 4.26.
Figure 4.25: The success rate under mobile void.
Figure 4.26: The energy cost under mobile void.
From Figure 4.25, we can see that VBVA achieves almost 100% success rate under various mobility speeds of the void. From Figure 4.26, we can see that the mobility of the void has little effect on the energy cost of VBVA.
The simulations in this setting show that VBVA addresses mobile voids efficiently and effectively. When the mobility speed of the void is less than 3 m/s, the mobility of the void has no effect on the success rate or the energy efficiency, provided there exists a path between the source and the target.
4.5 Summary
In this chapter, we present our work on multi-hop routing in M-UWSNs. We first review the challenges for routing in M-UWSNs, then we present our solution, the vector-based forwarding (VBF) protocol. After that, we improve the robustness of VBF in sparse networks with HH-VBF, and we develop self-adaption algorithms for both VBF and HH-VBF. The self-adaption algorithms enable the nodes in VBF and HH-VBF to weigh the benefit and make forwarding decisions based on
local information. Further, we propose a void avoidance protocol, called vector-based void avoidance (VBVA), to address the void problem in M-UWSNs. We also develop a self-center adaption algorithm for VBVA to suppress packet duplications. The self-center adaption algorithm enables the nodes in VBVA to weigh the benefit of avoiding duplicate packets. We evaluate our protocols through analysis and simulations. Our results show that 1) VBF is an energy-efficient, robust and scalable routing protocol (especially for dense networks); 2) HH-VBF significantly improves the performance of VBF in sparse networks; and 3) VBVA efficiently and effectively handles the void problem in M-UWSNs, which feature 3-dimensional space and node mobility.
Chapter 5
Reliable Data Transfer in M-UWSNs
Reliable data transfer is important in M-UWSNs, especially for those aquatic exploration applications requiring reliable information. There are typically two approaches to reliable data transfer: end-to-end and hop-by-hop. The most common solution for end-to-end reliable data transfer is TCP. However, TCP performs poorly under the high error rates incurred on acoustic links, a problem already encountered in wireless radio networks. Under the water, we have an additional problem: the propagation time is much larger than the transmission time, setting the stage for the well-known large bandwidth-delay product problem. There are a number of techniques that can be used to render TCP's performance more efficient. However, the performance of these TCP variants in M-UWSNs is yet to be investigated. The other type of approach for reliable data transfer is hop-by-hop. The hop-by-hop approach is favored in wireless and error-prone networks, and is believed to be more suitable for sensor networks [100].
There are a couple of reliable data transfer protocols proposed for terrestrial sensor networks, such as PSFQ [100], RMST [91], and RBC [115]. These protocols mainly take a hop-by-hop approach with ARQ. Due to the long propagation delay of acoustic signals, conventional ARQ causes very low channel utilization in underwater environments. Thus, new approaches are desired for efficient reliable data transfer in M-UWSNs.
One possible direction to solve the reliable data transfer problem in M-UWSNs is
to investigate coding schemes, including erasure coding and network coding, which,
though introducing additional computational and packet overhead, can avoid retrans-
mission delay and significantly enhance the network robustness. Coding schemes have
also been investigated for reliable data transfer in terrestrial sensor networks [18, 46].
However, whether and how such coding schemes can be applied efficiently in M-UWSNs remains to be answered.
In this chapter, we address the reliable data transfer problem in M-UWSNs by two approaches. The first approach is called Segmented Data Reliable
Transport (SDRT), which is essentially a hybrid approach of ARQ and FEC. It adopts
efficient erasure codes, transferring encoded packets block by block and hop by hop.
Compared with traditional reliable data transfer protocols used in underwater acoustic
networks, SDRT can reduce the total number of transmitted packets significantly and improve the channel utilization. We develop a mathematical model to estimate the expected number of packets actually needed in both the single-receiver and multiple-receiver cases, taking into account the long propagation delay. Moreover, this model enables us to set the appropriate size of the block to address the mobility of the nodes in the network. We conduct simulations to
evaluate our model and SDRT. The results show that our model can closely predict the number of packets actually needed, and that SDRT is energy efficient and achieves good channel utilization. The second approach combines network coding and multi-path routing for efficient error recovery. Through an analytical study, we provide guidance on how to choose the parameters in this scheme, and demonstrate that the scheme is efficient in both error recovery and energy consumption. We evaluate the performance of the proposed scheme using simulations, and the simulations confirm our analytical results.
The rest of this chapter is organized as follows. We first review some related work in
Section 5.2, then we present SDRT in Section 5.3. After that, we describe our network
coding scheme in Section 5.4. Finally, we conclude our work on reliable data transfer
in Section 5.5.
Automatic Repeat Request (ARQ) and Forward Error Correction (FEC) are two basic techniques for reliable data transfer.
The simplest ARQ scheme is the stop-and-wait protocol, which is suitable for half-duplex
operation, but is well known for its low channel utilization. There are more complex ARQ techniques that improve the channel utilization, such as the Go-Back-N and selective repeat protocols [84]. In Go-Back-N and selective repeat, the sender continues to send a window of packets while acknowledgments from the receiver arrive. In
the Go-Back-N protocol, the receiver does not buffer the received packets and just acknowledges the in-order packet with the highest sequence number; the sender retransmits the lost packet and all subsequent packets. In the selective repeat protocol, the receiver buffers all the received packets and acknowledges each received packet; the sender retransmits only the lost packets. Go-Back-N and selective repeat improve the channel utilization.
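The low utilization of stop-and-wait over long-delay acoustic links is easy to quantify; a minimal sketch (the helper name is ours, and the ACK transmission time is ignored for simplicity):

```python
def stop_and_wait_utilization(pkt_bits, rate_bps, dist_m, sound_mps=1500.0):
    """Channel utilization of stop-and-wait over an acoustic link:
    U = T_tx / (T_tx + RTT), ignoring the ACK transmission time."""
    t_tx = pkt_bits / rate_bps          # packet transmission time (s)
    rtt = 2.0 * dist_m / sound_mps      # round-trip propagation delay (s)
    return t_tx / (t_tx + rtt)
```

For a 50-byte (400-bit) packet at 10 kbps over a 1000 m acoustic link, the utilization is only about 3%, which is why windowed schemes matter in this setting.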
There are some variants of the stop-and-wait protocol for half-duplex operation. The first is proposed in [83]. In this protocol, when a packet is lost, the sender sends several copies of the lost packet, thus reducing the number of retransmission rounds. A more general version is proposed in [59], where the number of copies of the lost packets sent in each round is adjustable. Two group-based variants are proposed in [61, 97]. In these protocols, the sender sends a group of data packets and waits for the
acknowledgment from the receiver. The difference between these two protocols is that
in the protocol proposed in [61], after the first round of data transmission, the sender sends a group combining new data packets and lost data packets to the receiver in the next round, whereas in the protocol in [97], the sender retransmits only the lost packets in subsequent rounds until the packets transmitted in the first round are delivered successfully.
The forward error correction technique can be applied at two levels, namely the packet level and the byte level. In byte-level FEC, redundant bits are appended to each packet so that the packet can correct some bit errors incurred in its transmission. In packet-level FEC, additional check packets are transmitted to recover lost packets. We are only interested in packet-level FEC. There are a large number of coding schemes for packet-level FEC. We discuss two of them: Reed-Solomon coding and parity coding.
In parity coding, the exclusive-or (XOR) operation is applied across groups of data
packets to generate corresponding check packets. The coding scheme most closely related to parity coding is Tornado codes.
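The parity-coding operation is just a bitwise XOR across packets; a toy sketch (the helper name and sample packets are ours):

```python
def xor_packets(*pkts):
    """Bitwise-XOR equal-length packets. With two data packets this yields the
    parity check packet; XORing the check with one packet recovers the other."""
    out = bytearray(len(pkts[0]))
    for p in pkts:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

# Two toy data packets and their parity check packet:
d1, d2 = b"\x0f\x0f", b"\xf0\x01"
check = xor_packets(d1, d2)
# If d2 is lost, it can be recovered from the check packet and d1:
recovered = xor_packets(check, d1)
```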
Tornado codes [57] are appealing in many network applications in that they can be encoded and decoded easily and quickly, and therefore they have been widely used in networking.
In [12], Tornado codes are used for reliable multicast. The goal of this work is to
deliver bulk data to a large number of users. Original data are encoded and distributed
over a multicast tree. Using Tornado codes in multicast eliminates the need for retrans-
mission and feedback, and reduces the number of packets each receiver needs to receive.
This is similar to our work. However, the context and objective are totally different.
In [12], the purpose is to reduce the inefficiency on the receiver side. Our goal is to
reduce the number of packets on the sender side. In [12], the data are assumed to be
huge, whereas they are very limited in our case. Moreover, the network topology in
our case is highly dynamic. All these differences bring up some new problems. In [13],
Tornado codes are applied to speed up the parallel download. In this work, multiple
sources pump the stream of encoded data in parallel, and the receivers receive these
streams without collision. Encoding the data enables the receiver to quickly reconstruct
the original data. Tornado codes are also used in informed content delivery in over-
lay networks [11]. The collaboration among peers in networks reduces the duplicated
data substantially, enables peers to supplement ongoing downloads, and improves the
throughput greatly.
Reed-Solomon codes [78] are widely used in FEC because of their excellent error-correcting properties and robustness against burst losses. In [47], Kim et al. evaluate
several methods to improve the reliability of data transport, namely erasure coding,
retransmission and route fix. All the methods are implemented and evaluated on a real Mica2Dot testbed. The results show that each of these methods is effective in overcoming certain failures. The erasure codes adopted in this work are Reed-Solomon codes.
In the literature, there are several reliable transport protocols proposed for terres-
trial sensor networks [47, 82, 91, 100, 115]. However, due to the significant distinctions
between underwater sensor networks and terrestrial sensor networks, these protocols
are unsuitable for underwater sensor networks. We review and discuss each of the
protocols as follows.
Wan et al. designed PSFQ (Pump Slowly and Fetch Quickly) in [100], which employs the hop-by-hop approach. In this protocol, a sender sends data packets to its immediate neighbors at a very slow rate. When a receiver detects a packet loss, it fetches the lost packet quickly: the rate of fetching lost packets is 5 times the rate of transmitting data. However, since PSFQ employs an ARQ mechanism in the hop-by-hop transmission, it suffers from the problems inherent in ARQ paradigms when applied under water. ESRT (Event-to-Sink Reliable Transport) [82] is designed for sensor networks where each node has the same sending rate. This protocol achieves reliable detection of an event by controlling the sending rate of the sources. When the sink detects congestion in the network, it commands all sources to reduce their sending rate by broadcasting a control message. ESRT uses congestion avoidance techniques to achieve reliability. The problem ESRT addresses is different from the problem we are investigating. Furthermore, assumptions such as
homogeneous sensor networks and instant feedback from the sink are not applicable to M-UWSNs. RMST (Reliable Multi-Segment Transport) and RBC (Reliable Bursty Convergecast) are two other recently proposed reliable data transport protocols for terrestrial sensor networks [91, 115]. RMST [91] is designed for the directed diffusion paradigm
[37]. In the proposed architecture, reliable data transfer schemes are implemented at
both the transport and MAC levels, and the ARQ mechanism is adopted. In RBC [115], a window-less block ACK is used to improve the channel utilization, and a differentiated contention control mechanism is employed to reduce the end-to-end delay. Both RMST and RBC suffer from the problems inherent in the ARQ paradigm when applied in underwater sensor networks.
Network coding was first proposed in [1], where Ahlswede et al. showed that network coding can achieve the multicast capacity of a network, which cannot be achieved by simply copying and forwarding packets alone. The main idea
of network coding [1], [23] is that, instead of simply forwarding a packet, a node may
encode several incoming packets into one or multiple packets and forward these encoded
packets. Afterwards, Li et al. [54] proved that linear coding functions are sufficient for network coding to achieve the multicast capacity. Koetter et al. [48] showed how to find the coefficients of the linear coding and decoding functions. Because of the simplicity of linear coding, we choose this scheme for our protocol. Moreover, Fragouli et al. presented an instant primer on network coding in [23], which discussed the implications of the theoretical results for practical network design.
5.3 SDRT: An FEC-Based Reliable Transfer Protocol
Part of the work presented in this section was published in [107].
SDRT is essentially a hybrid approach, exploiting both FEC and ARQ. In SDRT, we adopt the hop-by-hop approach to achieve reliable data transfer. The source first groups the original data packets into blocks of appropriate size and sends one block at a time. This is because the amount of data that can be transmitted in one hop at a time is very limited due to node mobility: in mobile underwater sensor networks, most nodes are mobile, and thus the connection time between any pair of sender and receiver is limited. Moreover, the bandwidth of the communication channels is relatively low, and the propagation delay is very high. These factors restrict the sender from sending very large bulks of data at one time. Therefore, SDRT delivers data hop by hop and block by block. In order to reduce the number of
feedbacks from the receiver, SDRT adopts an efficient erasure code, a simple variant of Tornado codes (SVT), to encode each block. The sender keeps injecting the encoded
packets into the channel and the receiver keeps receiving the data and decoding them.
After recovering the original block, the receiver sends back a positive feedback to the
sender, which in turn stops the sender from injecting more data, thus saving transmission
energy. The receiver then encodes the block again and becomes the sender for the next
hop.
5.3.1.2 Simple Variant of Tornado (SVT) Codes
Since SVT codes are simplified Tornado codes, we will first introduce Tornado codes
in the following.
Tornado codes [57] are one type of erasure codes. Erasure codes typically encode a set of n original messages into a set of N encoded messages, where N > n. In order to reconstruct the n original messages, the receiver has to receive a certain number (larger than n) of encoded messages. The stretch factor, defined as N/n, is used to measure the transmission overhead of the code.
Tornado codes are preferred in many network applications in that they can be encoded and decoded easily and quickly. The encoding and decoding of Tornado codes only involve XOR operations, which suit the limited computation capability of sensor nodes. The encoding and decoding algorithms of Tornado codes are faster than those of Reed-Solomon codes. Before describing the codes, we first give some definitions. For any type of erasure code, we call the n original messages data packets; all other messages are called check packets. In this chapter, we use packet and message interchangeably. Let P1 and P2 be two packets of the same size; then P1 ⊕ P2 is the result of bitwise-XORing packets P1 and P2.
Encoding
The encoding of Tornado codes is based on a multiple-level bipartite graph, where the nodes in each level are randomly connected only to the nodes in adjacent levels. In the graph, the nodes in the leftmost level denote the original data
packets, and the nodes in the other levels are the XORs of all their neighbors (nodes) in the left adjacent level. The randomly ordered packets, including both data packets and check packets, are then transmitted.
Figure 5.1: A multiple-level bipartite graph with two subgraphs, G0 and G1.
For a multiple-level bipartite graph, any two adjacent levels are connected by edges
and constitute a subgraph, denoted as G k , which specifies the way to generate the check
packets. As shown in Figure 5.1, G0 and G1 are two subgraphs. In a subgraph, an edge
is an edge of degree i on the left (right) if it is adjacent to a node of degree i on the right
(left). For example, in subgraph G0, the edges adjacent to node d1 are edges of degree 2 on the left, since node d1 is a node with degree 2 on the right. A subgraph is determined by its left and right degree distributions, where λi denotes the fraction of edges of degree i on the left and ρi the fraction of edges of degree i on the right.
Decoding
The encoding process of Tornado codes constructs the graph from left to right. In contrast, the decoding process reconstructs the graph from the right level to the left level. In a subgraph Gk, if most of the nodes are known, then we have
a higher chance to recursively reconstruct the unknown nodes until all the nodes in
the left level (which is the right level for the subgraph Gk-1) are recovered. Following this way, we can recover all the nodes until the leftmost level is reconstructed. The
reconstruction process within each subgraph is one step of the decoding process.
In each decoding step, for a node on the right side, if all but one of its neighbors are known, then the lost packet can be recovered by XORing all the related packets.
For a subgraph Gk , its decoding subgraph consists of the nodes on the left side that are
lost and not decoded yet, all nodes on the right side (already recovered in subgraph
Gk+1 ) and all the edges between them. One decoding step is equivalent to finding
a node of degree 1 on the right side in the decoding subgraph, and removing it, its
left neighbor and all edges adjacent to its neighbor from this decoding subgraph. This
procedure is repeated until all the lost packets are reconstructed or no more nodes of degree 1 remain in the decoding subgraph.
Tornado codes have encoding time and decoding time proportional to N ln(1/ε)P, where P is the length of each packet, N is the number of encoded packets and ε is the decoding inefficiency factor, i.e., (1 + ε)n packets are needed for a receiver to reconstruct the n data packets [57].
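The degree-1 recovery step can be sketched as a simple peeling decoder. This is an illustrative one-layer decoder, not the implementation in [57], and the data-structure names are ours:

```python
def peel_decode(n, checks, received):
    """Peeling-decoder sketch. `checks` maps each check-packet id to the list of
    data-packet indices XORed into it; `received` maps data indices and check ids
    to packet bytes. Repeatedly find a received check all but one of whose data
    neighbours are known, and recover the missing one by XOR."""
    data = {i: received[i] for i in range(n) if i in received}
    progress = True
    while progress and len(data) < n:
        progress = False
        for cid, nbrs in checks.items():
            if cid not in received:
                continue
            missing = [i for i in nbrs if i not in data]
            if len(missing) == 1:                 # degree-1 check: recover
                pkt = bytearray(received[cid])
                for i in nbrs:
                    if i in data:
                        for k, b in enumerate(data[i]):
                            pkt[k] ^= b
                data[missing[0]] = bytes(pkt)
                progress = True
    return data if len(data) == n else None       # None: decoding stalled
```

With data packets d0 = 0x01 and d1 = 0x02 and one check c0 = d0 ⊕ d1 = 0x03, losing d1 is recoverable; with no checks, decoding stalls.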
SVT Codes: Tornado codes are preferred in many network applications in that they can be encoded and decoded easily and quickly [11-13]. However, the original Tornado
codes are designed for bulk data delivery, which is not the case in our scenarios since the
number of data packets is very limited. As a result, the left degree and right degree
distributions proposed in [57] are not feasible in our case. Moreover, the composition
information of a check packet in our protocol is carried in the check packet. The large
degrees as in [57] cause large control overhead. Thus, we adopt a simple variant of
Tornado codes (referred to as SVT codes) for SDRT. SVT codes are essentially two-
layer Tornado codes, but, the left degree and right degree distributions in SVT codes
are different from those of the original Tornado codes. However, the requirement in [57] that the first two elements of the left degree vector are fixed to 0, i.e., λ1 = λ2 = 0, is based on a combinatorial argument and still holds for SVT codes. In [57], it is proved that codes based on regular graphs are not optimal. This is also validated by our simulation results in Section 5.3.3. Moreover, we expect codes with different degree distributions to behave similarly since the number of data packets is very limited. Our simulation results in Section 5.3.3 confirm this expectation as well.
The key idea of the segmented data reliable transport (SDRT) protocol is to transfer encoded packets (using SVT codes) block by block and hop by hop. In order to reconstruct the original data packets, the receiver has to receive sufficiently many encoded packets. Because node mobility limits the connection time between any pair of sender and receiver, the transmission time for the encoded packets
is limited. Thus SDRT has to guarantee that the receiver can receive enough encoded packets within such a limited time interval. By setting the block size n (the number of original data packets in each block) appropriately, SDRT can control the transmission time and allow the receiver to receive enough packets to reconstruct the original data.
In SDRT, a data source first groups data packets into blocks of size n, i.e., there
are n data packets in each block. Then the source encodes these blocks of packets, and
sends the encoded blocks into the network. The data packets are forwarded from the
source to the destination block by block and each block is forwarded hop-by-hop.
In each hop-by-hop relay, the sender keeps sending encoded packets until it receives a positive feedback from the receiver. While receiving packets, the receiver tries to reconstruct the original data packets; once it succeeds, it sends back a positive feedback. On reception of the feedback, the sender stops sending packets, while the receiver encodes the original data packets again and relays them to the next hop.
The operations performed on the sender and the receiver are presented in the
following.
Sender
1. Encodes the original data packets of the current block using SVT codes.
2. Keeps pumping a stream of encoded packets (in a random order) until receiving a positive feedback from the receiver.
Receiver
1. Keeps receiving packets until it can reconstruct the original data packets, and then sends back a positive feedback to the sender.
2. Encodes the reconstructed packets again and relays them to the next hop.
From the above description, we can see that SDRT lightens the burden on both the sender and the receiver by requiring only one feedback per block. The sender has no additional responsibility except encoding and injecting packets, and the receiver only needs to send a single feedback per block.
Discussions
In dense networks, there are usually multiple nodes in the transmission range of a sender. All these nodes can overhear the transmission of the same packets, and they are potentially capable of relaying the block to the next hop. There are routing protocols that provide alternative paths to overcome link failures [47, 109]. Our SDRT protocol can be smoothly integrated with these routing protocols that support alternative paths to improve the network robustness. In fact, our later theoretical analysis and simulation results will show that SDRT performs well in such settings.
In SDRT, the sender keeps injecting packets into the network until receiving a positive feedback for the target block. On
the one hand, this improves the channel utilization since the sender does not need to wait for the feedback from the receiver before sending subsequent packets. On the other hand, the
lack of synchronization between the sender and the receiver in SDRT may cause more
energy consumption because the sender does not know the exact time to stop sending
packets. Due to the extremely low propagation speed of sound, the delay between the
feedback sending time (at the receiver side) and receiving time (at the sender side)
is very large, thus the sender will waste a certain amount of energy to keep sending
unnecessary packets until it receives the feedback from the receiver. To reduce such energy waste, we design a window control mechanism for SDRT.
With current technology, acoustic modems are half-duplex. Underwater sensor nodes cannot receive any ACKs from the receiver while they are sending packets. This poses a problem for all protocols that require feedback from the receiver: how does the sender receive the ACK from the receiver? As for SDRT, the sender has to determine when to leave the transmission state and wait for the positive ACK from the receiver. Even if the acoustic modem supports full-duplex communication, the long propagation delay still results in energy waste. The bandwidth of acoustic channels is low
(about 10 kbps within the transmission range of 1500 m) compared with RF channels,
however, the propagation speed of sound is even lower (about 1500 m/s) compared
with that of radio. This causes large RTTs even for hop-by-hop communication. For
example, if the distance between the sender and the receiver is 1000 m, then RTT is at
least 1.3 s, during which about 13000 bits of data can be transmitted. In other words,
the sender has to send 1625 bytes of unnecessary data before it receives a feedback. If the packet size is 50 bytes, this means that more than 30 packets are transmitted in vain.
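The arithmetic above can be checked with a small helper (the function name is ours, assuming the chapter's parameters of 10 kbps, 50-byte packets and a 1500 m/s sound speed):

```python
def wasted_packets(dist_m, rate_bps=10_000, pkt_bytes=50, sound_mps=1500.0):
    """Packets a sender can push out during one RTT on an acoustic link, i.e.
    data sent in vain before a feedback can possibly arrive."""
    rtt = 2.0 * dist_m / sound_mps          # round-trip propagation delay (s)
    bits_in_flight = rtt * rate_bps         # bits transmittable during the RTT
    return int(bits_in_flight // (8 * pkt_bytes))
```

For a 1000 m hop this gives 33 packets, consistent with the "more than 30 packets" figure above (the small difference comes from rounding the 1.33 s RTT down to 1.3 s in the text).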
Window control
The basic idea of the window control mechanism is simple. A sender estimates
the expected number of packets needed for a receiver to reconstruct the original data
144
packets and sets a window accordingly. The window is controlled in such a way that
with a very low probability, the packets within a window are more than what are
needed. The sender pumps the packets within the window quickly to improve channel
utilization without consuming extra energy. After a window of packets are injected
into the network, the receiver most likely only needs a few more packets to reconstruct
the original data packets. The sender becomes careful beyond this point and extends
the time interval between any two consecutive packets to some pre-defined value on the order of the RTT. The sender keeps sending packets at such a slow rate until receiving a positive
feedback from the receiver. In this way, SDRT reduces the number of packets to be
sent after the receiver reconstructs the original data packets, thus saving energy.
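As an illustration, the send loop can be sketched as the following simulation (a simplified sketch, not the actual SDRT implementation: feedback is modeled as instantaneous once the receiver can decode, and the numbers in the usage are illustrative):

```python
import random

def sdrt_send_block(needed, window, t_w, t_tx, p_loss, rng):
    """Window-controlled sending of one block of encoded packets.

    The sender pumps the first `window` packets back-to-back (one every
    `t_tx` seconds, the transmission time), then slows down to one packet
    every `t_w` seconds until the receiver has accumulated `needed`
    successful deliveries. Returns (packets_sent, elapsed_seconds).
    """
    delivered, sent, elapsed = 0, 0, 0.0
    while delivered < needed:
        sent += 1
        elapsed += t_tx if sent <= window else t_tx + t_w
        if rng.random() > p_loss:       # packet survives the erasure channel
            delivered += 1
    return sent, elapsed

rng = random.Random(1)
# Illustrative numbers: ~160 deliveries needed, model predicts ~320 packets
# at p = 0.5, window = 0.8 * 320 = 256 (confidence factor 0.8).
sent, elapsed = sdrt_send_block(needed=160, window=256, t_w=1.5,
                                t_tx=0.04, p_loss=0.5, rng=rng)
print(sent, round(elapsed, 1))
```

With an accurate window estimate, only the last few packets pay the slow per-packet interval, which is exactly the energy/throughput trade-off discussed next.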
Discussions
It is worth noting that due to the very low propagation speed and relatively high bandwidth, even a small fraction of packets sent outside the window causes a significant degradation in throughput compared with sending without the window control mechanism. However, for long-term mission sensor networks, saving energy is much more important than throughput. Moreover, compared with a pure ARQ approach, such as PSFQ in [100], the window control mechanism improves the throughput significantly since all the packets in an ARQ approach (e.g., PSFQ) are sent at a low rate.
The crucial issue in the window control mechanism is to accurately estimate the number of packets actually needed. The model we present in Section 5.3.2 enables the sender to predict the number of packets actually needed and thus set the window size accordingly. If the window size is appropriate, SDRT can greatly reduce the waste of energy. Meanwhile, SDRT enables the sender to pump a window of packets into the network fast, thus improving the channel utilization, as shown in Section 5.3.3.
Recall that SDRT is a transport protocol, not a routing protocol. The highly dynamic network topology in underwater sensor networks is a major concern for the routing protocol. SDRT copes with node mobility by setting an appropriate block size such that the next hop has sufficient time to reconstruct the whole block even in motion. The block size n is determined by many factors such as the implementation of SVT codes, the mobile speed of nodes, the bandwidth of communication channels, the propagation speed and the distance between the sender and the receiver. We will present an approach to estimate the block size, n, in Section 5.3.2.4.
5.3.2 Analysis
In this section, we present a model to estimate the number of data packets sent for both the single-receiver case and the multiple-receiver case. Based on this model, we can set the window size accordingly.

We have the following assumptions for the SVT codes we use. First, we assume packet loss is independent. Second, we assume that the sender sends at most three rounds of encoded packets to guarantee that the receiver successfully recovers the original data packets. This can be achieved by selecting appropriate left and right degree vectors.
We still use bipartite graphs to denote SVT codes. Let G = {V, E} be the bipartite graph, where E is the set of edges and V is the set of nodes in the graph. V = D ∪ C and D ∩ C = ∅, where D is the set of data packets and C is the set of check packets. The edges in G randomly connect the nodes in D and the nodes in C. For each node v ∈ C, there is a corresponding box b_v, which is the set of this node and its neighbors. There are |C| boxes in total in graph G. If the degree of node v ∈ C is i, we say the degree of b_v is i and the capacity of b_v (i.e., the number of nodes in this box) is i + 1. We call the set of all the boxes with the same degree a cluster. The cluster of boxes with degree i is denoted as B_i.
For a sender-receiver pair, the receiving process at the receiver side is equivalent to filling packets into |C| boxes. When a check packet is delivered successfully, it is equivalent to putting one packet into one box. If a data packet of degree j is received, it is equivalent to putting j packets into j different boxes. For clarity, we refer to the packets put into the boxes as replicas.
We assume that sibling replicas are independent of each other. This assumption is reasonable to some extent since sibling replicas are independently and randomly put into different boxes. When a box of capacity k is filled with k − 1 or k replicas, we consider this box full, i.e., all the replicas in this box can be recovered. When the associated packet is reconstructed, all the replicas generated by this packet are considered to be recovered. Thus, in SVT codes, one full box can potentially cause a chain of recoveries.

Let λ = {λ_1, λ_2, . . . , λ_M} be the left degree vector and ρ = {ρ_1, ρ_2, . . . , ρ_M} be the right degree vector. We use p to denote the erasure probability, n to denote the number of data packets or block size, and 1 + ε to denote the stretch factor. Then |D| = n, |C| = εn, and N = |C| + |D| = (1 + ε)n.
Lemma 5.1 The average degree of data packets is l = 1 / (Σ_j λ_j / j), and the average degree of check packets is r = 1 / (Σ_j ρ_j / j).
Proof 4 Assume the total number of edges is E. By definition, the fraction of edges of degree i on the left is λ_i, so the number of edges of degree i on the left is λ_i E and the number of data packets of degree i is λ_i E / i; the total number of data packets is Σ_j λ_j E / j. The average degree of data packets is therefore

l = E / (Σ_j λ_j E / j) = 1 / (Σ_j λ_j / j).

Similarly, the average degree of check packets is r = 1 / (Σ_j ρ_j / j).
Lemma 5.2 Given the left degree vector λ, the right degree vector ρ and the number of data packets n, then ε = (Σ_j ρ_j / j) / (Σ_j λ_j / j).

Proof 5 By Lemma 5.1, we know the average degree of data packets is l = 1 / (Σ_j λ_j / j) and the average degree of check packets is r = 1 / (Σ_j ρ_j / j). Since the number of edges emanating from data packets is equal to the number of edges entering check packets, we get n · l = εn · r. Therefore, ε = l / r = (Σ_j ρ_j / j) / (Σ_j λ_j / j).
Lemma 5.3 The probability that a replica is put into a box in cluster B_i (called the ratio of B_i) is q_i = (ρ_i / i)(i + 1) / (Σ_j (ρ_j / j)(j + 1)).

Proof 6 For a check packet of degree i, the capacity of the corresponding box is i + 1. The number of boxes of degree i is ρ_i E / i. Therefore, the total capacity of all boxes is Σ_j (ρ_j E / j)(j + 1), and the capacity of the cluster with degree i is (ρ_i E / i)(i + 1). For any replica, the probability that this replica is put into the cluster with degree i is

(ρ_i E / i)(i + 1) / (Σ_j (ρ_j E / j)(j + 1)) = (ρ_i / i)(i + 1) / (Σ_j (ρ_j / j)(j + 1)).
Lemma 5.4 For a randomly selected packet, the probability that this packet is a check packet is p_c = ε / (ε + 1), the probability that this packet is a data packet is p_d = 1 − p_c = 1 / (ε + 1), and the probability that this packet is a data packet of degree i is p_d · (λ_i / i) / (Σ_j λ_j / j).

Proof 7 Since the total number of packets is (ε + 1)n and the number of check packets is εn, for a randomly selected packet, the probability that this packet is a check packet is p_c = ε / (ε + 1) and the probability that this packet is a data packet is p_d = 1 − ε / (ε + 1) = 1 / (ε + 1). The number of data packets of degree i is λ_i E / i, where E is the total number of edges, and the total number of data packets is Σ_j λ_j E / j; therefore, the probability that a data packet has degree i is (λ_i / i) / (Σ_j λ_j / j), and the probability that a randomly selected packet is a data packet of degree i is p_d · (λ_i / i) / (Σ_j λ_j / j).
From Lemma 5.2, we can see that the left degree vector λ, the right degree vector ρ and the number of data packets determine SVT codes. In the rest of this paper, we use l to denote the average degree of data packets, q_i to denote the ratio of B_i, p_d to denote the probability of data packets, and p_c to denote the probability of check packets. We can then define a function β(x) to represent the expected number of replicas generated by x packets, β(x) = x (p_d l + p_c).
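The lemmas translate directly into code. The sketch below computes l, r, ε, q_i, p_c and p_d from a pair of degree vectors (the Case-2-style vectors used later in the simulations are plugged in as an example):

```python
def code_parameters(lam, rho):
    """Quantities of Lemmas 5.1-5.4. lam[j-1] (rho[j-1]) is the fraction of
    edges of degree j on the data-packet (check-packet) side."""
    l = 1.0 / sum(v / j for j, v in enumerate(lam, 1) if v)      # Lemma 5.1
    r = 1.0 / sum(v / j for j, v in enumerate(rho, 1) if v)
    eps = l / r                                                  # Lemma 5.2
    cap = sum(v / j * (j + 1) for j, v in enumerate(rho, 1))
    q = [v / j * (j + 1) / cap for j, v in enumerate(rho, 1)]    # Lemma 5.3
    pc, pd = eps / (1 + eps), 1 / (1 + eps)                      # Lemma 5.4
    return l, r, eps, q, pc, pd

lam = [0, 0, 0, 0, 1.0, 0, 0, 0]                 # all data packets have degree 5
rho = [0, 0, 0, 0, 0, 0.125, 0, 0.875]           # check packets: degrees 6 and 8
l, r, eps, q, pc, pd = code_parameters(lam, rho)
print(l, round(r, 2), round(eps, 3))             # l = 5.0
```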
5.3.2.2 Estimation of the Number of Packets Sent

We first present a model to estimate the number of packets actually needed by the receiver to reconstruct the original data packets for the case of a single receiver; we then extend the model to the case of multiple receivers.

Single receiver

Let f(x) be the probability that the receiver has reconstructed the original data packets given that x packets have been sent out before. Let d(1|x) be the probability that a successfully delivered packet is a duplicate given that x packets have been sent out before, and r(1|x) be the probability that the receiver can reconstruct the original data packets when a non-duplicate packet is successfully delivered given that x packets have been sent out before. Then f(x) can be computed recursively as

f(x + 1) = f(x) + (1 − f(x))(1 − p)(1 − d(1|x)) r(1|x), with f(x + 1) = 1 if d(1|x) = 1, and f(0) = 0. (5.1)
Calculating d(1|x): When no packet has been sent before, it is impossible that the new packet is a duplicate. In the first round of transmission of the encoded packets, the only way the new packet can be a duplicate is that it has been recovered earlier. After the first round of transmission, the new packet may have been received or recovered before. If we use R(x) to denote the expected number of packets received or recovered by the receiver given that x packets are sent out before, then we have d(1|x) and R(x) as follows:

d(1|x) = 0 if x = 0; d(1|x) = (R(x) − x(1 − p)) / (N − x(1 − p)) if 0 < x ≤ N; d(1|x) = R(x) / N if x > N (5.2)

and

R(x + 1) = R(x) + (1 − d(1|x)) (1 − p) (D(x, 1) + 1), R(0) = 0 (5.3)

where D(x, 1) is the expected number of newly recovered packets when a non-duplicate packet is delivered given that x packets are sent out before, and it is evaluated as

D(x, 1) = (N − R(x) − 1) (1 − (1 − 1 / (N − R(x) − 1))^γ(x)) (5.4)
In Equation 5.4, γ(x) is the number of replicas newly recovered when a non-duplicate packet is delivered given that x packets are sent out before, which is computed by the algorithm RecoveryReplica(x, 1). The second term in the equation is the probability that a lost packet is recovered thanks to the recovery of these γ(x) replicas. Here we assume that each of these γ(x) replicas has the same probability of being associated with the lost packets.
In SVT codes, when a box is one replica short, the lost replica in this box can be recovered. In Algorithm 5.1, we count the number of boxes with one replica short as the initial recovered replicas. Then we compute the number of extra replicas recovered due to these recovered replicas, and we repeat these steps until no more replicas can be recovered. The number of boxes with one replica short is very important for the recovery of lost packets. In Algorithm 5.1, we begin the loop of recoveries when, on average, there exists a box that is one replica short.
Algorithm 5.1 RecoveryReplica(x, 1)
1: tp = N − R(x) − 1 {tp: number of lost packets}
2: tr = β(R(x)) + l {tr: number of received replicas}
3: lr = β(N) − tr {lr: number of lost replicas}
4: r = 0
5: repeat
6:   for all i such that ρ_i ≠ 0 do
7:     lr,i = lr · q_i {lr,i: number of lost replicas in B_i}
8:     if lr,i ≤ 2.0 · |B_i| then
9:       r_i = lr,i · (1 − 1/|B_i|)^(lr,i − 1) {r_i: number of recovered replicas in B_i}
10:      r = r + r_i
11:      lr = lr − r_i
12:    end if
13:  end for
14:  rp = (1 − (1 − 1/tp)^r) · tp {rp: number of recovered packets}
15:  tn = β(rp) − r {tn: number of extra recovered replicas}
16:  lr = lr − tn
17:  tp = tp − rp
18:  r = r + tn
19: until tn < 1
20: return r
Line 9 computes the expected number of recoverable replicas (boxes that are one replica short) for cluster B_i. After obtaining the total number of recovered replicas, line 14 computes the number of packets recovered by these replicas, and line 15 computes the number of extra replicas recovered due to these packets. The procedure from line 5 to line 19 repeats until fewer than one new replica is recovered.
The probability r(1|x) is evaluated through a function V(x, 1), where V(x, 1) computes the success probability (i.e., the probability that the original data packets can be reconstructed) when a non-duplicate packet is delivered given that exactly x packets are sent out before. It is evaluated as

V(x, 1) = Σ_j p_j φ(x, j) (5.6)

where p_j is the probability that a packet has degree j (check packets can be treated as packets of degree 1), computed using Lemma 5.4, and φ(x, j) is the success probability when the newly added packet has degree j given that x packets have been sent out. It is computed by the algorithm Recovery(x, j) as follows.
In the algorithm, the success probability φ(x, j) is the product of the success probabilities of all the clusters. Each cluster has two states, namely recovered and done. If a cluster is in state done, all the boxes in this cluster are full. If a cluster is in state recovered, some lost replicas in this cluster have been recovered. If the number of lost replicas is smaller than the number of boxes in a cluster, we assume this cluster is full and the success probability for this cluster is 1. After a series of recoveries, if each box in a cluster is less than one replica short on average, the success probability is the probability that these lost replicas are located in different boxes, which is essentially the bin-and-ball problem [62]: given m balls thrown randomly into n bins, the probability that all the balls land in different bins is Pr ≈ e^(−m(m−1)/(2n)). Line 21 computes the number of recovered replicas for the clusters whose shortage of replicas per box is less than 2. Line 30 computes the average number of recovered packets caused by the r recovered replicas, and line 32 computes the number of newly recovered replicas due to these recovered packets. The loop of recoveries stops when fewer than one new replica is recovered.
Now, we can evaluate the probability that a receiver recovers the original data
packets when the sender sends out exactly x encoded packets. Let P(x) denote this
Algorithm 5.2 Recovery(x, j)
1: tp = N − R(x) − 1 {tp: number of lost packets}
2: tr = β(R(x)) {tr: number of received replicas}
3: lr = β(N) − tr − j {lr: number of lost replicas}
4: repeat
5:   r = 0
6:   for all i such that ρ_i ≠ 0 do
7:     lr,i = lr · q_i {lr,i: number of lost replicas in B_i}
8:     if lr,i ≤ |B_i| then
9:       if B_i is not marked recovered or done then
10:        p_i = 1
11:        r = r + lr,i
12:        mark B_i done
13:      end if
14:      if B_i is recovered then
15:        p_i = e^(−lr,i (lr,i − 1) / (2|B_i|))
16:        r = r + lr,i
17:        mark B_i done
18:      end if
19:    end if
20:    if |B_i| < lr,i ≤ 2 · |B_i| then
21:      r = r + lr,i · (1 − 1/|B_i|)^(lr,i)
22:      p_i = 0
23:      mark B_i recovered
24:    end if
25:    if lr,i > 3 · |B_i| then
26:      p_i = 0
27:    end if
28:  end for
29:  lr = lr − r
30:  rp = (1 − (1 − 1/tp)^r) · tp {rp: number of recovered packets}
31:  tp = tp − rp
32:  tn = β(rp) − r {tn: number of extra recovered replicas}
33:  lr = lr − tn
34: until tn < 1
35: return Π_i p_i
probability, and it is computed as

P(x) = f(x) − f(x − 1), P(0) = 0 (5.7)

Then, the average number of packets a sender needs to send can be evaluated as

E_1 = Σ_{i≥0} i · P(i) (5.8)
Multiple receivers

Let R be the number of receivers. When a sender sends one packet, we assume that each of the R receivers receives it independently with probability 1 − p. For one receiver, we use the random variable X to denote the number of successfully received packets given that k packets have been sent out by the sender. We can present X as X = Σ_j X_j, where the random variable X_j has value 1 if the j-th packet is successfully received and 0 otherwise. For every receiver, X thus follows the same binomial distribution, with mean μ = k(1 − p) and variance σ² = kp(1 − p). If any one receiver can reconstruct the original data packets, thus sending a positive feedback, the sender stops sending packets. In other words, if we can find the expected maximum value of X among these R trials, we can estimate the number of packets the sender needs to send. We choose t = √(10Rkp(1 − p)) so that, by Chebyshev's inequality, the probability that in the R trials X falls in the range [μ + t, k] is at most 0.1; in other words, in R trials, X has deviation less than t about R − 0.1 times. We estimate the possible maximum value of X in R trials as μ + t; therefore, the expected maximum value of X in R trials is approximately μ + t/2.
In the case of R receivers, sending k packets has the same effect as sending (μ + t/2)/p packets in the case of one receiver, i.e., (μ + t/2)/p = E_1. Substituting μ and t with their expressions in R, p and k, we get

(k(1 − p) + √(10Rkp(1 − p)) / 2) / p = E_1.

Solving this equation for k, we obtain E_R as follows:

E_R = (√(10Rp + 16pE_1) − √(10Rp))² / (16(1 − p)) (5.10)

From Equation 5.10, we can easily show that E_R ≤ E_1. In other words, the sender actually sends fewer packets when there is more than one receiver.
Using Equation 5.8 and Equation 5.10, the sender can determine the window size accordingly. Let R be the number of receivers. Then the window size is set to α · E_R, where α (0 < α ≤ 1) is the confidence factor.
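Equation 5.10 is straightforward to evaluate. A sketch, using the single-receiver figure E_1 = 332 and p = 0.5 from the example discussed below:

```python
from math import sqrt

def expected_sent(E1, R, p):
    """Equation 5.10: expected number of packets sent with R receivers,
    given the single-receiver expectation E1 and erasure probability p."""
    a = sqrt(10 * R * p)
    return (sqrt(10 * R * p + 16 * p * E1) - a) ** 2 / (16 * (1 - p))

E1, p, alpha = 332.0, 0.5, 0.8
print(round(alpha * E1))                 # 266: window size in the text's example
E3 = expected_sent(E1, R=3, p=p)
print(round(E3, 1))                      # E_R decreases as R grows
```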
5.3.2.4 Estimation of the Block Size

As we stated earlier, in order to address the node mobility issue in underwater sensor networks, SDRT limits the transmission time by grouping the original data packets into blocks. The block size, n, i.e., the number of data packets, is determined by many factors such as the stretch factor, the confidence factor, the mobile speed of the nodes, the channel bandwidth and the packet size. We use 1 + ε to denote the stretch factor, α to denote the confidence factor, bw to denote the bandwidth of the channel and v to denote the propagation speed of acoustic signals, and T_w to be the waiting interval when sending packets beyond the window threshold. Further, we denote the possible minimum time interval for the communication between a sender and its receiver (which can be estimated by the routing protocol) as T_r. Then the block size, n, must satisfy the following inequality:
E_R · L / bw + E_R (1 − α) T_w < T_r (5.11)
where L is the packet size. Note that n is actually an important parameter in computing E_R.
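The left-hand side of inequality 5.11 is easy to evaluate. A sketch using the figures from the example discussed next (E_R = 332 packets, L = 40 bytes, bw = 10 kbps, α = 0.8, T_w = 1.5 s); the result is in the same ballpark as the 107.5 s quoted in the text, though the text's exact accounting may differ slightly:

```python
def block_time(E_R, L_bytes, bw_bps, alpha, t_w):
    """Left-hand side of inequality 5.11: time to push one block, i.e.
    E_R fast transmissions plus E_R * (1 - alpha) waiting intervals."""
    return E_R * L_bytes * 8 / bw_bps + E_R * (1 - alpha) * t_w

t = block_time(E_R=332, L_bytes=40, bw_bps=10_000, alpha=0.8, t_w=1.5)
print(round(t, 1))   # seconds; the block size n is feasible only if t < T_r
```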
For example, suppose the block size n = 100, the erasure probability p = 0.5, the packet size L = 40 bytes, T_w = 1.5 s, the left degree vector λ = {0, 0, 0, 1} and the right degree vector ρ = {0, 0, 0, 0, 0, 1/8, 0, 7/8}. In the case of a single receiver, the average number of packets needed is 332. If α = 0.8, i.e., the window size is 266, then the transmission time is less than 107.5 seconds. In other words, a block size of 100 requires that the minimum time interval for the communication between a sender and a receiver be no less than 107.5 seconds.

Note that the window size estimation can be done off-line and stored in a table when the sensor nodes are deployed. When SDRT obtains the statistical value of the erasure probability from the underlying protocols, it can check the table and set the window size accordingly.
5.3.3 Performance Evaluation

In this section, we evaluate the accuracy of the model proposed in Section 5.3.2 and the performance of SDRT by simulations. In the following simulation setting, the number of data packets is set to 100 and the stretch factor is 1.6, i.e., we use 60 check packets. The size of a data packet is 50 bytes. For the case of multiple receivers, we set the number of receivers to 3. The bandwidth of the channel is 10 kbps and the distance between the sender and the receiver is 1000 meters. The propagation speed of the acoustic signal is set to 1500 m/s. The maximum degree for SVT codes is set to 8. All the simulation results are averaged over multiple runs.
5.3.3.1 Metrics

We use the sending inefficiency to measure the energy cost of a protocol:

sending inefficiency = (# of packets sent) / (# of original data packets). (5.12)

The smaller the sending inefficiency, the fewer packets are sent into the network and thus the less energy is consumed. We use goodput to measure the channel utilization. SDRT employs a window control mechanism to reduce the number of packets sent unnecessarily due to the large RTT of acoustic channels. We measure the number of packets actually sent to evaluate this mechanism.
5.3.3.2 Effect of Degree Distributions

In this set of simulations, we evaluate the effect of the degree distributions adopted by SVT codes on the performance of SDRT. Since the number of data packets is very limited, we expect SVT codes based on different degree distributions to behave similarly. We implement SVT codes with 5 different degree distributions, including:
1. Case 1: λ = (0, 0, 0, 1.0, 0, 0, 0) and ρ = (0, 0, 0, 0, 0, 1.0);
The simulation results for both the single-receiver and 3-receiver cases are shown in Figure 5.2. In Figure 5.2, the last number in the legend denotes the number of receivers. For example, Sim-case-1-3 is the result for the code following the degree distribution in Case 1 with 3 receivers. From Figure 5.2, we can see that the code based on a regular graph, i.e., the code following the degree distribution in Case 1, shows worse performance than the other codes in both the single-receiver and 3-receiver cases. This confirms the conclusion proved in [57] that codes based on regular graphs are not optimal. Figure 5.2 also shows that there is no significant difference among all the other codes, which validates our expectation that degree distributions have little effect on the performance as long as they satisfy the basic requirements for SVT codes proposed in Section 5.3.1.
5.3.3.3 Model Validation

The model proposed in Section 5.3.2 estimates the number of packets needed for the receiver(s) to reconstruct the original data packets given the erasure probability, the degree distributions, and the number of receivers. In this subsection, we compare the values calculated by our model with the results from the simulations. In our model, we set the maximum number of rounds to 3.
Figure 5.2: The sending inefficiency vs. the erasure probability for different SVT codes.

The simulation results show that our model approximates the
simulation results well in all the simulations. Due to the space limit, we only show the results for the codes following the degree distributions in Case 2 and Case 3, in Figure 5.3 and Figure 5.4 respectively.
Figure 5.3 shows the results with standard deviations for SVT codes with λ = (0, 0, 0, 0, 1.0, 0, 0, 0) and ρ = (0, 0, 0, 0, 0, 0.125, 0, 0.875), and Figure 5.4 for SVT codes with λ = (0, 0, 0, 0.3, 0.2, 0.5) and ρ = (0, 0, 0, 0, 0, 0.125, 0, 0.875). The last number in the legends of both figures denotes the number of receivers.
From Figure 5.3 and Figure 5.4, we can see that our model approximates the simulation results closely for both the single-receiver and 3-receiver cases. Furthermore, the results shown in these two figures indicate that the number of packets actually sent decreases as the number of receivers increases. This agrees with Equation 5.8 and Equation 5.10. Therefore, SDRT performs better when there are multiple receivers.
Figure 5.3: The computed values vs. the simulation results for λ = (0, 0, 0, 0, 1.0, 0, 0, 0) and ρ = (0, 0, 0, 0, 0, 0.125, 0, 0.875).
Figure 5.4: The computed values vs. the simulation results for λ = (0, 0, 0, 0.3, 0.2, 0.5) and ρ = (0, 0, 0, 0, 0, 0.125, 0, 0.875).
5.3.3.4 Effect of SVT Codes
In this set of simulations, we evaluate the effect of SVT codes on the performance of SDRT. We compare the performance of three protocols from three paradigms, namely SDRT, the data carousel approach, and the ARQ approach. We conduct simulations in both the single-receiver and multiple-receiver cases.
For the simple ARQ approach, data packets are not encoded at all. The sender sends a packet and waits for the ACK. If the sender has not received the ACK after some pre-defined time period, it retransmits the packet. We assume that the erasure probability of an ACK packet is one-fifth of that of a data packet. In the case of multiple receivers, all the potential receivers can overhear the packet and detect lost packets independently. If a node detects a packet loss, it sends back a negative ACK; otherwise, it sends back a positive ACK. The sender keeps retransmitting a packet until receiving positive ACKs from all the receivers and then repeats the process for the next packet. We count the number of ACK packets and data packets actually sent as the total number of packets sent.
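Under these assumptions, the expected number of data transmissions per packet for stop-and-wait ARQ has a simple closed form (a sketch: a round succeeds only if both the data packet and its ACK get through, with the ACK-loss probability p/5 assumed above):

```python
def arq_transmissions(p):
    """Expected data transmissions per packet for simple stop-and-wait ARQ
    with data-loss probability p and ACK-loss probability p / 5."""
    return 1.0 / ((1 - p) * (1 - p / 5))

for p in (0.1, 0.3, 0.5):
    print(p, round(arq_transmissions(p), 2))   # 2.22 transmissions at p = 0.5
```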
We also modify the Go-Back-N selective ARQ protocol to fit the half-duplex acoustic modem. In this protocol, the sender sends N packets at a time and waits for the ACK; the receiver replies with one ACK for all these N packets. In the ACK packet, the receiver specifies the IDs of the packets successfully received. After receiving an ACK, the sender sends another N packets, including the previously lost packets and some new packets. Since Go-Back-N selective ARQ is not suitable for the multiple-receiver case, we do not evaluate its performance in that case.
For the data carousel approach, data packets are not encoded, either. However,
there is no need for feedback in this approach. The sender keeps sending (a block of)
data packets in a random order until one of the receivers receives these data packets
successfully.
SDRT adopts SVT codes with the degree distribution λ = (0, 0, 0, 0, 1.0, 0, 0, 0). To examine the protocols under different channel conditions, we vary the erasure probability from 0.1 to 0.5. We check both the single-receiver case and the three-receiver case. The results are shown in Figure 5.5; the last number in the legend denotes the number of receivers. From this figure, we observe that when the erasure probability increases, the sending inefficiency also increases. SDRT outperforms the other approaches under all erasure probabilities. In the case of a single receiver, SDRT reduces the number of packets actually needed by more than half compared with the ARQ approach, and by even more in the three-receiver case. From Figure 5.5, we can also observe the effect of check packets: the benefit of using check packets is more significant when the erasure probability increases. Once again, Figure 5.5 demonstrates that more receivers actually reduce the number of packets sent for SDRT and the data carousel approach. For the ARQ approach, however, more receivers cause more packets to be sent.
We also evaluate the goodput of simple ARQ, Go-Back-N selective ARQ and SDRT in the case of a single receiver. In Go-Back-N selective ARQ, we set N = 10, i.e., each time the sender fills the pipe and waits for an ACK from the receiver. In SDRT, the confidence factor α = 0.8, i.e., the window size is 80% of the expected number of packets actually needed, and the waiting interval T_w = 1.34 s. The results are shown in Figure 5.6. As shown in this figure, SDRT outperforms simple ARQ and Go-Back-N
Figure 5.5: Number of packets sent vs. the erasure probability.

Figure 5.6: Goodput vs. the erasure probability.
selective ARQ under all the erasure probabilities. When the erasure probability is low, Go-Back-N selective ARQ improves the goodput significantly compared with simple ARQ; however, when the erasure probability increases, the goodput of Go-Back-N selective ARQ degrades because lost ACKs cause the sender to retransmit the N packets already sent.
To summarize, SVT codes reduce the number of packets sent under various erasure probabilities, thus reducing energy expenditure. On the other hand, SDRT also improves the goodput compared with simple ARQ and Go-Back-N selective ARQ.
5.3.3.5 Effect of Window Control

In this set of simulations, we compare two versions of SDRT: one without window control and one with the window control mechanism. For the SDRT with the window control mechanism, we set the confidence factor α = 1.0, 0.9 and 0.8 respectively, i.e., we set the actual window size to 100%, 90% and 80% of the expected value calculated from our model. For SDRT without window control, we assume that the acoustic modem is full-duplex. This is an unrealistic assumption favoring the SDRT
Figure 5.7: Effect of window control for a single receiver.

Figure 5.8: Effect of window control for 3 receivers.
without the window control mechanism. The distance between the sender and the receiver is set to 1000 m. We measure the number of packets actually sent in these two versions of SDRT. In the simulation setting, the erasure probability of the positive ACK is one-fifth of that of a data packet, and T_w is set to 1.5 seconds. The results for the single-receiver and 3-receiver cases are shown in Figure 5.7 and Figure 5.8, respectively.
From these figures, we can observe that when the window size is set to 80% of the calculated expected value, SDRT reduces the number of wasted packets the most. Even if α is set large, say 1.0, the window control still reduces the number of unnecessary packets compared with SDRT without window control. These two figures show that if we set α small, we can reduce or eliminate the wasted packets more effectively; on the other hand, we decrease the throughput significantly since we slow down the sending rate beyond the window.
5.4 Error Recovery with Network Coding

In this section, we present our network coding scheme for efficient error recovery in underwater sensor networks; background on network coding can be found in [28].
Figure 5.9: An example illustrating the benefits of using network coding in underwater
sensor networks. A packet crossed out means that the packet is lost.
In network coding, an intermediate node, called a relay, may encode several incoming packets into one or multiple packets and forward these encoded packets. Network coding is suitable for underwater sensor networks because (1) underwater sensor nodes are usually larger than land-based sensors and possess more computational capabilities [109]; (2) the broadcast property of acoustic channels naturally renders multiple highly interleaved routes from a source to a sink. The computational power at the sensor nodes coupled with the multiple routes provides a natural setting for network coding. For example, in Figure 5.9(a), a source sends packets A, B and C to a receiver. These packets reach relays R_1, R_2 and R_3 because of the broadcast property of the acoustic channel. Relay R_1 receives packets A and C and encodes them into packets Y_11 and Y_12. Similarly, relays R_2 and R_3 encode their incoming packets into packets Y_21, Y_22 and Y_31, Y_32, respectively. The receiver receives the three encoded packets Y_11, Y_21, and Y_31. When using random linear coding [35], the receiver can recover the three original packets with high probability. Figure 5.9(b) illustrates the result when the relays simply forward the incoming packets without using network coding and discard duplicated packets. In this case, the receiver may fail to collect all three original packets. Our scheme couples network coding and VBF and provides much better error recovery ability than using VBF alone.
We now describe our scheme to apply network coding in underwater sensor networks. To achieve a good balance between error recovery and energy consumption at the sensor nodes, our scheme carefully couples network coding and VBF. In the following, we first describe how to apply network coding (we use randomized linear coding [35] due to its simplicity) given a set of paths from a source to a sink. We then describe how to adapt VBF to construct paths suitable for network coding.
Scheme Description
Packets from the source are divided into generations, with each generation containing K packets. Let X_1, X_2, . . . , X_K denote the K packets in a generation. The source linearly combines these K packets to compute K′ outgoing packets, denoted as Y_1, Y_2, . . . , Y_K′, where Y_i = Σ_{j=1}^{K} g_ij X_j. The coefficient g_ij is picked randomly from the finite field F_2^8. The set of coefficients (g_i1, . . . , g_iK) is referred to as the encoding vector of Y_i and is carried in the packet header.
Figure 5.10: Illustration of routing (nodes in a dashed circle form a relay set).
The source sends out K′ > K encoded packets to reduce the impact of packet loss on the first hop (which cannot be recovered at later hops).
A relay on the forwarding paths stores incoming packets from different routes in a local buffer for a certain period of time, then linearly combines the buffered packets belonging to the same generation. Since transmitting linearly dependent packets is not useful for decoding at the sink, relay r computes fresh random combinations of its buffered packets, Y_i^r = Σ_{j=1}^{M_r} h_ij^r X_j^r, where X_1^r, . . . , X_{M_r}^r are the M_r buffered packets and h_ij^r is picked randomly from the finite field F_2^8. When the sink receives K packets with linearly independent encoding vectors, it can recover the K original packets of the generation (e.g., by Gaussian elimination).
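To make the encode/decode steps concrete, here is a minimal, self-contained sketch of randomized linear coding over F_2^8 (it illustrates the technique of [35]; it is not the thesis implementation, and relay re-encoding is omitted):

```python
import random

# Arithmetic in GF(2^8) with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def ginv(a):                       # a^254 = a^-1 in GF(2^8)
    r, e = 1, 254
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def encode(packets, rng):
    """One random linear combination of a generation, byte by byte."""
    g = [rng.randrange(256) for _ in packets]          # encoding vector
    y = [0] * len(packets[0])
    for gj, x in zip(g, packets):
        for t, xb in enumerate(x):
            y[t] ^= gmul(gj, xb)
    return g, y

def decode(coded, K):
    """Gaussian elimination over GF(2^8); coded = list of (g, y) pairs."""
    rows = [list(g) + list(y) for g, y in coded]
    for col in range(K):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = ginv(rows[col][col])
        rows[col] = [gmul(inv, v) for v in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [v ^ gmul(f, w) for v, w in zip(rows[r], rows[col])]
    return [bytes(rows[i][K:]) for i in range(K)]

rng = random.Random(7)
gen = [b"AAAA", b"BBBB", b"CCCC"]                      # K = 3 original packets
coded = [encode(gen, rng) for _ in range(5)]           # K' = 5 coded packets
rng.shuffle(coded)                                     # relays/losses reorder them
recovered = decode(coded, K=3)
print(recovered)
```

Any 3 linearly independent coded packets suffice here; sending K′ = 5 tolerates losses, which is the redundancy knob the adaptation schemes below tune.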
The efficiency of network coding relies on the quality of the underlying paths determined by the routing protocol. We next discuss the conditions under which network coding is efficient (in terms of both error recovery and energy consumption) and how to adapt VBF to construct such a multi-path. The source broadcasts a packet to its downstream neighbors (nodes within its transmission range and in the forwarding paths), referred to as a relay set.
Nodes in the relay set further forward the packet to their neighbors, forming another relay set. Intuitively, a multi-path suitable for network coding should contain a similar number of nodes in each relay set. This is because a relay set with too few nodes may not provide sufficient redundancy, while a relay set with too many nodes wastes energy. We propose two schemes to improve the efficiency of network coding. In both schemes, a node uses the number of its downstream neighbors to approximate the size of the next relay set. This is because the number of downstream neighbors can be obtained through localization services (e.g., [14]) and localized communication between a node and its neighbors, while the exact size of the next relay set is difficult to obtain directly.
The first scheme requires that sensor nodes be equipped with multiple levels of transmission power [103]. A node selects a transmission power so that the estimated number of downstream neighbors is between N_l and N_u, where N_l and N_u are the lower and upper bounds on the desired relay set size; the bounds can be tuned through adaptation. The second scheme does not require multiple levels of transmission power for a node (i.e., each node has a fixed transmission range). In this scheme, a node adapts the amount of redundancy that it injects into the network. More specifically, a node with fewer than N_l′ downstream neighbors encodes more outgoing packets to increase the amount of redundancy. Similarly, a node with more than N_u′ downstream neighbors encodes fewer outgoing packets to reduce the amount of redundancy (we only reduce the number of outgoing packets when the coefficient matrix has full rank K, to reduce the risk of permanent information loss). We investigate how to choose N_l′ and N_u′ using simulations.
5.4.2 Analysis
We now analytically study the performance of our error-recovery scheme. Our goal is to estimate the expected number of transmissions required for the sink to recover a generation.
5.4.2.1 Model
We assume that there are H relay sets from the source to the sink, indexed from
1 to H, as shown in Figure. 5.10. The sink is in the H-th relay set. Let N i be the
number of relays in the i-th relay set. For simplicity, we assume that the relay sets
do not intersect. Furthermore, a node in a relay set can receive from all nodes in the
previous relay set. Last, a node only uses packets forwarded from its previous relay set
(i.e., packets received from nodes in the same relay set are discarded). We derive the
T_i denote the average number of times that it is transmitted from the nodes in the previous relay set (or the source) to those in the i-th relay set. Then
T = \frac{\sum_{i=1}^{H} T_i}{R}    (5.14)
We assume that the acoustic channels have a bit error rate of p_b and that all links suffer the same bit error rate. Let p be the probability that a packet has a bit error.

Then p = 1 - (1 - p_b)^L for independent bit errors and a packet size of L bits. We next
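This packet-error relation is a one-liner; a quick sketch (the function name is ours, and the example uses the 50-byte, i.e. 400-bit, packets from the simulations):

```python
def packet_error_prob(p_b, L):
    """Probability that an L-bit packet contains at least one bit error,
    assuming independent bit errors with bit error rate p_b."""
    return 1.0 - (1.0 - p_b) ** L

# 50-byte (400-bit) packets, bit error rate 10^-4:
p = packet_error_prob(1e-4, 400)
```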
when a sink receives at least K packets in the generation, the probability that it can
recover the K original packets is high for a sufficiently large finite field [35]. Therefore,
for simplicity, we assume that the sink recovers the K original packets as long as it
receives at least K packets in the generation. We do not differentiate nodes in the same
relay set. Let \rho_{i,k} be the probability that a node in the i-th relay set receives k packets (when 0 \le k < K) or at least k packets (when k = K) from all nodes in the previous relay set, 1 \le i \le H. Since the sink is in the H-th relay set and the generation is
We next derive \rho_{i,k}, 1 \le i \le H, 0 \le k \le K. The nodes in the first relay set receive

For i \ge 1, 0 \le k < K, we obtain \rho_{i+1,k} as follows. We index the nodes in the i-th relay set from 1 to N_i. Let \rho_{i,j,k} denote the probability that a node in the i-th relay set receives k packets from the j-th node in the previous relay set, 1 \le i \le H, 1 \le j \le N_{i-1},
For a node in the (i + 1)-th set, let k_j be the number of packets that it receives from the j-th node in the previous relay set. To obtain \rho_{i+1,k}, we need to consider all
combinations of k_j's such that \sum_{j=1}^{N_i} k_j = k, with k_j = 0, \ldots, k. That is,

\rho_{i+1,k} = \sum_{k_j = 0, \ldots, k \;:\; \sum_{j=1}^{N_i} k_j = k} \;\prod_{j=1}^{N_i} \rho_{i+1,j,k_j}    (5.17)

For a small generation size K, the above quantity is easy to compute. We use small K

From the above, we calculate R = \rho_{H,K} as follows. We first obtain \rho_{1,k}, which is used to compute \rho_{2,j,n} and \rho_{2,k}, 0 \le k \le K. This process continues until eventually \rho_{H,K} is obtained.
from (5.14).
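The recursion above amounts to convolving per-sender receive distributions while merging all counts of K or more into a single state (matching the "at least K" convention). A sketch under the model's assumptions (helper names are ours; per-sender distributions are taken as binomial for illustration):

```python
from math import comb

def binom_pmf(m, q):
    """P(receive exactly r of m transmitted packets), each arriving w.p. q."""
    return [comb(m, r) * q**r * (1 - q)**(m - r) for r in range(m + 1)]

def receive_dist(per_sender_dists, K):
    """Distribution of the number of packets (capped at K) one node receives.

    per_sender_dists: list of lists, dist[m] = P(receive m packets from
    that sender), senders independent.  Returns rho[k] for k = 0..K,
    where rho[K] is the probability of receiving at least K packets.
    """
    rho = [1.0] + [0.0] * K              # start: surely 0 packets received
    for dist in per_sender_dists:
        new = [0.0] * (K + 1)
        for k, pk in enumerate(rho):
            for m, pm in enumerate(dist):
                new[min(K, k + m)] += pk * pm   # cap the count at K
        rho = new
    return rho
```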
We next discuss our error-recovery scheme based on our model. The bit error rate is in the range of 10^{-4} to 1.5 x 10^{-3} to account for the potentially high loss rates in underwater sensor networks (e.g., due to fast channel fading). For network coding, a generation contains 3 packets (i.e., K = 3). The source transmits K' = 5 packets. For network coding, we set the number of relay sets, H, to 7 or 9, and assume all relay sets contain
[Figure 5.11: Successful delivery ratio of network coding vs. bit error rate (x 10^{-3}), for N = 2 and N = 3.]
Figure 5.11 plots the successful delivery ratio for N = 2 and N = 3 when H = 9.
We observe that when the number of nodes in each relay set, N , is decreased from
3 to 2, the successful delivery ratio of network coding drops sharply. This implies
that a node should have 3 downstream neighbors for efficient error recovery (under our
source and sink are deployed at a bottom corner and a surface corner, respectively, on the diagonal of the cube. Other sensor nodes are randomly deployed in this area. Each
The multiple routes from the source to the sink are determined by VBF with a 150 m
routing pipe [109]. In VBF, a routing pipe is a pipe centered around the vector from
the source to the sink. Nodes inside the routing pipe are responsible for routing packets
from the source to the sink; nodes outside the routing pipe simply discard all incoming
packets. Each packet is 50 bytes. For network coding, each generation contains K = 3 packets; the source outputs K' = 5 packets and each relay outputs at most 3 packets for each encoded generation. We choose the finite field F_{2^8} [35]. Therefore, each packet
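Randomized linear coding over F_{2^8} can be sketched as follows (the reduction polynomial and all names are our choices for illustration; the text only fixes the field size, K = 3 and K' = 5):

```python
import random

def gf_mul(a, b, poly=0x11b):
    """Multiply two bytes in GF(2^8); poly is an irreducible reduction
    polynomial (our choice -- the dissertation only names the field)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def encode_generation(packets, n_out, rng=random):
    """From K source packets (equal-length byte lists), build n_out coded
    packets, each carrying its random coefficient vector."""
    K, L = len(packets), len(packets[0])
    coded = []
    for _ in range(n_out):
        coeffs = [rng.randrange(256) for _ in range(K)]
        payload = [0] * L
        for c, pkt in zip(coeffs, packets):
            for i, byte in enumerate(pkt):
                payload[i] ^= gf_mul(c, byte)   # GF(2^8) linear combination
        coded.append((coeffs, payload))
    return coded

# K = 3 source packets, K' = 5 coded packets, as in the evaluation:
coded = encode_generation([[1, 2], [3, 4], [5, 6]], 5)
```

A sink that collects any K coded packets with linearly independent coefficient vectors can recover the generation by Gaussian elimination over the same field.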
In the simulator, we equip sensors with the commercial UWM1000 modem from LinkQuest [55]. For this modem, the power consumption in transmit, receive and sleep modes is 2 W, 0.75 W and 0.008 W, respectively. We use the consumed energy in Joules to calculate the normalized energy consumption, which is the total consumed energy normalized to the successful delivery ratio. Every sensor transmits at a bit rate of 10 kbps. We use the broadcast MAC in our simulation. In the broadcast MAC, when a node sends a packet, it senses the channel first. If the medium is busy, the node backs off, up to a maximum of four times.
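With these modem figures, the normalized energy consumption metric can be computed as in this sketch (function and dictionary names are ours; the mode durations in the usage example are made up):

```python
# UWM1000 modem power draw: transmit 2 W, receive 0.75 W, sleep 0.008 W.
POWER_W = {"tx": 2.0, "rx": 0.75, "sleep": 0.008}

def normalized_energy(mode_seconds, delivery_ratio):
    """Total consumed energy (Joules) normalized to the successful
    delivery ratio, as used in the evaluation.

    mode_seconds: dict mapping mode -> total seconds spent in that mode.
    """
    total_j = sum(POWER_W[m] * s for m, s in mode_seconds.items())
    return total_j / delivery_ratio

# e.g. 10 s transmitting, 40 s receiving, 300 s sleeping, 80% delivered:
e = normalized_energy({"tx": 10, "rx": 40, "sleep": 300}, 0.8)
```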
The source generates data at a constant rate of 1 packet/second and stops after 300 seconds, while the whole network stops 50 seconds later to guarantee that no packets remain on the paths. In the network coding scheme, the source waits for K packets to form a generation and then sends them out back-to-back. Relay nodes delay for a random time interval (between 3 and 4 seconds) to wait for the arrival of packets belonging to the same generation, then generate new independent encoded packets and forward them back-to-back. Late packets of a generation are simply discarded. In VBF, all nodes forward packets after a similar delay interval once they are generated or received.
In grid random deployment, the target area is divided into grids; a number of nodes are randomly deployed in each grid.
[Figure 5.12: Successful delivery ratio (a) and normalized energy consumption (b) vs. packet loss rate for network coding and VBF (simulation).]
We present the results under grid random deployment from the simulator. The target area is divided into 125 grids, each of 20 m x 20 m x 20 m. Each grid contains 2 nodes, randomly distributed in the grid. We set the transmission power and pipe radius of a node to cover 3 to 4 downstream neighbors. This is achieved when each node uses a transmission range of 300 m and a pipe radius of 150 m. In the rest of this section, we
Figures 5.12 (a) and (b) plot the successful delivery ratio and normalized energy consumption (in Joules) for network coding and multi-path forwarding (VBF), with 95% confidence intervals from 25 runs. Each block adds \lceil 28/9 \times 3 \rceil - 3 = 7 redundant packets, since the routing pipe used in network coding and multi-path forwarding contains 28 nodes. This is a little more redundancy than that of network coding. We observe that network coding provides significantly better error recovery than VBF at high packet loss rates, but VBF is better when the packet loss rate is very
Figure 5.13: Successful delivery ratio and normalized energy consumption under grid
random deployment: (a) Transmission range is 300 m, (b) Pipe radius is 150 m.
low (lower than 0.1). This is because under a low packet loss rate, packets are not easily lost in the channel, but the network coding scheme causes more collisions and leads to packet losses, since it sends a generation in a burst. However, network coding performs better under high error rates, which is an important
We next compare the energy efficiency in the simulation results. Figure 5.12 (b) shows the normalized energy consumption in Joules for network coding and VBF. Under a low packet loss rate, VBF is more efficient than network coding, because network coding introduces more collisions and achieves a slightly lower successful delivery ratio. However, when the packet loss rate increases, network coding becomes more efficient
We next verify that a node should have 3 to 4 downstream neighbors for efficient network coding, as indicated by the analytical results.
For this purpose, we either fix the transmission range to 300 m and vary the pipe
radius, or fix the pipe radius to 150 m and vary the transmission range. The results are plotted in Figures 5.13 (a) and (b) respectively, where the packet loss rate is 0.4.

Figure 5.14: Grid random deployment: performance comparison under different node failure probabilities.
In both cases, we observe that a good balance between successful delivery ratio and
normalized energy consumption is achieved when the transmission range is 300 m and
the pipe radius is 150 m (i.e., when a node has 3 to 4 downstream neighbors). A too
small pipe radius or transmission range cannot include a sufficient number of downstream neighbors to achieve a high delivery ratio. A too large pipe radius or transmission range is not preferred either, because it cannot improve the performance significantly,
while introducing too much unnecessary redundancy and hence wasting energy.
Node failures occur for many reasons, such as fouling, corrosion and battery depletion [2]. We next investigate the impact of node failures when using network coding.
Figure 5.14 compares the performance of network coding and VBF under different
node failure probabilities. Failed nodes are selected randomly at the beginning of the simulations. As shown in the figure, the network coding scheme outperforms VBF
in terms of both successful delivery ratio and energy efficiency most of the time. This again confirms that network coding is more suitable for efficient error recovery in underwater sensor networks.
free MAC and does not implement the real packet scheduling process or bandwidth limitation. All sensor nodes send packets one by one, and each time they send a whole
energy consumption. The topology adopted in the simulation setting is uniform random
Under uniform random deployment, we find that using the same transmission range
and pipe radius for all the nodes cannot ensure 3 to 4 downstream neighbors for each
node. We therefore allow a node to adjust its transmission range or the amount of
We first present the results under transmission-range adaptation. The pipe radius is set to 150 m. A node sets its transmission range to have 3 to 4 downstream neighbors (the transmission range of the nodes varies from 100 to 400 m). Figure 5.15 (a) plots the successful delivery ratio under network coding. The confidence intervals are from 20 simulation runs. For comparison, we also plot the successful delivery ratio when all nodes use a transmission range of 300 m, which is significantly lower than that
is effective for error recovery. Furthermore, the normalized energy consumption with
transmission-range adaptation is lower than that when all nodes use the same trans-
We next present the results when all nodes use the same transmission range of 300 m and adjust the amount of redundancy according to the number of downstream neighbors. In Figure 5.15 (b), a node adds one more outgoing packet when it has fewer than 3 downstream neighbors and removes an outgoing packet when it has more than
5.5 Summary
first review the challenges for reliable data transfer in M-UWSNs. We then present our
SDRT groups data packets into blocks and encodes each block with efficient FEC codes. SDRT forwards data from the source to the destination block by block, and forwards each block hop by hop. We propose a mathematical model in SDRT to estimate the number of data packets actually needed by the receiver. The model
enables SDRT to set an appropriate block size to adapt to the highly dynamic network topology, and to adopt a window control mechanism to further reduce the energy consumption resulting from the long round-trip time (RTT) of acoustic channels. We evaluate the performance of our model and SDRT by extensive simulations. Our model approximates the simulation results well under various settings. The simulation results also
In the network coding approach, we carefully couple randomized linear codes with
vector-based forwarding (VBF). We evaluate this approach by simulations, and the re-
sults show that applying network coding in VBF improves the success rate significantly
Chapter 6
6.1 Conclusions
activities. However, the significant differences between M-UWSNs and terrestrial sensor networks pose many new challenging problems for network design.
underwater sensor networks, namely, medium access control, multi-hop routing and
reliable transfer. Our contributions are two-fold: (1) Protocol design and evaluation.
We address the problems caused by the characteristics of M-UWSNs and propose sev-
eral feasible protocols, namely R-MAC, VBF, HH-VBF, VBF coupled with network
coding, VBVA and SDRT. (2) A simulator for underwater sensor networks. This simulator is an integral part of the widely used simulator NS-2 and provides a research platform
bandwidth and mobility of most sensor nodes in M-UWSNs make it very difficult
to design an energy-efficient MAC protocol. We first propose mathematical models

R-MAC for underwater sensor networks where data traffic is unevenly distributed. We prove that R-MAC resolves both the hidden terminal problem and the exposed terminal problem. Moreover, the simulation results show that R-MAC is energy efficient and fair. In this dissertation, we only address the MAC design problems caused by the long propagation delays in static underwater sensor networks. We expect that our work on MAC serves as a solid basis for the MAC
lays, highly dynamic network topology and more noisy communication channels.
These properties bring the challenges for routing protocol design. We propose a
robust, scalable and energy efficient routing protocol, VBF, to address these prob-
for both VBF and HH-VBF to allow them to adapt to the uneven deployment of
of VBF and HH-VBF. The results show that VBF and HH-VBF handle mobility
the worst case for void avoidance. The voids in M-UWSNs are 3-dimensional
simulation results show that VBVA handles voids in M-UWSNs effectively and
efficiently.
3. Reliable Data Transfer: The reliable transfer design for M-UWSNs faces the problems caused by the long propagation delays and highly dynamic network topology
of M-UWSNs. We address these problems with two different approaches. The first
hybrid of ARQ and FEC. We also develop a model for the coding scheme adopted in SDRT to estimate the number of packets actually needed. The model allows
the sender to reduce the energy waste caused by the long propagation delays and
enables SDRT to set an appropriate block size to facilitate handling of the highly
dynamic network topology. The simulation results show that SDRT is more en-
ergy efficient compared with approaches proposed for terrestrial sensor networks.
fully couple a network coding scheme with a multi-path forwarding protocol. The
simulation results show that applying network coding improves the data packet
delivery ratio with low energy cost in the harsh underwater environments.
level simulator, Aqua-Sim, for underwater sensor networks. Aqua-Sim can simulate the propagation and attenuation of acoustic signals and packet collisions
with feasible algorithms on medium access control, multi-hop routing and reliable data
transfer for underwater sensor networks. We also provide the research community the

Our work in this dissertation addresses the network design problems of medium access control, multi-hop routing and reliable data transfer. However, it is still far from complete. There is much work to be done in these three aspects of
In medium access control, as near future work, we need to improve the performance
assumes that communication channels are error-free, which is not true in the real world. Noisy channels significantly degrade the channel utilization of R-MAC. It is worth investigating applying FEC coding schemes in R-MAC to improve
to design a medium access control protocol for M-UWSNs. There is no satisfactory MAC for M-UWSNs yet. The long propagation delays and mobility of sensor nodes in M-UWSNs make coordination among sensor nodes very difficult, if not impossible. It
nodes in M-UWSNs and integrate it with multi-hop routing and void avoidance proto-
cols. Our work in this dissertation does not consider the pattern of node mobility. If we can successfully predict the locations of sensor nodes based on some mobility pattern, we can use this prediction in routing and void avoidance, thus improving the performance of routing protocols and void-avoidance protocols while reducing the energy consumption.
In reliable data transfer, we do not address the congestion control and avoidance in
probe congestion control and avoidance in M-UWSNs. Due to the characteristics of M-UWSNs, the commonly used methods for wired networks and terrestrial sensor networks are infeasible; we need to design a feasible algorithm to detect, control and
The project of our underwater sensor network simulator, Aqua-Sim, is at its final stage. Most functions have been implemented and tested. However, there is still room to improve the conceptual design of Aqua-Sim. As near-future work, the traffic generators, such as uw_sink and uw_sink_vbva, can easily be combined into one module and become independent of the underlying routing protocols. Furthermore, all the variants of VBF should be independent of each other and able to co-exist concurrently. Most importantly, we plan to evaluate Aqua-Sim with real experiment data and publish Aqua-Sim.
Appendix A
A.1 Overview
tion models into the simulation of underwater acoustic networks [90, 105]. To the best of our knowledge, however, no packet-level underwater sensor network simulator has been published yet. There are several widely used packet-level network simulators, such as NS-2 [69] and OPNET [71], but they were developed for radio wireless and/or wired networks, not for underwater sensor networks. They cannot be used for underwater sensor networks without significant modifications, for the following reasons: First, the propagation of acoustic signals in water is very slow (around 1500 m/s), significantly different from that of radio signals. Second, the acoustic signal attenuation model is different from that of radio signals. Third, underwater sensor networks are usually
ment. These characteristics of underwater sensor networks make the existing network
simulators infeasible.
We have developed a simulator, called Aqua-Sim, for underwater sensor networks. We choose NS-2 [69] as our platform since NS-2 is a very powerful simulator, easily accessible and widely used. NS-2 provides efficient and convenient methods to configure the network and the nodes in it. Two languages are used in NS-2: C++ and OTcl. The user can use OTcl scripts to easily tune the parameters of protocols and algorithms implemented in C++. NS-2 also enables us to take advantage of the abundance of existing source code. NS-2 is widely used by the research community and abounds with wireless protocols and utilities that are potentially useful for underwater sensor networks. Most of all, the source code of NS-2 is open and easily accessible.
Aqua-Sim can simulate the attenuation and propagation of acoustic signal and
tions of NS-2, it can easily be integrated with all the existing codes of NS-2. In the
A(l, f) = l^k \left(10^{\alpha(f)/10}\right)^l    (A.20)

where l is the propagation distance and k is the spread factor, determined by the spreading type of the acoustic signal: k is set to 1 for cylindrical spreading, 1.5 for practical spreading, and 2 for spherical spreading. The absorption coefficient \alpha(f) can be expressed by Thorp's equation [8] as
\alpha(f) = \frac{0.11 f^2}{1 + f^2} + \frac{44 f^2}{4100 + f^2} + 2.75 \times 10^{-4} f^2 + 0.003    (A.21)
where f is in kHz.
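Equations (A.20) and (A.21) translate directly into code. A sketch (function names are ours; we assume l in km and α(f) in dB/km, the units in which Thorp's formula is usually stated):

```python
import math

def thorp_alpha(f_khz):
    """Thorp's absorption coefficient (dB/km) for frequency f in kHz,
    as in (A.21)."""
    f2 = f_khz * f_khz
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

def attenuation_db(l_km, f_khz, k=1.5):
    """Total attenuation of (A.20) in dB:
    10 log A(l, f) = 10 k log l + l * alpha(f),
    with spread factor k (1.5 = practical spreading)."""
    return 10.0 * k * math.log10(l_km) + l_km * thorp_alpha(f_khz)
```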
The propagation speed of acoustic signal adopted in Aqua-Sim is 1500 m/s. When
a node sends a packet, Aqua-Sim first finds all the nodes in the transmission range of
the sender in the 3D space, then computes the propagation time of the packet to reach
delay equal to the propagation time of the packet. Aqua-Sim provides the user two ways to determine the transmission range: one is to set the transmission power and the threshold of the received power level; the other is to set the transmission range directly.
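The propagation-time computation described above can be sketched as follows (function names are ours; positions are 3D coordinates in meters):

```python
import math

SOUND_SPEED = 1500.0  # m/s, as adopted in Aqua-Sim

def propagation_delay(sender, receiver):
    """Propagation time (s) of a packet between two 3D positions (m)."""
    return math.dist(sender, receiver) / SOUND_SPEED

def reachable(nodes, sender, tx_range):
    """Nodes within the sender's transmission range, with their delays."""
    return [(n, propagation_delay(sender, n)) for n in nodes
            if 0 < math.dist(sender, n) <= tx_range]
```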
late the packet collisions. When packets overlap at the receiver side, their received power levels, derived from the attenuation model, are compared. If the difference in received power levels is less than a pre-defined threshold, these packets are considered collided; otherwise, the packet with the highest received power level is considered collision-free. In Aqua-Sim, the transmission power, the threshold of received energy for the receiver to sense the packet and the threshold of received energy for the receiver are implemented as classes in C++. Currently, almost all the source code in Aqua-Sim is organized in one directory, called underwatersensor. Under this directory, all the
codes are further organized into four sub-directories, namely uw_common, uw_mac,
3-dimensional space. Moreover, this class can also simulate node failures and carrier sense. The class uw_sink_vbva is used to generate the traffic for vector-based void avoidance (VBVA), and class uw_sink is used to generate traffic for vector-based forwarding (VBF). Both of these classes support exponential data traffic and constant
All classes related to the acoustic channel are contained in the folder uw_mac, namely
mac. Class underwaterphy allows the user to power on/off the transmitter/receiver.
the packet collisions. The packet corruption and channel failure are actually simulated
in this class. All the MAC protocols proposed for underwater sensor networks must be derived classes of class underwatermac. In class underwatermac, there are five
virtual functions,
3. recv(Packet*, Handler*) is the method to receive packets from the upper layer and the lower layer.
These methods are defined to be virtual so that each subclass of class underwatermac can implement its own version. Currently, there are broadcast MAC, R-MAC and T-MAC, which are defined in broadcast.cc(h), rmac.cc(h) and tmac.cc(h), respectively.
The protocols of R-MAC and T-MAC are introduced in Chapter 3. The broadcast MAC is the MAC protocol we developed for mobile underwater sensor networks. In broadcast MAC, when a node wants to send data, it senses the channel first. If the channel is busy, the node backs off; otherwise, it sends the data packet.
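The broadcast MAC's send logic can be sketched as follows (a sketch, not Aqua-Sim's actual C++ code; the uniform backoff interval is our assumption, and the four-backoff limit matches the broadcast MAC used in the simulations of Chapter 5):

```python
import random

def broadcast_mac_send(channel_busy, transmit, max_backoffs=4,
                       backoff_range=(0.5, 1.5), rng=random):
    """Carrier-sense-and-backoff send logic of the broadcast MAC.

    channel_busy: callable returning True while the channel is sensed busy.
    transmit: callable that actually transmits the packet.
    Returns the total backoff delay if the packet was sent, or None if it
    was dropped after max_backoffs backoffs.
    """
    waited = 0.0
    for attempt in range(max_backoffs + 1):
        if not channel_busy():
            transmit()               # channel idle: send immediately
            return waited
        if attempt == max_backoffs:
            return None              # give up after max_backoffs backoffs
        waited += rng.uniform(*backoff_range)   # busy: back off and retry
```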
The routing protocols and auxiliary classes are grouped in the folder uw_routing. At present, there are vector-based forwarding (VBF), hop-by-hop vector-based forwarding (HH-VBF), an enhanced version of VBF with network coding, and vector-based void avoidance
All the Tcl scripts to run these protocols are organized in the directory uw_routing. Under this directory, there are examples showing how to use the protocols implemented in C++.
Bibliography
tion in underwater acoustic sensor networks. ACM Sigbed Review, vol. 1, no. 2,
July 2004.
2006.
sensor networks. IEEE Communications Magazine, Vol. 40, No. 8, pp. 102-114,
August 2002.
[7] T. C. Austin, R. P. Stokey, and K. M. Sharp. PARADIGM: A Buoy-based System
[10] D. Braginsky and D. Estrin. Rumor Routing Algorithm For Sensor Networks. In
[15] S. Chen, G. Fan, and J. Cui. Avoid Void in Geographic Routing for Data Ag-
[16] J.-H. Cui, J. Kong, M. Gerla, and S. Zhou. Challenges: Building Scalable Mobile
[21] Q. Fang, J. Gao, and L. Guibas. Locating and Bypassing Routing Holes in Sensor
Networks. In Proc. of IEEE INFOCOM 2004, Hong Kong, China, March 2004.
[24] L. Freitag, M. Grund, S. Singh, J. Partan, P. Koski, and K. Ball. The WHOI
[28] Z. Guo, B. Wang, and J.-H. Cui. Efficient Error Recovery Using Network Coding
[29] Z. Guo, P. Xie, J.-H. Cui, and B. Wang. On Applying Network Coding to
tion with ACM MobiCom06, Los Angeles, California, USA, September 2006.
[30] J. Heidemann, W. Ye, J. Wills, A. Syed, and Y. Li. Research Challenges and
tions and Networking Conference, Las Vegas, Nevada, USA, April 2006.
[32] M. Heissenbuttel, T. Braun, T. Bernoulli, and M. Walchi. BLR: Beacon-Less
[33] J. Hill and D. E. Culler. Mica: a wireless platform for deeply embedded networks.
[34] J. Hill, R. Szewcyk, A. Woo, D. Culler, S. Hollar, and K. Pister. System ar-
fluxes in the tidal freshwater Hudson river. Estuaries, 19, 848-865, 1996.
able and Robust Communication Paradigm for Sensor Networks. In ACM In-
[39] D. A. Jay, W. R. Geyer, R. J. Uncles, J. Vallino, J. Largier, and W. R. Boynton.
Networks. In IEEE Computer Society, Santa Cruz, CA, USA, December 1994.
[41] P. Juang, H. Oki, Y. Wang, M. Martonosi, L.-S. Peh, and D. Rubenstein. Energy-
efficient computing for wildlife tracking: design tradeoffs and early experiences
[42] P. Karn. MACA - A New Channel Access Method for Packet Radio. In
140, 1990.
[43] B. Karp and H. Kung. GPSR: Greedy Perimeter Stateless Routing for Wireless
ganic carbon balance and net ecosystem metabolism in Chesapeake Bay. Marine
[45] D. B. Kilfoyle and A. B. Baggeroer. The State of the Art in Underwater Acoustic
munication and Networks (SECON04), Santa Clara, CA, October 4-7 2004.
[47] S. Kim, R. Fonseca, and D. Culler. Reliable Transfer on Wireless Sensor
munication and Networks (SECON04), Santa Clara, CA, USA, October 4-7
2004.
[50] K. B. Kredo and P. Mohapatra. A Hybrid Medium Access Control Protocol for
14 2007.
tocols, and Applications, Chapter 4, ed. D. J. Cook and S. K. Das, John Wiley,
[54] S.-Y. R. Li, R. W. Yeung, and N. Cai. Linear Network Coding. IEEE Transac-
[55] LinkQuest. http://www.link-quest.com/.
[60] M. Molins and M. Stojanovic. Slotted FAMA: A MAC Protocol for Underwater
[61] J. Morris. Optimal Blocklength for ARQ Error Control Schemes. IEEE Trans-
Press, 2000.
[65] D. Niculescu and B. Nath. Trajectory Based Forwarding and its Application. in
[66] D. Niculescu and B. Nath. Trajectory Based Forwarding and Its Application. In
[68] S. W. Nixon, S. Granger, and B. Nowicki. An assessment of the annual mass bal-
[75] J. Polastre, J. Hill, and D. Culler. Versatile Low Power Media Access for Wireless
2004.
Networks. IEEE Communications Magazine, vol. 39, no. 11, pp. 114-119, Nov.
2001.
[78] I. Reed and G. Solomon. Polynomial Codes Over Certain Finite Fields. In
Journal of the Society for Industrial and Applied Mathematics, 8:300-304,
June 1960.
[79] I. Rhee, A. Warrier, M. Aja, and J. Min. Z-MAC: A Hybrid MAC for Wireless
2005.
ber 2000.
[81] V. Rodoplu and M. K. Park. An Energy-Efficient MAC Protocol for Underwater
[83] A. Sastry. Improving Automatic Repeat-Request (ARQ) Performance on Satellite
Channels under High Error Rate Conditions. IEEE Trans. Commun., 27(4):436
2001.
with Signaling for Ad Hoc Networks. In ACM Computer Comm. Rev., pages
IEEE Journal of Oceanic Engineering, vol. 25, no. 1, pp. 72-83, Jan. 2000.
nication and Digital Signal Processing Center, Boston, MA, USA, 1999.
[91] F. Stann and J. Heidemann. RMST: Reliable Data Transport in Sensor Networks.
[92] I. Stojmenovic and X. Lin. Loop-Free Hybrid Single-Path/Flooding Routing Al-
[93] A. Syed, W. Ye, and J. Heidemann. T-Lohi: A New Class of MAC Protocol for
[94] A. A. Syed and J. Heidemann. Time Synchronization for High Latency Acoustic
Water, and Soil. John Wiley and sons, NY, page 501, 1979.
Mobile Radio Systems. IEEE Trans. Communication, pages 68-71, Jan. 1981.
for Wireless Sensor Networks. In ACM SenSys03, Los Angeles, California, USA,
November 2003.
September 2002.
[101] Y. Wang, J. Gao, and J. S. B. Mitchell. Boundary Recognition in Sensor Networks
2006.
[103] J. Wills, W. Ye, and J. Heidemann. Low-Power Acoustic Modem for Dense
CA, 2006.
[106] P. Xie and J.-H. Cui. Exploring Random Access and Handshaking Techniques
[107] P. Xie and J.-H. Cui. An FEC-based Reliable Data Transport Protocol for
2007.
[108] P. Xie and J.-H. Cui. R-MAC: An Energy-Efficient MAC Protocol for Underwater
- 3 2007.
[109] P. Xie, J.-H. Cui, and L. Lao. VBF: Vector-Based Forwarding Protocol for
[111] Y. Xu, W.-C. Lee, J. Xu, and G. Mitchell. PSGR: Priority-based Stateless Geo-
November 2005.
[112] F. Ye, H. Luo, J. Cheng, S. Lu, and L. Zhang. A Two-tier Data Dissemination
[113] F. Ye, G. Zhong, S. Lu, and L. Zhang. GRAdient Broadcast: A Robust Data
Delivery Protocol for Large Scale Sensor Networks. ACM Wireless Networks
(WINET), Vol.11(2), March 2005. The earlier version appears in IPSN 2003.
[114] W. Ye, J. Heidemann, and D. Estrin. Medium Access Control With Coordinated
[115] H. Zhang, A. Arora, Y. Choi, and M. G. Gouda. Reliable Bursty Convergecast
[116] Y. Zhang and L. Cheng. A Distributed Protocol for Multi-hop Underwater Robot
[117] Z. Zhou, J.-H. Cui, and A. Bagtzoglou. Scalable Localization with Mobil-
[118] Z. Zhou, J.-H. Cui, and S. Zhou. Localization for Large-Scale Underwater Sensor