
Clustering Based Fuzzy Logic for Multimodal Sensor Networks: A Preprocessing to Decision Fusion

Rabie A. Ramadan 1
Computer Engineering Department, Faculty of Engineering, Cairo University, Giza, Egypt
Abstract. Advances in micro-electromechanical systems (MEMS) technology have led to a new type of sensor, the multimodal sensor, in which multiple features can be sensed and reported by one device. Forming a wireless sensor network of such sensors poses new challenges in addition to the current ones. Compared to a traditional sensor, each multimodal sensor periodically reports either one message per feature or a long message that contains all of the features. Such multimodal sensor networks can be used for multiple purposes and serve different applications; however, data handling, information processing, and data/decision fusion tasks become much harder than before. In this paper, we introduce a set of clustering algorithms that take into consideration the reported multiple features as well as some of the sensors' parameters, such as the nodes' residual energy and the cluster heads' degree. The paper utilizes different clustering techniques, including fuzzy logic. The proposed algorithms are designed to simplify the next-step operations, namely data/decision fusion and decision making. The proposed algorithms are evaluated through an extensive set of experiments.
Keywords: Sensor Networks, multimodal sensor networks, clustering, intelligent classroom

1 Rabie A. Ramadan. E-mail: rabie@rabieramadan.org
1. Introduction
Wireless sensor networks (WSNs) have attracted scientific interest from academia and industry alike due to their wide range of applications. Some of these applications are battlefield monitoring [19], habitat environments [1], critical infrastructure [17], acoustic monitoring [6], and chemical and radiation detection [11], as well as smart environments. Such applications raise new challenges for wireless sensor networks. For instance, real-time and reliable monitoring are now essential requirements. At the same time, sensors have to function for a long time to reduce the overall energy cost and to keep the overall network operational. Therefore, energy consumption minimization is a main concern of WSN algorithms. In fact, energy consumption is considered at different phases of the WSN formation; for example, it is considered during the deployment process, the design of Medium Access Control (MAC) algorithms [7], the development of routing protocols, and the implementation of information processing techniques [10][28].
A WSN consists of many tiny but smart sensing devices; these devices are capable of sensing some of the monitored field's phenomena/features, processing the captured features, and transmitting them to one or more of their neighbors. The sensed data is transmitted in an ad hoc fashion from one node to another toward a centralized node named the sink. The sink node collects the sensed information from different sensors for decision making. In a traditional WSN, a sensing device is used to sense a single feature from the monitored field. However, with new advances in MEMS technology, a sensing device could have one or more sensors on board. One example of such sensors is the Imote2 board, shown in Fig. 1a, where 3-axis accelerometer, temperature, humidity, and light sensors are mounted on its board [9]. Another example is the Intel sensor board, shown in Fig. 1b, which is designed to have connectors for a 3D accelerometer, an advanced temperature/humidity sensor, and a light sensor [8]. As can be seen, these new types of sensors raise new complexity issues for data processing in sensor networks. Huge amounts of data will be reported to the sink node for analysis, pattern recognition, and decision making. Therefore, preprocessing operations such as clustering and data fusion become essential for reliable WSN operation.


Fig. 1: Sensor boards: (a) Imote2 basic sensor board (ITS400) [9], (b) Intel sensor board [8]

Clustering techniques are not new in wireless sensor networks; many clustering algorithms have been proposed, mainly for energy savings. Some of these algorithms are the ones reported in [14] and [24]. However, these algorithms did not take into consideration the number of reported features or the data similarities among these features. In addition, fuzzy logic clustering such as C-means is heavily used in other fields as well as in the WSN field. For instance, in [27], the authors used fuzzy logic to form normalized clusters where each sensor node uses its energy level, the local density within its sensing range, and time as parameters for clustering. The authors also used fuzzy logic for the purpose of reducing the coverage overlap among the selected cluster heads.
Our proposal in this paper is different in that, along with the sensors' residual energy and the cluster heads' distribution, we consider the number of reported features and the similarities among the sensors' readings during the clustering process. The term features, in the rest of the paper, means the data generated by each sensing device mounted on the sensor's board. For instance, if the sensor's board has three sensing devices mounted to measure temperature, humidity, and pressure, this sensing board is considered to have three sensing features. In addition, we also consider the cluster head degree during the clustering process. The cluster head degree means, in this context, the number of nodes that can join this cluster head. We believe that considering such parameters will lead to better clustering and prolong the WSN lifetime.

In summary, our contributions in this paper include proposing a set of new multimodal WSN clustering algorithms. The new algorithms consider the features monitored by the sensors as well as some of the sensors' parameters, such as cluster head degree and residual energy. In the first contribution, two new clustering algorithms are proposed, namely LEACH with Multimodal support (LEACH-M) and Multimodal Limited Similarity Clustering (MFLC), which are extensions of the LEACH algorithm that consider the number of features during the clustering process. The only difference between LEACH-M and LEACH is that each node reports multiple features instead of one. MFLC, on the other hand, adds the features into the clustering probabilities. Another algorithm, named Data Similarity Based Clustering (DSBC), is also investigated. DSBC takes most of the sensed features as well as some of the sensors' parameters into consideration during the clustering process. The last contribution in this paper is an algorithm named Data Similarity Based Fuzzy (DSBF). The algorithm performs the clustering as the other algorithms do but applies fuzzy logic in its two phases, which are defining the similar nodes and clustering. The motivation behind using fuzzy logic is that fuzzy logic has proven its efficiency in cases of uncertainty in the input parameters. In our problem, we definitely have uncertainty in the sensors' readings. For instance, the temperature that leads to fire could be within a range rather than based on a certain threshold. Fuzzy logic can efficiently handle such uncertainty in the input readings as well as in the clustering parameters.
The parameters used in all algorithms are the sensors' residual energy, the node degree (number of similar neighbors), and the sensors' monitored features. Regarding the monitored features, there is a large number of features that can be monitored. In fact, these features may differ from one sensor network to another based on the target applications. Therefore, in this paper, we describe a general solution that is applicable to any kind of sensor network application.
The paper is organized as follows: the following section presents our motivation for the proposed clustering algorithms, section 3 reviews the related work, section 4 gives an overview of some of the concepts used in this paper, the network model is presented in section 5, the clustering algorithms are detailed in section 6, section 7 includes the simulation results, and finally the paper concludes in section 8.
2. Motivation
The proposed algorithms in this paper are motivated by two different scenarios. The first scenario is data and decision fusion in intelligent classrooms, where different heterogeneous sensors report their data to a centralized node (a computer) that controls different actuators such as blinds, air conditioning, lights,

Fig. 2: Set of sensors in the intelligent classroom: (a) weather station, (b) external temperature, humidity, and light sensor, (c) internal temperature and light sensor, (d) internal humidity sensor, (e) two presence sensors, (f) XBee wireless sensor, (g) i.LON server, (h) star network board

and smart board operations. Our intelligent classroom at the Ambient Intelligence Center (AMIC), located at the German University in Cairo (GUC), has many heterogeneous sensors that collaborate to ubiquitously control the classroom environment. These sensors are connected through two types of networks: a LonWork (Ethernet) network and a star network. The LonWork network has different sensors such as humidity, temperature, presence, and light sensors. The star network connects a weather station with multiple sensors; internal and external temperature, humidity, and light sensors; and RFID devices to recognize the lecturer. Fig. 2 shows some of these sensors and the main controllers of the LonWork and star networks. Analyzing the huge data collected from this large number of sensors and recognizing the right event to control the different actuators require accurate data mining and fusion. In large classrooms, where thousands of students might be present, especially in countries like Egypt, hundreds of wireless sensors might be required and a multi-hop network might be formed. Getting the right decision based on the data reported from these sensors would be impossible without suitable clustering and data/decision fusion techniques.
The second motivating scenario is WSN-based fire detection. In this scenario, sensors are deployed to detect whether there is a fire in critical infrastructure such as airports and important buildings. In such a case, sensors are used to report different features that collectively lead to the detection of fire, such as humidity, temperature, pressure, and light. If we depend on the sink node to analyze the received data, it might fail to detect the fire, due to the huge amount of data the sink has to analyze as well as the inefficiency of centralized data mining techniques in such a data stream scenario. Therefore, decision making and data mining at a suitable cluster head might lead to accurate event detection and correct decisions.


3. Related Work
Clustering of sensor nodes is considered one of the most successful techniques for mining useful information and discovering patterns in distributed environments. It is a particularly useful technique for applications that require scalability to hundreds or thousands of nodes. Clustering also supports aggregation of data in order to summarize the overall transmitted data. However, the current literature focuses either on node clustering or on data clustering alone. Clustering of sensor nodes deals with two main operations: 1) identifying cluster heads, and 2) assigning nodes to their respective cluster heads. These two operations should be done in a very energy-efficient manner. On the other hand, data clustering deals with collecting similar data for aggregation purposes. The process of choosing the cluster head should take into consideration node design factors such as the energy level of the sensor node and load balancing, as well as the nodes' similarities in terms of the sensed data. A successful clustering algorithm is one that produces an optimal number of clusters, each having a single cluster head responsible for inter- and intra-cluster communication.
The problem of clustering data has been widely studied. It has been used for very large databases [22]. Another main use of clustering protocols has been investigated for ad hoc networks, such as in [20] and [12]. Similarly, sensor network clustering algorithms have been proposed by several researchers. For instance, the Low-Energy Adaptive Clustering Hierarchy (LEACH) [24] is one of the early clustering algorithms in WSNs. LEACH depends on a random function for selecting the cluster heads and rotates the cluster head role in order to preserve energy and distribute the load evenly across the nodes in the network. A more adaptive approach is Hybrid Energy-Efficient Distributed Clustering (HEED) [14], where the cluster head formation depends on the energy level of the sensor node. In the case of HEED, the authors argue that the algorithm yields more distributed clusters and is efficient in terms of processing. However, HEED is hard to adapt to multimodal WSNs.
Little attention has been given to clustering sensor nodes according to the similarity of their data readings. For example, H. Jin et al. [26] suggest a framework for data mining in sensor networks and propose multi-dimensional clustering, which clusters the nodes according to their sensed attributes. Similarly, the Distributed, Hierarchical Clustering and Summarization algorithm (DHCS) seems to provide better performance for dense networks [4]. The algorithm adopts several techniques, such as difference and hop-count thresholds, to model node- and distance-based clustering, but does not consider energy level during the clustering process. A smarter clustering algorithm based on fuzzy logic is proposed by Indranil G. et al. in [16]. The authors use fuzzy logic to select cluster heads based on their energy and centrality. Another recent clustering algorithm based on a fuzzy logic controller (FLC) is proposed by Yahya et al. in [27], where the authors tried to select the best cluster heads for the purpose of coverage and load balancing. The main clustering parameters that the authors considered were the sensors' energy and the number of loyal followers, where they assume smart nodes that can decide whether or not to join a cluster head node. The results of these clustering algorithms seem promising. However, the estimation of the centrality point of each node in [27] is computed in a centralized manner, where it is assumed to be one of the functions of the sink node.
A common problem with the previous clustering algorithms is that they tend to ignore data similarity in their clustering processes. In addition, some of the algorithms are centralized, where the sink node has to be involved during the clustering process, which incurs significant message overhead. In this paper, we introduce hybrid algorithms that utilize some of the sensors' parameters as well as data similarities. Our algorithms are mainly designed to consider multimodal sensors; however, they fit traditional wireless sensor networks as well.
4. Overview
To keep the paper self-contained, this section introduces the main concepts of multimodal WSNs, the fuzzy logic controller (FLC), and LEACH as a WSN clustering algorithm.
4.1. Multimodal WSN
In our previous work [18], a framework that simplifies dealing with heterogeneous multimodal sensor networks was proposed. Our view of the multimodal wireless sensor network goes beyond the current usage of traditional WSNs, in that the network could be designed to serve different purposes. Therefore, even the number of features reported by each sensor might differ. To reduce the amount of data reported, we forced the nodes to use a sliding window. In addition, nodes report only when a change in the sensed data (the output of the sliding window) is effective. Therefore, the sink node or the cluster head (in a clustered network) has to save the previous readings received from the nodes to keep track of their status. Certainly, this approach saves much of the sensors' energy and prolongs the network lifetime. Moreover, clustered networks have proved to be more energy efficient than non-clustered networks. Not only that, but clustering methods also play an important role in the reliability and load balancing of the network. Thus, in this paper, our clustering algorithms consider the types of the reported features during the clustering process.
4.2. Fuzzy Logic Controller
Fuzzy logic is a powerful method for dealing with problems involving uncertainty. It has been used to solve many problems that traditional methods were not able to solve. It was introduced by Zadeh [23] in 1965 to emulate the human use of linguistic variables instead of precise numerical variables. Instead of using a crisp set, a collection of elements where each element either belongs to a specific set or not, a fuzzy set is a set whose elements belong to it through a graded membership function within the interval [0,1]. A fuzzy logic controller consists of a fuzzifier, fuzzy rules, a fuzzy inference engine, and a defuzzifier function. The fuzzifier takes the crisp input from the system and determines the degree to which it belongs to the appropriate fuzzy sets. Fuzzy rules, according to the Mamdani method, are conditional statements of the form:
IF a is A THEN b is B,
where a and b are linguistic variables and A and B are linguistic values determined by fuzzy sets on the universes of discourse X and Y, respectively. The aggregation of the rule outputs is the function of the inference engine. Finally, the defuzzifier transforms the fuzzy output back into a crisp output to control the desired system. In the following sections, we elaborate on each module according to its usage in the clustering process.
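To make the controller pipeline concrete, the following minimal sketch implements one Mamdani-style step end to end: triangular fuzzification, min/max rule firing, aggregation, and discrete centroid defuzzification. The membership functions, the two rules, and the input/output universes are illustrative choices of ours, not the ones used later in the paper.

```python
# Minimal Mamdani-style fuzzy step. All membership functions and rules
# below are illustrative assumptions, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_chance(energy, degree):
    """Map crisp inputs (residual energy in %, node degree) to a crisp
    'chance to be cluster head' in [0, 100] using two toy rules:
      IF energy is HIGH AND degree is HIGH THEN chance is HIGH
      IF energy is LOW  OR  degree is LOW  THEN chance is LOW
    """
    e_low, e_high = tri(energy, 0, 0, 60), tri(energy, 40, 100, 160)
    d_low, d_high = tri(degree, 0, 0, 6), tri(degree, 4, 10, 16)

    fire_high = min(e_high, d_high)   # AND -> min
    fire_low = max(e_low, d_low)      # OR  -> max

    # Aggregate the clipped output sets (max) and take the centroid.
    num = den = 0.0
    for z in range(101):              # output universe 0..100
        mu = max(min(fire_high, tri(z, 50, 100, 150)),
                 min(fire_low, tri(z, -50, 0, 50)))
        num += z * mu
        den += mu
    return num / den if den else 0.0
```

A node with high residual energy and many neighbors defuzzifies to a high chance, while a depleted or isolated node defuzzifies to a low one, which is the behavior the DSBF algorithm later relies on.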

4.3. LEACH Overview
In this subsection, we introduce LEACH [24], one of the most used clustering algorithms in sensor networks, which we will use for comparison. LEACH is one of the first major improvements on conventional clustering approaches in wireless sensor networks. Conventional approaches, including Minimum Transmission Energy (MTE) [21] and direct transmission, do not lead to even energy dissipation throughout a network. LEACH provides load balancing of energy usage [25] through the rotation of cluster heads. The algorithm is also organized in such a way that data fusion can be used to reduce the amount of data transmitted. The decision of whether a node elevates to cluster head is made dynamically at each interval. In addition, the elevation decision is made by each node independently of the other nodes to minimize the overhead of cluster head establishment. Moreover, the clustering results of LEACH seem promising; thus it is one of the most studied and referenced algorithms. For these reasons, we selected LEACH as the base algorithm for our proposed ones.
LEACH is a simple round-based protocol in which a clustering phase is followed by a reporting phase. In the clustering phase, an adaptive selection of the cluster heads is performed. In the reporting phase, sensors report their data; a TDMA protocol might be used in this phase. After a certain period of time, the protocol starts over to select other cluster heads. Selecting other cluster heads in later rounds avoids re-selecting the previously used cluster heads. Fig. 3 shows the flow chart of LEACH. As shown in the figure, the protocol terminates only when the network lifetime is close to its end. The important aspect of this protocol is the distributed process of selecting the cluster heads. In this process, at the beginning of each round, a node s ∈ S, where S is the set of nodes in the network, computes its probability of being a cluster head; if the computed probability is larger than a certain threshold, it announces itself as a cluster head, and nodes in close proximity might send a join request to it.
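The per-round election can be sketched as follows. The threshold formula is the standard one from the original LEACH proposal; the function names and the boolean eligibility flag are our own simplifications.

```python
import random

def leach_threshold(p, r, eligible):
    """Standard LEACH threshold T(n) for round r with desired cluster-head
    fraction p. Nodes that already served as cluster head in the current
    cycle of 1/p rounds (eligible == False) get T(n) = 0."""
    if not eligible:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

def elects_itself(p, r, eligible, rng=random):
    """A node becomes cluster head when a uniform draw falls below T(n)."""
    return rng.random() < leach_threshold(p, r, eligible)
```

Note how the threshold grows as the round index advances within a cycle, so nodes that have not yet served become near-certain to elect themselves by the end of the cycle.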


Fig. 3: LEACH protocol flow chart [3]

As can be seen, LEACH is very simple; however, it is efficient in terms of node clustering. In addition, it allows the cluster heads to aggregate the received data, which saves a lot of the sensors' energy. However, the algorithm assumes that a single feature is reported by all the nodes and does not take the reported features into consideration during the clustering process. Therefore, one of our proposals in this paper is to extend LEACH to support multimodal WSNs and evaluate its performance. In addition, we propose another algorithm in which the number of features reported by the nodes is taken into consideration during the clustering process.
5. Network Model
Given a set of sensor nodes S deployed in a monitored field A, each sensor generates a continuous stream of data. These sensors are assumed to be heterogeneous in terms of their hardware as well as the phenomena they report. For instance, sensors may differ in their initial energy, memory, sensing range, communication range, and processing capabilities. Sensors are assumed to report only discrete events. In other words, each reported feature (attribute) is associated with a discrete time event as well as the sensor's location (through GPS or any other estimation method). A continuous sensor data stream can simply be converted to discrete events with some preprocessing such as quantization. In addition, to save the sensors' energy and reduce the amount of reported data as much as possible, sensor reporting is limited to changes of state. A threshold [18] is initially set for each sensor feature; if the measured value increases by more than the threshold, the sensor will report its value; otherwise, no reporting occurs. On the other hand, duplicate-information removal techniques could be used if the previous assumption is not applicable to some of the sensors. The reader is referred to the reporting framework proposed in [18] for more information on how to save the sensors' energy by sending only the changed features based on a specific threshold.
For the energy model used in this paper, we follow the same model presented in [5]. As shown in equation (1), the total energy consumption $E_{Tx}(L,d)$ for transmitting an L-bit message over a distance $d$ can be expressed as the sum of the two terms $E_{Tx\text{-}elec}(L)$ and $E_{Tx\text{-}amp}(L,d)$, where $E_{Tx\text{-}elec}(L)$ is the energy consumption due to electronics parameters such as digital coding, modulation, and filtering; it can be expressed in terms of the energy consumption of a single bit, $E_{elec}$. $E_{Tx\text{-}amp}(L,d)$ is the amplifier energy consumption needed to achieve an acceptable bit error rate for the signal transmitted to a receiver; it can be expressed in terms of $\varepsilon_{fs}$ or $\varepsilon_{mp}$ depending on the transmitter amplifier mode. In addition, there are loss factors for free space ($d^2$ loss) and multipath fading ($d^4$ loss), respectively. $d_0$ is a threshold that can be determined by equating the two expressions, which gives the empirical value $d_0 = \sqrt{\varepsilon_{fs}/\varepsilon_{mp}}$.

$$E_{Tx}(L,d) = E_{Tx\text{-}elec}(L) + E_{Tx\text{-}amp}(L,d) = \begin{cases} L \cdot E_{elec} + L \cdot \varepsilon_{fs} \cdot d^2 & \text{if } d < d_0 \\ L \cdot E_{elec} + L \cdot \varepsilon_{mp} \cdot d^4 & \text{if } d \geq d_0 \end{cases} \qquad (1)$$


The features reported by each sensor are modeled by a vector of three-field attribute-value tuples (attribute-name = v, location = (x,y,z), time = t), where v is the sensed feature value, (x,y,z) is the sensor's location, and t is the event time stamp. Sensors report their location with each update to accommodate mobile nodes (if any). In a stationary sensor network, the location information might be reported once and thereafter could be omitted from the update messages. Location information is considered important in WSNs for the decision making process. In other words, even in a stationary network, the cluster head as well as the sink node might need to know where the reported data came from for further analysis and decision making. In addition, the location information is important in the case of sensor queries in a query-based WSN. If a sensor is used to sense multiple features from the monitored field, a vector of these attributes will be reported, with a single location and time stamp appended to the end of the vector. For instance, if s1 is used to measure the temperature, humidity, and wind speed, the reported attributes will be in the following form:
s1 = {temp = v1, hum = v2, wind = v3, location = (x,y,z), time = t},
where temp represents the temperature attribute, hum is the humidity, wind is the wind speed, (x,y,z) is the sensor's location, and t is the time stamp. If there is no change in an attribute's value, the value is replaced by 0 to mark it as invalid data. As can be seen, each node is considered a data source with single or multiple features. With a huge number of sensors, a data explosion might occur at the receiving end, and the network energy would be depleted in a short time. Therefore, in the next sections, efficient clustering is proposed as a solution to these problems. However, there are many parameters and uncertainties that make designing such algorithms a nontrivial task.
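The reporting format and the change-of-state rule can be sketched as follows; the dictionary layout, the helper name, and the per-feature thresholds are illustrative assumptions, not the paper's exact wire format.

```python
def build_report(prev, curr, dt, location, t):
    """Build an update message in the spirit of the Section 5 model:
    an attribute keeps its new value only when it moved beyond its
    per-feature threshold in dt; otherwise it is replaced by 0 to mark
    invalid (unchanged) data. Structure is illustrative only."""
    report = {}
    for name, value in curr.items():
        changed = abs(value - prev.get(name, 0.0)) > dt[name]
        report[name] = value if changed else 0
    report["location"] = location
    report["time"] = t
    return report

# Example: only the temperature moved beyond its threshold, so the
# humidity and wind fields are zeroed in the outgoing message.
prev = {"temp": 22.0, "hum": 40.0, "wind": 3.0}
curr = {"temp": 29.5, "hum": 40.5, "wind": 3.1}
dt = {"temp": 2.0, "hum": 5.0, "wind": 1.0}
msg = build_report(prev, curr, dt, (1.0, 2.0, 0.0), 17)
```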
6. Clustering Algorithms
In this section, we introduce a set of clustering al-
gorithms that we stated in the introduction section.
The algorithms are LEACH-M, MFLC, DSBC, and
DSBF. Throughout the next sections, we elaborate on
the details of these algorithms.

6.1. Multimodal Limited Similarity Clustering (MFLC)
As mentioned, there are two types of sensor networks: single-feature and multimodal sensor networks. A single-feature sensor network is a network in which each sensor node reports only one feature. On the other hand, a multimodal sensor network is a network in which nodes report more than one feature. Further, the network can be classified into homogeneous and heterogeneous sensor networks. In homogeneous sensor networks, nodes are identical in every aspect. Node clustering based on similarity in this case will be beneficial in terms of energy and reliability if nodes with similar reported features are clustered together. Similarity in this context means nodes that report close data values. Close variance in sensed values might also indicate that the sensors are in close proximity. In addition, the sensors' energy can be saved through the aggregation of similar data. In heterogeneous sensor networks, on the other hand, sensors differ in their characteristics, such as initial energy, sensing range, and communication range(s).
Here, we introduce MFLC as a new clustering algorithm that fits the purposes of multimodal sensor networks, whether heterogeneous or homogeneous. MFLC adapts the LEACH clustering technique to support multimodal sensor networks. MFLC differs from LEACH in the criteria a node uses to decide whether or not to be a cluster head. LEACH selects the cluster head randomly; such a criterion would not be appropriate in multimodal sensor networks, as we will see later in the simulation results section. Therefore, MFLC includes more appropriate criteria, namely the number of features to be reported by each node and the node's residual energy. Taking the number of features into consideration during the clustering process balances the load over the selected cluster heads. At the same time, it enhances aggregation and data fusion, which leads to fewer messages being transmitted from the cluster head to the sink node.
Equation (2) shows the new formula for a node's cluster head selection:

$$T(s) = \frac{p}{1 - p \cdot (r \bmod 1/p)} \cdot \frac{F(s)}{F_{max}} \cdot \frac{E_c(s)}{E_m(s)} \qquad (2)$$

The formula considers the parameters $p$, $r$, $F(s)$, $F_{max}$, $E_c(s)$, and $E_m(s)$, where $p$ is the node's desire to be a cluster head, $r$ is the current round, $F(s)$ is the number of features reported by node $s \in S$, $F_{max}$ is the maximum number of features reported in the network, $E_c(s)$ is node $s$'s residual energy, and $E_m(s)$ is node $s$'s initial energy.
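A direct reading of equation (2) in code is shown below; the function name and the use of round(1/p) for the rotation period are our assumptions.

```python
def mflc_threshold(p, r, F_s, F_max, E_c, E_m):
    """T(s) of equation (2): the LEACH rotation term scaled by the
    node's feature ratio F(s)/F_max and energy ratio E_c(s)/E_m(s).
    A node compares T(s) against a uniform random draw, as in LEACH."""
    leach_term = p / (1 - p * (r % round(1 / p)))
    return leach_term * (F_s / F_max) * (E_c / E_m)
```

A node reporting every network feature with a full battery reduces to the plain LEACH threshold, while fewer features or a drained battery proportionally lowers its chance of election.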
Fig. 4 shows the details of the MFLC algorithm, where the sink node is assumed to be powerful enough to connect to all nodes in the monitored field. The algorithm works as follows:
1- In the initialization phase, the sink node (SN) broadcasts its position, its residual energy, and the maximum number of features expected to be reported by all nodes. After the nodes receive the SN message, each node looks for its neighbors and fills its neighbor list l.
2- In the second step, the nodes start the clustering, where each node applies equation (2) and computes T(s). At the same time, each node runs a random generator algorithm to produce a random number between 0 and 1. Based on these two values, T(s) and the generated random number, the node decides whether or not to be a cluster head. If a node decides to be a cluster head, it sets CHparam to true and broadcasts a message to all of its neighbors to notify them that it has assigned itself as a cluster head. Then, it waits for a certain period of time to hear from other cluster heads (if any). If it hears from any other cluster head, it adds it to its CH-list for further usage. If a node is not a cluster head, it decides to join one of its neighboring cluster heads based on the cluster heads' residual energy. Any node without a cluster head is forced to become a cluster head itself.
3- In step 3, nodes start to report to their cluster heads using a TDMA protocol. Nodes may apply a sliding window to the sensed data. The cluster heads then apply an appropriate aggregation method, such as averaging, to the received similar features and try to send the aggregated value(s) to the sink node. A cluster head might not be directly connected to the sink node; therefore, multi-hop reporting must be used. We propose, as shown in step 4a, that the cluster head choose the neighboring cluster head in its CH-list with the highest residual energy to send to. However, the CH-list might be empty; in that case, the cluster head has to select one of its neighbors that does not belong to its cluster to report to. If all of the neighbors belong to other clusters, it might select a node at random or based on the neighbors' energy or number of features, hoping that the message will reach one of the other cluster heads.
4- Finally, in step 4, when the round time expires and the network is still alive, the clustering algorithm is repeated; otherwise, the algorithm terminates. The network is considered dead, or out of service, when a node runs out of energy and can no longer function.
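The forwarding fallback chain of step 3 can be sketched as follows; the tuple record structure and the function name are illustrative, not part of MFLC's specification.

```python
def next_hop(ch_list, neighbors, my_cluster):
    """Choose where a cluster head forwards its aggregate (step 3
    fallback chain): prefer the known cluster head with the highest
    residual energy; otherwise a neighbor outside the own cluster;
    otherwise the most energetic neighbor. Records are assumed to be
    (node_id, residual_energy, cluster_id) tuples."""
    if ch_list:
        return max(ch_list, key=lambda n: n[1])[0]
    outside = [n for n in neighbors if n[2] != my_cluster]
    pool = outside if outside else neighbors
    return max(pool, key=lambda n: n[1])[0]
```

Routing on residual energy rather than hop count matches the paper's goal of delaying the first node death, since the most depleted relays are avoided.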

6.2. Data Similarity Based Clustering (DSBC)
In this subsection, we present our second clustering algorithm. The algorithm is designed to cluster the multimodal sensor nodes based on their similarity measures. It considers a similarity threshold that is expected to enhance the clustering, especially when multiple features are reported by a sensor node. In DSBC, nodes are considered similar when they report a similar number of features and this number of features is greater than a predefined threshold value called the similarity threshold. Such a characteristic might save a lot of the sensors' energy as well.
The DSBC algorithm is divided into two main phases, a clustering phase and a data reporting phase, shown in Fig. 5. These two phases are periodically repeated and new cluster heads are elected for the purposes of load balancing and coping with changing environmental conditions.
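One plausible reading of the DSBC similarity test is sketched below: two nodes are similar when the count of per-feature readings falling within the difference thresholds reaches the similarity factor. The vector encoding and the parameter name `sigma` (for the similarity factor) are our assumptions.

```python
def similar(a1, a2, dt, sigma):
    """DSBC similarity test sketch. a1 and a2 are attribute vectors in a
    fixed feature order, dt is the per-feature difference-threshold
    vector, and sigma is the similarity factor: the minimum number of
    features whose readings must agree within dt for the two nodes to
    count as similar. An interpretation of the paper, not its exact rule."""
    close = sum(1 for x, y, t in zip(a1, a2, dt) if abs(x - y) <= t)
    return close >= sigma
```

Each node would run this test against every 1-hop neighbor's broadcast readings and increment its node degree for each match, which is the quantity the cluster-head election below compares.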
6.2.1. Clustering Phase
In the clustering phase, we assume that each node has a similarity factor σ, which is a predetermined value based on the total number of features measured by the network. It is also assumed that nodes are able to cooperate and exchange their sensed data with their neighbors. This allows the nodes to know the features measured by each other. Each node constructs an attribute/feature vector A; the values in the attribute vector are arranged according to prior knowledge of the features sensed by the network. Each node also keeps what is called a Difference Threshold Vector (dt), which is a vector stating the maximum allowable difference used to measure the similarity of nodes' features (it follows the same order as the attribute vector). In addition, each node has a node degree variable (X) to store the number of similar nodes. As can be seen in Fig. 5, the clustering phase of the DSBC algorithm consists of five main steps.
1. The first step of the clustering process,
marked 1.1 in the figure, instructs each sensor
node to save its readings in the designated
field of the attribute vector after sensing the
surrounding environment. Then, each node
broadcasts its readings to all immediate 1-hop
neighbors that are within its communication
range.
2. Step 1.2 deals with comparing the data readings:
each sensor node s ∈ S (where S is the set of
sensors) determines its similarity to another
node s1 ∈ S using the threshold vector (dt).
The node degree of s is incremented if the two
nodes are similar, and the identifier (ID) of s1
is placed in the similar-neighbor list l_s. The
node degree in this context is therefore the
number of similar sensors around s. This
consideration increases the reliability of the
data reported to the sink node even if a simple
aggregation method is used per cluster head.
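The similarity comparison of step 1.2 can be sketched as follows; the function name, the sample readings, and the concrete dt values are illustrative, not taken from the paper:

```python
# Sketch of the DSBC similarity test: two nodes are similar when the
# fraction of features whose readings fall within the per-feature
# difference threshold dt reaches the similarity factor alpha.
def similarity(a1, a2, dt, alpha):
    """a1, a2: attribute vectors; dt: difference thresholds; alpha in [0,1]."""
    similar = sum(1 for x, y, d in zip(a1, a2, dt) if abs(x - y) <= d)
    phi = similar / len(a1)          # similarity value (phi in Algorithm 1)
    return phi >= alpha

# Example: temperature, humidity, pressure readings of two neighbors.
print(similarity([24.0, 40.0, 1012.0], [24.5, 43.0, 1013.0],
                 dt=[1.0, 5.0, 2.0], alpha=0.5))   # True: all 3 within dt
```

Each node would run this check against every attribute vector received from its 1-hop neighbors, incrementing its node degree for each match.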
3. In the third step, each node broadcasts its node
degree along with its residual energy E_s, as
given in step 1.3. Each node s ∈ S compares
its node degree with the corresponding degrees
of the nodes in l_s; if it finds that it has the
highest degree, it broadcasts a CH announcement
message. If more than one potential CH has the
same node degree, the highest residual energy is
used to break ties. Every node that receives the
announcement checks whether it came from a node
in its l_s; if so, the node checks the value of
chReceived. If chReceived is false, the node is
not yet a cluster head or a member of a cluster,
so it sets chReceived to true and sets the source
of the announcement as its cluster head. The node
then sends a registration message back to the
cluster head, including its current readings.
4. Step 1.5 states that nodes that are not similar
to any neighboring nodes and have not ac-
cepted any CH announcements become forced
cluster heads themselves.
5. The final step of the clustering phase shown in
Fig. 5 is step 1.6. In this step, each elected CH
collects all registration messages and the
corresponding IDs and places them in its member
node list M_CH.
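The degree-based election in steps 1.3 and 1.4 might be sketched as below, assuming each node already knows the degrees and residual energies of its similar neighbors; the final tie-break on node identifier is an added assumption to make the election deterministic:

```python
# Hypothetical sketch of DSBC cluster head election: a node becomes CH
# if no similar neighbor has a strictly higher (degree, energy) pair.
def is_cluster_head(my_id, degrees, energies, similar_ids):
    """degrees: id -> X_s (node degree); energies: id -> E_s (residual)."""
    mine = (degrees[my_id], energies[my_id])
    for n in similar_ids:
        other = (degrees[n], energies[n])
        if other > mine:                 # higher degree, or equal degree
            return False                 # with higher residual energy
        if other == mine and n < my_id:  # assumed tie-break on id
            return False
    return True
```

For example, with degrees {1: 3, 2: 3, 3: 2} and energies {1: 0.9, 2: 0.7, 3: 0.8}, node 1 wins the degree tie against node 2 on residual energy and announces itself as CH.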
6.2.2. Data Reporting Phase
After receiving all registration messages from its
cluster member nodes, the CH chooses a suitable
fusion method to fuse the data received from the
similar nodes. For instance, the CH may send the fused
data to the base station (sink node) as a message that
states the number of nodes in the cluster (including
the CH) together with the average vector of all the
readings of the nodes in the cluster. The CH then
periodically collects information from all its member
nodes and forwards the fused information to the base
station. Data and decision fusion techniques suit this
phase perfectly as well.
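A minimal sketch of the fused report described above, with an illustrative message format (the field names are assumptions):

```python
# The CH reports the cluster size (members plus itself) and the
# element-wise average of all attribute vectors in the cluster.
def fuse_cluster(ch_reading, member_readings):
    vectors = [ch_reading] + member_readings
    n = len(vectors)
    avg = [sum(col) / n for col in zip(*vectors)]   # per-feature average
    return {"count": n, "avg": avg}

msg = fuse_cluster([22.0, 40.0], [[24.0, 44.0], [20.0, 42.0]])
print(msg)   # {'count': 3, 'avg': [22.0, 42.0]}
```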


6.3. Data Similarity Clustering Based Fuzzy
Logic (DSBF)
Here, we introduce a new clustering algorithm that
is similar to DSBC but considers the uncertainty in
the clustering parameters. Again, the algorithm
considers the sensors' residual energy, the number
of similar neighbors, and the sensed features.
Similarity here is measured in terms of the number
of similar features reported by each sensor as well
as the closeness of the variance in those features.
DSBF works in three phases: in the first phase, the
similarity-based node degree is computed, while in
the second phase the cluster heads are elected. Both
phases use fuzzy logic in their core processes. In
the third phase, data is reported. The whole algorithm
is periodically repeated for load balancing, since
some of the cluster heads' energy may be exhausted by
sending and receiving.
One may think that using fuzzy logic in the first
two phases of the algorithm would consume much of
the sensors' energy, especially when the algorithm is
periodically repeated. However, on a platform like
the Telos platform [2], sending a single byte is
equivalent to executing about 4720 instructions.
Thus, to reduce energy consumption, it is imperative
to minimize communication overhead even if the number
of computations increases.
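A back-of-the-envelope check of this tradeoff, using the figure cited above: avoiding even a short 10-byte message pays for tens of thousands of local instructions.

```python
# Communication/computation tradeoff on Telos-class hardware [2]:
# roughly 4720 instructions cost the same energy as sending one byte.
INSTR_PER_BYTE = 4720
bytes_saved = 10                      # e.g., one suppressed short message
print(bytes_saved * INSTR_PER_BYTE)   # 47200 instructions "for free"
```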
6.3.1. Phase One: Computing Node Degrees
In this phase, node degrees are computed based on
the nodes' similarity in terms of their reported
features. To do so, we use a rule-based fuzzy logic
controller. However, since a network may contain a
large number of features to report to the sink node,
we limit the linguistic variables used to describe
the crisp input to three: low, medium, and high.
Fig. 6 shows an example of the fuzzy sets for three
measured features: feature 1 (f1), feature 2 (f2),
and feature 3 (f3).
The low and high variables are represented by
semi-trapezoid membership functions, while medium is
represented by a triangular membership function.
Therefore, the number of rules in the fuzzy rule base
is 3^N, where N is the number of considered features.
It is worth mentioning that as the number of features
increases, generating all of these rules might become
a problem; however, dynamic rule generation and rule
reduction methods might be a solution. Addressing this
problem is left for future work.
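The 3^N antecedent combinations can be enumerated mechanically; a quick sketch (the consequent of each rule would still have to be assigned, e.g. from a table such as the one sampled in Fig. 9):

```python
# Enumerate all antecedent combinations of the 3^N rule base:
# each of the N features takes one of three linguistic levels.
import itertools

levels = ["low", "medium", "high"]
N = 3                                          # features, as in the paper
rules = list(itertools.product(levels, repeat=N))
print(len(rules))      # 27, i.e. 3**3
print(rules[0])        # ('low', 'low', 'low')
```

With N = 3 this yields 27 rules, which matches the count implied by the 3^N formula; the growth is exponential in N, which is exactly the rule-explosion concern noted above.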
The fuzzy set for the output, which is the chance of
a node being similar, is represented using five
linguistic variables: very low, low, medium, high,
and very high. Again, very low and very high are
represented by semi-trapezoids while the other
variables are represented by triangular functions.
For defuzzification, the center of gravity (COG) of
a fuzzy set concurrently reflects the location and
shape of the set concerned; therefore, we use COG as
our defuzzification process, as shown in equation (3).

COG = ( ∫ μ_A(a) · a da ) / ( ∫ μ_A(a) da )        (3)

where μ_A(a) is the membership function of set A.

Thus, given two nodes s1 and s2, their features are
used as input to the fuzzy sets. If the fuzzy outputs
for both of them fall into the same category, the two
nodes are considered similar.
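Equation (3) is usually evaluated in discrete form over sampled points of the output universe; a minimal sketch:

```python
# Discrete COG defuzzification: the crisp output is the
# membership-weighted average of sampled domain points.
def cog(samples):
    """samples: list of (a, mu_A(a)) pairs over the output universe."""
    num = sum(a * mu for a, mu in samples)
    den = sum(mu for _, mu in samples)
    return num / den

# Example: a symmetric triangular output set sampled at five points.
print(cog([(0, 0.0), (1, 0.5), (2, 1.0), (3, 0.5), (4, 0.0)]))  # 2.0
```

Because the example set is symmetric about a = 2, the COG lands exactly at its peak.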
6.3.2. Phase Two: Cluster Head Election
In phase 1, node degrees based on node similarities
are identified. In this phase, the cluster heads are
elected based on parameters such as the nodes'
residual energy and degree. It is worth mentioning
that using multiple parameters to elect the cluster
heads was not possible in the DSBC algorithm, where
fuzzy logic is not used: using many parameters in
DSBC leads to a multi-objective problem that
complicates the clustering process at each node.
Fuzzy logic, on the other hand, allows multiple
parameters in the cluster head election phase. The
only complexity in this case is the number of rules
generated as the number of parameters increases,
which could be solved using any rule reduction
method, such as rough sets.
Again, the linguistic variables used to describe the
crisp input are limited, in our case, to low, medium,
and high. The fuzzy set for the output, which is the
chance of a node being a cluster head, is represented
using five linguistic variables: very low, low,
medium, high, and very high. Again, very low and very
high are represented by semi-trapezoids while the
other variables are represented by triangular
functions. For defuzzification, COG is applied to the
output. Nodes join the cluster head elected by the
fuzzy logic controller; if more than one cluster head
is announced, a node chooses the cluster head with
the highest residual energy. Assigned cluster heads
without members, if any, are forced to join one of
their neighboring cluster heads. Fig. 8 shows an
example of the fuzzy sets for node degree and energy
as well as the output set.
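The phase-two election can be sketched as a toy Mamdani-style evaluation of the Fig. 10 rule pattern (energy, degree → CH chance), with min as the AND operator; the membership functions below are illustrative shoulders and triangles on a normalized [0, 1] range, not the paper's actual sets:

```python
# Toy fuzzy CH-election: fuzzify energy and degree, fire the 9 rules of
# the Fig. 10 pattern with min-AND, and return the strongest output label.
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    """Crisp value in [0, 1] -> membership degrees for low/medium/high."""
    return {"low": max(0.0, 1 - 2 * x),
            "medium": tri(x, 0.0, 0.5, 1.0),
            "high": max(0.0, 2 * x - 1)}

RULES = {("low", "low"): "vlow", ("low", "medium"): "low",
         ("low", "high"): "medium", ("medium", "low"): "low",
         ("medium", "medium"): "medium", ("medium", "high"): "high",
         ("high", "low"): "medium", ("high", "medium"): "high",
         ("high", "high"): "vhigh"}

def ch_chance(energy, degree):
    e, d = fuzzify(energy), fuzzify(degree)
    out = {}
    for (le, ld), label in RULES.items():
        w = min(e[le], d[ld])                   # rule firing strength
        out[label] = max(out.get(label, 0.0), w)
    return max(out, key=out.get)                # strongest output label

print(ch_chance(0.9, 0.9))   # 'vhigh': high energy and high degree
```

A real controller would defuzzify the aggregated output set with COG instead of returning the strongest label; the label is used here only to keep the sketch short.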

6.3.3. Phase Three: Data Reporting

In this phase, cluster heads apply a suitable data
fusion and/or aggregation method and start sending
their data to the sink node. The sink node makes its
final decision based on the mined received data.
7. Simulation Results
In this section, we evaluate the proposed clustering
algorithms through different sets of experiments. Our
simulation environment was designed especially for
these tests: since multimodal sensors are not yet
implemented in current sensor network simulators, we
built our own simulator in Java. For the fuzzy logic
controller, we used the jFuzzyLogic library
implemented by Pablo Cingolani et al. [15]. Sensors
are deployed randomly based on a normal distribution
function, and sensor parameters follow the
specifications of MICA2 [13]. Moreover, two different
types of environment are tested in these experiments:
stable and unstable. Environments can be classified
into several categories, such as hostile, unreachable,
dormant, etc.; the ones of interest to our research
are those that affect the values of the monitored
features. For example, a stable environment is one
whose features don't change very often or by much,
such as a fire monitoring network, where temperature,
for instance, does not suddenly drop or increase. On
the other hand, an unstable environment is a
continuously changing one, where feature values can be
low at one point and high at the next. These
environments are usually known as event-driven
environments; examples include presence detection in
intelligent classrooms and tsunami detection systems.
In a tsunami monitoring system, for instance, the wave
strength can change suddenly and frequently, and at
the same time the waves may differ greatly from one
place to another.
Throughout the following experiments, we use sensors
with heterogeneous initial energy and communication
range, and three features. We limited ourselves to
three features per sensor for a fair estimation of
the performance of the proposed algorithms. The
selected features are temperature, humidity, and
pressure. In addition, all results presented in the
following subsections are averages over different
runs with different environment settings. For the
DSBC algorithm, we conducted some experiments to show
the sensitivity to different values of the similarity
factor α. The results were as expected, since
increasing the value of α adds more restrictions on
considering two nodes similar. We therefore concluded
that setting α to 50% is a fair choice, and it is
fixed throughout the experiments in this section.
Based on the three features used, the fuzzy sets for
the features used in the following experiments are
shown in Fig. 7, while the fuzzy sets for node
degree, node energy, and the output are plotted in
Fig. 8. Fig. 9 shows a sample of the rules used by
the DSBF algorithm in its first phase, while Fig. 10
shows a sample of the rules used in the second phase.



Step 1: Initialization
1- The sink node (SN) broadcasts its position and the maximum number of features expected to be reported to all nodes:
   initMsg(F_max, x, y)
2- For all nodes s ∈ S in the sensor network
   a. Find all neighbors within your range using a HELLO msg:
      HELLOMsg(NodeID, E_c(s))
   b. Generate a neighboring list l based on the received msgs
Step 2: Clustering
3- For all nodes s ∈ S in the sensor network
   a. Compute
      T(s) = [p / (1 - p * (r mod 1/p))] * [F(s) / F_max] * [E_c(s) / E(s)]
   b. If (T(s) > rand[0,1])
      i. Broadcast a cluster head announcement CHMsg
      ii. Set CHparam = true
      iii. Construct a CH-list based on the CHMsgs received from other CHs
   c. Else
      i. Wait for a CHMsg
      ii. If a CHMsg is received, join the node with the highest E_c(s)
      iii. Else, be a cluster head
   d. If a cluster head remains without any members, it joins any neighboring cluster head.
Step 3: Reporting
4- For all CH ∈ S
   a. If (CH-list is not empty and the SN is not reachable)
      - Select the CH with the largest E_c(s) // forces multi-hop routing through a neighbor CH
   b. Else if (the SN is reachable)
      - Report to the SN
   c. Else // forces multi-hop routing through a neighbor node
      - Select one of its neighbors that is not in its cluster (the node with the highest E_c(s), or a random node) to be the next-hop routing node. The selected node is forced to report the received data to its current cluster head.
Step 4: Re-Clustering
5- If the round time has expired and the network is still alive
   a. Go to Step 2
6- Else
   a. Stop
Fig. 4: MFLC algorithm details
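The threshold test in Step 2 of Fig. 4 might be sketched as follows. The form of T(s) used here, LEACH's rotation term scaled by a feature ratio F(s)/F_max and an energy ratio E_c(s)/E(s), is a reconstruction from the outline, and all names are illustrative:

```python
# Sketch of the MFLC self-election test: T(s) scales LEACH's rotation
# term by the node's feature count and residual-energy fractions.
import random

def threshold(p, r, f_s, f_max, e_res, e_init):
    """p: desired CH fraction; r: current round; f_s/f_max: features;
    e_res/e_init: residual vs. initial energy (reconstructed form)."""
    base = p / (1 - p * (r % (1 / p)))      # LEACH rotation term
    return base * (f_s / f_max) * (e_res / e_init)

def elects_itself(p, r, f_s, f_max, e_res, e_init):
    # The node becomes CH when T(s) beats a uniform random draw.
    return threshold(p, r, f_s, f_max, e_res, e_init) > random.random()
```

A fully charged node reporting all F_max features in round 0 reduces to plain LEACH: threshold(0.05, 0, 3, 3, 1.0, 1.0) gives exactly p = 0.05.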
Algorithm 1: DSBC Algorithm
A: attribute vector
dt: difference threshold vector
l_s: similar-neighbor list for sensor s ∈ S
M_CH: member-node list for CH sensor s ∈ S
X_s: node degree of node s ∈ S, i.e., the number of similar nodes in the similarity list
CHreceived: a binary variable set to true if s ∈ S received a cluster head (CH) announcement; false otherwise
α: similarity factor
φ: similarity value
E_s: residual energy of node s ∈ S

Phase 1: Clustering Phase
1. For each sensor s ∈ S
   1.1 Broadcast A and α to all neighbors
   1.2 For all received vectors A, apply the following rules to determine similarity:
       - If the same readings are present in both attribute vectors, and the values are within the range of dt, compute the similarity value according to:
         φ = (number of similar features) / (total number of features s has)
       - If (φ ≥ α), add ID_i to l_s, where i ∈ S
       - Increment X_s
   1.3 Broadcast X_s and E_s to all neighbor nodes
   1.4 If X_s has the maximum degree, s ∈ S announces itself as a CH
       1.4.1 Other nodes join the CH with the maximum degree; if more than one CH has the same degree, they choose the CH with the highest energy
       1.4.2 Set CHreceived = true
       1.4.3 Send a registration message to the elected CH along with A
   1.5 If s ∈ S cannot find a similar node and did not receive any CH announcement, it announces itself a cluster head (CH)
   1.6 Each node s ∈ S that is a CH saves all registration messages in M_CH

Phase 2: Data Reporting
1. Each node s ∈ S sends its sensed data, based on the selected window size, to its CH
2. The CH aggregates the received data and sends it to the sink node
Fig. 5: DSBC Algorithm Outline








[Figure: three panels of membership functions, (a) Feature 1 over x0–x5, (b) Feature 2 over y1–y6, and (c) Feature 3 over z0–z5]

Fig. 6: Example of the sensors' features fuzzy sets




Fig. 7: Example of the sensors' features fuzzy sets


Fig. 8: Example of fuzzy sets for the clustering parameters

RULE 1 : IF Feature1 IS low AND Feature2 IS low AND Feature3 IS low
THEN chance IS vlow;
RULE 2 : IF Feature1 IS low AND Feature2 IS low AND Feature3 IS high
THEN chance IS low;
RULE 3 : IF Feature1 IS low AND Feature2 IS low AND Feature3 IS medium
THEN chance IS low;
RULE 4 : IF Feature1 IS low AND Feature2 IS high AND Feature3 IS low
THEN chance IS low;
RULE 5 : IF Feature1 IS low AND Feature2 IS high AND Feature3 IS medium
THEN chance IS medium;
RULE 6 : IF Feature1 IS low AND Feature2 IS high AND Feature3 IS high
THEN chance IS high;
RULE 7 : IF Feature1 IS low AND Feature2 IS medium AND Feature3 IS low
THEN chance IS low;
RULE 8 : IF Feature1 IS low AND Feature2 IS medium AND Feature3 IS medium
THEN chance IS medium;
RULE 9 : IF Feature1 IS low AND Feature2 IS medium AND Feature3 IS high
THEN chance IS medium;
RULE 10 : IF Feature1 IS medium AND Feature2 IS low AND Feature3 IS low
THEN chance IS low;
RULE 11 : IF Feature1 IS medium AND Feature2 IS low AND Feature3 IS high
THEN chance IS medium;
RULE 12 : IF Feature1 IS medium AND Feature2 IS low AND Feature3 IS medium
THEN chance IS medium;
RULE 13 : IF Feature1 IS medium AND Feature2 IS high AND Feature3 IS low
THEN chance IS medium;
RULE 14 : IF Feature1 IS medium AND Feature2 IS high AND Feature3 IS medium
THEN chance IS high;
RULE 15 : IF Feature1 IS medium AND Feature2 IS high AND Feature3 IS high
THEN chance IS high;
RULE 16 : IF Feature1 IS medium AND Feature2 IS medium AND Feature3 IS low
THEN chance IS medium;
RULE 17 : IF Feature1 IS medium AND Feature2 IS medium AND Feature3 IS medium
THEN chance IS high;
RULE 18 : IF Feature1 IS medium AND Feature2 IS medium AND Feature3 IS high
THEN chance IS high;

Fig. 9: DSBF Phase 1 sample rules




RULE 1 : IF Energy IS low AND nDegree IS low
THEN chance IS vlow;
RULE 2 : IF Energy IS low AND nDegree IS medium
THEN chance IS low;
RULE 3 : IF Energy IS low AND nDegree IS high
THEN chance IS medium;
RULE 4 : IF Energy IS medium AND nDegree IS low
THEN chance IS low;
RULE 5 : IF Energy IS medium AND nDegree IS medium
THEN chance IS medium;
RULE 6 : IF Energy IS medium AND nDegree IS high
THEN chance IS high;
RULE 7 : IF Energy IS high AND nDegree IS low
THEN chance IS medium;
RULE 8 : IF Energy IS high AND nDegree IS medium
THEN chance IS high;
RULE 9 : IF Energy IS high AND nDegree IS high
THEN chance IS vhigh;
Fig. 10: DSBF Phase 2 sample rules


7.1. Average Number of Nodes per Cluster and
Average Number of Unclustered Nodes
Fig. 11 shows the average number of cluster heads
per network when different numbers of nodes are used
in a stable environment. In this set of experiments,
we present the average results over 10 networks
ranging from 100 to 1000 nodes per network. Due to
the large number of charts, we show only the average
results, which represent the trend in the algorithms'
behavior. As can be seen in Fig. 11, due to the total
randomness of LEACH-M in electing the cluster heads,
the percentage of cluster heads cannot be controlled;
this is why the LEACH authors restrict the number of
cluster heads to 5%, a percentage that allows a
better distribution of the cluster heads in the
network. On the other hand, since MFLC uses the
number of features as a clustering condition, it
performs a bit better than LEACH in terms of
unclustered nodes. However, the DSBF algorithm gives
the best results in terms of both the number of
cluster heads and the number of unclustered nodes:
in the first round, DSBF produces 7% of the total
number of nodes as cluster heads and almost 0.3%
unclustered nodes. DSBC produces almost similar
results. In conclusion, the results show that the
data similarity algorithms have a direct positive
effect on the number of cluster heads and the number
of unclustered nodes in the first round; other rounds
follow the same trend as well.


Fig. 11: Percentage of cluster heads and unclustered
nodes in a stable environment

In an unstable environment, the algorithms' behavior
changes. Fig. 12 shows the average percentage of
cluster heads as well as the number of unclustered
nodes. It is worth mentioning that LEACH-M and MFLC
were not affected by the change from a stable to an
unstable environment. However, DSBC is largely
affected; in fact, it performed worse than the
LEACH-M algorithm. The reason for this performance
drop is that DSBC mainly depends on a similarity
threshold vector: in a stable environment it is easy
to adjust this vector, while in an unstable
environment each round requires a different one. On
the other hand, this set of experiments shows the
beauty of fuzzy logic in DSBF, where the percentage
of cluster heads remains almost the same, as does
the percentage of clustered nodes.


Fig. 12: Percentage of cluster heads and unclustered
nodes in an unstable environment
7.2. Cluster Formation Cost vs. Number of
Nodes
In this section, a new set of experiments is
conducted on a wide range of network sizes, from 100
to 4000 nodes. For each network size, the average
percentage of energy consumed by clustering is
computed relative to the overall network energy,
which is simply the sum of the nodes' initial energy.
Fig. 13 shows the cluster formation cost for LEACH-M,
MFLC, DSBC, and DSBF. As can be seen, the clustering
overhead of LEACH-M is the minimum while that of DSBF
is the maximum. These results seem reasonable given
the amount of computation each node needs to perform.
However, DSBF's overhead is rewarded by an even
cluster distribution that leads to network load
balancing as well as efficient data aggregation and
energy saving in the reporting phase. This
observation is clearly confirmed in section 7.4,
where the network lifetime is studied.
7.3. Average Dead Nodes Per Round
In Fig. 14, we plot a sample of the dead nodes per
round for a network with 500 nodes under different
problem settings. The experiments are conducted over
both stable and unstable environments. This set of
experiments shows how the proposed clustering
algorithms perform within each round. As shown in
the figure, nodes in LEACH-M and MFLC die much
faster than in DSBC and DSBF; the reason is the weak
distribution of the cluster heads in LEACH-M and
MFLC.


Fig. 13: Clustering overhead percentage



Fig. 14: Number of dead nodes per round
7.4. Network Lifetime
In this subsection, we evaluate the network lifetime
for a network with 500 nodes. The average results
over 10 simulation runs are presented in Fig. 15. In
these experiments, the clustering algorithm
terminates when the network is disconnected, either
because nodes lack energy or because they cannot
reach the sink node. Although DSBF has the largest
overhead (see Fig. 13), it survives longer than any
of the other algorithms in both stable and unstable
environments, thanks to its cluster head distribution
and superior node similarity clustering. DSBF
accounts for the uncertainty in the reported data and
considers a range of values when deciding whether
nodes are similar, unlike the other algorithms, where
a strict threshold is used. LEACH-M and MFLC achieve
almost the same number of rounds. However, DSBC does
not perform well in unstable environments: as
mentioned before, the lack of adjustment to the
threshold vector leads to a bad distribution of the
cluster heads in DSBC.

Fig. 15: Algorithms' lifetime
8. Conclusion and Future Work
In this paper, we presented different clustering
algorithms for multimodal wireless sensor networks.
The first algorithm, LEACH-M, extends the LEACH
concept to include the different features reported
by each sensor. In the second algorithm, MFLC, we
adapted LEACH-M to include the number of features in
the clustering equation. A data similarity clustering
algorithm, DSBC, was also presented to involve the
similarity of the feature readings of different
sensors. In the last algorithm, DSBF, we utilized
fuzzy logic in the first two phases of the algorithm:
in the first phase to handle the similarities among
the nodes, and in the second phase to handle the
clustering process. After a large number of
experiments with different problem settings as well
as stable and unstable environments, we conclude that
DSBF has the best performance in terms of overall
energy consumption and network lifetime. In addition,
it works well in different environments.
The main concern to be considered in our future work
is the time taken to run the algorithm on each node
when the number of sensed features is large.
Currently, only a few sensing devices are mounted on
a sensor's board, so only a few features are
reported. With a large number of features, however,
the fuzzy logic controller generates a large number
of rules, which might lead to rule explosion; this
problem also needs to be tackled in future work. On
the other hand, DSBC seems to perform well under most
WSN conditions, although its performance is only
comparable to MFLC's in unstable environments.
LEACH-M and MFLC are fast and easy algorithms to
implement; however, they do not seem to fit
multimodal WSNs well.
The obvious future work following the clustering done
in this paper is the investigation of data/decision
fusion algorithms. Other clustering issues, raised by
one of the reviewers of this paper, that are worth
investigating are: 1) sensors with many sensing
features might consume more energy than sensors with
fewer features; in other words, the correlation
between the number of sensing features and a sensor's
consumed energy might affect the clustering
performance and the selection of the cluster heads;
2) sometimes the energy parameter could carry more
priority (weight) than the reported features'
similarities, or vice versa; the question is how this
affects the clustering performance.

9. References

[1] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J.
Anderson (2002). Wireless sensor networks for habitat monitor-
ing. In Proc. ACM International Workshop on Wireless Sensor
Networks and Applications, (2002), pp. 88-97.
[2] D. Xia, N. Vlajic (2006). Near-Optimal Node Clustering in
Wireless Sensor Networks for Environment Monitoring. Ca-
nadian Conference on Electrical and Computer Engineering,
CCECE, (2006).
[3] F. Xiangning, and S. Yulin (2007). Improvement on LEACH
Protocol of Wireless Sensor Network. In Proceedings of the
2007 international Conference on Sensor Technologies and
Applications SENSORCOMM, IEEE Computer Society,
(2007), pp. 260-264.
[4] G. Dong ( 2007). Distributed, Hierarchical Clustering and
Summarization in Sensor Networks. LNCS 4505,( 2007) pp.
168175.
[5] H. Chen; C. Wu; Y. Chu; C. Cheng; and L. Tsai ( 2007).
Energy Residue Aware (ERA) Clustering Algorithm for
Leach-based Wireless Sensor Networks. Second International
Conference on Systems and Networks Communications,
ICSNC 2007. vol. 25 , (2007), pp.40-40.
[6] H. Wang (2006). Wireless Sensor Networks for Acoustic Mon-
itoring. Doctoral Thesis, (2006), ISBN: 978-0-542-79693-7.
[7] I. Demirkol, C. Ersoy, F. Alagoz (2006). MAC protocols
for wireless sensor networks: a survey. IEEE Communication
Magazine, 44, (2006), pp. 115- 121.
[8] Intel sensor board,
http://graphics.stanford.edu/projects/AHPCRC_TA3/ta3a/resources/sensorboard%20Data%20Sheet.pdf
[9] ITS400 board,
http://www.xbow.com/Products/Product_pdf_files/Wireless_pdf/ITS400_Datasheet.pdf
[10] J. Al-Karaki and A. Kamal (2004). Routing techniques in
wireless sensor networks: a survey. IEEE Wireless Commu-
nication, (2004), pp. 6- 28.
[11] J. Kenneth, N. Johnson, R. Stephen , and S. William. (2007).
Chemical Reviews. In American Chemical Society, (2007),
pp 623640.
[12] M. Chatterjee, S. Das, D. Turgut (2002). WCA: a weighted
clustering algorithm for mobile ad hoc networks. Cluster
Computing 5, (2002).
[13] MICA2 specifications, accessed at
http://www.xbow.com/products/Product_pdf_files/Wireless_pdf/MICA2_Datasheet.pdf
[14] O. Younis, S. Fahmy (2004). Heed: A hybrid, energy-efficient,
distributed clustering approach for ad-hoc sensor networks.
IEEE Transactions on Mobile Computing, vol. 3, no. 4, (2004),
pp. 366379.
[15] P. Cingolani, P. Leaonard, S. Wankhede, M. Fischer, L. Yua-
nhao, M. Arkadiuz, and C. Carmelo. jFuzzyLogic: Open
Source Fuzzy Logic library and FCL language implementation,
accessed at http://jfuzzylogic.sourceforge.net/html/about.html
[16] R. Indranil and D. Sampalli (2005). Cluster-head election
using fuzzy logic for wireless sensor networks. Proceedings of
the 3rd Annual Communication Networks and Services Re-
search Conference, (2005), pp. 255- 260.
[17] R. Rabie and E. Hesham (2008). Deployment of Sensor Net-
works on Critical Infrastructures: A Survey. 4th International
Computer Engineering Conference Information Society Appli-
cations in the Next Decade (ICENCO), (2008),pp. 29-30.
[18] R. Ramadan (2009). Data Mining in Multimodal sensor Net-
works. At ICCES09, (2009).
[19] S. Anenbaum, C. Gamage, and B. Crispo (2006). Taking Sen-
sor Networks from the Lab to the Jungle. Computer 39,
(2006), pp. 98-100.
[20] S. Basagni (1999). Distributed Clustering Algorithm for Ad-
hoc Networks. In International Symposium on Parallel Archi-
tectures, Algorithms, and Networks (I-SPAN), (1999).
[21] T. Meng and R. Volkan (1998). Distributed Network Protocols
for Wireless Communication. Proc. IEEE ISCAS, (1998).
[22] T. Zhang, R. Ramakrishnan, and M. Livny (1996). BIRCH:
An efficient data clustering method for very large databases.
In PTOC. ACM SIGMOD, (1996), pp. 103-114.
[23] V. Constantin (1995). Fuzzy logic and NeuroFuzzy applica-
tions explained . ISBN 0-13-368465-2, 1995.
[24] W. Heinzelman, A. Chandrakasan, and H. Balakrishna (2002).
An Application-Specific Protocol Architecture for Wireless
Microsensor Networks. IEEE Transactions on Wireless Com-
munications, vol. 1, no. 4, (2002), pp. 660670.
[25] W. R. Heinzelman, A. Chandrakasan, and H. Balakrishnan
(2000). Energy-Efficient Communication Protocol for Wire-
less Sensor Networks. Proceedings of the 33rd Hawaii Interna-
tional Conference on System Sciences, (2000).
[26] X. Ma, D. Yang, S. Tang, Q. Luo, D. Zhang, S. Li (2004).
Online Mining in Sensor Networks. vol. 3222. In NPC, (2004).
[27] Y. Tashtoush and M. Okour (2008). Fuzzy Self-Clustering for
Wireless Sensor Networks. IEEE/IFIP International Confe-
rence on Embedded and Ubiquitous Computing, 2008. EUC
'08. , vol.1, no. 20, (2008) , pp.223-229.
[28] Y. Yang , P. Viktor , and K. Bhaskar (2006). Information
Processing and Routing in Wireless Sensor Networks. World
Scientific Publishing Company (2006), ISBN: 978-981-270-
146-6.
