
Autonomous mobile robot navigation system designed in dynamic environment based on transferable belief model

Wang Yaonan(a), Yang Yimin(a,*), Yuan Xiaofang(a), Zuo Yi(a,b), Zhou Yuanli(a), Yin Feng(a), Tan Lei(a)

(a) College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, PR China
(b) Key Laboratory of Regenerative Energy, Electric-Technology of Hunan Province, Changsha University of Science and Technology, Changsha, Hunan 410004, PR China
* Corresponding author. E-mail address: Yangyi_min@126.com (Y. Yimin).
Article info

Article history:
Received 2 February 2010
Received in revised form 19 March 2011
Accepted 12 May 2011
Available online 20 May 2011

Keywords:
Navigation
Path planning
Mobile robot
Robust tracking control
Transferable belief model
Dynamic environment

Abstract
This paper investigates the possibility of using the transferable belief model (TBM) as a promising alternative for the path-planning problem of a nonholonomic mobile robot equipped with ultrasonic sensors in an unknown dynamic environment, where the workspace is cluttered with both static and moving obstacles. The concept of the transferable belief model is introduced and used to design a fusion scheme for the ultrasonic sensor data. A new strategy for mobile robot path planning is then proposed based on the transferable belief model. The robot's path is controlled using the proposed navigation strategy, which depends on navigation parameters that are modified by the TBM pignistic belief value. These parameters are tuned in real time to adjust the path of the robot. A major advantage of the proposed method is that, by detecting the robot's trapped state with the ultrasonic sensors, the navigation law can determine whether an obstacle is dynamic or static without any previous knowledge, and then select the relevant obstacles for the corresponding avoidance motion. Simulations are used to illustrate collision detection and path planning.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
During the past few years, autonomous navigation of nonholonomic systems such as nonholonomic mobile robots has received wide attention and is a topic of great research interest. A navigation system, including map building and path planning, implies that the robot is capable of reacting to static obstacles and to unpredictable dynamic objects that may impede the successful execution of a task. To achieve this level of robustness, the body of literature dealing with path planning has grown rapidly.
The work of Rueb and Wong [1], Habib and Yuta [2], Mataric [3], Rimon and Koditschek [4] and Borenstein and Koren [5] are among the earliest attempts to solve the problem of path planning. Various classical approaches have been extended from their original designs in order to be applicable in real applications. Probabilistic roadmap methods are used in [6-9]. The potential field method is suggested in [10-12]. These methods perform well in static environments. However, this does not automatically imply good performance in dynamic environments, and these methods have limited performance when obstacles are allowed to move in the workspace.
Recently, other kinds of research have been proposed [13-21] which extend the various approaches to dynamic environments: for example, dynamic potential field methods [13-15], kinematics methods for solving the collision detection problem [16], and sensor-based path-planning methods [17-20]. Sensor-based path-planning methods are widely used for navigation in dynamic environments; the robot calculates and estimates the motion of the moving obstacles based on its sensory system. There also exist methods based on other algorithms, such as genetic algorithms [21], fuzzy systems [22], and intelligent algorithms [23,24]. These methods enable wheeled mobile robots to navigate safely and avoid moving obstacles
in dynamic environments. However, most of the approaches proposed in the literature do not consider autonomous navigation in an environment that includes both static and dynamic obstacles. Furthermore, in the real world, any prior knowledge about the environment is, in general, incomplete, uncertain, or completely unknown. These methods assume that each moving object's velocity and direction are exactly known, but such prior knowledge is not easy to obtain.
The transferable belief model (TBM) is a model for the quantified representation of the epistemic uncertainty held by an agent (which can be an intelligent sensor, etc.), and it provides a highly flexible framework for managing the uncertainty encountered in multi-sensor data fusion problems. Applications of the TBM to many areas, including classification and target identification, have been presented in [25-29] in recent years, and we find it appealing for the navigation of mobile robots.
This work investigates the possibility of using the transferable belief model (TBM) as a promising alternative for the navigation system of a nonholonomic mobile robot. First, a neural robust tracking controller, comprising an adaptive wavelet neural network (AWNN) controller, is used to achieve tracking control of the mobile robot under uncertainties without any knowledge of the robot parameters. Then, based on a TBM fusion of the ultrasonic sensor data, a new approach for path planning is proposed. The local map, represented as an occupancy grid with time updates, proves suitable for real-time applications. The attractive and repulsive potential functions are modified by the TBM pignistic belief value. Taking the velocities of the moving obstacles and static objects into account, the suggested method can determine whether an obstacle is dynamic or static without any previous knowledge of the moving obstacles, and then select the relevant obstacles for the corresponding avoidance motion.
The rest of the paper is organized as follows: Section 2 presents the mathematical model of a mobile robot with two actuated wheels. Section 3 discusses the nonlinear kinematic-WNN backstepping controller as applied to the tracking problem. Section 4 proposes a new method for map building based on a sonar rangefinder. The path-planning approach is discussed in Section 5. Simulations and experiments are presented in Section 6.
2. Model of a mobile robot with two actuated wheels
The kinematic model of an ideal mobile robot is widely used in mobile robot (MR) control [30-37]. The MR with two driven wheels shown in Fig. 1 is a typical example of a nonholonomic mechanical system. $OXY$ is the reference coordinate system; $O_r X_r Y_r$ is the coordinate system fixed to the mobile robot; $O_r$, the midpoint between the right and left driving wheels, is the origin of the mobile robot; $d$ is the distance from $O_r$ to $P$; $2b$ is the distance between the two driving wheels; and $r$ is the radius of the wheels. In the 2-D Cartesian space, the pose of the robot is represented by

$$q = [x, y, \theta]^T \quad (1)$$

where $(x, y)^T$ is the coordinate of $O_r$ in the reference coordinate system, and the heading direction $\theta$ is taken counterclockwise from the $OX$-axis.
The motion model, including the kinematics and dynamics of the nonholonomic mobile robot system, can be described as in Hou et al. [34]. It is assumed that the wheels of the robot do not slide. This is expressed by the nonholonomic constraint

$$\dot{x}\sin\theta - \dot{y}\cos\theta = 0 \quad (2)$$

All kinematic equality constraints are assumed to be independent of time and are expressed as

$$A(q)\dot{q} = 0 \quad (3)$$

By appropriate procedures and definitions, the robot dynamics can be transformed as [30,34]

$$M\dot{v}_w + Vv_w + F + \tau_{ed} = \tau \quad (4)$$
where

$$M = \begin{bmatrix} \dfrac{r^2}{4b^2}(mb^2 + I_0) + I_w & \dfrac{r^2}{4b^2}(mb^2 - I_0) \\[2mm] \dfrac{r^2}{4b^2}(mb^2 - I_0) & \dfrac{r^2}{4b^2}(mb^2 + I_0) + I_w \end{bmatrix}$$

$$V = \begin{bmatrix} 0 & \dfrac{r^2}{2b^2}\, m_c d\, \dot{\theta} \\[2mm] -\dfrac{r^2}{2b^2}\, m_c d\, \dot{\theta} & 0 \end{bmatrix}, \qquad F = J^T \bar{F}, \qquad \tau_{ed} = J^T \bar{\tau}_{ed}, \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

where $m = m_c + 2m_w$ and $I_0 = m_c d^2 + 2m_w b^2 + I_c + 2I_m$; $m_c$ and $m_w$ are the masses of the robot body and of a wheel with its actuator, respectively; $I_c$, $I_w$ and $I_m$ are the moments of inertia of the robot body about the vertical axis through $P$, of the wheel with a motor about the wheel axis, and of the wheel with a motor about the wheel diameter, respectively.
Fig. 1. Nonholonomic mobile robot.
Since the wheels of the robot are driven by actuators, it is necessary to take the dynamics of the wheel actuators into account. The motor models obtained by neglecting the voltage across the inductances are

$$\tau_r = \frac{k_a v_r - k_b \omega_r}{R_a}, \qquad \tau_l = \frac{k_a v_l - k_b \omega_l}{R_a} \quad (5)$$
where $v_r$ and $v_l$ are the input voltages applied to the right and left motors; $k_b$ is the voltage constant multiplied by the gear ratio; $R_a$ is the electric resistance constant; $\tau_r$ and $\tau_l$ are the right and left motor torques multiplied by the gear ratio; and $k_a$ is the torque constant multiplied by the gear ratio. The dynamic equations of the motor-wheels are

$$\tau_r = \frac{k_a v_r}{R_a} - \frac{k_a k_b \omega_r}{R_a}, \qquad \tau_l = \frac{k_a v_l}{R_a} - \frac{k_a k_b \omega_l}{R_a} \quad (6)$$
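As a minimal illustration of the actuator model (6), the sketch below (Python) computes each wheel torque from the applied motor voltage and the measured wheel angular velocity. The numerical values of k_a, k_b and R_a are assumptions chosen only for the example, not parameters from the paper.

```python
# Minimal sketch of the motor-wheel model in Eq. (6): torque produced by one
# wheel actuator when the inductance voltage drop is neglected.
def wheel_torque(v_in, omega, k_a=0.1, k_b=0.05, R_a=1.0):
    """tau = k_a*v_in/R_a - k_a*k_b*omega/R_a.
    k_a: torque constant * gear ratio, k_b: voltage constant * gear ratio,
    R_a: armature resistance. All values here are illustrative only."""
    return (k_a * v_in - k_a * k_b * omega) / R_a

tau_r = wheel_torque(v_in=6.0, omega=10.0)   # right wheel
tau_l = wheel_torque(v_in=5.5, omega=9.0)    # left wheel
```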
By using (4)-(6), the mobile robot model including the robot kinematics, robot dynamics, and wheel actuator dynamics can be written as

$$M\dot{v}_w + Vv_w + F + \tau_{ed} = \frac{k_a}{R_a}Bu - \frac{k_a k_b}{R_a}B\omega \quad (7)$$

$$\omega = XV \quad (8)$$

where $u = [v_r, v_l]^T$ is the vector of motor input voltages, $\omega = [\omega_r, \omega_l]^T$ is the vector of wheel angular velocities, and $X$, $V$ are selected as

$$X = \begin{bmatrix} \dfrac{1}{r} & \dfrac{R}{r} \\[2mm] \dfrac{1}{r} & -\dfrac{R}{r} \end{bmatrix} \quad (9)$$

$$V = \begin{bmatrix} v \\ w \end{bmatrix} \quad (10)$$

with $v$ and $w$ the linear and angular velocities of the robot.
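To make the role of X and V in (8)-(10) concrete, the following rough Python sketch (not the authors' code) maps a body-velocity command (v, w) to wheel angular velocities via omega = X V and integrates the unicycle kinematics, which automatically satisfy the nonholonomic constraint (2). Taking R equal to the half wheel-base and the numeric values used here are assumptions for illustration.

```python
import numpy as np

def wheel_speeds(v, w, r=0.05, R=0.2):
    """omega = X @ V with X = [[1/r, R/r], [1/r, -R/r]] (Eqs. (8)-(10))."""
    X = np.array([[1.0 / r,  R / r],
                  [1.0 / r, -R / r]])
    return X @ np.array([v, w])          # [omega_right, omega_left]

def step_pose(q, v, w, dt=0.01):
    """Euler step of the unicycle kinematics; the resulting motion satisfies
    x_dot*sin(theta) - y_dot*cos(theta) = 0 (Eq. (2))."""
    x, y, th = q
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

q = np.array([0.0, 0.0, 0.0])
for _ in range(100):                      # drive forward while turning
    q = step_pose(q, v=0.3, w=0.5)
omega_r, omega_l = wheel_speeds(0.3, 0.5)
```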
3. Nonlinear kinematic-WNN backstepping controller
In trajectory-tracking research for mobile robots, the tracking-control problem is classified as either a kinematic or a dynamic tracking-control problem, depending on whether the system is described by a kinematic or a dynamic model. Using kinematic or dynamic models of nonholonomic mobile robots, various approaches [21,32-37] consider the wheel torques as the control inputs, although in reality the wheels are driven by actuators, so using the actuator input voltages as control inputs is more realistic. To this end, the actuator dynamics are combined with the mobile robot dynamics. In this section, a neural robust tracking controller from our recent work [36] is discussed briefly; it achieves tracking control under uncertainties without any knowledge of the robot parameters. More details, proofs and simulations can be found in [36].
Recall the robot dynamics (7):

$$M\dot{v}_w + Vv_w + F + \tau_{ed} = \frac{k_a}{R_a}Bu - \frac{k_a k_b}{R_a}B\omega \quad (11)$$

The controller for $\tau$ is chosen as

$$\tau = \hat{f} + K_4 e_c + \gamma \quad (12)$$

where $K_4$ is a positive definite diagonal gain matrix, $\gamma$ is a robust term defined later in (21), and $\hat{f}$ is an estimate of $f(x)$, which is defined by

$$f(x) = M(q)\dot{v}_c + V_m(q, \dot{q})v_c + F(v) \quad (13)$$

where $x = [v^T, v_c^T, \dot{v}_c^T]^T$ and $v_c$ is the velocity command, so the tracking error can be defined as

$$e_c = v_c - v \quad (14)$$
Using (11)-(14), to make the error dynamics stable the input term is chosen so that

$$M\dot{e}_c + V_m e_c + K_4 e_c = \tilde{f} + \tau_d + \frac{k_a k_b}{R_a}BXv_c - \frac{k_a}{R_a}Bu + \gamma \quad (15)$$

The unknown function in (15) can be approximated by an AWNN, such that

$$M\dot{v}_c + V_m v_c + F = \bar{W}\psi(x_i, c, m) \quad (16)$$

where the term $\bar{W}\psi(x_i, c, m)$ represents an adaptive wavelet-neural-network approximation, with $\bar{W}$ the connection weights and $c$, $m$ the translation and dilation parameters of the wavelets. The structure of the tracking control system is presented in Fig. 2.
Fig. 2. Structure of the wavelet neural network-based robust control system.
In this figure, no knowledge of the dynamics of the cart is assumed. The function of the AWNN is to reconstruct the dynamics in (16) by learning them on-line.
Assumption 3.1. The approximation errors and disturbances are bounded, i.e., there exist constants $D_e$ and $D_\tau$ such that $\|\varepsilon_f\| \le D_e$ and $\|\tau_d\| \le D_\tau$, respectively.
For ease of notation, the approximation error $\tilde{f}$ can be rewritten as

$$\tilde{f} = W^{*T}\psi^* - \hat{W}^T\hat{\psi} + \varepsilon_f \quad (17)$$

where $\tilde{W} = W^* - \hat{W}$ and $\tilde{\psi} = \psi^* - \hat{\psi}$. The controller is designed to achieve good tracking and stability with the connection weights, dilation and translation parameters of the WNN tuned on-line. To achieve this, a linearization technique is used to transform the nonlinear output of the WNN into a partially linear form so that the Lyapunov theorem extension can be applied. The expansion of $\tilde{\psi}$ in a Taylor series is obtained as follows:
$$\tilde{\psi} = A^T(m^* - \hat{m}) + B^T(c^* - \hat{c}) + H = A^T\tilde{m} + B^T\tilde{c} + H \quad (18)$$

where $A$ and $B$ collect the partial derivatives $\partial\psi_{pq}/\partial m$ and $\partial\psi_{pq}/\partial c$ evaluated at $\hat{m}$ and $\hat{c}$, respectively, and $H$ is a vector of higher-order terms assumed to be bounded by a positive constant. Substituting (18) into (17), it follows that
$$\tilde{f} + \tau_d = \tilde{W}^T\left(\hat{\psi} - A^T\hat{m} - B^T\hat{c}\right) + \hat{W}^T\left(A^T\tilde{m} + B^T\tilde{c}\right) + r \quad (19)$$

where $r = \tilde{W}^T\left(A^T m^* + B^T c^* + H\right) + \hat{W}^T\left(A^T m^* + B^T c^*\right) + \varepsilon_f + \tau_d$.
By substituting (19) into (15), the closed-loop error dynamics can be rewritten as

$$M\dot{e}_c + (K_4 + V_m)e_c = \frac{k_a k_b}{R_a}BXv_c - \frac{k_a}{R_a}Bu + \tilde{W}^T\left(\hat{\psi} - A^T\hat{m} - B^T\hat{c}\right) + \hat{W}^T\left(A^T\tilde{m} + B^T\tilde{c}\right) + r + \gamma \quad (20)$$
Moreover, the residual term $r$ is assumed to satisfy the bound

$$\|r\| \le k_{p1} + k_{p2}\|\hat{W}\| = k_p^T p$$

where $p^T = [1, \|\hat{W}\|]$; $k_p = [k_{p1}, k_{p2}]^T$; $k_{p1} \ge \left\|W^{*T}\left(A^T m^* + B^T c^* + H\right) + \varepsilon_f + \tau_d\right\|$; and $k_{p2} \ge \left\|A^T m^* + B^T c^*\right\|$, i.e., $k_p$ is an uncertainty bound.
The robust term $\gamma$ is designed as

$$\gamma = \left(\hat{k}_p^T p\right)\mathrm{sgn}(e_c) \quad (21)$$
Definition 1. Consider the robust term (21) for the nonholonomic mobile robot. If the controller (12) is used, the closed-loop error dynamics given by (20) are locally asymptotically stable with the approximation-network parameter tuning laws given by

$$\dot{\hat{W}} = g_1\left(\hat{\psi} - A^T\hat{m} - B^T\hat{c}\right)e_c^T - g_1\kappa\|e_c\|\hat{W} \quad (22)$$

$$\dot{\hat{m}} = g_2\hat{W}Ae_c - g_2\kappa\|e_c\|\hat{m} \quad (23)$$

$$\dot{\hat{c}} = g_3\hat{W}Be_c - g_3\kappa\|e_c\|\hat{c} \quad (24)$$

$$\dot{\hat{k}}_p = g_4\|e_c\|p \quad (25)$$

where $g_1$, $g_2$, $g_3$, $g_4$ and $\kappa$ are positive constants, and $\hat{k}_p$ is an on-line estimate of the uncertainty bound $k_p$.
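The discrete-time sketch below shows how the tuning laws (22)-(25) could be applied in a sampled control loop. It is a schematic Euler-integration outline under assumed gains, dimensions and a scalar simplification of the bound estimate; the wavelet basis output psi_hat and its Jacobians A, B are placeholders, and none of this is the authors' implementation.

```python
import numpy as np

def update_awnn(W_hat, m_hat, c_hat, kp_hat, psi_hat, A, B, e_c,
                g=(1.0, 1.0, 1.0, 1.0), kappa=0.01, dt=0.01):
    """One Euler step of the tuning laws (22)-(25).
    W_hat: (p, 2) weights, psi_hat: (p,) wavelet outputs,
    A, B: Jacobians of psi w.r.t. dilation m and translation c,
    e_c: (2,) velocity tracking error, kappa: e-modification constant."""
    g1, g2, g3, g4 = g
    ne = np.linalg.norm(e_c)
    # (22): weight update driven by (psi_hat - A^T m_hat - B^T c_hat) e_c^T
    basis = psi_hat - A.T @ m_hat - B.T @ c_hat
    W_hat = W_hat + dt * (g1 * np.outer(basis, e_c) - g1 * kappa * ne * W_hat)
    # (23)-(24): dilation / translation updates with e-modification damping
    m_hat = m_hat + dt * (g2 * A @ W_hat @ e_c - g2 * kappa * ne * m_hat)
    c_hat = c_hat + dt * (g3 * B @ W_hat @ e_c - g3 * kappa * ne * c_hat)
    # (25): adaptive bound used by the robust term gamma = kp_hat*p*sign(e_c);
    # here the vector p = [1, ||W_hat||] is collapsed to a scalar for brevity.
    p = 1.0 + np.linalg.norm(W_hat)
    kp_hat = kp_hat + dt * (g4 * ne * p)
    return W_hat, m_hat, c_hat, kp_hat

# Example shapes: 6 wavelet nodes, 2 outputs, 3 dilation and 3 translation parameters.
W = np.zeros((6, 2)); m = np.ones(3); c = np.zeros(3); kp = 0.0
W, m, c, kp = update_awnn(W, m, c, kp, psi_hat=np.ones(6),
                          A=np.zeros((3, 6)), B=np.zeros((3, 6)),
                          e_c=np.array([0.1, -0.05]))
```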
4. Map building
An occupancy grid is essentially a data structure that indicates the certainty that a specific part of space is occupied by an obstacle; it is widely used for map building in mobile robotics [17-21]. It represents the environment as a two-dimensional array. Each element of the array corresponds to a specific square of the actual world, and its value indicates the certainty that there is some obstacle there. When new information about the world is received, the array is adjusted on the basis of the nature of that information.
Here, the proposed map-building process utilizes the Transferable Belief Model (TBM). The sonar sensor readings are interpreted by this theory and used to modify the map using the TBM combination rule. Whenever the robot moves, it acquires new information about the environment and updates the old map. Because this uncertainty theory is used to build an occupancy map of the whole environment, an important point must be considered: no position in the updated map can be known with absolute exactness, since no obstacle can be measured with absolute correctness. Thus every discrete region of the map carries belief on the two states E (empty) and O (occupied), together with an uncertain degree H. For this purpose, the map-building system processes the readings in order to assess, as accurately as possible, which cells are (partially certainly) occupied by obstacles and which cells are (partially) empty and thus suitable for robot navigation.
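A minimal sketch of the occupancy-grid data structure described above: each cell stores a bba over the states O (occupied), E (empty) and H (the unknown state {O, E}), initialized to total ignorance. The cell size, grid extent and array-based representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class TBMOccupancyGrid:
    """2-D grid in which every cell carries a bba m over {O, E, H}."""
    def __init__(self, width, height, cell_size=0.5):
        self.cell_size = cell_size
        self.nx = int(width / cell_size)
        self.ny = int(height / cell_size)
        # index 0: m(O), 1: m(E), 2: m(H); start in full ignorance m(H) = 1
        self.m = np.zeros((self.nx, self.ny, 3))
        self.m[:, :, 2] = 1.0

    def cell_index(self, x, y):
        """World coordinates (m) -> integer cell indices."""
        return int(x / self.cell_size), int(y / self.cell_size)

    def bba(self, x, y):
        i, j = self.cell_index(x, y)
        return {"O": self.m[i, j, 0], "E": self.m[i, j, 1], "H": self.m[i, j, 2]}
```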
4.1. Review of the transferable belief model

In this section, we briefly regroup some basics of belief function theory as explained in the transferable belief model (TBM). More details can be found in [25-29].

A.1 (Frame of discernment). The frame of discernment is a finite set of mutually exclusive elements, denoted $\Omega$ hereafter. Beware that the frame of discernment is not necessarily exhaustive.
A.2 (Basic belief assignment). A basic belief assignment (bba) is a mapping $m^\Omega$ from $2^\Omega$ to $[0, 1]$ that satisfies $\sum_{A \subseteq \Omega} m^\Omega(A) = 1$. The basic belief mass (bbm) $m(A)$, $A \subseteq \Omega$, is the value taken by the bba at $A$.

A.3 (Categorical belief function). A categorical belief function on $\Omega$ focused on $A^* \subseteq \Omega$ is a belief function whose related bba $m^\Omega$ satisfies

$$m^\Omega(A) = \begin{cases} 1 & \text{if } A = A^* \\ 0 & \text{otherwise} \end{cases} \quad (26)$$

When all bbas are categorical, the TBM becomes equivalent to classical propositional logic. Two limiting cases of categorical bbas have received special names.
A.4 (Normalized or unnormalized bba). The basic belief mass $m(A)$ represents the part of belief exactly committed to the subset $A$ of $\Omega$ given a piece of evidence, or equivalently to the fact that all we know is that $A$ holds. Normalizing a bba means requiring that $m(\emptyset) = 0$ and that the sum of all bbms given to the non-empty subsets is 1; this corresponds to closing the world. When a bba allows $m(\emptyset) > 0$, we call this an open world. In the TBM, we do not require $m(\emptyset) = 0$ as in Shafer's work.

A.5 (Related functions). The degree of belief $bel(A)$ is defined as $bel: 2^\Omega \to [0, 1]$ with, for all $A \subseteq \Omega$,

$$bel(A) = \sum_{\emptyset \ne B \subseteq A} m(B) \quad (27)$$

The degree of plausibility $pl(A)$ is defined as $pl: 2^\Omega \to [0, 1]$ with, for all $A \subseteq \Omega$,

$$pl(A) = \sum_{B \cap A \ne \emptyset} m(B) = bel(\Omega) - bel(\bar{A}) \quad (28)$$

The commonality function $q$ is defined as $q: 2^\Omega \to [0, 1]$ with, for all $A \subseteq \Omega$,

$$q(A) = \sum_{A \subseteq B \subseteq \Omega} m(B) \quad (29)$$

The functions $q$, $bel$ and $pl$ are always in one-to-one correspondence. More details and proofs of the relationships among the above functions can be found in [26,27].
A.6 (The conjunctive rule). Given two bbas $m_1^\Omega$ and $m_2^\Omega$, obtained from different sensors respectively, the bba that results from their conjunctive combination is defined by

$$(m_1 \cap m_2)(A) = \sum_{B, C \subseteq \Omega,\; B \cap C = A} m_1(B)\, m_2(C), \qquad \forall A \subseteq \Omega \quad (30)$$
A.7 (The pignistic transformation for decision). The pignistic transformation maps bbas to so-called pignistic probability functions. The pignistic transformation of $m^\Omega$ is given by

$$BetP^\Omega(A) = \sum_{B \subseteq \Omega} \frac{|A \cap B|\, m^\Omega(B)}{|B|\,\left(1 - m^\Omega(\emptyset)\right)}, \qquad \forall A \in \Omega \quad (31)$$

where $|A|$ is the number of elements of $\Omega$ in $A$. This yields a classical probability measure from which expected utilities can be computed in order to take optimal decisions. Details and justifications can be found in [25,29].
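For the two-element frame {O, E} used in the map, the conjunctive rule (30) and the pignistic transformation (31) reduce to a few lines. The sketch below is a generic Python illustration of A.6 and A.7 (bbas are dictionaries keyed by frozensets of hypotheses); the numeric masses are placeholders and the code is not taken from the paper.

```python
def conjunctive(m1, m2):
    """Unnormalized conjunctive combination (30): mass of B from m1 and C from m2
    flows to B & C; the empty set collects the conflict (TBM open world)."""
    out = {}
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            out[a] = out.get(a, 0.0) + mb * mc
    return out

def pignistic(m, frame):
    """Pignistic transformation (31): BetP(x) for every singleton x of the frame."""
    conflict = m.get(frozenset(), 0.0)
    betp = {x: 0.0 for x in frame}
    for a, ma in m.items():
        if not a:
            continue
        for x in a:
            betp[x] += ma / (len(a) * (1.0 - conflict))
    return betp

O, E = "O", "E"
m_s0 = {frozenset({O}): 0.53, frozenset({E}): 0.27, frozenset({O, E}): 0.20}
m_s1 = {frozenset({O}): 0.53, frozenset({E}): 0.27, frozenset({O, E}): 0.20}
m_fused = conjunctive(m_s0, m_s1)
print(pignistic(m_fused, {O, E}))   # e.g. BetP(O) ~ 0.72, BetP(E) ~ 0.28
```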
4.2. Sensor modeling and measurement interpretation

The Polaroid ultrasonic rangefinder is used for map building. This is a very common device that can detect distances in the range of about 0.4-7.5 m with 1% accuracy. As in [21], the sensor model converts the range information into belief values. The model illustrated in Figs. 3 and 4 is given by Eqs. (32)-(37).

In region I, where $R - \varepsilon < r < R + \varepsilon$:

$$x = \frac{1}{2}\left[\left(\frac{\beta - \theta}{\beta}\right)^2 + \left(\frac{\varepsilon - |R - r|}{\varepsilon}\right)^2\right] \quad (32)$$

$$E(O|x) = \begin{cases} 0 & -1 < x < 0 \\ \dfrac{1}{2}\left(\dfrac{2x^2}{1 + x^2} + 1\right) & 0 \le x \le 1 \end{cases} \quad (33)$$

$$E(E|x) = 0.8 - E(O|x) \quad (34)$$

In region II, where $R_{\min} < r < R - \varepsilon$:

$$x = \frac{1}{2}\left[\left(\frac{\beta - \theta}{\beta}\right)^2 + \left(\frac{R - \varepsilon - r}{R - \varepsilon}\right)^2\right] \quad (35)$$

$$E(E|x) = \begin{cases} 0 & -1 < x < 0 \\ \dfrac{1}{2}\left(\dfrac{2x^2}{1 + x^2} + 1\right) & 0 \le x \le 1 \end{cases} \quad (36)$$

$$E(O|x) = 0.8 - E(E|x) \quad (37)$$

$R$ is the range response from the ultrasonic sensor, and $(r, \theta)$ are the coordinates of a point inside the sonar cone. $\varepsilon$ is the range error, and it distributes the evidence in region I. $\beta$ is the half open beam angle of the sonar cone. The remaining mass, $E(H) = 0.2$, is assigned to the unknown state $H = \{E, O\}$.
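The following sketch turns one sonar reading into the bba of Eqs. (32)-(37) for a cell located at polar coordinates (r, theta) inside the sonar cone. The parameter values (eps, beta, R_min) and the clamping of the complementary mass are assumptions for illustration; the routine is a reconstruction of the model above, not the authors' code.

```python
def sonar_bba(r, theta, R, eps=2.0, beta=15.0, R_min=0.4):
    """bba {m(O), m(E), m(H)} for a cell at (r, theta) given a range reading R."""
    def confidence(x):
        # Eqs. (33)/(36): 0 for x < 0, else 0.5*(2*x**2/(1 + x**2) + 1)
        return 0.0 if x < 0 else 0.5 * (2.0 * x**2 / (1.0 + x**2) + 1.0)

    angle_term = ((beta - abs(theta)) / beta) ** 2
    if R - eps < r < R + eps:                       # region I: supports "occupied"
        x = (angle_term + ((eps - abs(R - r)) / eps) ** 2) / 2.0     # Eq. (32)
        m_O = confidence(x)
        m_E = max(0.0, 0.8 - m_O)                   # Eq. (34), clamped to stay valid
    elif R_min < r < R - eps:                       # region II: supports "empty"
        x = (angle_term + ((R - eps - r) / (R - eps)) ** 2) / 2.0    # Eq. (35)
        m_E = confidence(x)
        m_O = max(0.0, 0.8 - m_E)                   # Eq. (37), clamped to stay valid
    else:                                           # outside the informative region
        m_O, m_E = 0.0, 0.0
    return {"O": m_O, "E": m_E, "H": 1.0 - m_O - m_E}
```

A cell near the measured range receives most of the 0.8 evidential mass on O, a cell inside the beam but closer than R - eps receives it on E, and the remainder stays on the ignorance state H.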
4.3. The fusion of data from sonar

The sonar data interpreted by the transferable belief model are collected and updated into the map using the same theory of evidence.
Fig. 3. Typical beam pattern of Polaroid ultrasonic sensor.
In our approach, the basic probability assignment corresponding to a range reading $r$ is obtained directly using Eqs. (32)-(37). The next example illustrates the use of the transferable belief model in the map-building process.

Example 1. Let the robot be located in cell (15, 10) at the beginning of the navigation process. The conditions are given in Table 1 and Figs. 5 and 6. The new basic probability assignments for the cells in the map become (for each obstacle, the first cell lies in region I and the second in region II):
(Static obstacles) For the cells affected by the readings on $X_0$, Eqs. (32)-(37) give $x = 0.1878$ for both sensors $S_0$ and $S_1$, hence a mass of 0.5341 on the supported state, $0.8 - 0.5341 = 0.2659$ on its complement and $E(H) = 0.2$. For the cells affected by the readings on $X_1$, $x = 0.3050$ (region I) and $x = 0.6250$ (region II), giving masses 0.5851/0.2149 and 0.7809/0.0191 respectively, again with $E(H) = 0.2$.

(Dynamic obstacles) For the cells affected by the readings of $S_2$ on $X_4$ and $X_5$, $x = 0.3903$ and $x = 0.4450$, giving masses 0.6322/0.1678 and 0.6625/0.1375 respectively, with $E(H) = 0.2$.
In the next moment, the robot moves in the direction depicted by the broken line and the sonars scan the environment again; this situation is shown in Fig. 6. $S_0$, $S_1$ and $S_2$ detect possible obstacles with $r_0^{S_0} = 4$, $\theta_0^{S_0} = 3$; $r_0^{S_1} = 5$, $\theta_0^{S_1} = 6$; $r_1^{S_0} = 6$, $\theta_1^{S_0} = 6$; $r_1^{S_1} = 6$, $\theta_1^{S_1} = 0$; $r_1^{S_2} = 7$, $\theta_1^{S_2} = 3$; $r_2^{S_1} = 9$, $\theta_2^{S_1} = 3$; $r_2^{S_2} = 9$, $\theta_2^{S_2} = 6$; $r_6^{S_2} = 3$, $\theta_6^{S_2} = 6$.
Applying Eqs. (32)-(37) to these readings gives, for the cells around $X_0$: $x_0^{S_0} = 0.3100$ with masses 0.5877/0.2123 and $x_0^{S_1} = 0.2503$ with masses 0.5590/0.2410; for the cells around $X_1$: $x_1^{S_0} = 0.2112$ with masses 0.5427/0.2573 and $x_1^{S_1} = 0.6250$ with masses 0.7809/0.0191; in every case $E(H) = 0.2$.
Fig. 4. The profile of the ultrasonic sensor model.
Table 1
Conditions of this example.

Robot position: x_r = 15, y_r = 10
Dynamic obstacle positions and moving directions: X_4 = (14, 10), moving in the positive X-axis direction; X_5 = (15, 7), moving in the positive X-axis direction
Static obstacle positions: X_0 = (8, 8); X_1 = (10, 5)
Distances r and angles θ detected by the different sonars:
  X_0: r_0^{S_0} = 7, θ_0^{S_0} = 3; r_0^{S_1} = 7, θ_0^{S_1} = 3
  X_1: r_1^{S_0} = 9, θ_1^{S_0} = 6; r_1^{S_1} = 9, θ_1^{S_1} = 0
  X_4: r_4^{S_2} = 5, θ_4^{S_2} = 3
  X_5: r_5^{S_2} = 11, θ_5^{S_2} = 3
For the cells around $X_2$, detected by $S_2$ and $S_1$: $x_1^{S_2} = 0.3278$ and $x_2^{S_1} = 0.3278$, both giving masses 0.5970/0.2030 with $E(H) = 0.2$.

(Dynamic obstacle) For the cells detected by $S_2$: $x_2^{S_2} = 0.1878$ with masses 0.5341/0.2659, and $x_6^{S_2} = 0.3753$ with masses 0.6235/0.1765, with $E(H) = 0.2$.
Fig. 5. Initial position.
Fig. 6. Occupancy determination based on sonar measurements in t = 2 s.
Evidence about the occupancy of the cells, including those containing dynamic objects, is combined using the transferable belief model; the results are shown in Table 2. From the new results it can be concluded that the evidence about the occupancy of the cells has increased. This process continues until the target is reached.
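As a rough illustration of how a table such as Table 2 is produced, the snippet below fuses the bbas of two sensors for the same cell and then fuses the result with the next scan, reusing the sonar_bba, conjunctive and pignistic sketches given earlier (so it inherits all of their assumptions, including the placeholder readings used here).

```python
# Fuse two sensors at time t, then combine with the fused bba of time t+1.
def as_sets(m):
    return {frozenset("O"): m["O"], frozenset("E"): m["E"], frozenset("OE"): m["H"]}

cell_t1 = conjunctive(as_sets(sonar_bba(7.0, 3.0, R=7.0)),
                      as_sets(sonar_bba(7.0, 3.0, R=7.0)))     # sensors S0 and S1
cell_t2 = conjunctive(as_sets(sonar_bba(4.0, 3.0, R=4.0)),
                      as_sets(sonar_bba(5.0, 3.0, R=5.0)))     # next scan
history = conjunctive(cell_t1, cell_t2)                        # combine over time
print(pignistic(history, {"O", "E"}))                          # decision values
```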
5. Path planning
In this section, we adopt a new path-planning strategy in which the robot replans the path as new observations are acquired. In addition, the navigation recognition law that can determine whether an obstacle is dynamic or static without any previous knowledge is discussed in this section.
5.1. Dynamic object recognition law

Condition 1. Consider the pignistic values and basic probability assignments given by the transferable belief model based on the updated information. At time $t$, suppose the number of sensors is $i$ and the sampling interval is 1. An event (cell) $A$ corresponds to a moving obstacle if the bba combined over the sensors $S_1, \ldots, S_i$ and over the times $1, 2, \ldots, t$ satisfies:

(1) $m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(E) + m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \to 1$

(2) $m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \ge 0.5$
Proof. Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ denote a set of $n$ hypotheses, and let the likelihoods $l(\theta_i|x)$ be given for every $\theta_i \in \Theta$, where $X$ denotes the set of possible values the observation $x$ can take. Then

$$m_x(A) = \prod_{\theta_i \in A} l(\theta_i|x) \prod_{\theta_i \in \bar{A}} \left(1 - l(\theta_i|x)\right) \quad (38)$$

$$pl_x(A) = 1 - \prod_{\theta_i \in A} \left(1 - l(\theta_i|x)\right) \quad (39)$$

$$BetP(\theta) = \sum_{\theta \in A \subseteq \Theta} \frac{m(A)}{|A|\left(1 - m(\emptyset)\right)}, \qquad \forall \theta \in \Theta \quad (40)$$
Suppose the sensor measurement is $A \in X$ and its likelihoods give $m^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(O) = a$ and $m^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(E) = b$. With (38),

$$m^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(H) = (1 - a)(1 - b) \quad (41)$$

From (40),

$$Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(\theta_1) = \frac{a \sum_{i=0}^{n-1} C_{n-1}^{i}\, b^{i}(1 - b)^{n-1-i}/(i + 1)}{1 - (1 - a)(1 - b)} \quad (42)$$

The sum can be simplified, following Delmotte [28], as

$$\sum_{i=0}^{n-1} C_{n-1}^{i}\, b^{i}(1 - b)^{n-1-i}/(i + 1) = \frac{1 - (1 - b)^n}{nb} \quad (43)$$

Thus:

$$Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(\theta_1) = \frac{a}{nb} \cdot \frac{1 - (1 - b)^n}{1 - (1 - a)(1 - b)} \quad (44)$$
Let $n = 2$, since there are two hypotheses, occupancy ($O$) and empty ($E$):

$$Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(O) = \frac{a}{2b} \cdot \frac{1 - (1 - b)^2}{1 - (1 - a)(1 - b)} = \frac{a(2 - b)}{2(a + b - ab)} = \frac{a(2 - b)}{2\left(1 - m(H)\right)} \quad (45)$$

If $Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(O) \ll Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(H)$, the event that $A$ is occupied is unbelievable; thus, for a small constant $\varepsilon$, if $Bet^{S_1 \cdots S_i}_{1,\ldots,t}(A)(O)/Bet^{S_1 \cdots S_i}_{1,\ldots,t}(A)(H) < \varepsilon$ we can believe that the occupancy of $A$ is unlikely.
$$\frac{Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(O)}{Bet^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(A)(H)} = \frac{\dfrac{a(2 - b)}{2\left(1 - m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H)\right)}}{\dfrac{m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H)}{1 - m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H)}} = \frac{a(2 - b)}{2\, m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H)} \quad (46)$$
If $Bet^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(O)/Bet^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) < \varepsilon$, we get

$$m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(O) \to 0, \qquad m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(E) + m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \to 1, \qquad m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \ge 0.5$$
Table 2
Probabilistic results from the transferable belief model.

                                        H (unknown)   Occupancy   Empty
m_1^{S0}(T_0)                              0.3362       0.3921    0.1293
m_1^{S1}(T_0)                              0.3362       0.3921    0.1293
m_1^{S0S1}(T_0)                            0.6642       0.2561    0.0505
m_1^{S0}(T_1)                              0.3248       0.4629    0.0875
m_1^{S1}(T_1)                              0.3347       0.4243    0.1063
m_1^{S0S1}(T_1)                            0.5409       0.4550    0.0022
m_1^{S2}(T_4)                              0.3007       0.5261    0.0671
m_1^{S2}(T_5)                              0.2911       0.5714    0.0464
m_2^{S0}(T_0)                              0.3248       0.4629    0.0875
m_2^{S1}(T_0)                              0.3320       0.4243    0.1063
m_2^{S0S1}(T_0)                            0.5371       0.3117    0.0344
m_{12}^{S0S1}(T_0)                         0.5281       0.3874    0.0476
m_2^{S0}(T_1)                              0.3396       0.4031    0.1177
m_2^{S1}(T_1)                              0.2149       0.7660    0.0042
m_2^{S2}(T_1)                              0.3212       0.4758    0.0818
m_2^{S0S1S2}(T_1)                          0.5478       0.2516    0.0004
m_{12}^{S0S1S2}(T_1)                       0.5273       0.3582    0.0682
m_2^{S1}(T_2)                              0.4212       0.4758    0.0818
m_2^{S2}(T_2)                              0.3420       0.3921    0.1239
m_2^{S1S2}(T_2)                            0.6444       0.3016    0.0368
m_2^{S2}(T_6)                              0.3086       0.5149    0.0661
m_{12}^{S2}(T_6) (dynamic obstacle)        0.8444       0.1203    0.0309
m_{12}^{S2}(T_4) (dynamic obstacle)        0.8442       0.1222    0.0292
m_{12}^{S2}(T_5) (dynamic obstacle)        0.8436       0.1289    0.0239
Because $m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) + m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(E) + m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(O) = 1$, we obtain the condition:

(1) $m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(E) + m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \to 1$

(2) $m^{S_1 \cdots S_i}_{1,2,\ldots,t}(A)(H) \ge 0.5$

If this condition is satisfied, we believe the obstacle is dynamic, and the velocity of the dynamic obstacle is defined as

$$v_{obs}(t) = \frac{\sqrt{\left(x_{BetP,t}(O) - x_{BetP,t-T}(O)\right)^2 + \left(y_{BetP,t}(O) - y_{BetP,t-T}(O)\right)^2}}{T} \quad (47)$$

where $\left(x_{BetP,t}(O), y_{BetP,t}(O)\right)$ is the position of the cell whose pignistic occupancy value identifies the obstacle at time $t$, and $T$ is the sampling interval.
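A schematic check of Condition 1 together with the velocity estimate (47): given the bba accumulated for a cell and the pignistic positions of the obstacle at two successive scans, the routine below flags the obstacle as dynamic and estimates its speed. The tolerance, variable names and numeric values are assumptions for illustration.

```python
import math

def is_dynamic(m_cell, tol=1e-2):
    """Condition 1: m(E) + m(H) -> 1 and m(H) >= 0.5 for the fused bba of the cell."""
    return (m_cell["E"] + m_cell["H"] > 1.0 - tol) and (m_cell["H"] >= 0.5)

def obstacle_speed(p_now, p_prev, T=1.0):
    """Eq. (47): displacement of the pignistic occupancy peak divided by the interval T."""
    dx = p_now[0] - p_prev[0]
    dy = p_now[1] - p_prev[1]
    return math.hypot(dx, dy) / T

fused = {"O": 0.005, "E": 0.42, "H": 0.575}     # example fused masses for one cell
if is_dynamic(fused):
    v_obs = obstacle_speed((15.0, 11.0), (15.0, 10.0), T=1.0)   # 1 m/s along +Y
```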
5.2. Potential field method based on TBM

The potential field method has been studied extensively for autonomous mobile robot path planning over the past decade. An attractive potential that drives the mobile robot to its destination is described by Ge and Cui [10,13]. In this paper, we propose a new way to modify this potential field method, based on the TBM, for complex dynamic environments containing both static and dynamic obstacles.
Let $(x, y)_{BetP,t}(O)$ and $(x_r, y_r)$ denote the positions of a dynamic obstacle (from the pignistic map) and of the robot, respectively, let $v_{obs}(t)$ be given by (47), and let $v_{RO}(t) = [v(t) - v_{obs}(t)]^T n_{RO}$. At time $t$, the attractive force is

$$F_{att}(q, v) = \begin{cases} BetP^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(O)\, m\, \alpha_q\, \|q_{tar}(t) - q(t)\|^2\, n_{RT}, & \text{if Condition 1 is not satisfied} \\[2mm] BetP^{S_1 S_2 \cdots S_i}_{t}(O)\, m\, \alpha_q\, \|q_{tar}(t) - q(t)\|^2\, n_{RT} + BetP^{S_1 S_2 \cdots S_i}_{t}(O)\, n\, \alpha_v\, \|v_{tar}(t) - v(t)\|^2\, n_{VRT}, & \text{if Condition 1 is satisfied} \end{cases} \quad (48)$$
where $q(t)$ and $q_{tar}(t)$ denote the positions of the robot and of the target at time $t$, respectively; $q = [x\ y\ z]^T$ in a 3-dimensional space or $q = [x\ y]^T$ in a 2-dimensional space; $v(t)$ and $v_{tar}(t)$ denote the velocities of the robot and of the target at time $t$, respectively; $\|q_{tar}(t) - q(t)\|$ is the Euclidean distance between the robot and the target at time $t$; $\|v_{tar}(t) - v(t)\|$ is the magnitude of the relative velocity between the target and the robot at time $t$; $\alpha_q$ and $\alpha_v$ are scalar positive parameters; $n$ and $m$ are positive constants; $BetP^X(O)$ is the pignistic decision value of the occupancy belief detected by the sensors, for static and dynamic cells respectively; $n_{RT}$ is the unit vector pointing from the robot to the target; and $n_{VRT}$ is the unit vector denoting the relative velocity direction of the target with respect to the robot.
A repulsive force can also be written as

$$F_{rep}(q, v) = \begin{cases} 0, & \text{if Condition 1 is satisfied and } \rho_s(q, q_{obs}) - \rho_m(v_{RO}) \ge \rho_0 \text{ or } v_{RO} \le 0 \\[1mm] F_{rep1} + F_{rep2}, & \text{if Condition 1 is satisfied and } 0 < \rho_s(q, q_{obs}) - \rho_m(v_{RO}) < \rho_0 \text{ and } v_{RO} > 0 \\[1mm] \text{not defined}, & \text{if Condition 1 is satisfied and } v_{RO} > 0 \text{ and } \rho_s(q, q_{obs}) < \rho_m(v_{RO}) \\[1mm] \dfrac{1}{2}\, BetP^{S_1 S_2 \cdots S_i}_{1,2,\ldots,t}(O)\, \eta \left(\dfrac{1}{\rho_s(q, q_{obs})} - \dfrac{1}{\rho_0}\right)^2, & \text{if Condition 1 is not satisfied} \end{cases} \quad (49)$$

where

$$F_{rep1} = -\frac{\eta\, BetP^X(O_v)}{\left(\rho_s(q, q_{obs}) - \rho_m(v_{RO})\right)^2} \left(1 + \frac{v_{RO}}{a_{max}}\right) n_{RO}$$

$$F_{rep2} = \frac{BetP^X(O_v)\, \eta\, v_{RO}\, v_{RO\perp}}{\rho_s(q, q_{obs})\, a_{max} \left(\rho_s(q, q_{obs}) - \rho_m(v_{RO})\right)^2}\, n_{RO\perp}$$

$$\rho_m(v_{RO}) = \frac{v_{RO}^2(t)}{2 a_{max}}$$

$$BetP^X(O_v) = \sum_{B \subseteq X} \frac{|O_v \cap B|\, m^X(B)}{|B|\left(1 - m^X(\emptyset)\right)}$$
Here $\rho_0$ is a positive constant describing the influence range of the obstacle; $\rho_s(q(t), q_{obs}(t))$ denotes the shortest distance between the robot and the body of the obstacle; $v_{RO}(t)$ is the relative velocity between the robot and the obstacle in the direction from the robot to the obstacle; $n_{RO}$ is a unit vector pointing from the robot to the obstacle; $a_{max}$ and $\eta$ are positive constants; $BetP^X(O)$ and $BetP^X(O_v)$ are the pignistic decision values of the occupancy belief for static and dynamic cells, respectively; and $m^X(H)$ is the unknown mass appearing in the pignistic transformation.
From Eqs. (48) and (49), we obtain the total virtual potential force that drives the reference point of the mobile robot to its destination:

$$F_{total} = F_{att} + F_{rep} \quad (50)$$

If the target moves outward or synchronously with the obstacle, the robot is obstructed by the obstacle and cannot reach the target. Refs. [13-17] indicate that in a highly dynamic environment a waiting strategy is often adopted when both the target and the obstacle are moving.
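The sketch below assembles the total virtual force (50) from a simplified version of the TBM-weighted attractive force (48) and of the static branch of the repulsive field (49). The dynamic-obstacle branches (F_rep1, F_rep2) are omitted and all gains are assumed, so this is only an outline of the structure, not the full navigation law.

```python
import numpy as np

def f_att(q, q_tar, betp_O, alpha_q=1.0, m=1.0):
    """Static-target part of Eq. (48): BetP(O)-weighted pull towards the target."""
    d = q_tar - q
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return np.zeros_like(q)
    n_RT = d / dist                                  # unit vector robot -> target
    return betp_O * m * alpha_q * dist**2 * n_RT

def f_rep_static(q, q_obs, betp_O, eta=1.0, rho0=3.0):
    """Last branch of Eq. (49): classical repulsion scaled by the pignistic value."""
    d = q - q_obs
    rho = np.linalg.norm(d)
    if rho >= rho0 or rho == 0.0:
        return np.zeros_like(q)
    n_OR = d / rho                                   # unit vector obstacle -> robot
    return 0.5 * betp_O * eta * (1.0 / rho - 1.0 / rho0) ** 2 * n_OR

q = np.array([2.0, 2.0]); q_tar = np.array([10.0, 8.0]); q_obs = np.array([3.0, 2.5])
F_total = f_att(q, q_tar, betp_O=0.9) + f_rep_static(q, q_obs, betp_O=0.7)   # Eq. (50)
```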
5.3. Local minimum problems
Like other local path-planning approaches, the proposed navigation algorithm has a local-minimum problem. To detect the onset of a local minimum, we adopt the method in [5] (see Fig. 7), which compares the robot-to-target direction, $\theta_t$, with the actual instantaneous direction of travel, $\theta_0$. If the robot's direction of travel is more than a certain angle (90° in most cases) off the target direction, that is, when

$$|\theta_t - \theta_0| > \theta_s \quad (51)$$

where $\theta_s$ is a trap-warning angle, typically 90°, we regard the robot as very likely about to become trapped.
Fig. 7. Incorporating a virtual target on local-minimum alert.
Consider a common situation, as shown in Fig. 8, in which the robot needs to detour around a long wall to reach the goal. At position A, the robot is heading toward the goal position and hence $\theta_s = 0$. At position B, however, the robot changes its direction because of the repulsive force from the detected wall and $\theta_s \ge 90°$. In such a situation, the original target point is replaced with a virtual target point, placed according to the following equations:

$$x_v = x_i + m\cos(\theta_i + \beta_{vt})$$
$$y_v = y_i + m\sin(\theta_i + \beta_{vt}) \quad (52)$$
$$\theta_v = \theta_0 + \beta_{vt}$$

where $m$ is the distance that the robot needs to travel to reach the virtual goal. Also

$$m = L - BetP^X(O)\, R \quad (53)$$

where $L$ is the distance between the robot and the obstacle near the virtual target detected by the longest-range sensor in the corresponding direction, $BetP^X(O)$ is the pignistic decision value of the occupancy belief in the direction $\theta_v$, and $R$ is an offset that depends on the size of the robot. $\beta_{vt}$ is a fixed angle, typically 45°.

Based on (51)-(53), the robot escapes from this trapping point and moves to point C. When the robot escapes from the condition above, or when no obstacle is detected, the original target point is restored and the robot continues to proceed toward it.
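A small sketch of the trap-escape logic of Eqs. (51)-(53): when the travel direction deviates from the robot-to-target direction by more than the warning angle, a virtual target is placed at distance m = L - BetP(O)*R along the rotated heading. The angle convention and the default values are assumptions for illustration.

```python
import math

def needs_virtual_target(theta_t, theta_0, theta_s=math.pi / 2):
    """Eq. (51): trap alert when |theta_t - theta_0| exceeds the warning angle."""
    return abs(theta_t - theta_0) > theta_s

def virtual_target(x_i, y_i, theta_i, theta_0, L, betp_O, R_offset,
                   beta_vt=math.pi / 4):
    """Eqs. (52)-(53): place the virtual goal and return it with the new heading."""
    m = L - betp_O * R_offset                    # travel distance, Eq. (53)
    x_v = x_i + m * math.cos(theta_i + beta_vt)
    y_v = y_i + m * math.sin(theta_i + beta_vt)
    theta_v = theta_0 + beta_vt
    return (x_v, y_v), theta_v

if needs_virtual_target(theta_t=2.4, theta_0=0.2):
    goal, heading = virtual_target(1.0, 1.0, 0.2, 0.2, L=2.5, betp_O=0.8, R_offset=0.3)
```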
6. Simulation and experiment results
To show the usefulness of the proposed approach, a series of simulations has been conducted using an arbitrarily constructed environment including obstacles.
6.1. Simulation results
In these simulations, the positions of all the obstacles, including the moving obstacles in the workspace, are unknown to the robot; the robot is aware of its start and finish positions only. Simulation results obtained in several working scenarios are presented in Figs. 9-13. The robot is modeled as a small circle, and sixteen imaginary sensors are placed along an arc on the circumference of the robot. The minimum distance obtained within the cone of each sensor is taken as the distance to the obstacle detected by that sensor, and Gaussian white noise is added to every distance measurement obtained from the sonar. The first to fourth experiments are conducted to verify the proposed method in the MATLAB environment, and the fifth experiment is carried out in the MobileSim environment.
Experiment 1: Fig. 9 shows the obstacle avoidance path of the mobile robot in the static case. Table 3 shows the condition settings of this experiment. As seen from the virtual plane shown in Fig. 9a, as the distance between Obs6 and the robot decreases, BetP^X(O) becomes much larger, which makes the repulsive force much larger. Ultimately, the robot changes its path by deviating to the right. The complete process is shown in Fig. 9.
Experiment 2: This example illustrates the collision-avoidance process in a dynamic environment.
Fig. 8. Anti-deadlock mechanism.
Fig. 9. Collision avoidance (static case).
The scenario shown in Fig. 10 is similar to that of Experiment 1 but with two moving obstacles added. Fig. 10 plots the obstacle avoidance path of the mobile robot in discrete time while it maneuvers in a complex environment (including dynamic and static obstacles).
Fig. 10. Collision avoidance (dynamic case).
Two rectangular obstacles move in different directions at different velocities; meanwhile, five static obstacles are also placed in this environment. Table 4 shows the condition settings of this experiment.
Fig. 10 shows the scenario and the virtual plane in different time intervals: (a and b) 0 ≤ t ≤ 5 s; (c and d) 0 ≤ t ≤ 10 s; (e and f) 0 ≤ t ≤ 15 s; (g and h) 0 ≤ t ≤ 37 s. As seen from Fig. 10(c and d), the robot is on a collision course with ObsD1, Obs1 and Obs2. However, it is most urgent to avoid collision with ObsD1, and the robot slows down to accomplish this. The robot then avoids collision with Obs2, as shown in Fig. 10(e and f). Fig. 10(e and f) also show that the robot neglects the moving obstacle ObsD2 and does not change its path, because there is no collision risk with ObsD2. The complete process is shown in Fig. 10(g and h).
Experiment 3: For local-minimum avoidance, rigorous testing is carried out in a cluttered maze consisting of concave obstacles and dynamic obstacles (Fig. 11). In Figs. 11 and 12, one marker type represents the static obstacles and another the dynamic obstacles; points a, b, c, ... represent several trapping points; separate markers indicate the real goal
Fig. 11. Collision avoidance (local minimum case).
Fig. 12. Collision avoidance (mazes case).
and the current virtual goal; arrows represent the total virtual potential force that drives the reference point of the mobile robot at its current position. Table 5 shows the dynamic-obstacle settings of this experiment.
Fig. 11 shows the scenario and the virtual plane in different time intervals: (b and c) 0 ≤ t ≤ 10 s; (d and e) 0 ≤ t ≤ 30 s; (f and g) 0 ≤ t ≤ 40 s. As seen from Fig. 11(b and c), the robot does not detect ObsD1 until it reaches point A, at which time it turns to the right, the side with the lowest collision possibility. Since ObsD1 then changes its moving direction and moves away from the robot, the robot continues to track the virtual goal in a counterclockwise direction until it reaches the virtual goal at point B. Since the goal is now to the left of the robot, it continues to track the current virtual goal (point C), at which time the robot detects ObsD2. As Fig. 11b shows, the robot neglects this moving obstacle and does not change its path, because there is no collision risk with ObsD2. In Fig. 11(d and e), there are several local-minimum points, including points D, E and F. At these points, where the robot's direction of travel is more than a certain angle (|θ_t − θ_0| > 90°) off the target point, the robot turns and tracks the current virtual goal. The robot then continues to follow the wall on its left until it arrives at points H and I, after which it heads towards the real target point. The complete process is shown in Fig. 11(f and g): the robot moves counterclockwise around the obstacles through points J and K to the final goal.
Fig. 13. Collision avoidance based on Mobilesim environment (dynamic case).
Table 3
Experimental conditions of obstacle avoidance (static case).

Initial robot position: x_r = 0 [m], y_r = 0 [m]
Goal position: Goal_x = 30 [m], Goal_y = 20.5 [m]
Obstacle positions: Obs1 = (8, 10) [m]; Obs2 = (13, 10) [m]; Obs3 = (18, 20) [m]; Obs4 = (20, 15) [m]; Obs5 = (24, 20) [m]; Obs6_x = 9 [m] to 12 [m], Obs6_y = 10 [m]
Processing time: 30 s
Experiment 4: In this experiment, a more complex test is performed to demonstrate the effectiveness of the proposed approach in a cluttered environment with obstacle loops and mazes. As in Experiment 3, one marker represents the static obstacles and another the dynamic obstacles. Fig. 12a, c and e plot the obstacle avoidance path of the mobile robot in discrete time while it maneuvers in a dynamic environment where many dynamic obstacles move in different directions, whereas Fig. 12b, d and f give detailed information about how the robot navigation system behaves at each discrete time. The proposed algorithm performs well in a cluttered dynamic environment, as shown in Fig. 12a and b. The processes of wall following, passing through a narrow door and escaping from a local-minimum point using our algorithm are shown in Fig. 12c-f.
Experiment 5: This example is shown in Fig. 13. It illustrates the collision-avoidance process when the robot changes both speed and orientation when faced with multiple moving obstacles. Fig. 13a shows the scenario, while Fig. 13b-e indicate the virtual plane in different time intervals: (b) 0 ≤ t ≤ 1.2 s; (c) 0 ≤ t ≤ 5.1 s; (d) 0 ≤ t ≤ 6.3 s; (e) 0 ≤ t ≤ 8.1 s. As seen from the virtual plane shown in Fig. 13b, there is a collision risk with D_1 during 0 ≤ t ≤ 1.2 s. The robot changes its path by deviating to the right, as shown in Fig. 13b. In Fig. 13c and d, the robot changes its orientation to avoid collision with D_1 and D_2 using the proposed method.
6.2. Experiment with a Pioneer robot

The method presented in this paper has been tested on a Pioneer robot. The robot and the experimental setup are shown in Fig. 14. The laptop on top of the robot is in charge of all computation: motion control, planning, SLAM and so on. The navigation is carried out in real time and only sonar is used to detect obstacles.
In order to simulate dynamic obstacles, two football mobile robots (MT-Robot) are used under remote control. The motion of the two MT-Robots is shown in Fig. 15. The track line of the Pioneer robot is indicated approximately in blue in this figure. The room is a clean environment with long wall obstacles and measures approximately 8.6 m by 6.5 m. The objects placed in the room and their positions are indicated in Fig. 15. The position of the robot is recorded at regular time intervals. During the test executions, all programs were run under the Windows NT operating system, directly on the main processor of the robot, a 2.3 GHz Pentium. A test case is presented in Fig. 15.
In this experiment, the goal position is aligned with the front of the mobile robot, which is obstructed by wall obstacles and the two MT-Robots. The mobile robot starts to move towards the real goal. As it moves, it detects obstacles in front and to its left with its ultrasonic sensors. The navigation law makes a correct decision by indicating the rightward direction until the robot reaches the virtual goal. The mobile robot then keeps moving and detects dynamic obstacles to its right. It makes the correct decision, turns to the left, avoids the detected obstacles successfully and then reaches the goal.
Table 4
Experimental conditions of obstacle avoidance (dynamic case).

Initial robot position: x_r = 0 [m], y_r = 0 [m]
Goal position: Goal_x = 29 [m], Goal_y = 25 [m]
Initial dynamic obstacle positions, moving directions and velocities: ObsD1 = (10, 0) [m], moving in the positive Y-axis direction at 0.88 m/s; ObsD2 = (12, 15) [m], moving in the positive X-axis direction at 0.19 m/s
Static obstacle positions: Obs1 = (8, 10) [m]; Obs2 = (12, 10) [m]; Obs3 = (18, 20) [m]; Obs4 = (20, 15) [m]; Obs5 = (24, 20) [m]
Processing time: 37 s
Fig. 14. Pioneer robot.
Table 5
Experimental conditions of dynamic obstacles.

Initial robot position: x_r = 0 [m], y_r = 0 [m]
Goal position: Goal_x = 29 [m], Goal_y = 25 [m]
Dynamic obstacle positions, moving directions and velocities:
  ObsD1: x = 0 [m] to 6 [m], y = 5 [m], moving back and forth along the X-axis at 1.20 m/s
  ObsD2: x = 10 [m], y = 5 [m] to 10 [m], moving back and forth along the Y-axis at 1.00 m/s
  ObsD3: x = 15 [m] to 22 [m], y = 10 [m], moving back and forth along the X-axis at 0.70 m/s
  ObsD4: from (0, 20) to (5, 25), moving back and forth at 0.71 m/s
Processing time: 40 s
Although sonar-based feature detection is not very accurate, especially in a complex environment, and the real-robot performance in this experiment is not as robust as in the MATLAB or MobileSim simulations (the processing time became long and waiting motions happened frequently), the robot can still reach the goal with this navigation method. Furthermore, the performance of the simulation and of the real robot differ. The simulation has a faster cycle time for the iterations of the proposed method, which allows the robot to follow a smoother path. The reactions of the robot in the simulation also differ from those of the real robot. For instance, once a speed command is given to the real robot, the set point is reached over a period of time, whereas in the simulation the speed set point is achieved almost immediately.
These results show that, based on our collision-avoidance algorithm, the mobile robot can find a safe and smooth path to reach the target position autonomously, whether in a static or a dynamic environment, even when highly noisy sensor readings are used.
7. Conclusion
In this paper, a new navigation strategy for a nonholonomic mobile robot in a dynamic environment was designed and fully tested based on the transferable belief model. The aim was to let the robot find a collision-free trajectory between the starting configuration and the goal configuration in a dynamic environment containing obstacles (both static and moving objects). For this purpose, a navigation strategy consisting of sonar data interpretation and fusion, map building, and path planning was designed. The sonar data fusion and the law for distinguishing dynamic objects were discussed using the transferable belief model. Based on the proposed judging law, a path-planning algorithm was derived by modifying the potential field method. Simulation and experimental results validate the effectiveness of the proposed method. Although a mobile robot is discussed here, the method is generally applicable to other types of problems, such as pattern recognition and object classification.
Acknowledgements
The authors thank the anonymous editor and reviewers for their invaluable suggestions, which have been incorporated to improve the quality of this paper dramatically. This work was supported by the National Natural Science Foundation of China (60775047, 60673084), the National High Technology Research and Development Program of China (863 Program: 2008AA04Z214), the National Technology Support Project (2008BAF36B01) and the Research Foundation of Hunan Provincial Education Department (10C0356).
Fig. 15. Mobile robot and experiment.

References

[1] K.D. Rueb, A.K.C. Wong, Structuring free space as a hypergraph for roving robot path planning and navigation, IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (2) (1987) 263-273.
[2] M.K. Habib, S. Yuta, Efficient online path planning algorithm and navigation for a mobile robot, International Journal of Electronics 69 (2) (1990) 187-210.
[3] M.J. Mataric, Integration of representation into goal-driven behavior-based robots, IEEE Transactions on Robotics and Automation 8 (3) (1992) 304-312.
[4] E. Rimon, D.E. Koditschek, Exact robot navigation using artificial potential functions, IEEE Transactions on Robotics and Automation 8 (5) (1992) 501-518.
[5] J. Borenstein, Y. Koren, Real-time obstacle avoidance for fast mobile robots, IEEE Transactions on Systems, Man and Cybernetics 19 (5) (1989) 1179-1187.
[6] S. Thrun, W. Burgard, D. Fox, A probabilistic approach to concurrent mapping and localization for mobile robots, Machine Learning 31 (1-3) (1998) 29-53.
[7] P. Svestka, M.H. Overmars, Motion planning for carlike robots using a probabilistic learning approach, International Journal of Robotics Research 16 (2) (1997) 119-143.
[8] S. Thrun, Probabilistic algorithms in robotics, AI Magazine 21 (4) (2000) 93-109.
[9] C.M. Clark, Probabilistic road map sampling strategies for multi-robot motion planning, Robotics and Autonomous Systems 53 (3-4) (2005) 244-264.
[10] S.S. Ge, Y.J. Cui, New potential functions for mobile robot path planning, IEEE Transactions on Robotics and Automation 16 (5) (2000) 615-620.
[11] K.P. Valavanis, T. Hebert, R. Kolluru, et al., Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 30 (2) (2000) 187-196.
[12] N.C. Tsourveloudis, K.P. Valavanis, T. Hebert, Autonomous vehicle navigation utilizing electrostatic potential fields and fuzzy logic, IEEE Transactions on Robotics and Automation 17 (4) (2001) 490-497.
[13] S.S. Ge, Y.J. Cui, Dynamic motion planning for mobile robots using potential field method, Autonomous Robots 13 (3) (2002) 207-222.
[14] J. Ren, K.A. McIsaac, R.V. Patel, Modified Newton's method applied to potential field-based navigation for nonholonomic robots in dynamic environments, Robotica 26 (2008) 285-294.
[15] L. Yin, Y.X. Yin, C.J. Lin, A new potential field method for mobile robot path planning in the dynamic environments, Asian Journal of Control 11 (2) (2009) 214-225.
[16] F. Belkhouche, B. Belkhouche, Kinematics-based characterization of the collision course, International Journal of Robotics & Automation 23 (2) (2008) 127-136.
[17] Y.C. Chang, Y. Yamamoto, On-line path planning strategy integrated with collision and dead-lock avoidance schemes for wheeled mobile robot in indoor environments, Industrial Robot: An International Journal 35 (5) (2008) 421-434.
[18] A.O. Djekoune, K. Achour, R. Toumi, A sensor based navigation algorithm for a mobile robot using the DVFF approach, International Journal of Advanced Robotic Systems 6 (2) (2009) 97-108.
[19] J. Minguez, L. Montano, Sensor-based robot motion generation in unknown, dynamic and troublesome scenarios, Robotics and Autonomous Systems 52 (4) (2005) 290-311.
[20] L. Moreno, E. Dapena, Path quality measures for sensor-based motion planning, Robotics and Autonomous Systems 44 (2) (2003) 131-150.
[21] J. Velagic, B. Lacevic, B. Perunicic, A 3-level autonomous mobile robot navigation system designed by using reasoning/search approaches, Robotics and Autonomous Systems 54 (12) (2006) 989-1004.
[22] X.Y. Yang, M. Moallem, R.V. Patel, A layered goal-oriented fuzzy motion planning strategy for mobile robot navigation, IEEE Transactions on Systems, Man and Cybernetics Part B: Cybernetics 35 (6) (2005) 1214-1224.
[23] T. Kondo, Evolutionary design and behavior analysis of neuromodulatory neural networks for mobile robots control, Applied Soft Computing 7 (1) (2007) 189-202.
[24] J.A. Fernandez-Leon, G.G. Acosta, M.A. Mayosky, Behavioral control through evolutionary neurocontrollers for autonomous mobile robot navigation, Robotics and Autonomous Systems 57 (4) (2009) 411-419.
[25] P. Smets, Analyzing the combination of conflicting belief functions, Information Fusion 8 (4) (2007) 387-412.
[26] F. Delmotte, P. Smets, Target identification based on the transferable belief model interpretation of Dempster-Shafer model, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 34 (4) (2004) 457-471.
[27] T. Denoeux, P. Smets, Classification using belief functions: relationship between case-based and model-based approaches, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 36 (6) (2006) 1395-1406.
[28] F. Delmotte, Comparison of the performances of decision aimed algorithms with Bayesian and beliefs basis, International Journal of Intelligent Systems 16 (8) (2001) 963-981.
[29] P. Smets, Application of the transferable belief model to diagnostic problems, International Journal of Intelligent Systems 13 (3) (1998) 127-157.
[30] R. Fierro, F.L. Lewis, Control of a nonholonomic mobile robot: backstepping kinematics into dynamics, Journal of Robotic Systems 14 (3) (1997) 149-163.
[31] Q.J. Zhang, J. Shippen, B. Jones, Robust backstepping and neural network control of a low-quality nonholonomic mobile robot, International Journal of Machine Tools & Manufacture 39 (7) (1999) 1117-1134.
[32] S.X. Yang, M. Meng, Real-time fine motion control of robot manipulators with unknown dynamics, Dynamics of Continuous, Discrete and Impulsive Systems - Series B: Applications & Algorithms 8 (3) (2001) 339-358.
[33] G. Antonelli, S. Chiaverini, G. Fusco, A fuzzy-logic-based approach for mobile robot path tracking, IEEE Transactions on Fuzzy Systems 15 (2) (2007) 211-221.
[34] Z.G. Hou, A.M. Zou, L. Cheng, et al., Adaptive control of an electrically driven nonholonomic mobile robot via backstepping and fuzzy approach, IEEE Transactions on Control Systems Technology 17 (4) (2009) 803-815.
[35] W. Sun, Y.N. Wang, A robust robotic tracking controller based on neural network, International Journal of Robotics & Automation 20 (3) (2005) 199-204.
[36] Y. Zuo, Y.N. Wang, X.Z. Liu, et al., Neural network robust control for a nonholonomic mobile robot including actuator dynamics, International Journal of Innovative Computing, Information and Control 6 (8) (2010) 3437-3449.
[37] Y.N. Wang, W. Sun, Y.Q. Xiang, et al., Neural network-based robust tracking control for robots, Intelligent Automation and Soft Computing 15 (2) (2009) 211-222.
Wang Yaonan received the B.S. degree in computer engineering from East China Science and Technology University (ECSTU), Shanghai, China, in 1982 and the M.S. and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 1990 and 1994, respectively. From 1994 to 1995, he was a Postdoctoral Research Fellow with the National University of Defense Technology. From 1981 to 1994, he worked with ECSTU. From 1998 to 2000, he was a senior Humboldt Fellow in Germany, and from 2001 to 2004, he was a visiting professor with the University of Bremen, Bremen, Germany. He has been a Professor at Hunan University since 1995. His research interests are intelligent control and information processing, robot control and navigation, image processing, and industrial process control.

Yang Yimin received the B.E.E. and M.S.E.E. degrees in 2005 and 2009, respectively, from Xiangtan University and Hunan University, Hunan, China. He is currently working toward the Ph.D. degree at Hunan University. His research interests include robot control and navigation, intelligent information processing, and artificial neural networks.

Yuan Xiaofang received the B.S., M.S., and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 2001, 2006 and 2008, respectively. He is currently a lecturer with the College of Electrical and Information Engineering, Hunan University. His research interests include intelligent control theory and applications, kernel methods, and artificial neural networks.

Zuo Yi received the Ph.D. degree in Control Science and Engineering from Hunan University, Changsha, China, in 2009. He was a visiting scholar at the University of Waterloo from 2008 to 2009. His scientific interests include neural networks and robust robot control.