
Design of Data Association Filter Using Neural Networks for Multi-Target Tracking


Yang Weon Lee and Chil Woo Lee
Department of Information and Communication Engineering, Honam University,
Seobongdong, Gwangsangu, Gwangju, 506-714, South Korea
Department of Computer Engineering, Chonnam University, Yongbongdong,
Gwangju, South Korea
ywlee@honam.ac.kr, cwlee@chonnam.ac.kr

Abstract. In this paper, we have developed the MHDA scheme for data
association. This scheme is important in providing a computationally feasible
alternative to complete enumeration of the JPDA, which is intractable.
We have proved that, given an artificial measurement and track configuration,
the MHDA scheme converges to a proper plot in a finite number of iterations.
Also, a proper plot which is not the global solution can be corrected by
re-initializing one or more times. In this light, even though the performance
is enhanced by using the MHDA, we also note that the difficulty of tuning the
parameters of the MHDA is a critical aspect of this scheme. This difficulty can,
however, be overcome by developing suitable automatic instruments that
iteratively verify convergence as the network parameters vary.

1 Introduction

Generally, there are three approaches to data association for MTT: the non-Bayesian
approach based on likelihood functions[1], the Bayesian approach[2,4,3], and the neural
network approach[5]. The major difference between the first two approaches is how
they treat false alarms. The non-Bayesian approach calculates the likelihood
functions of all the possible tracks for the given measurements and selects the
track which maximizes the likelihood function. Meanwhile,
the tracking filter using the Bayesian approach predicts the location of interest using the a posteriori probability. These two approaches are inadequate for real-time
applications because their computational complexity is tremendous.
As an alternative approach, Sengupta and Iltis[5] suggested a Hopfield neural
network probabilistic data association (HNPDA) filter to approximately compute the a
posteriori probabilities β_j^t of the joint probabilistic data association filter
(JPDAF)[7] as a constrained minimization problem. This technique based on
the use of neural networks was also motivated by comparison with the traveling
salesman problem (TSP). In fact, β_j^t is approximated by the output voltage X_j^t
of a neuron in an (m + 1) × n array of neurons, where m is the number of measurements and n is the number of targets. Sengupta and Iltis[5] claimed that the
performance of the HNPDA was close to that of the JPDAF in situations where
the numbers of measurements and targets were in the ranges of 3 to 20 and 2
D.-S. Huang, K. Li, and G.W. Irwin (Eds.): ICIC 2006, LNCS 4113, pp. 981-990, 2006.
(c) Springer-Verlag Berlin Heidelberg 2006



to 6, respectively. The success of the HNPDA in their examples was credited to
the accurate emulation of all the properties of the JPDAF by the HNPDA.
However, the neural network developed in [5] has two problems. First, it has
been shown to have improper energy functions. Second, the heuristic choices of
the constant parameters in the energy function in [5] do not guarantee optimal
data association.
The outline of this paper is as follows. In Section 2 the Hopfield neural network
used in [5] is briefly reviewed and some comments are made on the assumptions
which were used to set up the energy function in [5]. Then, in Section 3, the modified
scheme of the HNPDA is proposed as an alternative data association method for MTT.
Finally, we present our simulation results in Section 4 and conclusions in Section 5.

2 Review of the Energy Function in the HNPDA and Comments

Suppose there are n targets and m measurements. The energy function used in
[5] is reproduced below:

$$
E_{DAP} = \frac{A}{2}\sum_{j=0}^{m}\sum_{t=1}^{n}\sum_{\substack{\tau=1\\ \tau\neq t}}^{n} X_j^t X_j^\tau
+ \frac{B}{2}\sum_{t=1}^{n}\sum_{j=0}^{m}\sum_{\substack{l=0\\ l\neq j}}^{m} X_j^t X_l^t
+ \frac{C}{2}\sum_{t=1}^{n}\Big(\sum_{j=0}^{m} X_j^t - 1\Big)^2
+ \frac{D}{2}\sum_{j=0}^{m}\sum_{t=1}^{n}\big(X_j^t - \rho_j^t\big)^2
+ \frac{E}{2}\sum_{j=0}^{m}\sum_{t=1}^{n}\sum_{\substack{\tau=1\\ \tau\neq t}}^{n}\sum_{\substack{l=0\\ l\neq j}}^{m}\big(X_j^t - X_l^\tau\big)^2. \qquad (1)
$$

In [5], X_j^t is the output voltage of a neuron in an (m + 1) × n array of neurons
and is the approximation to the a posteriori probability β_j^t in the JPDAF[7]. This
a posteriori probability, in the special case of the PDAF[7] when the probability
P_G that the correct measurement falls inside the validation gate is unity, is
denoted by ρ_j^t. Actually, P_G is very close to unity when the validation gate size
is adequate. In (1), A, B, C, D, and E are constants.
In the HNPDA, the connection strength matrix is a symmetric matrix of order
n(m + 1). With the given energy function E_DAP in (1), the connection strength
W_{jl}^{tτ} from the neuron at location (τ, l) to the neuron at location (t, j) is

$$
W_{jl}^{t\tau} =
\begin{cases}
-[C + D + E(n-1)] & \text{if } t = \tau \text{ and } j = l & \text{(self feedback)},\\
-A & \text{if } t \neq \tau \text{ and } j = l & \text{(row connection)},\\
-(B + C) & \text{if } t = \tau \text{ and } j \neq l & \text{(column connection)},\\
2E & \text{if } t \neq \tau \text{ and } j \neq l & \text{(global connection)}.
\end{cases} \qquad (2)
$$

The input current I_j^t to the neuron at location (t, j), for t = 1, 2, . . . , n and
j = 0, 1, . . . , m, is

$$
I_j^t = C + (D + E)\rho_j^t + E\Big(n - 1 - \sum_{\substack{\tau=1\\ \tau\neq t}}^{n} \rho_j^\tau\Big). \qquad (3)
$$
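As a small illustration of how the bias inputs of (3) are formed from the PDAF probabilities, the following sketch (our own reconstruction, not code from [5]; the array shapes and parameter values are chosen only for illustration) computes I_j^t for every neuron at once:

```python
import numpy as np

def hnpda_input_current(rho, C, D, E):
    """Input currents I_j^t of eq. (3).

    rho[j, t] holds the PDAF probability rho_j^t for measurement
    j = 0..m (rows) and target t = 1..n (columns), so rho is (m+1) x n.
    """
    n = rho.shape[1]
    # sum over tau != t of rho_j^tau, computed for every (j, t) at once
    others = rho.sum(axis=1, keepdims=True) - rho
    return C + (D + E) * rho + E * (n - 1 - others)

# With every rho_j^t = 0.5 and n = 3 targets: others = 1.0 everywhere,
# so I = C + 0.5*(D + E) + E*(3 - 1 - 1).
I = hnpda_input_current(np.full((4, 3), 0.5), C=1.0, D=2.0, E=3.0)
print(I[0, 0])   # 6.5
```

Note how the currents, unlike the connection strengths in (2), change with the measurement data through the ρ_j^t values, which is the point made in the next paragraph.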


Clearly, from (2) and (3), the input current I_j^t, but not the connection strength,
depends on the ρ_j^t's, which are computed from the measurements that comprise
the input data. Ironically, in the neural network for the TSP[9], only the
connection strengths depend on the input data, which, in that case, are the
distances between pairs of cities.
In order to justify the first two terms of E_DAP in (1), the authors of [5]
claimed that the dual assumptions of no two returns from the same target and
no single return from two targets are consistent with the presence of a dominating X_j^t in each row and each column of the (m + 1) × n array of neurons.
However, these assumptions are not constraints on the values of the β_j^t's in
the original JPDAF. Those assumptions should be used only in the generation
of the feasible data association hypotheses, as pointed out in [7]. As a matter
of fact, there could be two β_j^t's of comparable magnitude in the same row
and in the same column, as shown in Chapter 4 of [8]. Therefore, the presence
of a dominating X_j^t in each row and each column is not a property of the
JPDAF.
The third term of E_DAP is used to constrain the sum of the X_j^t's in each
column to unity, i.e. Σ_{j=0}^{m} X_j^t = 1. This constraint is consistent with the
requirement that Σ_{j=0}^{m} β_j^t = 1 in both the JPDAF and the PDAF[10]. Therefore,
this constraint, by itself, does not permit us to infer whether the β_j^t's are from
the JPDAF or from the PDAF. The assumption used to set up the fourth term
is that this term is small only if X_j^t is close to ρ_j^t, in which case the neural
network simulates more closely the PDAF for each target rather than the intended
JPDAF in the multitarget scenario. Finally, the fifth term is supposed to be
minimized if X_j^t is not large unless, for each τ ≠ t, there is a unique l ≠ j such that
X_l^τ is large. Unfortunately, this constrained minimization may not be possible, as
shown in [8]. This is consistent with the heuristic nature of the derivation of the
energy function in [5], which could lead to problems in the implementation
of the HNPDA, as discussed next.

3 Modified Scheme of HNPDA

3.1 Modification of Energy Equation

In a Hopfield network, as the operation approaches the steady state, at
most one neuron in each row and column should get into the ON state while the
other neurons are in the OFF state. To guarantee this state, we add the
following constraints for the Hopfield network:

$$
E_s = \frac{1}{2}\sum_{j=1}^{m}\sum_{t=0}^{n}\sum_{\substack{\tau=0\\ \tau\neq t}}^{n} \hat{\omega}_j^t \hat{\omega}_j^\tau
+ \frac{1}{2}\sum_{t=1}^{n}\sum_{j=1}^{m}\sum_{\substack{l=1\\ l\neq j}}^{m} \hat{\omega}_j^t \hat{\omega}_l^t. \qquad (4)
$$

By summing (4) and (10) in [11], we get the final energy equation for the Hopfield
network:

$$
E_{HDA} = \frac{A}{2}\sum_{j=1}^{m}\sum_{t=0}^{n}\sum_{\substack{\tau=0\\ \tau\neq t}}^{n} \hat{\omega}_j^t \hat{\omega}_j^\tau
+ \frac{B}{2}\sum_{t=1}^{n}\sum_{j=1}^{m}\sum_{\substack{l=1\\ l\neq j}}^{m} \hat{\omega}_j^t \hat{\omega}_l^t
+ \frac{C}{2}\sum_{t=1}^{n}\sum_{j=1}^{m}\big(\hat{\omega}_j^t - \omega_j^t\big)^2
+ \frac{D}{2}\sum_{t=1}^{n}\Big(\sum_{j=1}^{m}\hat{\omega}_j^t - 1\Big)^2
+ \frac{F}{2}\sum_{j=1}^{m}\Big(\sum_{t=0}^{n}\hat{\omega}_j^t - 1\Big)^2
+ \frac{G}{2}\sum_{t=1}^{n}\sum_{j=1}^{m} r_j^t\,\hat{\omega}_j^t. \qquad (5)
$$

The first two terms of (5) correspond to row and column inhibition, and the
third term suppresses the activation of uncorrelated pairs (i.e., if ω_j^t = 0, then
ω̂_j^t = 0). The fourth and fifth terms bias the final solution towards a
normalized set of numbers. The last term favors associations which have a nearest
neighbor in view of target velocity.
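To make the role of each term concrete, the sketch below (our own illustration; the array layout and the test matrices are assumptions, not taken from the paper) evaluates E_HDA of (5) for candidate binary association matrices. A conflict-free assignment that matches ω has zero energy, while assigning two plots to one target is penalized by the B and D terms:

```python
import numpy as np

def modified_energy(w_hat, w, r, A, B, C, D, F, G):
    """Evaluate E_HDA of eq. (5).

    w_hat, w, r are m x (n+1) arrays indexed [j, t] with j the plot index
    and t = 0 the false-alarm column, t = 1..n the real targets.
    """
    m, n_plus_1 = w_hat.shape
    # row inhibition: one plot assigned to two different target columns
    row = sum(w_hat[j, t] * w_hat[j, tau]
              for j in range(m)
              for t in range(n_plus_1)
              for tau in range(n_plus_1) if tau != t)
    # column inhibition: two plots assigned to the same real target
    col = sum(w_hat[j, t] * w_hat[l, t]
              for t in range(1, n_plus_1)
              for j in range(m)
              for l in range(m) if l != j)
    corr = ((w_hat[:, 1:] - w[:, 1:]) ** 2).sum()            # uncorrelated suppression
    col_norm = ((w_hat[:, 1:].sum(axis=0) - 1) ** 2).sum()   # one plot per target
    row_norm = ((w_hat.sum(axis=1) - 1) ** 2).sum()          # one column per plot
    near = (r[:, 1:] * w_hat[:, 1:]).sum()                   # nearest-neighbour bias
    return (A * row + B * col + C * corr + D * col_norm
            + F * row_norm + G * near) / 2.0

r = np.zeros((3, 3))
clean = np.array([[0., 1., 0.],    # plot 1 -> target 1
                  [0., 0., 1.],    # plot 2 -> target 2
                  [1., 0., 0.]])   # plot 3 -> false alarm
clash = np.array([[0., 1., 0.],
                  [0., 1., 0.],    # two plots on target 1
                  [1., 0., 0.]])
print(modified_energy(clean, clean, r, 2, 2, 2, 2, 2, 2))  # 0.0
print(modified_energy(clash, clash, r, 2, 2, 2, 2, 2, 2))  # 4.0
```

The clashing assignment pays B for the column inhibition and D for the violated column normalization, which is exactly the behavior the two added constraint terms of (4) are meant to enforce.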
3.2 Transformation of Energy Function into Hopfield Network

A Hopfield network with m(n + 1) neurons is considered. The neurons are
subdivided into n + 1 target columns of m neurons each. Henceforward we
will identify each neuron with a double index tl (where the index t = 0, 1, . . . , n
relates to the target, whereas the index l = 1, . . . , m refers to the neurons in each
column), its output with X_l^t, the weight between neurons jt and lτ with W_{jl}^{tτ}, and
the external bias current for neuron tl with I_l^t. According to this convention, we
can extend the notation of the Lyapunov energy function[12] to two dimensions.
Using the Kronecker delta function

$$
\delta_{ij} = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \neq j, \end{cases} \qquad (6)
$$
the energy function E_HDA of (5) can be written as

$$
\begin{aligned}
E_{HDA} ={}& \frac{A}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} \delta_{lj}(1-\delta_{t\tau})X_l^t X_j^\tau
+ \frac{B}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} \delta_{t\tau}(1-\delta_{lj})(1-\delta_{0t})X_l^t X_j^\tau\\
&+ \frac{C}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} \delta_{t\tau}\delta_{lj}(1-\delta_{0t})X_l^t X_j^\tau
- C\sum_{t}\sum_{l}(1-\delta_{0t})\,\omega_l^t X_l^t
+ \frac{C}{2}\sum_{t}\sum_{l}(\omega_l^t)^2(1-\delta_{0t})\\
&+ \frac{D}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} \delta_{t\tau}(1-\delta_{0t})X_l^t X_j^\tau
- D\sum_{t}\sum_{l}(1-\delta_{0t})X_l^t
+ \frac{Dn}{2}\\
&+ \frac{F}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} \delta_{lj}X_l^t X_j^\tau
- F\sum_{t}\sum_{l} X_l^t
+ \frac{Fm}{2}
+ \frac{G}{2}\sum_{t}\sum_{l} r_l^t(1-\delta_{0t})X_l^t.
\end{aligned} \qquad (7)
$$

We also can extend the notation of the Lyapunov energy function[12] to two
dimensions:

$$
E = -\frac{1}{2}\sum_{t}\sum_{\tau}\sum_{l}\sum_{j} W_{jl}^{t\tau} X_l^t X_j^\tau
- \sum_{t}\sum_{l} I_l^t X_l^t. \qquad (8)
$$


Fig. 1. Example of Hopfield network for two targets and three plots (edges labeled with the weights -(C+D+F), -(A+F), -(B+D), and -F)

By comparing (8) and (7), we get the connection strength matrix and the input
parameters:

$$
W_{jl}^{t\tau} = -\big[A(1-\delta_{t\tau}) + C\delta_{t\tau}(1-\delta_{0t}) + F\big]\delta_{lj}
- \big[B(1-\delta_{lj}) + D\big]\delta_{t\tau}(1-\delta_{0t}),
$$
$$
I_l^t = \Big(C\omega_l^t + D - \frac{G}{2}r_l^t\Big)(1-\delta_{0t}) + F. \qquad (9)
$$

Here we omit constant terms such as (Dn + Fm)/2 + (C/2)Σ_tΣ_l(ω_l^t)^2(1 − δ_{0t}). These
terms do not affect the neurons' outputs since they act only as bias terms during
the processing.
Using (9), the connection strength W_{jl}^{tτ} from the neuron at location (τ, l)
to the neuron at location (t, j) is

$$
W_{jl}^{t\tau} =
\begin{cases}
-[(C + D)(1-\delta_{0t}) + F] & \text{if } t = \tau \text{ and } j = l & \text{(self feedback)},\\
-(A + F) & \text{if } t \neq \tau \text{ and } j = l & \text{(row connection)},\\
-(B + D)(1-\delta_{0t}) & \text{if } t = \tau \text{ and } j \neq l & \text{(column connection)},\\
0 & \text{if } t \neq \tau \text{ and } j \neq l & \text{(global connection)}.
\end{cases} \qquad (10)
$$

Fig. 1 sketches the resulting two-dimensional network architecture as a directed
graph using (10). We note that only 39 of the 81 possible connections
are realized in this 3 × 3 neuron example. This means that the modified Hopfield
network can be represented as a sparse matrix. In Fig. 1, we also note that there
are no connections between diagonal neurons.
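The sparsity claim is easy to check numerically. The sketch below (our own illustration; the parameter values are the ones quoted in Section 4.1) builds W per (10) for the two-target, three-plot case of Fig. 1 and counts the realized connections:

```python
import numpy as np

def mhda_weights(n, m, A, B, C, D, F):
    """Connection strengths of eq. (10) for n+1 target columns (t = 0..n)
    of m neurons each. Returns W[t, j, tau, l]."""
    W = np.zeros((n + 1, m, n + 1, m))
    for t in range(n + 1):
        for j in range(m):
            for tau in range(n + 1):
                for l in range(m):
                    if t == tau and j == l:        # self feedback
                        W[t, j, tau, l] = -((C + D) * (t != 0) + F)
                    elif t != tau and j == l:      # row connection
                        W[t, j, tau, l] = -(A + F)
                    elif t == tau and j != l:      # column connection
                        W[t, j, tau, l] = -(B + D) * (t != 0)
                    # global connection (t != tau, j != l) stays 0
    return W

# Two targets and three plots, as in Fig. 1: 9 neurons, 81 possible links.
W = mhda_weights(2, 3, A=50, B=50, C=100, D=1000, F=1000)
print(np.count_nonzero(W))   # 39
```

The count decomposes as 9 self-feedback links, 18 row connections, and 12 column connections (the false-alarm column t = 0 contributes no column inhibition), which is where the figure's 39 comes from.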
With the specific values from (9), the equation of motion for the MTT becomes

$$
\frac{dS_l^t}{dt} = -\frac{S_l^t}{s_o}
- \sum_{\tau}\sum_{j}\Big[\big\{A(1-\delta_{t\tau}) + C\delta_{t\tau}(1-\delta_{0t}) + F\big\}\delta_{lj}
+ \big\{B(1-\delta_{lj}) + D\big\}\delta_{t\tau}(1-\delta_{0t})\Big]X_j^\tau
+ \Big(C\omega_l^t + D - \frac{r_l^t}{2}G\Big)(1-\delta_{0t}) + F. \qquad (11)
$$


The final equation of data association is

$$
\frac{dS_l^t}{dt} = -\frac{S_l^t}{S_o}
- A\sum_{\substack{\tau=0\\ \tau\neq t}}^{n} X_l^\tau
- B(1-\delta_{0t})\sum_{\substack{j=1\\ j\neq l}}^{m} X_j^t
- D(1-\delta_{0t})\Big(\sum_{j=1}^{m} X_j^t - 1\Big)
- F\Big(\sum_{\tau=0}^{n} X_l^\tau - 1\Big)
- C(1-\delta_{0t})\big(X_l^t - \omega_l^t\big)
- \frac{r_l^t}{2}(1-\delta_{0t})G. \qquad (12)
$$

The parameters A, B, C, D, F, and G can be adjusted to control the emphasis
on different constraints and properties. A larger emphasis on A, B, and F will
produce only one activated neuron in each column and row. A large value of
C will produce X_l^t close to ω_l^t, except for duplicated activations of neurons in the
same row and column. A larger emphasis on G will make the neurons activate
depending on the targets' course-weighted values. Finally, a balanced
combination of all six parameters will lead to the most desirable association. In
this case, a large number of targets and measurements will only require a larger
array of interconnected neurons instead of an increased load on any sequential
software to compute the association probabilities.
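The relaxation described by (12) can be sketched with a simple Euler integration and a sigmoid activation. The parameter values A through G and S_o below are the ones reported in Section 4.1, but the step size, iteration count, and the toy ω matrix are our own assumptions chosen only to illustrate the dynamics:

```python
import numpy as np

def mhda_relax(omega, r, A=50., B=50., C=100., D=1000., F=1000., G=100.,
               S_o=1.0, dt=1e-4, iters=5000):
    """Euler integration of the equation of motion (12).

    omega[t, l]: correlation indicator for target column t = 0..n
    (t = 0 is the false-alarm column) and plot l; r[t, l]: distance weight.
    Neuron outputs are X = g(S) with a sigmoid g. Step size and iteration
    count are illustrative choices, not values from the paper.
    """
    S = np.zeros_like(omega, dtype=float)
    real = np.ones((omega.shape[0], 1)); real[0] = 0.0   # the (1 - delta_0t) factor
    for _ in range(iters):
        X = 1.0 / (1.0 + np.exp(-S))
        over_targets = X.sum(axis=0, keepdims=True) - X  # sum over tau != t
        over_plots = X.sum(axis=1, keepdims=True) - X    # sum over j != l
        dS = (-S / S_o
              - A * over_targets
              - B * real * over_plots
              - D * real * (X.sum(axis=1, keepdims=True) - 1.0)
              - F * (X.sum(axis=0, keepdims=True) - 1.0)
              - C * real * (X - omega)
              - (r / 2.0) * real * G)
        S += dt * dS
    return 1.0 / (1.0 + np.exp(-S))

# Two targets, three plots: plot 1 correlates with target 1, plot 2 with
# target 2, plot 3 with nothing (it falls to the false-alarm column t = 0).
omega = np.array([[0., 0., 1.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
X = mhda_relax(omega, np.zeros_like(omega))
print(np.round(X, 2))
```

Under this setup the C term pulls the correlated neurons up while the normalization and inhibition terms suppress their row and column mates, so each real target column ends with a single dominant activation.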
3.3 Computational Complexity

The computational complexity of the modified Hopfield data association
(MHDA) scheme, when applied to the target association problem, depends on
the number of tracks and measurements and on the number of iterations needed
to reach a stable state. Suppose that there are n tracks and m measurements. Then,
according to (12), the data association computation of the MHDA method requires
O(nm) operations per iteration. If we denote the average number of iterations by
k̄, the total data association calculation requires O(k̄nm) computations. Therefore,
even if the numbers of tracks and measurements increase, the required computation
does not grow exponentially. However, the JPDAF, as estimated in [5], has
computational complexity O(2^{nm}), which increases exponentially with the number
of tracks and measurements.
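For a feel of the difference, this back-of-the-envelope sketch compares the two growth rates. The value k̄ = 300 is only suggested by the iteration counts visible in Figs. 3 and 4; it is an assumption, not a measured constant:

```python
# O(k*n*m) relaxation cost versus O(2**(n*m)) exhaustive JPDAF enumeration.
def mhda_ops(n, m, k=300):
    """Approximate operation count for k relaxation sweeps of eq. (12)."""
    return k * n * m

def jpdaf_ops(n, m):
    """Enumeration count growing as 2^(n*m), per the estimate in [5]."""
    return 2 ** (n * m)

for n, m in [(2, 3), (3, 7), (6, 20)]:
    print(n, m, mhda_ops(n, m), jpdaf_ops(n, m))
# At n = 6 tracks and m = 20 measurements the enumeration term is 2^120,
# while the relaxation cost is only 300 * 6 * 20 = 36,000 operations.
```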

4 Simulation Results

4.1 Data Association Results

To test the pure data association capability of the MHDA method, predefined
target and measurement values are used, in order to exclude any effects due to
missed detections, which occur at moderate rates in real environments. An example
with three targets and seven measurements is depicted in Fig. 2. In Fig. 2, large
circles represent track gates, the symbol * marks the measurement plots, and small
circles on some of the measurement plots mark the measurements which are
associated with tracks by the MHDA. Figs. 3 and 4 show how the distance energy
and the matching energy, respectively, change during the iterations. In this example, the


Fig. 2. Diagram of Hopfield network for three targets and seven plots (measurements m1-m7, x/y axes in meters)

Fig. 3. Distance energy convergence for three targets and seven plots (distance energy versus iteration number, 0-300)

Fig. 4. Matching energy of Hopfield network for three targets and seven plots (matching energy versus iteration number, 0-300)


Fig. 5. RMS position [km] and velocity [km/sec] errors in the X axis versus time (k = 40) for target 8: MHDA and HNPDA

Fig. 6. RMS position [km] and velocity [km/sec] errors in the X axis versus time (k = 40) for target 9: MHDA and HNPDA

association pairs are track 1 and measurement 4, track 2 and measurement 2, and
track 3 and measurement 6. Note that the results of the data association are correct
with respect to the nearest neighbor. In the simulation, the constants A = 50, B =
50, C = 100, D = 1000, F = 1000, and G = 100 appeared to be suitable for this
scenario. S_o was selected to be 1 s.
4.2 Sequential Tracking Results

The crossing, parallel, and maneuvering targets whose initial parameters are
taken from targets 1, 2, 3, 4, 8, and 9, respectively, of Table 1 in [11] are tested. In
Figs. 5 and 6, the RMS estimation errors for the maneuvering targets are shown.
The HNPDA cannot track the dog-leg maneuvering target, though it can track the
constant-acceleration target. Table 1 summarizes the RMS position and velocity
errors for each target. The RMS errors of the HNPDA for the maneuvering target
have not been included since it loses track of one of the targets. The performance
of the MHDA is superior to that of the HNPDA in terms of tracking accuracy and
track maintenance.


Table 1. RMS errors in the case of ten targets

Target   Position error (km)   Velocity error (km/s)   Track maintenance (%)
  i       HNPDA     MHDA        HNPDA     MHDA           HNPDA     MHDA
  1       0.048     0.044       0.024     0.021            95       98
  2       0.051     0.048       0.028     0.018            95       98
  3       0.065     0.044       0.021     0.018            85       98
  4       0.049     0.041       0.020     0.018            93       98
  5       0.041     0.044       0.018     0.018           100      100
  6       0.042     0.043       0.021     0.018           100      100
  7       0.040     0.040       0.018     0.018           100      100
  8         -       0.295         -       0.118             0       53
  9       0.058     0.047       0.027     0.022           100      100
 10       0.037     0.039       0.011     0.012           100      100

5 Conclusions

In this paper, we have developed the MHDA scheme for data association. This
scheme is important in providing a computationally feasible alternative to complete
enumeration of the JPDA, which is intractable. We have proved that, given an
artificial measurement and track configuration, the MHDA scheme converges to a
proper plot in a finite number of iterations. Also, a proper plot which is not the
global solution can be corrected by re-initializing one or more times. In this light,
even though the performance is enhanced by using the MHDA, we also note that the
difficulty of tuning the parameters of the MHDA is a critical aspect of this scheme.
This difficulty can, however, be overcome by developing suitable automatic
instruments that iteratively verify convergence as the network parameters vary.

Acknowledgements
This research has been supported by research funding of Center for High-Quality
Electric Components and Systems, Chonnam National University, Korea.

References
1. Alspach, D. L.: A Gaussian Sum Approach to the Multi-Target Identification-Tracking Problem. Automatica, 11 (1975) 285-296
2. Bar-Shalom, Y.: Extension of the Probabilistic Data Association Filter in Multi-Target Tracking. Proceedings of the 5th Symposium on Nonlinear Estimation, Sep. (1974) 16-21
3. Reid, D. B.: An Algorithm for Tracking Multiple Targets. IEEE Trans. on Automat. Contr., 24 (1979) 843-854
4. Reid, D. B.: A Multiple Hypothesis Filter for Tracking Multiple Targets in a Cluttered Environment. Lockheed Palo Alto Research Laboratory Tech. Report LMSC-D560254, Sept. (1977)
5. Sengupta, D., Iltis, R. A.: Neural Solution to the Multitarget Tracking Data Association Problem. IEEE Trans. on AES, AES-25, Jan. (1989) 96-108
6. Kuczewski, R.: Neural Network Approaches to Multitarget Tracking. In Proceedings of the IEEE ICNN Conference, (1987)
7. Fortmann, T. E., Bar-Shalom, Y., Scheffe, M.: Sonar Tracking of Multiple Targets Using Joint Probabilistic Data Association. IEEE J. Oceanic Engineering, OE-8, Jul. (1983) 173-184
8. Zhou, B.: Multitarget Tracking in Clutter: Algorithms for Data Association and State Estimation. PhD thesis, Pennsylvania State University, Department of Electrical and Computer Engineering, University Park, PA 16802, May (1992)
9. Hopfield, J. J., Tank, D. W.: Neural Computation of Decisions in Optimization Problems. Biological Cybernetics, (1985) 141-152
10. Fortmann, T. E., Bar-Shalom, Y.: Tracking and Data Association. Academic Press, Orlando, (1988)
11. Lee, Y. W.: Adaptive Data Association for Multi-target Tracking Using Relaxation. LNCS 3644, (2005) 552-561
12. Lee, Y. W., Seo, J. H., Lee, J. G.: A Study on the TWS Tracking Filter for Multi-Target Tracking. Journal of KIEE, 41(4), (2004) 411-421
