
Sensor Fusion

Fredrik Gustafsson

Lecture 1: Course overview. Estimation theory for linear models. (Chapters 1-2)
Lecture 2: Estimation theory for nonlinear models. (Chapter 3)
Lecture 3: Sensor networks and detection theory. (Chapters 4-5)
Lecture 4: Nonlinear filter theory. The Kalman filter. Filter banks. (Chapters 6-7, 10)
Lecture 5: Kalman filter approximations for nonlinear models (EKF, UKF). (Chapter 8)
Lecture 6: The point-mass filter and the particle filter. (Chapter 9)
Lecture 7: The particle filter theory. The marginalized particle filter. (Chapter 9)
Lecture 8: Simultaneous localization and mapping (SLAM). (Chapter 11)
Lecture 9: Modeling and motion models. (Chapters 12-13)
Lecture 10: Sensors and filter validation. (Chapters 14-15)

Literature: Statistical Sensor Fusion. Studentlitteratur, 2010 or 2012 (updated version).


Lecture 1, 2012


Exercises: compendium sold by Bokakademin. Software: Signals and Systems Lab for Matlab.


Lecture 1: Estimation theory in linear models


Whiteboard:

- The weighted least squares (WLS) method.
- The maximum likelihood (ML) method.
- The Cramér-Rao lower bound (CRLB).
- Fusion algorithms.

Slides:

- Examples
- Code examples
- Algorithms


Example 1: sensor network


[Figure: map of the sensor network deployment; both axes in meters, 0-250 m]

12 sensor nodes, each one with microphone, geophone and magnetometer. One moving target. Detect, localize and track/predict the target.


Example 2: fusion of GPS and IMU

GPS gives good position. IMU gives accurate accelerations. Combine these to get even better position, velocity and acceleration.


Example 3: fusion of camera and radar images


Radar gives range and range rate with good horizontal angle resolution, but no vertical resolution. Camera gives very good angular resolution, and color, but no range. Combined, they have a great potential for situation awareness.


Chapter 2: estimation for linear models


- Least squares and likelihood methods.
- Sensor network example.
- Fusion and safe fusion in distributed algorithms.


Code for Signals and Systems Lab:

p1=[0;0]; p2=[2;0];              % sensor positions S1, S2
x=[1;1];                         % target position
X1=ndist(x,0.1*[1 -0.8;-0.8 1]); % estimate from sensor 1
X2=ndist(x,0.1*[1 0.8;0.8 1]);   % estimate from sensor 2
plot2(X1,X2)
[Figure: confidence ellipses of X1 and X2 around the sensor positions S1 and S2; axes x1 and x2]


Sensor network example, contd


X3=fusion(X1,X2);   % WLS
X4=0.5*X1+0.5*X2;   % LS
plot2(X3,X4)

[Figure: the WLS fusion X3 has a smaller confidence ellipse than the LS average X4; axes x1 and x2]
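A minimal NumPy sketch of what the two calls compute, assuming fusion(X1,X2) implements the information (WLS) fusion formula and reusing the covariances from the ndist example above (function and variable names here are illustrative, not toolbox API):

```python
import numpy as np

def fuse_wls(x1, P1, x2, P2):
    """Information (WLS) fusion of two independent unbiased estimates."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

x = np.array([1.0, 1.0])                         # common mean of X1 and X2
P1 = 0.1 * np.array([[1.0, -0.8], [-0.8, 1.0]])  # Cov(X1)
P2 = 0.1 * np.array([[1.0,  0.8], [ 0.8, 1.0]])  # Cov(X2)

x3, P3 = fuse_wls(x, P1, x, P2)   # cf. X3 = fusion(X1, X2), WLS
P4 = 0.25 * (P1 + P2)             # covariance of the LS average 0.5*X1 + 0.5*X2
```

Because the two sensors have opposite correlation structures, the WLS fusion is much sharper than the plain average: trace(P3) = 0.036 versus trace(P4) = 0.1.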


Information loops in sensor networks


- Information and sufficient statistics should be communicated in sensor networks.
- In sensor networks with untagged observations, our own observations may be included in the information we receive.
- Information loops (updating with the same sensor reading several times) give rise to optimistic covariances.
- Safe fusion algorithms (or covariance intersection techniques) give conservative covariances, using a worst-case way of reasoning.
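The information-loop effect can be illustrated with a small numeric sketch (covariances chosen for illustration only): re-fusing an estimate that is already contained in the received information shrinks the reported covariance below what the data supports.

```python
import numpy as np

def fuse_cov(Pa, Pb):
    # Covariance of the information fusion, valid only for independent estimates
    return np.linalg.inv(np.linalg.inv(Pa) + np.linalg.inv(Pb))

P1 = 0.10 * np.eye(2)    # covariance of estimate 1
P2 = 0.20 * np.eye(2)    # covariance of estimate 2

P3 = fuse_cov(P1, P2)    # correct: each estimate used once
P4 = fuse_cov(P1, P3)    # information loop: estimate 1 enters twice
```

Here P3 = I/15 ≈ 0.067·I is the correct fused covariance, while the looped P4 = 0.04·I is optimistic; safe fusion is designed to guard against exactly this.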


Safe fusion
Given two unbiased estimates x 1 , x 2 with information I1 = P1 and I2 = P2 (pseudo-inverses if singular covariances), respectively. Compute the following: 1. SVD: I1 2. SVD: D1
T = U1 D1 U1 . T U1 I2 U 1 D 1 1/2 T = U2 D2 U2 . 1/2 1/2 1 1

T T = U2 D1 U1 . 4. State transformation: x 1 and x 2 . The covariances of these are 1 = T x 2 = T x 1 C OV(x 1 ) = I and C OV(x 2 ) = D2 , respectively.

3. Transformation matrix:

5. For each component i

= 1, 2, . . . , nx , let

ii ii i x i = x 1 , D = 1 if D2 < 1, ii ii ii i if D2 > 1. x i = x 2 , D = D2

6. Inverse state transformation:

x = T 1 x , P = T 1 D 1 T T
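The six steps above can be sketched in NumPy (an illustrative implementation, not the course toolbox; the SVDs of the symmetric information matrices are computed with the eigendecomposition):

```python
import numpy as np

def safe_fusion(x1, P1, x2, P2):
    """Illustrative implementation of the six safe-fusion steps."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # 1. SVD: I1 = U1 D1 U1^T
    D1, U1 = np.linalg.eigh(I1)
    # 2. SVD: D1^(-1/2) U1^T I2 U1 D1^(-1/2) = U2 D2 U2^T
    M = np.diag(D1 ** -0.5) @ U1.T @ I2 @ U1 @ np.diag(D1 ** -0.5)
    D2, U2 = np.linalg.eigh(M)
    # 3. Transformation matrix T = U2^T D1^(1/2) U1^T
    T = U2.T @ np.diag(D1 ** 0.5) @ U1.T
    # 4. State transformation: Cov(T x1) = I, Cov(T x2) = D2^(-1)
    xb1, xb2 = T @ x1, T @ x2
    # 5. Keep, per component, the estimate with the larger information
    D = np.maximum(D2, 1.0)
    xb = np.where(D2 < 1.0, xb1, xb2)
    # 6. Inverse state transformation
    Tinv = np.linalg.inv(T)
    return Tinv @ xb, Tinv @ np.diag(1.0 / D) @ Tinv.T

x1 = x2 = np.array([1.0, 1.0])
P1 = 0.1 * np.array([[1.0, -0.8], [-0.8, 1.0]])
P2 = 0.1 * np.array([[1.0,  0.8], [ 0.8, 1.0]])
xs, Ps = safe_fusion(x1, P1, x2, P2)
```

The result is conservative relative to WLS fusion: Ps - P_WLS is positive semidefinite, since the reported information never exceeds what was actually available, while Ps still does not exceed either of P1 and P2.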

Transformation steps
[Figure: the estimates x̂1, x̂2 and their confidence ellipses at each transformation step]


Sensor network example, contd


X3=fusion(X1,X2);      % WLS
X4=fusion(X1,X3);      % X1 used twice
X5=safefusion(X1,X3);
plot2(X3,X4,X5)


Sequential WLS
The WLS estimate can be computed recursively in the space/time sequence y_k. Suppose the estimate x̂_{k-1} with covariance P_{k-1} is based on the observations y_{1:k-1}. A new observation y_k is fused using

x̂_k = x̂_{k-1} + P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^(-1) (y_k - H_k x̂_{k-1}),
P_k = P_{k-1} - P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^(-1) H_k P_{k-1}.

Note that the fusion formula can be used alternatively. In fact, the derivation is based on the information fusion formula and the matrix inversion lemma.
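A sketch of the recursion with illustrative scalar measurements y_k = H_k x + e_k (names are not toolbox API), checking that it reproduces the batch WLS solution started from the same prior (x̂_0, P_0):

```python
import numpy as np

def wls_update(x, P, Hk, Rk, yk):
    # One sequential WLS step, as in the update formulas above
    Sk = Hk @ P @ Hk.T + Rk               # innovation covariance
    Kk = P @ Hk.T @ np.linalg.inv(Sk)     # gain
    x_new = x + Kk @ (yk - Hk @ x)
    P_new = P - Kk @ Hk @ P
    return x_new, P_new

rng = np.random.default_rng(0)
x_true = np.array([1.0, 2.0])
x0, P0 = np.zeros(2), 100.0 * np.eye(2)   # vague prior
R = np.array([[0.1]])

x_hat, P = x0, P0
Hs, ys = [], []
for _ in range(8):
    Hk = rng.standard_normal((1, 2))
    yk = Hk @ x_true + np.sqrt(0.1) * rng.standard_normal(1)
    x_hat, P = wls_update(x_hat, P, Hk, R, yk)
    Hs.append(Hk)
    ys.append(yk)

# Batch WLS over the same data, including the prior, via the normal equations
H, y = np.vstack(Hs), np.concatenate(ys)
I_batch = np.linalg.inv(P0) + H.T @ H / 0.1
x_batch = np.linalg.solve(I_batch, np.linalg.inv(P0) @ x0 + H.T @ y / 0.1)
```

After the loop, x_hat and inv(P) agree with x_batch and I_batch up to rounding.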


Batch vs sequential evaluation of loss function


The minimizing loss function can be computed in two ways, using batch and sequential computations, respectively:

V_WLS(x̂_N) = sum_{k=1}^{N} (y_k - H_k x̂_N)^T R_k^(-1) (y_k - H_k x̂_N)
           = sum_{k=1}^{N} (y_k - H_k x̂_{k-1})^T (H_k P_{k-1} H_k^T + R_k)^(-1) (y_k - H_k x̂_{k-1})
             - (x̂_0 - x̂_N)^T P_0^(-1) (x̂_0 - x̂_N).

The second expression should be used in decentralized sensor network implementations and in on-line algorithms. The last correction term, which de-fuses the influence of the initial values, is needed only when such an initialization is used.
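The equality of the two evaluations, including the sign of the de-fusing correction term, can be verified numerically (illustrative data; the sequential sum is accumulated along the WLS recursion):

```python
import numpy as np

rng = np.random.default_rng(1)
x0, P0 = np.zeros(2), 10.0 * np.eye(2)   # initial values
R = 0.1                                  # scalar measurement variance
H = rng.standard_normal((6, 2))
y = H @ np.array([1.0, 2.0]) + np.sqrt(R) * rng.standard_normal(6)

# Sequential WLS, accumulating the innovation (prediction error) sum
x_hat, P, V_seq = x0.copy(), P0.copy(), 0.0
for Hk, yk in zip(H, y):
    Sk = Hk @ P @ Hk + R                 # scalar innovation variance
    ek = yk - Hk @ x_hat                 # scalar innovation
    V_seq += ek ** 2 / Sk
    Kk = P @ Hk / Sk
    x_hat = x_hat + Kk * ek
    P = P - np.outer(Kk, Hk @ P)

# Batch loss at the final estimate, and the de-fusing correction term
V_batch = np.sum((y - H @ x_hat) ** 2) / R
corr = (x0 - x_hat) @ np.linalg.inv(P0) @ (x0 - x_hat)
```

V_batch equals V_seq - corr up to rounding, confirming the sign of the correction term.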


Batch vs sequential evaluation of likelihood


The Gaussian likelihood of the data is important in model validation, change detection and diagnosis. Generally, Bayes' formula gives

p(y_{1:N}) = p(y_1) prod_{k=2}^{N} p(y_k | y_{1:k-1}).

For Gaussian noise and using the sequential algorithm, this is

p(y_{1:N}) = prod_{k=1}^{N} N(y_k; H_k x̂_{k-1}, H_k P_{k-1} H_k^T + R_k).

The off-line form is

p(y_{1:N}) = (2π)^(nx/2) sqrt(det(P_N)) N(x̂_N; x̂_0, P_0) prod_{k=1}^{N} N(y_k; H_k x̂_N, R_k).

The sequential form is preferred for decentralized computation in sensor networks and for on-line algorithms.
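Both evaluations can be checked against the direct Gaussian likelihood N(y; H x̂_0, H P_0 H^T + R) with illustrative scalar measurements; the off-line form is taken here as (2π)^(nx/2) sqrt(det P_N) times the prior density at x̂_N times the product of the measurement densities, and the normal pdf is written out explicitly:

```python
import numpy as np

def npdf(e, S):
    # Gaussian density N(e; 0, S), written out explicitly
    e, S = np.atleast_1d(e), np.atleast_2d(S)
    k = len(e)
    return np.exp(-0.5 * e @ np.linalg.solve(S, e)) / \
        np.sqrt((2 * np.pi) ** k * np.linalg.det(S))

rng = np.random.default_rng(2)
x0, P0 = np.zeros(2), 4.0 * np.eye(2)
R = 0.5                                   # scalar measurement variance
H = rng.standard_normal((4, 2))
y = H @ rng.standard_normal(2) + np.sqrt(R) * rng.standard_normal(4)

# Sequential form: product of innovation likelihoods
x_hat, P, p_seq = x0.copy(), P0.copy(), 1.0
for Hk, yk in zip(H, y):
    Sk = Hk @ P @ Hk + R
    ek = yk - Hk @ x_hat
    p_seq *= npdf(ek, Sk)
    Kk = P @ Hk / Sk
    x_hat = x_hat + Kk * ek
    P = P - np.outer(Kk, Hk @ P)

# Off-line form, with x_hat and P now equal to the final x_N and P_N
p_off = (2 * np.pi) ** (len(x_hat) / 2) * np.sqrt(np.linalg.det(P)) \
    * npdf(x_hat - x0, P0) \
    * np.prod([npdf(yk - Hk @ x_hat, R) for Hk, yk in zip(H, y)])

# Direct form: y ~ N(H x0, H P0 H^T + R I)
p_dir = npdf(y - H @ x0, H @ P0 @ H.T + R * np.eye(len(y)))
```

All three evaluations agree up to rounding.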


