
Filtering and Estimation for Mobile Robots


Nathan Michael
MEAM 620
Spring, 2008

University of Pennsylvania
Philadelphia, Pennsylvania


Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter


Mobile Robot Model

Differential Drive

[figure: differential-drive robot at times $t_0$ and $t_1$]

$\phi_l$: left wheel orientation
$\phi_r$: right wheel orientation
$(x, y)$: robot center position (pose)
$\theta$: robot orientation (pose)
$v$: linear velocity (kinematic velocity)
$\omega$: angular velocity (kinematic velocity)

Mobile Robot Model

[figure: differential-drive robot with wheel radius $r$ and wheel base $b$]

$$\frac{d\theta}{dt} = \frac{r}{b}\left(\frac{d\phi_r}{dt} - \frac{d\phi_l}{dt}\right)$$

$$\frac{dx}{dt} = \frac{r\cos\theta}{2}\left(\frac{d\phi_l}{dt} + \frac{d\phi_r}{dt}\right)$$

$$\frac{dy}{dt} = \frac{r\sin\theta}{2}\left(\frac{d\phi_l}{dt} + \frac{d\phi_r}{dt}\right)$$
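
As a quick sanity check, these kinematics translate directly into code. The following is a minimal Python sketch (the function name and NumPy usage are illustrative choices, not from the slides):

```python
import numpy as np

def diff_drive_kinematics(phi_dot_l, phi_dot_r, theta, r, b):
    """Pose rates (dx/dt, dy/dt, dtheta/dt) for a differential-drive robot.

    phi_dot_l, phi_dot_r: left/right wheel angular rates [rad/s]
    theta: heading [rad]; r: wheel radius [m]; b: wheel base [m]
    """
    dtheta = (r / b) * (phi_dot_r - phi_dot_l)               # heading rate
    dx = 0.5 * r * np.cos(theta) * (phi_dot_l + phi_dot_r)   # world-frame x rate
    dy = 0.5 * r * np.sin(theta) * (phi_dot_l + phi_dot_r)   # world-frame y rate
    return np.array([dx, dy, dtheta])
```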

Mobile Robot Model

Discrete State Updates

$$x_t = x_{t-1} + \Delta x$$
$$y_t = y_{t-1} + \Delta y$$
$$\theta_t = \theta_{t-1} + \Delta\theta$$

Treat discrete changes as motions along circles of constant radius.

Mobile Robot Model

Discrete State Updates

[figure: robot moving along a circle of radius $R$ about center $(x_c, y_c)$]

$$v = \omega R, \qquad R = \left|\frac{v}{\omega}\right|$$

$$x_c = x - \frac{v}{\omega}\sin\theta, \qquad y_c = y + \frac{v}{\omega}\cos\theta$$

$$\begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} =
\begin{bmatrix} x_{c,t-1} + \frac{v}{\omega}\sin(\theta_{t-1} + \omega\Delta t) \\
y_{c,t-1} - \frac{v}{\omega}\cos(\theta_{t-1} + \omega\Delta t) \\
\theta_{t-1} + \omega\Delta t \end{bmatrix}$$

Mobile Robot Model

Velocity Model

Perturb the commanded velocities with Gaussian noise:

$$\hat{v} = v + \varepsilon_v, \qquad \hat{\omega} = \omega + \varepsilon_\omega$$

$$x_t = x_{t-1} - \frac{\hat{v}}{\hat{\omega}}\sin\theta_{t-1} + \frac{\hat{v}}{\hat{\omega}}\sin(\theta_{t-1} + \hat{\omega}\Delta t)$$

$$y_t = y_{t-1} + \frac{\hat{v}}{\hat{\omega}}\cos\theta_{t-1} - \frac{\hat{v}}{\hat{\omega}}\cos(\theta_{t-1} + \hat{\omega}\Delta t)$$

$$\theta_t = \theta_{t-1} + \hat{\omega}\Delta t + \hat{\gamma}\Delta t$$

Why?
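
A minimal sketch of sampling from this velocity model, assuming $\hat{\omega} \neq 0$ (the straight-line case needs a separate limit) and assuming the three noise terms are zero-mean Gaussians with hypothetical standard deviations:

```python
import numpy as np

def sample_velocity_model(pose, v, omega, dt, sigmas=(0.0, 0.0, 0.0), rng=None):
    """Propagate (x, y, theta) with the noisy velocity motion model.

    sigmas = (sigma_v, sigma_omega, sigma_gamma) are assumed standard
    deviations of the Gaussian perturbations; not values from the slides.
    """
    rng = rng or np.random.default_rng()
    x, y, theta = pose
    v_hat = v + rng.normal(0.0, sigmas[0])       # v^ = v + eps_v
    w_hat = omega + rng.normal(0.0, sigmas[1])   # w^ = w + eps_w
    g_hat = rng.normal(0.0, sigmas[2])           # extra heading perturbation
    # Motion along a circular arc of radius v^/w^ (w_hat != 0 assumed).
    x_new = x - (v_hat / w_hat) * np.sin(theta) \
              + (v_hat / w_hat) * np.sin(theta + w_hat * dt)
    y_new = y + (v_hat / w_hat) * np.cos(theta) \
              - (v_hat / w_hat) * np.cos(theta + w_hat * dt)
    theta_new = theta + w_hat * dt + g_hat * dt
    return np.array([x_new, y_new, theta_new])
```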

Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter

Kalman Filter

Linear system model with Gaussian noise

Moments parameterization:
  First moment: mean
  Second moment: covariance

What if the system is nonlinear?

Is this a reasonable characterization of the system?

Kalman Filter
(algorithm)

Prediction (where should the state be at this point?):

$$\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$$
$$\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$$

Measurement update:

$$K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$
$$\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$$
$$\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$$

That's it, only five equations.

In the prediction step, $A_t$ is the linear system model, $B_t u_t$ applies the system input, and $R_t$ is the system (process) covariance.
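
The five equations map almost line-for-line onto code. A minimal Python sketch, assuming NumPy arrays of compatible dimensions:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter iteration: the five equations on the slide."""
    # Prediction: where should the state be at this point?
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Kalman gain: weight of the prediction versus the measurement.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    # Correction using the innovation z - C mu_bar.
    mu = mu_bar + K @ (z - C @ mu_bar)
    Sigma = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu, Sigma
```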

Kalman Filter
(algorithm)

Process model: we are stating that the system process behaves based on the following assumption:

$$p(x_t \mid u_t, x_{t-1}) = \det(2\pi R_t)^{-\frac{1}{2}} \exp\left(-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right)$$

Kalman Filter
(algorithm)

At this point, the filter parameters are:

  First and second moments
  System process model
  State update
  Input update
  Process noise (not always constant; consider adaptive filters for unknown process noise)

Kalman Filter
(algorithm)

Example:

$$u = [\dot{x}, \dot{y}]^T, \qquad A = I_2, \qquad B = \Delta t\, I_2, \qquad R_t = \mathrm{diag}(\sigma_1^2, \sigma_2^2)$$

$$\bar{\mu}_t = \mu_{t-1} + [\dot{x}\Delta t, \dot{y}\Delta t]^T$$
$$\bar{\Sigma}_t = \Sigma_{t-1} + \mathrm{diag}(\sigma_1^2, \sigma_2^2)$$

[figure: equipotential ellipse of the covariance in the $(x, y)$ plane with semi-axes $\sigma_1$, $\sigma_2$, growing from $t = 0$ ($\mu_0 = [0, 0]^T$, $\Sigma_0 = I_2$) to $t = 1$]

Kalman Filter
(algorithm)

Measurement update, starting with the Kalman gain:

$$K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$

How much weight should be given to the prediction versus the measurement?

More parameters to consider:
  Linear measurement model
  Measurement noise

Kalman Filter
(algorithm)

The measurement of the state behaves based on the following assumption:

$$p(z_t \mid x_t) = \det(2\pi Q_t)^{-\frac{1}{2}} \exp\left(-\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t)\right)$$

Kalman Filter
(algorithm)

$$\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$$

The term $z_t - C_t \bar{\mu}_t$ is the innovation: what is the error between the actual and predicted measurements?

As a thought exercise, what if we have a perfect measurement model and state feedback?

$$C_t = I_N, \quad Q_t = 0_N \implies K_t = I_N, \quad \mu_t = z_t$$

Kalman Filter
(algorithm)

$$\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$$

How does the error of the measurement model differ from the predicted error?

Consider again if we have perfect measurements:

$$C_t = I_N, \quad K_t = I_N \implies \Sigma_t = 0_N$$

Clearly perfect information means there is no error, so the error is degenerate (a point). This is just a thought exercise!

Kalman Filter
(algorithm)

Example (cont.):

$$C_t = I_2, \qquad Q_t = \mathrm{diag}(\delta_1^2, \delta_2^2)$$

$$K_t = \bar{\Sigma}_t (\bar{\Sigma}_t + Q_t)^{-1} = \left(\Sigma_{t-1} + \mathrm{diag}(\sigma_1^2, \sigma_2^2)\right)\left(\Sigma_{t-1} + \mathrm{diag}(\sigma_1^2 + \delta_1^2, \sigma_2^2 + \delta_2^2)\right)^{-1}$$

$$\mu_t = (I_2 - K_t)\bar{\mu}_t + K_t z_t$$
$$\Sigma_t = (I_2 - K_t)\bar{\Sigma}_t$$

[figure: starting from $\mu_0 = [0, 0]^T$, $\Sigma_0 = I_2$ at $t = 0$, the prediction ellipse at $t = 1$ is shrunk by the measurement $z_t$ with covariance $Q_t$]
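
To make the example concrete, here is the same filter run for one step; the noise values and the measurement $z$ below are assumed for illustration, not from the slides:

```python
import numpy as np

# Slide example: A = I2, B = dt*I2, C = I2, diagonal noise.
dt = 1.0
R = np.diag([0.2**2, 0.2**2])   # process noise (assumed sigmas)
Q = np.diag([0.5**2, 0.5**2])   # measurement noise (assumed deltas)

mu, Sigma = np.zeros(2), np.eye(2)   # t = 0: mu_0 = [0, 0]^T, Sigma_0 = I2
u = np.array([1.0, 0.5])             # commanded [xdot, ydot] (assumed)
z = np.array([1.1, 0.4])             # position measurement (assumed)

mu_bar = mu + dt * u                 # prediction (A = I2, B = dt*I2)
Sigma_bar = Sigma + R
K = Sigma_bar @ np.linalg.inv(Sigma_bar + Q)   # gain, using C = I2
mu = mu_bar + K @ (z - mu_bar)       # measurement update
Sigma = (np.eye(2) - K) @ Sigma_bar
print(mu, Sigma)
```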

Kalman Filter
(algorithm)

What about computational complexity? $N = 2$? $N = 2^{10}$?

What about global uncertainty?

What if we redefine the parameterization?

Re-parameterization

$$\Omega = \Sigma^{-1} \quad \text{(information matrix)}$$
$$\xi = \Sigma^{-1}\mu \quad \text{(information vector)}$$

Information Filter
(algorithm)

Prediction:

$$\bar{\Omega}_t = (A_t \Omega_{t-1}^{-1} A_t^T + R_t)^{-1}$$
$$\bar{\xi}_t = \bar{\Omega}_t (A_t \Omega_{t-1}^{-1} \xi_{t-1} + B_t u_t)$$

Measurement:

$$\Omega_t = C_t^T Q_t^{-1} C_t + \bar{\Omega}_t$$
$$\xi_t = C_t^T Q_t^{-1} z_t + \bar{\xi}_t$$
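
A direct transcription into Python; the explicit inversions here foreshadow the complexity comparison below:

```python
import numpy as np

def information_filter_step(xi, Omega, u, z, A, B, C, R, Q):
    """One information filter iteration in (xi, Omega) coordinates."""
    # Prediction: note the state-space inversions.
    Sigma_prev = np.linalg.inv(Omega)
    Omega_bar = np.linalg.inv(A @ Sigma_prev @ A.T + R)
    xi_bar = Omega_bar @ (A @ Sigma_prev @ xi + B @ u)
    # Measurement update is purely additive: cheap in this form.
    Q_inv = np.linalg.inv(Q)
    Omega = C.T @ Q_inv @ C + Omega_bar
    xi = C.T @ Q_inv @ z + xi_bar
    return xi, Omega
```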

KF/IF Comparison

Kalman Filter:

$$\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$$
$$\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$$
$$K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$
$$\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$$
$$\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$$

Information Filter:

$$\bar{\Omega}_t = (A_t \Omega_{t-1}^{-1} A_t^T + R_t)^{-1}$$
$$\bar{\xi}_t = \bar{\Omega}_t (A_t \Omega_{t-1}^{-1} \xi_{t-1} + B_t u_t)$$
$$\Omega_t = C_t^T Q_t^{-1} C_t + \bar{\Omega}_t$$
$$\xi_t = C_t^T Q_t^{-1} z_t + \bar{\xi}_t$$

Reminder: $\bar{\mu}_t = \bar{\Omega}_t^{-1} \bar{\xi}_t$.

Computational complexity? Note where the matrix inversions occur in each filter. We need the information matrix inverse to compute the mean. The choice of IF or KF is application dependent.

What if the process or measurement model is non-linear (or both are non-linear)?

We could linearize the system model about the mean ...

Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter

Extending the Kalman Filter
(to non-linear system models)

Start from the KF algorithm and swap in the non-linear models:

$$\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \;\longrightarrow\; \bar{\mu}_t = g(u_t, \mu_{t-1}) \quad \text{(non-linear process model)}$$

$$\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \quad \text{(still a linear process model)}$$

$$K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1} \quad \text{(still a linear measurement model)}$$

$$\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t) \;\longrightarrow\; \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)) \quad \text{(non-linear measurement model)}$$

$$\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$$

Extended Kalman Filter
(algorithm)

$$\bar{\mu}_t = g(u_t, \mu_{t-1})$$
$$\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t \quad \text{(linearized process model)}$$

$$K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}$$
$$\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)) \quad \text{(linearized measurement model)}$$
$$\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$$

Linearization is generally a first order Taylor expansion.
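
A generic EKF step as a sketch; `g`, `h`, and the Jacobian callbacks `G_fn`, `H_fn` are hypothetical application-supplied functions:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, h, G_fn, H_fn, R, Q):
    """One EKF iteration; g/h are the nonlinear models, G_fn/H_fn their Jacobians."""
    mu_bar = g(u, mu)                       # nonlinear process model
    G = G_fn(u, mu)                         # Jacobian of g at the previous mean
    Sigma_bar = G @ Sigma @ G.T + R
    H = H_fn(mu_bar)                        # Jacobian of h at the predicted mean
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu = mu_bar + K @ (z - h(mu_bar))       # innovation uses h, not H
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu, Sigma
```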

Extended Kalman Filter
(algorithm)

Mobile robot example:

[figure: differential-drive robot with a laser scanner, driven by commands $(v, \omega)$ from $t_0$ to $t_1$]

Robot has a map of the environment.

We'll consider the problem of localization via velocity information and feature detection with unknown correspondence.

Extended Kalman Filter
(algorithm)

Mobile robot example, using the velocity model with Gaussian noise:

$$\hat{v} = v + \varepsilon_v, \qquad \hat{\omega} = \omega + \varepsilon_\omega$$

$$x_t = x_{t-1} - \frac{\hat{v}}{\hat{\omega}}\sin\theta_{t-1} + \frac{\hat{v}}{\hat{\omega}}\sin(\theta_{t-1} + \hat{\omega}\Delta t)$$
$$y_t = y_{t-1} + \frac{\hat{v}}{\hat{\omega}}\cos\theta_{t-1} - \frac{\hat{v}}{\hat{\omega}}\cos(\theta_{t-1} + \hat{\omega}\Delta t)$$
$$\theta_t = \theta_{t-1} + \hat{\omega}\Delta t + \hat{\gamma}\Delta t$$

... but the EKF already accounts for noise.

Extended Kalman Filter
(algorithm)

Mobile robot example. Computing G, from the Taylor expansion:

$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1})$$

$$G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$$

Partial derivatives:

$$\frac{\partial x_t}{\partial x_{t-1}} = 1, \quad \frac{\partial x_t}{\partial y_{t-1}} = 0, \quad \frac{\partial x_t}{\partial \theta_{t-1}} = -\frac{v}{\omega}\cos\theta_{t-1} + \frac{v}{\omega}\cos(\theta_{t-1} + \omega\Delta t)$$

$$\frac{\partial y_t}{\partial x_{t-1}} = 0, \quad \frac{\partial y_t}{\partial y_{t-1}} = 1, \quad \frac{\partial y_t}{\partial \theta_{t-1}} = -\frac{v}{\omega}\sin\theta_{t-1} + \frac{v}{\omega}\sin(\theta_{t-1} + \omega\Delta t)$$

$$\frac{\partial \theta_t}{\partial x_{t-1}} = 0, \quad \frac{\partial \theta_t}{\partial y_{t-1}} = 0, \quad \frac{\partial \theta_t}{\partial \theta_{t-1}} = 1$$

Extended Kalman Filter
(algorithm)

Mobile robot example:

$$G_t = \begin{bmatrix}
1 & 0 & -\frac{v}{\omega}\cos\theta_{t-1} + \frac{v}{\omega}\cos(\theta_{t-1} + \omega\Delta t) \\
0 & 1 & -\frac{v}{\omega}\sin\theta_{t-1} + \frac{v}{\omega}\sin(\theta_{t-1} + \omega\Delta t) \\
0 & 0 & 1
\end{bmatrix}$$

What about the process noise?

$$R_t = V_t M_t V_t^T$$

$$M_t = \begin{bmatrix} \varepsilon_v & 0 \\ 0 & \varepsilon_\omega \end{bmatrix} \quad \text{(noise in control space)}, \qquad V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t}$$
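
These Jacobians in code, valid only for $\omega \neq 0$ (the straight-line limit $\omega \to 0$ must be handled separately in practice):

```python
import numpy as np

def velocity_model_jacobians(theta, v, omega, dt):
    """G = dg/dstate and V = dg/du for the velocity motion model (omega != 0)."""
    s, c = np.sin(theta), np.cos(theta)
    s1, c1 = np.sin(theta + omega * dt), np.cos(theta + omega * dt)
    G = np.array([[1.0, 0.0, -(v / omega) * c + (v / omega) * c1],
                  [0.0, 1.0, -(v / omega) * s + (v / omega) * s1],
                  [0.0, 0.0, 1.0]])
    V = np.array([[(-s + s1) / omega,
                   v * (s - s1) / omega**2 + v * c1 * dt / omega],
                  [(c - c1) / omega,
                   -v * (c - c1) / omega**2 + v * s1 * dt / omega],
                  [0.0, dt]])
    return G, V

def process_noise(theta, v, omega, dt, M):
    """Map control-space noise M into state space: R_t = V M V^T."""
    _, V = velocity_model_jacobians(theta, v, omega, dt)
    return V @ M @ V.T
```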

Extended Kalman Filter
(algorithm)

Mobile robot example, measurement model:

[figure: obstacles detected by the laser, labeled $i = 1$, $i = 2$]

For each obstacle $i$, the measurement is a range and bearing: $z_t^i = (r_t^i, \phi_t^i)$.

We have a map, let's use it!

  We compute the predicted measurement according to the map
  Correspond observed and predicted obstacles
  Update the estimate parameters

What if we wanted to incorporate more information into the model, such as the shape, reflectivity, texture, etc.?

Mobile robot example (Measurement model):

For each observed obstacle $i$:
  For each predicted obstacle (from the map) $m_k$:

What is the predicted range and bearing of each obstacle according to the map?

$$\hat{z}_t^k = \begin{bmatrix} \sqrt{(m_{k,x} - x)^2 + (m_{k,y} - y)^2} \\ \mathrm{atan2}(m_{k,y} - y,\; m_{k,x} - x) - \theta \end{bmatrix}$$

Check for agreement!

Linearization of the measurement model:

$$h(x_t, k, m) \approx h(\bar{\mu}_t, k, m) + H_t^k (x_t - \bar{\mu}_t)$$

$$H_t^k = \frac{\partial h(\bar{\mu}_t, k, m)}{\partial x_t}$$

What is $h(\bar{\mu}_t, k, m)$?

Mobile robot example (Measurement model):

For each observed obstacle:
  For each predicted obstacle (from the map):

$$\hat{z}_t^k = \begin{bmatrix} \sqrt{(m_{k,x} - x)^2 + (m_{k,y} - y)^2} \\ \mathrm{atan2}(m_{k,y} - y,\; m_{k,x} - x) - \theta \end{bmatrix}$$

$$H_t^k = \begin{bmatrix}
-\dfrac{m_{k,x} - x}{\hat{z}_{t,r}^k} & -\dfrac{m_{k,y} - y}{\hat{z}_{t,r}^k} & 0 \\
\dfrac{m_{k,y} - y}{(\hat{z}_{t,r}^k)^2} & -\dfrac{m_{k,x} - x}{(\hat{z}_{t,r}^k)^2} & -1
\end{bmatrix}$$

$$S_t^k = H_t^k \bar{\Sigma}_t (H_t^k)^T + Q_t \quad \text{(error covariance of this predicted measurement)}$$

Now that we've gone through all of the possible obstacles, we need to figure out a correspondence.
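
A sketch of the inner-loop computation of $\hat{z}_t^k$, $H_t^k$, and $S_t^k$ for one map obstacle:

```python
import numpy as np

def predict_obstacle_measurement(mu_bar, Sigma_bar, m_k, Q):
    """Predicted range/bearing z_hat, Jacobian H, and covariance S for obstacle m_k."""
    x, y, theta = mu_bar
    dx, dy = m_k[0] - x, m_k[1] - y
    r = np.hypot(dx, dy)                                # predicted range
    z_hat = np.array([r, np.arctan2(dy, dx) - theta])   # predicted measurement
    H = np.array([[-dx / r, -dy / r, 0.0],              # d(range)/d(x, y, theta)
                  [dy / r**2, -dx / r**2, -1.0]])       # d(bearing)/d(x, y, theta)
    S = H @ Sigma_bar @ H.T + Q                         # innovation covariance
    return z_hat, H, S
```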

Mobile robot example (Measurement model):

For each observed obstacle:
  For each predicted obstacle (from the map):
    Compute $\hat{z}_t^k$, $H_t^k$, $S_t^k$

Compute the most likely obstacle that matches the observed obstacle using Maximum Likelihood:

$$j = \arg\max_k \; \det(2\pi S_t^k)^{-\frac{1}{2}} \exp\left(-\tfrac{1}{2}(z_t^i - \hat{z}_t^k)^T (S_t^k)^{-1} (z_t^i - \hat{z}_t^k)\right)$$

ML gives us the predicted obstacle that corresponds with the observed obstacle with highest probability.
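
The argmax is typically evaluated in log space for numerical stability; the bearing wrap below is an implementation detail assumed here, not spelled out on the slides:

```python
import numpy as np

def most_likely_obstacle(z_i, predictions):
    """Pick map obstacle j maximizing the Gaussian likelihood of observation z_i.

    predictions: list of (z_hat_k, H_k, S_k) tuples from the map.
    """
    def log_likelihood(z_hat, S):
        innov = z_i - z_hat
        innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing
        return -0.5 * (innov @ np.linalg.solve(S, innov)
                       + np.log(np.linalg.det(2.0 * np.pi * S)))
    return max(range(len(predictions)),
               key=lambda k: log_likelihood(predictions[k][0], predictions[k][2]))
```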

Mobile robot example (Measurement model):

For each observed obstacle:
  For each predicted obstacle (from the map):
    Compute $\hat{z}_t^k$, $H_t^k$, $S_t^k$
  Compute $j$

At this point, we only need to update the estimate to include the information from this measurement. As the EKF is recursive, we can incrementally introduce each observed obstacle:

$$K_t^i = \bar{\Sigma}_t (H_t^j)^T (S_t^j)^{-1}$$
$$\bar{\mu}_t = \bar{\mu}_t + K_t^i (z_t^i - \hat{z}_t^j)$$
$$\bar{\Sigma}_t = (I - K_t^i H_t^j) \bar{\Sigma}_t$$

Done: $\mu_t = \bar{\mu}_t$, $\Sigma_t = \bar{\Sigma}_t$.

Extended Kalman Filter

Handles non-linear systems via linearization

Assumes a unimodal solution; multi-hypothesis filters address this issue.

Example (multi-robot range-only sensing):

[figure: the robot is here, but the range measurements indicate it could be at either of two locations]

Consider using multiple hypotheses and monitor each estimate covariance.

What about the EKF in the information form?

Extended Information Filter
(algorithm)

$$\mu_{t-1} = \Omega_{t-1}^{-1} \xi_{t-1}$$
$$\bar{\mu}_t = g(u_t, \mu_{t-1})$$
$$\bar{\Omega}_t = (G_t \Omega_{t-1}^{-1} G_t^T + R_t)^{-1}$$
$$\bar{\xi}_t = \bar{\Omega}_t \bar{\mu}_t$$
$$\Omega_t = \bar{\Omega}_t + H_t^T Q_t^{-1} H_t$$
$$\xi_t = \bar{\xi}_t + H_t^T Q_t^{-1} (z_t - h(\bar{\mu}_t) + H_t \bar{\mu}_t)$$

Gazebo Simulation Example

Robot follows a simple trajectory:

$$v = 0.1 \text{ m/s}, \qquad \omega = 0.1 \text{ rad/s}$$

Robot is equipped with a GPS sensor.

[figure: GPS readings, EKF estimate, and true position along the trajectory]

[figure: position error and angular error of the GPS and the estimate over time]

Are the results reasonable given the system model and sensors? To answer this question we need to examine the system characterization:

How was the filter initialized?

$$\mu = [0, 0, 0]^T, \qquad \Sigma = 100\, I_3$$

How were the process and measurement noise models defined?

$$R = 0.1\, I_3, \qquad Q = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$

Why? Filters require tuning; guessing is not a good idea!

Gazebo Simulation Example

In simulation we have access to the error models. From the world file:

$$M_t = \begin{bmatrix} 0.5 & 0.1 \\ 0.1 & 0.5 \end{bmatrix} \quad \text{(noise in control space)}$$

Recall: $R_t = V_t M_t V_t^T$ (non-constant noise), with

$$V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{bmatrix}
\dfrac{-\sin\theta + \sin(\theta + \omega_t\Delta t)}{\omega_t} &
\dfrac{v_t(\sin\theta - \sin(\theta + \omega_t\Delta t))}{\omega_t^2} + \dfrac{v_t\cos(\theta + \omega_t\Delta t)\Delta t}{\omega_t} \\
\dfrac{\cos\theta - \cos(\theta + \omega_t\Delta t)}{\omega_t} &
\dfrac{-v_t(\cos\theta - \cos(\theta + \omega_t\Delta t))}{\omega_t^2} + \dfrac{v_t\sin(\theta + \omega_t\Delta t)\Delta t}{\omega_t} \\
0 & \Delta t
\end{bmatrix}$$

$$Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0.4 \end{bmatrix}$$

GPS provides state information directly.

Gazebo Simulation Example

[figure: old vs. new position error and angular error; the corrected noise models reduce both]

Can we improve these results? How is it possible that these results are better? Is there a better way to compute the noise?

Adaptive Filtering Techniques

Generally applies to fitting or learning system models.

Updates the process (measurement) model based on the disparity between the predicted and sensed state (measurements).
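
As one illustrative example of this idea, the covariance-matching sketch below blends the measurement noise toward the empirical innovation covariance. This is a generic technique from the adaptive-filtering literature, not necessarily the method behind the results that follow, and `alpha` is an assumed forgetting factor:

```python
import numpy as np

def adapt_measurement_noise(Q, innovation, H, Sigma_bar, alpha=0.05):
    """Innovation-based covariance matching (one common adaptive scheme).

    Blends the current Q toward the empirical estimate
    nu nu^T - H Sigma_bar H^T; the result may need projection back to
    positive semi-definite matrices in practice.
    """
    nu = np.atleast_1d(innovation)
    Q_emp = np.outer(nu, nu) - H @ Sigma_bar @ H.T
    return (1.0 - alpha) * Q + alpha * Q_emp
```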

Gazebo Simulation using an Adaptive EKF

[figure: position error and angular error with the adaptive EKF]

Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter

Unscented Kalman Filter

Derivative-free alternative to the EKF

Higher-order approximation accuracy than the EKF (2nd vs. 1st)

Represents varying distributions via discrete points (sigma points)

Unscented Kalman Filter

[figure: equipotential ellipse in $\mathbb{R}^2$ ($N = 2$) with sigma points, transformed through $g(\mu_{t-1}, u_t)$]

Sigma points are typically (but not necessarily) distributed in this manner. The following derivation assumes this distribution in $N$ dimensions.

Unscented Kalman Filter

"... it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation."

Unscented Kalman Filter
(algorithm)

At a high level:
1. Initialize sigma points based on the moments parameterization.
2. Transform the sigma points according to the nonlinear process model.
3. Initialize predicted sigma points based on the transformed sigma points.
4. Transform the predicted sigma points according to the nonlinear measurement model.
5. Update the moments parameterization based on the measurement correction.

Initialization of Sigma Points

Each sigma point is defined by a state and weight:

$$s_i = \{x_i, W_i\}, \qquad x_i \in \mathbb{R}^N, \qquad W_i \in \mathbb{R}$$

Sigma points must have the following properties:

$$\sum_{i=0}^{2N} W_i = 1$$

$$\mu = \sum_{i=0}^{2N} W_i x_i \quad \text{(consistent with first moment)}$$

$$\Sigma = \sum_{i=0}^{2N} W_i (x_i - \mu)(x_i - \mu)^T \quad \text{(consistent with second moment)}$$

Initialization of Sigma Points

$$x_0 = \mu$$

$$x_i = \mu + \left(\sqrt{\frac{N\Sigma}{1 - W_0}}\right)_i, \qquad i = \{1, \ldots, N\}$$

$$x_{i+N} = \mu - \left(\sqrt{\frac{N\Sigma}{1 - W_0}}\right)_i$$

Matrix square-root: take $\mathrm{row}_i$ if $B = A^T A$, $\mathrm{column}_i$ if $B = A A^T$.

[figure: sigma points $x_0, \ldots, x_4$ on the covariance ellipse in $\mathbb{R}^2$]

Initialization of Sigma Points

$$W_i = \frac{1 - W_0}{2N}, \qquad i = \{1, \ldots, 2N\}$$

$$W_0 = \frac{\lambda}{N + \lambda}, \qquad \lambda = \alpha^2 (N + \kappa) - N$$

$\alpha$ and $\kappa$ are scaling parameters.

Initialization of Sigma Points

[figure: sigma points for $\mu = [0, 0]^T$, $\Sigma = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$, $\kappa = 1$, sweeping $\alpha = 0.1, 0.2, \ldots, 1$]

Initialization of Sigma Points

Initialize($\mu$, $\Sigma$, $\alpha$, $\kappa$):

$$W_0 = \frac{\lambda}{N + \lambda}, \qquad \lambda = \alpha^2 (N + \kappa) - N$$

$$x_0 = \mu$$

$$x_i = \mu + \left(\sqrt{\frac{N\Sigma}{1 - W_0}}\right)_i$$

$$x_{i+N} = \mu - \left(\sqrt{\frac{N\Sigma}{1 - W_0}}\right)_i$$
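
A NumPy sketch of Initialize. The Cholesky factor satisfies $B = AA^T$, so its columns are used as offsets, per the matrix square-root note above:

```python
import numpy as np

def initialize_sigma_points(mu, Sigma, alpha, kappa):
    """Return 2N+1 sigma points (rows of X) and their weights W."""
    mu = np.asarray(mu, dtype=float)
    N = len(mu)
    lam = alpha**2 * (N + kappa) - N
    W0 = lam / (N + lam)
    W = np.full(2 * N + 1, (1.0 - W0) / (2.0 * N))
    W[0] = W0
    # Spread term: sqrt(N * Sigma / (1 - W0)). Cholesky gives B = A A^T,
    # so the i-th column is the i-th offset.
    L = np.linalg.cholesky(N * Sigma / (1.0 - W0))
    X = np.tile(mu, (2 * N + 1, 1))
    for i in range(N):
        X[1 + i] += L[:, i]
        X[1 + N + i] -= L[:, i]
    return X, W
```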

Unscented Kalman Filter
(algorithm)

Prediction:

$$\{x_0, x_i\} = \mathrm{Initialize}(\mu, \Sigma, \alpha, \kappa)$$
$$\{\bar{x}_0, \bar{x}_i\} = g(x_0, x_i, u)$$

$$\bar{\mu} = \sum_{i=0}^{2N} W_i \bar{x}_i$$

$$\bar{\Sigma} = \sum_{i=0}^{2N} W_i (\bar{x}_i - \bar{\mu})(\bar{x}_i - \bar{\mu})^T + R$$

Correction of Sigma Points

$$\{\bar{x}_0, \bar{x}_i\} = \mathrm{Initialize}(\bar{\mu}, \bar{\Sigma}, \alpha, \kappa)$$
$$\{\bar{z}_0, \bar{z}_i\} = h(\bar{x}_0, \bar{x}_i, u)$$

$$\bar{z} = \sum_{i=0}^{2N} W_i \bar{z}_i$$

$$S = \sum_{i=0}^{2N} W_i (\bar{z}_i - \bar{z})(\bar{z}_i - \bar{z})^T + Q$$

$$\bar{\Sigma}^{x,z} = \sum_{i=0}^{2N} W_i (\bar{x}_i - \bar{\mu})(\bar{z}_i - \bar{z})^T$$

$$K = \bar{\Sigma}^{x,z} S^{-1}$$

Unscented Kalman Filter
(algorithm)

$$\{\bar{x}_i, \bar{\mu}, \bar{\Sigma}\} = \mathrm{Prediction}(\mu, \Sigma, u, \alpha, \kappa)$$
$$\{\bar{z}, \bar{\Sigma}^{x,z}, K, S\} = \mathrm{Sigma\ Correction}(\bar{x}_i, \bar{\mu}, \bar{\Sigma}, \alpha, \kappa)$$

$$\mu = \bar{\mu} + K(z - \bar{z})$$
$$\Sigma = \bar{\Sigma} - K S K^T$$
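
Putting the pieces together as a sketch that reuses `initialize_sigma_points` from above; `g` and `h` are hypothetical application-supplied models:

```python
import numpy as np

def ukf_step(mu, Sigma, u, z, g, h, R, Q, alpha=0.5, kappa=1.0):
    """One UKF iteration using the sigma-point routine sketched earlier."""
    # Prediction: push sigma points through the process model.
    X, W = initialize_sigma_points(mu, Sigma, alpha, kappa)
    Xg = np.array([g(x, u) for x in X])
    mu_bar = W @ Xg
    Sigma_bar = (W[:, None] * (Xg - mu_bar)).T @ (Xg - mu_bar) + R
    # Correction: redraw sigma points, push them through h.
    Xb, W = initialize_sigma_points(mu_bar, Sigma_bar, alpha, kappa)
    Zb = np.array([h(x) for x in Xb])
    z_bar = W @ Zb
    S = (W[:, None] * (Zb - z_bar)).T @ (Zb - z_bar) + Q
    Sigma_xz = (W[:, None] * (Xb - mu_bar)).T @ (Zb - z_bar)
    K = Sigma_xz @ np.linalg.inv(S)
    return mu_bar + K @ (z - z_bar), Sigma_bar - K @ S @ K.T
```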

Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter

Particle Filters

Nonparametric representation of estimation

Permits a broader space of distributions (including multimodal distributions)

Represents estimates as discrete hypotheses (particles)

Samples (and resamples) based on the probability distributions of the process and measurement models

Basic Particle Filter

There are many variations!

$$X_t = \{\langle x_t^1, w_t^1 \rangle, \ldots, \langle x_t^M, w_t^M \rangle\}$$

Basic idea:

Create a set of weighted hypotheses (particles) based on the previous set of hypotheses, system inputs, and measurements (a Monte Carlo method).

Create a set of hypotheses sampled from the weighted hypotheses based on the distribution of weights: Sample Importance Resampling.

Basic Particle Filter
(algorithm)

For M particles:
  sample $x_t^m \sim p(x_t \mid u_t, x_{t-1}^m)$   (sampling introduces noise to every particle)
  $w_t^m = p(z_t \mid x_t^m)$   (importance factor)
  $\bar{X}_t = \bar{X}_t + \langle x_t^m, w_t^m \rangle$
End

$$X_t = \mathrm{Importance\ Resampling}(\bar{X}_t)$$
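
A minimal sketch of one iteration; `sample_motion` and `likelihood` are hypothetical callbacks for $p(x_t \mid u_t, x_{t-1})$ and $p(z_t \mid x_t)$, and the resampling routine is sketched after the next slide:

```python
import numpy as np

def particle_filter_step(particles, u, z, sample_motion, likelihood):
    """Propagate and weight M particles (resampling is shown below).

    sample_motion(x, u): draw from p(x_t | u_t, x_{t-1});
    likelihood(z, x): evaluate p(z_t | x_t).
    """
    # Sampling the motion model injects noise into every particle.
    new_particles = np.array([sample_motion(x, u) for x in particles])
    # Importance factor: how well does each particle explain z_t?
    weights = np.array([likelihood(z, x) for x in new_particles])
    return new_particles, weights / weights.sum()
```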

Importance Resampling

Take the particles and lay them side-by-side according to each particle's weight.

[figure: particles $\bar{x}_t^1, \ldots, \bar{x}_t^m$ laid out as segments of width $w_t^1, \ldots, w_t^m$]

Normalize the weights and sample based on this distribution. Particles with higher weights will be selected with higher probability.
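
The side-by-side picture corresponds to a cumulative sum in code. The low-variance (systematic) resampler below is one common implementation choice, not the only one:

```python
import numpy as np

def importance_resampling(particles, weights, rng=None):
    """Low-variance (systematic) resampling over normalized weights."""
    rng = rng or np.random.default_rng()
    M = len(particles)
    # One random offset, then M equally spaced pointers along the weight bar.
    positions = (rng.random() + np.arange(M)) / M
    cumulative = np.cumsum(weights)
    indices = np.minimum(np.searchsorted(cumulative, positions), M - 1)
    return particles[indices]
```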

Common Issues

Particle deprivation
Random number generation
Resampling frequency

$$w_t^m \propto \begin{cases} p(z_t \mid x_t^m) & \text{if resampling took place} \\ p(z_t \mid x_t^m)\, w_{t-1}^m & \text{otherwise} \end{cases}$$
