University of Pennsylvania
Philadelphia, Pennsylvania
Outline
Kinematic Model of a Mobile Robot
Kalman Filter
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter
Kinematic Model of a Mobile Robot

Consider a differential-drive robot moving from time t0 to t1, described by:

  (x, y)     Robot center position
  θ          Robot orientation
  φ_l, φ_r   Left and right wheel orientations
  v          Linear velocity
  ω          Angular velocity

Together (x, y, θ) form the robot's pose; v and ω are its kinematic velocities.

With wheel radius r and wheel base b, the pose rates follow from the wheel rates:

  dθ/dt = (r/b) (dφ_r/dt - dφ_l/dt)
  dx/dt = (r/2) cos θ (dφ_l/dt + dφ_r/dt)
  dy/dt = (r/2) sin θ (dφ_l/dt + dφ_r/dt)

Integrating over one time step gives the discrete update:

  x_t = x_{t-1} + Δx,   y_t = y_{t-1} + Δy,   θ_t = θ_{t-1} + Δθ

Alternatively, under constant (v, ω) the robot travels on a circle of radius R = v/ω about the instantaneous center of curvature (x_c, y_c):

  x_c = x - (v/ω) sin θ
  y_c = y + (v/ω) cos θ

  x_t = x_{c,t-1} + (v/ω) sin(θ_{t-1} + ωΔt)
  y_t = y_{c,t-1} - (v/ω) cos(θ_{t-1} + ωΔt)
  θ_t = θ_{t-1} + ωΔt

Adding Gaussian noise to the commanded velocities, v̂ = v + ε_v and ω̂ = ω + ε_ω, yields the velocity motion model:

  x_t = x_{t-1} - (v̂/ω̂) sin θ_{t-1} + (v̂/ω̂) sin(θ_{t-1} + ω̂Δt)
  y_t = y_{t-1} + (v̂/ω̂) cos θ_{t-1} - (v̂/ω̂) cos(θ_{t-1} + ω̂Δt)
  θ_t = θ_{t-1} + ω̂Δt + γ̂Δt

Why the extra term γ̂Δt? Without it, every sampled pose would lie on a circular arc determined by (v̂, ω̂); the final-orientation noise γ̂ gives the heading its own degree of freedom.
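The velocity motion model above can be sketched in Python. The noise standard deviations below are hypothetical tuning values, and the closed-form update assumes ω is nonzero (a real implementation would handle the straight-line limit separately).

```python
import math
import random

def velocity_motion(x, y, theta, v, w, dt):
    # Closed-form pose update for constant (v, w); assumes w != 0.
    x_new = x - (v / w) * math.sin(theta) + (v / w) * math.sin(theta + w * dt)
    y_new = y + (v / w) * math.cos(theta) - (v / w) * math.cos(theta + w * dt)
    return x_new, y_new, theta + w * dt

def sample_motion(pose, v, w, dt, sig_v=0.05, sig_w=0.02, sig_g=0.01):
    # Draw noisy controls v^ = v + eps_v, w^ = w + eps_w, plus the extra
    # heading noise gamma^.  The sigmas are hypothetical tuning values.
    v_hat = v + random.gauss(0.0, sig_v)
    w_hat = w + random.gauss(0.0, sig_w)
    g_hat = random.gauss(0.0, sig_g)
    x, y, theta = velocity_motion(pose[0], pose[1], pose[2], v_hat, w_hat, dt)
    return x, y, theta + g_hat * dt
```

With v = ω = 1 and Δt = π/2 the noise-free model traces a quarter of a unit circle, which is a convenient sanity check.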
Kalman Filter
Linear system model with Gaussian noise
Moments parameterization
First moment - mean
Second moment - covariance
Kalman Filter

What if the system is nonlinear? Is a linear-Gaussian model a reasonable characterization of the system?

Kalman Filter (algorithm)

That's it, only five equations.

Prediction (linear process model, with system input u_t and system covariance R_t):

  μ̄_t = A_t μ_{t-1} + B_t u_t
  Σ̄_t = A_t Σ_{t-1} A_t^T + R_t

The process model corresponds to the Gaussian

  p(x_t | u_t, x_{t-1}) = det(2π R_t)^{-1/2} exp(-(1/2) (x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t))

Measurement update:

  K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{-1}
  μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t
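The five equations map one-to-one onto code. A minimal sketch with NumPy, for generic matrices A, B, C, R, Q:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter cycle: prediction followed by measurement update."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Kalman gain
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    # Measurement update
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

With a perfect full-state sensor (C = I, Q = 0) the gain becomes the identity and the posterior collapses onto the measurement, matching the special case discussed later.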
Example:

  u = [ẋ, ẏ]^T,   A = I_2,   B = Δt I_2,   R_t = diag(σ_1², σ_2²)

  μ̄_t = μ_{t-1} + [ẋΔt, ẏΔt]^T
  Σ̄_t = Σ_{t-1} + diag(σ_1², σ_2²)

[Figure: the covariance drawn as an equipotential ellipse in the (x, y) plane with semi-axes σ_1 and σ_2; starting from μ_0 = [0, 0]^T and Σ_0 = I_2 at t = 0, the ellipse grows after the prediction at t = 1.]
Kalman Filter (algorithm)

Measurement update, built around the Kalman gain:

  K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{-1}
  μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t

The gain answers: how much weight should be given to the prediction versus the measurement? It brings in two more parameters to consider: the linear measurement model C_t and the measurement noise Q_t.

The measurement of the state behaves based on the following assumption:

  p(z_t | x_t) = det(2π Q_t)^{-1/2} exp(-(1/2) (z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t))
The term (z_t - C_t μ̄_t) in the mean update is the innovation.

Special case: if C_t = I_N and Q_t = 0_N (a perfect full-state sensor), then K_t = I_N and μ_t = z_t: the filter trusts the measurement completely.

The covariance update Σ_t = (I - K_t C_t) Σ̄_t answers: how does the error of the measurement model differ from the predicted error? In the same special case, C_t = I_N and K_t = I_N give Σ_t = 0_N. Clearly perfect information means there is no error, so the error is degenerate (a point).
Example (cont.):

  C_t = I_2,   Q_t = diag(ς_1², ς_2²)

  K_t = Σ̄_t (Σ̄_t + Q_t)^{-1}
      = (Σ_{t-1} + diag(σ_1², σ_2²)) (Σ_{t-1} + diag(σ_1² + ς_1², σ_2² + ς_2²))^{-1}

  μ_t = (I_2 - K_t) μ̄_t + K_t z_t
  Σ_t = (I_2 - K_t) Σ̄_t

[Figure: starting from μ_0 = [0, 0]^T and Σ_0 = I_2 at t = 0, the prediction at t = 1 grows the uncertainty ellipse; the measurement z_t with covariance Q_t then pulls the estimate toward the observed position and shrinks the ellipse.]
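Under this example's assumptions (A = I_2, B = Δt I_2, C = I_2, diagonal noise), one full cycle can be computed numerically. The variances, command, and reading below are made up for illustration.

```python
import numpy as np

dt = 1.0
sigma = np.array([0.3, 0.3])    # process noise std devs (assumed values)
varsig = np.array([0.2, 0.2])   # measurement noise std devs (assumed values)

mu, Sigma = np.zeros(2), np.eye(2)   # mu_0 = [0, 0]^T, Sigma_0 = I_2
u = np.array([1.0, 0.5])             # commanded velocity [xdot, ydot]
z = np.array([1.1, 0.4])             # observed position at t = 1

# Prediction: mu_bar = mu + u*dt, Sigma_bar = Sigma + diag(sigma^2)
mu_bar = mu + u * dt
Sigma_bar = Sigma + np.diag(sigma ** 2)

# Gain with C = I_2: K = Sigma_bar (Sigma_bar + Q)^-1
K = Sigma_bar @ np.linalg.inv(Sigma_bar + np.diag(varsig ** 2))

mu = (np.eye(2) - K) @ mu_bar + K @ z
Sigma = (np.eye(2) - K) @ Sigma_bar
```

Because the measurement noise is much smaller than the predicted covariance, the gain is close to identity and the posterior mean lands near the observation.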
Kalman Filter (algorithm)

Two questions about these equations:

What about computational complexity? The gain K_t requires inverting the innovation covariance, which is cheap for N = 2 but costly for N = 2^10.

What about global uncertainty? A single Gaussian is a poor representation when the state is essentially unknown.
Re-parameterization:

  Ω = Σ^{-1}     Information matrix
  ξ = Σ^{-1} μ   Information vector

Information Filter (algorithm)

Prediction:

  Ω̄_t = (A_t Ω_{t-1}^{-1} A_t^T + R_t)^{-1}
  ξ̄_t = Ω̄_t (A_t Ω_{t-1}^{-1} ξ_{t-1} + B_t u_t)

Measurement:

  Ω_t = C_t^T Q_t^{-1} C_t + Ω̄_t
  ξ_t = C_t^T Q_t^{-1} z_t + ξ̄_t
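A sketch of one Information-filter cycle with NumPy. Note how the measurement update is two additions, while the prediction carries the matrix inversions.

```python
import numpy as np

def info_filter_step(Omega, xi, u, z, A, B, C, R, Q):
    """One Information filter cycle in canonical parameters (Omega, xi)."""
    # Prediction (inversions live here)
    Sigma_prev = np.linalg.inv(Omega)
    Omega_bar = np.linalg.inv(A @ Sigma_prev @ A.T + R)
    xi_bar = Omega_bar @ (A @ Sigma_prev @ xi + B @ u)
    # Measurement update: pure additions of information
    Omega_new = C.T @ np.linalg.inv(Q) @ C + Omega_bar
    xi_new = C.T @ np.linalg.inv(Q) @ z + xi_bar
    return Omega_new, xi_new

# Moments are recovered as Sigma = Omega^-1 and mu = Omega^-1 xi.
```

With all matrices set to the identity and z = [1, 0]^T, the recovered mean is [2/3, 0]^T, exactly what the Kalman filter produces on the same problem.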
KF/IF Comparison

Kalman Filter:
  μ̄_t = A_t μ_{t-1} + B_t u_t
  Σ̄_t = A_t Σ_{t-1} A_t^T + R_t
  K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{-1}
  μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t

Information Filter:
  Ω̄_t = (A_t Ω_{t-1}^{-1} A_t^T + R_t)^{-1}
  ξ̄_t = Ω̄_t (A_t Ω_{t-1}^{-1} ξ_{t-1} + B_t u_t)
  Ω_t = C_t^T Q_t^{-1} C_t + Ω̄_t
  ξ_t = C_t^T Q_t^{-1} z_t + ξ̄_t

Reminder: Ω_t = Σ_t^{-1} and ξ_t = Ω_t μ_t, so the two filters carry the same Gaussian in different parameters.

Computational complexity? The two are duals: the KF prediction is cheap while its measurement update inverts a matrix, whereas the IF measurement update is cheap (two additions) while its prediction carries the inversions. The choice of IF or KF is application dependent.
Extended Kalman Filter
Start from the Kalman filter and replace each linear model with its non-linear counterpart:

  Linear process model:      μ̄_t = A_t μ_{t-1} + B_t u_t   becomes   μ̄_t = g(u_t, μ_{t-1})
  Linear measurement model:  C_t μ̄_t                       becomes   h(μ̄_t)

Extended Kalman Filter (algorithm)

Linearized process model:

  μ̄_t = g(u_t, μ_{t-1})
  Σ̄_t = G_t Σ_{t-1} G_t^T + R_t

Linearized measurement model:

  K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}
  μ_t = μ̄_t + K_t (z_t - h(μ̄_t))
  Σ_t = (I - K_t H_t) Σ̄_t
Consider a robot driving from t0 to t1 with control inputs (v, ω) and a laser range sensor.

Velocity model (with Gaussian noise):

  v̂ = v + ε_v
  ω̂ = ω + ε_ω

  x_t = x_{t-1} - (v̂/ω̂) sin θ_{t-1} + (v̂/ω̂) sin(θ_{t-1} + ω̂Δt)
  y_t = y_{t-1} + (v̂/ω̂) cos θ_{t-1} - (v̂/ω̂) cos(θ_{t-1} + ω̂Δt)
  θ_t = θ_{t-1} + ω̂Δt + γ̂Δt
Computing G:

From the Taylor expansion, G_t is the Jacobian of the motion model with respect to the state:

  G_t = ∂g(u_t, μ_{t-1}) / ∂x_{t-1}

Partial derivatives:

  ∂x_t/∂x_{t-1} = 1,  ∂x_t/∂y_{t-1} = 0,  ∂x_t/∂θ_{t-1} = -(v/ω) cos θ_{t-1} + (v/ω) cos(θ_{t-1} + ωΔt)
  ∂y_t/∂x_{t-1} = 0,  ∂y_t/∂y_{t-1} = 1,  ∂y_t/∂θ_{t-1} = -(v/ω) sin θ_{t-1} + (v/ω) sin(θ_{t-1} + ωΔt)
  ∂θ_t/∂x_{t-1} = 0,  ∂θ_t/∂y_{t-1} = 0,  ∂θ_t/∂θ_{t-1} = 1

      [ 1  0  -(v/ω) cos θ_{t-1} + (v/ω) cos(θ_{t-1} + ωΔt) ]
  G = [ 0  1  -(v/ω) sin θ_{t-1} + (v/ω) sin(θ_{t-1} + ωΔt) ]
      [ 0  0                       1                         ]

The process noise is mapped from control space into state space:

  R_t = V_t M_t V_t^T

where M_t is the covariance of the control noise (ε_v, ε_ω) and V_t = ∂g(u_t, μ_{t-1}) / ∂u_t.
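The Jacobian above can be verified numerically: this sketch compares the analytic G against central finite differences of g. The state and control values are arbitrary test points.

```python
import numpy as np

def g(state, v, w, dt):
    """Velocity motion model (noise-free mean); assumes w != 0."""
    x, y, th = state
    return np.array([
        x - (v / w) * np.sin(th) + (v / w) * np.sin(th + w * dt),
        y + (v / w) * np.cos(th) - (v / w) * np.cos(th + w * dt),
        th + w * dt,
    ])

def G_analytic(state, v, w, dt):
    # Jacobian of g with respect to (x, y, theta), as derived above.
    th = state[2]
    return np.array([
        [1.0, 0.0, -(v / w) * np.cos(th) + (v / w) * np.cos(th + w * dt)],
        [0.0, 1.0, -(v / w) * np.sin(th) + (v / w) * np.sin(th + w * dt)],
        [0.0, 0.0, 1.0],
    ])

def G_numeric(state, v, w, dt, eps=1e-6):
    """Central finite-difference Jacobian of g with respect to the state."""
    J = np.zeros((3, 3))
    for j in range(3):
        d = np.zeros(3)
        d[j] = eps
        J[:, j] = (g(state + d, v, w, dt) - g(state - d, v, w, dt)) / (2 * eps)
    return J
```

The same pattern, differentiating with respect to (v, ω) instead, yields a numerical V_t.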
Measurement model

The laser returns a range and a bearing to obstacle i:

  z_t^i = (r_t^i, φ_t^i)

For a map obstacle k at (m_{k,x}, m_{k,y}), the predicted measurement is

  z_t^k = [ sqrt((m_{k,x} - x)² + (m_{k,y} - y)²)  ]
          [ atan2(m_{k,y} - y, m_{k,x} - x) - θ    ]

(subtracting θ expresses the bearing in the robot frame, for agreement with the sensor).
Linearization of the measurement model:

  h(x_t, k, m) ≈ h(μ̄_t, k, m) + H_t^k (x_t - μ̄_t),   H_t^k = ∂h(μ̄_t, k, m) / ∂x_t

What is h(μ̄_t, k, m)? It is the predicted measurement z_t^k, evaluated at the predicted mean μ̄_t.
Differentiating z_t^k with respect to the state (x, y, θ), with z_{t,r}^k the predicted range, gives

  H_t^k = [ -(m_{k,x} - x)/z_{t,r}^k      -(m_{k,y} - y)/z_{t,r}^k       0  ]
          [  (m_{k,y} - y)/(z_{t,r}^k)²   -(m_{k,x} - x)/(z_{t,r}^k)²   -1  ]

and the innovation covariance

  S_t^k = H_t^k Σ̄_t (H_t^k)^T + Q_t
Compute z_t^k, H_t^k, and S_t^k for every map obstacle k, then compute the most likely obstacle that matches the observed obstacle using Maximum Likelihood:

  j = argmax_k det(2π S_t^k)^{-1/2} exp(-(1/2) (z_t^i - z_t^k)^T (S_t^k)^{-1} (z_t^i - z_t^k))

Update with the matched obstacle j:

  K_t^i = Σ̄_t (H_t^j)^T (S_t^j)^{-1}
  μ̄_t ← μ̄_t + K_t^i (z_t^i - z_t^j)
  Σ̄_t ← (I - K_t^i H_t^j) Σ̄_t

When all observations have been processed: μ_t = μ̄_t, Σ_t = Σ̄_t.
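The predicted measurement and the Maximum-Likelihood association step can be sketched as follows. Function names are illustrative, and a full implementation would also wrap the bearing innovation to [-π, π].

```python
import numpy as np

def range_bearing(pose, landmark):
    """Predicted measurement z^k = [range, bearing] for a landmark (m_x, m_y)."""
    x, y, th = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - th])

def ml_association(z_obs, z_pred_list, S_list):
    """Pick the landmark index maximizing the Gaussian likelihood of the innovation."""
    best_j, best_ll = -1, -np.inf
    for k, (z_k, S_k) in enumerate(zip(z_pred_list, S_list)):
        innov = z_obs - z_k
        # (a full version would normalize the bearing component of innov here)
        # log-likelihood: -0.5 * (innov^T S^-1 innov + log det(2 pi S))
        ll = -0.5 * (innov @ np.linalg.solve(S_k, innov)
                     + np.log(np.linalg.det(2 * np.pi * S_k)))
        if ll > best_ll:
            best_j, best_ll = k, ll
    return best_j
```

Maximizing the log-likelihood is equivalent to the argmax above, since log is monotone.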
The same substitutions give the Extended Information Filter:

  μ_{t-1} = Ω_{t-1}^{-1} ξ_{t-1}
  μ̄_t = g(u_t, μ_{t-1})
  Ω̄_t = (G_t Ω_{t-1}^{-1} G_t^T + R_t)^{-1}
  ξ̄_t = Ω̄_t μ̄_t
  Ω_t = Ω̄_t + H_t^T Q_t^{-1} H_t
  ξ_t = ξ̄_t + H_t^T Q_t^{-1} (z_t - h(μ̄_t) + H_t μ̄_t)
Reminder: the simulated robot drives with v = 0.1 m/s and ω = 0.1 rad/s.

[Figures: estimated vs. true trajectory, position and angular error, with and without GPS measurements.]

Are the results reasonable given the system model and sensors?

How was the filter initialized? μ = [0, 0, 0]^T, Σ = 100 I_3. Why such a large initial covariance? Filters require tuning; guessing is not a good idea!
The control noise was set to

  M_t = [ 0.5  0.1 ]
        [ 0.1  0.5 ]   (noise in control space)

Recall R_t = V_t M_t V_t^T, where V_t = ∂g(u_t, μ_{t-1}) / ∂u_t is the Jacobian of the motion model with respect to the controls (its entries are built from the same sin/cos terms in θ_{t-1}, ω, and Δt as G).

The measurement noise was set to

  Q = [ 1  0  0   ]
      [ 0  1  0   ]
      [ 0  0  0.4 ]
New Results

[Figures: position error and angular error, old vs. new tuning.]
Unscented Kalman Filter
Instead of linearizing g, the Unscented Kalman Filter propagates a small set of sigma points through g(μ_{t-1}, u_t) and recovers the moments from the transformed set (illustrated for N = 2).

Each sigma point is defined by a state and weight:

  s_i = {x_i, W_i},   x_i ∈ R^N,   W_i ∈ R,   Σ_{i=0}^{2N} W_i = 1

The points are chosen consistent with the first moment,

  μ = Σ_{i=0}^{2N} W_i x_i

and with the second moment,

  Σ = Σ_{i=0}^{2N} W_i (x_i - μ)(x_i - μ)^T
One such choice places the points symmetrically about the mean:

  x_0 = μ
  x_i     = μ + ( sqrt( (N / (1 - W_0)) Σ ) )_i    i = {1, ..., N}
  x_{i+N} = μ - ( sqrt( (N / (1 - W_0)) Σ ) )_i

Matrix square-root: for sqrt(B) = A, take row_i if B = A^T A and column_i if B = A A^T.

  W_0 = λ / (N + λ),   λ = α²(N + κ) - N

Scaling parameters: e.g. α = 1 with κ swept over 0.1 : 0.1 : 1.
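Sigma-point generation can be sketched with a Cholesky factor (Σ' = L L^T, so columns are used, per the square-root convention). Here W_0 is taken directly as a free parameter rather than via (α, κ).

```python
import numpy as np

def sigma_points(mu, Sigma, W0=0.5):
    """2N+1 sigma points consistent with the first two moments of N(mu, Sigma)."""
    N = len(mu)
    # Scale so the weighted sample covariance reproduces Sigma exactly.
    L = np.linalg.cholesky(N / (1.0 - W0) * Sigma)  # lower-triangular, B = L L^T
    pts = [mu]
    for i in range(N):
        pts.append(mu + L[:, i])
    for i in range(N):
        pts.append(mu - L[:, i])
    W = np.full(2 * N + 1, (1.0 - W0) / (2 * N))  # remaining weight, split evenly
    W[0] = W0
    return np.array(pts), W
```

By symmetry the weighted mean of the points is μ, and the scaling N/(1 - W_0) makes the weighted outer products sum back to Σ.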
Unscented Kalman Filter (algorithm)

Prediction:

  {x_0, x_i} = Initialize(μ, Σ, α, κ)
  {x̄_0, x̄_i} = g(x_0, x_i, u)

  μ̄ = Σ_{i=0}^{2N} W_i x̄_i
  Σ̄ = Σ_{i=0}^{2N} W_i (x̄_i - μ̄)(x̄_i - μ̄)^T + R

Measurement update:

  {x̄_0, x̄_i} = Initialize(μ̄, Σ̄, α, κ)
  {z̄_0, z̄_i} = h(x̄_0, x̄_i, u)

  z̄ = Σ_{i=0}^{2N} W_i z̄_i
  S = Σ_{i=0}^{2N} W_i (z̄_i - z̄)(z̄_i - z̄)^T + Q
  Σ̄^{x,z} = Σ_{i=0}^{2N} W_i (x̄_i - μ̄)(z̄_i - z̄)^T

  K = Σ̄^{x,z} S^{-1}
  μ = μ̄ + K (z - z̄)
  Σ = Σ̄ - K S K^T
Particle Filters

Nonparametric representation of the estimate. Permits a broader space of distributions (including multimodal distributions).

The estimate is carried by a set of M weighted particles:

  X_t = {⟨x_t^1, w_t^1⟩, ..., ⟨x_t^M, w_t^M⟩}

Basic idea: create a set of weighted hypotheses (particles) based on the previous set of hypotheses, system inputs, and measurements. This is a Monte Carlo method.
Particle Filter (algorithm)

For M particles:
  sample x_t^m ~ p(x_t | u_t, x_{t-1}^m)    (sampling introduces noise to every particle)
  w_t^m = p(z_t | x_t^m)                    (importance factor)
  X̄_t = X̄_t + ⟨x_t^m, w_t^m⟩
End

X_t = ImportanceResampling(X̄_t)
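One cycle of the loop above, as a sketch: the motion and measurement models are supplied by the caller, and the weighted multinomial draw at the end plays the role of ImportanceResampling.

```python
import numpy as np

def particle_filter_step(particles, u, z, sample_motion, likelihood, rng):
    """One particle filter cycle: sample, weight, resample.

    particles: (M, D) array; sample_motion(x, u, rng) returns a new state;
    likelihood(z, x) returns p(z | x).  Both models come from the caller.
    """
    M = len(particles)
    # 1. sample x_t^m ~ p(x_t | u_t, x_{t-1}^m)
    moved = np.array([sample_motion(x, u, rng) for x in particles])
    # 2. importance factor w_t^m = p(z_t | x_t^m), normalized
    w = np.array([likelihood(z, x) for x in moved])
    w = w / w.sum()
    # 3. importance resampling: draw M indices proportional to weight
    idx = rng.choice(M, size=M, p=w)
    return moved[idx]
```

A one-dimensional toy (uniform prior, Gaussian likelihood around the true position) shows the resampled cloud concentrating near the measurement after a single step.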
Importance Resampling

Take the particles and lay them side-by-side according to each particle's weight: x̄_t^1 occupies a segment of length w_t^1, ..., x̄_t^m a segment of length w_t^m. Sampling positions uniformly along the resulting line means particles with higher weights will be selected with higher probability.
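The side-by-side picture corresponds to systematic (low-variance) resampling: one uniform draw positions M equally spaced pointers along the cumulative weight line. A sketch:

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Systematic resampling: a single random offset plus M equally spaced
    pointers along the cumulative weight line."""
    M = len(particles)
    w = np.asarray(weights, dtype=float)
    cum = np.cumsum(w / w.sum())            # the "side-by-side" line, in [0, 1]
    r = rng.uniform(0.0, 1.0 / M)           # one random offset
    pointers = r + np.arange(M) / M         # M equally spaced sampling positions
    idx = np.searchsorted(cum, pointers)    # segment each pointer lands in
    return particles[idx]
```

Compared with M independent draws, this costs only one random number and gives lower variance in the selected counts.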
Common Issues

  Particle deprivation
  Random number generation
  Resampling frequency

Resampling at every step is not always necessary; when a step's resampling is skipped, the weights accumulate multiplicatively:

  w_t^m ∝ p(z_t | x_t^m) w_{t-1}^m