Haocheng Li
Department of Aerospace Engineering
Worcester Polytechnic Institute
Outline
Kalman Filter
Uncorrelated Process and Measurement Noise
H Infinity Filter
Dynamic Constrained Optimization
A Game Approach to H Infinity Filter
Constant Estimations

Measurement System

    y = Hx + v

x    the unknown constant
v    the measurement noise
H    the system matrix determined by the sensor setup
y    the measurement output
x̂    the estimate of x
ỹ = y - Hx̂    the measurement residual
Constant Estimations

Theorem
The most probable value of x is the x̂ that minimizes the norm of the
measurement residual ỹ.

Define the cost function of the estimator to be

    J = ỹ^T ỹ

Expand the cost function as follows:

    J = (y - Hx̂)^T (y - Hx̂)
      = y^T y - y^T Hx̂ - x̂^T H^T y + x̂^T H^T H x̂

To minimize the cost function, set

    ∂J/∂x̂ = -2 y^T H + 2 x̂^T H^T H = 0

Therefore, the optimal estimate of x is

    x̂ = (H^T H)^{-1} H^T y
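As a quick numerical sketch of the closed-form estimate above (the matrix H, the true constant, and the noise level are made-up illustrative values, not from the slides):

```python
import numpy as np

# Batch least-squares estimate x̂ = (H^T H)^{-1} H^T y.
rng = np.random.default_rng(0)

x_true = np.array([2.0, -1.0])        # the unknown constant x
H = rng.standard_normal((20, 2))      # system matrix from the sensor setup
v = 0.01 * rng.standard_normal(20)    # measurement noise
y = H @ x_true + v                    # measurements y = Hx + v

# Optimal estimate: minimizes the residual norm ||y - H x̂||
x_hat = np.linalg.solve(H.T @ H, H.T @ y)
residual = y - H @ x_hat              # ỹ = y - H x̂

print(x_hat)  # close to [2, -1]
```

The normal equations are solved with `np.linalg.solve` rather than forming the explicit inverse, which is the numerically preferred route; the residual is orthogonal to the columns of H at the optimum.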
The preceding expansion uses the fact that the estimation-error mean is
zero. Therefore, the recursive estimation-variance equation is

    P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T

Therefore, the optimal estimation gain can be computed from

    ∂Tr(P_k)/∂K_k = -2 (I - K_k H_k) P_{k-1} H_k^T + 2 K_k R_k = 0

Then, the gain update equation is

    K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
Summary

Recursive Least Squares Estimation

Initialization:

    x̂_0 = E(x)
    P_0 = E[(x - x̂_0)(x - x̂_0)^T]

Measurement:

    y_k = H_k x + v_k
    E(v_k) = 0
    E(v_i v_k^T) = R_k δ_{ik}

Update Estimation:

    K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    x̂_k = x̂_{k-1} + K_k (y_k - H_k x̂_{k-1})
    P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T
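The summary above can be sketched directly in NumPy. The constant x, the per-step measurement matrices H_k, and the noise statistics below are assumed values for illustration only:

```python
import numpy as np

# Recursive least-squares estimation of a constant vector.
rng = np.random.default_rng(1)

x_true = np.array([1.0, 3.0])   # unknown constant
x_hat = np.zeros(2)             # x̂_0 = E(x), assumed zero prior mean
P = 100.0 * np.eye(2)           # large P_0 encodes low prior confidence
R = np.array([[0.01]])          # measurement noise covariance R_k

for k in range(50):
    H = rng.standard_normal((1, 2))                # H_k: one scalar reading
    y = H @ x_true + 0.1 * rng.standard_normal(1)  # y_k = H_k x + v_k
    # K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # x̂_k = x̂_{k-1} + K_k (y_k - H_k x̂_{k-1})
    x_hat = x_hat + K @ (y - H @ x_hat)
    # Joseph-form update keeps P_k symmetric positive semidefinite
    I_KH = np.eye(2) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T

print(x_hat)  # converges toward [1, 3]
```

As measurements accumulate, the trace of P shrinks, reflecting growing confidence in the estimate.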
a priori estimate:     x̂_k^- = E(x_k | y_1, y_2, ..., y_{k-1})
a posteriori estimate: x̂_k^+ = E(x_k | y_1, y_2, ..., y_k)

The corresponding estimation-error covariance matrices are

    P_k^- = E[(x_k - x̂_k^-)(x_k - x̂_k^-)^T]
    P_k^+ = E[(x_k - x̂_k^+)(x_k - x̂_k^+)^T]
The a priori estimate propagates through the system dynamics:

    x̂_k^- = F_{k-1} x̂_{k-1}^+

The covariance matrix of the estimation error propagates as follows:

    P_k^- = E[(x_k - x̂_k^-)(x_k - x̂_k^-)^T]
          = F_{k-1} E[(x_{k-1} - x̂_{k-1}^+)(x_{k-1} - x̂_{k-1}^+)^T] F_{k-1}^T + E[w_{k-1} w_{k-1}^T]
            + F_{k-1} E[(x_{k-1} - x̂_{k-1}^+) w_{k-1}^T] + E[w_{k-1} (x_{k-1} - x̂_{k-1}^+)^T] F_{k-1}^T

The two cross terms vanish because the process noise w_{k-1} is
uncorrelated with the estimation error, so the resulting propagation
equation is

    P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}
Initialization:

    x̂_0^+ = E(x_0)
    P_0^+ = E[(x_0 - x̂_0^+)(x_0 - x̂_0^+)^T]

Estimation:

Prediction of the state estimate and the estimation covariance matrix
(a priori):

    x̂_k^- = F_{k-1} x̂_{k-1}^+
    P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}

Posterior Estimation (Correction):

    K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}
    x̂_k^+ = x̂_k^- + K_k (y_k - H_k x̂_k^-)

Posterior Covariance:

    P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T
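The predict/correct cycle can be sketched on an assumed constant-velocity model; F, H, Q, R and the trajectory below are illustrative choices, not values from the slides:

```python
import numpy as np

# Minimal Kalman filter: predict with the dynamics, correct with measurements.
rng = np.random.default_rng(2)

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # F_k: position integrates velocity
H = np.array([[1.0, 0.0]])              # H_k: position-only measurement
Q = 1e-4 * np.eye(2)                    # process noise covariance Q_k
R = np.array([[0.25]])                  # measurement noise covariance R_k

x = np.array([0.0, 1.0])                # true state (position, velocity)
x_hat = np.array([0.0, 0.0])            # x̂_0^+: velocity initially unknown
P = np.eye(2)                           # P_0^+

for k in range(100):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)  # true dynamics
    y = H @ x + rng.normal(0.0, 0.5, 1)                  # noisy measurement
    # Prediction (a priori)
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # Correction (a posteriori), Joseph-form covariance update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (y - H @ x_hat)
    I_KH = np.eye(2) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T

print(x_hat)  # close to the true state x
```

Note that the filter recovers the unobserved velocity from position measurements alone, through the coupling in F.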
Consider the problem of minimizing a cost function of the form

    J = φ(x_0) + Σ_{k=0}^{N-1} L_k(x_k, w_k)

subject to the dynamic constraint

    x_{k+1} = F_k x_k + w_k    (k = 0, ..., N-1)
= (x0 ) +
= (x0 ) +
N1
X
k=0
N1
X
k=0
N1
X
Lk (xk , wk ) + T
(F
x
+
w
x
)
k
k+1
k+1 k k
Lk (xk , wk ) +
T
k+1 (Fk xk
N1
X
k=0
N
X
+ wk )
Lk (xk , wk ) + T
k+1 (Fk xk + wk )
k=0
T
k+1 xk+1
T
T
k xk + 0 x0
k=0
N1
X
T
T
(Hk T
k xk ) N xN + 0 x0
k=0
Haocheng Li (Worcester Polytechnic Institute)
19 / 31
Setting the first variation of J to zero with respect to x_k
(k = 0, ..., N), w_k (k = 0, ..., N-1), and λ_k yields the necessary
conditions for a stationary point. The third condition yields the dynamic
constraint, and the first two conditions can be simplified as follows:

    ∂φ/∂x_0 + λ_0^T = 0
    λ_N^T = 0
    λ_k^T = ∂H_k/∂x_k    (k = 1, ..., N-1)
    ∂H_k/∂w_k = 0        (k = 0, ..., N-1)
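As a concrete instance of these conditions (an illustrative scalar example, not from the slides), take x_{k+1} = f x_k + w_k with stage cost L_k = ½ q x_k² + ½ r w_k²; the conditions reduce to a costate recursion and an explicit optimal disturbance:

```latex
H_k = \tfrac{1}{2} q x_k^2 + \tfrac{1}{2} r w_k^2
      + \lambda_{k+1}\,(f x_k + w_k)
\qquad \text{(scalar Hamiltonian)}

\frac{\partial H_k}{\partial w_k} = r w_k + \lambda_{k+1} = 0
\quad\Longrightarrow\quad w_k = -\lambda_{k+1}/r

\lambda_k = \frac{\partial H_k}{\partial x_k} = q x_k + f \lambda_{k+1},
\qquad \lambda_N = 0
```

Solving the coupled state and costate recursions backward from λ_N = 0 gives the optimal w_k sequence.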
Problem Statement

Estimation Problem
The system dynamics and measurement model are as follows:

    x_{k+1} = F_k x_k + w_k
    y_k = H_k x_k + v_k

The goal is to estimate a linear combination of the states,

    z_k = L_k x_k

Cost Function
The performance criterion is

    J_1 = Σ_{k=0}^{N-1} ‖z_k - ẑ_k‖²_{S_k}
          / ( ‖x_0 - x̂_0‖²_{P_0^{-1}} + Σ_{k=0}^{N-1} ( ‖w_k‖²_{Q_k^{-1}} + ‖v_k‖²_{R_k^{-1}} ) )
The design goal is the attenuation bound J_1 < 1, i.e.

    -‖x_0 - x̂_0‖²_{P_0^{-1}} + Σ_{k=0}^{N-1} [ ‖z_k - ẑ_k‖²_{S_k} - ‖w_k‖²_{Q_k^{-1}} - ‖v_k‖²_{R_k^{-1}} ] < 0

Accordingly, define the cost function

    J = -‖x_0 - x̂_0‖²_{P_0^{-1}} + Σ_{k=0}^{N-1} [ ‖z_k - ẑ_k‖²_{S_k} - ‖w_k‖²_{Q_k^{-1}} - ‖v_k‖²_{R_k^{-1}} ]
The estimator must guarantee J < 0 against the worst-case disturbances,
which gives the minimax problem

    J* = min_{ẑ_k} max_{w_k, v_k, x_0} J
       = min_{x̂_k} max_{w_k, v_k, x_0} J
       = min_{x̂_k} max_{w_k, y_k, x_0} J

since ẑ_k = L_k x̂_k and, with x_k fixed, choosing v_k is equivalent to
choosing y_k. Substituting v_k = y_k - H_k x_k and defining
S̄_k = L_k^T S_k L_k, the cost function becomes

    J = -‖x_0 - x̂_0‖²_{P_0^{-1}} + Σ_{k=0}^{N-1} [ ‖x_k - x̂_k‖²_{S̄_k} - ‖w_k‖²_{Q_k^{-1}} - ‖y_k - H_k x_k‖²_{R_k^{-1}} ]

Adjoining the dynamic constraint with multipliers λ_{k+1}, as in the
dynamic constrained optimization section,

    J = φ(x_0) + Σ_{k=0}^{N-1} [ L_k + 2 λ_{k+1}^T (F_k x_k + w_k - x_{k+1}) ]

where φ(x_0) = -‖x_0 - x̂_0‖²_{P_0^{-1}} and L_k collects the summand
above.
Setting the first variation of J to zero yields the stationarity
conditions

    ∂φ/∂x_0 + 2 λ_0^T = 0
    2 λ_N^T = 0
    ∂H_k/∂w_k = 0
    2 λ_k^T = ∂H_k/∂x_k
Solving these conditions yields a Riccati recursion for P_k together with
a recursion for an auxiliary variable μ_k:

    P_{k+1} = F_k (P_k^{-1} - S̄_k + H_k^T R_k^{-1} H_k)^{-1} F_k^T + Q_k

    μ_0 = x̂_0
    μ_{k+1} = F_k μ_k
              + F_k P_k (I - S̄_k P_k + H_k^T R_k^{-1} H_k P_k)^{-1} S̄_k (μ_k - x̂_k)
              + F_k P_k (I - S̄_k P_k + H_k^T R_k^{-1} H_k P_k)^{-1} H_k^T R_k^{-1} (y_k - H_k μ_k)
Substituting the worst-case w_k and x_0 back into the cost, the remaining
problem is

    J* = min_{x̂_k} max_{y_k} J

with

    J = Σ_{k=0}^{N-1} [ (μ_k - x̂_k)^T (S̄_k + S̄_k P_k S̄_k)(μ_k - x̂_k)
        + 2 (μ_k - x̂_k)^T S̄_k P_k H_k^T R_k^{-1} (y_k - H_k μ_k)
        + (y_k - H_k μ_k)^T (R_k^{-1} H_k P_k H_k^T R_k^{-1} - R_k^{-1})(y_k - H_k μ_k) ]
Differentiating with respect to y_k,

    ∂J/∂y_k = 2 (R_k^{-1} H_k P_k H_k^T R_k^{-1} - R_k^{-1})(y_k - H_k μ_k)
              + 2 R_k^{-1} H_k P_k S̄_k (μ_k - x̂_k) = 0

Therefore, the stationarity conditions are clearly satisfied by

    x̂_k = μ_k
    y_k = H_k μ_k

Notice that the stationary condition implies that the cost function is
zero,

    J = 0

and thus the attenuation condition J_1 < 1 is satisfied.
Substituting x̂_k = μ_k into the recursion for μ_k gives the filter
update

    x̂_{k+1} = F_k x̂_k + F_k P_k (I - S̄_k P_k + H_k^T R_k^{-1} H_k P_k)^{-1} H_k^T R_k^{-1} (y_k - H_k x̂_k)
Summary

H Infinity Filter

Performance criterion:

    J_1 = Σ_{k=0}^{N-1} ‖z_k - ẑ_k‖²_{S_k}
          / ( ‖x_0 - x̂_0‖²_{P_0^{-1}} + Σ_{k=0}^{N-1} ( ‖w_k‖²_{Q_k^{-1}} + ‖v_k‖²_{R_k^{-1}} ) )

Filter equations, with S̄_k = L_k^T S_k L_k:

    K_k = P_k (I - S̄_k P_k + H_k^T R_k^{-1} H_k P_k)^{-1} H_k^T R_k^{-1}
    x̂_{k+1} = F_k x̂_k + F_k K_k (y_k - H_k x̂_k)
    P_{k+1} = F_k (P_k^{-1} - S̄_k + H_k^T R_k^{-1} H_k)^{-1} F_k^T + Q_k

with the positive definiteness criterion

    P_k^{-1} - S̄_k + H_k^T R_k^{-1} H_k > 0
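The summary equations can be sketched numerically on an assumed constant-velocity model (F, H, Q, R, L, S below are illustrative choices, not values from the slides), checking the positive definiteness criterion at each step:

```python
import numpy as np

# Minimal H-infinity filter recursion per the summary equations.
rng = np.random.default_rng(3)

F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
Rinv = np.linalg.inv(R)
L = np.eye(2)               # estimate the full state: z_k = x_k
S = 0.01 * np.eye(2)        # performance weight S_k (small -> near-Kalman)
S_bar = L.T @ S @ L         # S̄_k = L_k^T S_k L_k
I = np.eye(2)

x = np.array([0.0, 1.0])    # true state (position, velocity)
x_hat = np.zeros(2)
P = np.eye(2)

for k in range(100):
    # Criterion: P_k^{-1} - S̄_k + H^T R^{-1} H must stay positive definite
    M = np.linalg.inv(P) - S_bar + H.T @ Rinv @ H
    assert np.all(np.linalg.eigvalsh(M) > 0), "criterion violated"
    # K_k = P_k (I - S̄_k P_k + H^T R^{-1} H P_k)^{-1} H^T R^{-1}
    K = P @ np.linalg.inv(I - S_bar @ P + H.T @ Rinv @ H @ P) @ H.T @ Rinv
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x + rng.normal(0.0, 0.5, 1)
    # x̂_{k+1} = F x̂_k + F K_k (y_k - H x̂_k)
    x_hat = F @ x_hat + F @ K @ (y - H @ x_hat)
    # P_{k+1} = F (P_k^{-1} - S̄_k + H^T R^{-1} H)^{-1} F^T + Q
    P = F @ np.linalg.inv(M) @ F.T + Q

print(x_hat)  # tracks the true state x
```

With S small the gain stays close to the Kalman gain; increasing S makes the filter guard against larger worst-case disturbances, until the positive definiteness criterion eventually fails and the requested attenuation level becomes infeasible.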