
Optimal States Estimation

Kalman and H Infinity Approaches

Haocheng Li
Department of Aerospace Engineering
Worcester Polytechnic Institute

ECE 5311 Presentation, 2016

Haocheng Li (Worcester Polytechnic Institute)

Optimal States Estimation

ECE 5311 Presentation, 2016

1 / 31

Outline

Least Square Estimation
  Constant Estimations
  Recursive Least Square Estimation

Kalman Filter
  Uncorrelated Process and Measurement Noise

H Infinity Filter
  Dynamic Constrained Optimization
  A Game Approach to H Infinity Filter


Constant Estimations

Measurement System
    y = Hx + v

x : the unknown constant
v : the measurement noise
H : the system matrix, determined by the sensor setup
y : the measurement output
\hat{x} : the estimate of x
\tilde{y} = y - H\hat{x} : the measurement residual

Constant Estimations

Theorem
The most probable value of x is the \hat{x} that minimizes the norm of the measurement residual \tilde{y}.

Define the cost function of the estimator to be
    J = \tilde{y}^T \tilde{y}
Expand the cost function as follows:
    J = (y - H\hat{x})^T (y - H\hat{x})
      = y^T y - y^T H \hat{x} - \hat{x}^T H^T y + \hat{x}^T H^T H \hat{x}
To minimize the cost function, set the gradient to zero:
    \partial J / \partial \hat{x} = -2 y^T H + 2 \hat{x}^T H^T H = 0
Therefore, the optimal estimate of x is
    \hat{x} = (H^T H)^{-1} H^T y
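As a minimal numerical sketch of the batch estimate \hat{x} = (H^T H)^{-1} H^T y (NumPy assumed; the sensor matrix H, the true constant, and the noise level are made-up illustrative values):

```python
import numpy as np

# Made-up sensor setup: three scalar measurements of a 2-element constant x.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
v = 0.01 * rng.standard_normal(3)        # small measurement noise
y = H @ x_true + v

# Optimal estimate: x_hat = (H^T H)^{-1} H^T y (solve the normal equations
# rather than forming the inverse explicitly)
x_hat = np.linalg.solve(H.T @ H, H.T @ y)
residual = y - H @ x_hat                 # measurement residual y - H x_hat
```

With three measurements and low noise, `x_hat` lands close to `x_true` and the residual norm is on the order of the noise.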


Recursive Least Square Estimation

Recursive Measurement System
    y_k = H_k x + v_k
Suppose the recursive estimator of x satisfies
    \hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})
Notice the mean of the recursive estimation error:
    E(\epsilon_k) = E(x - \hat{x}_k)
                  = E( x - \hat{x}_{k-1} - K_k (y_k - H_k \hat{x}_{k-1}) )
                  = E( x - \hat{x}_{k-1} - K_k (H_k x + v_k - H_k \hat{x}_{k-1}) )
The resulting error-mean equation is
    E(\epsilon_k) = (I - K_k H_k) E(\epsilon_{k-1}) - K_k E(v_k)
Since E(v_k) = 0, the estimate remains unbiased whenever the previous estimate is unbiased.

Recursive Least Square Estimation

Define the optimality criterion as the variance of the estimation error:
    J_k = E(\epsilon_k^T \epsilon_k) = E( \mathrm{Tr}(\epsilon_k \epsilon_k^T) ) = \mathrm{Tr}(P_k)
The covariance matrix of the estimation error is
    P_k = E(\epsilon_k \epsilon_k^T)
        = E[ ( (I - K_k H_k) \epsilon_{k-1} - K_k v_k ) ( (I - K_k H_k) \epsilon_{k-1} - K_k v_k )^T ]
        = (I - K_k H_k) E(\epsilon_{k-1} \epsilon_{k-1}^T) (I - K_k H_k)^T + K_k E(v_k v_k^T) K_k^T
          - K_k E(v_k \epsilon_{k-1}^T) (I - K_k H_k)^T - (I - K_k H_k) E(\epsilon_{k-1} v_k^T) K_k^T

The measurement noise and the previous estimation error are independent, and the noise mean is zero, so
    E(v_k \epsilon_{k-1}^T) = E(\epsilon_{k-1} v_k^T) = 0
Therefore, the recursive estimation variance equation is
    P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T
The optimal estimation gain is found from
    \partial J / \partial K_k = \partial \mathrm{Tr}(P_k) / \partial K_k = -2 (I - K_k H_k) P_{k-1} H_k^T + 2 K_k R_k = 0
Then, the gain update equation is
    K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}

Summary
Recursive Least Square Estimation

Initialization:
    \hat{x}_0 = E(x)
    P_0 = E[ (x - \hat{x}_0)(x - \hat{x}_0)^T ]
Measurement:
    y_k = H_k x + v_k
    E(v_k) = 0
    E(v_i v_k^T) = R_k \delta_{ik}
Update Estimation:
    K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    \hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})
    P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T
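The summary above can be sketched directly in NumPy. This is an illustrative implementation, with a made-up setup in which both components of the constant are measured at every step:

```python
import numpy as np

def rls_step(x_hat, P, H_k, y_k, R_k):
    """One recursive least-squares update, following the summary equations."""
    # Gain: K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    K = P @ H_k.T @ np.linalg.inv(H_k @ P @ H_k.T + R_k)
    # Estimate update: x_k = x_{k-1} + K_k (y_k - H_k x_{k-1})
    x_new = x_hat + K @ (y_k - H_k @ x_hat)
    # Covariance update in the (Joseph) form of the summary
    I_KH = np.eye(len(x_hat)) - K @ H_k
    P_new = I_KH @ P @ I_KH.T + K @ R_k @ K.T
    return x_new, P_new

# Made-up example: a 2-element constant measured directly with noise.
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
x_hat = np.zeros(2)                       # x_hat_0 = E(x)
P = 10.0 * np.eye(2)                      # large initial uncertainty
H = np.eye(2)
R = 0.04 * np.eye(2)                      # noise std 0.2 per channel

for _ in range(200):
    y = H @ x_true + 0.2 * rng.standard_normal(2)
    x_hat, P = rls_step(x_hat, P, H, y, R)
```

As measurements accumulate, `P` shrinks and `x_hat` converges toward the unknown constant, matching the unbiasedness and variance analysis above.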


Uncorrelated Process and Measurement Noise

Linear Discrete-Time System
    x_k = F_{k-1} x_{k-1} + w_{k-1}
    y_k = H_k x_k + v_k

The process and measurement noise are Gaussian: w_k ~ (0, Q_k) and v_k ~ (0, R_k).
The noise sequences are white, so the state is a Markov process: E(w_k w_j^T) = Q_k \delta_{jk} and E(v_k v_j^T) = R_k \delta_{jk}.
The process and measurement noise are uncorrelated: E(w_k v_j^T) = 0.

a priori estimate: \hat{x}_k^- = E(x_k | y_1, y_2, \ldots, y_{k-1})
a posteriori estimate: \hat{x}_k^+ = E(x_k | y_1, y_2, \ldots, y_k)
a priori estimation error covariance: P_k^- = E[ (x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T ]
a posteriori estimation error covariance: P_k^+ = E[ (x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T ]

Propagation of the States and Covariance

For a linear discrete system
    x_k = F_{k-1} x_{k-1} + w_{k-1}
the mean of the states propagates as
    \bar{x}_k = F_{k-1} \bar{x}_{k-1}
The covariance matrix of the estimation error propagates as
    P_k = E[ (x_k - \bar{x}_k)(x_k - \bar{x}_k)^T ]
        = F_{k-1} E[ (x_{k-1} - \bar{x}_{k-1})(x_{k-1} - \bar{x}_{k-1})^T ] F_{k-1}^T + E(w_{k-1} w_{k-1}^T)
          + F_{k-1} E[ (x_{k-1} - \bar{x}_{k-1}) w_{k-1}^T ] + E[ w_{k-1} (x_{k-1} - \bar{x}_{k-1})^T ] F_{k-1}^T
Since the state error is uncorrelated with the process noise, the cross terms vanish, and the resulting propagation equation is
    P_k = F_{k-1} P_{k-1} F_{k-1}^T + Q_{k-1}

Hence, the a priori estimate at the current step can be obtained from the a posteriori estimate at the previous step using the propagation equations:
    \hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+
    P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}
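A quick Monte Carlo sanity check of the propagation equations (NumPy assumed; F and Q are made-up illustrative values) compares the analytic covariance recursion against a cloud of samples pushed through the same dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
F = np.array([[0.9, 0.1], [0.0, 0.8]])   # made-up stable dynamics
Q = np.diag([0.05, 0.02])

# Analytic propagation: P_k = F P_{k-1} F^T + Q (mean: xbar_k = F xbar_{k-1})
P = np.eye(2)
for _ in range(50):
    P = F @ P @ F.T + Q

# Monte Carlo: push a cloud of samples through the same stochastic dynamics.
X = rng.multivariate_normal(np.zeros(2), np.eye(2), size=100_000).T
for _ in range(50):
    X = F @ X + rng.multivariate_normal(np.zeros(2), Q, size=X.shape[1]).T
P_mc = np.cov(X)
```

The sample covariance `P_mc` agrees with the analytic `P` up to Monte Carlo error, and the sample mean decays to zero as F is stable.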

Initialization:
    \hat{x}_0^+ = E(x_0)
    P_0^+ = E[ (x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T ]
Estimation:
Prediction of the estimation covariance matrix:
    P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}
Computation of the estimation gain:
    K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}
A priori estimate (prediction):
    \hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+
A posteriori estimate (correction):
    \hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)
A posteriori covariance:
    P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T
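The predict/correct cycle above can be sketched in a few lines of NumPy. This is an illustrative sketch with a made-up constant-velocity tracking example (F, Q, H, R are assumed values, not from the slides):

```python
import numpy as np

def kalman_step(x_post, P_post, F, Q, H, R, y):
    """One predict/correct cycle of the discrete-time Kalman filter summary."""
    # Prediction: x^- = F x^+,  P^- = F P^+ F^T + Q
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    # Gain: K = P^- H^T (H P^- H^T + R)^{-1}
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # Correction: x^+ = x^- + K (y - H x^-)
    x_post = x_prior + K @ (y - H @ x_prior)
    # Posterior covariance (Joseph form, as in the summary)
    I_KH = np.eye(len(x_post)) - K @ H
    P_post = I_KH @ P_prior @ I_KH.T + K @ R @ K.T
    return x_post, P_post

# Made-up example: constant-velocity target with position-only measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

rng = np.random.default_rng(2)
x_true, x_hat, P = np.array([0.0, 1.0]), np.zeros(2), np.eye(2)
for _ in range(300):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = kalman_step(x_hat, P, F, Q, H, R, y)
```

Although only position is measured, the filter also recovers the velocity through the dynamics model, and the covariance settles to a small steady-state value.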


Dynamic Constrained Optimization

Dynamic Constrained Optimization Problem
Suppose the cost function to be minimized is
    J = \phi(x_0) + \sum_{k=0}^{N-1} L_k(x_k, w_k)
subject to the system dynamics for N steps:
    x_{k+1} = F_k x_k + w_k    (k = 0, \ldots, N-1)

Applying Lagrange multipliers, the augmented cost function is
    J_a = \phi(x_0) + \sum_{k=0}^{N-1} [ L_k(x_k, w_k) + \lambda_{k+1}^T (F_k x_k + w_k - x_{k+1}) ]
        = \phi(x_0) + \sum_{k=0}^{N-1} [ L_k(x_k, w_k) + \lambda_{k+1}^T (F_k x_k + w_k) ] - \sum_{k=0}^{N-1} \lambda_{k+1}^T x_{k+1}
        = \phi(x_0) + \sum_{k=0}^{N-1} [ L_k(x_k, w_k) + \lambda_{k+1}^T (F_k x_k + w_k) ] - \sum_{k=0}^{N} \lambda_k^T x_k + \lambda_0^T x_0
Define the Hamiltonian of the system dynamics:
    H_k = L_k(x_k, w_k) + \lambda_{k+1}^T (F_k x_k + w_k)
Therefore, the augmented cost function can be rewritten as
    J_a = \phi(x_0) + \sum_{k=0}^{N-1} (H_k - \lambda_k^T x_k) - \lambda_N^T x_N + \lambda_0^T x_0

The necessary conditions for optimality are
    \partial J_a / \partial x_k = 0        (k = 0, \ldots, N)
    \partial J_a / \partial w_k = 0        (k = 0, \ldots, N-1)
    \partial J_a / \partial \lambda_k = 0  (k = 0, \ldots, N)
The third condition recovers the dynamic constraint, and the first two conditions simplify as follows:
    \partial \phi / \partial x_0 + \lambda_0^T = 0
    \lambda_N^T = 0
    \lambda_k^T = \partial H_k / \partial x_k    (k = 1, \ldots, N-1)
    \partial H_k / \partial w_k = 0              (k = 0, \ldots, N-1)


Problem Statement

Estimation Problem
The system dynamics and measurement model are as follows:
    x_{k+1} = F_k x_k + w_k
    y_k = H_k x_k + v_k
The goal is to estimate a linear combination of the states:
    z_k = L_k x_k

Cost Function
The performance criterion is
    J_1 = \frac{ \sum_{k=0}^{N-1} \| z_k - \hat{z}_k \|^2_{S_k} }
               { \| x_0 - \hat{x}_0 \|^2_{P_0^{-1}} + \sum_{k=0}^{N-1} ( \| w_k \|^2_{Q_k^{-1}} + \| v_k \|^2_{R_k^{-1}} ) }

Direct minimization of J_1 is not tractable; therefore, seek a performance bound such that
    J_1 < 1
which is equivalent to the following:
    - \| x_0 - \hat{x}_0 \|^2_{P_0^{-1}} + \sum_{k=0}^{N-1} [ \| z_k - \hat{z}_k \|^2_{S_k} - ( \| w_k \|^2_{Q_k^{-1}} + \| v_k \|^2_{R_k^{-1}} ) ] < 0
Consider the alternative form of the cost function:
    J = - \| x_0 - \hat{x}_0 \|^2_{P_0^{-1}} + \sum_{k=0}^{N-1} [ \| z_k - \hat{z}_k \|^2_{S_k} - ( \| w_k \|^2_{Q_k^{-1}} + \| v_k \|^2_{R_k^{-1}} ) ]

The Minimax Problem

Seek the estimator that minimizes the worst-case scenario:
    J^* = \min_{\hat{z}_k} \max_{w_k, v_k, x_0} J

Nature can choose the initial condition, the process noise, and the measurement noise to maximize the cost.
We choose the estimator to minimize this worst-case cost.

The minimax problem is equivalent to
    J^* = \min_{\hat{z}_k} \max_{w_k, v_k, x_0} J
        = \min_{\hat{x}_k} \max_{w_k, v_k, x_0} J
        = \min_{\hat{x}_k} \max_{w_k, y_k, x_0} J
The cost function can be transformed to the equivalent form
    J = - \| x_0 - \hat{x}_0 \|^2_{P_0^{-1}} + \sum_{k=0}^{N-1} [ \| x_k - \hat{x}_k \|^2_{\bar{S}_k} - ( \| w_k \|^2_{Q_k^{-1}} + \| y_k - H_k x_k \|^2_{R_k^{-1}} ) ]
      = \phi(x_0) + \sum_{k=0}^{N-1} L_k
where \bar{S}_k = L_k^T S_k L_k. The Hamiltonian of the system is defined as
    H_k = L_k + 2 \lambda_{k+1}^T (F_k x_k + w_k)

Stationarity with respect to x_0 and w_k

The stationary-point conditions are
    \partial \phi(x_0) / \partial x_0 + 2 \lambda_0^T = 0
    2 \lambda_N^T = 0
    \partial H_k / \partial w_k = 0
    2 \lambda_k^T = \partial H_k / \partial x_k

The stationary conditions are
    x_0 = \hat{x}_0 + P_0 \lambda_0
    x_k = \eta_k + P_k \lambda_k
    w_k = Q_k \lambda_{k+1}
    \lambda_N = 0
    \lambda_k = [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} F_k^T \lambda_{k+1}
              + [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} \bar{S}_k (\eta_k - \hat{x}_k)
              + [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} H_k^T R_k^{-1} (y_k - H_k \eta_k)
    P_{k+1} = F_k ( P_k^{-1} - \bar{S}_k + H_k^T R_k^{-1} H_k )^{-1} F_k^T + Q_k
    \eta_0 = \hat{x}_0
    \eta_{k+1} = F_k \eta_k + F_k P_k [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} \bar{S}_k (\eta_k - \hat{x}_k)
               + F_k P_k [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} H_k^T R_k^{-1} (y_k - H_k \eta_k)

Stationarity with respect to y_k and \hat{x}_k

After maximizing with respect to x_0 and w_k, the minimax problem becomes
    J^* = \min_{\hat{x}_k} \max_{y_k} J
Substituting the stationary conditions, the cost function is equivalent to the following quadratic form:
    J = \sum_{k=0}^{N-1} [ (\eta_k - \hat{x}_k)^T ( \bar{S}_k + \bar{S}_k \bar{P}_k \bar{S}_k ) (\eta_k - \hat{x}_k)
        + 2 (\eta_k - \hat{x}_k)^T \bar{S}_k \bar{P}_k H_k^T R_k^{-1} (y_k - H_k \eta_k)
        + (y_k - H_k \eta_k)^T ( R_k^{-1} H_k \bar{P}_k H_k^T R_k^{-1} - R_k^{-1} ) (y_k - H_k \eta_k) ]

Therefore, the stationary conditions are
    \partial J / \partial \hat{x}_k = -2 ( \bar{S}_k + \bar{S}_k \bar{P}_k \bar{S}_k ) (\eta_k - \hat{x}_k) - 2 \bar{S}_k \bar{P}_k H_k^T R_k^{-1} (y_k - H_k \eta_k) = 0
    \partial J / \partial y_k = 2 ( R_k^{-1} H_k \bar{P}_k H_k^T R_k^{-1} - R_k^{-1} ) (y_k - H_k \eta_k) + 2 R_k^{-1} H_k \bar{P}_k \bar{S}_k (\eta_k - \hat{x}_k) = 0
The stationary point is clearly satisfied by the following conditions:
    \hat{x}_k = \eta_k
    y_k = H_k \eta_k
Notice that these conditions imply the cost function is zero,
    J^* = 0
and thus the attenuation condition
    J_1 \le 1
is satisfied.

To ensure that the estimator \hat{x}_k minimizes the cost function requires
    \partial^2 J / \partial \hat{x}_k^2 = 2 ( \bar{S}_k + \bar{S}_k \bar{P}_k \bar{S}_k ) > 0
Since the matrix \bar{S}_k is chosen to always be positive definite, the second-order condition can be reduced to
    \bar{P}_k = ( P_k^{-1} - \bar{S}_k + H_k^T R_k^{-1} H_k )^{-1} > 0
The optimal estimator is then
    \hat{x}_{k+1} = F_k \hat{x}_k + F_k P_k [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} H_k^T R_k^{-1} (y_k - H_k \hat{x}_k)

Suppose the system dynamics and measurement model are
    x_{k+1} = F_k x_k + w_k
    y_k = H_k x_k + v_k
    z_k = L_k x_k
with the performance criterion
    J_1 = \frac{ \sum_{k=0}^{N-1} \| z_k - \hat{z}_k \|^2_{S_k} }
               { \| x_0 - \hat{x}_0 \|^2_{P_0^{-1}} + \sum_{k=0}^{N-1} ( \| w_k \|^2_{Q_k^{-1}} + \| v_k \|^2_{R_k^{-1}} ) }
Then the estimation scheme is
    \bar{S}_k = L_k^T S_k L_k
    K_k = P_k [ I - \bar{S}_k P_k + H_k^T R_k^{-1} H_k P_k ]^{-1} H_k^T R_k^{-1}
    \hat{x}_{k+1} = F_k \hat{x}_k + F_k K_k (y_k - H_k \hat{x}_k)
    P_{k+1} = F_k ( P_k^{-1} - \bar{S}_k + H_k^T R_k^{-1} H_k )^{-1} F_k^T + Q_k
with the positive-definiteness criterion
    P_k^{-1} - \bar{S}_k + H_k^T R_k^{-1} H_k > 0
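The estimation scheme above can be sketched numerically. This is an illustrative NumPy implementation under assumed values for F, Q, H, R, L, and the performance weight S (none of which come from the slides); the positive-definiteness criterion is checked at every step:

```python
import numpy as np

def hinf_step(x_hat, P, F, Q, H, R, S_bar, y):
    """One step of the H-infinity estimation scheme summarized above."""
    n = len(x_hat)
    Rinv = np.linalg.inv(R)
    # Positive-definiteness criterion: P^{-1} - S_bar + H^T R^{-1} H > 0
    M = np.linalg.inv(P) - S_bar + H.T @ Rinv @ H
    assert np.all(np.linalg.eigvalsh(M) > 0), "attenuation condition violated"
    # Gain: K = P [I - S_bar P + H^T R^{-1} H P]^{-1} H^T R^{-1}
    K = P @ np.linalg.inv(np.eye(n) - S_bar @ P + H.T @ Rinv @ H @ P) @ H.T @ Rinv
    # Estimate and Riccati updates
    x_next = F @ x_hat + F @ K @ (y - H @ x_hat)
    P_next = F @ np.linalg.inv(M) @ F.T + Q
    return x_next, P_next

# Made-up example: estimate z_k = L x_k for a stable two-state system.
F = np.array([[0.95, 0.1], [0.0, 0.9]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
L = np.array([[1.0, 1.0]])
S = np.array([[0.01]])                   # mild performance weight
S_bar = L.T @ S @ L                      # S_bar = L^T S_k L_k

rng = np.random.default_rng(4)
x_true, x_hat, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = hinf_step(x_hat, P, F, Q, H, R, S_bar, y)
```

Choosing a larger S tightens the worst-case performance demand; if S is pushed too high, the positive-definiteness criterion fails and the assertion fires, signaling that the requested attenuation level is infeasible.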
