
Observers and Kalman Filters

CS393R: Autonomous Robots

Slides courtesy of Benjamin Kuipers

Good Afternoon, Colleagues

Are there any questions?

Stochastic Models of an Uncertain World

Deterministic model:

$\dot{x} = F(x, u)$
$y = G(x)$

Stochastic model:

$\dot{x} = F(x, u, \epsilon_1)$
$y = G(x, \epsilon_2)$

Actions are uncertain.
Observations are uncertain.
The $\epsilon_i \sim N(0, \sigma_i)$ are random variables.

Observers

$\dot{x} = F(x, u, \epsilon_1)$
$y = G(x, \epsilon_2)$

The state $x$ is unobservable.
The sense vector $y$ provides noisy information about $x$.

$\hat{x} = \mathrm{Obs}(y)$

An observer is a process that uses the sensory history to estimate $\hat{x}$.
Then a control law can be written:

$u = H_i(\hat{x})$

Kalman Filter: Optimal Observer

$\dot{x} = F(x, u, \epsilon_1)$
$y = G(x, \epsilon_2)$

[Block diagram: the input $u$ and noise drive the hidden state $x$; the noisy observation $y$ feeds the observer, which produces the estimate $\hat{x}$.]

Estimates and Uncertainty

Conditional probability density function

Gaussian (Normal) Distribution

Completely described by $N(\mu, \sigma^2)$:
  Mean $\mu$
  Standard deviation $\sigma$, variance $\sigma^2$

$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / 2\sigma^2}$

The Central Limit Theorem

The sum of many random variables
  with the same mean, but
  with arbitrary conditional density functions,
converges to a Gaussian density function.

If a model omits many small unmodeled effects, then the resulting error should converge to a Gaussian density function.

Illustrating the Central Limit Thm

Add 1, 2, 3, 4 variables from the same distribution.
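A minimal simulation sketch of this idea (mine, not from the slides): sum n uniform variables and compare the sample moments with the Gaussian limit; a histogram of `sums` visibly approaches a bell curve as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum n i.i.d. uniform(0, 1) variables; the CLT says the sum tends to a Gaussian.
for n in (1, 2, 3, 4):
    sums = rng.uniform(0.0, 1.0, size=(100_000, n)).sum(axis=1)
    print(f"n={n}: mean={sums.mean():.3f} (expect {n/2:.2f}), "
          f"var={sums.var():.3f} (expect {n/12:.3f})")
```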

Detecting Modeling Error

Every model is incomplete.
  If the omitted factors are all small, the resulting errors should add up to a Gaussian.

If the error between a model and the data is not Gaussian,
  then some omitted factor is not small.
  One should find the dominant source of error and add it to the model.

Estimating a Value

Suppose there is a constant value $x$.
  Distance to wall; angle to wall; etc.

At time $t_1$, observe value $z_1$ with variance $\sigma_1^2$.

The optimal estimate is $\hat{x}(t_1) = z_1$, with variance $\sigma_1^2$.

A Second Observation

At time $t_2$, observe value $z_2$ with variance $\sigma_2^2$.

Merged Evidence

Update Mean and Variance

Take a weighted average of the estimates:

$\hat{x}(t_2) = A z_1 + B z_2, \qquad A + B = 1$

The weights come from the variances.
  Smaller variance = more certainty.

$\hat{x}(t_2) = \left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right) z_1 + \left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right) z_2$

$\frac{1}{\sigma^2(t_2)} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$
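A small sketch of this variance-weighted fusion (my own code; `fuse` is an assumed name):

```python
def fuse(z1, var1, z2, var2):
    """Variance-weighted average of two scalar estimates.

    Each measurement is weighted by the *other* one's variance,
    so the more certain (lower-variance) reading dominates.
    """
    mean = (var2 * z1 + var1 * z2) / (var1 + var2)
    var = 1.0 / (1.0 / var1 + 1.0 / var2)  # precisions add
    return mean, var

# A precise second reading pulls the estimate toward itself:
print(fuse(10.0, 4.0, 12.0, 1.0))  # -> (11.6, 0.8)
```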

From Weighted Average to Predictor-Corrector

Weighted average:

$\hat{x}(t_2) = A z_1 + B z_2 = (1 - K) z_1 + K z_2$

Predictor-corrector:

$\hat{x}(t_2) = z_1 + K (z_2 - z_1) = \hat{x}(t_1) + K (z_2 - \hat{x}(t_1))$

This version can be applied recursively.

Predictor-Corrector

Update the best estimate given new data:

$\hat{x}(t_2) = \hat{x}(t_1) + K(t_2)\,(z_2 - \hat{x}(t_1))$

$K(t_2) = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}$

Update the variance:

$\sigma^2(t_2) = \sigma^2(t_1) - K(t_2)\,\sigma^2(t_1) = (1 - K(t_2))\,\sigma^2(t_1)$
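The recursion is easy to express in code. A sketch, with assumed names, of the static-value predictor-corrector:

```python
def update(x_hat, var, z, var_z):
    """One predictor-corrector step for estimating a constant value."""
    K = var / (var + var_z)          # gain from the two variances
    x_hat = x_hat + K * (z - x_hat)  # correct toward the measurement
    var = (1.0 - K) * var            # uncertainty shrinks
    return x_hat, var

# Fold in a stream of observations (variance 4.0 each):
x_hat, var = 10.0, 1.0  # estimate after the first observation
for z in (12.0, 9.5, 10.8):
    x_hat, var = update(x_hat, var, z, 4.0)
```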

Static to Dynamic

Now suppose $x$ changes according to

$\dot{x} = F(x, u, \epsilon) = u + \epsilon \qquad (\epsilon \sim N(0, \sigma_\epsilon^2))$

Dynamic Prediction

At $t_2$ we know $\hat{x}(t_2)$ and $\sigma^2(t_2)$.

At $t_3^-$, after the change but before an observation:

$\hat{x}(t_3^-) = \hat{x}(t_2) + u\,[t_3 - t_2]$
$\sigma^2(t_3^-) = \sigma^2(t_2) + \sigma_\epsilon^2\,[t_3 - t_2]$

Next, we correct this prediction with the observation at time $t_3$.

Dynamic Correction

At time $t_3$ we observe $z_3$ with variance $\sigma_3^2$.

Combine the prediction with the observation:

$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$

$\sigma^2(t_3) = (1 - K(t_3))\,\sigma^2(t_3^-)$

$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$
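Prediction and correction compose into a complete one-dimensional Kalman filter cycle. A sketch under assumed names (`sigma_eps2` is the process-noise variance rate, `var_z` the measurement variance):

```python
def kalman_1d(x_hat, var, u, dt, z, sigma_eps2, var_z):
    """One predict-correct cycle for the scalar model dx/dt = u + eps."""
    # Predict: integrate the commanded motion; uncertainty grows with time.
    x_pred = x_hat + u * dt
    var_pred = var + sigma_eps2 * dt
    # Correct: blend the prediction with the observation z.
    K = var_pred / (var_pred + var_z)
    x_hat = x_pred + K * (z - x_pred)
    var = (1.0 - K) * var_pred
    return x_hat, var
```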

Qualitative Properties

$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$

$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$

Suppose measurement noise $\sigma_3^2$ is large.
  Then $K(t_3)$ approaches 0, and the measurement will be mostly ignored.

Suppose prediction noise $\sigma^2(t_3^-)$ is large.
  Then $K(t_3)$ approaches 1, and the measurement will dominate the estimate.
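A quick numeric check of the two limits (my own illustration):

```python
def gain(var_pred, var_z):
    return var_pred / (var_pred + var_z)

print(gain(1.0, 100.0))  # noisy sensor: K ~ 0.01, measurement mostly ignored
print(gain(100.0, 1.0))  # noisy prediction: K ~ 0.99, measurement dominates
```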

Kalman Filter

Takes a stream of observations and a dynamical model.

At each step, computes a weighted average between
  the prediction from the dynamical model, and
  the correction from the observation.

The Kalman gain $K(t)$ is the weighting, based on the prediction variance $\sigma^2(t^-)$ and the measurement variance.

With time, $K(t)$ and $\sigma^2(t)$ tend to stabilize.

Simplifications

We have only discussed a one-dimensional system.
  Most applications are higher-dimensional.

We have assumed the state variable is observable.
  In general, sense data give indirect evidence.

$\dot{x} = F(x, u, \epsilon_1) = u + \epsilon_1$
$z = G(x, \epsilon_2) = x + \epsilon_2$

We will discuss the more complex case next.

Up To Higher Dimensions

Our previous Kalman Filter discussion was of a simple one-dimensional model.

Now we go up to higher dimensions:
  State vector: $x \in \mathbb{R}^n$
  Sense vector: $z \in \mathbb{R}^m$
  Motor vector: $u \in \mathbb{R}^l$

First, a little statistics.

Expectations

Let $x$ be a random variable.

The expected value $E[x]$ is the mean:

$E[x] = \int x\, p(x)\, dx \;\approx\; \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i$

The probability-weighted mean of all possible values. The sample mean approaches it.

The expected value of a vector $x$ is by component:

$E[x] = \bar{x} = [\bar{x}_1, \ldots, \bar{x}_n]^T$

Variance and Covariance

The variance is $E[(x - E[x])^2]$:

$\sigma^2 = E[(x - \bar{x})^2] = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2$

The covariance matrix is $E[(x - E[x])(x - E[x])^T]$:

$C_{ij} = \frac{1}{N-1} \sum_{k=1}^{N} (x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)$

Divide by $N-1$ to make the sample variance an unbiased estimator for the population variance.

Covariance Matrix

Along the diagonal, the $C_{ii}$ are variances.
Off-diagonal, the $C_{ij}$ are essentially correlations.

$C = \begin{bmatrix}
C_{1,1} = \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\
C_{2,1} & C_{2,2} = \sigma_2^2 & & \vdots \\
\vdots & & \ddots & \\
C_{N,1} & \cdots & & C_{N,N} = \sigma_N^2
\end{bmatrix}$

Independent Variation

$x$ and $y$ are Gaussian random variables ($N = 100$).

Generated with $\sigma_x = 1$, $\sigma_y = 3$.

Covariance matrix:

$C_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}$
Dependent Variation

$c$ and $d$ are random variables.

Generated with $c = x + y$, $d = x - y$.

Covariance matrix (the off-diagonal covariance is negative here, since $\mathrm{cov}(c, d) = \sigma_x^2 - \sigma_y^2 < 0$):

$C_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}$
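A sketch reproducing this kind of experiment with NumPy (my own; sample values will differ from the slide's):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = rng.normal(0.0, 1.0, N)  # sigma_x = 1
y = rng.normal(0.0, 3.0, N)  # sigma_y = 3

# Independent variables: off-diagonal entries near zero.
print(np.cov(x, y))

# Dependent variables c = x + y, d = x - y: strong negative covariance.
print(np.cov(x + y, x - y))
```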

Discrete Kalman Filter

Estimate the state $x \in \mathbb{R}^n$ of a linear stochastic difference equation

$x_k = A x_{k-1} + B u_{k-1} + w_{k-1}$

  where the process noise $w$ is drawn from $N(0, Q)$, with covariance matrix $Q$,

with a measurement $z \in \mathbb{R}^m$

$z_k = H x_k + v_k$

  where the measurement noise $v$ is drawn from $N(0, R)$, with covariance matrix $R$.

$A, Q$ are $n \times n$. $B$ is $n \times l$. $R$ is $m \times m$. $H$ is $m \times n$.

Estimates and Errors

$\hat{x}_k \in \mathbb{R}^n$ is the estimated state at time step $k$.
$\hat{x}_k^-$ is the estimate after prediction, before observation.

Errors:

$e_k^- = x_k - \hat{x}_k^-$
$e_k = x_k - \hat{x}_k$

Error covariance matrices:

$P_k^- = E[e_k^- (e_k^-)^T]$
$P_k = E[e_k e_k^T]$

The Kalman Filter's task is to update $\hat{x}_k$ and $P_k$.

Time Update (Predictor)

Update the expected value of $x$:

$\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}$

Update the error covariance matrix $P$:

$P_k^- = A P_{k-1} A^T + Q$

Previous statements were simplified versions of the same idea:

$\hat{x}(t_3^-) = \hat{x}(t_2) + u\,[t_3 - t_2]$
$\sigma^2(t_3^-) = \sigma^2(t_2) + \sigma_\epsilon^2\,[t_3 - t_2]$
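A matrix sketch of this predict step (NumPy; names are assumptions, not the slides'):

```python
import numpy as np

def predict(x_hat, P, A, B, u, Q):
    """Time update: propagate the estimate and grow its covariance."""
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred
```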

Measurement Update (Corrector)

Update the expected value:

$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$

  The innovation is $z_k - H \hat{x}_k^-$.

Update the error covariance matrix:

$P_k = (I - K_k H) P_k^-$

Compare with the previous form:

$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$
$\sigma^2(t_3) = (1 - K(t_3))\,\sigma^2(t_3^-)$

The Kalman Gain

The optimal Kalman gain $K_k$ is

$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} = \frac{P_k^- H^T}{H P_k^- H^T + R}$

Compare with the previous form:

$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$
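A matching sketch of the corrector step, gain included (same assumed naming as the predict sketch above):

```python
import numpy as np

def correct(x_pred, P_pred, z, H, R):
    """Measurement update: blend the prediction with observation z."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_hat = x_pred + K @ (z - H @ x_pred)  # correct with the innovation
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_hat, P
```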

Extended Kalman Filter

Suppose the state evolution and measurement equations are non-linear:

$x_k = f(x_{k-1}, u_{k-1}) + w_{k-1}$
$z_k = h(x_k) + v_k$

  The process noise $w$ is drawn from $N(0, Q)$, with covariance matrix $Q$.
  The measurement noise $v$ is drawn from $N(0, R)$, with covariance matrix $R$.

The Jacobian Matrix

For a scalar function $y = f(x)$,

$\Delta y = f'(x)\,\Delta x$

For a vector function $y = f(x)$, $\Delta y = J\,\Delta x$:

$\begin{bmatrix} \Delta y_1 \\ \vdots \\ \Delta y_n \end{bmatrix}
=
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\
\vdots & & \vdots \\
\frac{\partial f_n}{\partial x_1}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x)
\end{bmatrix}
\begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}$
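When an analytic Jacobian is awkward to derive, a finite-difference approximation is a common stand-in (my own sketch; not part of the slides):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate J[i, j] = df_i/dx_j by central differences."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(f(x)).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J
```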

Linearize the Non-Linear

Let $A$ be the Jacobian of $f$ with respect to $x$:

$A_{ij} = \frac{\partial f_i}{\partial x_j}(x_{k-1}, u_{k-1})$

Let $H$ be the Jacobian of $h$ with respect to $x$:

$H_{ij} = \frac{\partial h_i}{\partial x_j}(x_k)$

Then the Kalman Filter equations are almost the same as before!

EKF Update Equations

Predictor step:

$\hat{x}_k^- = f(\hat{x}_{k-1}, u_{k-1})$
$P_k^- = A P_{k-1} A^T + Q$

Kalman gain:

$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$

Corrector step:

$\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-))$
$P_k = (I - K_k H) P_k^-$
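These pieces compose into one EKF cycle. A sketch under assumed names (`f` and `h` are the nonlinear models; `jac_f` and `jac_h` return their Jacobians at the given point):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, jac_f, jac_h, Q, R):
    """One extended Kalman filter cycle: linearize, predict, correct."""
    A = jac_f(x_hat, u)                   # linearize dynamics at the estimate
    x_pred = f(x_hat, u)                  # predictor step
    P_pred = A @ P @ A.T + Q

    H = jac_h(x_pred)                     # linearize the measurement model
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_hat = x_pred + K @ (z - h(x_pred))  # corrector step
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_hat, P
```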
