
System Identification
Satish Nagarajaiah, Prof., CEVE & MEMS, Rice
July 7, 2009
Outline I
1. Introduction
   - Definition
   - Objective
2. Classification
   - Non-parametric Models
   - Parametric Models
3. Least Squares Estimation
4. Recursive Least Squares Estimation
   - Derivation
   - Statistical Analysis of the RLS Estimator
5. Weighted Least Squares
6. Discrete-Time Kalman Filter
   - Features
Outline II
6. Discrete-Time Kalman Filter (continued)
   - Derivations
7. State Space Identification
   - Weighting Sequence Model
   - State-space Observer Model
   - Linear Difference Model
     - ARX Model
     - Pulse Response Model
     - Pseudo-Inverse
   - Physical Interpretation of SVD
     - Approximation Problem
     - Basic Equations
     - Condition Number
   - Eigen Realization Algorithm
Definition

System identification is the process of developing or improving the mathematical representation of a physical system using experimental data. There are three types of identification techniques: modal parameter identification and structural-model parameter identification (both primarily used in structural engineering), and control-model identification (primarily used in mechanical and aerospace systems). The primary objective of system identification is to determine the system matrices $A, B, C, D$ from measured/analyzed data, often contaminated with noise. The modal parameters are computed from the system matrices.
Objective
The main aim of system identification is to determine a mathematical model of a physical/dynamic system from observed data. Six key steps are involved in system identification:

1. Develop an approximate analytical model of the structure.
2. Establish the levels of structural dynamic response that are likely to occur, using the analytical model and the characteristics of anticipated excitation sources.
3. Determine the instrumentation requirements needed to sense the motion with prescribed accuracy and spatial resolution.
4. Perform experiments and record data.
5. Apply system identification techniques to identify the dynamic characteristics, such as system matrices, modal parameters, and excitation and input/output noise characteristics.
6. Refine/update the analytical model based on the identified results.
Parametric and Non-parametric Models

- Parametric Models: Choose the model structure and estimate the model parameters for the best fit.
- Non-parametric Models: The model structure is not specified a priori but is instead determined from data. Non-parametric techniques rely on the cross-correlation function (CCF) $R_{yu}$ / auto-correlation function (ACF) $R_{uu}$ and the spectral density functions $S_{yu}$ / $S_{uu}$ (Fourier transforms of the CCF and ACF) to estimate the transfer function / frequency response function of the model.
Non-parametric Models
Frequency Response Function (FRF):
$$Y(j\omega) = H(j\omega)\,U(j\omega)$$

FRF (non-parametric):
$$H(j\omega) = \frac{S_{yu}(j\omega)}{S_{uu}(j\omega)}$$

Impulse Response Function (IRF):
$$y(t) = \int_0^t h(t-\tau)\,u(\tau)\,d\tau$$

[Note: the IRF and FRF form a Fourier transform pair.]
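As a concrete illustration of the non-parametric approach, the sketch below estimates the FRF from input/output records using Welch-type spectral density estimates. This is a minimal sketch in Python/NumPy (the slides themselves give no code): the simulated system, sampling rate, and segment length are assumptions made for the example only.

```python
import numpy as np
from scipy import signal

# Simulated SISO records (assumed for illustration): a first-order
# discrete system driven by white noise.
rng = np.random.default_rng(0)
fs = 100.0                                  # sampling rate in Hz (assumed)
u = rng.standard_normal(4096)               # input record u(t)
y = signal.lfilter([0.5], [1.0, -0.8], u)   # "measured" output y(t)

# Auto- and cross-spectral densities S_uu and S_yu (Welch estimates)
f, S_uu = signal.welch(u, fs=fs, nperseg=256)
_, S_yu = signal.csd(u, y, fs=fs, nperseg=256)   # cross-spectrum between input and output

# Non-parametric FRF estimate H(jw) = S_yu(jw) / S_uu(jw)
H = S_yu / S_uu
```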
Parametric Models
TF Models (SISO)
$$Y(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}\; U(s)$$
In this model structure, we choose $m$ and $n$ and estimate the parameters $b_0, \ldots, b_m, a_0, \ldots, a_{n-1}$.

Time-domain Models (SISO)
$$\frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} + a_0\,y(t) = b_m\frac{d^m u}{dt^m} + b_{m-1}\frac{d^{m-1} u}{dt^{m-1}} + \cdots + b_1\frac{du}{dt} + b_0\,u(t)$$
Parametric Models
Discrete Time-domain Models (SISO)
$$y(k) + a_1 y(k-1) + \cdots + a_n y(k-n) = b_1 u(k-1) + \cdots + b_m u(k-m)$$

State Space Models (MIMO)
$$x_{n\times 1} = A_{n\times n}\,x_{n\times 1} + B_{n\times m}\,u_{m\times 1}$$
$$y_{r\times 1} = C_{r\times n}\,x_{n\times 1} + D_{r\times m}\,u_{m\times 1}$$
The dimensions $n$, $r$, $m$ are given and the model parameters $A, B, C, D$ are to be estimated.
Parametric Models
Transfer Function Matrix Models (MIMO)
$$Y(s) = \begin{bmatrix} H_{11}(s) & \cdots & H_{1m}(s) \\ \vdots & \ddots & \vdots \\ H_{r1}(s) & \cdots & H_{rm}(s) \end{bmatrix} U(s)$$
which can be written as:
$$Y(s) = H(s)U(s) = \left[ C(sI - A)^{-1}B + D \right] U(s)$$
Parametric Models
System identification methods can be grouped into frequency-domain identification methods and time-domain identification methods. We will focus mainly on discrete-time domain model identification and state-space identification:

1. Discrete Time-domain Models (SISO)
2. State Space Models (MIMO)
Least Squares Estimation
Consider a second-order discrete model of the form,
$$y(k) + a_1 y(k-1) + a_2 y(k-2) = a_3 u(k) + a_4 u(k-1)$$
The objective is to estimate the parameter vector $p^T = [a_1 \;\; a_2 \;\; a_3 \;\; a_4]$ using the vector of input and output measurements. Making the substitution,
$$h^T = [-y(k-1) \;\; -y(k-2) \;\; u(k) \;\; u(k-1)]$$
we can write
$$y(k) = h^T p$$
Least Squares Estimation
Let us say we have $k$ sets of measurements. Then we can write the above equation in matrix form as,
$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & & & \vdots \\ \vdots & & \ddots & \\ h_{k1} & & \cdots & h_{kn} \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}$$
$$y_i = h_i^T p, \quad i = 1, 2, \ldots, k \tag{1}$$
In matrix form, we can write,
$$y = H^T p$$
Least Squares Estimation
In least-squares estimation, we minimize the following performance index:
$$J = \left(y - H^T p\right)^T \left(y - H^T p\right) = y^T y - y^T H^T p - p^T H y + p^T H H^T p \tag{2}$$
Minimizing the performance index in eq. 2 with respect to $p$,
$$\frac{\partial J}{\partial p} = \frac{\partial}{\partial p}\left(y^T y - y^T H^T p - p^T H y + p^T H H^T p\right) = -Hy - Hy + 2HH^T \hat{p} = 0$$
which results in the expression for the parameter estimate as:
$$\hat{p} = \left(HH^T\right)^{-1} H y \tag{3}$$
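A minimal numerical sketch of eq. 3 for the second-order model introduced above. The "true" parameter values and the input record are assumptions made for the example, and np.linalg.lstsq is used in place of forming $(HH^T)^{-1}$ explicitly, for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, a3, a4 = -1.5, 0.7, 1.0, 0.5    # assumed "true" parameters
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
# Simulate y(k) + a1*y(k-1) + a2*y(k-2) = a3*u(k) + a4*u(k-1)
for k in range(2, N):
    y[k] = -a1*y[k-1] - a2*y[k-2] + a3*u[k] + a4*u[k-1]

# Stack the rows h^T = [-y(k-1), -y(k-2), u(k), u(k-1)] into H^T
HT = np.column_stack([-y[1:N-1], -y[0:N-2], u[2:N], u[1:N-1]])
yk = y[2:N]

# Batch least-squares estimate p = (H H^T)^{-1} H y  (solved via lstsq)
p_hat, *_ = np.linalg.lstsq(HT, yk, rcond=None)
print(p_hat)   # approximately [a1, a2, a3, a4]
```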
Derivation
Limitations of Least Squares Estimation

The parameter update law in eq. 3 involves operating in batch mode. For every $(k+1)$-th measurement, the matrix inverse $\left(HH^T\right)^{-1}$ needs to be recalculated. This is a cumbersome operation, and it is best avoided.

In a recursive estimator, there is no need to store all the previous data to compute the present estimate. Let us use the following simplified notation:
$$P_k = \left(HH^T\right)^{-1} \quad \text{and} \quad B_k = Hy$$
Derivation
Hence, the parameter update law in eq. 3 can be written as:
$$\hat{p}_k = P_k B_k$$
In the recursive estimator, the matrices $P_k$, $B_k$ are updated as follows:
$$B_{k+1} = B_k + h_{k+1} y_{k+1} \tag{4}$$
In order to update $P_k$, the following update law is used:
$$P_{k+1} = P_k - \frac{P_k h_{k+1} h_{k+1}^T P_k}{1 + h_{k+1}^T P_k h_{k+1}} \tag{5}$$
Derivation
Note that the update for matrix $P_{k+1}$ does not involve matrix inversion.

The updates for $P_k$, $B_k$ can then be used to update the parameter vector as follows:
$$\hat{p}_{k+1} = P_{k+1} B_{k+1}, \qquad \hat{p}_k = P_k B_k \tag{6}$$
Combining these equations,
$$\hat{p}_{k+1} - \hat{p}_k = P_{k+1} B_{k+1} - P_k B_k$$
Substituting eqs. 4 and 5 in the above equation, we get:
$$\hat{p}_{k+1} = \hat{p}_k + P_k h_{k+1}\left(1 + h_{k+1}^T P_k h_{k+1}\right)^{-1}\left(y_{k+1} - h_{k+1}^T \hat{p}_k\right) \tag{7}$$
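The recursion in eqs. 5 and 7 translates almost line for line into code. A minimal sketch: the randomly generated regressors, the noise level, and the large initial $P$ (which plays the role of a weak prior) are assumptions made for the example.

```python
import numpy as np

def rls_update(p, P, h, y):
    """One RLS step implementing eqs. 5 and 7 (no matrix inversion)."""
    Ph = P @ h
    denom = 1.0 + h @ Ph                 # scalar 1 + h^T P h
    p_new = p + Ph * (y - h @ p) / denom
    P_new = P - np.outer(Ph, Ph) / denom
    return p_new, P_new

# Usage sketch (assumed data): 4 parameters, start from p = 0
rng = np.random.default_rng(2)
p_true = np.array([-1.5, 0.7, 1.0, 0.5])
p_hat = np.zeros(4)
P = 1e6 * np.eye(4)                      # large initial covariance
for _ in range(500):
    h = rng.standard_normal(4)           # regressor h_{k+1}
    y = h @ p_true + 0.01 * rng.standard_normal()
    p_hat, P = rls_update(p_hat, P, h, y)
print(p_hat)                             # converges toward p_true
```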
Statistical Analysis
Consider the scalar form of the equation again:
$$y_i = h_i^T p, \quad i = 1, 2, \ldots, k$$
In the presence of measurement noise, it becomes,
$$y_i = h_i^T p + n_i, \quad i = 1, 2, \ldots, k$$
with the following assumptions:

1. The average value of the noise is zero, that is, $E(n_i) = 0$, where $E$ is the expectation operator.
2. Noise samples are uncorrelated, that is, $E(n_i n_j) = E(n_i)E(n_j) = 0$ for $i \neq j$.
3. $E(n_i^2) = r$, the covariance of the noise.
Statistical Analysis
Recalling eq. 6:
$$\hat{p}_k = P_k B_k \tag{8}$$
This can be expanded as:
$$\hat{p}_k = \left(\sum_{i=1}^{k} h_i h_i^T\right)^{-1} \sum_{i=1}^{k} h_i y_i \tag{9}$$
Taking $E(\cdot)$ on both sides, we get,
$$E(\hat{p}_k) = E(p) = p \tag{10}$$
This makes the estimator an unbiased estimator; that is, the expected value of the estimate is equal to that of the quantity being estimated.
Statistical Analysis
Now, let us look at the covariance of the error,
$$\mathrm{Cov} = E\left[\left(\hat{p}_k - p_k\right)\left(\hat{p}_k - p_k\right)^T\right] \tag{11}$$
which, upon simplification, gives
$$\mathrm{Cov} = P_k\,r \tag{12}$$
It can be shown that $P_k$ decreases as $k$ increases. Hence, as more measurements become available, the error reduces and the estimate converges to the true value of $p$. Hence, this is known as a consistent estimator.
Weighted Least Squares
Extension of the RLS Method

- The scalar formulation can be extended to a MIMO (multi-input multi-output) system.
- A weighting matrix is introduced to emphasize the relative importance of one parameter over another.
- Consider eq. 1. Extending this to the MIMO case and including measurement noise,
$$y_i = H_i^T p + n_i, \quad i = 1, 2, \ldots, k$$
where $y_i$ is $l \times 1$, $H_i$ is $n \times l$, $p$ is $n \times 1$, and $n_i$ is $l \times 1$.
Weighted Least Squares
The performance index $J$ is defined by,
$$J = \sum_{i=1}^{k}\left(y_i - H_i^T p\right)^T\left(y_i - H_i^T p\right)$$
Minimizing $J$ with respect to $p$, we get,
$$\hat{p} = \left(\sum_{i=1}^{k} H_i H_i^T\right)^{-1} \sum_{i=1}^{k} H_i y_i$$
The above equation is a batch estimator. The recursive LS estimator can be obtained by proceeding the same way as was done for the scalar case. Defining,
$$P_k = \left(\sum_{i=1}^{k} H_i H_i^T\right)^{-1}, \qquad B_k = \sum_{i=1}^{k} H_i y_i \tag{13}$$
Weighted Least Squares
The parameter update rule is given by,
$$\hat{p}_{k+1} = \hat{p}_k + P_k H_{k+1}\left(y_{k+1} - H_{k+1}^T \hat{p}_k\right)$$
Now, if we introduce a weighting matrix $W$ into the performance index, we get
$$J = \sum_{i=1}^{k}\left(y_i - H_i^T p\right)^T W \left(y_i - H_i^T p\right) \tag{14}$$
The minimization of eq. 14 leads to
$$\hat{p} = \left(\sum_{i=1}^{k} H_i W H_i^T\right)^{-1} \sum_{i=1}^{k} H_i W y_i$$
Weighted Least Squares
Once again, defining
$$P_k = \left(\sum_{i=1}^{k} H_i W H_i^T\right)^{-1}, \qquad B_k = \sum_{i=1}^{k} H_i W y_i \tag{15}$$
the recursive relationships are given by,
$$\hat{p}_{k+1} = \hat{p}_k + P_{k+1} H_{k+1} W \left(y_{k+1} - H_{k+1}^T \hat{p}_k\right) \tag{16}$$
and
$$P_{k+1} = P_k - P_k H_{k+1}\left(W^{-1} + H_{k+1}^T P_k H_{k+1}\right)^{-1} H_{k+1}^T P_k \tag{17}$$
Assuming that the noise samples are uncorrelated, that is,
$$E\left(n_i n_j^T\right) = \begin{cases} 0 & i \neq j \\ R & i = j \end{cases}$$
it can be shown that choosing $W = R^{-1}$ produces the minimum-covariance estimator. In other words, the estimation error is minimized.
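A minimal sketch of the recursive WLS update in eqs. 16 and 17, with $W = R^{-1}$ as recommended above. The dimensions ($n = 3$, $l = 2$) and the simulated data are assumptions made for the example.

```python
import numpy as np

def wls_update(p, P, H, y, W):
    """One recursive WLS step implementing eqs. 16 and 17.
    H is n x l (regressor block), y is l x 1, W is l x l."""
    # Covariance update, eq. 17
    S = np.linalg.inv(np.linalg.inv(W) + H.T @ P @ H)   # l x l
    P_new = P - P @ H @ S @ H.T @ P
    # Parameter update, eq. 16
    p_new = p + P_new @ H @ W @ (y - H.T @ p)
    return p_new, P_new

# Usage sketch with assumed dimensions and W = R^{-1}
rng = np.random.default_rng(3)
n, l = 3, 2
p_true = np.array([1.0, -2.0, 0.5])
R = 0.01 * np.eye(l)
W = np.linalg.inv(R)                     # minimum-covariance choice
p_hat, P = np.zeros(n), 1e6 * np.eye(n)
for _ in range(300):
    H = rng.standard_normal((n, l))
    y = H.T @ p_true + rng.multivariate_normal(np.zeros(l), R)
    p_hat, P = wls_update(p_hat, P, H, y, W)
print(p_hat)                             # converges toward p_true
```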
Discrete-Time Kalman Filter
- The Kalman filter is the most widely used state-estimation tool for control and identification.
- LS, RLS, and WLS deal with the estimation of system parameters.
- The Kalman filter deals with the estimation of the states of a dynamical system.
Discrete-Time Kalman Filter
Consider the linear discrete system given by,
$$x_k = A x_{k-1} + G w_{k-1}$$
$$y_k = H^T x_k + n_k$$
Note: the parameter vector $p$ is replaced by $x$, consistent with the terminology we have adopted for representing states.

- $w_k$ is $n \times 1$ process noise with $E(\cdot) = 0$ and $\mathrm{COV}(\cdot) = Q$
- $x_k$ is the $n \times 1$ state vector
- $A$ is the state matrix, assumed to be known
- $n_k$ is an $l \times 1$ vector of output noise with $E(\cdot) = 0$ and $\mathrm{COV}(\cdot) = R$
- $y_k$ is an $l \times 1$ vector of measurements
- $G$ is $n \times n$ and $H$ is $n \times l$ (so that $H^T x_k$ is $l \times 1$); both are assumed to be known

The objective is to estimate the states $x_k$ based on $k$ observations of $y$. A recursive filter is used for this purpose; this recursive filter is called the Kalman filter.
Fundamental difference between WLS for the dynamic and non-dynamic cases

- In the non-dynamic case, at time $t_{k-1}$, an estimate $\hat{x}_{k-1}$ needs to be produced and the covariance estimate $P_{k-1}$ needs to be updated. These quantities do not change between $t_{k-1}$ and $t_k$ because $x_{k-1} = x_k$.
- In the dynamic case, $x_{k-1} \neq x_k$, since the state evolves between the time steps $k-1$ and $k$. That means a prediction is needed of what happens to the state estimates and the covariance estimates between measurements.

Recall the WLS estimator in eq. 16 and eq. 17:
$$\hat{x}_k = \hat{x}_{k-1} + P_k H_k W \left(y_k - H_k^T \hat{x}_{k-1}\right)$$
$$P_k = P_{k-1} - P_{k-1} H_k \left(W^{-1} + H_k^T P_{k-1} H_k\right)^{-1} H_k^T P_{k-1}$$
In this estimator we cannot simply replace $\hat{x}_{k-1}$ with $\hat{x}_{k-1|k-1}$, as $x_k$ is changing between $t_{k-1}$ and $t_k$. The same applies to $P_{k-1}$.
Discrete-Time Kalman Filter
Consider the state estimate equation. If we know the state estimate based on $k-1$ measurements, $\hat{x}_{k-1|k-1}$, and the state matrix $A$, then we can predict the quantity $\hat{x}_{k|k-1}$ using the relationship,
$$\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1}$$
We can write the state estimate equation as,
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k} H R^{-1}\left(y_k - H^T \hat{x}_{k|k-1}\right) \tag{18}$$
The above equation assumes that the weighting matrix is $W = R^{-1}$.
Similarly, it can be shown that the covariance estimate is,
$$P_{k|k} = P_{k|k-1} - P_{k|k-1} H \left(R + H^T P_{k|k-1} H\right)^{-1} H^T P_{k|k-1}$$
Discrete-Time Kalman Filter
Note that the matrix $H$ is constant. The quantity $P_{k|k-1}$ can be calculated as,
$$P_{k|k-1} = E\left[\left(x_k - \hat{x}_{k|k-1}\right)\left(x_k - \hat{x}_{k|k-1}\right)^T\right] = A P_{k-1|k-1} A^T + GQG^T$$
In summary, the following steps form the discrete-time Kalman filter (implemented in the sketch below):

1. Covariance prediction: $P_{k|k-1} = A P_{k-1|k-1} A^T + GQG^T$
2. State prediction: $\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1}$
3. Covariance estimate: $P_{k|k} = P_{k|k-1} - P_{k|k-1} H \left(R + H^T P_{k|k-1} H\right)^{-1} H^T P_{k|k-1}$
4. State estimate: $\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k} H R^{-1}\left(y_k - H^T \hat{x}_{k|k-1}\right)$
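The four steps above map directly onto code. A minimal sketch, assuming the slides' convention $y_k = H^T x_k$ (so $H$ is $n \times l$); the example system matrices are illustrative only. Note that $P_{k|k} H R^{-1}$ equals the usual Kalman gain $P_{k|k-1} H (R + H^T P_{k|k-1} H)^{-1}$, so step 4 is written exactly as in eq. 18.

```python
import numpy as np

def kf_step(x_hat, P, y, A, G, H, Q, R):
    """One discrete-time Kalman filter cycle (steps 1-4 above).
    H is n x l, matching the slides' y_k = H^T x_k convention."""
    # Steps 1-2: prediction
    P_pred = A @ P @ A.T + G @ Q @ G.T
    x_pred = A @ x_hat
    # Step 3: covariance estimate
    S = R + H.T @ P_pred @ H
    P_new = P_pred - P_pred @ H @ np.linalg.inv(S) @ H.T @ P_pred
    # Step 4: state estimate (gain written as P_{k|k} H R^{-1}, as in eq. 18)
    x_new = x_pred + P_new @ H @ np.linalg.inv(R) @ (y - H.T @ x_pred)
    return x_new, P_new

# Usage sketch (assumed scalar-output, 2-state system)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.eye(2); H = np.array([[1.0], [0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
x_hat, P = np.zeros(2), np.eye(2)
y = np.array([0.3])                      # one measurement
x_hat, P = kf_step(x_hat, P, y, A, G, H, Q, R)
```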
Discrete-Time Kalman Filter
If an input is present, such that the equations are of the form,
$$x_k = A x_{k-1} + B u_{k-1} + G w_{k-1}$$
$$y_k = H^T x_k + n_k$$
then the state estimate becomes,
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + B u_{k-1} + P_{k|k} H R^{-1}\left(y_k - H^T \hat{x}_{k|k-1}\right)$$
Note: do not confuse the input matrix $B$ with $B_k$!
State Space Identification

The objective is to identify the state matrices $A$, $B$ and $C$. The general state-space description of a dynamical system is given by:
$$\dot{x}(t) = A_c x(t) + B_c u(t)$$
$$y(t) = C x(t) + D u(t) \tag{19}$$
for a system of order $n$ with $r$ inputs and $q$ outputs.

The discrete representation of the same system is given by:
$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k) \tag{20}$$
Note the distinction in the state matrices between the continuous and discrete versions!
Weighting Sequence Model
Representing the output as a weighted sequence of inputs, start from the initial conditions $x(0)$:
$$x(0) = 0; \qquad y(0) = C x(0) + D u(0)$$
$$x(1) = A x(0) + B u(0); \qquad y(1) = C x(1) + D u(1)$$
$$x(2) = A x(1) + B u(1); \qquad y(2) = C x(2) + D u(2)$$
If $x(0)$ is zero, or $k$ is sufficiently large so that $A^k \approx 0$ (stable system with damping), then
$$y(k) = CB\,u(k-1) + \cdots + CA^{k-1}B\,u(0) + D\,u(k) \tag{21}$$
$$y(k) = \sum_{i=1}^{k} CA^{i-1}B\,u(k-i) + D\,u(k) \tag{22}$$
Weighting Sequence Model
- Eq. 22 is known as the weighting-sequence model; it does not involve any state measurements and depends only on inputs.
- The output $y(k)$ is a weighted sum of the input values $u(0), u(1), \ldots, u(k)$.
- The weights $CB, CAB, CA^2B, \ldots$ are called Markov parameters (computed in the sketch below).
- Markov parameters are invariant to state transformations.
- Since the Markov parameters are the pulse responses of the system, they must be unique for a given system.
- Note that the input-output description in eq. 22 is valid only under zero initial conditions (steady state). It is not applicable if transient effects are present in the system.
- In this model, there is no need to consider the exact nature of the state equations.
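A minimal sketch of eq. 22: computing the Markov parameters $D, CB, CAB, \ldots$ from a given realization and reconstructing one output sample as the weighted sum of past inputs. The example system matrices are assumptions made for illustration only.

```python
import numpy as np

def markov_parameters(A, B, C, D, k):
    """System Markov parameters [D, CB, CAB, ..., CA^{k-1}B] (eq. 22 weights)."""
    Y = [D]
    Ai_B = B
    for _ in range(k):
        Y.append(C @ Ai_B)
        Ai_B = A @ Ai_B
    return Y

# Usage sketch (assumed 2-state SISO system, zero initial conditions)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]]); C = np.array([[1.0, 0.0]]); D = np.array([[0.0]])
Y = markov_parameters(A, B, C, D, k=20)
u = np.random.default_rng(4).standard_normal(21)
# y(20) = D u(20) + sum_{i=1}^{20} C A^{i-1} B u(20-i), per eq. 22
y20 = sum(Y[i] * u[20 - i] for i in range(21))
```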
State-space Observer Model
- If the system is asymptotically stable, then there are only a finite number of steps in the weighting-sequence model.
- However, for lightly damped systems, the number of terms in the weighting-sequence model can be too large.
- Under these conditions, the state-space observer model is advantageous.
- Consider the state-space model:
$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k)$$
State-space Observer Model
Add and subtract the term $Gy(k)$ in the state equation:
$$x(k+1) = A x(k) + B u(k) + G y(k) - G y(k)$$
$$y(k) = C x(k) + D u(k)$$
so that
$$x(k+1) = \bar{A} x(k) + \bar{B} v(k)$$
$$y(k) = C x(k) + D u(k)$$
where,
$$\bar{A} = A + GC, \qquad \bar{B} = \left[\,B + GD \;\; -G\,\right]$$
and
$$v(k) = \begin{bmatrix} u(k) \\ y(k) \end{bmatrix}$$
State-space Observer Model
- Continuing from the previous development, the objective is to find $G$ so that $\bar{A} = A + GC$ is asymptotically stable.
- The weighting-sequence model in terms of the observer Markov parameters (from eq. 22) is:
$$y(k) = \sum_{i=1}^{k} C\bar{A}^{i-1}\bar{B}\,v(k-i) + D\,u(k) \tag{23}$$
where $C\bar{A}^{k-1}\bar{B}$ are known as the observer Markov parameters. If $G$ is chosen appropriately, then $\bar{A}^p = 0$ for finite $p$.
Linear Difference Model

Eq. 23 can be written as (proceeding the same way as for the weighting-sequence description):
$$y(k) = \sum_{i=1}^{k} C\bar{A}^{i-1}(B + GD)\,u(k-i) - \sum_{i=1}^{k} C\bar{A}^{i-1}G\,y(k-i) + D\,u(k)$$
which can be written as:
$$y(k) + \sum_{i=1}^{k} \bar{Y}^{(2)}_i\,y(k-i) = \sum_{i=1}^{k} \bar{Y}^{(1)}_i\,u(k-i) + D\,u(k) \tag{24}$$
where
$$\bar{Y}^{(1)}_i = C\bar{A}^{i-1}(B + GD) \quad \text{and} \quad \bar{Y}^{(2)}_i = C\bar{A}^{i-1}G$$
Eq. 24 is commonly referred to as the ARX (AutoRegressive with eXogenous input) model.
Linear Difference Model

- The models discussed so far (weighting sequence, ARX, etc.) are related in terms of the system matrices $A$, $B$, $C$ and $D$.
- If these matrices are known, all the models describing the IO relationships can be derived.
- The system Markov parameters and the observer Markov parameters play an important role in system identification using IO descriptions.
- Starting from the initial conditions $x(0)$, we get:
$$x(l-1) = \sum_{i=1}^{l-1} A^{i-1}B\,u(l-1-i)$$
$$y(l-1) = \sum_{i=1}^{l-1} CA^{i-1}B\,u(l-1-i) + D\,u(l-1)$$
Linear Difference Model

which can be written as,
$$\begin{bmatrix} y(0) & y(1) & \ldots & y(l-1) \end{bmatrix} = \begin{bmatrix} D & CB & \ldots & CA^{l-2}B \end{bmatrix} \begin{bmatrix} u(0) & u(1) & u(2) & \ldots & u(l-1) \\ & u(0) & u(1) & \ldots & u(l-2) \\ & & u(0) & \ldots & u(l-3) \\ & & & \ddots & \vdots \\ & & & & u(0) \end{bmatrix}$$
In compact form,
$$Y_{q\times l} = P_{q\times rl}\,V_{rl\times l} \tag{25}$$
Hence,
$$P = YV^{+} \tag{26}$$
where $V^{+}$ is called the pseudo-inverse of the matrix $V$. The matrix $V$ becomes square in the case of a single-input system. ARX models can be expressed in this form.
Linear Difference Model: ARX Model

Consider the ARX model given in eq. 24. This can be written in a slightly modified form as:
$$y(k) + \alpha_1 y(k-1) + \cdots + \alpha_p y(k-p) = \beta_0 u(k) + \beta_1 u(k-1) + \cdots + \beta_p u(k-p) \tag{27}$$
where $p$ indicates the model order. This can be rearranged as:
$$y(k) = -\alpha_1 y(k-1) - \cdots - \alpha_p y(k-p) + \beta_0 u(k) + \beta_1 u(k-1) + \cdots + \beta_p u(k-p) \tag{28}$$
which means that the output at any step $k$, $y(k)$, can be expressed in terms of the $p$ previous output and input measurements, i.e., $y(k-1), \ldots, y(k-p)$ and $u(k-1), \ldots, u(k-p)$.
Linear Difference Model: ARX Model

Let us define a vector $v(k)$ as,
$$v(k) = \begin{bmatrix} y(k) \\ u(k) \end{bmatrix}, \quad k = 1, 2, \ldots, l$$
where $l$ is the length of the data. Eq. 28 can be written as,
$$[\,y_0 \;\; y\,] = \Theta\,[\,V_0 \;\; V\,] \tag{29}$$
where,
$$y_0 = [\,y(1) \;\; y(2) \;\; \cdots \;\; y(p)\,]$$
$$y = [\,y(p+1) \;\; y(p+2) \;\; \cdots \;\; y(l)\,]$$
$$\Theta = [\,\beta_0 \;\; (\beta_1 \;\; {-\alpha_1}) \;\; \cdots \;\; (\beta_{p-1} \;\; {-\alpha_{p-1}}) \;\; (\beta_p \;\; {-\alpha_p})\,]$$
Linear Difference Model: ARX Model

$$V_0 = \begin{bmatrix} u(1) & u(2) & \cdots & u(p) \\ v(0) & v(1) & \cdots & v(p-1) \\ \vdots & \vdots & \ddots & \vdots \\ v(2-p) & v(3-p) & \cdots & v(1) \\ v(1-p) & v(2-p) & \cdots & v(0) \end{bmatrix}$$

$$V = \begin{bmatrix} u(p+1) & u(p+2) & \cdots & u(l) \\ v(p) & v(p+1) & \cdots & v(l-1) \\ \vdots & \vdots & \ddots & \vdots \\ v(2) & v(3) & \cdots & v(l-p+1) \\ v(1) & v(2) & \cdots & v(l-p) \end{bmatrix}$$

The parameters can then be solved as:
$$\Theta = [\,y_0 \;\; y\,]\,[\,V_0 \;\; V\,]^{+} \tag{30}$$
If the system does not start from rest, the quantities $y_0$ and $V_0$ are usually unknown, in which case the parameters are calculated as:
$$\Theta = y\,V^{+} \tag{31}$$
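A minimal SISO sketch of eq. 31: stacking the data matrix $V$ from $v(k) = [y(k);\,u(k)]$ blocks and solving $\Theta = yV^{+}$ with the pseudo-inverse. The simulated ARX(2) system is an assumption for the example, and the system is assumed to start from rest.

```python
import numpy as np

def fit_arx(u, y, p):
    """Estimate ARX parameters via eq. 31, Theta = y V^+ (SISO,
    system assumed to start from rest)."""
    l = len(y)
    rows = [u[p:l]]                           # first block row: u(k)
    for i in range(1, p + 1):                 # v(k-i) = [y(k-i); u(k-i)] blocks
        rows.append(y[p - i:l - i])
        rows.append(u[p - i:l - i])
    V = np.vstack(rows)
    theta = y[p:l] @ np.linalg.pinv(V)        # eq. 31
    return theta                              # [beta0, -alpha1, beta1, ...]

# Usage sketch on data from an assumed ARX(2) system
rng = np.random.default_rng(5)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.8*u[k] + 0.3*u[k-1]
theta = fit_arx(u, y, p=2)
```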
Linear Difference Model: Pulse Response Model

Given,
$$y(k) = D\,u(k) + CB\,u(k-1) + CAB\,u(k-2) + \cdots + CA^{p-1}B\,u(k-p) \tag{32}$$
find $D, CB, CAB, \ldots, CA^{p-1}B$ using,
$$\begin{bmatrix} y(k) & y(k+1) & \ldots & y(k+l-1) \end{bmatrix} = \begin{bmatrix} D & CB & \ldots & CA^{p-1}B \end{bmatrix} \begin{bmatrix} u(k) & u(k+1) & \cdots & u(k+l-1) \\ u(k-1) & u(k) & \cdots & u(k+l-2) \\ u(k-2) & u(k-1) & \cdots & u(k+l-3) \\ \vdots & \vdots & \ddots & \vdots \\ u(k-p) & u(k-p+1) & \cdots & u(k+l-1-p) \end{bmatrix}$$
In compact form,
$$Y_{q\times l} = P_{q\times r(p+1)}\,V_{r(p+1)\times l} \tag{33}$$
where $q$ is the number of outputs, $r$ the number of inputs, and $l$ the length of the data. Hence,
$$P = YV^{+} \tag{34}$$
In Matlab, you can compute the pseudo-inverse through the command pinv(V).
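A minimal SISO sketch of eq. 34, here in Python rather than Matlab, recovering the pulse-response sequence $[D, CB, CAB, \ldots]$ from input/output records. Zero initial conditions are assumed, as eq. 32 requires; the function name is illustrative.

```python
import numpy as np

def pulse_response_markov(u, y, p):
    """Recover [D, CB, ..., CA^{p-1}B] from SISO records via P = Y V^+ (eq. 34).
    Zero initial conditions assumed, as eq. 32 requires."""
    l = len(y) - p                            # usable columns, starting at k = p
    V = np.vstack([u[p - i:p - i + l] for i in range(p + 1)])
    Y = y[p:p + l][None, :]                   # row vector of outputs
    P = Y @ np.linalg.pinv(V)                 # eq. 34
    return P.ravel()                          # [D, CB, CAB, ...]
```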
Linear Difference Model: Pseudo-Inverse

Say $A_{m\times n} X_{n\times 1} = b_{m\times 1}$, so that $X = A^{+}b$: $m$ equations in $n$ unknowns.

- It has a unique solution (consistent) if: $\mathrm{Rank}[A, b] = \mathrm{Rank}(A) = n$
- It has an infinite number of solutions (fewer linearly independent equations than unknowns) if: $\mathrm{Rank}[A, b] = \mathrm{Rank}(A) < n$
- It has no solution (inconsistent) if: $\mathrm{Rank}[A, b] > \mathrm{Rank}(A)$

Note that $[A, b]$ is an augmented matrix. Rank is the number of linearly independent columns or rows. Due to the presence of noise, system identification mostly produces a set of inconsistent equations. These can be dealt with using what is known as the Singular Value Decomposition (SVD).
Physical Interpretation of SVD: Approximation Problem
Let $A_{n\times m}$, where $n \leq m$ and $\mathrm{rank}(A) = n$. Then find a matrix $X_{n\times m}$ with $\mathrm{rank}(X) = k < n$ such that $\|A - X\|_2$ is minimized; for any such rank-$k$ matrix, $\|A - X\|_2 \geq \sigma_{k+1}(A)$, and the minimizer achieves this bound.

SVD addresses the question of rank and handles non-square matrices automatically:

1. If the system has a unique solution, SVD provides this unique solution.
2. For infinite solutions, SVD provides the solution with minimum norm.
3. When there is no solution, SVD provides a solution which minimizes the error.

Items 2 and 3 are called least-squares solutions.
Physical Interpretation of SVD: Basic Equations
If $A$ is $m \times n$, then there exist two orthonormal matrices $U$ ($m \times m$) and $V$ ($n \times n$) such that
$$A_{m\times n} = U_{m\times m}\,\Sigma_{m\times n}\,V^T_{n\times n} \tag{35}$$
where $\Sigma$ is a matrix with the same dimensions as $A$, but diagonal. The scalar values $\sigma_i$ are the singular values of $A$, with
$$\sigma_1 \geq \sigma_2 \geq \sigma_3 \geq \cdots \geq \sigma_k > 0 \quad \text{and} \quad \sigma_{k+1} = \sigma_{k+2} = \cdots = 0$$

Example: Let $\Sigma = [1,\; 0.3,\; 0.1,\; 0.0001,\; 10^{-12},\; 0,\; 0]$. Then strong rank = 3, weak rank = 4, very weak rank = 5.
Physical Interpretation of SVD: Basic Equations
- The nonzero singular values are unique, but $U$ and $V$ are not.
- $U$ and $V$ are square matrices.
- The columns of $U$ are called the left singular vectors and the columns of $V$ are called the right singular vectors of $A$.
- Since $U$ and $V$ are orthonormal matrices, they obey the relationships,
$$U^T U = I_{m\times m} = U^{-1}U$$
$$V^T V = I_{n\times n} = V^{-1}V \tag{36}$$
- From eq. 35, if $A = U\Sigma V^T$ then,
$$\Sigma = U^T A V, \qquad \Sigma_{m\times n} = \begin{bmatrix} \Sigma_{k\times k} & 0_{k\times(n-k)} \\ 0_{(m-k)\times k} & 0_{(m-k)\times(n-k)} \end{bmatrix}$$
Physical Interpretation of SVD: Basic Equations
- SVD is closely related to the eigen-solution of the symmetric positive-definite matrices $AA^T$ and $A^T A$:
$$A = U\Sigma V^T, \qquad A^T = V\Sigma^T U^T$$
- Hence, the non-zero singular values of $A$ are the positive square roots of the non-zero eigenvalues of $A^T A$ or $AA^T$.
- The columns of $U$ are the eigenvectors corresponding to the eigenvalues of $AA^T$, and the columns of $V$ are the eigenvectors corresponding to the eigenvalues of $A^T A$.
- If $A$ consists of complex elements, then the transpose is replaced by the complex-conjugate transpose.
- The definitions of condition number and rank are closely related to the singular values.
Physical Interpretation of SVD: Condition Number
- Rank: the rank of a matrix is equal to the number of non-zero singular values. This is the most reliable method of rank determination. Typically, a rank tolerance equal to the square of the machine precision is chosen, and the singular values above it are counted to determine the rank.
- In order to calculate the pseudo-inverse of matrix $A$, denoted by $A^{+}$, using the SVD,
$$A^{+} = V_1\,\Sigma_1^{-1}\,U_1^T = V_1\,\mathrm{diag}\!\left[\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \ldots, \frac{1}{\sigma_k}\right] U_1^T \tag{37}$$
where,
$$A = U\Sigma V^T = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}$$
and
$$A = U_1\,\Sigma_1\,V_1^T$$
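A minimal sketch of eq. 37: a pseudo-inverse built from the truncated SVD factors $U_1$, $\Sigma_1$, $V_1$. The default rank-tolerance rule used below is a common numerical choice, not the one stated on the slide, and is an assumption of the example.

```python
import numpy as np

def svd_pinv(A, tol=None):
    """Pseudo-inverse via eq. 37: A+ = V1 diag(1/sigma_i) U1^T,
    keeping only singular values above a rank tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if tol is None:
        # Assumed tolerance rule (common default, not the slide's choice)
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    k = int(np.sum(s > tol))                  # numerical rank
    U1, s1, V1 = U[:, :k], s[:k], Vt[:k, :].T
    return V1 @ np.diag(1.0 / s1) @ U1.T

# Sanity check against NumPy's built-in pinv
A = np.random.default_rng(6).standard_normal((5, 3))
print(np.allclose(svd_pinv(A), np.linalg.pinv(A)))
```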
Eigen Realization Algorithm
Given the pulse-response histories (system Markov parameters), ERA is used to extract the state-space model of the system. Define the Markov parameters as follows:
$$Y_0 = D, \quad Y_1 = CB, \quad Y_2 = CAB, \quad Y_3 = CA^2B, \quad \ldots, \quad Y_k = CA^{k-1}B \tag{38}$$
Eigen Realization Algorithm
Start with a generalized $\alpha m \times \beta r$ Hankel matrix ($m$ outputs, $r$ inputs; $\alpha$, $\beta$ are integers):
$$H(k-1) = \begin{bmatrix} Y_k & Y_{k+1} & \cdots & Y_{k+\beta-1} \\ Y_{k+1} & Y_{k+2} & \cdots & Y_{k+\beta} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{k+\alpha-1} & Y_{k+\alpha} & \cdots & Y_{k+\alpha+\beta-2} \end{bmatrix}$$
For the case when $k = 1$,
$$H(0) = \begin{bmatrix} Y_1 & Y_2 & \cdots & Y_\beta \\ Y_2 & Y_3 & \cdots & Y_{1+\beta} \\ \vdots & \vdots & \ddots & \vdots \\ Y_\alpha & Y_{1+\alpha} & \cdots & Y_{\alpha+\beta-1} \end{bmatrix}$$
Eigen Realization Algorithm
- If $\alpha \geq n$ and $\beta \geq n$, the matrix $H(k-1)$ is of rank $n$.
- Substituting the Markov parameters from eq. 38 into $H(k-1)$, we can factorize the Hankel matrix as:
$$H(k-1) = P_\alpha\,A^{k-1}\,Q_\beta \tag{39}$$
- ERA starts with the SVD of the Hankel matrix,
$$H(0) = R\,\Sigma\,S^T \tag{40}$$
where the columns of $R$ and $S$ are orthonormal and $\Sigma$ is,
$$\Sigma = \begin{bmatrix} \Sigma_n & 0 \\ 0 & 0 \end{bmatrix}$$
in which the 0s are zero matrices of appropriate dimensions, and
$$\Sigma_n = \mathrm{diag}[\sigma_1, \sigma_2, \ldots, \sigma_i, \sigma_{i+1}, \ldots, \sigma_n], \qquad \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_i \geq \cdots \geq \sigma_n > 0$$
Eigen Realization Algorithm
Let $R_n$ and $S_n$ be the matrices formed by the first $n$ columns of $R$ and $S$, respectively. Then,
$$H(0) = R_n\,\Sigma_n\,S_n^T = \left[R_n\,\Sigma_n^{1/2}\right]\left[\Sigma_n^{1/2}\,S_n^T\right] \tag{41}$$
and the following relationships hold: $R_n^T R_n = S_n^T S_n = I_n$.
Now, examining eq. 39 for $k = 1$, that is,
$$H(0) = P_\alpha\,Q_\beta \tag{42}$$
Equating eq. 42 and eq. 41, we get,
$$P_\alpha = R_n\,\Sigma_n^{1/2}, \qquad Q_\beta = \Sigma_n^{1/2}\,S_n^T \tag{43}$$
That means $B$ = the first $r$ columns of $Q_\beta$, $C$ = the first $m$ rows of $P_\alpha$, and $D = Y_0$.
Eigen Realization Algorithm
In order to determine the state matrix $A$, we start with,
$$H(1) = \begin{bmatrix} Y_2 & Y_3 & \cdots & Y_{\beta+1} \\ Y_3 & Y_4 & \cdots & Y_{\beta+2} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{\alpha+1} & Y_{\alpha+2} & \cdots & Y_{\alpha+\beta} \end{bmatrix}$$
From eq. 38, we can then see that $H(1)$ can be factorized using the SVD factors of $H(0)$ as:
$$H(1) = P_\alpha\,A\,Q_\beta = R_n\,\Sigma_n^{1/2}\,A\,\Sigma_n^{1/2}\,S_n^T \tag{44}$$
from which
$$A = \Sigma_n^{-1/2}\,R_n^T\,H(1)\,S_n\,\Sigma_n^{-1/2} \tag{45}$$
Acknowledgements
Dr. Sriram Narasimhan, Assistant Professor, Univ. of Waterloo, and Dharma Teja Reddy Pasala, Graduate Student, Rice Univ., assisted in putting together this presentation. The materials presented in this short course are a condensed version of lecture notes of a course taught at Rice University and at the Univ. of Waterloo.