Lewis 2008
All rights reserved
A Lyapunov function is a Lyapunov function candidate (LFC) that is nonincreasing with time and hence bounded.
Definition: Continuous-time (CT) Lyapunov Function
The following results were proven by A.M. Lyapunov. Stability for CT systems refers to
the jω-axis stability boundary in the s-plane. Stability for DT systems refers to the unit-circle
stability boundary in the z-plane.
Lyapunov Theorem. SISL. Suppose that for a given system there exists a Lyapunov function.
Then the system is stable in the sense of Lyapunov (SISL).
Lyapunov Theorem. AS. Suppose that for a given system there exists a Lyapunov function
which also satisfies the stronger third condition:
For CT systems:  V̇ = dV/dt < 0 , i.e. V̇(x) is negative definite.
For DT systems:  ΔV(x_k) = V(x_{k+1}) - V(x_k) < 0 , i.e. ΔV(x_k) is negative definite.
Then the system is AS.
Exponential stability can be verified by extra analysis to relate V(x) and V̇(x) and show
that the Lyapunov function decreases exponentially.
Linear Time-Invariant Autonomous Systems - Recap
We have already considered linear time-invariant (LTI) autonomous systems, ẋ = Ax in
continuous time and x_{k+1} = Ax_k in discrete time.
Theorem. SISL for LTI CT Systems. Let Q be a symmetric positive semidefinite matrix.
Then the system ẋ = Ax is SISL (i.e. marginally stable) if and only if the (symmetric) matrix P
which solves the CT Lyapunov equation

A^T P + PA = -Q

is positive definite.
Theorem. SISL for LTI DT Systems. Let Q be a symmetric positive semidefinite matrix.
Then the system x_{k+1} = Ax_k is SISL (i.e. marginally stable) if and only if the (symmetric) matrix
P which solves the DT Lyapunov equation

A^T P A - P = -Q

is positive definite.
The proof relies on the fact that, if the Lyapunov equations have solutions as specified,
then
V(x) = (1/2) x^T P x

serves as a Lyapunov function, with constant kernel matrix P symmetric and positive definite,
i.e. P = P^T > 0 .
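As an aside, these SISL tests are easy to run numerically. The sketch below (assuming numpy; the helper names ct_lyap/dt_lyap and the example matrices are illustrative choices, not from the notes) solves both Lyapunov equations by vectorization with Kronecker products and then checks that P > 0.

```python
# Sketch: checking SISL of an LTI system via the Lyapunov equations.
import numpy as np

def ct_lyap(A, Q):
    """Solve A^T P + P A = -Q via (I kron A^T + A^T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)                       # symmetrize against round-off

def dt_lyap(A, Q):
    """Solve A^T P A - P = -Q via (A^T kron A^T - I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    M = np.kron(A.T, A.T) - np.eye(n * n)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)

# A stable CT example: eigenvalues -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
P = ct_lyap(A, Q)
print(np.all(np.linalg.eigvalsh(P) > 0))         # → True, so P > 0 and the system is SISL
```

Here positive definiteness of P is checked through the eigenvalues of the symmetrized solution; a Cholesky attempt would serve equally well.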
We now want to use Lyapunov Analysis to study the stability of systems with control inputs.
We shall see that instead of the Lyapunov equations
A^T P + PA = -Q      for CT systems
A^T P A - P = -Q     for DT systems
we obtain Riccati equations, namely the equations above but with extra terms quadratic in P.
Assume the system is locally input/output detectable in the sense that there exists a neighborhood
such that
u(t) = 0 and y(t) = 0 for all t implies that x(t) → 0 .
This is implied by i/o observability.
Then the closed-loop system is asymptotically stable if one selects the control input as
u = -R^-1 g^T(x) V_x .
To prove this, note that, according to the HJ equation,

V̇ ≤ (1/2)(V_x^T g R^-1 + u^T) R (R^-1 g^T V_x + u) - (1/2) h^T h - (1/2) u^T R u ,

and according to the control selection,

V̇ ≤ -(1/2) h^T h - (1/2) u^T R u = -(1/2) y^T y - (1/2) u^T R u ,

which is negative definite under the i/o detectability assumption. Therefore V(x) is a Lyapunov
function with V̇ < 0 .
Result. Suppose that V(x) > 0, V(0) = 0, and V(x) satisfies the Hamilton-Jacobi-Bellman (HJB) equation

V_x^T f + (1/2) h^T h - (1/2) V_x^T g R^-1 g^T V_x = 0 .

Then the closed-loop system is stable using the control input

u = -R^-1 g^T(x) V_x .
To prove this, note that if V(x) satisfies the HJB equation, it solves the HJ inequality.
THIS MEANS that to design a feedback control for a nonlinear system, one may first
solve the HJB equation for the value V(x), then compute the control using the formula above.
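As a concrete (hypothetical) instance of this procedure, take the scalar system ẋ = x + u, i.e. f(x) = x, g(x) = 1, with h(x) = x and R = 1; this example is not from the notes. With the quadratic candidate V(x) = (1/2) c x^2 the HJB equation reduces to c^2 - 2c - 1 = 0, and the script below checks the algebra:

```python
# Sketch: the HJB design recipe on the hypothetical scalar system xdot = x + u.
import math

# Candidate V(x) = (1/2) c x^2, so Vx = c x.  HJB:
#   Vx f + (1/2) h^2 - (1/2) Vx g R^-1 g Vx = 0
#   => c x^2 + (1/2) x^2 - (1/2) c^2 x^2 = 0  for all x
#   => c^2 - 2c - 1 = 0  =>  c = 1 + sqrt(2)  (positive root, so V(x) > 0)
c = 1.0 + math.sqrt(2.0)
assert abs(c**2 - 2.0 * c - 1.0) < 1e-12        # c satisfies the HJB

# Control formula u = -R^-1 g^T Vx = -c x gives closed loop xdot = (1 - c) x
a_cl = 1.0 - c                                   # equals -sqrt(2)
print(a_cl < 0)                                  # → True, so the closed loop is AS
```

The negative closed-loop coefficient confirms asymptotic stability, exactly as the Result above asserts.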
Control Design for Linear Time Invariant CT Systems
Here we specialize the previous development to the case of LTI CT systems of the form
ẋ = Ax + Bu
We want to find a state-variable feedback (SVFB)

u = -Kx

to stabilize the system.
For any scalar C^1 function V(x) one has

V̇ = dV/dt = (∂V/∂x)^T ẋ = (∂V/∂x)^T Ax + (∂V/∂x)^T Bu ≡ V_x^T Ax + V_x^T Bu .
Completing the squares for any matrix R = R^T > 0 yields

V̇ = V_x^T Ax + V_x^T Bu = V_x^T Ax + (1/2)(V_x^T B R^-1 + u^T) R (R^-1 B^T V_x + u) - (1/2) V_x^T B R^-1 B^T V_x - (1/2) u^T R u .
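This completing-the-squares identity is purely algebraic, so it can be sanity-checked numerically. The fragment below (assuming numpy; the random test data is illustrative, not from the notes) verifies both sides agree for arbitrary x, u, and a positive definite R:

```python
# Numerical check of the completing-the-squares identity with random data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
M = rng.standard_normal((m, m))
R = M @ M.T + np.eye(m)                          # R = R^T > 0
Rinv = np.linalg.inv(R)
x = rng.standard_normal((n, 1))
u = rng.standard_normal((m, 1))
Vx = rng.standard_normal((n, 1))                 # stand-in for the gradient V_x

lhs = Vx.T @ A @ x + Vx.T @ B @ u
s = Rinv @ B.T @ Vx + u                          # note (Vx^T B R^-1 + u^T) = s^T
rhs = (Vx.T @ A @ x + 0.5 * s.T @ R @ s
       - 0.5 * Vx.T @ B @ Rinv @ B.T @ Vx - 0.5 * u.T @ R @ u)
print(np.allclose(lhs, rhs))                     # → True
```

Expanding (1/2) s^T R s produces the cross term V_x^T B u plus the two quadratic terms that are then subtracted off, which is exactly what the check confirms.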
Now suppose that V(x)>0, V(0)=0, and satisfies the Hamilton-Jacobi (HJ) inequality
V_x^T Ax + (1/2) x^T C^T C x - (1/2) V_x^T B R^-1 B^T V_x ≤ 0
for any matrix C such that (A,C) is observable.
Note that the selection of C in the HJ effectively defines an output
y = Cx
through which the state is observable. This output is NOT used for control purposes, but only to
obtain a suitable value V(x) through solution of the HJ.
Let us make the HJ symmetric. Note that the first term is a scalar so that it equals its own
transpose, i.e.
V_x^T Ax = x^T A^T V_x .
Thus, write the HJ inequality equivalently in the symmetric form

(1/2)(x^T A^T V_x + V_x^T Ax) + (1/2) y^T y - (1/2) V_x^T B R^-1 B^T V_x ≤ 0 .

(To see that this is symmetric, transpose it and you will get the same expression.)
Now select the control input as
u = -R^-1 B^T V_x .
Then the system is AS. To prove this, note that, according to the HJ inequality,

V̇ ≤ (1/2)(V_x^T B R^-1 + u^T) R (R^-1 B^T V_x + u) - (1/2) y^T y - (1/2) u^T R u ,

and according to the control selection,

V̇ ≤ -(1/2) y^T y - (1/2) u^T R u ,

which is negative definite under the observability assumption. Therefore V(x) is a Lyapunov
function with V̇ < 0 .
CT Riccati Equation
The next results make it easy to use this machinery to design stabilizing controllers for CT LTI
systems.
It turns out that V(x) for linear systems can always be selected in the quadratic form
V(x) = (1/2) x^T P x

with P = P^T > 0 . Therefore

V_x = ∂V/∂x = ∂((1/2) x^T P x)/∂x = Px
and the HJ becomes
(1/2)(x^T A^T P x + x^T P A x) + (1/2) x^T C^T C x - (1/2) x^T P B R^-1 B^T P x
= (1/2) x^T (A^T P + PA + C^T C - P B R^-1 B^T P) x ≤ 0
Since this must hold for every state x(t) it is equivalent to
A^T P + PA + C^T C - P B R^-1 B^T P ≤ 0
The stabilizing control is selected according to
u = -R^-1 B^T P x ≡ -Kx
Theorem. Stabilizing Control Design for CT LTI Systems.
Let Q = Q^T > 0, R = R^T > 0 be symmetric positive definite matrices. Then the system
ẋ = Ax + Bu is AS if there exists a positive definite solution P to the CT algebraic Riccati
equation (ARE)

A^T P + PA + Q - P B R^-1 B^T P = 0 .

Then, the state feedback u = -Kx with

K = R^-1 B^T P

makes the closed-loop system stable.
To prove this theorem, note that if Q is positive definite then (A, C) is observable for any
square root C of Q, Q = C^T C, for then C is nonsingular. Moreover, if P is a solution to the CT
ARE then the HJ inequality holds.
The theorem shows the following design method for CT LTI SVFB controllers
1. Select design matrices Q = Q^T > 0, R = R^T > 0.
2. Solve the CT ARE

   A^T P + PA + Q - P B R^-1 B^T P = 0 .

3. Compute the SVFB gain K = R^-1 B^T P.
The ARE is easily solved using many routines, among them the MATLAB routine
[K, P] = lqr(A, B, Q, R)
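The same three steps can be sketched outside MATLAB. The Python fragment below (assuming numpy; the helper name care and the double-integrator example are illustrative) solves the CT ARE by the standard stable-eigenspace method for the Hamiltonian matrix and then checks the closed loop:

```python
# Sketch: Q,R selection -> CT ARE solution -> SVFB gain, mirroring lqr().
import numpy as np

def care(A, B, Q, R):
    """Solve A^T P + P A + Q - P B R^-1 B^T P = 0 for P = P^T > 0."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    # Hamiltonian matrix; its stable invariant subspace [X1; X2] gives P = X2 X1^-1
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]                    # the n stable eigenvectors
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return 0.5 * (P + P.T)

# Step 1: design matrices.  Step 2: solve the ARE.  Step 3: SVFB gain.
A = np.array([[0.0, 1.0], [0.0, 0.0]])           # double integrator (unstable)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = care(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P                   # feedback u = -Kx
print(np.linalg.eigvals(A - B @ K).real)         # all negative: closed loop AS
```

For this classic example the ARE solution is P = [[sqrt(3), 1], [1, sqrt(3)]] and K = [1, sqrt(3)], placing the closed-loop poles in the open left half plane.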
Now consider the performance measure

J(x(0), u) = (1/2) ∫_0^∞ (x^T Q x + u^T R u) dt .
Let Q = Q^T > 0, R = R^T > 0 be symmetric positive definite matrices. Suppose there exists a
positive definite solution P to the CT algebraic Riccati equation (ARE)

A^T P + PA + Q - P B R^-1 B^T P = 0 .

Then, the state feedback u = -Kx with

K = R^-1 B^T P

minimizes J(x(0), u) and makes the closed-loop system stable.
Proof:
Along the closed-loop trajectories one has

-∫_0^∞ V̇ dt = (1/2) ∫_0^∞ (x^T Q x + u^T R u) dt ,

which implies

lim_{t→∞} V(x(t)) = 0 ,

so that

V(x(0)) = (1/2) x^T(0) P x(0) = J(x(0), u)

with u = -Kx = -R^-1 B^T P x .
It remains to show that V(x(t)) is the optimal value of J(x(0),u). Can you do it?
This theorem shows how to select the design matrices Q, R, namely as discussed in the notes
on LQR.
Control Design for Linear Time Invariant DT Systems
Given the LTI DT system
xk +1 = Axk + Buk
it is desired to find a stabilizing SVFB control
u_k = -Kx_k
Note that for any scalar quadratic form V(x) one has the first forward difference evaluated
along the system trajectories given as
ΔV(x_k) = V_{k+1} - V_k = x_{k+1}^T P x_{k+1} - x_k^T P x_k = (Ax_k + Bu_k)^T P (Ax_k + Bu_k) - x_k^T P x_k
= x^T A^T P A x - x^T P x + 2 x^T A^T P B u + u^T B^T P B u
where lack of a subscript on time functions indicates their values at time k.
Note that the first difference of V(.) is QUADRATIC in the state dynamics Ax+Bu. By
contrast, in the CT LTI case the differential of V(.) is LINEAR in the state dynamics Ax+Bu.
This makes expressions for DT systems more complex than for CT systems.
Complete the square to see that, for any nonsingular symmetric matrix Δ one has

(x^T A^T P B Δ^-1 + u^T) Δ (Δ^-1 B^T P A x + u) = x^T A^T P B Δ^-1 B^T P A x + 2 x^T A^T P B u + u^T Δ u .

So that one writes ΔV(x_k) as

ΔV(x_k) = x^T A^T P A x - x^T P x + u^T B^T P B u
+ (x^T A^T P B Δ^-1 + u^T) Δ (Δ^-1 B^T P A x + u) - x^T A^T P B Δ^-1 B^T P A x - u^T Δ u .
Now, to get rid of the third term on the right-hand side and introduce a positive definite quadratic
term in u, define
Δ = B^T P B + R .
Then,
ΔV(x_k) = x^T A^T P A x - x^T P x - u^T R u
+ (x^T A^T P B Δ^-1 + u^T) Δ (Δ^-1 B^T P A x + u) - x^T A^T P B Δ^-1 B^T P A x .

Selecting the control to make the quadratic term in u vanish,

u = -Δ^-1 B^T P A x = -(B^T P B + R)^-1 B^T P A x ≡ -Kx ,

and requiring P to satisfy the DT algebraic Riccati equation (ARE)

A^T P A - P + Q - A^T P B (B^T P B + R)^-1 B^T P A = 0

yields ΔV(x_k) = -x^T Q x - u^T R u < 0, so the closed-loop system is AS.
It is not hard to show that in fact this choice of control also minimizes the performance
measure
J(x_k, u) = (1/2) Σ_{i=k}^∞ (x_i^T Q x_i + u_i^T R u_i)

where u denotes the control sequence {u_k, u_{k+1}, …}. That is, for the above ΔV(x_k), the value
V(x_k) equals the cost J(x_k, u).
The ARE is easily solved using many routines, among them the MATLAB routine
[K, P] = dlqr(A, B, Q, R)
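A dlqr-style computation can also be sketched in Python (assuming numpy; the helper name dare, the fixed-point iteration, and the example system are illustrative choices, not from the notes):

```python
# Sketch: solve the DT ARE  A^T P A - P + Q - A^T P B (B^T P B + R)^-1 B^T P A = 0
# by fixed-point (value) iteration, then form the DT SVFB gain as in dlqr.
import numpy as np

def dare(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        S = B.T @ P @ B + R                      # the matrix Delta = B^T P B + R
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A) + Q
    return P

A = np.array([[1.0, 1.0], [0.0, 1.0]])           # DT double integrator (unstable)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = dare(A, B, Q, R)
K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)    # u_k = -K x_k
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1)                                   # → True: spectral radius < 1, AS
```

Note the stability test here is the DT one from the start of these notes: all closed-loop eigenvalues must lie inside the unit circle of the z-plane.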