
Copyright F.L. Lewis 2008
All rights reserved

Updated: Tuesday, November 11, 2008

Lyapunov Stability Analysis for Feedback Control Design


Lyapunov Theorems
Lyapunov analysis allows one to analyze the stability of continuous-time and discrete-time dynamical systems of the form:
Continuous-time nonlinear system
ẋ = f(x),  x(0)
with state x(t) ∈ R^n and f(0) = 0, so the origin is an equilibrium point. We assume that f(·) is continuous, so that solutions exist, and that f(x) is Lipschitz, so that the solution is unique.

Discrete-time nonlinear system


x_{k+1} = f(x_k),  x_0
with x_k ∈ R^n, f(·) continuous and Lipschitz, and f(0) = 0.
Definition. Lyapunov Function Candidate (LFC).

A scalar function V(x): R^n → R is said to be a Lyapunov Function Candidate (LFC) if:

1. V(x) is a continuous real-valued function
2. V(0) = 0 and V(x) > 0 for x ≠ 0, i.e. V(x) is positive definite

A Lyapunov function is an LFC that is nonincreasing with time, and hence bounded.
Definition: Continuous-time (CT) Lyapunov Function

V(x): R^n → R is said to be a CT Lyapunov Function if:

V(x) is an LFC and
3. V̇ = dV/dt ≤ 0, i.e. V̇(x) is negative semidefinite
Definition: Discrete-Time (DT) Lyapunov Function

V(x): R^n → R is said to be a DT Lyapunov Function if:

V(x) is an LFC and
3. ΔV(x_k) = V(x_{k+1}) − V(x_k) ≤ 0, i.e. the first difference ΔV(x_k) is negative semidefinite

The following results were proven by A.M. Lyapunov. Stability for CT systems refers to the jω-axis stability boundary in the s-plane. Stability for DT systems refers to the unit-circle stability boundary in the z-plane.
Lyapunov Theorem. SISL. Suppose that for a given system there exists a Lyapunov function.
Then the system is stable in the sense of Lyapunov (SISL).
Lyapunov Theorem. AS. Suppose that for a given system there exists a Lyapunov function
which also satisfies the stronger third condition:

For CT systems: V̇ = dV/dt < 0 for x ≠ 0, i.e. V̇(x) is negative definite.
For DT systems: ΔV(x_k) = V(x_{k+1}) − V(x_k) < 0 for x_k ≠ 0, i.e. ΔV(x_k) is negative definite.
Then the system is asymptotically stable (AS).
Exponential stability can be verified by extra analysis relating V(x) and V̇(x) to show
that the Lyapunov function decreases exponentially. For instance, if one can show that V̇ ≤ −αV for some constant α > 0, then V(x(t)) ≤ V(x(0)) e^{−αt}, so the Lyapunov function decreases exponentially.
Linear Time-Invariant Autonomous Systems - Recap
We have already considered linear time invariant (LTI) autonomous systems in the following
form.

Continuous-time (CT) LTI autonomous system

ẋ = Ax,  x(0)
with state x(t) ∈ R^n.
Discrete-time (DT) LTI autonomous system
x_{k+1} = A x_k,  x_0
with state x_k ∈ R^n.
The next results hold.

Theorem. SISL for LTI CT Systems. Let Q be a symmetric positive semidefinite matrix.
Then the system ẋ = Ax is SISL (i.e. marginally stable) if and only if the (symmetric) matrix P which solves the CT Lyapunov equation
A^T P + PA = −Q
is positive definite.
Theorem. SISL for LTI DT Systems. Let Q be a symmetric positive semidefinite matrix.
Then the system x_{k+1} = A x_k is SISL (i.e. marginally stable) if and only if the (symmetric) matrix P which solves the DT Lyapunov equation
A^T P A − P = −Q
is positive definite.

The proof relies on the fact that, if the Lyapunov equations have solutions as specified, then
V(x) = (1/2) x^T P x
serves as a Lyapunov function, with constant kernel matrix P symmetric and positive definite, i.e. P = P^T > 0.

If Q = Q^T is in fact positive definite, the theorems yield AS.


Note that the solution properties of the CT Lyapunov equation
A^T P + PA = −Q
refer to the locations of the poles of the system matrix A with respect to the left half of the complex plane.
On the other hand, the solution properties of the DT Lyapunov equation
A^T P A − P = −Q
refer to the locations of the poles of the system matrix A with respect to the unit circle of the complex plane.
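As a quick numerical check of these theorems (a sketch I am adding, assuming NumPy and SciPy are available; the matrices below are illustrative, not from the notes), one can solve both Lyapunov equations and verify that P is positive definite when A is stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

Q = np.eye(2)  # symmetric positive definite design matrix

# CT: A_ct has eigenvalues -1, -2 (open left half-plane)
A_ct = np.array([[0.0, 1.0], [-2.0, -3.0]])
# solve_continuous_lyapunov solves A X + X A^T = Q, so pass A^T and -Q
# to obtain A^T P + P A = -Q
P_ct = solve_continuous_lyapunov(A_ct.T, -Q)
print(np.linalg.eigvalsh(P_ct))  # all positive, so P > 0

# DT: A_dt has eigenvalues 0.5, 0.8 (inside the unit circle)
A_dt = np.array([[0.5, 0.1], [0.0, 0.8]])
# solve_discrete_lyapunov solves A X A^T - X = -Q, so pass A^T
# to obtain A^T P A - P = -Q
P_dt = solve_discrete_lyapunov(A_dt.T, Q)
print(np.linalg.eigvalsh(P_dt))  # all positive, so P > 0
```

Since Q here is positive definite, positive definiteness of P certifies AS, consistent with the eigenvalue locations of the two example matrices.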
Lyapunov Analysis for Controlled Systems

We now want to use Lyapunov Analysis to study the stability of systems with control inputs.
We shall see that instead of the Lyapunov equations
A^T P + PA = −Q   for CT systems
A^T P A − P = −Q   for DT systems
we obtain Riccati equations, namely the equations above but with extra terms quadratic in P.

Control Design for Nonlinear CT Systems


Let us see how direct Lyapunov techniques work by confronting a nonlinear CT system.
THIS SECTION IS NOT ON THE EE 5307 Homework or exams. It is for fun only.

Consider the nonlinear system given by


ẋ = f(x) + g(x)u
y = h(x)
with state x(t) ∈ R^n and f(0) = 0, so the origin is an equilibrium point. We assume that f(·) is continuous, so that solutions exist, and that f(x) is Lipschitz, so that the solution is unique.
Then for any scalar C^1 function V(x) one has
V̇ = dV/dt = (∂V/∂x)^T ẋ = (∂V/∂x)^T f + (∂V/∂x)^T g u ≡ V_x^T f + V_x^T g u
with V_x ≡ ∂V/∂x the gradient, which is assumed here to be a column vector.
Completing the square for any matrix R = R^T > 0 yields
V̇ = V_x^T f + V_x^T g u = V_x^T f + (1/2)(V_x^T g R^{-1} + u^T) R (R^{-1} g^T V_x + u) − (1/2) V_x^T g R^{-1} g^T V_x − (1/2) u^T R u.
Now suppose that V(x) > 0, V(0) = 0, and V(x) satisfies the Hamilton-Jacobi (HJ) inequality
V_x^T f + (1/2) h^T h − (1/2) V_x^T g R^{-1} g^T V_x ≤ 0.

Assume the system is locally input/output detectable in the sense that there exists a neighborhood of the origin such that
u(t) = 0 and y(t) = 0 for all t implies that x(t) → 0.
This is implied by i/o observability.
Then the closed-loop system is asymptotically stable if one selects the control input as
u = −R^{-1} g^T(x) V_x.
To prove this, note that, according to the HJ inequality,
V̇ ≤ (1/2)(V_x^T g R^{-1} + u^T) R (R^{-1} g^T V_x + u) − (1/2) h^T h − (1/2) u^T R u
and according to the control selection
V̇ ≤ −(1/2) h^T h − (1/2) u^T R u = −(1/2) y^T y − (1/2) u^T R u,
which is negative definite under the i/o detectability assumption. Therefore V(x) is a Lyapunov function with V̇ < 0.
Result.

Based on this analysis one has the next theorem.


Theorem for Control of Nonlinear CT Systems. Suppose that V(x): R^n → R is continuous, V(x) > 0, V(0) = 0, and V(x) satisfies the Hamilton-Jacobi-Bellman (HJB) equation

V_x^T f + (1/2) h^T h − (1/2) V_x^T g R^{-1} g^T V_x = 0.
Then the closed-loop system is stable using the control input
u = −R^{-1} g^T(x) V_x.

To prove this, note that if V(x) satisfies the HJB equation, it solves the HJ inequality.
THIS MEANS that to design a feedback control for a nonlinear system, one may first
solve the HJB equation for the value V(x), then compute the control using the formula above.
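As a quick illustration of this design route (a worked scalar example I am adding, not part of the original notes), consider the scalar plant ẋ = ax + bu with y = cx and scalar weight R = r > 0. Trying the quadratic candidate V(x) = (1/2)px², so that V_x = px, the HJB equation reduces to a quadratic equation in the scalar p:

```latex
p a x^{2} + \tfrac{1}{2} c^{2} x^{2} - \tfrac{1}{2}\,\frac{b^{2}}{r}\,p^{2} x^{2} = 0
\quad\Longleftrightarrow\quad
\frac{b^{2}}{r}\,p^{2} - 2 a p - c^{2} = 0 .
```

Taking the positive root p > 0 makes V(x) an LFC, and the control formula gives u = −(bp/r)x. This quadratic in p is exactly the scalar case (A = a, B = b, Q = c², R = r) of the CT algebraic Riccati equation appearing in the next section.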
Control Design for Linear Time Invariant CT Systems

Here we specialize the previous development to the case of LTI CT systems of the form
ẋ = Ax + Bu
We want to find a state-variable feedback (SVFB)
u = −Kx
to stabilize the system.
For any scalar C^1 function V(x) one has
V̇ = dV/dt = (∂V/∂x)^T ẋ = (∂V/∂x)^T Ax + (∂V/∂x)^T Bu ≡ V_x^T Ax + V_x^T Bu
Completing the square for any matrix R = R^T > 0 yields
V̇ = V_x^T Ax + V_x^T Bu = V_x^T Ax + (1/2)(V_x^T B R^{-1} + u^T) R (R^{-1} B^T V_x + u) − (1/2) V_x^T B R^{-1} B^T V_x − (1/2) u^T R u.
Now suppose that V(x) > 0, V(0) = 0, and V(x) satisfies the Hamilton-Jacobi (HJ) inequality
V_x^T Ax + (1/2) x^T C^T C x − (1/2) V_x^T B R^{-1} B^T V_x ≤ 0
for any matrix C such that (A, C) is observable.
Note that the selection of C in the HJ inequality effectively defines an output
y = Cx
through which the state is observable. This output is NOT used for control purposes, but only to obtain a suitable value V(x) through solution of the HJ inequality.

Let us make the HJ inequality symmetric. Note that the first term is a scalar, so it equals its own transpose, i.e.
V_x^T Ax = x^T A^T V_x.
Thus, write the HJ inequality equivalently in symmetric form
(1/2)(x^T A^T V_x + V_x^T A x) + (1/2) y^T y − (1/2) V_x^T B R^{-1} B^T V_x ≤ 0
(To see that this is symmetric, transpose it and you will get the same equation.)
Now select the control input as
u = −R^{-1} B^T V_x.
Then the system is AS. To prove this, note that, according to the HJ inequality,
V̇ ≤ (1/2)(V_x^T B R^{-1} + u^T) R (R^{-1} B^T V_x + u) − (1/2) y^T y − (1/2) u^T R u
and according to the control selection
V̇ ≤ −(1/2) y^T y − (1/2) u^T R u,
which is negative definite under the observability assumption. Therefore V(x) is a Lyapunov function with V̇ < 0.

CT Riccati Equation

The next results make it easy to use this machinery to design stabilizing controllers for CT LTI
systems.
It turns out that V(x) for linear systems can always be selected in the quadratic form
V(x) = (1/2) x^T P x
with P = P^T > 0. Therefore
V_x = ∂V/∂x = ∂((1/2) x^T P x)/∂x = P x
and the HJ inequality becomes
(1/2)(x^T A^T P x + x^T P A x) + (1/2) x^T C^T C x − (1/2) x^T P B R^{-1} B^T P x
= (1/2) x^T (A^T P + PA + C^T C − P B R^{-1} B^T P) x ≤ 0
Since this must hold for every state x(t), it is equivalent to
A^T P + PA + C^T C − P B R^{-1} B^T P ≤ 0
The stabilizing control is selected according to
u = −R^{-1} B^T P x ≡ −Kx
Theorem. Stabilizing Control Design for CT LTI Systems.

Let Q = Q^T > 0, R = R^T > 0 be symmetric positive definite matrices. Then the system ẋ = Ax + Bu is AS if there exists a positive definite solution P to the CT algebraic Riccati equation (ARE)
A^T P + PA + Q − P B R^{-1} B^T P = 0
In that case, the state feedback
K = R^{-1} B^T P
makes the closed-loop system stable.

To prove this theorem, note that if Q is positive definite then (A,C) is observable for any
square root C of Q, Q = C T C , for then C is nonsingular. Moreover, if P is a solution to the CT
ARE then the HJ inequality holds.

Compare the CT ARE to the CT Lyapunov equation


A^T P + PA + Q = 0

The theorem shows the following design method for CT LTI SVFB controllers:
1. Select design matrices Q = Q^T > 0, R = R^T > 0
2. Solve the CT ARE
A^T P + PA + Q − P B R^{-1} B^T P = 0
3. Compute the SVFB gain
K = R^{-1} B^T P
The ARE is easily solved using many routines, among them the MATLAB routine
[K, P] = lqr(A, B, Q, R)
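The same three steps can be sketched in Python (an illustrative alternative to the MATLAB call, assuming NumPy/SciPy are available; the A, B, Q, R below are made-up example values):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant (not from the notes)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Step 1: select design matrices Q = Q^T > 0, R = R^T > 0
Q = np.eye(2)
R = np.array([[1.0]])

# Step 2: solve the CT ARE  A^T P + P A + Q - P B R^-1 B^T P = 0
P = solve_continuous_are(A, B, Q, R)

# Step 3: compute the SVFB gain K = R^-1 B^T P, with control u = -K x
K = np.linalg.solve(R, B.T @ P)

# Closed-loop poles of A - B K should lie in the open left half-plane
print(np.linalg.eigvals(A - B @ K))
```

Checking that the eigenvalues of A − BK all have negative real parts confirms the closed loop is AS, as the theorem predicts.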

Optimal Control for CT LTI Systems


Now we see that the above constructions actually yield the OPTIMAL CT controller.

Consider the LTI CT system

ẋ = Ax + Bu
We want to find a SVFB
u = −Kx
to minimize the performance index

J(x(0), u) = (1/2) ∫₀^∞ (x^T Q x + u^T R u) dt

Theorem. Optimal Control Design for CT LTI Systems.

Let Q = Q^T > 0, R = R^T > 0 be symmetric positive definite matrices. Suppose there exists a positive definite solution P to the CT algebraic Riccati equation (ARE)
A^T P + PA + Q − P B R^{-1} B^T P = 0
Then, the state feedback
K = R^{-1} B^T P
minimizes J(x(0),u) and makes the closed-loop system stable.
Proof:

Select the Lyapunov function

V(x) = (1/2) x^T P x
Then it was shown above that if P satisfies the ARE and one selects the given SVFB, one has
V̇ = −(1/2)(x^T Q x + u^T R u)
Integrating both sides yields
∫₀^∞ V̇ dt = −(1/2) ∫₀^∞ (x^T Q x + u^T R u) dt
lim_{t→∞} V(x(t)) − V(x(0)) = −J(x(0), u)

However, the system has already been shown AS, so that

lim_{t→∞} x(t) = 0
which implies
lim_{t→∞} V(x(t)) = 0
or
V(x(0)) = (1/2) x^T(0) P x(0) = J(x(0), u)
with u = −Kx = −R^{-1} B^T P x.
It remains to show that V(x(t)) is the optimal value of J(x(0),u). Can you do it?
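As a numerical sanity check of the identity V(x(0)) = J(x(0), u) (a sketch I am adding, assuming NumPy/SciPy and an illustrative plant, not from the notes), one can simulate the closed loop while accumulating the running cost and compare the total with (1/2) x(0)^T P x(0):

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Illustrative plant and design matrices (made-up example values)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
Acl = A - B @ K  # closed-loop matrix under u = -K x

def rhs(t, z):
    # z = [x; J_accum]: state plus running cost (1/2)(x'Qx + u'Ru)
    x = z[:2]
    u = -K @ x
    return np.concatenate([Acl @ x, [0.5 * (x @ Q @ x + u @ R @ u)]])

x0 = np.array([1.0, -0.5])
sol = solve_ivp(rhs, [0.0, 40.0], np.concatenate([x0, [0.0]]),
                rtol=1e-9, atol=1e-12)
J = sol.y[-1, -1]          # accumulated cost over a long horizon
V0 = 0.5 * x0 @ P @ x0     # (1/2) x(0)^T P x(0)
print(J, V0)               # the two numbers should agree closely
```

The horizon of 40 time units is long enough here that the exponentially decaying state contributes negligibly beyond it, so the finite-horizon cost approximates the infinite integral.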

This theorem shows how to select the design matrices Q and R, namely, as discussed in the notes on LQR.
Control Design for Linear Time Invariant DT Systems
Given the LTI DT system
x_{k+1} = A x_k + B u_k
it is desired to find a stabilizing SVFB control
u_k = −K x_k

Note that for any scalar quadratic form V(x) one has the first forward difference, evaluated along the system trajectories, given as
ΔV(x_k) = V_{k+1} − V_k = x_{k+1}^T P x_{k+1} − x_k^T P x_k = (A x_k + B u_k)^T P (A x_k + B u_k) − x_k^T P x_k
= x^T A^T P A x − x^T P x + 2 x^T A^T P B u + u^T B^T P B u
where lack of a subscript on time functions indicates their values at time k.

Note that the first difference of V(.) is QUADRATIC in the state dynamics Ax+Bu. By
contrast, in the CT LTI case the differential of V(.) is LINEAR in the state dynamics Ax+Bu.
This makes expressions for DT systems more complex than for CT systems.
Complete the square to see that, for any nonsingular symmetric matrix Λ, one has
(x^T A^T P B Λ^{-1} + u^T) Λ (Λ^{-1} B^T P A x + u) = x^T A^T P B Λ^{-1} B^T P A x + 2 x^T A^T P B u + u^T Λ u
so that one writes ΔV(x) as
ΔV(x_k) = x^T A^T P A x − x^T P x + u^T B^T P B u
+ (x^T A^T P B Λ^{-1} + u^T) Λ (Λ^{-1} B^T P A x + u) − x^T A^T P B Λ^{-1} B^T P A x − u^T Λ u

Now, to get rid of the third term on the right-hand side and introduce a positive definite quadratic term in u, define
Λ = B^T P B + R.
Then,
ΔV(x_k) = x^T A^T P A x − x^T P x − u^T R u
+ (x^T A^T P B Λ^{-1} + u^T) Λ (Λ^{-1} B^T P A x + u) − x^T A^T P B Λ^{-1} B^T P A x

Now suppose V(x) > 0, V(0) = 0, and P satisfies the DT HJ inequality

A^T P A − P + Q − A^T P B (B^T P B + R)^{-1} B^T P A ≤ 0
Then
ΔV(x_k) ≤ −x^T Q x − u^T R u + (x^T A^T P B Λ^{-1} + u^T) Λ (Λ^{-1} B^T P A x + u), where Λ = B^T P B + R.
Selecting the control as
u_k = −(B^T P B + R)^{-1} B^T P A x_k
yields
ΔV(x_k) ≤ −x^T Q x − u^T R u,
so that V(x) is a Lyapunov function. Now assume i/o detectability with the output
y_k = C x_k
where Q = C^T C; then the system is AS.

It is not hard to show that in fact this choice of control also minimizes the performance measure

J(x_k, u) = (1/2) Σ_{i=k}^∞ (x_i^T Q x_i + u_i^T R u_i)

where u denotes the control sequence {u_k, u_{k+1}, …}. That is, for the above V(x_k) one has

V(x_k) = min_u J(x_k, u) = min_u (1/2) Σ_{i=k}^∞ (x_i^T Q x_i + u_i^T R u_i)

One calls V(x_k) the VALUE FUNCTION.


Summary of Design Procedure for DT Systems:
1. Select design matrices Q = Q^T > 0, R = R^T > 0
2. Solve the DT ARE
A^T P A − P + Q − A^T P B (B^T P B + R)^{-1} B^T P A = 0
3. Compute the SVFB gain
K = (B^T P B + R)^{-1} B^T P A

The ARE is easily solved using many routines, among them the MATLAB routine
[K, P] = dlqr(A, B, Q, R)
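In Python the DT procedure can be sketched as follows (an illustrative alternative to the MATLAB call, assuming NumPy/SciPy; the A, B, Q, R below are made-up example values):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative DT plant (not from the notes): sampled double integrator
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])

# Step 1: select design matrices Q = Q^T > 0, R = R^T > 0
Q = np.eye(2)
R = np.array([[1.0]])

# Step 2: solve the DT ARE
#   A^T P A - P + Q - A^T P B (B^T P B + R)^-1 B^T P A = 0
P = solve_discrete_are(A, B, Q, R)

# Step 3: compute the SVFB gain K = (B^T P B + R)^-1 B^T P A,
# with control u_k = -K x_k
K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)

# Closed-loop poles of A - B K should lie inside the unit circle
print(np.abs(np.linalg.eigvals(A - B @ K)))
```

Checking that all eigenvalues of A − BK have magnitude less than one confirms the DT closed loop is AS, consistent with the unit-circle stability boundary discussed earlier.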

