
Linear Fractional Transformation

LFT: A mapping F : C → C of the form:


F(s) = (a + bs)/(c + ds)

with a, b, c and d ∈ C is called an LFT. If c ≠ 0 then F(s) = α + βs(1 − γs)⁻¹ for some α, β, γ ∈ C.
Definition: For a complex matrix

M = [M11 M12; M21 M22]

and other complex matrices ∆l and ∆u of appropriate size, define a lower LFT with respect to ∆l as:

Fl (M, ∆l ) = M11 + M12 ∆l (I − M22 ∆l )−1 M21

and an upper LFT with respect to ∆u as:

Fu (M, ∆u ) = M22 + M21 ∆u (I − M11 ∆u )−1 M12

provided the inverse matrices exist.

Motivation
A feedback control system can be rearranged as an LFT:

(Block diagram: generalized plant G(s) with inputs (w, u) and outputs (z, y), in feedback with the controller K(s), u = Ky.)

w: all external inputs
u: control inputs
z: outputs or error signals
y: measured outputs

[z; y] = [G11 G12; G21 G22] [w; u],   u = Ky
Tzw = Fl (G, K) = G11 + G12 K(I − G22 K)−1 G21

• LFT is a useful way to standardize block diagrams for robust control analysis and design
• Fl(G, K) is the closed-loop transfer function from the external inputs w to the error signals z. In H∞ control
problems the objective is to minimize ||Fl(G, K)||∞



Example
Example 1: Show the nominal performance problem as an LFT (Find the system matrix G):
z = W1(w − Pu)
y = w − Pu

(Block diagram: w enters a summing junction, K(s) and P(s) are in the feedback loop, and W1 weights the error to give z.)

[z; y] = [W1 −W1P; 1 −P] [w; u]

Fl(G, K) = G11 + G12K(I − G22K)⁻¹G21
         = W1 + (−W1P)K(1 + PK)⁻¹ · 1
         = W1(1 − PK/(1 + PK)) = W1/(1 + PK) = W1S
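As a quick numerical cross-check (a sketch only: the plant, weight and controller below are illustrative choices, not taken from the slides), the Control System Toolbox function lft builds Fl(G, K) directly and should reproduce W1S:

s  = tf('s');
P  = 1/(s+1);                 % hypothetical plant
W1 = 10/(s+2);                % hypothetical performance weight
K  = tf(5);                   % hypothetical static controller
G  = [W1 -W1*P; 1 -P];        % generalized plant of Example 1
Tzw = lft(G, K);              % lower LFT Fl(G,K): K closes the last input/output pair of G
norm(Tzw - W1/(1+P*K), inf)   % should be (numerically) zero, i.e. Tzw = W1*S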



Example
Example 2: Show the robust performance problem for multiplicative uncertainty as an LFT:

z1 = W1(w − Pu)
z2 = W2Pu
y = w − Pu

(Block diagram: as in Example 1, with the additional weighted output z2 = W2Pu.)

[z1; z2; y] = [W1 −W1P; 0 W2P; 1 −P] [w; u]

Fl(G, K) = [W1; 0] + [−W1P; W2P] K(1 + PK)⁻¹ = [W1S; W2T]

||Fl(G, K)||∞ = ||[W1S; W2T]||∞ = sup_ω ( |W1(jω)S(jω)|² + |W2(jω)T(jω)|² )^{1/2}



LFT and Stability
Small gain theorem:

(Block diagram: M(s) with inputs (w, d) and outputs (z, e); the uncertainty ∆ closes the upper loop, w = ∆z.)

[z; e] = [M11 M12; M21 M22] [w; d]

Theorem: Let M ∈ RH∞. Then the closed-loop system is well-posed and internally stable for all
∆ ∈ RH∞ with ||∆||∞ ≤ 1 iff ||M11||∞ < 1.

Internal stability:

(Block diagram: generalized plant G(s) with inputs (w, u) and outputs (z, y), in feedback with K(s), u = Ky.)

[z; y] = [G11 G12; G21 G22] [w; u]

Theorem: K stabilizes G iff K stabilizes G22.
Proof: (Zhou p. 223) G22 and G share the same A.
Pulling out the uncertainties
The basic principle is to "pull out" all the uncertainties, which can appear at different points of a block diagram, and to combine them into one uncertainty block.

Example: Consider a mass-spring-damper system where the actual mass m is within 10% of a
nominal mass m̄, the actual damping value c is within 20% of a nominal value c̄, and the spring
stiffness k is perfectly known. The dynamical equation of the system motion is:

ẍ + (c/m)ẋ + (k/m)x = F/m

where:

m = m̄(1 + 0.1δm),   −1 ≤ δm ≤ 1
c = c̄(1 + 0.2δc),    −1 ≤ δc ≤ 1

(Block diagram: the pulled-out uncertainties form a single block ∆ in an upper LFT with G(s), with channels (d, e) and (w, z).)

s²x = (1/(m̄(1 + 0.1δm))) (F − c̄(1 + 0.2δc)sx − kx)
Pulling out the uncertainties

(Block diagram: F drives the nominal chain 1/m̄ → ẍ → 1/s² → x; the feedback paths c̄s and k close the nominal loop; the uncertainty outputs dm = δm em and dc = δc ec re-enter the loop through the gains 0.1 and 0.2.)

[em; ec; x] = [G11 G12 G13; G21 G22 G23; G31 G32 G33] [dm; dc; F],    ∆ = [δm 0; 0 δc]

G11(s) = −0.1m̄s²/(m̄s² + c̄s + k),   G12(s) = −0.2s²/(m̄s² + c̄s + k),   G13 = · · ·
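This pulling out can also be done numerically with the Robust Control Toolbox uncertain-parameter objects; a sketch with assumed nominal values m̄ = 1, c̄ = 0.5, k = 2 (chosen only for illustration):

mbar = 1; cbar = 0.5; k = 2;               % illustrative nominal values
m = ureal('m', mbar, 'Percentage', 10);    % m = mbar*(1 + 0.1*delta_m), |delta_m| <= 1
c = ureal('c', cbar, 'Percentage', 20);    % c = cbar*(1 + 0.2*delta_c), |delta_c| <= 1
s = tf('s');
P = 1/(m*s^2 + c*s + k);                   % uncertain model from F to x
[G, Delta] = lftdata(P);                   % P = Fu(G, Delta): certain part G, normalized Delta
size(Delta)                                % block-diagonal: one block per uncertain parameter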
Algebraic Riccati Equation (ARE)
ARE: A∗ X + XA + XRX + Q = 0 where R = R∗ and Q = Q∗ .
• The ARE is as important for control design as the Lyapunov equation is for control analysis.
• There are many solutions X = X∗ to the ARE. If A + RX is stable, X is called a stabilizing solution. The
stabilizing solution is unique.

• To each ARE, we can associate a 2n×2n Hamiltonian matrix H:

H = [A R; −Q −A∗]

Lemma: The eigenvalues of H are symmetric with respect to the imaginary axis.
Proof: Denote J = [0 −I; I 0]; then J⁻¹HJ = −H∗. So λ is an eigenvalue of H iff −λ̄ is.
Remark: If there are no pure imaginary eigenvalues, then H has n stable and n unstable eigenvalues.

How to solve ARE
 
Under the assumption that H has no pure imaginary eigenvalues, let T = [X1; X2] (2n×n) be a basis of the
stable n-dimensional invariant subspace; equivalently HT = TΛ for some stable n×n matrix Λ.

Lemma: If det(X1) ≠ 0 then X = X2X1⁻¹ is a stabilizing solution to the ARE.
Proof: We are to prove that:

1. X = X∗
2. X satisfies the ARE.
3. A + RX is stable.

Under the conditions:

H1: There are no pure imaginary eigenvalues of H.

H2: det(X1) ≠ 0 for some basis of the stable invariant subspace,

we can find a stabilizing solution to the ARE by finding a basis T and building X = X2X1⁻¹.
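A minimal numerical sketch of this construction (A, B, C below are assumed for illustration; R = −BB∗ and Q = C∗C give a standard stabilizing ARE):

A = [0 1; -2 -3]; B = [0; 1]; C = [1 0];
R = -B*B'; Q = C'*C;                 % ARE: A'X + XA + XRX + Q = 0
H = [A R; -Q -A'];                   % 2n x 2n Hamiltonian
[V, D] = eig(H);
idx = real(diag(D)) < 0;             % H1: pick the n stable eigenvalues
T  = V(:, idx);                      % basis of the stable invariant subspace
n  = size(A, 1);
X1 = T(1:n, :); X2 = T(n+1:end, :);
X  = real(X2/X1);                    % stabilizing solution X = X2*X1^(-1) (H2: X1 invertible)
norm(A'*X + X*A + X*R*X + Q)         % ARE residual, should be ~0
eig(A + R*X)                         % all eigenvalues in the open left half-plane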
Bounded Real Lemma
Let G(s) = C(sI − A)⁻¹B where (A, B, C) is stabilizable and detectable. Introduce the Hamiltonian
matrix:

H0 = [A BB∗; −C∗C −A∗]
Denote: H0 ∈ dom(Ric) if H1 and H2 hold for H0 and X =Ric(H0 ) is the stabilizing solution to ARE.
Theorem: Let G ∈ RH∞ . The following conditions are equivalent:
1. ||G||∞ < 1
2. H0 has no eigenvalue on the imaginary axis and H0 ∈ dom(Ric). Condition H2 is automatically
satisfied when R ≥ 0 or R ≤ 0

3. There is a stabilizing solution to ARE:

A∗ X + XA + XBB ∗ X + C ∗ C = 0

Remark: We can easily treat the case ||G||∞ < γ by simply replacing B with γ⁻¹B.
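A small numerical illustration of condition 2 (the system below is assumed; the remark's substitution B → γ⁻¹B is used to test an arbitrary level γ):

A = [-1 2; 0 -3]; B = [1; 1]; C = [1 0];
G   = ss(A, B, C, 0);
gam = norm(G, inf);                  % H-infinity norm of G
for g = [1.05*gam 0.95*gam]          % one level above and one below ||G||_inf
    H0 = [A (B*B')/g^2; -C'*C -A'];  % Hamiltonian with B replaced by B/g
    min(abs(real(eig(H0))))          % > 0 when g > ||G||_inf, ~0 when g < ||G||_inf
end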



H∞ Control
Consider the system described by the generalized plant G(s) in feedback with the controller K(s), u = Ky:

G(s) = [A B1 B2; C1 0 D12; C2 D21 0]

Optimal H∞ Control: Find all admissible controllers K(s) such that ||Tzw ||∞ is minimized.

Suboptimal H∞ Control: Given γ > 0, find all admissible controllers K(s), if there are any, such
that ||Tzw ||∞ < γ.
Assumptions:
(A1) (A, B1, C1) is controllable and observable    (A2) (A, B2, C2) is stabilizable and detectable

(A3) D12∗ [C1 D12] = [0 I]    (A4) [B1; D21] D21∗ = [0; I]

H∞ Control
What can we do if the Assumptions are not satisfied?

(A3) This Assumption means that C1 and D12 are orthogonal so that the penalty on
z = C1 x + D12 u includes a nonsingular, normalized penalty on the control u (It means that there is
no cross weighting between the state and control input, and the control weight matrix is the identity).

Remark: If there is a control input with no weighting filter, D12 does not have full column rank (D12 has a
zero column) and cannot be normalized. The solution is to add weighting filters on the control inputs
(even a very small gain suffices to avoid the singularity in the computations).

(A4) This Assumption is dual to (A3) and concerns how the exogenous signal w enters P : w includes
both plant disturbances and sensor noise, these are orthogonal, and the sensor noise weighting is
normalized and nonsingular.

Remark: In order to avoid this singularity, an exogenous input (sensor noise) should be added to each
measured output.

D22 ≠ 0: This problem occurs when we transform a discrete system to a continuous system. In this
case we can solve the problem for D22 = 0, compute the controller K0, and then obtain the final
controller as K = K0(I + D22K0)⁻¹.
State-Space Solution
The solution involves two AREs with Hamiltonian matrices:

H = [A  γ⁻²B1B1∗ − B2B2∗; −C1∗C1  −A∗]        J = [A∗  γ⁻²C1∗C1 − C2∗C2; −B1B1∗  −A]

Theorem: There exists a stabilizing controller such that ||Tzw||∞ < γ iff these conditions hold:
1. H ∈ dom(Ric) and X = Ric(H) > 0.
2. J ∈ dom(Ric) and Y = Ric(J) > 0.
3. ρ(XY) < γ²   (ρ(A) = max_i |λi(A)| is the spectral radius of A)

Moreover, one such controller is:

Ksub(s) = [Â  (I − γ⁻²YX)⁻¹YC2∗; −B2∗X  0]

where:

Â = A + γ⁻²B1B1∗X − B2B2∗X − (I − γ⁻²YX)⁻¹YC2∗C2
Example (mass/spring/damper)

(Figure: two-mass spring-damper system; the control force F1 acts on m1, the disturbance force F2 acts on m2; k1, b1 connect the two masses and k2, b2 connect m2 to the ground.)

F1 = m1ẍ1 + b1(ẋ1 − ẋ2) + k1(x1 − x2)
F2 = m2ẍ2 + b2ẋ2 + k2x2 + k1(x2 − x1) + b1(ẋ2 − ẋ1)

States: x1, x2, ẋ1 = x3, ẋ2 = x4
Inputs: F1 (control input), F2 (disturbance)
Outputs: x1, x2 (measured)
Parameters: m1 = 1, m2 = 2, k1 = 1, k2 = 4, b1 = 0.2, b2 = 0.1
Objective: Reduce the effect of the disturbance force F2 on x1 in the frequency range 0 ≤ ω ≤ 2.

[ẋ1; ẋ2; ẋ3; ẋ4] = [0 0 1 0; 0 0 0 1; −k1/m1 k1/m1 −b1/m1 b1/m1; k1/m2 −(k1+k2)/m2 b1/m2 −(b1+b2)/m2] [x1; x2; x3; x4]
                  + [0 0; 0 0; 1/m1 0; 0 1/m2] [F1; F2]

Example (mass/spring/damper)

Weighting filters:   Wu = (s + 5)/(s + 50),   W1 = 10/(s + 2),   Wn1 = Wn2 = (0.01s + 0.1)/(s + 100)

(Block diagram: the plant P has control input u = F1 and disturbance w1 = F2; the position x1 is weighted by W1 to give z1 and the control input by Wu to give z2; the measurements are y1 = x1 + Wn1 n1 and y2 = x2 + Wn2 n2, with sensor noises w2 = n1 and w3 = n2.)

x1(s) = P11(s)F1(s) + P12(s)F2(s)
x2(s) = P21(s)F1(s) + P22(s)F2(s)

Build the augmented plant:

[z1; z2; y1; y2] = [W1P12 0 0 W1P11; 0 0 0 Wu; P12 Wn1 0 P11; P22 0 Wn2 P21] [F2; n1; n2; F1]

Example (mass/spring/damper)
Matlab Codes:

Ap=[0 0 1 0;0 0 0 1;-1 1 -0.2 0.2;0.5 -2.5 0.1 -0.15];
Bp=[0 0;0 0;1 0;0 0.5];Cp=[1 0 0 0;0 1 0 0];Dp=[0 0;0 0];
P=ss(Ap,Bp,Cp,Dp);       % State-space model of the plant
Ptf=tf(P);               % Transfer function model
% Weighting filters:
W1=tf(5,[0.5 1]);Wu=tf([1 5],[1 50]);
Wn=tf([0.01 0.1],[1 100]);
% Augmented plant:
G=[W1*Ptf(1,2) 0 0 W1*Ptf(1,1);0 0 0 Wu;Ptf(1,2) Wn 0 Ptf(1,1);Ptf(2,2) 0 Wn Ptf(2,1)]
% Convert to a system matrix (appropriate for the Robust Control toolbox):
G1=ss(G);[A,B,C,D]=ssdata(G1);Gsys=pck(A,B,C,D);

Example (mass/spring/damper)
H∞ controller design
[K,T,gopt] = hinfsyn(Gsys,nmeas,ncon,gmin,gmax,tol);
nmeas=2: Number of controller inputs,
ncon=1: Number of controller outputs,
gmin=0.1, gmax=10 (for the bisection algorithm)

K Controller: 24 states, two inputs, one output


T Closed-loop system (between w and z ): 48 states, 3 inputs, two outputs
gopt=0.2311

Note that the order of the plant model was 4 !

H2 controller design
[K2,T2] = h2syn(Gsys,nmeas,ncon);
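pck and the six-argument hinfsyn call above come from the older mu-tools; with the current Robust Control Toolbox the same designs can be run directly on the state-space object G1 built earlier (a sketch, the γ search is then automatic):

nmeas = 2; ncon = 1;
[K, CL, gopt] = hinfsyn(G1, nmeas, ncon);   % H-infinity design, automatic gamma search
[K2, CL2]     = h2syn(G1, nmeas, ncon);     % H2 design for comparison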

Integral Control
How can we design an H∞ controller with integral action?

Introduce an integral in the performance weight W1 , then the transfer function


between z1 and w1 is given by:

Tzw = W1 (1 + P K)−1

Now if the resulting controller K stabilizes the plant P and makes ||Tzw||∞ finite, then K
must have a pole at s = 0.
Problem: H∞ theory cannot be applied to systems with poles on the imaginary axis.
Problem: H∞ theory cannot be applied to systems with poles on the imaginary axis.
Solution: Consider a pole very close to the origin in W1 (i.e. W1 = 1/(s + ǫ)) and solve
the H∞ problem. The resulting controller will have a pole very close to the origin,
which can be replaced by an integrator.
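A sketch of this workaround (the numerical value used for ǫ, called ep in the code, is an assumed choice):

s  = tf('s');
ep = 1e-4;
W1 = 1/(s + ep);   % near-integrator: keeps the generalized plant free of jw-axis poles
% ... build the augmented plant with this W1 and run hinfsyn as before; the resulting
% controller has a pole near s = -ep, which can then be replaced by a true integrator
% (for instance via the approximate cancellation K*(s + ep)/s).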

Model & Controller Order Reduction
Introduction: Robust controller design based on the H∞ method leads to very
high-order controllers. There are different methods to design a low-order controller:

• Design directly a fixed-order controller for a high-order plant (open problem)


• Reduce the plant model order and design a low-order controller for the low-order
model.

• Design a high-order controller and then reduce the controller order.


Model Reduction Techniques:

Classical Approaches: Zero-pole cancellation, omitting non-dominant poles and zeros, ...

Modern Approaches: Balanced model reduction, weighted balanced model reduction, Hankel norm model reduction, ...

Based on Identification: Time-domain approaches, frequency-domain approaches.


Model Order Reduction
Problem: Given a full-order model G(s), find a lower-order model (an r-th order model Gr(s)) such that
G and Gr are close in some sense (e.g. in the infinity-norm).

Additive model reduction: The additive model error ∆a = G − Gr is defined and minimized.
This problem can be formulated as   inf_{deg(Gr)≤r} ||G − Gr||∞

Multiplicative model reduction: The relative error ∆r = G⁻¹(G − Gr) is defined and minimized.
The problem can be formulated as   inf_{deg(Gr)≤r} ||G⁻¹(G − Gr)||∞

Frequency-weighted model reduction: In general, the requirement on the approximation accuracy
is different at different frequencies. This problem is formulated as

   inf_{deg(Gr)≤r} ||Wo(G − Gr)Wi||∞

Example: In model reduction for control purpose, the objective is to find a reduced order model such
that the closed-loop transfer functions are close to each other:

||S − Sr ||∞ = ||T − Tr ||∞ = ||U (G − Gr )Sr ||∞



Preliminaries
Consider the following LTI system:

G(s) = [A B; C D] = C(sI − A)⁻¹B + D

Theorem The following are equivalent:

1. (A,B) is controllable.

2. The controllability matrix C = [B AB A2 B . . . An−1 B] has full-row rank.


3. The eigenvalues of A + BF can be freely assigned by a suitable choice of F .

4. The solution P to the Lyapunov equation

AP + P A∗ + BB ∗ = 0

is positive definite (assuming A stable).
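Conditions 2 and 4 are easy to check numerically (a sketch with an assumed stable pair (A, B)):

A = [-1 1; 0 -2]; B = [0; 1];
rank(ctrb(A, B))        % = 2 (full rank), so (A,B) is controllable
P = lyap(A, B*B');      % solves A*P + P*A' + B*B' = 0 (A is stable)
min(eig(P))             % > 0: the controllability Gramian is positive definite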



Preliminaries
Theorem The following are equivalent:

1. (C,A) is observable.

2. The observability matrix O = [C CA CA2 . . . CAn−1 ]T has full-column rank.


3. The eigenvalues of A + LC can be freely assigned by a suitable choice of L.

4. The solution Q to the Lyapunov equation

A∗ Q + QA + C ∗ C = 0

is positive definite (assuming A stable).

Minimal realization: A state-space realization (A, B, C, D) of G(s) is a minimal realization of


G(s) if A has the smallest possible dimension.
Theorem: A state-space realization of G(s) is minimal if and only if (A, B) is controllable and
(C, A) is observable.
Example: For a SISO system, if there is a zero-pole cancellation the corresponding state-space
realization is not minimal, and either (A, B) is not controllable or (C, A) is not observable.
Compute a Minimal Realization

Lemma: Consider G(s) = [A B; C D]. Suppose that there exists a symmetric matrix

P = P∗ = [P1 0; 0 0]

with P1 nonsingular such that AP + PA∗ + BB∗ = 0.

Now partition the realization (A, B, C, D) compatibly with P as

G(s) = [A11 A12 B1; A21 A22 B2; C1 C2 D]

Then G(s) = [A11 B1; C1 D]. Moreover, (A11, B1) is controllable if A11 is stable.
Compute a Minimal Realization
Procedure:

1. Let G(s) = [A B; C D] be a stable realization.

2. Compute the controllability Gramian P ≥ 0 from AP + PA∗ + BB∗ = 0.

3. Diagonalize P to get P = [U1 U2] [P1 0; 0 0] [U1 U2]∗ with P1 > 0 and [U1 U2] unitary.

4. Then G(s) = [U1∗AU1 U1∗B; CU1 D] is a controllable realization.

Idea: Assume that P1 = diag(P11, P12) such that λmax(P12) ≪ λmin(P11); then one can
discard the weakly controllable states corresponding to P12 without causing much error.

Problem: The controllability (or observability) Gramian alone cannot give an accurate indication of the
dominance of the system states.
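A sketch of this procedure on a deliberately non-minimal example (the third state below is made uncontrollable on purpose; all values are assumed):

A = blkdiag([-1 1; 0 -2], -5); B = [1; 1; 0]; C = [1 0 1]; D = 0;
P = lyap(A, B*B');                        % controllability Gramian (step 2)
P = (P + P')/2;                           % symmetrize against rounding
[U, S] = eig(P);
[d, i] = sort(diag(S), 'descend'); U = U(:, i);
r  = sum(d > 1e-9*max(d));                % numerical rank of P (step 3)
U1 = U(:, 1:r);
Ar = U1'*A*U1; Br = U1'*B; Cr = C*U1;     % controllable realization (step 4)
norm(ss(A,B,C,D) - ss(Ar,Br,Cr,D), inf)   % ~0: same transfer function, lower order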
Balanced Realization
Balanced realization: A minimal realization of G(s) for which the controllability and observability
Gramians are equal is referred to as a balanced realization.

Lemma: The eigenvalues of the product of the Gramians are invariant under state transformation.

Proof: Consider that the state is transformed by a nonsingular T to x̂ = Tx. Then Â = TAT⁻¹,
B̂ = TB and Ĉ = CT⁻¹. Now using the Lyapunov equations we find that P̂ = TPT∗ and
Q̂ = (T⁻¹)∗QT⁻¹. Note that P̂Q̂ = TPQT⁻¹.
Remark: A transformation matrix T can always be chosen such that P̂ = Q̂ = Σ where
Σ = diag(σ1 Is1 , σ2 Is2 , . . . , σN IsN ). T and Σ are solutions to the following equations:
T AT −1 Σ + Σ(T −1 )∗ A∗ T ∗ + T BB ∗ T ∗ = 0
(T −1 )∗ A∗ T ∗ Σ + ΣT AT −1 + (T −1 )∗ C ∗ CT −1 = 0

Procedure: If (A, B, C, D) is a minimal realization with Gramians P > 0 and Q > 0, then find a
matrix R such that P = R∗R and diagonalize RQR∗ = UΣ²U∗. Now, let T⁻¹ = R∗UΣ^{-1/2};
then [TAT⁻¹ TB; CT⁻¹ D] is balanced.    [Ab,Bb,Cb,sig,Tinv]=balreal(A,B,C);
Balanced Truncation
Main Idea: Suppose σr ≫ σr+1 for some r. Then the balanced realization implies that those states
corresponding to the singular values σr+1, . . . , σN are less controllable and less observable than
those states corresponding to σ1, . . . , σr and can be truncated.

Theorem: Suppose G(s) = [A11 A12 B1; A21 A22 B2; C1 C2 D] ∈ RH∞ is a balanced realization with
Gramian Σ = diag(Σ1, Σ2) (Σ1 and Σ2 have no diagonal entries in common), where

Σ1 = diag(σ1Is1, σ2Is2, . . . , σrIsr)
Σ2 = diag(σr+1Isr+1, σr+2Isr+2, . . . , σNIsN)

σi has multiplicity si for i = 1, . . . , N, and the σi are decreasingly ordered. Then the truncated
system Gr ≡ (A11, B1, C1, D) is balanced and asymptotically stable. Furthermore,

||G − Gr||∞ ≤ 2(σr+1 + · · · + σN)    and    ||G − G(∞)||∞ ≤ 2(σ1 + · · · + σN)
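A quick numerical check of the error bound (a sketch on a random stable model; balreal and modred are Control System Toolbox functions):

G = rss(8);                                     % random stable 8th-order model
[Gb, sig] = balreal(G);                         % balanced realization and Hankel singular values
r  = 3;
Gr = modred(Gb, r+1:length(sig), 'Truncate');   % keep the first r balanced states
err   = norm(G - Gr, inf)
bound = 2*sum(sig(r+1:end))                     % theorem: err <= bound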



Balanced Truncation
Remarks:

Residualization: The truncated model does not have the same static gain as the original high-order model.
The residualization technique considers the DC contributions of the truncated states to modify the
system matrices (instead of truncating the states, their derivatives are forced to zero, so at
steady state the original and reduced-order models have the same gain).

Unstable systems: An unstable system can be factored as G(s) = Gst(s)Gunst(s); then only
the order of the stable part is reduced.

Frequency weighted balanced reduction: In this technique the stability of the reduced-order model
cannot be guaranteed. Moreover, an upper bound for the modeling error cannot be derived.

MATLAB Commands:
[Gb,sig]=sysbal(G);              % find a balanced realization Gb and the Hankel singular values sig
Gr=strunc(Gb,r);                 % truncate to the r-th order
Gres=sresid(Gb,r);               % truncation by the residualization technique
[WGb,sig] = sfrwtbal(Gb,w1,w2);
WGr=strunc(WGb,r);
Hankel Norm
Hankel singular values: The decreasingly ordered numbers, σ1 > σ2 > · · · > σN ≥ 0, are called
the Hankel singular values of the system.

Hankel norm: The largest Hankel singular value is called the Hankel norm of the system (||G||H = σ1).
It can be interpreted as the largest gain from past input energy to future output energy. The Hankel norm is not really a
norm, because ||G||H = 0 does not necessarily imply that G ≡ 0.
Hankel-Norm model reduction: This problem is to find Gr such that ||G − Gr ||H is minimal.
(MATLAB command: Gr=hankmr(Gbal,sigma,r);)
Upper bound on the modeling error: The Hankel norm of the modeling error is σr+1, and the upper
bound on the infinity norm of the modeling error is smaller than that of the truncation method:

||G − Gr||∞ ≤ σr+1 + · · · + σN

Theorem: Consider G(s) with Hankel singular values σ1 > σ2 > · · · > σN ≥ 0; then:

||G||H ≤ ||G||∞ ≤ ∫₀^∞ |g(t)| dt ≤ 2(σ1 + · · · + σN)

where g(t) = Ce^{At}B is the impulse response of G(s).
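A numerical illustration of these inequalities (a sketch on a random strictly proper stable model; hankelsv is a Robust Control Toolbox function):

G = rss(6); G.D = 0;                    % strictly proper stable example, as in the theorem
sig = hankelsv(G);                      % Hankel singular values
[max(sig) norm(G, inf) 2*sum(sig)]      % ||G||_H <= ||G||_inf <= 2*sum(sigma_i)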


Balanced Truncation
Example 1: Consider the mass/spring/damper example in H∞ control. A minimal balanced
realization of the augmented plant can be obtained by:
[Gb,sig] = sysbal(Gsys); plot(sig)

(Figure: Hankel singular values of the augmented plant; 8 values, the last two close to zero.)

• The minimal realization is of 8th order (plant order + filter orders).
• The last two singular values are very small, so the corresponding states can be truncated:

Gr=strunc(Gb,6)

Example 2: Consider the transfer function between the control force F1 and the filtered position z1 .

G41=pck(A,B(:,4),C(1,:),D(1,4)); [Gb,sig]=sysbal(G41); plot(sig)


Gr=strunc(Gb,2); Gres=sresid(Gb,2);
Balanced Truncation
Comparison of the frequency responses of the original and reduced-order models:

• The last 3 Hankel singular values are very small and the corresponding states can be truncated.

(Figure, left: Hankel singular values of G41, 6 values plotted. Figure, right: Bode magnitude plot comparing the original, truncated and residualized models over 10^-2 to 10^2 rad/sec.)

Modeling error: Note that σ4 + σ5 + σ6 = 1.797

||G − Gr||∞ = 1.706    ||G − Gres||∞ = 1.991    ||G − Ghank||∞ = 1.693
Controller Order Reduction
In model order reduction the aim is to reduce ||G − Gr ||∞ whereas in controller order reduction the
main issue is to preserve the stability and performance of the closed-loop system.

Stability: According to the small gain theorem, the closed-loop system with
the reduced-order controller Kr is stable if:

|| (Kr − K)P / (1 + KP) ||∞ < 1

(Block diagram: the controller error Kr − K acting around the nominal closed loop of K and P.)

Performance: To preserve the closed-loop performance, the difference between the high-order and
reduced-order closed-loop transfer functions should be minimized (frequency-weighted reduction):

min || 1/(1 + KP) − 1/(1 + KrP) ||∞ = min || (K − Kr)P / ((1 + KP)(1 + KrP)) ||∞

min || KP/(1 + KP) − KrP/(1 + KrP) ||∞ = min || (K − Kr)P / ((1 + KP)(1 + KrP)) ||∞
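A sketch of the stability test (the plant and controller below are assumed; balred is used for the reduction):

s  = tf('s');
P  = 1/(s + 1);                          % hypothetical plant
K  = 5*(s + 2)/((s + 1)*(s + 10));       % hypothetical full-order (stabilizing) controller
Kr = balred(K, 1);                       % reduced-order controller
norm((Kr - K)*P/(1 + K*P), inf)          % < 1 guarantees stability with Kr (the test is only sufficient)
isstable(feedback(P*Kr, 1))              % direct check of the loop with the reduced controller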



Positive Real Systems
Definition: The rational transfer function H(s) is positive real (PR) if

1. H(s) ∈ R ∀s ∈ R (The coefficients of s are real).


2. H(s) is analytic for Re[s] > 0.
3. Re [H(s)] ≥ 0, ∀s : Re[s] ≥ 0.
Strictly Positive Real Systems: H(s) is SPR if H(s − ǫ) is PR for some ǫ > 0.
Example: H(s) = 1/(s + λ) with λ > 0 is SPR.
H(s) = 1/s is PR but not SPR, and H(s) = −1/s is not PR.
Remark 1: H(s) is PR if (1) and (2) hold and
• any pure imaginary pole of H(s) is a simple pole with non negative residue,
• for all ω for which jω is not a pole of H(s), Re[H(jω)] ≥ 0.
Remark 2: H(s) is SPR if (1) holds and
• H(s) is analytic for Re[s] ≥ 0,
• Re[H(jω)] > 0 ∀ω .
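Remark 2 suggests a simple numerical check (a sketch on an assumed candidate; a frequency grid can only give evidence, not a proof):

s = tf('s');
H = 1/(s + 2);                           % candidate (SPR, as in the example above)
w = logspace(-3, 3, 1000);
re = real(squeeze(freqresp(H, w)));      % Re[H(jw)] on the grid
[isstable(H) all(re > 0)]                % both 1: consistent with H being SPR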
Strictly Positive Real Systems
Properties of SPR systems:

1. H(s) has no pole in RHP and on the imaginary axis.


2. The Nyquist plot of H(jω) lies strictly in right half complex plane.

3. The relative degree of H(s) is 0, 1 or -1.

4. H(s) is minimum phase (H(s) has no zero with Re [z] ≥ 0).


5. If H(s) is SPR, then 1/H(s) is also SPR. Similarly, if H(s) is PR, then 1/H(s) is PR.

6. If H1 (s) ∈ SPR and H2 (s) ∈ SPR then H(s) = α1 H1 (s) + α2 H2 (s) is SPR for
α1 ≥ 0, α2 ≥ 0 and α1 + α2 > 0.
7. If H1 (s) and H2 (s) are SPR then the feedback interconnection of two systems is also SPR.

H(s) = H1(s)/(1 + H1(s)H2(s)) = 1/(1/H1(s) + H2(s)) ∈ SPR

This property holds even if H1 (s) is PR.



Positive Real Lemma
Lemma (Kalman-Yakubovich-Popov): Consider a stable H(s) with the following minimal realization
(controllable and observable), where n is the number of states and m the number of inputs and outputs:

H(s) = [A B; C D] = C(sI − A)⁻¹B + D

• H(s) is PR if and only if there exist P ∈ Rn×n > 0, Q ∈ Rm×n and W ∈ Rm×m such that:

P A + AT P = −QT Q, P B = C T − QT W, W T W = D + D T

H(s) is SPR if in addition H̄(s) = W + Q(sI − A)−1 B has no zero on the imaginary axis.
• H(s) is SPR, that is H(jω) + H ∗ (jω) > 0 ∀ω , if there exist P ∈ Rn×n > 0,
Q ∈ Rm×n , W ∈ Rm×m and ǫ > 0 such that:.

P A + AT P = −ǫP − QT Q, P B = C T − QT W, W T W = D + D T



Passivity Theorem
Theorem (passivity): Consider the LTI system H(s) with a minimal realization (A, B, C, D)
interconnected with a sector nonlinearity φ(t, y) defined as:

φ(t, 0) = 0      ∀t ≥ 0
yᵀφ(t, y) ≥ 0    ∀t ≥ 0

Then the closed-loop system is globally and exponentially stable if H(s) is SPR.

Proof: We show that if H(s) is SPR the closed-loop system is stable. From positive real lemma, there
exist P > 0, ǫ > 0, Q, W such that:

P A + AT P = −ǫP − QT Q, P B = C T − QT W, W T W = D + D T

Now, we consider V (x) = xT P x as a Lyapunov function and we show that


V̇ (x) = ẋT P x + xT P ẋ < 0. (Note that u = −φ(t, y))
V̇ (x) = [Ax − Bφ]T P x + xT P [Ax − Bφ] = xT (AT P + P A)x − φT B T P x − xT P Bφ
V̇ (x) = −ǫV (x) − xT QT Qx − 2φT y − φT (D + D T )φ + 2φT W T Qx
V̇ (x) ≤ −ǫV (x) − [Qx − W φ]T [Qx − W φ] < 0
Positive Real Control
There are two different robust control problems known as positive real control problems:

Problem 1: Let H(s) ∈ {Hi(s) ∈ PR, i = 1, . . . , N}; then if the feedback controller K is SPR, the
closed-loop system is robustly stable.
Example: Consider a second-order system (the transfer function between velocity and force in a
position control system) with uncertain parameters ωn and ζ :

H(s) = sωn²/(s² + 2ζωns + ωn²),    ζ > 0, ωn > 0
Since H(s) is PR any SPR controller will stabilize the closed-loop system.

Problem 2: Consider H(s) in feedback with a sector nonlinear uncertainty represented by an LFT.
The robust control problem is to design a controller K such that Tde is SPR and a norm of Tzw is
minimized. (See "Solution to the positive real control problem for linear time-invariant systems",
IEEE TAC 39(10), 1994.)

(Block diagram: generalized plant H(s) with channels (w, z), (d, e) and (u, y); the sector nonlinearity φ closes the (e, d) loop and the controller K(s) closes the (y, u) loop.)



Positive Real Control
Relation between positive realness and the infinity norm: It can be shown that

H ∈ SPR ⇐⇒ ||(1 − H)/(1 + H)||∞ < 1

Proof:

||(1 − H)/(1 + H)||∞ < 1 ⇐⇒ |1 − H| < |1 + H| ∀ω
⇐⇒ (1 − Re{H})² + (Im{H})² < (1 + Re{H})² + (Im{H})² ∀ω
⇐⇒ 4Re{H} > 0 ∀ω ⇐⇒ H ∈ SPR

Note that if H is SPR then 1/(1 + H) and H/(1 + H) are stable, so (1 − H)/(1 + H) is stable.

Similarly, it can be shown that ||H||∞ < 1 ⇐⇒ (1 + H)/(1 − H) and (1 − H)/(1 + H) ∈ SPR

Proof: (⇒) If ||H||∞ < 1 then 1 + H and 1 − H are SPR, so (1 + H)/(1 − H) is also SPR.
(⇐) We have:

(1 − H)/(1 + H) ∈ SPR ⇒ ((1 − Re{H}) − jIm{H})/((1 + Re{H}) + jIm{H}) ∈ SPR
⇒ 1 − (Re{H})² − (Im{H})² > 0 ∀ω ⇒ ||H||∞ < 1
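A numerical sketch of this equivalence (H below is an assumed SPR example of relative degree 0):

s = tf('s');
H = (s + 3)/(s + 1);            % Re[H(jw)] = (3 + w^2)/(1 + w^2) > 0, so H is SPR
T = minreal((1 - H)/(1 + H));   % here T = -1/(s + 2)
norm(T, inf)                    % = 0.5 < 1, consistent with H being SPR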
Positive Real Control
Sector nonlinearity: The nonlinearity φ(α, β) is a sector nonlinearity if there are constants α, β, a
and b (with β > α and a < 0 < b ) such that (y is the input to φ):

αy 2 ≤ yφ(α, β) ≤ βy 2 ∀ t ≥ 0, ∀ y ∈ [a b]

Gain of φ: The memoryless sector nonlinearity φ(α, β) satisfies the following inequality:

||φ(α, β)||2 ≤ γ||y||2 ∀y ∈ R

where γ is the maximum of two-norm input-output gain of the system. It is clear that γ = |β|.
Small gain theorem: Consider the linear stable system H interconnected with a sector nonlinearity
φ(−1, 1) (γ = 1) then the closed loop system is stable if ||H||∞ < 1 (No restriction on the phase of
H but on its magnitude).
Remark: This theorem can be compared with the passivity theorem where the sector nonlinearity is
φ(0, ∞) and the stability condition is H(s) ∈ SPR (No restriction on magnitude of H but on its
phase).



Passivity and Small Gain Theorem
Equivalent transformations: Consider two systems H1 and H2 interconnected in negative feedback
(H1 in the forward path, H2 in the feedback path).

Additive transformation: the loop (H1, H2) is equivalent to the loop (H1 + 1, H2/(1 − H2)).



Passivity and Small Gain Theorem
Multiplicative transformation: the loop (H1, H2) is equivalent to the loop (γH1, H2/γ).

Feedback transformation: the loop (H1, H2) is equivalent to the loop (H1/(1 + H1), H2 − 1).



Passivity and Small Gain Theorem
From small gain to passivity: Apply the three transformations to the small gain loop (H, φ(−1, 1)):

Feedback transformation:        H → H/(1 − H),                                    φ(−1, 1) → φ(0, 2)

Multiplicative transformation:  H/(1 − H) → 2H/(1 − H),                           φ(0, 2) → φ(0, 1)

Additive transformation:        2H/(1 − H) → 2H/(1 − H) + 1 = (1 + H)/(1 − H),    φ(0, 1) → φ(0, ∞)
Passivity and Small Gain Theorem
Circle criterion: Consider the sector nonlinearity φ(α, β) in closed-loop with the linear system
H(s). The closed-loop system is stable if one of the following conditions is satisfied:
1. If 0 < α < β, the Nyquist plot of H(jω) does not enter the disk centered on the real axis which
passes through the points −1/α and −1/β (this disk will be called D(α, β)), and encircles it m times in the
counterclockwise direction, where m is the number of RHP poles of H(s) (the Nyquist criterion is a
special case where α = β = 1).

2. If 0 = α < β and H(s) is stable and the Nyquist plot of H(jω) lies to the right of the vertical
line defined by Re[s] = −1/β (Passivity theorem is a special case where β goes to infinity).

3. If α < 0 < β and H(s) is stable and the Nyquist plot of H(jω) lies in the interior of the disk
D(α, β) (the small gain theorem is a special case where α = −1, β = 1).

Remark: Using the three transformations, it can be shown that the stability condition for a closed-loop
system with sector nonlinearity φ(α, β) is (1 + βH)/(1 + αH) ∈ SPR.

