You are on page 1of 74

GR

LQG balancing in controller


design

Simon van Mourik

Department of
Mathematics
Master’s Thesis

GR

LQG balancing in controller


design

Simon van Mourik

University of Groningen
Department of Mathematics
P.O. Box 800
9700 AV Groningen June 2003
Contents

1 Introduction 3

2 Physical model 5
2.1 damping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 State space model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 Discretisation 9
3.1 Discretisation in space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 Modal approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.2 Finite-difference approximation . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Discretisation in time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3 Control and displacement costs . . . . . . . . . . . . . . . . . . . . . . . . . . 13

4 Robust stabilization for coprime factor perturbations 15


4.1 Robust Stabilization Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.1.1 Coprime factor perturbations as an H∞ -optimization problem . . . . . 17

5 Other robust stabilization problems 21


5.1 Internal stability and the Small Gain Theorem . . . . . . . . . . . . . . . . . 21
5.2 Robust stability with weighted perturbations . . . . . . . . . . . . . . . . . . 22
5.3 Construction of the central controller for a weighted plant . . . . . . . . . . . 24
5.3.1 A robust controller for a weighted plant . . . . . . . . . . . . . . . . . 26
5.3.2 Loop shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3.3 The gap metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.4 Robustness and the stability region . . . . . . . . . . . . . . . . . . . . . . . . 29

6 Aim of this project 31


6.1 The need for numerical research . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2 Construction of a low-order controller . . . . . . . . . . . . . . . . . . . . . . 31

7 LQG balanced truncations 35


7.1 A Schur method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

8 Test 1: approximation of the PDE 40


8.1 The frequency response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.2 Convergence of frequency response . . . . . . . . . . . . . . . . . . . . . . . . 40
8.3 Functional gains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

1
8.4 Time-delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

9 Test 2: Performance for a full-order controller 47


9.1 The stability diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.2 Case 1: variation of γ with input noise . . . . . . . . . . . . . . . . . . . . . . 48
9.3 Case 2: performance with weighting functions . . . . . . . . . . . . . . . . . . 48
9.4 Case 3: shifting of the stability region . . . . . . . . . . . . . . . . . . . . . . 51

10 Construction of a low-order controller: ’How low can you go?’ 53


10.1 Approximation of the nominal model . . . . . . . . . . . . . . . . . . . . . . . 53
10.2 First controller construction method . . . . . . . . . . . . . . . . . . . . . . . 55
10.3 Second controller construction method . . . . . . . . . . . . . . . . . . . . . . 57
10.4 Third controller construction method . . . . . . . . . . . . . . . . . . . . . . . 57

11 The choice of r vs. n 59

12 Conclusions 65

A Program description of ’Runbeam’ 69


A.1 Input options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.2 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.3 Postprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.4 Extension of Runbeam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Bibliography 72

2
Chapter 1

Introduction

It is often necessary to stabilize physical systems by a controller. For example, an aeroplane


that is kept on course by an automatic pilot, or a satellite that is kept in a certain orbit, shown
in figure 1.1. In this thesis, the physical plant is considered to be a flexible one-dimensional
beam in space. Bontsema ([2]) used beam equations to model the dynamics of a satellite in
a crude but illustrative manner.

Figure 1.1: The motion of a satellite with flexible panels is modelled by the motion of a flexible
beam.

Many systems are modelled as by means of a partial differential equation, which corresponds
to an infinite-dimensional state-space representation. Standard controller design is based on
a finite-dimensional state-space representation, so it is necessary to make a finite-dimensional
numerical approximation of a partial differential equation (PDE). This we call the original
or nominal system, and is usually high-dimensional (high-order) in order to encapture the
properties of the PDE accurately. (The finite-dimensional nominal system represents the
infinite-dimensional PDE). Standard controller design would result in a controller with the
same high order as the original (nominal) system that it is designed for. However, in practical
situations, a low-order controller is preferred for several reasons:

• Easier implementability.

• Higher reliability, because there are fewer things to go wrong in software or bugs to fix
in the hardware.

3
• Less computational effort is needed, allowing real-time application of the controller in
many situations.

There are several methods of obtaining a low-order controller for a high-order plant, for
example

(1) Design a low-order controller directly based on the high-order plant.

(2) Approximate the nominal high-order plant by a lower-order (coarse) numerical approx-
imation.

(3) Apply a model reduction procedure, for example by LQG balanced truncations. We
consider two cases here. Model reduction of

– the plant, and


– the controller.

In this thesis, we concentrate on the last two methods.


The performance objectives of the controlled plant can be formulated as the minimization
of a certain transfer function: a certain H∞ -optimization problem. Different performance
objectives correspond to different transfer functions.
The controller we use is the so-called central controller that robustly stabilizes a specific type
of uncertain plant, namely a coprime factor perturbed plant (chapter 4). In chapter 5 it is
shown that robust stabilization of this plant is a special case of another (more general) H∞
problem, that is, finding a controller that minimizes the transfer function from any suitably
chosen external input (these are disturbances) to any chosen output.
In this thesis, an answer is given to the questions:

• How well do LQG balanced truncations work compared to coarse numerical approxima-
tion?

• How many states does a controller require minimally?

It turns out that in practice, the second and third method both have their shortcomings. In
general, when designing a low-order controller, it seems best to use a combination of the two
methods: low-order numerical approximation and thereafter state reduction by LQG balanced
truncations. Chapter 11 provides a method that indicates what combination is most suitable
for a given plant and a given numerical approximation method.

4
Chapter 2

Physical model

As a physical model, we consider a flexible one-dimensional beam in space. Given a certain


initial condition (the beam is bended initially, and then released), the beam oscillates and
damps out to a certain equilibrium position, but not the right one. The aim is to move the
beam to the position where the middle is placed in the origin, and the beam lies perfectly
flat. In order to do so, thrust rockets are placed in the middle of the beam that can steer the
beam up, down, and tilt the beam w.r.t. the horizontal axis.
We derive a partial differential equation for a one-dimensional flexible Euler-Bernoulli beam
with structural damping. After the model is derived, we construct a state-space model. We
consider a beam with length 2, beam cross-sectional area a and mass density ρ. The vertical
displacement of the beam at time t at place x is w(x, t), see figure 2.1. in this thesis, all units
are standard S.I.

0
dx

−1 x 1
x
Figure 2.1:

In figure 2.2, we consider a part of the beam at point x and with length dx.
V is the shear force and M is the bending moment. The forces acting on this part of the
∂2w
beam should be in equilibrium: V − (V + ∂V ∂x dx) + ρadx ∂t2 = 0, which leads to:

∂V ∂2w
= ρa 2 . (2.1)
∂x ∂t
The bending moments are also in equilibrium, where it is assumed that there are no rotatory
inertia effects: −V dx − M + M + ∂M
∂x dx = 0, which gives:

5
ρ a dx δ___
w 2( x , t )
δt 2

M V V+ δ___
V dx δM dx
M+ ___
δx δx

dx

Figure 2.2:

∂M
V = . (2.2)
∂x
If σ denotes the stress in the beam and ² the strain and E the Young’s modulus of elasticity,
then the following normal stress-strain relation is assumed (Hooke’s law):

σ = E². (2.3)
The moment of inertia of the beam cross-section is given by:
Z
I = z 2 da. (2.4)
a
Here z is the distance from the neutral axis. When the vertical displacements of the beam
are small and if there is no shear deformation, the following relation for the angle of rotation
of the beam cross section φ holds:

∂w
= φ. (2.5)
∂x
The relation between the normal strain and the slope is:

∂φ
²=z . (2.6)
∂x
The lateral force on da is σda, so the bending moment acting on the cross section is:
Z
M = σz da. (2.7)
a
Combining equations 2.3-2.7, the bending moment can be expressed as:

∂2w
M = EI . (2.8)
∂x2
Equations 2.1, 2.2 and 2.8 now give the well known partial differential equation for the Euler-
Bernoulli beam without damping:

∂2w ∂4w
ρa + EI = 0. (2.9)
∂t2 ∂x4

6
The beam has free ends, so there are no shear forces and no bending moments at the ends,
which leads to the following boundary conditions:

∂3w ∂3w
(−1, t) = 0 , (1, t) = 0. (2.10)
∂x3 ∂x3

∂2w ∂2w
(−1, t) = 0 , (1, t) = 0. (2.11)
∂x2 ∂x2

2.1 damping
In this section we will introduce some damping in the system. We take a damping model
proposed by Voight (Voight-damping):

∂²
σ = E² + E ∗ ² . (2.12)
∂t
This relation means that due to the stress σ, the deformations of the beam, expressed by the
strain ², will not be instantaneous, as it takes some time before the beam has reached its final
deformation. Equation 2.8 is then replaced by:

∂2w ∗ ∂ w
3
M = EI + E I . (2.13)
∂x2 ∂t∂x2
If we apply equation 2.13, then equation 2.9 is replaced by:

∂2w ∗ ∂ w
5 ∂4w
ρa + E I + EI = 0. (2.14)
∂t2 ∂t∂x4 ∂x4
The boundary conditions 2.10 and 2.11 are replaced by:

∂2w ∗ ∂ w
3
(EI + E I )(±1, t) = 0. (2.15)
∂x2 ∂t∂x2

∂3w ∗ ∂ w
4
EI + E I (±1, t) = 0. (2.16)
∂x3 ∂t∂x3
We can write equation 2.14 as:

wtt + βAwt + αAw = 00 (2.17)


∂4
Where β = E ∗ I/ρa, and α = EI/ρa, A = ∂x4
. Further, equations 2.15 and 2.16 imply that

∂2w ∂2w −α t
(±1, t) = (±1, 0)e β , (2.18)
∂x2 ∂x2
∂3w ∂3w −α t
(±1, t) = (±1, 0)e β .
∂x3 ∂x3

The actuators are thrust rockets are placed in the middle of the beam in order to steer it
towards the right position in space. According to [2], the actuators are chosen to be a point
force in the middle and a point torque in the middle of the beam:

7
1
wtt + βAwt + αAw = (F δ − M (δ 0 ))0 (2.19)
ρa
Where δ is a delta distribution, and δ 0 = − ∂∂xδ .

2.2 State space model


Now that the physical model and the actuators and the sensors are determined, the model of
the beam with input and output is written in state-space representation:

∂w
= Aw + Bu (2.20)
∂t
y = Cw + Du, (2.21)

where u = [F M ]T , and

· ¸
w(x, t)
w = . (2.22)
wt (x, t)

Further,

· ¸
0 I
A = (2.23)
−αA −βA
· ¸
1 0 0
B =
ρa δ δ 0
· ¸
δ 0
C =
δ0 0
· ¸
0 0
D = .
0 0

8
Chapter 3

Discretisation

3.1 Discretisation in space


The state-space is infinite-dimensional, so for constructing a controller, a finite-dimensional
numerical approximation of the state-space is needed. Two types of approximation are used:

• Modal approximation;

• Finite-Difference (F-D) approximation.

The modal approximation converges very rapidly, as we shall see later on. However, in order
∂4
to derive a state-space form, the orthonormal modes of the operator ∂x 4 are required, which
cannot be computed for each PDE. Further, modal approximation is a so-called spectral
method, which is only suitable when solutions are smooth, i.e. have no steep gradients. For
the beam equations this is indeed the case. The F-D approximation converges less rapidly as
the modal one, but it is applicable to each type of PDE, and preferred when steep gradients
appear in the solution, for example when modelling a shockwave.
The state-space is discretised in space by both modal and F-D approximation, and dynamic
simulations are carried out with use of a time step discretisation.
2
First, an assumption is made in order to simplify the boundary conditions: ∂∂xw2 (±1, 0) =
∂3w
∂x3
(±1, 0) = 0, so according to equations 2.18, equations (2.15, 2.16) ⇒ (2.10, 2.11). This is
justified by the fact that α >> β, substituted in equation 2.18.

3.1.1 Modal approximation


The eigenvalues λi of operator A satisfy

∂ 4 w(x, t)
Aw(x, t) = = λ4i w(x, t), (3.1)
∂x4
and the boundary conditions 2.10 and 2.11 imply that the symmetric eigenvalues (eigenfre-
quencies) λi are positive solutions of

sinh(λi ) cos(λi ) + cosh(λi ) sin(λi ) = 0. (3.2)


The anti-symmetric positive eigenvalues satisfy

9
sinh(λi ) cos(λi ) − cosh(λi ) sin(λi ) = 0. (3.3)

The symmetric eigenvectors (modes) are x/ 2 and

cosh(λi ) + cos(λi ) cosh(λi x) − cos(λi x)


vi (x) = q {cos(λi x) + cos(λi ) } (3.4)
cosh(λi ) + cos(λi )
cosh2 (λi ) + cos2 (λi )
i = 3, 5, 7, ..
p
The anti-symmetric eigenvectors are 1/ (3/2) and

sinh(λi ) + sin(λi ) sinh(λi x) − sin(λi x)


vi (x) = q {sin(λi x) + sin(λi ) } (3.5)
sinh(λi ) + sin(λi )
sinh2 (λi ) + sin2 (λi )
i = 4, 6, 8, ..

The eigenvalues have the property that in equation 3.2 λi ↓ (i + 3/4)π, and in equation 3.3
λi ↑ (i + 1/4)π for i → ∞. When equations 3.2 and 3.3 are solved numerically, this is of use,
because for very large i, solving λi gives numerical problems.
These eigenvectors form a complete orthogonal basis on W = L2 (−1, 0) ⊕ L2 (0, 1) ⊕ R. We
are going to write down a matrix representation of the system w.r.t. this orthonormal basis.
Every w ∈ W has a representation

X
w(x, t) = wi (t)vi (x), (3.6)
i=1

where wi and vi satisfy


∞ ∞ ∞
X X X 1
wi00 (t)vi (x) +β wi0 (t)Avi (x) +α wi (t)Avi (x) = (F δ − M δ 0 ). (3.7)
ρa
i=1 i=1 i=1

Using Avi (x) = λ4i vi (x) this leads to:



X 1
{wi00 (t)vi (x) + βwi0 (t)λ4i vi (x) + αwi (t)λ4i vi (x)} = (F δ − M δ 0 ). (3.8)
ρa
i=1

The orthonormality of the eigenvectors means that the L2 (−1, 1) inner product satisfies:
Z 1 ½
0 i=6 j
vi (x)vj (x) dx = (3.9)
−1 1 i =j
so after multiplicaton by vj (x) for a fixed j and integration over x, equation 3.8 becomes:
1
wj00 (t)vj (x) + βwj0 (t)λ4i vj (x) + αwj (t)λ4i vj (x) = (F (t)vj (0) − M (t)vj0 (0)). (3.10)
ρa
The state-space form is now
· ¸
A B
ΣG ≡ , (3.11)
C D

10
where

 
0 1 0 0 ···

 −αλ41 −βλ41 0 0 ··· 

A =  0 0 0 1 ···  (3.12)

 0 0 −αλ42 −βλ42 

.. .. .. ..
. . . .
 
0 0
 v1 (0) v10 (0) 
 ρa ρa 
 
B =  0 0 
 v2 (0) v2 (0) 
0
 
 ρa ρa 
.. ..
. .
· ¸
v1 (0) 0 v2 (0) 0 · · ·
C =
v10 (0) 0 v20 (0) 0 · · ·
· ¸
0 0
D = ,
0 0

and the corresponding state-vector has the form

 
w1 (t)

 w1t (t) 


w =  w2 (t) 
. (3.13)

 w2t (t) 

..
.

3.1.2 Finite-difference approximation


Consider the following Taylor-expansions:

δx2 00 δx3 000


w(x + δx, t) = w(x, t) + δxw0 (x, t) + w (x, t) + w (x, t) + O(δx4 ) (3.14)
2 3!
δx2 00 δx3 000
w(x − δx, t) = w(x, t) − δxw0 (x, t) + w (x, t) − w (x, t) + O(δx4 ) (3.15)
2 3!
Adding these derivatives gives,

∂2w w(x − δx) − 2w(x) + w(x + δx)


= + O(δx2 ) (3.16)
∂x2 δx2
In general, every order of derivative can be obtained in this way. This is called Finite-
Difference approximation. Klompstra found in [5] that
Z 1
2
δx
f (0) = f (x)δ(x) dx = δ(0)f (0)δx + O(δx2 ), (3.17)
− 21 δx

11
Where δ(0) is the Dirac delta function in 0. This is discretized as
½ 1
δ(0)f (x) = δx f (0) + O(δx) x = 0
(3.18)
0 x=6 0

and

δ(δx) − δ(−δx)
δ 0 (0) = + O(δx) (3.19)
2δx
Alltogether, the state-space matrices are (with n is odd):

 
0 0
 .. .. 
 . . 
 −1

· ¸  0  ← n + (n + 1)/2 − 1
0 I 1  2δx2 
A = , B=  1 0  ← n + (n + 1)/2 (3.20)
−αA −βA ρa  δx 
 0 1  ← n + (n + 1)/2 + 1
 2δx2 
 .. .. 
 . . 
0 0
· ¸ · ¸
0 ··· 0 1 0 ··· 0 0 0
C = , D= , (3.21)
0 ··· −1/2δx 0 1/2δx · · · 0 0 0

where

 
−1 2 −1

 2 −5 4 −1 

 −1 4 −6 4 −1 
1 
 .. .. .. .. .. 

A= 4 . . . . .  . (3.22)
δx  

 −1 4 −6 4 −1  
 −1 4 −5 2 
−1 2 −1

The corresponding state-vector is:

 
w(x1 , t)
 .. 

 . 

 w(xn , t) 
 wt (x1 , t)  .
w =   (3.23)
 
 .. 
 . 
wt (xn , t)

Here, xi denote the discretised spatial points.

12
3.2 Discretisation in time
Let w be the state-vector of the beam and wc the state-vector of the controller. Ac , Bc , Cc , Dc
are the system controller matrices. d and di are output and input disturbances respectively
(see chapter 5). The time-discretisation scheme looks as follows:

wn+1 − wn
= θAwn+1 + (1 − θ)Awn + Bun+1 (3.24)
δt
y n+1 = Cwn+1 + Dun+1 + d
wn+1
c − wnc
= θAc wn+1
c + (1 − θ)Ac wnc + Bc y n+1
δt
un+1 = Cc wn+1
c + Dc y n+1 + di

θ denotes the implicity of the time-integration. θ = 0 means explicit integration. The


advantage of choosing θ = 0.5 is that, in contrast with other choices, this one gives a precision
of O(δt2 ) instead of O(δt), so this semi-implicit (or -explicit) time-integration will be used in
the following. The system given in equation (3.24) can be written as:

· ¸ · ¸ · ¸
wn+1 wn Bdi
= UV +U , (3.25)
wn+1
c wnc Bc d

where

· ¸
(I − θ)A + I/δt 0
V = , (3.26)
0 (I − θ)Ac + I/δt
· ¸−1
I/δt − θA − BDc C −BCc
and U = .
−Bc C I/δt − θAc − Bc DCc

With modal approximation, w = w(t) is computed (equation 3.6) instead of w(x, t). Given
the initial state w(x, 0), w(0) is determined by equation (3.6) and the simulation starts. For
computation of the graphical output and costs, w(x, t) is obtained from w(t) by the same
equation.
Numerical stability is guaranteed if max(|λ(U V )|) < 1 for all eigenvalues λ of V . (The
program gives a message if this is not the case.)

3.3 Control and displacement costs


In order to check how well a controller works, the quality of a controller is defined by something
we call controller performance. The controller performance consists of costs and robustness.
The robustness is defined later on. The costs consist of the control costs Ju and the displace-
ment costs Jd , and are here defined as
Z T Z T
Ju = (|F (t)| + |M (t)|) dt and Jd = |w(0, t)| dt, (3.27)
0 0

13
with T the simulation time, F the force in the middle, and M the torque in the middle. The
costs give an indication of the performance of a given controlled system w.r.t. the controller
output, and the displacement of the middle of the beam. If the controller uses little force in
order to stabilize the beam, the control costs are low. If the controller steers the beam into
the right position in small time, the displacement costs are low. The integrals are computed
with the trapezium rule:
Z T N −1
X f (ti ) + f (ti+1 )
f (t) dt = δt + O(δt2 ), (3.28)
0 2
i=0

where T = N δt, and ti = iδt.

14
Chapter 4

Robust stabilization for coprime


factor perturbations

In this chapter a controller is constructed for a particular case of the RH∞ optimization
problem, namely for a plant with a transfer function G that is written in terms of normalized
coprime factors. The controller stabilizes the plant robustly with respect to normalized co-
prime factor perturbations of the transfer function. For the details of the theory we refer to
[7] and [6]. Plant ΣG and the controller ΣK have the following state-space representations:
· ¸ · ¸
A B Ac Bc
ΣG = , and ΣK = , (4.1)
C D Cc Dc

where G(s) = C(sI − A)−1 B + D. We assume here that D = 0, because this is the case for
our model. A lower linear fractional transformation (LLFT) will be denoted by

· ¸
P11 P12
FL ( , K) := P11 + P12 K(I − P22 K)−1 P21 . (4.2)
P21 P22

−1
Alternatively, if P21 exists, an LFT can also be denoted by:

ΓP [K] = (P11 K + P12 )(P21 K + P22 )−1 (4.3)

Definition 1 Matrices M̄ , N̄ ∈ RH∞ constitute a Left Coprime Factorization (LCF) of G


if and only if

1. M̄ is invertible

2. G = M̄ −1 N̄

3. There exists V ,U ∈ RH∞ such that

M̄ V + N̄ U = 1 (4.4)

An arbitrary large number of LCF’s can be generated for a single system ΣG . A particular
left coprime factorization of G(s) is one in which the factors N̄ ,M̄ are normalized:

15
Definition 2 A left coprime factorization of G as defined above is normalized if and only if

N̄ N̄ + + M̄ M̄ + = I for all s = iω (4.5)


or equivalently, if and only if the matrix [N̄ , M̄ ] is co-inner.(G+ (s) = Gt (−s), and G is inner
if Gt is stable).

4.1 Robust Stabilization Problem


We now consider a particular robust stabilization problem, which is formulated in terms of
the normalized left coprime factorization representation of the nominal system G(s).
Let the nominal plant model have a normalized left coprime factorization N̄ ,M̄ such that

G = M̄ −1 N̄ . (4.6)

Then the coprime factor perturbed system can be written as

GM = (M̄ + MM )−1 (N̄ + MN ) (4.7)

where MN ,MM are stable unknown transfer functions which represent the uncertainty in the
nominal system. The robust design objective is to stabilize not only the nominal system G,
but the family of perturbed systems defined by

Gε = {(M̄ + MM )−1 (N̄ + MN ) : k[MM , MN ]k∞ < ε} (4.8)

using a feedback controller K. See figure 4.1.

∆N ∆M

_ − _ −1
e1 N M
u1

K u2 e2

Figure 4.1: Coprime factor perturbations

Some preliminary definitions will now be given for this particular problem.

Definition 3 The feedback system of figure 4.1 (MM =MN = 0) will be denoted by (G, K) and
called internally stable if and only if

1. det(I − GK)(∞) 6= 0

16
2. (I − GK)−1 , K(I − GK)−1 ,(I − GK)−1 G,
I + K(I − GK)−1 G ∈ RH∞ .

Definition 4 The feedback system of figure 4.1 denoted by (G, K, ε) is robustly stable if and
only if (GM, K) is internally stable for all GM ∈ Gε .

The maximum value of ε whilst retaining stability is called the robustness margin for this
problem. Hence ε is a limitation on the ’size’ of perturbation that can exist without desta-
bilizing the closed-loop system of figure 4.1 Further, if there exists a K such that (G, K, ε)
is robustly stable, then (G, ε) is said to be robustly stabilizable with robustness margin ε.
Necessary and sufficient conditions for robust stability will now be stated, and then it will be
shown that this problem fits neatly into the standard H∞ framework.

Lemma 1 The feedback system (G, K, ε) is robustly stable if and only if (G, K) is internally
stable and
°· ¸°
° K(I − GK)−1 M̄ −1 °
°
° (I − GK)−1 M̄ −1 °
° ≤ ε−1 . (4.9)

Therefore, (G, ε) is robustly stabilizable if and only if


°· −1 −1 °
¸°
inf °° K(I − GK) M̄ ° ≤ ε−1 , (4.10)
K ° (I − GK)−1 M̄ −1 °∞

where the infimum is chosen over all stabilizing controllers K.

We also have that

°· ¸° °· ¸ °
° K(I − GK)−1 M̄ −1 ° ° K(I − GK)−1 M̄ −1 £ ¤°
°
° (I − GK)−1 M̄ −1 °
° = ° N̄ M̄ °

° (I − GK)−1 M̄ −1 °

°· ¸ °
° K −1
£ ¤°
(since [N̄ M̄ ] is normalized) = ° (I − GK) G I °
° I °

= FL (P, K)k∞

with P given by equation 4.11.

4.1.1 Coprime factor perturbations as an H∞ -optimization problem


Equation 4.10 is in the form of an H∞ -optimization problem, which allows ε−1 to be chosen
as small as possible. The problem stated above can be converted to the standard form of an
H∞ -optimization problem. Let

 
· ¸ 0 I
P11 P12
P := =  M̄ −1 G  (4.11)
P21 P22
M̄ −1 G

and G∆ = FL (P, ∆), with ∆ = [∆M ∆N ].

17
z w
P

y u

K
FL (P,K)

Figure 4.2: Standard block diagram.

Then 4.10 is equivalent to

inf
kFL (P, K)k∞ ≤ ε−1 (4.12)
K

where K is chosen over all stabilizing controllers. (P is referred to as the Standard H∞ -


Plant). P and K can be placed into a standard block diagram, see figure 4.2, and the optimal
controller K minimizes the transfer function Tzw from w to z. In the next chapter, we show
that for a certain choice of w and z, the corresponding transfer function Tzw equals FL (P, K)
in (4.12).

Definition 5 X is a solution to the Control Algebraic Riccati Equation (CARE) if and only
if it satisfies

AX + X ∗ A − XBB ∗ X + C ∗ C = 0, (4.13)
and Z is a solution to the Filter Algebraic Riccati Equation (FARE) if and only if it satisfies

AZ ∗ + ZA − ZC ∗ CZ + BB ∗ = 0. (4.14)

Theorem 1 There exists a controller ΣK such that the family of feedback systems of figure
4.1 (G, K, ε) is robustly stable if and only if

ε2 ≤ {1 + λmax (ZX)}−1 = 1 − k[N̄ , M̄ ]k2H , (4.15)

where X, Z > 0 are unique solutions to (CARE) and (FARE) respectively and M̄ −1 N̄ is a
normalized LCF of G = C(sI − A)−1 B.

Corollary 1 The optimal robustness margin for this problem is equal to

εmax = {1 + λmax (ZX)}−1 . (4.16)

Lemma 2 All controllers for the normalized LCF robust stabilization problem satisfying
kFL (P, K)k∞ ≤ γ, for γ > (1 − k[N̄ , M̄ ]k2H )−1/2 , are given by:

ΣK = ΓL [Φ], (4.17)

18
where L has the state-space form:
 
· ¸ Ac −γ 2 W1∗ −1 B γ 2 ξ −1 W1∗ −1 ZC ∗
L11 L12
L= =  F I 0  (4.18)
L21 L22
C 0 I

and has the transfer function


· ¸
I − F (sI − Ac )−1 γ 2 W1∗ −1 B F (sI − Ac )−1 γ 2 ξ −1 W1∗ −1 ZC ∗
(4.19)
−C(sI − Ac )−1 γ 2 W1∗ −1 B I + C(sI − Ac )−1 γ 2 ξ −1 W1∗ −1 ZC ∗

where ξ = (γ 2 − 1)1/2 ,

W1 := I + (XZ − γ 2 I), (4.20)


m×p
Φ ∈ RH∞ with kΦk∞ ≤ 1, Z, X solutions of the (FARE) and (CARE) respectively,

Ac = A + BF (4.21)

and

F = −B ∗ X. (4.22)

A particular controller is given by:

ΣK0 = L12 L−1


22 (4.23)

which corresponds to Φ = 0, and is the central or maximum entropy controller for the selected
tolerance level. The state space formulae for K0 are now given.

Corollary 2 The central controller for tolerance level γ > (1−k[N̄ , M̄ ]k2H )−1/2 , has the state
space form
· c ¸
A − ξ −1 W1∗ −1 ZC ∗ C γ 2 W1∗ −1 ZC ∗
ΣK0 = (4.24)
B∗X 0

where X (resp. Z) solves (CARE) (resp. (FARE)), and F, Ac and W1 follow from equations
(4.22), (4.21), and (4.20).

Recapitulating, the state-space form of the central controller for the coprime factor perturbed
system is given by

Ac = A + BF + W −1∗ ZC ∗ Cγ 2 , (4.25)
−1∗ ∗ 2
Bc = W ZC γ ,

Cc = B X,
Dc = 0,

with

19
l = max{λ(XZ)}, (4.26)

²max = 1/ 1 + l,
² = 1/γ,

γopt = 1 + l,

1+l
γ = ,
ζ
F = −B ∗ X,
W = XZ + (1 − γ 2 )I,

where X (resp. Z) solves (CARE) (resp. (FARE)). Parameter ζ lies in the interval [0, 1].
Choosing ζ = 1 gives an optimal controller in theory, but in practice W would become
uninvertible (ζ = 0 also gives numerical problems). So ζ should be chosen a bit smaller,
which gives a so-called sub-optimal controller. A special case of the central controller with
²max = 0 is the LQG controller. Substitution of ²max = 0 into equations 4.26 yields

Ac = A − BB ∗ X − ZC ∗ C (4.27)

Bc = ZC
Cc = −B ∗ X.

The LQG controller has no guaranteed robustness margin, so the robustness will have to be
determined numerically.

We have established a method to construct a central controller for a coprime factor perturbed
plant. A certain choice of robustness margin results in the well-known LQG controller. We
also noted that designing a controller for a coprime factor perturbed plant can be written as
a special choice of the more general H∞ -optimization problem.

20
Chapter 5

Other robust stabilization problems

In the previous chapter, a central controller was derived for a particular H∞ problem, namely
for a normalized coprime factor perturbed system. In this chapter, a more general approach is
used in order to derive a central controller for the more general case. The underlying theory is
far too extensive to be covered in this thesis. However, in order to illustrate the general idea
behind it, internal stability is considered once more, together with the small gain theorem.
The system is assumed to be minimal.

5.1 Internal stability and the Small Gain Theorem


Internal stability of a plant means that all states of the plant are exponentially stable. Con-
sider figure 5.1, where a system ΣM ∈ RH∞ interconnected with system Σ∆ ∈ RH∞ . In-
ternal stability makes sure that the disturbances w1 , w2 ∈ RH∞ cannot destabilize the two
interconnected systems.

w1 e1

w2
M e2

Figure 5.1: Internal stability diagram.

Theorem 2 The system in figure 5.1 is internally stable if and only if the system is well-
posed, and the transfer function
· ¸−1
I −∆
(5.1)
−M I

from (w1 , w2 ) to (e1 , e2 ) belongs to RH∞ .

21
The Small Gain Theorem states under what conditions the interconnected system in figure
5.1 is internally stable.

Theorem 3 (Small gain theorem)


Suppose M (s) ∈ RH∞ and let γ > 0. Then the interconnected system shown in figure 5.1 is
well-posed and internally stable for all ∆(s) ∈ RH∞ with

(a) kM k∞ ≤ 1/γ ⇐⇒ k∆k∞ < γ

(b) kM k∞ < 1/γ ⇐⇒ k∆k∞ ≤ γ.

5.2 Robust stability with weighted perturbations

z w
H

y u

Figure 5.2: Standard block diagram.

z w

H
H∆

y u

F (H,K)
L

Figure 5.3: Generalized uncertainty model.

22
Consider the system described by the block diagram in figure 5.2. ΣH is called the generalized
plant, w(t) and u(t) inputs, and y and z outputs. For example, with

 
0 I
H =  M̄ −1 G  (5.2)
M̄ −1 G
as in equation (4.11) this becomes the problem considered in the previous chapter.
The problem is finding a controller that internally stabilizes the interconnected system and
minimizes kTzw k∞ , the transfer function from w to z. (Usually, w is a disturbance vector).
Notation: FL (H, K) = Tzw , see also figure 5.2. We are interested in a controller that stabilizes
not only H but a whole class of perturbed H, namely H∆ , with

· ¸
∆1 ∆2
H∆ = H + , (5.3)
0 0
that contains the nominal plant H (so ∆ = 0). See figure 5.3. This is called the Generalized
Uncertainty Model. It is hard to determine physically with what perturbations of G (the
transfer function of the physical plant), ∆1 and ∆2 correspond.
The following definition and theorem state for what set of perturbations, and under what
conditions, K still stabilizes the interconnected system.

Definition 6 A permissible perturbation ∆, is one such that ∆ ∈ D² where


D² ≡ {∆ : ∆ ∈ RH∞ ; k∆k∞ < ²}. (5.4)

Theorem 4 K stabilizes FU (H, ∆) in figure 5.3 for all ∆ ∈ D² if and only if:
(a) K stabilizes FU (H, 0),
(b) kFL (H, K)k∞ ≡ γ ≤ ²−1 .

The preceding theorem can be proven with the Small Gain Theorem, which states that
kFL (H, K)k∞ · k∆k∞ < 1 is sufficient and necessary for internal stability of the generalized
uncertainty model, see figure 5.3. So here, M is taken to be equal to FL (H, K). Given a
plant H, we look for a controller that minimizes kFL (H, K)k∞ , and has a robustness margin
²max as large as possible:

Corollary 3 ’The Robust Stabilization Problem’


The largest strictly positive number ² = ²max , such that for all ∆ ∈ D² there exists a single
controller which stabilizes FU (H, ∆) is given by:
²max = (inf kFL (H, K)k∞ )−1 , (5.5)
K

where K is chosen over all controllers which internally stabilize FU (H, 0) .

The controller achieving this robustness margin is called an optimally robust, or optimal
controller. Notation: in case K is an optimal controller, γ = γopt , ² = ²max . For any other
(suboptimal) controller the inequality ² ≤ ²max holds. Here, ² is the maximum infinity norm
of plant perturbations in D² that are stabilized by some controller K. This leads to the
question of how to find an optimal controller:

23
Definition 7 ’The H∞ Optimization Problem’
Find
inf kFL (H, K)k∞ = γopt (5.6)
K

where K is chosen over all controllers which internally stabilize FU (H, 0) .

The robustness margin is conservative: For ² > ²max there exists a ∆ ∈ D² for which the
interconnected system is not internally stable, but for an arbitrary ∆ ∈
/ D² the system can be
stable because ∆ is large in some frequency range where it cannot ’harm’ the system. (These
are typically high frequencies). In other words: the set of perturbations that are stabilized is
generally larger than the class of admissible perturbations.

5.3 Construction of the central controller for a weighted plant


Any controller that is constructed, has to internally stabilize the system, and therefore this
is called this an admissible controller. Further, there are some objectives, encaptured in the
choice of weighting functions that form H. We construct a controller that minimizes kTzw k∞
in figure 5.2. The realization of the transfer matrix H is taken to be of the form
 
A B1 B2 · ¸
A B
ΣH = C1 D11 D12 =
  (5.7)
C D
C2 D21 0
which is compatible with the dimensions of z(t) ∈ Rp1 , y(t) ∈ Rp2 ,w(t) ∈ Rm1 ,u(t) ∈ Rm2 ,
and the state x(t) ∈ Rn . The following assumptions are made:

(A1) (A, B2 ) is stabilizable and (C2 , A) is detectable;

(A2) · ¸
0 £ ¤
D12 = and D21 = 0 I ; (5.8)
I

(A3) · ¸
A − jωI B2
has full column rank for all ω; (5.9)
C1 D12

(A4) · ¸
A − jωI B1
has full row rank for all ω. (5.10)
C2 D21

Let X be a solution of the following Riccati equation

A∗ X + XA + XRX + Q = 0, (5.11)
Then the Hamiltonian matrix associated with this Riccati equation is defined as
· ¸
A R
H= , (5.12)
−Q −A∗
where X =Ric(H) means that X satisfies equation 5.11. Further, dom(Ric) denotes the
domain of Ric.

24
Define
· ¸
∗ γ 2 Im1 0 £ ¤
R := D1• D1• − , where D1• = D11 D12 (5.13)
0 0
· 2 ¸ · ¸
∗ γ Ip1 0 D11
R̃ := D•1 D•1 − , where D•1 =
0 0 D21
· ¸ · ¸
A 0 B £ ∗ ¤
H∞ := ∗ ∗ − ∗ R−1 D1• C1 B ∗
−C1 C1 −A −C1 D1•
· ∗
¸ · ¸
A 0 C∗ £ ¤
J∞ := ∗ − ∗ R̃−1 D•1 B1∗ C
−B1 B1 −A −B1 D•1
X∞ := Ric(H∞ ) Y∞ := Ric(J∞ )
· ¸
F1∞
F := := −R−1 [D1• ∗
C1 + B ∗ X∞ ]
F2∞
£ ¤ ∗
L := L1∞ L2∞ := −[B1 D•1 + Y∞ C ∗ ]R̃−1

Partitions D, F1∞ , and L1∞ are as follows:

 ∗ ∗ ∗ 
· ¸ F11∞ F12∞ F2∞
F0  L∗
11∞ D1111 D1112 0 
= . (5.14)
L0 D  L∗
12∞ D1121 D1122 I 
L∗2∞ 0 I 0

Theorem 5 Suppose H satisfies assumptions (A1 − A4 ).

(a) There exists an admissible controller K(s) such that kTzw k∞ < γ if and only if

– γ >max(σ̄[D1111 , D1112 ],σ̄[D1111 ∗
, D1121 ]);
– H∞ ∈ dom(Ric) with X∞ =Ric(H∞ ≥ 0);
– J∞ ∈ dom(Ric) with Y∞ =Ric(J∞ ≥ 0);
– ρ(X∞ Y∞ ) < γ 2 .

(b) Given that the conditions of part (a) are satisfied, then all rational internally stabilizing
controllers K(s) satisfying kTzw k∞ < γ are given by

K = FL (M∞ , Φ) for arbitrary Φ ∈ RH∞ such that kΦk∞ < γ (5.15)


where
 
 B̂1 B̂2
M∞ =  Cˆ1 D̂11 D̂12 
Cˆ2 D̂21 0
∗ ∗
D̂11 = −D1121 D1111 (γI − D1111 D1111 )−1 D1112 − D1122 ,

25
D̂12 ∈ Rm2 ×m2 and D̂21 ∈ Rp2 ×p2 are any matrices (e.g. choleski factors) satisfying


D̂12 D̂12 = I − D1121 (γ 2 I − D1111

D1111 )−1 D1121

− D1122

D̂21 ∗
D̂21 = I − D1112 (γ 2 I − D1111 D1111

)−1 D1112
and
B̂2 = Z∞ (B2 + L12∞ )D̂12
Cˆ2 = −D̂21 (C2 + F12∞ )
−1
B̂1 = −Z∞ L2∞ + B̂2 D̂12 D̂11 ,
ˆ −1 ˆ
C1 = F2∞ + D̂11 D̂21 C2
 = A + BF + B̂1 D̂−1 Cˆ221
where
Z∞ = (I − γ 2 Y∞ X∞ )−1 .

The choice of Q = 0 results in what is called the central controller for the general H∞
problem. The Matlab function ’hinfsyn’ obtains a controller by solving X∞ and Y∞ repeatedly
for different sub-optimal γ, until γ − γopt < δ for some chosen δ. This iterative procedure is
computationally expensive, especially for large systems. Of course, γ > γopt , but the resulting
γ from this command is arbitrarily close to γopt . (The program has an extra option of choosing
γ
γ, namely γ = opt ζ , with ζ ∈ (0, 1), in order to prevent eventual numerical problems, or for
other goals.) If assumption (A2) is not satisfied, but D12 and D21 have full column rank and
full row rank respectively, a normalizing procedure can be constructed.
In the next section, H is derived from G by means of suitable weighting functions.

5.3.1 A robust controller for a weighted plant


As an illustrative example, we consider perturbations on the input and output of the plant.
Suppose that G obeys the laws of the feedback configuration given in figure 5.4,

di d

r u ud y
K G

Figure 5.4: Standard feedback configuration. In our case r = n = 0.

then the transfer function from w to z is given by:


· ¸
(I − GK)−1 G (I − GK)−1
Tzw = = FL (G, K), (5.16)
K(I − GK)−1 G K(I − GK)−1

with

26
· ¸ · ¸ · ¸ · ¸
z1 y w1 di
= , and = . (5.17)
z2 u w2 d
This is the exact same transfer function that is minimized by the central controller for the
coprime factor perturbed system. In general, this is not the transfer function one wants to
minimize, because the two disturbances may not be equally ’important’ so we do not want to
minimize their influence on the system equally. The disturbances are therefore weighted, as
shown in figure 5.5.
~ ~
u~ di d

V1 W1 W2
di d
u ud y y~
K G V2

Figure 5.5: Feedback configuration with weights. H denotes the generalized plant.

We do now have a weighted feedback configuration For the sake of simplicity, W1 , W2 , V1


and V2 are considered to be constant matrices. In general, the weights may be chosen to be
frequency-dependent. From figure 5.5, the following transfer function is obtained
· ¸
V1 K(I − GK)−1 GW1 V1 K(I − GK)−1 W2
Tzw = = FL (H, K), (5.18)
V2 (I − GK)−1 GW1 V2 (I − GK)−1 W2
with · ¸ · ¸ · ¸ · ¸
z1 ỹ w1 d˜
= , and = (5.19)
z2 ũ w2 d̃i
Further, the LFT framework is established by choosing
 
0 0 V1
H =  V2 GW1 V2 W2 V2 G  (5.20)
GW1 W2 G
in equation 5.18. The state equations of the extended plant H can be written as

ẋ = Ax + B1 w + B2 u
z = C1 x + D11 w + D12 u
y = C2 x + D21 w + D22 u (5.21)

So a state-space representation of H is

27
 
A B1 B2
ΣH =  C1 D11 D12  (5.22)
C2 D21 D22
with

A = A (5.23)
£ ¤
B1 = BW1 0
B2 = B
· ¸
0
C1 =
V2 C
C2 = C
· ¸
0 0
D11 =
V2 DW1 V2 W2
· ¸
V1
D12 =
V2 D
£ ¤
D21 = DW1 W2
D22 = D.
In general, the weighting functions belong to RH∞ , are frequency-dependent, and are written
in state-space form, and hence they have state-space variables. Obviously, equation 5.23 is
then of a more complicated form. Also, these variables should be taken into account in
numerical simulations.
Unstructured uncertainties in the system, e.g. additive, allow a theoretical robust stability
test under various assumptions. One can adjust the weighting functions to those assumptions,
but there are some drawbacks: Sometimes, the type of uncertainty is unknown, as explained
in section 5.4. Also, the robust stability objective is hard to formulate in terms of a norm
². It is also very difficult to to predict the effect of different weights. Further, application of
weighting functions do not give much insight in the robustness and costs quantitatively.
These are all reasons to perform numerical research in order to visualize the performance
of a controller as a function of weighting functions. We can choose weights to optimize
control/displacement costs, and the stability region. The quantitative (actual) results can in
general only be obtained by numerical computations.

5.3.2 Loop shaping


Loop shaping is a well-known tool to enhance closed-loop properties by replacing G with
U1 GU2 with U1,2 suitable weighting functions. Next, a (sub-)optimal controller K∞ for U1 GU2
is designed, and finally the controller is determined as K = U2 K∞ U1 . In [1] it has been shown
that this is equivalent with placing weighting functions (figure 5.5) as V1 = U1−1 ,V2 = U2 ,W1 =
U1 , and W2 = U2−1 .

5.3.3 The gap metric


In order to compute the norm of the difference between two unstable systems in a manner
that for stable systems reduces to the H∞ norm, the gap metric brings solution. The rather

28
technical algorithm to compute the norm in the gap metric (the gap) between two systems, is
given in [1]. The following result shows the (guaranteed) robust stability margin of a system.
Theorem 6 Suppose the feedback system with the pair (G0 , K0 ) is stable. Let
G := G : δg (G, G0 ) < r1 and K := K : δg (K, K0 ) < r2 .Then the feedback system with the pair
(G, K) is also stable for all G ∈ G and K ∈ K if and only if
arcsin γ(G0 ,K0 ) ≥ arcsin r1 + arcsin r2 . (5.24)
The gap is denoted by δg , and γ(G0 ,K0 ) = kFL (G0 , K0 )k∞ . In this thesis only perturbations
of G0 are considered, so r2 = 0, and therefore the guaranteed stability margin, according to
the Small Gain Theorem, is given by ² = 1/γ(G0 ,K0 ) = δg (G, G0 ).

5.4 Robustness and the stability region


In order to check the quality of a controller, the robustness is determined by checking which
perturbations of the plant are fatal for stability of the closed-loop system, and which are
not. For a given type of perturbation and a given controller K, one could find out what the
stabilized ’region’ of closed-loop system is, by calculating kK(I − GK)−1 k∞ = γ, and find
the class of perturbations that are ’allowed’ by the controller K. This we call the stability
region of G. There are two obstacles; Firstly, it is very hard to determine the type of these
perturbations, and often they are of more than one type. Secondly, the theoretical robustness
margin ² is conservative, so the ’actual’ stability region cannot be determined mathematically.
We choose to determine the stability region in a more general way, that is mathematically
less complex, and determines the actual region.
Theorem 7 Consider H a stabilizable and detectable system, and K a controller. Both have
state-space forms defined at the start of this chapter. Then the interconnected system is
internally stable if and only if Ā is stable, with
· ¸ · ¸
A BCc BDc £ ¤
Ā = + (I − DDc )−1 C DCc . (5.25)
0 Ac Bc
It turns out that D = Dc = 0, so our system is internally stable ⇐⇒ the transfer matrix in
· ¸ · ¸· ¸
ẋ A BCc x
= (5.26)
x˙c Bc C Ac xc
has its eigenvalues in the open left half plane. The system perturbations are chosen to be
variations of α and β in equation 2.19, hence any perturbed system can be written in state-
space form and it can be determined if K stabilizes G by checking the eigenvalues of Ā. The
size of the stability region of a perturbed system is one objective (amongst others) that we
want to optimize. The other objectives are the costs, and are already discussed.

So far, we have established a construction method for a robust controller for the more general
H∞ -optimization problem. A particular choice of plant H results in the coprime factor based
controller of chapter 4. Further, a method is described that determines internal stability of
the closed-loop system with a perturbed plant in a straightforward manner. Robustness is
defined in terms of the size a stability region. Together with the costs, controller performance
is defined completely, and the quality of a controller can be tested.

29
30
Chapter 6

Aim of this project

In the previous chapters, we established the construction method of a robust controller for
the H∞ -optimization problem. A particular choice of plant H results in the coprime factor
based controller of chapter 4. We are able to test the performance of a controller in terms of
costs and robustness. The methods that we use in order to obtain a controller of low order,
are discussed in this chapter.
The aim of this project is to investigate several methods in order to obtain a low-order robust
controller that meets the following performance objectives:

• Low control costs;

• Low displacement costs;

• Large stability region for the given controller.

6.1 The need for numerical research


The design objectives that are listed above can be investigated theoretically; the robustness
margin of a controller is guaranteed to be ²max = 1/γ , see equation (5.15). Also an (conser-
vative) error bound of the gap between an original and reduced plant can be given. Because
of these two conservative features, it will become a very complex task (if not impossible) to
theoretically determine the stability region of a controller with a reduced number of states.
Also, because of the uncertainty of the type of perturbation, it is theoretically hard to con-
struct a controller with appropriate weighting functions. However, there are some valuable
guidelines. Further, we are interested in how the control/displacement costs behave under
state reduction. On practical grounds, numerical research seems to be the best option for
this investigation.

6.2 Construction of a low-order controller


The original physical plant is modelled by a PDE, in equation 2.19. This PDE is approximated
by a system with a large state space, say N >> 1 , which we call the nominal model. The
nominal model represents the PDE system for which a controller is constructed. Whenever
a controller is constructed, it is connected with P 2N , to see how it (probably) works on the
PDE. Three ways of obtaining a low order controller are investigated, and although there

31
are more ways, these three seem to be most instructive to the author. See figure 6.1. These
methods are referred to as the first, second and third controller construction method.

1 – Approximate the PDE by the nominal model, i.e. a high number of states: P 2N .
– Reduce P 2N for a large number of states, which leads to P 2N −R .
– Construct a controller with 2N − R states: K 2N −R .
– Connect the controller to the nominal model.

2 – Approximate the PDE by a lower order model P 2n .


– Reduce P 2n to a model with a small number of states, which leads to P 2n−r .
– Construct a controller with 2n − r states: K 2n−r .
– Connect the controller to the nominal model.

3 – Approximate the PDE by a lower order model P 2n , with n << N .


– Construct a controller with 2n states: K 2n .
– Reduce K 2n by a small number of states r, which leads to K 2n−r (Here, r << R).
– Connect the controller to the nominal model.

The stability region is obtained by changing the 4th step in each of the three methods, namely
by connecting the controller to a perturbed nominal model, and checking internal stability.
For numerical approximation, reduction, and controller construction, different options are
investigated:

(a) Numerical approximation: modal or with finite-differences.

(b) Reduction: by LQG balanced truncations (or Schur reduction). This is explained in the
next chapter.

(c) Controller: central controller for the coprime factor perturbations H∞ plant ΣP , a more
general H∞ controller with constant weights, or an LQG controller for ΣP .

Because of the computational advantages discussed in chapters 4 and 7, the standard H∞


plant ΣP (≡ ΣG from chapter 4) is used hereafter, whenever weighting functions do not play
a role.

32
PDE

numerical approximation
of the PDE

2N 2n 2n
P P P
large model small model construct
reduction reduction controller

2N−R 2n−r 2n
P P K

construct construct small model


controller controller reduction

2N−R 2n−r 2n−r


K K K

connect to nominal system

2N
P

Figure 6.1: Three methods for obtaining a low-order controller. Note that n << N and
r << R.

33
34
Chapter 7

LQG balanced truncations

In general, the controller that is constructed with respect to a system has the same number
of states as the system itself. In order to construct a low order controller for a high-order
system, the order of the system has to be reduced somehow. One of the methods that seem
suitable stems from a systems theoretical point of view, and is called the LQG balanced
truncation method. The general idea behind (any) state reduction of a system, is to obtain a
new system that lies close (in some sense) to the nominal system. The truncated system can
be seen as a perturbation of the nominal system. A robust controller will therefore not only
stabilize the truncated system, but also the original one. Balanced truncations are obtained as
follows. The system is transformed, by a state transformation x → T x with T a nonsingular
matrix, to be defined later on. The transformed states [T x1 ...T x2N ] correspond to positive,
descending Hankel singular values σ1 ...σ2N , which are the diagonal elements of the diagonal
1
matrix ∆ = (P Q) 2 , where P and Q are solutions to equations (4.13) and (4.14) (for the
transformed system). After the transformation, the last states are the least significant ones,
2n−r
and are the first ones to be removed. Notation: ΣH denotes a system with a numerical
approximation of 2n states, and r states removed by balanced truncations. (A more correct
2N −r
but inconvenient notation is ΣH 2n−r ). The closeness of the two unstable systems ΣH and
2N
ΣH , which denote the truncated and original system respectively, can be computed in some
suitable metric, for example the gap metric or the L∞ norm. The next algorithm is used.

• Solve P and Q from equations (4.13) and (4.14).

• Calculate Q = RT R, which is symmetric positive definite.

• Consider the (SPD) matrix RP RT , which is similar to P Q.

• Find a unitary U (eigenvectors of RP RT ) such that RT P R = U Σ2 U T .


1
• Define T = Σ 2 U T R.

• The LQG balanced system has the following realization:


· ¸ · −1 ¸
Abal Bbal T AT T −1 B
ΣHbal = = . (7.1)
Cbal Dbal CT D

After transformation of plant ΣH , the last r states are removed and a controller is built
according to the transformed and reduced plant. The controller now has 2N − r states

35
instead of the usual 2N states. When we use the central controller of chapter 4 an advantage
shows up. Consider the algorithm that leads to a controller of 2N − r states:

(1) Compute T and balance the system;

(2) Reduce the system;

(3) Compute P and Q as before, only this time A,B and C are balanced and reduced;

(4) Construct a central controller from equations (4.25) and (4.26).

If we want the program to do simulations with one system but with different reduced order
models, T has to be computed only once, as is the same for P and Q. In step (3) P and Q
are solved for r = 0, and for an arbitrary r, the central controller is constructed in (4) using
Pr and Qr , The upper left parts of P and Q with dimensions 2N − r × 2N − r.

7.1 A Schur method


Often, the transformation matrix T is ill-conditioned due to modes with small eigenvalues of
P Q, which is the case for our problem with large n. In [8] an algorithm is presented to skip
the balancing part in the previous section, and compute the reduced and balanced system
in one step in a numerically safe way. The method works theoretically for stable systems,
but since the previous algorithm is obtained in a similar way for both stable and unstable
systems, we try to make the next algorithm to work in the same way. The only difference with
stable systems in the previous algorithm lies in the fact that for unstable systems P and Q
are solutions of the CARE and the FARE, and that for stable systems, P and Q are solutions
of some Lyapunov equations. The next algorithm uses the same Lyapunov equations, hence
for unstable systems we try to use the same CARE’s. This is the algorithm, for a given r:

(1) Compute matrices VP,big , VL,big ∈ R2N ×2N −r whose columns form orthonormal bases
for the respective right and left eigenspaces of P Q associated with the “big” eigenvalues
2
σ12 , ..σ2N −r .

(2) Let
T
Ebig = VL,big VR,big (7.2)
and compute its singular-value decomposition
T
UE,big ΣE,big VE,big = Ebig . (7.3)

(3) Let
−1
SL,big := VL,big UE,big ΣE,big
2
∈ R2N ×2N −r (7.4)
− 12
SR,big := VR,big VE,big ΣE,big ∈ R2N ×2N −r .

(4) Compute the state-space realisation of the reduced plant Pr :


· ¸ · T T
¸
2N −r Ar Br SL,big ASR,big SL,big B
ΣH = = (7.5)
Cr Dr CSR,big D

36
In step (1), VR,big and VL,big are computed by the following algorithm:

(a) Solve P and Q as in the previous section;

(b) Using Schur transformations and so-called Givens rotations, compute real matrices VA
and VD such that VAT P QVA and VDT P QVD are upper triangular and have their eigen-
values on the diagonal in ascending and descending order, respectively.

(c) Partition VA and VD as


r 2n−r
z }| { z }| {
VA = [ VR,small | VL,big ], (7.6)
2n−r r
z }| { z }| {
VD = [ VR,big | VL,small ]. (7.7)

The big difference with the previous algorithm is that the system is not LQG balanced
anymore, which means that we can no longer compute an P and Q and pick their upper
left parts to construct a controller with 2N − r states for arbitrary r. For each r, step (2)-(4)
have to be carried out, but step (1) has to be done only once (this is the expensive step). So
here there is no big computational difference. However, for the case where a central controller
is used, the controller has to be constructed separately for every r by solving Pr and Qr . This
contrasts with the previous method, where this is done only once. In conclusion, the LQG
balanced truncation method is cheaper, but can lead to an ill-conditioned transformation.
The Schur reduction is costly, but is numerically more correct. For stable balanced systems,
balanced by means of Lyapunov equations, the truncated system has an error bound of

k H(s) − Hr (s) k∞ ≤ 2(σ2N −r+1 + .. + σ2N ) (7.8)


where H(s) and Hr (s) are the transfer functions of the system Σ2N
H and the truncated system
2N −r
ΣH respectively, and σi are the Hankel singular values of Σ2N
H . By definition this bound
is conservative. For unstable systems, a similar (conservative) bound can be given, in terms
of the LQG singular values.

37
38
Before we start testing the controller construction methods, some numerical aspects have to
be inspected. We would like to know what a suitable N is for the nominal model Σ2N
G in order
to approximate the PDE within a reasonable accuracy. The costs are computed by means of
dynamical simulation. The discrete time step has to be suitably small for accuracy, but not
too small w.r.t. computational costs. We note that a suitable time step is based only on the
motion that follows from the initial state condition given below.

39
Chapter 8

Test 1: approximation of the PDE

The physical parameters and initial condition are taken from [2]. We have α = 1.129, β =
3.8913 10−4 , and the length of the beam is 2m. The initial state condition is: w(x, 0) =
(1/21)x7 + (1/30)x6 − (1/5)x4 − (1/6)x4 + (1/3)x3 + (1/2)x2 , and wt (x, 0) = 0. In this
chapter, N = n, and r = 0, according to the second controller construction method. The
simulation runs from 0 to 100 seconds. Further, θ = 0.5 throughout this thesis.

8.1 The frequency response


The transfer function G(s) is defined by ŷ(s) = G(s)û(s) (u is the input, y the output, s = iω,
and ’ˆ’ denotes the Laplace transform), and is in the case of two inputs u and two outputs y
given by:

· ¸
G11 (s) G12 (s)
G(s) = . (8.1)
G21 (s) G22 (s)

The frequency response plot that is built into the program, shows G11 (s) and G22 (s) for a
chosen frequency domain. Recall that y = [w(0, t) wx (0, t)]T , and u = [F M ]T . Theoretically,
G12 and G21 are equal to zero, but due to numerical errors, they have very small values in
the frequency domain ω ∈ (10−2 , ∞), which can be shown by a Bode plot. (The reasons why
no Bode plot has been used are practical; The Bode plot command in Matlab shows eight
graphs: for each transfer function Gii the magnitude of the response is shown, as well as the
phase angle. We are only interested in two graphs: the magnitudes of the response of G11 and
G22 . The Bode plots cannot be adjusted properly in many ways, such as zooming, linewidth
adjustment, or showing only two of the eight graphs. The program has options for showing
the Bode plot and the frequency response plots, both for plant ΣG as well as for controller
ΣK .)

8.2 Convergence of frequency response


In this section, we investigate what a suitable N might be for the nominal model. In figure
8.2 the frequency response for different choices of N is shown, with use of a modal approxima-
tion. The upper graph shows G11 and the lower graph shows G22 . Adding more states to the
system apparently contributes relatively the most to the high frequency responses, while for

40
low frequencies, the response remains almost exactly the same. The very low magnitudes of
H-F responses indicate that the nominal system is approximated very accurately even for low
N . Figure 8.1 shows the frequency response using a finite difference approximation. The F-D
approximated system converges quickly for larger N but given that the differences have a con-
siderable magnitude, it seems that more states are needed than with modal approximation, in
order to obtain a nominal model that approximates the PDE accurately. Perhaps the (O)(δx)
discretisation of the delta function in equations 3.18 and 3.19 slows down the convergence. It
can be shown by Bode plots that for each numerical approximation method, the phase angle
responses also converge. It can be shown that both modal and F-D approximations converge
to the same frequency responses.
0
10
N=11
N=21
N=51
|G11(s)|

−5
10

−10
10
0 1 2 3
10 10 10 10
ω(u1) (rad/sec)

0
10

−2
10
|G22(s)|

−4
10

−6
10

−8
10
0 1 2 3
10 10 10 10
ω(u2) (rad/sec)

Figure 8.1: Frequency response for F-D approximation. The differences have a norm of about
10−1 .

8.3 Functional gains


The frequency responses give the impression that for sufficiently large N, say N = 21, the PDE is approximated very well for low frequencies. The central controller for the coprime factor perturbed plant is used. In order to see how the controller reacts to two different systems, the controller output u as a function of time (this is called the functional gain) is inspected for those systems. The controller is based on a system with N = 11, and then connected to systems with N = 11 and N = 21. Figure 8.3 shows the controller outputs F and M, of which u consists. For modally approximated systems with N = 11 and N = 21, the controller output is exactly the same. Apparently, the differences between these systems for high frequencies

[Figure: |G11(s)| and |G22(s)| versus ω (rad/sec), log-log scale, for N = 11, 21, 51.]

Figure 8.2: Frequency response for modal approximation. The differences have a norm of
about 10−3 .

are negligible. Figure 8.4 shows that for two F-D approximated systems with N = 11 and
N = 21, the controller reacts differently, as the frequency responses already indicated. The
time step is chosen as δt = 0.005, and the controller is the central controller from chapter 4.
[Figure: controller outputs u1 (F) and u2 (M) versus time (s), for N = 11 and N = 21.]

Figure 8.3: Central controller for modal N = 11, 21. The functional gains are virtually
identical.

8.4 Time-delay
The spatial discretisation error has been investigated in the previous section by means of the frequency response and functional gains. Now the error due to the time step is investigated. For simplicity, assume explicit time integration (θ = 0). At time t1, y is observed and u is computed. This u acts on the plant during a time interval of δt, so just before a new u is computed, u has a time-delay of δt. This delay can be modelled as a perturbation e^{-τs}, with 0 ≤ τ ≤ δt. To get an idea of how large the time step may be chosen, two points of attention have to be satisfied.

(1) The dynamic behaviour of the beam is captured. So for H-F oscillations δt has to be
small.

(2) The δt corresponds to a time-delay. This delay is a perturbation of G(s), and since this
type of perturbation is not considered, it must be made small.
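To make the role of δt concrete, the following Matlab sketch shows one possible form of the closed-loop time stepping with the θ-scheme: the control computed from the observation at tk is held constant during the next step, which is exactly the delay of at most δt mentioned above. The plant matrices A, B, C, the controller matrices Ac, Bc, Cc and the initial state x0 are assumed to be available; this is an illustrative sketch, not the literal Runbeam implementation.

% Closed-loop theta-scheme stepping with a held (and hence delayed) control.
% A, B, C (plant), Ac, Bc, Cc (controller) and x0 are assumed to be given.
dt = 0.005;  T = 20;  theta = 0.5;
x  = x0;                                 % plant state (initial beam deflection)
xc = zeros(size(Ac,1), 1);               % controller state
Ip = eye(size(A));  Ic = eye(size(Ac));
Ep = Ip - theta*dt*A;                    % implicit parts of the theta scheme
Ec = Ic - theta*dt*Ac;
for k = 1:round(T/dt)
    y  = C  * x;                         % observation at t_k
    u  = Cc * xc;                        % control based on data up to t_k
    x  = Ep \ ((Ip + (1-theta)*dt*A )*x  + dt*B *u);   % plant step, u held over dt
    xc = Ec \ ((Ic + (1-theta)*dt*Ac)*xc + dt*Bc*y);   % controller step
end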

The first attention point is investigated by means of looking at the convergence of the dis-
placement costs for an uncontrolled system. Table 8.1 shows that a time step of 0.1 s is small
enough.

[Figure: controller outputs u1 (F) and u2 (M) versus time (s), for N = 11 and N = 21.]

Figure 8.4: Central controller for F-D with n = 11, and N = 11, 21. The functional gains
differ slightly.

δt displacement costs
0.5 9.17
0.1 9.18
0.05 9.18
0.01 9.18
0.005 9.18

Table 8.1: Costs for various sizes of the time step for the uncontrolled system.

The second attention point is investigated by means of looking at the convergence of the
control/displacement costs for a controlled system. Table 8.5 indicates that the time step has
to be about 0.005 s, for the controlled system.

δt       control costs   displacement costs   total costs
0.5      12.6698         5.1420               17.8118
0.1      8.5679          4.9047               13.4726
0.05     8.4128          4.8924               13.3052
0.01     8.3881          4.8942               13.2823
0.005    8.3911          4.8938               13.2849
0.001    8.3943          4.8939               13.2882
0.0005   8.3947          4.8939               13.2887

Table 8.5: Costs for various sizes of the time step for the controlled system.

The reason for not taking the time step too small is, obviously, to save computational time. Also, the larger amount of stored data (mainly plot data) for a small time step slows down Matlab considerably. The program allows the plot data to be saved every time step, but the plot data can also be saved every couple of time steps without losing essential graphical information. Intermezzo: a too large time interval between the plot data results in a phenomenon called aliasing, which causes an erroneous frequency. In figure 8.6 one simulation with δt = 0.05 is shown, once with a plot time step of δt and once with 20δt.
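The aliasing effect itself is easy to reproduce outside the beam model. The following self-contained Matlab sketch samples a 0.9 Hz sine with plot steps of 0.05 s and 1 s; with the coarse step the oscillation appears, erroneously, at about 0.1 Hz. The signal is purely illustrative and is not the beam output.

% Illustration of aliasing by sub-sampling (toy signal, not the beam model).
t_fine   = 0:0.05:50;                 % plot step 0.05 s
t_coarse = 0:1:50;                    % plot step 1 s
f        = 0.9;                       % true frequency in Hz
plot(t_fine, sin(2*pi*f*t_fine), t_coarse, sin(2*pi*f*t_coarse), 'o-');
xlabel('time (s)'); legend('plot step = 0.05 s', 'plot step = 1 s');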
[Figure: w(0,t) (m) versus time (s), for plot steps of 0.05 s and 1 s.]

Figure 8.6: A large plot time step can lead to an erroneous frequency.

Now we have a feeling for the number of states that the nominal model requires, and the size
of the time step. We can now actually compare the three controller construction methods by
means of controller performance, investigate how the methods work, and how many (or few)
states a controller requires. But before we do so, some properties of a full-order controller,
based directly on the nominal system, are investigated in the next chapter.

Chapter 9

Test 2: Performance for a full-order


controller

For the construction of a (low-order) controller, two aspects are of vital importance: robust-
ness and costs. In this chapter, the properties of a controlled system with respect to these
aspects, and the manipulation of these properties, are inspected more closely by means of
three test cases. The costs, given by equation 3.27, are determined with use of a simulation
in time, and the robustness is determined with use of a stability diagram. The next sec-
tion gives an explanation of the diagram. In this chapter, a modal approximation is used
with N = n = 11 and r = 0, according to the second controller construction method. This
means that the controller is based directly on the nominal system. The time step is chosen
as δt = 0.005, final time T is 100 seconds, and the physical parameters are the same as in
the previous chapter. Whenever weighting functions are unimportant, the central controller
w.r.t. coprime factor perturbations is used. In section 9.3 the more general controller is used.

9.1 The stability diagram

The stability diagram shows the stability region, which is defined as the region in which, for
the original model G and controller K, each perturbed system G̃ is stabilized by K. According
to the H∞ optimization problem, the unperturbed ΣP lies within the stability region. Figure
9.4 shows the stability diagram for a controller that is constructed for a nominal system. The
perturbations of P consist of variations of α and β. The number of points that correspond
to a stable closed-loop system, is called the number of stability points. The number of points
is taken to be 900 in a particular area: [α̃, β̃] ∈ [α · 10^-5, α · 1.5] × [β · 10^-5, β · 1.5]. (The number of points and the area can be chosen freely in the program input.) The ∗ denotes the nominal system, a '□' means instability, and a '·' stability. Furthermore, it should be mentioned that for high-order systems the computation of the stability diagram becomes very time consuming. For each point in the area of [α̃, β̃], the eigenvalues of the matrix in equation 5.26 are computed in a standard way. The Matlab command 'spy' shows that this matrix is probably not sparse enough to save time by computing the eigenvalues in a different way, e.g. by LU factorizations.
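A minimal Matlab sketch of this computation is given below. The routine build_beam_model is a hypothetical placeholder for the code that assembles the plant matrices for given parameter values (it is not an actual Runbeam function name); Ac, Bc, Cc denote the controller, and the closed-loop system matrix of the plant–controller interconnection is formed as in equation (9.1).

% Sketch of the stability-diagram computation over a grid of (alpha, beta).
% 'build_beam_model' is a hypothetical placeholder; Ac, Bc, Cc are assumed given.
alpha  = 1.129;  beta = 3.8913e-4;           % nominal physical parameters
alphas = linspace(alpha*1e-5, alpha*1.5, 30);
betas  = linspace(beta *1e-5, beta *1.5, 30);
stable = false(length(alphas), length(betas));
for i = 1:length(alphas)
    for j = 1:length(betas)
        [At, Bt, Ct] = build_beam_model(alphas(i), betas(j));  % perturbed plant
        Acl = [At, Bt*Cc; Bc*Ct, Ac];                          % closed-loop matrix
        stable(i,j) = max(real(eig(Acl))) < 0;
    end
end
disp(sum(stable(:)))                % the number of stability points
spy(~stable)                        % squares mark unstable parameter combinations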

9.2 Case 1: variation of γ with input noise
In this section we look at the central controller with varying ζ that is connected to plant G with a varying input noise di (figure 5.4). As is explained in chapter 4, γ = 1/(εopt ζ), with 0 < ζ < 1; the strict inequalities prevent numerical problems. Note that εmax (the guaranteed robustness margin) equals εopt if and only if ζ = 1. Further, ζ = 0 theoretically corresponds to the LQG controller, which in theory has no guaranteed robustness margin. A low ζ means a high γ, and will theoretically lead to smaller disturbance rejection (equation 5.16). It also leads to a smaller guaranteed stability margin εmax. How this works out in practice, and how it works out for the costs, is shown below in table 9.1. As input noise we take a (high-frequency) white noise, whose magnitude denotes the standard deviation.

input noise   ζ      control costs   displacement costs   total costs   # stable points   εmax
0             0.99   12.06           4.34                 16.40         701               0.38
0             0.5    4.48            4.41                 8.89          831               0.19
0             0.01   4.32            4.50                 8.82          836               0.0038
0.5           0.99   11.89           4.46                 16.34         701               0.38
0.5           0.5    4.21            4.72                 8.93          831               0.19
0.5           0.01   4.13            4.90                 9.03          836               0.0038
5             0.99   19.29           13.42                32.71         701               0.38
5             0.5    19.44           19.20                38.64         831               0.19
5             0.01   20.21           21.71                41.93         836               0.0038

Table 9.1: Properties for varying ζ and varying input noise.

For three different noise values, the controller performance is checked for three different choices of ζ. Some properties can be seen from table 9.1. First of all, the number of stable points, which is independent of the input noise, increases as ζ decreases. This is somewhat surprising, because for smaller ζ the guaranteed robustness margin decreases. Further, for high input noise a higher ζ results in smaller costs, while for low input noise a higher ζ results in larger costs. An explanation could be that a high ζ results in a more powerful output signal of the controller. With a small input disturbance, a less powerful controller output also does the job, and is moreover more economic. For larger input disturbances, a more powerful controller output, one that is dominant over the disturbances, is needed.

9.3 Case 2: performance with weighting functions


Here, we look at the plant H, defined in chapter 5. Consider the generalized plant in figure
5.5. For simplicity, the weighting functions are constant 2 × 2 matrices. We consider only
weights of the form cI with c constant and I the identity. For illustrative purposes, the plant
has the following properties: di = 0.5, d = 0.1, and ζ = 0.9. We choose V1 = V2 = W1 = I. A
good disturbance rejection will result in low costs, but this is not as straightforward as it looks.
In general controller design, the magnitudes of the weights correspond to the 'importance' of
the corresponding inputs or outputs. A larger weight for d means that the influence of d on
the beam motion is made smaller. Since di = 5d, one might think that W2 = W1 /5 results in
the best disturbance rejection and hence the lowest costs, because di is 5 times larger than

[Figure: three stability diagrams in the (α, β) plane, β on a 10^-4 scale.]

Figure 9.1: The stability regions that correspond to table 9.1. Upper left: ζ = 0.99. Upper
right: ζ = 0.5. Lower left: ζ = 0.01. A lower ζ corresponds to a larger stability region.

d, and can be seen as 5 times more 'important'. Table 9.2 shows the ambiguity in the choice of W2: the displacement costs increase with W2, while the control costs decrease.

W2     control costs   displacement costs   total costs
0.1I   12.78           4.12                 16.90
0.2I   10.58           4.17                 14.75
0.4I   7.07            4.28                 11.35

Table 9.2: Varying W2 matrices. Intuitively, one might think that W2 = 0.2I results in optimal costs. The ambiguity is that the displacement costs increase with W2, opposite to the control costs.

'Importance' appears to be a vague notion, and a quantitative comparison is difficult to make, as the previous example showed. From this example, it seems that suitable weights that meet certain performance objectives have to be tuned by means of numerical experiments. The transfer function corresponding to the weighted plant is minimized by the central controller (equation 5.18). From this equation it is very hard, if not impossible, to see the influence of the weights on the controller performance.
It is also interesting to note that, due to these H-F disturbances, δt = 0.005 is not nearly small enough to obtain an accuracy of two digits in the costs. The results in table 9.2 change, but remain proportionally the same, for smaller δt. A smaller δt decreases the costs, because of a smaller time-delay of the controller. A very interesting explanation for this could be the following. The disturbance frequency is never higher than the frequency with which the observations and controller outputs are made, namely 1/δt. A smaller δt would in reality make the disturbance frequency higher, and (from the frequency response plots) it is clear that this is desirable, since disturbances in the high-frequency part of the response have a lower impact on the behaviour than low-frequency disturbances (this was already shown in section 8.3). Figure 9.2 shows the influence of a smaller time step. The displacement of the middle of the beam, w(0, t), is shown as a function of time. The left graph shows an oscillation on a low frequency scale, while in the right graph this oscillation is absent.
If the statements made above turn out to be true, the time step could be adjusted in practice. Suppose that the upper bound U of the frequency domain of a perturbation is more or less known; then δt is chosen to be equal to 1/U, in order to maximize the possible disturbance frequency.
The size of the stability region may be explained with use of eigenvalue analysis. Let us have a look at the eigenvalues of A, denoted by □, and of

    [ A     B Cc
      Bc C  Ac  ] ,                                               (9.1)

denoted by ∗, to see how the eigenvalues are shifted by the different controllers.
Only the unstable eigenvalues are shifted; the rest stays at the same location. It is hard to say anything from the eigenvalue plots; the left one corresponds to a larger stability region, but the largest eigenvalue on the real axis is larger than that in the right plot. (The program has an option that plots the eigenvalues like above.)
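For completeness, a sketch of how such an eigenvalue plot can be produced is given below; the plant matrices A, B, C and the controller matrices Ac, Bc, Cc are assumed to be in the workspace, and the closed-loop matrix is that of equation (9.1).

% Open-loop versus closed-loop eigenvalues (cf. equation (9.1)).
Acl    = [A, B*Cc; Bc*C, Ac];
eig_ol = eig(A);
eig_cl = eig(Acl);
plot(real(eig_ol), imag(eig_ol), 's', real(eig_cl), imag(eig_cl), '*');
xlabel('Re'); ylabel('Im'); legend('eig(A)', 'closed loop');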

[Figure: w(0,t) (m) versus time (s), two panels.]

Figure 9.2: The positive influence of a small time step. Left: δt = 0.1 s. Right: δt = 0.01 s

[Figure: eigenvalues of A (□) and of the closed-loop matrix (∗) in the complex plane, two panels.]

Figure 9.3: Left: the eigenvalues for W2 = 0.1I with 858 stable points. Right: the eigenvalues
for W2 = 0.1I with 823 stable points.

9.4 Case 3: shifting of the stability region


Figure 9.4 shows the stability region for a controller constructed for the nominal plant, on a logarithmic scale. The region is almost unbounded for large α and β, but relatively small w.r.t. small α and β. Given a controller that robustly stabilizes the nominal plant, apparently a very flexible beam with very small loss of energy by damping (compared to the nominal beam) is more difficult to stabilize than a very stiff beam with large damping. This is of course what one expects physically. We wish to shift the stability region towards the region of low α and β. That way, a controller can also stabilize the more difficult types of beams. As an attempt, the parameters of the nominal plant are shifted to [α̂, β̂] = 10^-1 · [α, β]. A controller that is based on a more difficult system is perhaps also more robust w.r.t. the more difficult systems. But this does not directly seem to be the case. Figure 9.5 shows the differences in stability regions for different choices of nominal systems on which the controller is based. The right hand diagram of figure 9.5 shows that choosing smaller nominal parameters has no positive effect. In fact, choosing the nominal parameters lower seems to even decrease the
size of the stability region. Choosing them higher also does not seem to help much.
[Figure: stability diagram in the (α, β) plane on logarithmic axes.]

Figure 9.4: The stability region for a controller constructed for the nominal plant (on a
logarithmic scale). The region is almost unbounded for large α and β.

[Figure: two stability diagrams in the (α, β) plane, β on a 10^-4 scale.]

Figure 9.5: the differences in stability regions for different choices of nominal systems, on
which the controller is based. A controller based on a nominal system with relatively high
values of α and β (left) seems to have an even larger stability region than for one based on
lower parameters [α̂, β̂] = 10−1 · [α, β].

Chapter 10

Construction of a low-order
controller: ’How low can you go?’

In this chapter, we base the controller on a reduced-order model, connect it to the nominal system, and check how it performs. The controller is based on coprime factor perturbations, as weights do not play a role in this chapter. The main concern of state reduction is to eliminate as many states as possible while maintaining important properties of the system, as explained in chapter 7. It was also explained there that there are two ways to obtain a low-order system that approximates the nominal system, namely by a low-order numerical approximation and by LQG balanced truncations. Three controller construction methods were proposed, and now they are investigated. All parameters are the same as in the previous chapter, and ζ = 0.9.
Before model reduction is applied, let us look at how the performance depends on the magnitude of N of the nominal system. Notation: G2N−r denotes the plant from chapter 4 with 2N − r states.

10.1 Approximation of the nominal model


A nominal system G2N with N = 41 is approximated, both with F-D and modally, by G2n with n ≤ N. Accordingly, a controller with 2n states is constructed, connected to the nominal system, and tested.
Table 10.1 shows the performance of the controller based on modal approximation. A controller of 10 states (n = 5) performs just as well as an 82nd-order controller, as the costs and robustness are the same. For technical reasons it is not possible to obtain a system of less than six states, so technically, for lower-order controllers, LQG balancing seems necessary. Practically it seems obsolete. This is in agreement with the fact that modal approximation converges very rapidly towards the nominal system.
Table 10.2 shows that for F-D the costs and stability region vary with the choice of n. For larger n, the stability region decreases (!) while the total costs decrease. Altogether, the differences for various choices of n are not very large.
Now, nominal systems with various N are considered, and controllers with corresponding N are based on those systems.
Table 10.3 shows, for F-D approximation, that for larger N the costs decrease and the stability region increases, so the performance is better for larger systems. Table 10.4 shows for modal approximations, however, that the displacement costs increase slightly for larger N. We conclude from the last two tables that for larger systems the corresponding controller performs better, for both modal and F-D approximations.
The controllers are the same ones as in the previous two tests. The difference is that they are connected to different nominal systems. Comparison of the four tables reveals that the controllers perform better on high-order nominal systems. Also, high-order nominal systems lie closer to the PDE than low-order ones, just as modally approximated systems lie closer to the PDE than F-D approximated systems. This might be the reason that controllers applied to modally based nominal systems perform better than on F-D based systems, as the tables show.
n    control costs   displacement costs   total costs   # stability points
5    4.88            4.26                 9.14          766
11   4.88            4.26                 9.14          766
21   4.88            4.26                 9.14          766
41   4.88            4.26                 9.14          766

Table 10.1: Varying n, N = 41 (modal approximation).

n    control costs   displacement costs   total costs   # stability points
5    6.19            4.34                 10.52         758
11   5.53            4.34                 9.88          751
21   5.37            4.35                 9.72          749
41   5.22            4.35                 9.57          749

Table 10.2: Varying n, N = 41 (F-D approximation).

N    control costs   displacement costs   total costs   # stability points
5    8.39            4.89                 13.28         502
11   6.29            4.54                 10.83         654
21   5.57            4.42                 9.99          716
41   5.22            4.35                 9.57          749

Table 10.3: Varying N (F-D approximation).

N    control costs   displacement costs   total costs   # stability points
5    5.03            3.73                 8.76          737
11   4.89            4.21                 9.10          750
21   4.88            4.26                 9.13          766
41   4.88            4.26                 9.14          766

Table 10.4: Varying N (modal approximation).

10.2 First controller construction method

r    control costs   displacement costs   total costs   # stability points
0    5.22            4.35                 9.57          749
1    Inf             Inf                  Inf           749
...
30   Inf             Inf                  Inf           749
31   5.22            4.35                 9.57          749
...
70   5.22            4.35                 9.57          749
71   5.21            4.35                 9.56          749
72   5.22            4.35                 9.57          749
73   5.19            4.35                 9.54          749
74   5.23            4.35                 9.57          749
75   5.43            4.35                 9.78          749
76   5.28            4.35                 9.62          749
77   5.27            4.38                 9.65          749
78   5.31            4.35                 9.66          749
79   Inf             Inf                  Inf           22
80   Inf             Inf                  Inf           22
81   Inf             Inf                  Inf           3

Table 10.5: F-D approximated nominal system of N = 41. Model reduction is applied by means of LQG balanced truncations. The number of controller states is 2N − r, so r = 78 corresponds to 4 controller states.

The nominal system G2N with N = 41 is reduced to G2N−r for varying r. A controller based on the reduced system(s) is connected to the nominal system G2N. Table 10.5 shows, for F-D, that for r = 31, ..., 78 the costs and stability region remain the same. Apparently, only 4 states of the system are really significant in order to construct a well-performing controller (also, the first four Hankel singular values are large w.r.t. the rest). For r = 1, ..., 30, the system becomes unstabilizable and undetectable, for unknown reasons so far. This must somehow be caused by numerical errors. In chapter 7 it was explained that the transformation T, which balances the system, can be ill-conditioned. The Schur method, which avoids numerical errors due to the ill-conditioned transformation, is applied. Figure 10.1 shows the control/displacement costs, the total costs, εmax, and the number of stability points of the controlled system for different r. (The last two are scaled for presentational reasons.) This is basically the same information as in the previous table, only in a more transparent form. On the left, LQG balanced truncations are applied with use of the transformation T; the right plot shows the results for balanced truncations using the Schur method. The plots are referred to as 'performance plots'. Both truncation methods from chapter 7 fail for low r, in the sense that they result in an uncontrollable system. (In the graphs this is displayed by zero costs, zero stability points and εmax = 0.) However, they both give the same costs and stability region, so the truncation methods do not seem to give rise to large inaccuracies. From now on, only the 'original' method from chapter 7 is applied.
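For reference, the following Matlab sketch outlines an LQG balanced truncation in square-root form. It is one possible implementation of the procedure of chapter 7, not the literal Runbeam code; it assumes the Control System Toolbox for care, unit weighting matrices in the two Riccati equations, and a minimal realisation so that the Riccati solutions admit Cholesky factorizations.

% Sketch of LQG balanced truncation (square-root variant) of (A, B, C).
r  = 78;                                     % number of states removed (2N - r = 4)
P  = care(A,  B,  C'*C, eye(size(B,2)));     % control Riccati equation (CARE)
Q  = care(A', C', B*B', eye(size(C,1)));     % filter Riccati equation (FARE)
P  = (P + P')/2;  Q = (Q + Q')/2;            % symmetrise against round-off
Lp = chol(P)';  Lq = chol(Q)';               % P = Lp*Lp', Q = Lq*Lq'
[U, S, V] = svd(Lp' * Lq);                   % singular values = LQG values mu_i
T    = diag(diag(S).^(-1/2)) * U' * Lp';     % balancing transformation
Tinv = Lq * V * diag(diag(S).^(-1/2));
k    = size(A,1) - r;                        % number of retained states
Ar   = T(1:k,:) * A * Tinv(:,1:k);           % truncated, LQG-balanced realisation
Br   = T(1:k,:) * B;
Cr   = C * Tinv(:,1:k);

The reduced realisation (Ar, Br, Cr) then takes the place of the full model in the controller construction.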

Figure 10.2 shows that the costs with modal approximation are lower than with F-D, and the
stability region is larger.
Modal approximation also gives rise to unstabilizability for low r. The costs are then unde-
fined, and the n.o. stability points is equal to zero. It is important to note that this only
happens for large systems, and for relatively low r. So in practice, this will not cause any
problems.
Further, for large N several problems arise; apart from the growing numerical errors, a larger
N causes the computational time of the CARE and FARE to grow rapidly, and for our beam
these equations often cannot be solved by Matlab anymore.
[Figure: control costs, displacement costs, total costs, ε*10 and # stab. points/100 versus r, two panels.]

Figure 10.1: Performance for F-D approximation. The left graph shows LQG balanced trun-
cations that are applied with use of transformation T . The right graph shows the results for
balanced truncations using the Schur method. The performance is the same for both methods.

[Figure: control costs, displacement costs, total costs, ε*10 and # stab. points/100 versus r.]

Figure 10.2: Performance with modal approximation. The performance is better than with
F-D approximation.

10.3 Second controller construction method
The previous two sections revealed that for an F-D approximated system it is desirable, w.r.t. controller performance, to have a nominal system with N as large as possible, but that on the other hand this causes numerical problems. We try to approximate the nominal system G2N with G2n, with N = 41 and n = 21. Figure 10.3 shows for F-D that this construction method leads to slightly higher costs. Modal approximation gives the same performance for n = 21 as for n = 41; figure 10.4 shows that the results are better than with F-D.
An advantage is that the CARE and the FARE are solved faster and can be solved by Matlab in the first place. For problems that require a lot of different controllers, for example the tuning of weighting functions, it may very well be desirable to use a smaller nominal system. The choice of n < N is not straightforward, and in the next chapter a method for determining a suitable n is presented.
[Figure: control costs, displacement costs, total costs, ε*10 and # stab. points/100 versus r.]

Figure 10.3: Performance with F-D approximation, N = 41 and n = 21. The performance is
slightly worse than with n = 41.

10.4 Third controller construction method


Figure 10.5 shows that, with F-D approximation, the third method, i.e. reduction w.r.t. the controller states instead of the plant states, makes little difference compared with the previous method, except that the reduced-order controller seems to stabilize the plant less often for low r.

[Figure: control costs, displacement costs, total costs, ε*10 and # stab. points/100 versus r.]

Figure 10.4: Performance with modal approximation. The performance is better than with F-D
approximation.

[Figure: control costs, displacement costs, total costs, ε*10 and # stab. points/100 versus r.]

Figure 10.5: Performance with F-D approximation, N = 41 and n = 21. The reduction w.r.t.
the controller states instead of plant states seems to have the same results, except that the
reduced order controller seems to stabilize the plant less often for low r.

Chapter 11

The choice of r vs. n

We use two ways of obtaining a low-order approximation of a given nominal system:

(1) By a (coarse) low-order numerical approximation.

(2) By LQG balanced truncations.

In section 10.3 it was explained that it can be desirable to approximate a nominal system numerically with a low-order system in order to avoid numerical problems with the construction of a controller, and in particular with the Riccati equations that need to be solved. From the previous chapter it is clear that with a low-order modal approximation the nominal system is approximated very well, so there is no need for LQG balanced truncations. However, an F-D approximation converges less quickly to the nominal system. Given a nominal system, a low-order approximation is done more accurately by LQG balanced truncations than by a coarser numerical F-D approximation.
In many cases one would like to use the second controller construction method, but it is not clear which n should be chosen. When constructing a controller by means of tuning weighting functions of the nominal plant, a low-order controller has to be constructed many times by LQG balanced truncations. In order to save on computational time, we would like to know how small n can be chosen with a reasonable loss of controller performance.
A method is presented to determine what effects various n have on the performance of the controlled system. Figures 11.1 and 11.2 show the total costs and the number of stability points along the z-axis for varying n and r. An F-D approximation is used. The number of states is equal to 2n − r, so the diagonal 2n − r = c (constant) shows, for a fixed number of states c, what type of reduction might be best. Figures 11.1 and 11.2 show that for large r (and hence large n) the costs are slightly lower, but (for this type of perturbation) the stability region is slightly smaller. For optimal controller performance n should be chosen as large as possible. Figures 11.3 and 11.4 show that with modal approximation the performance is practically invariant along the diagonals 2n − r = c, and hence that a choice of low n is preferable.
The plots show two properties of the controlled system, with respect to costs, stability region
and gap.

• The appropriate choice of the n.o. controller states (2n − r).

• The appropriate choice of the n.o. states to approximate the nominal system (n).

It is hard to recognize any loss of controller performance for small n or for large r. Therefore,
we can conclude from the four figures that for both modal and F-D approximation, when
designing a controller (e.g. by adjusting weighting functions), a choice of low n is appropriate.
Also, the number of controller states (2n − r) can be made small (4 states) without much loss
of performance (this is less clearly visible).
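A sketch of such a sweep, in the spirit of Runrunbeam (appendix A.4), is given below. The three helper functions are hypothetical placeholders, not actual Runbeam routines: build_fd_model for the 2n-state F-D approximation, reduce_lqg for the LQG balanced truncation by r states, and evaluate_controller for the simulation and stability-diagram computations that return the total costs and the number of stability points of the closed loop with the nominal system.

% Sweep over n and r (the helper functions are hypothetical placeholders).
N       = 41;
costs   = nan(N, 2*N);
stabpts = nan(N, 2*N);
for n = 3:N
    sys_n = build_fd_model(n);                    % 2n-state F-D approximation
    for r = 0:(2*n - 2)                           % keep at least two states
        sys_red = reduce_lqg(sys_n, r);           % (2n - r)-state model
        [costs(n, r+1), stabpts(n, r+1)] = evaluate_controller(sys_red);
    end
end
surf(costs);     % total costs over the (n, r) grid, cf. figure 11.1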

[Figure: surface plot of the total costs over the (2n, r) grid.]

Figure 11.1: The total costs for a F-D approximated system. First, the nominal system is
approximated with 2n states, then reduced with r states to 2n − r states (for each combination
of n and r), and connected to the nominal system. Zero costs means that the reduced system
is unstabilizable. Further, the costs are almost invariant for combinations of n and r, except
for 2n − r very small (along the diagonal in the figure) and for n very low, as expected.

[Figure: surface plot of the number of stability points over the (2n, r) grid.]

Figure 11.2: Size of the stability region for a F-D approximated system. The robustness is
almost invariant of various choices of n and r.

[Figure: surface plot of the total costs over the (2n, r) grid.]

Figure 11.3: The total costs for a modally approximated system. The costs are invariant of
various choices of n and r.

[Figure: surface plot of the number of stability points over the (2n, r) grid.]

Figure 11.4: Size of the stability region for a modally approximated system. The stability
region is invariant of various choices of n and r.

Chapter 12

Conclusions

A robust controller is established based on a coprime factor perturbed plant. This plant can
be written in the form of a (more general) H∞ -optimization problem, and its controller as a
special case of the controller corresponding to the H∞ -optimization problem. The (manipu-
lation of the) properties of the controller are investigated, and tested by means of controller
performance; indicated by costs and robustness. The physical plant is modelled as a par-
tial differential equation with infinite-dimensional state-space, and the nominal model, which
represents the system properties of the physical plant, is modelled as a high-order numerical
approximation of the PDE. The standard controller based on the nominal plant is of the same
high order.
The main goal of this project was to investigate the two techniques for obtaining a well-
performing low-order controller, namely by model reduction by means of LQG balanced trun-
cations, and by means of coarse numerical approximations. Two numerical approximation
techniques are used; finite-differences, and modal (a so-called spectral method). The low-
order controller is based on a low-order system, obtained via these techniques, connected to
the nominal model, and tested on its performance.
It turns out that, for the beam system in this thesis, LQG balanced truncations are a very effective tool for obtaining a well-performing low-order controller. A high-order system does not seem to lose any vital information when it is truncated down to four states, because a controller based on the truncated system performs the same as the one based on the high-order system.
However, for this system, LQG balanced truncations are more or less obsolete. Model reduction by a coarse numerical modal approximation of the nominal system yields the same controller performance as reduction by means of LQG balanced truncations. A coarse finite-difference approximation yields a low-order controller that performs substantially less well than with LQG balanced truncations.
The drawbacks of modal approximation are that it is not applicable to every type of PDE, and that it only works well when the solutions of the PDE are smooth, i.e. when they do not have steep gradients. Finite differences, in contrast, are applicable to every type of PDE, and are recommended in the case of steep gradients, for example when modelling a shock wave. When a system is modelled with use of finite differences, LQG balanced truncation seems a very effective and even necessary tool for obtaining a well-performing low-order controller.
For large systems, LQG balanced truncation has some drawbacks w.r.t. computational costs and numerical errors. In practice, when a controller construction has to be carried out many times, it is therefore recommended to use a combination of both reduction methods, i.e. approximate the nominal system with a lower-order system, and then apply LQG balanced truncations. It is shown that this has only a small effect on the controller performance.

Acknowledgements
First of all I would like to thank Mark Opmeer for all the nice discussions and valuable
suggestions. I am also grateful to Fred Wubs for helping with the numerical aspects of this
thesis, and to Ruth Curtain for her comments, and especially for giving me the liberty of
research I desired. And to Esther Moet and Manon Hegeman: Thanks for the typing work!

Simon van Mourik

Appendix A

Program description of ’Runbeam’

The computer program that is designed for this research is called 'Runbeam'. It is written in Matlab code. In the (input) file 'Runbeam' all the options for constructing a suitable plant and controller are shown. The different options are each called in different files, which creates a clear program structure. Runbeam constructs the state-space form of a plant in the 'Init' file. In this thesis it is a beam, but any other type of plant can be constructed as well. (One needs to adapt the graphical output and the definition of costs to that particular plant, though.)

A.1 Input options


A brief description of Runbeam is given by a list of input options.

1 A number of simulations can be chosen. For each simulation, one can specify what
that simulation looks like. For example, n, the n.o. discretization points in space of
the system that a controller is designed for. Similarly, one can choose N , the n.o. dis-
cretization points of the nominal system, r the n.o. states eliminated by LQG balanced
truncations, or the type of numerical approximation (F-D or modal).

2 State reduction by LQG balanced truncations can be done in two ways, as explained in chapter 7: standard LQG balanced reduction or LQG balanced reduction with use of the 'Schur' method. A feature of the program is that if, within a simulation set, only r (the number of eliminated states) varies, the CARE and the FARE are solved only once, for both reduction methods, in order to obtain a balanced and truncated system.

3 Three types of controllers can be chosen:

– The central controller from chapter 4.


– The central controller from chapter 5.
– The LQG controller from chapter 4.

4 Controller states reduction, with the same properties and options as for plant state
reduction.

5 The stability region, if chosen, is calculated. The n.o. stability points is calculated, and
as an extra option, the stability region is displayed. The n.o. points in the chosen area
can be distributed logarithmically or linearly.

6 Computation of the gap between ΣG^N and ΣG^n, defined in chapter 7.

7 The eigenvalues of A and of

      [ A     B Cc
        Bc C  Ac  ]                                               (A.1)

   are plotted in one figure to see how the controller affects the eigenvalues of A.

8 A stabilizability and detectability test. It holds that (A, B) of ΣG is stabilizable ⇐⇒ for all λi, xi such that x∗i A = λi x∗i with Re(λi) ≥ 0, it holds that x∗i B ≠ 0. (A, C) is detectable ⇐⇒ (A∗, C∗) is stabilizable. Stabilizability is displayed by

      V = min_i { max_j { Vi(j) } },

   where Vi(j) denotes the jth element of Vi = x∗i B. If the displayed V is small, the stabilizability is poor. Detectability is defined similarly; a sketch of the test is given after this list.
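A minimal Matlab sketch of this test, valid for the real matrices used here, could look as follows; A and B are assumed to be in the workspace, and V remains Inf if A has no eigenvalues in the closed right half-plane.

% PBH-type stabilizability indicator (sketch of the test described in item 8).
[X, D]  = eig(A');                  % for real A, columns of X give left eigenvectors
lambda  = diag(D);
V = Inf;
for i = 1:length(lambda)
    if real(lambda(i)) >= 0
        Vi = abs(X(:,i)' * B);      % |x_i^* B|, one entry per input
        V  = min(V, max(Vi));
    end
end
disp(V)                             % a small V signals poor stabilizability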

For the general central controller, appropriate weighting matrices can be chosen. An extra feature is that when the first central controller is used, the standard LQG reduction method is applied, and only r varies in a set of simulations, a controller is constructed in each simulation but the CARE and FARE are solved only once, which saves computational time. Of course there is also an option for choosing no controller at all.

A.2 Dynamics
There is an option for choosing a dynamic simulation of the beam. If yes, the simulation
calculates the control/displacement costs of the system. The final time, time step, θ, and the
plot time step can be chosen. In case of numerical instability, a message is displayed. Further
options are:

• Impulse response, where the behaviour of the beam is computed under the influence of an impulse on u(1) or u(2), or both.

• Display of a real-time movie of the beam in 2D.

A.3 Postprocessing
In order to visualize certain properties of the controlled plant, several plots can be chosen for display.

• A 2D plot that displays the evolution of the displacement of the middle of the beam as
a function of time.

• A 3D plot that displays the evolution of the whole beam in time.

• A Bode plot and a frequency response plot of the plant and/or controller. Input is the
frequency range.

• The functional gains display the evolution in time of the controller outputs u(1) and
u(2).

During the simulation set, the following properties (if relevant) are displayed for each simulation:

• The number of the current simulation.

• The robust stability margin εmax.

• (In)stability of (P^n, K^n).

• (In)stability of (P^N, K^N).

• Estimation of the computational time for determination of the stability region.

• Numerical instability.

• Estimation of the computational time for determination of the costs.

• The input/displacement costs.

A.4 Extension of Runbeam


In order to produce the graphs in chapter 11, the extension of 'Runbeam', 'Runrunbeam', is designed. Input is N, as defined in chapter 7, and for each n between 3 and N, Runbeam is called and the desired simulations are carried out. The desired simulations involve computation of

• the gap,

• the total costs, and

• the stability region,

for all possible combinations of n and r. The results are stored in a matrix, and visualized
with a 3D graph.

Bibliography

[1] K. Zhou and J.C. Doyle. 'Essentials of Robust Control'. Prentice-Hall, 1998.

[2] J. Bontsema. 'Dynamic Stabilization of Large Flexible Space Structures'. Ph.D. thesis, 1989.

[3] K.A.E. Camp. 'The Search for a Reduced Order Controller: Comparison of Balanced Reduction Techniques'. Master's thesis, Virginia State University, 2001.

[4] J.T. van der Vaart. 'Robust Control of a Flexible Beam'. Master's thesis, RUG, 1989.

[5] M.H. Klompstra. 'Simulation Package for Flexible Systems'. Master's thesis, RUG, 1987.

[6] R.F. Curtain and H. Zwart. 'An Introduction to Infinite-Dimensional Linear Systems Theory'. Springer-Verlag, New York, 1995.

[7] D.C. McFarlane and K. Glover. 'Robust Controller Design Using Normalized Coprime Factor Plant Descriptions'. Lecture Notes in Control and Information Sciences 138, Springer-Verlag, 1987.

[8] M.G. Safonov and R.Y. Chiang. 'A Schur method for balanced-truncation model reduction'. IEEE Transactions on Automatic Control, 34, 729, 1989.
