
CDS 212 - Solutions to the Midterm Exam

Instructor: Danielle C. Tarraf


November 6, 2007

Problem 1

(a) Recall that the H∞ norm of a transfer function is time-delay invariant.


Hence:
$$\left\|\hat G(s)\right\|_\infty = \left\|\frac{1}{s+a}\right\|_\infty = \sup_{\omega\in\mathbb{R}}\left|\frac{1}{j\omega+a}\right| = \sup_{\omega\in\mathbb{R}}\left(\frac{1}{a^2+\omega^2}\right)^{1/2} = \frac{1}{a}$$
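As a quick numerical cross-check (a sketch, not part of the exam solution; a = 2 is an assumed test value), the peak gain of 1/(s + a) over the imaginary axis should come out to 1/a:

```python
import numpy as np

# Sanity check for part (a): sup over a frequency grid of |1/(jw + a)|.
# a = 2.0 is an assumed test value, not from the exam.
a = 2.0
w = np.linspace(-100.0, 100.0, 200_001)   # grid includes w = 0
gain = np.abs(1.0 / (1j * w + a))
hinf_estimate = gain.max()                 # sup is attained at w = 0
print(hinf_estimate)                       # 0.5 = 1/a
```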

(b) The transfer function of the system is Ĝ(s) = 1/(s + 1), with corresponding H2 norm:

$$\|\hat G\|_2^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat G^*(j\omega)\hat G(j\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{1-j\omega}\cdot\frac{1}{1+j\omega}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{1+\omega^2} = \frac{1}{2\pi}\Big[\arctan\omega\Big]_{-\infty}^{\infty} = \frac{1}{2}$$

hence ‖Ĝ‖₂ = 1/√2.
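The frequency-domain integral above can be approximated numerically (a sketch, not part of the original solution; the grid truncation at |ω| = 1000 is an arbitrary choice):

```python
import numpy as np

# Sanity check for part (b): approximate (1/2π)∫|Ĝ(jw)|² dw for Ĝ(s) = 1/(s+1)
# on a truncated frequency grid; the exact value is 1/2.
w = np.linspace(-1000.0, 1000.0, 2_000_001)
dw = w[1] - w[0]
integrand = 1.0 / (1.0 + w**2)          # |Ĝ(jw)|² = 1/(1 + w²)
h2_sq = integrand.sum() * dw / (2 * np.pi)
print(h2_sq)                             # ≈ 0.5, so ‖Ĝ‖₂ ≈ 1/√2
```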

[Figure 1 (for problem 1(d)): parallel interconnection of 1/(s − 1), with input û1 and output x̂0, and 1/(s + 2), with input û2 and output x̂2; the two branch outputs are summed.]

(c) Recall that

$$\|y\|_\infty \le \|G\|_1 \|u\|_\infty$$

where G(t) is the impulse response of the system. Here, G(t) = te^{−t} for t ≥ 0, with corresponding L1 norm:


$$\|G\|_1 = \int_{-\infty}^{\infty}|G(t)|\,dt = \int_0^{\infty}te^{-t}\,dt = \Big[-te^{-t}\Big]_0^{\infty} + \int_0^{\infty}e^{-t}\,dt = \Big[-e^{-t}\Big]_0^{\infty} = 1$$
and the amplitude of the output cannot exceed 1.
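The L1-norm computation can be checked numerically (a sketch, not part of the original solution; the truncation at t = 50 is an arbitrary choice):

```python
import numpy as np

# Sanity check for part (c): approximate ∫₀^∞ t·e^{-t} dt on a truncated grid;
# the exact value is 1, confirming ‖G‖₁ = 1.
t = np.linspace(0.0, 50.0, 500_001)
dt = t[1] - t[0]
g = t * np.exp(-t)
l1 = np.abs(g).sum() * dt
print(l1)   # ≈ 1
```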

(d) Consider the parallel interconnection of two first order systems as shown
in the Figure above, and note by inspection that Ĝ is indeed the corre-
sponding transfer matrix.
Let x0(t) = 𝓛⁻¹(x̂0(s)) and x2(t) = 𝓛⁻¹(x̂2(s)). We have:

$$\dot x_2 + 2x_2 = u_2, \qquad \dot x_0 - x_0 = \dot u_1, \qquad y = x_0 + x_2$$

Set x1 = x0 − u1, with ẋ1 = ẋ0 − u̇1 = x0 = x1 + u1. The state-space
equations of the system are then:

$$\begin{bmatrix} \dot x_1\\ \dot x_2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & -2 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} + \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} u_1\\ u_2 \end{bmatrix}$$

$$y = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} + \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} u_1\\ u_2 \end{bmatrix}$$

(e) For the autonomous system (i.e. input identically 0), the state and output
trajectories for t ≥ 0 are given by:
$$x(t) = e^{At}x(0), \qquad y(t) = Ce^{At}x(0)$$

where e^{At} is defined by the infinite series:

$$e^{At} = I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \dots$$

Note that here:

$$A^2 = \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, \qquad A^k = 0 \text{ for } k \ge 3$$

Hence

$$e^{At} = \begin{bmatrix} 1 & t & t^2/2\\ 0 & 1 & t\\ 0 & 0 & 1 \end{bmatrix}$$

and

$$x(t) = \begin{bmatrix} 1 + 2t + t^2/2\\ 2 + t\\ 1 \end{bmatrix}, \qquad y(t) = 1 + 2t + t^2/2,$$

for t ≥ 0.
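The closed-form matrix exponential can be verified numerically (a sketch, not part of the original solution; A, C, and x(0) = [1, 2, 1]′ are inferred from the computation above, since the problem statement is not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# Sanity check for part (e): e^{At} for the nilpotent A should match the
# closed-form upper-triangular matrix, and y(t) = 1 + 2t + t²/2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])
x0 = np.array([1.0, 2.0, 1.0])     # assumed initial condition x(0)

t = 1.7                            # arbitrary test time
eAt = expm(A * t)
closed_form = np.array([[1.0, t, t**2 / 2],
                        [0.0, 1.0, t],
                        [0.0, 0.0, 1.0]])
y = (C @ eAt @ x0).item()
print(np.allclose(eAt, closed_form), y)   # True, y = 1 + 2t + t²/2
```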

(f) The transfer matrix is given by Ĝ(s) = C(sI − A)−1 B + D. Note that
due to the structure of C and B, we only need to compute the upper right
entry of (sI − A)−1 . Recall that
$$(sI-A)^{-1} = \frac{\operatorname{adj}(sI-A)}{\det(sI-A)}$$

We have:

$$sI - A = \begin{bmatrix} s & -1 & 0\\ 0 & s & -1\\ 0 & 0 & s \end{bmatrix}, \qquad \operatorname{adj}(sI-A) = \begin{bmatrix} * & * & 1\\ * & * & *\\ * & * & * \end{bmatrix}$$

and

$$\det(sI-A) = s\begin{vmatrix} s & -1\\ 0 & s \end{vmatrix} + 1\begin{vmatrix} 0 & -1\\ 0 & s \end{vmatrix} = s^3$$

Hence

$$\hat G(s) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\frac{1}{s^3}\begin{bmatrix} * & * & 1\\ * & * & *\\ * & * & * \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} + 1 = \frac{1}{s^3} + 1 = \frac{s^3+1}{s^3}$$
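The transfer function can be cross-checked by evaluating C(sI − A)⁻¹B + D at a test point (a sketch, not part of the original solution; B = [0 0 1]′, C = [1 0 0], and D = 1 are inferred from the structure used above):

```python
import numpy as np

# Sanity check for part (f): C(sI − A)⁻¹B + D should equal (s³ + 1)/s³.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = 1.0

s = 2.0 + 3.0j                     # arbitrary test point
G = (C @ np.linalg.inv(s * np.eye(3) - A) @ B).item() + D
print(np.isclose(G, (s**3 + 1) / s**3))   # True
```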

Problem 2
(a) FALSE. Consider a SISO system with A = B = 1, C = D = 0. A has an eigenvalue at 1, while the transfer function is identically 0 and hence has no poles; this is a very trivial example of an unobservable system.

(b) TRUE. We have:


$$\|\hat G_1\|_2^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat G^*(rj\omega)\hat G(rj\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat G^*(jv)\hat G(jv)\,\frac{1}{r}\,dv = r^{-1}\|\hat G\|_2^2$$

where the second equality follows from the change of integration variable v = rω.
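The scaling property can be checked numerically for a concrete example (a sketch, not part of the original solution; Ĝ(s) = 1/(s + 1), with ‖Ĝ‖₂² = 1/2 from Problem 1(b), and r = 3 are assumed test choices):

```python
import numpy as np

# Sanity check for part (b): the H2 norm squared of Ĝ(rs) should be
# r⁻¹·‖Ĝ‖₂² = (1/r)·(1/2) for Ĝ(s) = 1/(s + 1).
r = 3.0
w = np.linspace(-2000.0, 2000.0, 2_000_001)
dw = w[1] - w[0]
G1_sq = np.abs(1.0 / (1j * r * w + 1.0))**2      # |Ĝ(r·jw)|²
h2_sq_scaled = G1_sq.sum() * dw / (2 * np.pi)
print(h2_sq_scaled)                               # ≈ 1/(2r) ≈ 0.1667
```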

(c) FALSE. Consider the case where M = aI for some scalar |a| < 1. Clearly, there exists a matrix D = I = D⁻¹ such that:

$$\|DMD^{-1}\|_2 = \|M\|_2 = |a|\cdot\|I\|_2 = |a| < 1$$

However, the LMI:

$$\begin{bmatrix} -X & 0\\ 0 & a^*IXIa + X \end{bmatrix} = \begin{bmatrix} -X & 0\\ 0 & (|a|^2+1)X \end{bmatrix} < 0$$

is not feasible, since it requires X to be both positive definite and negative definite. The correct statement will be derived later on in the class!

(d) TRUE. One way of proving this is by using state-space methods to com-
pute the H2 norms of the relevant systems (see Section 2.6 in DFT). Pos-
sible state-space realizations of systems S1 and S2 described by transfer
functions Ĝ1 and Ĝ2 are given by:

$$S_1 = \left[\begin{array}{c|c} -a_1 & 1\\ \hline 1 & 0 \end{array}\right], \qquad S_2 = \left[\begin{array}{c|c} -a_2 & 1\\ \hline 1 & 0 \end{array}\right]$$

System S, the cascade interconnection of S1 and S2, with corresponding transfer function Ĝ(s) = Ĝ2(s)Ĝ1(s), then has the following state-space realization:

$$S = \left[\begin{array}{cc|c} -a_1 & 0 & 1\\ 1 & -a_2 & 0\\ \hline 0 & 1 & 0 \end{array}\right]$$
Recall that for a system

$$S = \left[\begin{array}{c|c} A & B\\ \hline C & D \end{array}\right]$$

with A Hurwitz, the H2 norm of the corresponding transfer function Ĝ can be computed as:

$$\|\hat G\|_2^2 = CLC'$$

where L is the solution to the Lyapunov equation:

$$AL + LA' = -BB'$$

For the first-order systems S1 and S2, we thus have:

$$\|\hat G_i\|_2^2 = \frac{c_i^2 b_i^2}{2a_i} = \frac{1}{2a_i}$$
For the second-order system S, the relevant Lyapunov equation is:

$$\begin{bmatrix} -a_1 & 0\\ 1 & -a_2 \end{bmatrix}\begin{bmatrix} l_1 & l_0\\ l_0 & l_2 \end{bmatrix} + \begin{bmatrix} l_1 & l_0\\ l_0 & l_2 \end{bmatrix}\begin{bmatrix} -a_1 & 1\\ 0 & -a_2 \end{bmatrix} = -\begin{bmatrix} 1\\ 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix}$$

Solving the corresponding system of three equations in three unknowns:

$$-2a_1 l_1 + 1 = 0, \qquad -(a_1+a_2)l_0 + l_1 = 0, \qquad l_0 - a_2 l_2 = 0$$

we get:

$$l_1 = \frac{1}{2a_1}, \qquad l_0 = \frac{1}{2a_1(a_1+a_2)}, \qquad l_2 = \frac{1}{2a_1 a_2(a_1+a_2)}$$

Hence ‖Ĝ‖₂² = CLC′ = l₂, and

$$\|\hat G\|_2 \le \|\hat G_1\|_2\|\hat G_2\|_2 \;\Leftrightarrow\; \|\hat G\|_2^2 \le \|\hat G_1\|_2^2\,\|\hat G_2\|_2^2 \;\Leftrightarrow\; \frac{1}{2a_1a_2(a_1+a_2)} \le \frac{1}{2a_1}\cdot\frac{1}{2a_2} \;\Leftrightarrow\; a_1 + a_2 \ge 2.$$
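The Lyapunov-based H2 computation can be verified numerically (a sketch, not part of the original solution; a1 = 1 and a2 = 2 are assumed test values):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sanity check for part (d): the H2 norm squared of the cascade should equal
# l2 = 1/(2·a1·a2·(a1 + a2)).
a1, a2 = 1.0, 2.0
A = np.array([[-a1, 0.0], [1.0, -a2]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

# solve_continuous_lyapunov(a, q) solves a·X + X·aᴴ = q; here q = -B·Bᵀ.
L = solve_continuous_lyapunov(A, -B @ B.T)
h2_sq = (C @ L @ C.T).item()
print(h2_sq)   # 1/(2·1·2·3) = 1/12 ≈ 0.0833
```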

(e) TRUE. Note that I − ∆A being singular is equivalent to the existence of a v ≠ 0 such that:

$$(I - \Delta A)v = 0 \;\Leftrightarrow\; v = \Delta A v \;\Rightarrow\; \|v\|_2 = \|\Delta A v\|_2 \le \|\Delta\|_2\|A\|_2\|v\|_2 \;\Leftrightarrow\; \|\Delta\|_2 \ge \frac{1}{\|A\|_2} = \frac{1}{\sigma_{\max}(A)}$$
To show that this lower bound can be achieved, let U ΣV ∗ be a singular
value decomposition of A and consider the perturbation matrix
$$\Delta = V\tilde\Sigma U^*$$

where

$$\tilde\Sigma = \begin{bmatrix} \dfrac{1}{\sigma_{\max}(A)} & 0 & \dots & 0\\ 0 & 0 & & \vdots\\ \vdots & & \ddots & \vdots\\ 0 & \dots & \dots & 0 \end{bmatrix}$$

Note that ‖∆‖₂ = 1/σmax(A) and that:

$$I - \Delta A = I - V\tilde\Sigma U^* U\Sigma V^* = I - V\tilde\Sigma\Sigma V^* = V\left(I - \begin{bmatrix} 1 & 0 & \dots & 0\\ 0 & 0 & & \vdots\\ \vdots & & \ddots & \vdots\\ 0 & \dots & \dots & 0 \end{bmatrix}\right)V^*$$

with (I − ∆A)v1 = 0, where v1 is the first column of V; hence I − ∆A is singular.
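The construction can be checked numerically (a sketch, not part of the original solution; the random 4×4 test matrix is an assumed choice):

```python
import numpy as np

# Sanity check for part (e): build ∆ = V·Σ̃·Uᴴ from the SVD of a test matrix A,
# confirm ‖∆‖₂ = 1/σmax(A) and that I − ∆A is singular.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
U, sigma, Vh = np.linalg.svd(A)        # A = U·diag(sigma)·Vh

Sigma_tilde = np.zeros((4, 4))
Sigma_tilde[0, 0] = 1.0 / sigma[0]     # sigma[0] = σmax(A)
Delta = Vh.conj().T @ Sigma_tilde @ U.conj().T

norm_Delta = np.linalg.norm(Delta, 2)
det_val = np.linalg.det(np.eye(4) - Delta @ A)
print(norm_Delta, det_val)             # 1/σmax(A), ≈ 0
```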

Problem 3

(a) Let x1 = y and x2 = ẏ − u. We then have:

$$\dot x_1 = \dot y = x_2 + u$$

and

$$\dot x_2 = \ddot y - \dot u = -a\dot y - by - cy^2 + u = -a(x_2 + u) - bx_1 - cx_1^2 + u = -bx_1 - ax_2 - cx_1^2 + (1-a)u$$

The corresponding second-order state-space realization is:

$$\dot x_1 = x_2 + u, \qquad \dot x_2 = -bx_1 - ax_2 - cx_1^2 + (1-a)u, \qquad y = x_1$$

Note that this realization is not unique; it is simply one of many possible realizations.
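The realization can be checked by simulation (a sketch, not part of the original solution; the underlying ODE ÿ + aẏ + by + cy² = u + u̇ is inferred from the derivation above, and the input u(t) = 0.5 sin t, initial condition, and parameter values are assumed test choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sanity check for part (a): the realization's output x1 should match y from
# directly integrating the assumed second-order ODE.
a, b, c = 3.0, 2.0, 2.0

def u(t):  return 0.5 * np.sin(t)   # test input with known derivative
def du(t): return 0.5 * np.cos(t)

def direct(t, z):                   # z = [y, ẏ]
    y, v = z
    return [v, -a * v - b * y - c * y**2 + u(t) + du(t)]

def realization(t, x):              # x = [x1, x2]
    x1, x2 = x
    return [x2 + u(t), -b * x1 - a * x2 - c * x1**2 + (1 - a) * u(t)]

y0, v0 = 0.2, 0.0
T = np.linspace(0.0, 5.0, 101)
sol1 = solve_ivp(direct, (0, 5), [y0, v0], t_eval=T, rtol=1e-9, atol=1e-12)
# x1(0) = y(0), x2(0) = ẏ(0) − u(0)
sol2 = solve_ivp(realization, (0, 5), [y0, v0 - u(0)], t_eval=T,
                 rtol=1e-9, atol=1e-12)
err = np.max(np.abs(sol1.y[0] - sol2.y[0]))
print(err)   # small: the two outputs agree
```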

(b) When a = 3, b = 2 and c = 2, the state equations of the autonomous


system reduce to:

$$\dot x_1 = x_2, \qquad \dot x_2 = -2x_1 - 3x_2 - 2x_1^2$$

and the corresponding linearized (about the origin) dynamics are given by:

$$\begin{bmatrix} \dot\delta_1\\ \dot\delta_2 \end{bmatrix} = \begin{bmatrix} 0 & 1\\ -2 & -3 \end{bmatrix}\begin{bmatrix} \delta_1\\ \delta_2 \end{bmatrix}$$
The eigenvalues of the linearized dynamics are λ1 = −1 and λ2 = −2.
Since the linearized dynamics are asymptotically stable, and since for the nonlinear term f(x) = −2x₁² we have:

$$\lim_{\|\delta\|_2\to 0}\frac{\left\|f(\delta) - f(0) - f'(x)\big|_0\,\delta\right\|_2}{\|\delta\|_2} = \lim_{\|\delta\|_2\to 0}\frac{2\delta_1^2}{\|\delta\|_2} = 0$$

we conclude that the equilibrium point at the origin is locally asymptot-


ically stable. Clearly, it cannot be globally stable as there exists another
equilibrium point at (−1, 0).
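Both claims can be checked numerically (a sketch, not part of the original solution):

```python
import numpy as np

# Sanity check for part (b): the linearization at the origin should have
# eigenvalues −1 and −2, and (−1, 0) should also be an equilibrium.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                        # [-2. -1.]

def f(x1, x2):
    return np.array([x2, -2 * x1 - 3 * x2 - 2 * x1**2])

print(f(-1.0, 0.0))                # [0. 0.]
```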

(c) The dynamics of the autonomous nonlinear system are given by:

$$\dot x = Ax + f(x)$$

where

$$A = \begin{bmatrix} 0 & 1\\ -2 & -3 \end{bmatrix} \qquad \text{and} \qquad f(x) = \begin{bmatrix} 0\\ -2x_1^2 \end{bmatrix}$$
Let V(x) = x′Px be a Lyapunov function for the linearized system; then P is a solution of the Lyapunov equation A′P + PA = −Q for some Q > 0, and along the trajectories of the nonlinear system we have:

$$\dot V(x) = \dot x'Px + x'P\dot x = [x'A' + f'(x)]Px + x'P[Ax + f(x)] = -x'Qx + 2x'Pf(x) \le -\lambda_{\min}(Q)\|x\|_2^2 + 2\lambda_{\max}(P)\|f(x)\|_2\|x\|_2$$

Since ‖f(x)‖₂ = 2x₁² ≤ ε|x₁| ≤ ε‖x‖₂ whenever |x₁| ≤ ε/2, this yields

$$\dot V(x) \le \left[-\lambda_{\min}(Q) + 2\lambda_{\max}(P)\varepsilon\right]\|x\|_2^2 \qquad \text{whenever } |x_1| \le \varepsilon/2$$

Thus, the function V : B → R is a Lyapunov function for the nonlinear system in the neighborhood

$$B = \left\{x\in\mathbb{R}^2 \;:\; \|x\|_2 < \varepsilon/2\right\}$$

for ε < λmin(Q)/(2λmax(P)), for any Q > 0 and corresponding solution P to the Lyapunov equation; in particular, trajectories starting in this neighborhood will converge asymptotically to the origin. For instance, for Q = I with λmin(Q) = 1, we have

$$P = \frac{1}{4}\begin{bmatrix} 5 & 1\\ 1 & 1 \end{bmatrix}$$

with λmax(P) = (3 + √5)/4, and the corresponding provable region of attraction:

$$B = \left\{x\in\mathbb{R}^2 \;:\; \|x\|_2 < \frac{1}{3+\sqrt 5}\right\}$$

2 3+2 5

Ideally, we would like to choose Q > 0 to maximize the ratio λmin(Q)/(2λmax(P)), which is not an easy problem. What is straightforward, though, is computing an upper bound for this ratio, which establishes the limitations of this approach to finding a region of attraction. We have:

$$2\lambda_{\max}(P)\|A\| = 2\|P\|\|A\| \ge 2\|PA\| \ge \|A'P + PA\| = \|-Q\| \ge \lambda_{\min}(Q)$$

which implies:

$$\frac{\lambda_{\min}(Q)}{2\lambda_{\max}(P)} \le \|A\|_2$$
