Problem 1
(b) The transfer function of the system is Ĝ(s) = 1/(s + 1), with corresponding
H2 norm:
\begin{align*}
\|\hat G\|_2^2 &= \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat G^*(j\omega)\,\hat G(j\omega)\,d\omega\\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{1-j\omega}\cdot\frac{1}{1+j\omega}\,d\omega\\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{1+\omega^2}\,d\omega\\
&= \frac{1}{2\pi}\arctan\omega\,\Big|_{-\infty}^{\infty}\\
&= \frac{1}{2}
\end{align*}
hence \(\|\hat G\|_2 = \frac{1}{\sqrt{2}}\).
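As a quick sanity check, the integral above can be evaluated numerically; a minimal sketch using `scipy.integrate.quad`, with the integrand 1/(2π(1 + ω²)) taken directly from the derivation:

```python
import numpy as np
from scipy.integrate import quad

# |G(jw)|^2 / (2*pi) = 1 / (2*pi*(1 + w^2)) for G(s) = 1/(s+1)
val, err = quad(lambda w: 1.0 / (2.0 * np.pi * (1.0 + w**2)), -np.inf, np.inf)

h2_norm_sq = val            # should be 1/2
h2_norm = np.sqrt(val)      # should be 1/sqrt(2)
```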
[Figure: parallel interconnection — û₁ drives the block s/(s − 1) producing x̂₀, û₂ drives the block 1/(s + 2) producing x̂₂, and the outputs are summed: ŷ = x̂₀ + x̂₂.]
(d) Consider the parallel interconnection of two first order systems as shown
in the Figure above, and note by inspection that Ĝ is indeed the corresponding
transfer matrix.
Let x₀(t) = L⁻¹(x̂₀(s)) and x₂(t) = L⁻¹(x̂₂(s)). We have:
\begin{align*}
\dot x_2 + 2x_2 &= u_2\\
\dot x_0 - x_0 &= \dot u_1\\
y &= x_0 + x_2
\end{align*}
Set x₁ = x₀ − u₁, with ẋ₁ = ẋ₀ − u̇₁ = x₀ = x₁ + u₁. The state-space
equations of the system are then:
\begin{align*}
\begin{bmatrix}\dot x_1\\ \dot x_2\end{bmatrix} &= \begin{bmatrix}1 & 0\\ 0 & -2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}u_1\\ u_2\end{bmatrix}\\
y &= \begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}u_1\\ u_2\end{bmatrix}
\end{align*}
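This realization can be checked against the transfer matrix Ĝ(s) = [s/(s − 1), 1/(s + 2)] read off the block diagram, by evaluating C(sI − A)⁻¹B + D at a sample point; a minimal numerical sketch:

```python
import numpy as np

# State-space matrices from the realization above
A = np.array([[1.0, 0.0], [0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 1.0]])
D = np.array([[1.0, 0.0]])

s0 = 2.0 + 3.0j                                   # arbitrary test point, not a pole
G = C @ np.linalg.inv(s0 * np.eye(2) - A) @ B + D
expected = np.array([[s0 / (s0 - 1.0), 1.0 / (s0 + 2.0)]])
```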
(e) For the autonomous system (i.e. input identically 0), the state and output
trajectories for t ≥ 0 are given by:
\begin{align*}
x(t) &= e^{At}x(0)\\
y(t) &= Ce^{At}x(0)
\end{align*}
where e^{At} is defined by the infinite series:
\[
e^{At} = I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \cdots
\]
Note that here:
\[
A^2 = \begin{bmatrix}0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}, \qquad A^k = 0 \ \text{for } k \ge 3
\]
Hence
\[
e^{At} = \begin{bmatrix}1 & t & t^2/2\\ 0 & 1 & t\\ 0 & 0 & 1\end{bmatrix}
\]
and
\[
x(t) = \begin{bmatrix}1 + 2t + t^2/2\\ 2 + t\\ 1\end{bmatrix}, \qquad y(t) = 1 + 2t + t^2/2,
\]
for t ≥ 0.
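A numerical sketch of this part: A is the nilpotent matrix consistent with the A² shown above (and with part (f)), x(0) = (1, 2, 1)′ is read off from the closed-form x(t) at t = 0, and C = [1 0 0] as in part (f):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
x0 = np.array([1.0, 2.0, 1.0])
C = np.array([1.0, 0.0, 0.0])

t = 0.7
eAt = expm(A * t)
closed_form = np.array([[1.0, t, t**2 / 2],
                        [0.0, 1.0, t],
                        [0.0, 0.0, 1.0]])
x_t = eAt @ x0          # state trajectory at time t
y_t = C @ x_t           # output trajectory at time t
```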
(f) The transfer matrix is given by Ĝ(s) = C(sI − A)⁻¹B + D. Note that
due to the structure of C and B, we only need to compute the upper right
entry of (sI − A)⁻¹. Recall that
\[
(sI - A)^{-1} = \frac{\operatorname{adj}(sI - A)}{\det(sI - A)}
\]
We have:
\[
sI - A = \begin{bmatrix}s & -1 & 0\\ 0 & s & -1\\ 0 & 0 & s\end{bmatrix}, \qquad \operatorname{adj}(sI - A) = \begin{bmatrix}* & * & 1\\ * & * & *\\ * & * & *\end{bmatrix}
\]
and
\[
\det(sI - A) = s\begin{vmatrix}s & -1\\ 0 & s\end{vmatrix} + 1\begin{vmatrix}0 & -1\\ 0 & s\end{vmatrix} = s^3
\]
Hence
\[
\hat G(s) = \begin{bmatrix}1 & 0 & 0\end{bmatrix}\frac{1}{s^3}\begin{bmatrix}* & * & 1\\ * & * & *\\ * & * & *\end{bmatrix}\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} + 1 = \frac{1}{s^3} + 1 = \frac{s^3 + 1}{s^3}
\]
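The final expression can be spot-checked numerically with the realization used above (A from part (e), B = e₃, C = e₁′, D = 1):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0]])

s0 = 1.5 + 0.5j                                   # arbitrary test point
G = (C @ np.linalg.inv(s0 * np.eye(3) - A) @ B + D)[0, 0]
# G should equal (s0^3 + 1)/s0^3
```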
Problem 2
(a) FALSE. Consider a SISO system with A = B = 1, C = D = 0. A has an eigenvalue
at 1, while the transfer function is identically 0 and hence has no
poles; this is a very trivial example of an unobservable system.
(c) FALSE. Consider the case where M = aI for some scalar |a| < 1. Clearly,
there exists a matrix D = I = D⁻¹ such that:
\[
\|DMD^{-1}\|_2 = \|M\|_2 = |a|\cdot\|I\|_2 = |a| < 1
\]
However, the LMI:
\[
\begin{bmatrix}-X & 0\\ 0 & (aI)^* X (aI) + X\end{bmatrix} = \begin{bmatrix}-X & 0\\ 0 & (|a|^2 + 1)X\end{bmatrix} < 0
\]
is not feasible, since it requires X to be both positive definite and negative
definite. The correct statement will be derived later on in the class!
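The infeasibility can be illustrated numerically; a sketch restricted to scalar X = xI, with hypothetical test values not part of the original solution:

```python
import numpy as np

# With M = aI and X = xI, the LMI matrix is diag(-x, (|a|^2 + 1) x):
# -x < 0 forces x > 0, while (|a|^2 + 1) x < 0 forces x < 0.
a = 0.5
max_eigs = []
for x in (1.0, -1.0):                 # candidate X = I and X = -I
    X = x * np.eye(1)
    lmi = np.block([[-X, np.zeros((1, 1))],
                    [np.zeros((1, 1)), (abs(a) ** 2 + 1.0) * X]])
    max_eigs.append(np.linalg.eigvalsh(lmi)[-1])
# in both cases the largest eigenvalue is positive, so the LMI is never < 0
```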
(d) TRUE. One way of proving this is by using state-space methods to compute
the H2 norms of the relevant systems (see Section 2.6 in DFT). Possible
state-space realizations of systems S₁ and S₂ described by transfer
functions Ĝ₁ and Ĝ₂ are given by:
\[
S_1 = \begin{bmatrix}-a_1 & 1\\ 1 & 0\end{bmatrix}, \qquad S_2 = \begin{bmatrix}-a_2 & 1\\ 1 & 0\end{bmatrix}
\]
System S, the cascade interconnection of S1 and S2 , with corresponding
transfer function Ĝ(s) = Ĝ2 (s)Ĝ1 (s), then has the following state space
realization:
\[
S = \begin{bmatrix}-a_1 & 0 & 1\\ 1 & -a_2 & 0\\ 0 & 1 & 0\end{bmatrix}
\]
Recall that for a system
\[
S = \begin{bmatrix}A & B\\ C & D\end{bmatrix}
\]
with A Hurwitz (and D = 0), the H2 norm of the corresponding transfer function Ĝ
can be computed as:
\[
\|\hat G\|_2^2 = CLC'
\]
where L is the solution to the Lyapunov equation:
\[
AL + LA' = -BB'
\]
For the first order systems S₁ and S₂, we thus have:
\[
\|\hat G_i\|_2^2 = \frac{c_i^2 b_i^2}{2a_i} = \frac{1}{2a_i}
\]
(since here bᵢ = cᵢ = 1).
For the second order system S, the relevant Lyapunov equation is:
\[
\begin{bmatrix}-a_1 & 0\\ 1 & -a_2\end{bmatrix}\begin{bmatrix}l_1 & l_0\\ l_0 & l_2\end{bmatrix} + \begin{bmatrix}l_1 & l_0\\ l_0 & l_2\end{bmatrix}\begin{bmatrix}-a_1 & 1\\ 0 & -a_2\end{bmatrix} = -\begin{bmatrix}1\\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}
\]
Solving the corresponding system of three equations in three unknowns:
\begin{align*}
-2a_1 l_1 + 1 &= 0\\
-(a_1 + a_2)l_0 + l_1 &= 0\\
l_0 - a_2 l_2 &= 0
\end{align*}
we get:
\[
l_1 = \frac{1}{2a_1}, \qquad l_0 = \frac{1}{2a_1(a_1 + a_2)}, \qquad l_2 = \frac{1}{2a_1 a_2(a_1 + a_2)}
\]
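These closed-form expressions can be checked against a numerical Lyapunov solve for sample (hypothetical) values of a₁, a₂; with C = [0 1], the quantity CLC′ = l₂ is the squared H2 norm of the cascade:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

a1, a2 = 1.0, 3.0
A = np.array([[-a1, 0.0], [1.0, -a2]])
B = np.array([[1.0], [0.0]])
L = solve_continuous_lyapunov(A, -B @ B.T)     # solves A L + L A' = -B B'

l1 = 1.0 / (2.0 * a1)
l0 = 1.0 / (2.0 * a1 * (a1 + a2))
l2 = 1.0 / (2.0 * a1 * a2 * (a1 + a2))
h2_sq = (np.array([[0.0, 1.0]]) @ L @ np.array([[0.0], [1.0]]))[0, 0]
```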
where
\[
\tilde\Sigma = \begin{bmatrix}\frac{1}{\sigma_{\max}(A)} & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}
\]
Note that \(\|\Delta\|_2 = \frac{1}{\sigma_{\max}(A)}\) and that:
\begin{align*}
I - \Delta A &= I - V\tilde\Sigma U^* U\Sigma V^*\\
&= I - V\tilde\Sigma\Sigma V^*\\
&= V\left(I - \begin{bmatrix}1 & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}\right)V^*
\end{align*}
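A numerical sketch of this construction, assuming Δ = VΣ̃U* for an SVD A = UΣV* (a random test matrix is used here): the perturbation achieves norm 1/σmax(A) and makes I − ΔA singular.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
U, s, Vh = np.linalg.svd(A)              # A = U diag(s) Vh, s[0] = sigma_max(A)

St = np.zeros((n, n))
St[0, 0] = 1.0 / s[0]                    # Sigma_tilde
Delta = Vh.conj().T @ St @ U.conj().T    # Delta = V Sigma_tilde U*

norm_Delta = np.linalg.norm(Delta, 2)    # = 1/sigma_max(A)
min_sv = np.linalg.svd(np.eye(n) - Delta @ A, compute_uv=False)[-1]
# min_sv ~ 0: I - Delta A is singular
```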
Problem 3
\begin{align*}
\dot x_1 &= \dot y\\
&= x_2 + u
\end{align*}
and
\begin{align*}
\dot x_2 &= \ddot y - \dot u\\
&= -a\dot y - by - cy^2 + u\\
&= -a(x_2 + u) - bx_1 - cx_1^2 + u\\
&= -bx_1 - ax_2 - cx_1^2 + (1 - a)u
\end{align*}
Note that this realization is not unique; it is simply one of many possible
realizations.
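The realization can be sanity-checked by simulation. The derivation implies the underlying ODE is ÿ + aẏ + by + cy² = u + u̇; the sketch below uses an assumed test input u(t) = sin t and the sample values a = 3, b = 2, c = 2 matched by the linearization that follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 3.0, 2.0, 2.0
u, ud = np.sin, np.cos                     # assumed test input and its derivative

def second_order(t, z):                    # z = (y, y'), original ODE
    y, yd = z
    return [yd, -a*yd - b*y - c*y**2 + u(t) + ud(t)]

def realization(t, x):                     # x1 = y, x2 = y' - u
    x1, x2 = x
    return [x2 + u(t), -b*x1 - a*x2 - c*x1**2 + (1.0 - a)*u(t)]

y0, yd0 = 0.3, -0.1
T = np.linspace(0.0, 2.0, 50)
s1 = solve_ivp(second_order, (0, 2), [y0, yd0], t_eval=T, rtol=1e-9, atol=1e-12)
s2 = solve_ivp(realization, (0, 2), [y0, yd0 - u(0.0)], t_eval=T, rtol=1e-9, atol=1e-12)
# s1.y[0] (y from the ODE) and s2.y[0] (y = x1 from the realization) should agree
```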
and the corresponding linearized (about the origin) dynamics are given
by:
\[
\begin{bmatrix}\dot\delta_1\\ \dot\delta_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}\begin{bmatrix}\delta_1\\ \delta_2\end{bmatrix}
\]
The eigenvalues of the linearized dynamics are λ₁ = −1 and λ₂ = −2.
Since the linearized dynamics are asymptotically stable, the origin is a
locally asymptotically stable equilibrium of the nonlinear system (by Lyapunov's indirect method).
(c) The dynamics of the autonomous nonlinear system are given by:
\[
\dot x = Ax + f(x)
\]
where
\[
A = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix} \qquad\text{and}\qquad f(x) = \begin{bmatrix}0\\ -2x_1^2\end{bmatrix}
\]
Let V(x) = x′Px be a Lyapunov function for the linearized system; then
P is a solution of the Lyapunov equation A′P + PA = −Q for some Q > 0,
and along the trajectories of the nonlinear system, we have:
\begin{align*}
\dot V(x) &= \dot x' P x + x' P \dot x\\
&= [x'A' + f'(x)]Px + x'P[Ax + f(x)]\\
&= -x'Qx + 2x'Pf(x)\\
&\le -\lambda_{\min}(Q)\|x\|_2^2 + 2\lambda_{\max}(P)\|f(x)\|_2\|x\|_2\\
&\le [-\lambda_{\min}(Q) + 2\lambda_{\max}(P)\epsilon]\,\|x\|_2^2 \qquad \text{whenever } |x_1| \le \sqrt{\epsilon/2}
\end{align*}
Thus, the function V : B → R is a Lyapunov function for the nonlinear
system in the neighborhood
\[
B = \left\{x \in \mathbb{R}^2 : \|x\|_2 < \sqrt{\epsilon/2}\right\}
\]
for \(\epsilon < \frac{\lambda_{\min}(Q)}{2\lambda_{\max}(P)}\), for any Q > 0 and corresponding solution P to the
Lyapunov equation; in particular, trajectories starting in this neighborhood
will converge asymptotically to the origin. For instance, for Q = I
with λmin(Q) = 1, we have
\[
P = \frac{1}{4}\begin{bmatrix}5 & 1\\ 1 & 1\end{bmatrix}
\]
with \(\lambda_{\max}(P) = \frac{3+\sqrt{5}}{4}\) and the corresponding provable region of attraction:
\[
B = \left\{x \in \mathbb{R}^2 : \|x\|_2 < \frac{1}{\sqrt{3+\sqrt{5}}}\right\}
\]
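A quick numerical check that this P indeed solves A′P + PA = −I for Q = I, and of its largest eigenvalue:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = 0.25 * np.array([[5.0, 1.0], [1.0, 1.0]])

residual = A.T @ P + P @ A + np.eye(2)     # should vanish
lam_max = np.linalg.eigvalsh(P)[-1]        # equals (3 + sqrt(5))/4
```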
Ideally, we would like to choose Q > 0 to maximize the ratio \(\frac{\lambda_{\min}(Q)}{2\lambda_{\max}(P)}\),
which is not an easy problem. What is straightforward though is computing
an upper bound for this ratio, which establishes the limitations of this
approach in finding a region of attraction. We have:
\[
2\lambda_{\max}(P)\|A\|_2 = 2\|P\|_2\|A\|_2 \ge 2\|PA\|_2 \ge \|A'P + PA\|_2 = \|{-Q}\|_2 \ge \lambda_{\min}(Q)
\]
which implies:
\[
\frac{\lambda_{\min}(Q)}{2\lambda_{\max}(P)} \le \|A\|_2
\]
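The bound can be exercised numerically for the A at hand, with a few randomly generated (hypothetical) choices of Q > 0:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
normA = np.linalg.norm(A, 2)                 # spectral norm ||A||_2

rng = np.random.default_rng(1)
ratios = []
for _ in range(20):
    M = rng.standard_normal((2, 2))
    Q = M @ M.T + 0.1 * np.eye(2)            # random Q > 0
    P = solve_continuous_lyapunov(A.T, -Q)   # solves A' P + P A = -Q
    ratios.append(np.linalg.eigvalsh(Q)[0] / (2 * np.linalg.eigvalsh(P)[-1]))
# every ratio lambda_min(Q) / (2 lambda_max(P)) stays below ||A||_2
```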