
EEE 586

HW # 1 SOLUTIONS

Problem 1.4 Pick as states $x = [q;\ \dot q]$. Then
$$\frac{dx}{dt} = \begin{pmatrix} x_2 \\ M^{-1}(x_1)\left[u - C(x_1,x_2)x_2 - Dx_2 - g(x_1)\right] \end{pmatrix}$$

Problem 1.5 Pick as states $x = [q_1;\ \dot q_1;\ q_2;\ \dot q_2]$. Then
$$\frac{dx}{dt} = \begin{pmatrix} x_2 \\ -\frac{MgL}{I}\sin(x_1) - \frac{k}{I}(x_1 - x_3) \\ x_4 \\ \frac{k}{J}(x_1 - x_3) + \frac{1}{J}u \end{pmatrix}$$

Problem 1.6 Pick as states $x = [q_1;\ \dot q_1;\ q_2;\ \dot q_2]$. Then
$$\frac{dx}{dt} = \begin{pmatrix} x_2 \\ -M^{-1}(x_1)\left[h(x_1,x_2)x_2 + K(x_1 - x_3)\right] \\ x_4 \\ J^{-1}K(x_1 - x_3) + J^{-1}u \end{pmatrix}$$

Problem 1.7 A state-space realization for $G$ is $\dot x = Ax + Bu$, $y = Cx$. Substituting $u = r - \psi(t,y)$,
$$\dot x = Ax - B\psi(t, Cx) + Br, \qquad y = Cx.$$
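The first-order forms above are immediate to implement. The sketch below encodes the Problem 1.5 right-hand side with illustrative (assumed, not from the problem statement) parameter values, as a sanity check that the origin is an equilibrium when $u = 0$.

```python
# State-space right-hand side for the flexible-joint system of Problem 1.5,
# with x = [q1, q1', q2, q2'].  Parameter values below are illustrative only.
import math

M, g, L, I, J, k = 1.0, 9.81, 0.5, 0.1, 0.05, 2.0

def f(x, u):
    x1, x2, x3, x4 = x
    return [
        x2,
        -(M * g * L / I) * math.sin(x1) - (k / I) * (x1 - x3),
        x4,
        (k / J) * (x1 - x3) + u / J,
    ]
```

At the origin with $u = 0$ the right-hand side vanishes, and a small positive $q_1$ produces a negative (restoring) $\ddot q_1$, as expected.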

Problem 1.10
a. Define $e = \theta_i - \theta_o$; then $\dot e = -\dot\theta_o = -y$. Collecting the equations for $z$ and $e$ we get the claimed state-space representation.
b. At the equilibrium points $\dot z = 0$, $\dot e = 0$. Therefore, the following equations should hold simultaneously:
$$A z_e + B\sin(e_e) = 0, \qquad C z_e = 0.$$
Since $A$ is invertible, $z_e = -A^{-1}B\sin(e_e)$ and, hence, $CA^{-1}B\sin(e_e) = 0$. But $G(0) = -CA^{-1}B \neq 0$, so it must be that $\sin(e_e) = 0$, i.e., $e_e = k\pi$, $k \in \mathbb{Z}$. From this it now follows that $z_e = 0$, so the equilibria of the system are $z_e = 0$, $e_e = k\pi$.
c. For this transfer function, $\dot e = -z$ and $\tau\dot z = -z + \sin e$, so
$$\tau\ddot e = -\tau\dot z = z - \sin e = -\dot e - \sin e \;\Rightarrow\; \tau\ddot e + \dot e + \sin e = 0,$$
which is similar to the pendulum equation.

Problem 1.11
a. Define $e = \theta_i - \theta_o$; then $\dot e = -\dot\theta_o = -y$. Collecting the equations for $z$ and $e$ we get the claimed state-space representation.
b. At the equilibrium points $\dot z = 0$, $\dot e = 0$. Therefore, the following equations should hold simultaneously:
$$A z_e + B\sin(e_e) = 0, \qquad C z_e = 0.$$
Since $A$ is invertible, $z_e = -A^{-1}B\sin(e_e)$ and, hence, $CA^{-1}B\sin(e_e) = 0$. But $G(0) = -CA^{-1}B \neq 0$, so it must be that $\sin(e_e) = 0$, i.e., $e_e = k\pi$, $k \in \mathbb{Z}$. From this it now follows that $z_e = 0$, so the equilibria of the system are $z_e = 0$, $e_e = k\pi$.
c. For this transfer function, $\dot e = -z$, so
$$\tau\ddot e + \dot e + \sin e = 0,$$

which is similar to the pendulum equation.

Problem 1.13
1. At the equilibrium, $x_2 = 0$ and $-x_1 + x_1^3/6 = 0$, implying that $x_1 = 0$ or $x_1 = \pm\sqrt{6}$. The state matrix of the linearization is
$$\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} \quad\text{or}\quad \begin{pmatrix} 0 & 1 \\ 2 & -1 \end{pmatrix},$$
the first one corresponding to the zero equilibrium and the second to the other two. The eigenvalues of the first are $-0.5 \pm j0.866$, so the equilibrium type is stable focus. The eigenvalues of the second are $1, -2$, so the equilibrium type is a saddle.
2. At the equilibrium, $x_1 = x_2$ and $0.1x_1 - 2x_1 - x_1^2 - 0.1x_1^3 = 0$, so either $x_1 = x_2 = 0$ or $x_1 = x_2 = $ a root of the quadratic $0.1x_1^2 + x_1 + 1.9$. The state matrix of the linearization is
$$\begin{pmatrix} -1 & 1 \\ 0.1 - 2x_1 - 0.3x_1^2 & -2 \end{pmatrix}$$
Substituting the values of $x_1$ at the equilibria, we compute the eigenvalues of this matrix to determine the equilibrium type: $x_1 = x_2 = 0$, eigenvalues $-0.908, -2.09$, stable node; $x_1 = x_2 = -7.45$, eigenvalues $-1.5 \pm j1.18$, stable focus; $x_1 = x_2 = -2.55$, eigenvalues $0.37, -3.37$, saddle.
3. The obvious equilibrium is the origin $(0,0)$. When $x_2 \neq 0$, then $x_2 = 2 + 2x_1$, from which $x_1 = 0$ or $x_1 = -3$; the corresponding values for $x_2$ are $2$ and $-4$, respectively. On the other hand, when $x_2 = 0$, $x_1$ can be either $1$ or $0$. So the equilibrium points are
$$(0,0),\quad (1,0),\quad (0,2),\quad (-3,-4).$$
The corresponding linearization state matrices are
$$\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},\quad \begin{pmatrix} -1 & -1 \\ 0 & 2 \end{pmatrix},\quad \begin{pmatrix} -3 & 0 \\ 4 & -2 \end{pmatrix},\quad \begin{pmatrix} -9 & -3 \\ -4 & 2 \end{pmatrix}.$$
Computing their eigenvalues we have the following characterization of equilibria: at $(0,0)$, eigenvalues $1, 2$, unstable node; at $(1,0)$, eigenvalues $-1, 2$, saddle; at $(0,2)$, eigenvalues $-3, -2$, stable node; at $(-3,-4)$, eigenvalues $3, -10$, saddle. Notice that there is one more case to be examined, when $x_1 = -1$. This case leads to a singular differential equation where one of the equations becomes algebraic. For this example, the algebraic equation can be written as $x_2 = (1 + x_1) + a(x_1)(1 + x_1)^2$, where $a(\cdot)$ is a suitable function (found by substituting the expression back into the equations).
4. The only equilibrium is the origin, for which the linearization matrix has eigenvalues $0.5 \pm j0.866$; the equilibrium is an unstable focus.

Problem 1.15 Differentiating the $L\cos\theta$ and $L\sin\theta$ terms we get
$$H = m\ddot y + mL(\ddot\theta\cos\theta - \dot\theta^2\sin\theta), \qquad V = mg - mL(\ddot\theta\sin\theta + \dot\theta^2\cos\theta).$$
Solving for $H, V$ and substituting, we obtain
$$I\ddot\theta = mgL\sin\theta - mL^2\ddot\theta - mL\ddot y\cos\theta,$$
$$M\ddot y = F - m\ddot y - mL\ddot\theta\cos\theta + mL\dot\theta^2\sin\theta - k\dot y.$$
Collecting the second derivatives in the left-hand side,
$$\begin{pmatrix} I + mL^2 & mL\cos\theta \\ mL\cos\theta & M + m \end{pmatrix}\begin{pmatrix} \ddot\theta \\ \ddot y \end{pmatrix} = \begin{pmatrix} mgL\sin\theta \\ F + mL\dot\theta^2\sin\theta - k\dot y \end{pmatrix}.$$
Now, the inverse of the left-hand-side matrix is
$$\frac{1}{\Delta(\theta)}\begin{pmatrix} M + m & -mL\cos\theta \\ -mL\cos\theta & I + mL^2 \end{pmatrix},$$
where $\Delta(\theta)$ is the determinant of the matrix, i.e.,
$$\Delta(\theta) = (I + mL^2)(m + M) - m^2L^2\cos^2\theta = Im + IM + m^2L^2\sin^2\theta + MmL^2,$$
which is always positive and satisfies the stated inequality ($\Delta(\theta) \ge (I + mL^2)M + mI$).

Problem 1.19
a. Mass balance gives
$$\frac{d}{dt}\int_0^h A(\lambda)\,d\lambda = u - k\sqrt{gh} \;\Rightarrow\; A(h)\dot h = u - k\sqrt{gh}.$$
Let $x = h$:
$$\dot x = \frac{1}{A(x)}\left[u - k\sqrt{gx}\right], \qquad y = x.$$
b. Let $x = p - p_a = \rho g h$:
$$\dot x = \frac{\rho g}{A(x/\rho g)}\left[u - k\sqrt{x/\rho}\right], \qquad y = x/(\rho g).$$
c. At equilibrium, $0 = u_{ss} - k\sqrt{g x_{ss}}$ and $y_{ss} = x_{ss} = r$. Hence, $u_{ss} = k\sqrt{gr}$.

Problem 1.22

a.
$$\dot x_1 = d\,x_{1f} - d\,x_1 + r_1, \qquad \dot x_2 = d\,x_{2f} - d\,x_2 + r_2.$$
The assumptions $r_1 = \mu x_1$, $r_2 = -\mu x_1/Y$, and $x_{1f} = 0$ yield
$$\dot x_1 = (\mu - d)x_1, \qquad \dot x_2 = d(x_{2f} - x_2) - \mu x_1/Y.$$
b. When $\mu = \mu_m x_2/(k_m + x_2)$, the equilibrium equations are
$$0 = \left(\frac{\mu_m \bar x_2}{k_m + \bar x_2} - d\right)\bar x_1, \qquad 0 = d(x_{2f} - \bar x_2) - \frac{\mu_m \bar x_2\,\bar x_1}{Y(k_m + \bar x_2)}.$$
From the first equation, $\bar x_1 = 0$ or
$$\frac{\mu_m \bar x_2}{k_m + \bar x_2} = d \;\Rightarrow\; \bar x_2 = \frac{k_m d}{\mu_m - d}.$$
Substituting $\bar x_1 = 0$ in the second equilibrium equation yields $\bar x_2 = x_{2f}$. Substituting $\bar x_2 = k_m d/(\mu_m - d)$ in the second equilibrium equation yields
$$\bar x_1 = Y\left(x_{2f} - \frac{k_m d}{\mu_m - d}\right).$$
Hence, there are two equilibrium points,
$$\left[Y\left(x_{2f} - \frac{k_m d}{\mu_m - d}\right),\ \frac{k_m d}{\mu_m - d}\right] \quad\text{and}\quad [0,\ x_{2f}].$$
c. When $\mu = \mu_m x_2/(k_m + x_2 + k_1 x_2^2)$, the equilibrium equations are
$$0 = \left(\frac{\mu_m \bar x_2}{k_m + \bar x_2 + k_1\bar x_2^2} - d\right)\bar x_1, \qquad 0 = d(x_{2f} - \bar x_2) - \frac{\mu_m \bar x_2\,\bar x_1}{Y(k_m + \bar x_2 + k_1\bar x_2^2)}.$$
From the first equation, $\bar x_1 = 0$ or $\bar x_2$ is a root of $d = \mu(\bar x_2)$. Since $d < \max_{x_2 \ge 0}\{\mu(x_2)\}$, this equation has two roots, say $\bar x_{2a}, \bar x_{2b}$. Substituting $\bar x_1 = 0$ in the second equation yields $\bar x_2 = x_{2f}$. Substituting $\bar x_2 = \bar x_{2a}$ in the second equation yields $d(x_{2f} - \bar x_{2a}) - \mu(\bar x_{2a})\bar x_1/Y = 0 \Rightarrow \bar x_1 = Y(x_{2f} - \bar x_{2a})$, since $\mu(\bar x_{2a}) = d$. Similarly, substituting $\bar x_2 = \bar x_{2b}$ yields $\bar x_1 = Y(x_{2f} - \bar x_{2b})$. Hence, there are three equilibrium points:
$$[Y(x_{2f} - \bar x_{2a}),\ \bar x_{2a}], \quad [Y(x_{2f} - \bar x_{2b}),\ \bar x_{2b}], \quad [0,\ x_{2f}].$$
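The equilibria of part b are easy to check numerically. In the sketch below, the parameter values ($\mu_m, k_m, d, Y, x_{2f}$) are illustrative assumptions, not taken from the problem statement.

```python
# Numerical check of the Problem 1.22b equilibria.  All parameter values
# here are assumed for illustration only.
mu_m, k_m, d, Y, x2f = 1.0, 0.5, 0.4, 0.8, 4.0

def mu(x2):
    return mu_m * x2 / (k_m + x2)

# Nontrivial equilibrium: x2 = k_m d / (mu_m - d), x1 = Y (x2f - x2)
x2e = k_m * d / (mu_m - d)
x1e = Y * (x2f - x2e)

def residuals(x1, x2):
    """Right-hand sides of the reduced model at a candidate equilibrium."""
    r1 = (mu(x2) - d) * x1
    r2 = d * (x2f - x2) - mu(x2) * x1 / Y
    return r1, r2
```

Both the nontrivial equilibrium and the washout equilibrium $[0, x_{2f}]$ should make the right-hand sides vanish.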

EEE 586

HW # 2 SOLUTIONS

Problem 2.1
1. Clearly, at an equilibrium $x_1 = x_2$. Substituting in the first equation we get $2(x_1^3 - x_1) = 0$, so the equilibria are $(0,0)$, $(1,1)$, $(-1,-1)$. To determine the type, we look at the linearization
$$A = \frac{\partial f}{\partial x} = \begin{pmatrix} -1 + 6x_1^2 & -1 \\ 1 & -1 \end{pmatrix},$$
whose eigenvalues at the three equilibria are $\{-1 \pm j\}$, $\{4.828, -0.828\}$, $\{4.828, -0.828\}$, respectively. Thus, the origin is a stable focus and the other two are saddle points.
3. With $h(x) = x_2/(1 + x_1)$, we distinguish the following cases: (i) $x_1 = 0$, $x_2 = 0$, yielding the equilibrium $(0,0)$; (ii) $x_1 = 0$, $2 - h(x) = 0$, yielding $(0,2)$; (iii) $1 - x_1 - 2h(x) = 0$, $x_2 = 0$, yielding $(1,0)$; (iv) $1 - x_1 - 2h(x) = 0$, $2 - h(x) = 0$, yielding $(-3,-4)$. To determine the type, we look at the linearization
$$A = \frac{\partial f}{\partial x} = \begin{pmatrix} 1 - 2x_1 - \frac{2x_2}{(1+x_1)^2} & -\frac{2x_1}{1+x_1} \\[2pt] \frac{x_2^2}{(1+x_1)^2} & 2 - \frac{2x_2}{1+x_1} \end{pmatrix},$$
whose eigenvalues at the four equilibria are $\{1, 2\}$ (unstable node), $\{-3, -2\}$ (stable node), $\{-1, 2\}$ (saddle), $\{7.772, -0.772\}$ (saddle), respectively.
5. At an equilibrium $\dot x = 0$. The possible solutions are given by
$$0 = (x_1 - x_2)(1 - x_1^2 - x_2^2), \qquad 0 = (x_1 + x_2)(1 - x_1^2 - x_2^2).$$
The locus $1 = x_1^2 + x_2^2$ is a circle and these are not isolated equilibria. The other solution is $(0,0)$, which is an isolated equilibrium, where
$$\frac{\partial f}{\partial x} = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},$$
whose eigenvalues are $1 \pm j$. Hence, $(0,0)$ is an unstable focus.
6. We have
$$0 = -x_1^3 + x_2, \qquad 0 = x_1 - x_2^3.$$
These give $x_2 = x_1^3 \Rightarrow x_1(1 - x_1^8) = 0 \Rightarrow x_1 = 0$ or $x_1 = \pm 1$. Hence, there are three equilibrium points, at $(0,0)$, $(1,1)$, $(-1,-1)$. The Jacobian is
$$\frac{\partial f}{\partial x} = \begin{pmatrix} -3x_1^2 & 1 \\ 1 & -3x_2^2 \end{pmatrix}.$$
Evaluating at the three equilibria: $(0,0)$ has a Jacobian with eigenvalues $1, -1$ and is a saddle; $(1,1)$ and $(-1,-1)$ have a Jacobian with eigenvalues $-2, -4$ and each is a stable node.

Problem 2.2
1. At an equilibrium $\dot x = 0$. The possible solutions are $(0,0)$, $(2,0)$, $(-2,0)$. To determine the type, we look at the linearization
$$A = \frac{\partial f}{\partial x} = \begin{pmatrix} 0 & 1 \\ -1 + \frac{5x_1^4}{16} & -1 \end{pmatrix},$$
whose eigenvalues at the three equilibria are $\{-0.5 \pm j0.866\}$, $\{1.56, -2.56\}$, $\{1.56, -2.56\}$, respectively. Thus, the origin is a stable focus and the other two equilibria are saddle points.
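The eigenvalue classifications above are easy to verify numerically. As one example, take the system of Problem 2.1.6, $\dot x_1 = -x_1^3 + x_2$, $\dot x_2 = x_1 - x_2^3$, with the Jacobian written above:

```python
# Classify the equilibria of Problem 2.1.6 from the eigenvalues of the
# Jacobian [[-3 x1^2, 1], [1, -3 x2^2]].
import numpy as np

def jacobian(x1, x2):
    return np.array([[-3.0 * x1**2, 1.0], [1.0, -3.0 * x2**2]])

def classify(x1, x2):
    ev = np.linalg.eigvals(jacobian(x1, x2))
    if np.all(ev.real < 0):
        return "stable node/focus"
    if np.all(ev.real > 0):
        return "unstable node/focus"
    return "saddle"
```

The origin comes out a saddle (eigenvalues $\pm 1$) and $(\pm 1, \pm 1)$ stable nodes (eigenvalues $-2, -4$), matching the hand computation.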

2. At an equilibrium $\dot x = 0$. The possible solutions are $(0,0)$, $(1,2)$, $(-1,-2)$. To determine the type, we look at the linearization
$$A = \frac{\partial f}{\partial x} = \begin{pmatrix} 2 - x_1x_2 & x_1 \\ -4x_1 & -1 \end{pmatrix},$$
whose eigenvalues at the three equilibria are $\{2, -1\}$, $\{-0.5 \pm j1.94\}$, $\{-0.5 \pm j1.94\}$, respectively. Thus, the origin is a saddle point and the other two equilibria are stable foci.
3. At an equilibrium $\dot x = 0$, so $x_2 = 0$, implying that $\eta(x_1) = 0$. The only solution is $(0,0)$. To determine the type, we look at the linearization
$$A = \frac{\partial f}{\partial x}\bigg|_0 = \begin{pmatrix} 0 & 1 \\ -0.5\left(1 + \frac{\partial \eta}{\partial x_1}\right) & -0.5\left(1 + \frac{\partial \eta}{\partial x_1}\right) \end{pmatrix}\bigg|_0 = \begin{pmatrix} 0 & 1 \\ -0.5 & -0.5 \end{pmatrix},$$
whose eigenvalues are $\{-0.25 \pm j0.66\}$. Thus, the origin is a stable focus.

Problem 2.7 We start with
$$y(t) \le k_1 e^{-a(t-t_0)} + \int_{t_0}^t e^{-a(t-\tau)}\left[k_2 y(\tau) + k_3\right]d\tau.$$
Adding $k_3/k_2$ to both sides,
$$y(t) + k_3/k_2 \le k_3/k_2 + k_1 e^{-a(t-t_0)} + \int_{t_0}^t e^{-a(t-\tau)} k_2\left[y(\tau) + k_3/k_2\right]d\tau.$$
Define $z(t) = e^{at}\left[y(t) + k_3/k_2\right]$. Then
$$z(t) \le (k_3/k_2)e^{at} + k_1 e^{at_0} + \int_{t_0}^t k_2\,z(\tau)\,d\tau.$$
Using the Bellman-Gronwall lemma on this inequality and converting back to $y$,
$$y(t) \le k_1 e^{-a(t-t_0)} + \int_{t_0}^t k_3\, e^{-(a-k_2)(t-\tau)}\,d\tau + \int_{t_0}^t k_1 k_2\, e^{-a(t-t_0)}\, e^{k_2(t-\tau)}\,d\tau.$$
From this, the desired result is obtained after performing the integrations.

Problem 2.12 It is enough to show that the operator $T: \mathbb{R}^n \to \mathbb{R}^n$,
$$T(x) = D^{-1}\left[(D - A)x + b\right]$$
(due to the strict diagonal dominance condition, $D^{-1}$ exists), is a contraction; that is, $\|T(x) - T(y)\| \le \rho\|x - y\|$ for all $x, y \in \mathbb{R}^n$, with $0 \le \rho < 1$. We have
$$T(x) - T(y) = D^{-1}(D - A)(x - y) = E(x - y), \qquad e_{ij} = \begin{cases} -a_{ij}/a_{ii}, & i \neq j \\ 0, & i = j. \end{cases}$$
Then
$$\|T(x) - T(y)\|_\infty \le \|E\|_\infty \|x - y\|_\infty = \max_i\left(\sum_{j \neq i}|e_{ij}|\right)\|x - y\|_\infty = \max_i\left(\frac{\sum_{j \neq i}|a_{ij}|}{|a_{ii}|}\right)\|x - y\|_\infty = \rho\,\|x - y\|_\infty.$$
From the strict diagonal dominance condition we have that $\sum_{j \neq i}|a_{ij}|/|a_{ii}| < 1$ for every $i$, hence $\rho < 1$. By the contraction mapping theorem, $T$ has a unique fixed point, i.e., $Ax = b$ has a unique solution.
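The map $T$ in Problem 2.12 is exactly the Jacobi iteration for $Ax = b$; for a strictly diagonally dominant $A$ the contraction argument above guarantees convergence, which the sketch below illustrates on an arbitrarily chosen dominant matrix.

```python
# Jacobi iteration x <- D^{-1}((D - A) x + b) for a strictly diagonally
# dominant A: a contraction in the infinity norm (Problem 2.12).
import numpy as np

A = np.array([[4.0, 1.0, -1.0],
              [2.0, 5.0,  1.0],
              [1.0, -2.0, 6.0]])   # each |a_ii| exceeds its row's off-diagonal sum
b = np.array([3.0, 1.0, 2.0])
D = np.diag(np.diag(A))

# Contraction factor rho = ||E||_inf with E = D^{-1}(D - A)
E = np.linalg.solve(D, D - A)
rho = np.abs(E).sum(axis=1).max()

x = np.zeros(3)
for _ in range(100):
    x = np.linalg.solve(D, (D - A) @ x + b)
```

After enough iterations the fixed point satisfies $Ax = b$ to machine precision, and $\rho < 1$ confirms the diagonal dominance condition.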

Problem 2.15 The system of equations can be written in fixed-point form as in Problem 2.12, and the matrix involved is strictly diagonally dominant: the diagonal elements satisfy $1 + \frac{x_i^2}{1+x_i^2} \ge 1$ for all $x_i \in \mathbb{R}$, while the off-diagonal elements are bounded in magnitude by $|\epsilon| \le 3/4 < 1$, so the contraction constant is $\rho \le 3/4 < 1$. From Problem 2.12 we know that there exists a unique solution.

Problem 2.17
1. Define the states $x_1 = y$, $x_2 = \dot y$. There is only one equilibrium, at the origin, and it is an unstable focus or node depending on the value of $\epsilon$. Following the P-B criterion, we consider $V = x_1^2 + x_2^2$. Its derivative along the trajectories of the system is
$$\dot V = (2x_1, 2x_2)\cdot\left(x_2,\ -x_1 + x_2(1 - x_1^2 - x_2^2)\right) = 2x_2^2(1 - x_1^2 - x_2^2).$$
It follows that $\dot V \le 0$ outside the disc $M = \{V = x_1^2 + x_2^2 \le 1 + \delta\}$, where $\delta$ is an arbitrarily small positive constant. So trajectories of the system that begin in $M$ stay in $M$. Hence, by the P-B criterion there exists a periodic orbit in $M$.
2. There is only one equilibrium, at the origin, and it is an unstable node. Let $V = x_1^2 + x_2^2$. Its derivative along the trajectories of the system is
$$\dot V = (2x_1, 2x_2)\cdot\left(x_2,\ -x_1 + x_2(2 - 3x_1^2 - 2x_2^2)\right) = 2x_2^2(2 - 3x_1^2 - 2x_2^2).$$
So $\dot V \le 0$ outside the set $M = \{V = x_1^2 + x_2^2 \le 1 + \delta\}$. (Notice that $2 - 3x_1^2 - 2x_2^2 \le 2 - 2x_1^2 - 2x_2^2 = 2 - 2V$.) This implies that trajectories of the system that begin in $M$ stay in $M$. Hence, by the P-B criterion there exists a periodic orbit in $M$.
3. There is only one equilibrium, at the origin, and it is an unstable focus. Let $V = \frac{1}{2}x_1^2 + x_2^2 + x_1x_2$. The last term introduces $x_1^2$ in the derivative, and the coefficients are chosen to eliminate some of the cross terms ($x_1x_2$) while keeping $V$ positive definite. Its derivative along the trajectories of the system is
$$\dot V = -x_1^2 + 3x_2^2 - 2x_2^2(x_1 + 2x_2)^2.$$
To apply the P-B criterion, we need $\dot V < 0$ for large enough $|x|$. A quick way to verify this is to use a mesh plot in MATLAB. The analytical version is a bit more tedious. First, redefine variables as $z_1 = x_1$ and $z_2 = x_1 + 2x_2$. Then
$$\dot V = -z_1^2 - \tfrac{1}{2}(z_2 - z_1)^2\left(z_2^2 - 3/2\right).$$
Notice that $\dot V < 0$ in the set $M_1 = \{z: z_2^2 > 3/2\}$. It suffices to show that $\dot V < 0$ in the set $M_2 = \{z: z_2^2 < 3/2\} \cap \{z: |z| > r\}$ for large enough $r$. In $M_2$, $z_2^2 < 3/2$, so $\dot V \le -z_1^2 + \frac{3}{4}(z_2 - z_1)^2$. Moreover, $|z| > r \Rightarrow z_1^2 > r^2 - z_2^2 > r^2 - 3/2$. So,
$$\dot V \le -z_1^2 + \tfrac{3}{4}z_1^2 - \tfrac{3}{2}z_1z_2 + \tfrac{3}{4}z_2^2 \le -\tfrac{1}{4}z_1^2 + 1.84|z_1| + \tfrac{9}{8}.$$
The roots of the quadratic are $7.92, -0.567$, so $\dot V < 0$ for $|z_1| > r = 8$. Notice that $\dot V < 0$ in $M_1 \cup M_2$ and $(M_1 \cup M_2)^c = \{z: z_2^2 \le 3/2\ \&\ |z| \le r\}$, which is a compact set. Define the set $M$ as the smallest level set $\{V \le c\}$ containing $(M_1 \cup M_2)^c$. Then $\dot V < 0$ outside $M$ and trajectories of the system that begin in $M$ stay in $M$. Hence, by the P-B criterion there exists a periodic orbit in $M$. In this case, the choice of coordinates is such that $V(x) = z^\top z/4$, so the set $M$ is easy to compute: $M = \{x: V(x) \le 16\}$. In general, its computation involves an optimization problem that may or may not be solvable analytically. In the attached figures, notice the conservatism in the estimate of the set $M$ containing the limit cycle. It goes without saying that other functions $V$ may produce different or better results.
4. The equilibrium points are the roots of
$$0 = x_1 + x_2 - x_1\max(|x_1|, |x_2|), \qquad 0 = -2x_1 + x_2 - x_2\max(|x_1|, |x_2|).$$
Combining the two, we have
$$0 = -2x_1 - x_1\left(\max(|x_1|,|x_2|) - 1\right)^2 = -x_1\left[2 + \left(\max(|x_1|,|x_2|) - 1\right)^2\right] \;\Rightarrow\; x_1 = 0,\ x_2 = 0.$$
Hence, there is a unique equilibrium at the origin, whose Jacobian has eigenvalues $1 \pm j1.414$; the origin is an unstable focus. Next, consider $V(x) = \frac{1}{2}(x_1^2 + x_2^2)$. We have
$$\nabla V\cdot f = x_1\left(x_1 + x_2 - x_1\max(|x_1|,|x_2|)\right) + x_2\left(-2x_1 + x_2 - x_2\max(|x_1|,|x_2|)\right) = \left(x_1^2 - x_1x_2 + x_2^2\right) - \|x\|^2\max(|x_1|,|x_2|).$$
The quadratic in parentheses can be written as $x^\top P x$, which is upper-bounded by $\lambda_{\max}(P)\|x\|^2$. Hence, for sufficiently large $c$, on the surface $\|x\| = c$, $\max(|x_1|,|x_2|)$ can be made arbitrarily large and $\nabla V\cdot f$ will be negative. (The same can be shown by completing the squares.) Hence, trajectories starting in $M = \{x\,|\,V(x) \le c\}$ stay in $M$. Furthermore, $M$ contains a single equilibrium point, which is an unstable focus. It follows from the P-B criterion that there is a periodic orbit in $M$.

Problem 2.20
1.

$$\frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} = -(1 + a) \neq 0 \quad \forall x.$$
By Bendixson's criterion, there are no periodic orbits.
2. The equilibrium points are the roots of
$$0 = x_1(-1 + x_1^2 + x_2^2), \qquad 0 = x_2(-1 + x_1^2 + x_2^2).$$
The system has an isolated equilibrium at the origin and a continuum of equilibria on the unit circle $x_1^2 + x_2^2 = 1$. It can be easily verified that the origin is a stable node. Transform the system into polar coordinates, $x_1 = r\cos\theta$, $x_2 = r\sin\theta$. Then it follows that
$$\dot r = -r(1 - r^2).$$
For $r < 1$, every trajectory starting inside the unit circle approaches the origin as $t \to \infty$. For $r > 1$, every trajectory starting outside the unit circle diverges to $\infty$. Hence, there are no limit cycles. (An argument using no theorem, just basic analysis of the trajectories.)
3. $x_1$ must be zero from $\dot x_2 = 0$. But then $\dot x_1 = 1 \neq 0$. So the system has no equilibria and, by Lemma 3.2, the index of any closed curve is zero; therefore, it cannot be a closed orbit.
4. The equilibria form the line $x_2 = 0$. By Lemma 3.2, a periodic orbit must enclose at least one equilibrium. But then it would intersect the equilibrium line, and once the state equals an equilibrium it remains there. This contradicts the assumption of a periodic orbit, hence no such orbit exists.
5. The equilibria are $x_2 = 0$, $x_1 = k\pi$, $k \in \mathbb{Z}$. All are saddle points (the linearization matrix is $[0\ \ 1;\ 1\ \ 0]$). Hence, by Lemma 3.2, there can be no periodic orbits.

Problem 2.21
a. Let $V(x) = x_2 - \frac{x_1 + b}{x_1 + a}$. The function $V(x)$ is negative in $D$ and the curve $V(x) = 0$ is the boundary of $D$ ($\partial D$). Evaluating, $\nabla V\cdot f\,|_{V=0} = -cx_1(x_1 + a) < 0$ for all $x \in \partial D$. Hence, trajectories starting in $D$ stay in $D$.
b.
$$\frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} = -1 + x_2 < 0 \quad \forall x \in D.$$
By Bendixson's criterion, there can be no closed orbits lying entirely in $D$. Since $D$ is an invariant set, it follows that there can be no closed orbits through any point of $D$.

Problem 2.23
a.

$$\frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} = -a\left[2b - g(x_1)\right] = \begin{cases} -2ab, & |x_1| > 1 \\ -a(2b - k), & |x_1| \le 1. \end{cases}$$
Since $k < 2b$,
$$\frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} < 0 \quad \forall x,$$
and by Bendixson's criterion there are no periodic orbits.

Problem 2.24 Suppose $M$ does not contain an equilibrium point. Then, by the P-B criterion, there is a periodic orbit in $M$. But, by Corollary 2.1 of the index theorem, the periodic orbit must enclose an equilibrium point, which is a contradiction. Hence, $M$ contains at least one equilibrium point.

Problem 2.28
a. To apply P-B we note that the origin is the only equilibrium (see Reference [195] for a rather complicated analysis, or resort to numerical evaluation). For the stability of the equilibrium, we form the linearization matrix
$$A = \begin{pmatrix} -1 + \lambda & -\lambda \\ \lambda & -1 + \lambda \end{pmatrix}.$$
Its eigenvalues are the roots of $s^2 + 2s(1 - \lambda) + 2(\lambda^2 - \lambda + 1/2)$. The last term has negative discriminant, so it is always positive. By the Routh criterion, the roots of the polynomial will be in the left half plane iff $1 - \lambda > 0$. There will be two roots in the right half plane when $\lambda > 1$, and the equilibrium will be completely unstable (focus or node, to be determined later).
Next, consider the coordinate transformation $z = \lambda x$ and define the function $V(z) = z_1^2 + z_2^2$. Its derivative along the trajectories satisfies
$$\dot V \le -2\left[V - 4\lambda\sqrt{V}\right],$$
so it is negative for $V > 16\lambda^2$. The latter establishes the set $M = \{z\,|\,V \le 16\lambda^2\}$ that is positively invariant and, by the P-B criterion, contains a periodic orbit.
b., c. The phase portrait of the system can be easily produced with MATLAB with a standard simulation model (see attached figures). Different initial conditions provide different trajectories, and their number and locations should be chosen to provide as complete a picture as necessary. An overlay of the vector field using the quiver command is also helpful:

>> [X1,X2] = meshgrid(-3:.25:3,-3:.25:3);
>> F1 = -X1 + tanh(2*X1) - tanh(2*X2);
>> F2 = -X2 + tanh(2*X1) + tanh(2*X2);
>> quiver(X1,X2,F1,F2)

For $\lambda > 1$ the trajectories show a clear convergence to a limit cycle, while for $\lambda < 1$ (part c) they show convergence to the only equilibrium.
d. As $\lambda$ varies, the equilibrium changes from a stable focus ($0 < \lambda < 1$) to an unstable focus ($\lambda > 1$).

Problem 2.29
a. To apply P-B we note that, from the second equation, an equilibrium should satisfy either $x_1 = 0$ or $x_2 = 1 + x_1^2$. For the former, $\dot x_1 = a \neq 0$, so it cannot be an equilibrium. For the latter, the first equation leads to $a - 5x_1 = 0$. Hence, the system has one equilibrium, at
$$x_e = \left(\frac{a}{5},\ 1 + \left(\frac{a}{5}\right)^2\right).$$
Next, to analyze the stability of the equilibrium, we form the linearization matrix
$$A = \begin{pmatrix} -1 - \frac{4x_2(1 - x_1^2)}{(1+x_1^2)^2} & -\frac{4x_1}{1+x_1^2} \\[3pt] b\left(1 - \frac{x_2(1 - x_1^2)}{(1+x_1^2)^2}\right) & -\frac{b\,x_1}{1+x_1^2} \end{pmatrix}.$$
Evaluating at $x = x_e$ (where $x_2 = 1 + x_1^2$), we get
$$A = \frac{a/5}{1 + (a/5)^2}\begin{pmatrix} 3(a/5) - \frac{5}{a/5} & -4 \\ 2b(a/5) & -b \end{pmatrix}.$$
Letting $c = 3(a/5) - \frac{5}{a/5} = \frac{3a}{5} - \frac{25}{a}$, the eigenvalues of $A$ are $\frac{a/5}{1+(a/5)^2}$ (a positive constant) times the roots of $s^2 + s(b - c) + b(a + 25/a)$. Applying the Routh criterion, the roots of the polynomial will be in the left half plane iff $b - c > 0$, i.e., $b > \frac{3a}{5} - \frac{25}{a}$. There will be two roots in the right half plane when $b < \frac{3a}{5} - \frac{25}{a}$, and the equilibrium will be completely unstable (focus or node).
To find a positively invariant set containing the equilibrium, we notice that solutions that start in the first quadrant remain there for all $t$. (This is expected, since the states represent concentrations.) Indeed, when $x_1 = 0$ and $x_2 > 0$, then $\dot x_1 = a > 0$; similarly, when $x_2 = 0$ and $x_1 > 0$, then $\dot x_2 > 0$. Hence, trajectories cannot cross the boundary of the first quadrant. Further, we notice that $x_1$ is bounded, since for large $x_1$ ($x_1 > a$ and $x_2 > 0$) we have $\dot x_1 < 0$. Similarly, when $x_2$ is large ($x_2 > 1 + a^2 \ge 1 + x_1^2$ and $x_1 > 0$), then $\dot x_2 < 0$. Hence, the box $0 < x_1 < a$, $0 < x_2 < 1 + a^2$ is positively invariant, containing one completely unstable focus/node equilibrium for $b < \frac{3a}{5} - \frac{25}{a}$; by the P-B criterion, the system has a periodic orbit.
b., c. The phase portrait of the system can be easily produced with MATLAB with a standard simulation model, as in Problem 2.28. Different initial conditions provide different trajectories, and their number and locations should be chosen to provide as complete a picture as necessary. An overlay of the vector field using the quiver command is also helpful. When $a = 10$, $b = 2$, the trajectories converge to a limit cycle, while when $b = 4$ (above the critical value) they converge to the stable focus equilibrium.
d. Constructing a plot of the eigenvalues of the linearization matrix for $a = 10$, $b \in [0, 100]$, we find that the equilibrium is an unstable node for small $b$ ($b < 0.2$), bifurcating to an unstable focus until the critical value for periodic solutions ($b = 3.5$), where it bifurcates to a stable focus, and then to a stable node for large values of $b$ ($b > 55.8$).
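The stability threshold $\lambda = 1$ found in Problem 2.28a follows directly from the linearization eigenvalues $(\lambda - 1) \pm j\lambda$, which a short numerical check confirms:

```python
# Eigenvalues of the Problem 2.28 linearization A = [[-1+l, -l], [l, -1+l]]:
# they are (l - 1) +/- j*l, so the origin loses stability exactly at l = 1.
import numpy as np

def eigs(l):
    A = np.array([[-1.0 + l, -l], [l, -1.0 + l]])
    return np.linalg.eigvals(A)
```

For $\lambda = 0.5$ both eigenvalues have negative real part (stable focus); for $\lambda = 2$ both are in the right half plane (completely unstable), consistent with the Routh analysis above.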

[Figure: V and V'-level sets for Pr. 2.17.3 (detail), showing the curve |x1 + 2x2|^2 = 3/2, an x-trajectory, and the region where V' > 0.]

[Figure: V and V'-level sets for Pr. 2.17.3, showing the V-level set V(x) = |z|^2/4 = 16 and the V-increasing/decreasing regions (numerical evaluation).]
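Plots like the ones above can be regenerated with a short script. As a simpler numerical check of the trapping-region argument, the sketch below integrates the Problem 2.17.1 system with a hand-rolled RK4 step; trajectories started both inside and outside the unit circle approach the periodic orbit $x_1^2 + x_2^2 = 1$.

```python
# RK4 simulation of x1' = x2, x2' = -x1 + x2 (1 - x1^2 - x2^2) (Pr. 2.17.1).
# The unit circle is invariant and attracts trajectories from both sides.
def f(x):
    x1, x2 = x
    return (x2, -x1 + x2 * (1.0 - x1 * x1 - x2 * x2))

def rk4(x, h, steps):
    for _ in range(steps):
        k1 = f(x)
        k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
        k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
        k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
        x = (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return x
```

By $t = 30$ the radius of both trajectories is close to 1, consistent with the P-B argument.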

EEE 586

HW # 3 SOLUTIONS

Problem 3.1
1. $f(x) = x^2 + |x|$ is not continuously differentiable at the origin ($|x|$ has a discontinuous derivative). It is locally Lipschitz, since its derivative (left and right limits) is bounded on any compact set. It is continuous, since both terms are. It is not globally Lipschitz, since the derivative of $x^2$ is not bounded.
3. $f(x) = \sin(x)\,\mathrm{sgn}(x) = \sin(x)\cdot\frac{x}{|x|}$ is not continuously differentiable at the origin. It is globally Lipschitz, since its derivative (left and right limits) is bounded. Hence it is also locally Lipschitz and continuous.
6. $f(x) = \tan(x)$ is not continuous at $\pm\pi/2$, but it is at $0$. It is continuously differentiable only locally, and it is locally Lipschitz around $0$.
8. $f(x) = \left[-x_1 + a|x_2|,\ (a+b)x_1 + bx_2 - x_1^2 - x_1x_2\right]$ is not continuously differentiable at $0$ (unless $a = 0$). It is continuous everywhere and locally Lipschitz. It is not globally Lipschitz, because of the $x_1^2$ and $x_1x_2$ terms.

Problem 3.3 Let $f_1, f_2$ be locally Lipschitz: $|f_i(x) - f_i(y)| \le k_i|x - y|$ for all $x, y \in D$. Then $|f_1(x) + f_2(x) - f_1(y) - f_2(y)| \le |f_1(x) - f_1(y)| + |f_2(x) - f_2(y)| \le (k_1 + k_2)|x - y|$ in $D$; hence $f_1 + f_2$ is Lipschitz. Also,
$$|f_1(x)f_2(x) - f_1(y)f_2(y)| \le |f_1(x)||f_2(x) - f_2(y)| + |f_2(y)||f_1(x) - f_1(y)| \le K|x - y|$$
in $D$, as long as $f_1, f_2$ are bounded in $D$. That is the case, for example, if the $f_i$ are continuous and $D$ is compact. In general, $f_1 f_2$ is locally Lipschitz. Finally, $|f_2\circ f_1(x) - f_2\circ f_1(y)| = |f_2[f_1(x)] - f_2[f_1(y)]| \le k_2|f_1(x) - f_1(y)| \le k_2k_1|x - y|$, so the composition $f_2\circ f_1$ is Lipschitz in the same domain.

Problem 3.4 $\|Kx\|$ is Lipschitz in $x$ and, using 3.3, $g(x)\|Kx\|$ and $g(x)Kx$ are Lipschitz on any compact set. (Note: continuity on compact sets implies boundedness.) When $g(x)\|Kx\| > \mu$, then $1/\|Kx\| < g(x)/\mu$, so it is bounded on compact sets. That means that $Kx/\|Kx\|$ is Lipschitz there. It remains to show continuity at the switching boundary. Consider $g(x)\|Kx\| = \mu$ and $g(y)\|Ky\| = \mu + \epsilon$ for some small $\epsilon > 0$. Then
$$\left\|\frac{g(x)Kx}{\mu} - \frac{Ky}{\|Ky\|}\right\| = \left\|\frac{g(x)\|Kx\|}{\mu}\cdot\frac{Kx}{\|Kx\|} - \frac{Ky}{\|Ky\|}\right\| = \left\|\frac{Kx}{\|Kx\|} - \frac{Ky}{\|Ky\|}\right\|.$$
Since $Kx/\|Kx\|$ is Lipschitz in this region, the norm of the last expression is bounded by $\epsilon/\mu + L\|x - y\|$. Since this can be made arbitrarily small (for sufficiently small $\|x - y\|$ and $\epsilon$), we can establish continuity at the boundary. So $f$ is locally Lipschitz on any compact set.

Problem 3.7 ([1, 2]) First, the function
$$f(x) = \frac{1}{1 + g^\top(x)g(x)}\,g(x)$$
is bounded for all $x$:
$$\|f(x)\| = \frac{\|g(x)\|}{1 + \|g(x)\|^2} \le 1.$$
Moreover, it is continuously differentiable in $D = \{t_0 \le t \le t_0 + a,\ \|x - x_0\| \le b\}$. Then there exists a unique solution $x(t)$ on the interval $t_0 \le t \le t_0 + \delta$, for some $\delta > 0$ (Thm. 3.1). Next, we extend the interval of existence to infinity. We know that, if $x(t)$ is a solution of the differential equation on a $t$-interval, then there exists a maximal $t$-interval of existence. We prove by contradiction that in our case the maximal $t$-interval is $[t_0, \infty)$. Assume the contrary, that the maximal $t$-interval is finite, $t \in [t_0, \delta]$; this implies that the solution escapes to infinity in finite time:
$$\lim_{t\to\delta}\|x(t)\| = \infty.$$
On the other hand,
$$x(t) = x_0 + \int_{t_0}^t f(x(\tau))\,d\tau, \quad t \in [t_0, \delta],$$
so
$$\lim_{t\to\delta}\|x(t)\| \le \|x_0\| + 1\cdot(\delta - t_0) < \infty,$$

but the last inequality is incompatible with escape to infinity, since the right-hand side is finite. Then we can conclude that the maximal $t$-interval is $[t_0, \infty)$. Finally, this implies that the solution on an interval $[t_0, t_0 + T]$ belongs to a compact set of the form $D = \{x: \|x - x_0\| \le r\}$, where $r = T$. In that set $f$ is locally Lipschitz and, by Thm. 3.3, the solution is unique. This holds for arbitrary $T$.

Problem 3.8
1. Let $D = \{x \in \mathbb{R}^2: |2x_1 + x_2| < 1\}$. In $D$ the saturation is inactive and we have a linear system, which is asymptotically stable since its eigenvalues have real parts less than zero.
2. Using $V(x) = x_1x_2$ with $x_1x_2 = c$, $c > 0$, and starting in the first quadrant, pick $c$ large enough that $2x_1 + x_2 > 1$ on the hyperbola ($c \ge 1/8$ suffices, since $2x_1 + x_2 \ge 2\sqrt{2x_1x_2} = 2\sqrt{2c}$). Then, on the hyperbola,
$$\dot V(x) = \dot x_1 x_2 + x_1\dot x_2 = x_2^2 + x_1\left[x_1 - \mathrm{sat}(2x_1 + x_2)\right] = x_2^2 + x_1^2 - x_1 = \frac{c^2 + x_1^4 - x_1^3}{x_1^2}.$$
The function $c^2 + x_1^4 - x_1^3$ has a minimum at $x_1 = \frac{3}{4}$, and as long as $c > \frac{3\sqrt{3}}{16}$ (so that $c^2 > 27/256$) we have $\dot V(x) > 0$, which implies that the trajectories cannot reach the origin.
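The sign condition just derived is easy to confirm numerically: on the branch $x_1x_2 = c$, the sign of $\dot V$ is that of $p(x_1) = c^2 + x_1^4 - x_1^3$, minimized at $x_1 = 3/4$ with value $c^2 - 27/256$.

```python
# Problem 3.8.2 check: p(x1) = c^2 + x1^4 - x1^3 controls the sign of V'
# on the hyperbola x1 x2 = c; its minimum over x1 > 0 is at x1 = 3/4.
import math

def p(c, x1):
    return c * c + x1**4 - x1**3

def p_min(c):
    return p(c, 0.75)  # value at the minimizer x1 = 3/4

c_crit = 3.0 * math.sqrt(3.0) / 16.0  # c_crit^2 = 27/256
```

Above the critical value ($c = 0.4 > 3\sqrt{3}/16 \approx 0.325$) the minimum is positive; below it ($c = 0.2$) it is negative, and a grid search agrees with the closed-form minimizer.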

3. From the previous part we have seen that the region of attraction of the equilibrium is not all of $\mathbb{R}^2$; hence, the equilibrium cannot be globally asymptotically stable.

Problem 3.11 The Van der Pol equation:
$$\dot x = f(x, \epsilon), \quad x(t_0) = x_0, \qquad f(x, \epsilon) = \begin{pmatrix} x_2 \\ -x_1 + \epsilon(1 - x_1^2)x_2 \end{pmatrix}. \tag{1}$$
From Section 3.3 in the textbook, and using $\lambda = \epsilon$:
$$A(t, \epsilon_0) = \frac{\partial f}{\partial x}\bigg|_{x = x(t,\epsilon_0),\,\epsilon = \epsilon_0} = \begin{pmatrix} 0 & 1 \\ -1 - 2\epsilon_0 x_1 x_2 & \epsilon_0(1 - x_1^2) \end{pmatrix}$$
$$B(t, \epsilon_0) = \frac{\partial f}{\partial \epsilon}\bigg|_{x = x(t,\epsilon_0),\,\epsilon = \epsilon_0} = \begin{pmatrix} 0 \\ (1 - x_1^2)x_2 \end{pmatrix}$$
The final equations are:
$$\dot x = f(x, \epsilon_0), \quad x(t_0) = x_0; \qquad \dot x_\epsilon = A(t, \epsilon_0)\,x_\epsilon + B(t, \epsilon_0), \quad x_\epsilon(t_0) = 0.$$

Problem 3.14
a. As in Pr. 3.11.
b. With $r = \sqrt{x^\top x}$,
$$\dot r = \frac{x^\top \dot x}{r} = \frac{1}{r}\left[-x^\top x + x_1\tanh(x_1) - x_1\tanh(x_2) + x_2\tanh(x_1) + x_2\tanh(x_2)\right]$$
$$= -r + \frac{x_1 + x_2}{r}\tanh(x_1) + \frac{-x_1 + x_2}{r}\tanh(x_2) \le -r + \frac{|x_1 + x_2|}{r} + \frac{|{-x_1} + x_2|}{r} \le -r + 2\sqrt{2},$$
since $|\tanh(\cdot)| \le 1$ and $|\pm x_1 + x_2| \le \sqrt{2}\,r$:
$$\frac{(x_1 + x_2)^2}{r^2} = \frac{x_1^2 + x_2^2 + 2x_1x_2}{r^2} \le 2.$$
For the last inequality, notice that $x_1^2 + x_2^2 - 2x_1x_2 = (x_1 - x_2)^2 \ge 0 \Rightarrow 2x_1x_2 \le x_1^2 + x_2^2 = r^2$. Similarly for the term $|-x_1 + x_2|/r$.
c. Immediate, using the comparison lemma and the solution of the ODE $\dot u = -u + 2\sqrt{2}$ for $r$ in part (b).
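The comparison-lemma bound of part (c), $r(t) \le e^{-t}r(0) + 2\sqrt{2}(1 - e^{-t})$, can be checked against a direct simulation of the system used in part (b):

```python
# Check r(t) <= exp(-t) r(0) + 2*sqrt(2)(1 - exp(-t)) along solutions of
#   x1' = -x1 + tanh(x1) - tanh(x2),  x2' = -x2 + tanh(x1) + tanh(x2).
import math

def f(x):
    x1, x2 = x
    return (-x1 + math.tanh(x1) - math.tanh(x2),
            -x2 + math.tanh(x1) + math.tanh(x2))

def simulate(x, h, steps):
    traj = [x]
    for _ in range(steps):
        k1 = f(x)
        k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
        k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
        k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
        x = (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        traj.append(x)
    return traj
```

Since the differential inequality $\dot r \le -r + 2\sqrt{2}$ holds pointwise, the simulated radius should stay below the comparison bound at every sample time.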

Problem 3.20 Let $f$ be Lipschitz on $D$ with constant $k$, and consider $\epsilon > 0$. Take $\delta = \epsilon/k$. Then, for $|x - y| < \delta$,
$$|f(x) - f(y)| \le k|x - y| < k\,\epsilon/k = \epsilon.$$
Since $\delta$ is independent of $x, y$, it follows that $f$ is uniformly continuous.

Problem 3.23 Set $g(\sigma) = f(\sigma x)$ for $0 \le \sigma \le 1$. Since $D$ is convex, $\sigma x \in D$ for $0 \le \sigma \le 1$. Then
$$g'(\sigma) = \frac{\partial f}{\partial x}(\sigma x)\,x,$$
and, since $f(0) = 0$,
$$f(x) = f(x) - f(0) = g(1) - g(0) = \int_0^1 g'(\sigma)\,d\sigma = \left[\int_0^1 \frac{\partial f}{\partial x}(\sigma x)\,d\sigma\right]x.$$

References
[1] Braun, M., Differential Equations and Their Applications, Springer-Verlag, 1983.
[2] Hale, J. K., Ordinary Differential Equations, Krieger Publishing Company, 1980.

EEE 586

HW # 4 SOLUTIONS

Problem 4.3 Obtain a Lyapunov function from the corresponding linearization.
(1) $A = [-1\ \ 0;\ 0\ -1]$; solving $A^\top P + PA = -I$ gives $P = \frac{1}{2}I$ and $V(x) = \frac{1}{2}x^\top x$. Then
$$\dot V = -x_1^2 + x_1^2x_2 - x_2^2 \le -\tfrac{1}{2}x^\top x - x_1^2\left(\tfrac{1}{2} - |x_2|\right).$$
So, locally in $\{|x_2| < 1/2\}$, $\dot V \le -V$, implying local ES. Alternatively, for $|x| < r$, $\dot V \le -x_1^2 + r|x_1||x_2| - x_2^2 \le -|x|^\top Q(r)|x|$, where
$$Q(r) = \begin{pmatrix} 1 & -r/2 \\ -r/2 & 1 \end{pmatrix},$$
implying that $\dot V$ is negative definite for $r < 2$. For large $x_2$ this Lyapunov function has indefinite derivative, offering no conclusion on global asymptotic stability. On the other hand, $x_2$ is independent of $x_1$ and decays exponentially fast. This implies that after some time $T_1$, $|x_2| < 1/2$, hence $x_1\dot x_1 \le -\frac{1}{2}x_1^2$, converging to zero by the comparison principle. This implies the global asymptotic stability of the origin.
(2) $A = [-1\ \ 1;\ -1\ -1]$; solving $A^\top P + PA = -I$ gives $P = \frac{1}{2}I$ ($A$ is $-I$ plus a skew-symmetric matrix) and $V(x) = \frac{1}{2}x^\top x$. Then
$$\dot V = -x^\top x\,(1 - x^\top x).$$
So, locally in $\{|x| < 1\}$, $\dot V \le -cV$ for some $c > 0$, implying local ES. For large $x$ this Lyapunov function has positive derivative, showing divergence. In fact, in reverse time the equilibrium at the origin is unstable, but the same Lyapunov function has a negative derivative for $x^\top x > 1$, implying that the system has a periodic solution. This is a stable limit cycle in reverse time, corresponding to an unstable limit cycle in forward time. This implies that the region of attraction cannot be global.
(3) $A = [0\ \ 1;\ -1\ -1]$; solving $A^\top P + PA = -I$ gives $P = \frac{1}{2}[3\ \ 1;\ 1\ \ 2]$ and $V(x) = \frac{1}{2}x^\top Px$. Collecting terms in the derivative along trajectories,
$$\dot V = \tfrac{1}{2}\left[-x_1^2 - x_2^2 - x_1^4 + x_1^3x_2\right].$$
So, locally in $\{|x_1x_2| \le r < 2\}$, $\dot V \le -cV$ for some $c(r) > 0$, implying local ES. An estimate of the region of attraction is the largest Lyapunov level set contained within the hyperbolas $|x_1x_2| < 2$. This Lyapunov function offers no conclusion for global stability. A reverse-time simulation shows the existence of a limit cycle, implying that the region of attraction cannot be global.
(4) $A = [-1\ -1;\ 2\ \ 0]$; solving $A^\top P + PA = -I$ gives $P = \frac{1}{2}[3\ \ 1;\ 1\ \ 2]$ and $V(x) = \frac{1}{2}x^\top Px$. Collecting terms,
$$\dot V = \tfrac{1}{2}\left[-x_1^2 - x_2^2 - x_1x_2^3 - 2x_2^4\right].$$
So, locally in $\{x_2^2 \le r < 2\}$, $\dot V \le -cV$ for some $c(r) > 0$, implying local ES. This Lyapunov function offers no conclusion for global stability. We notice that the cross term $x_1x_2$ in $V$ is responsible for the $x_1x_2^3$ term that makes the derivative indefinite for large $x$. If we eliminate it and adjust the coefficients to cancel the cross terms $x_1x_2$ from the derivative, we get $V = 2x_1^2 + x_2^2$, for which
$$\dot V = -4x_1^2 - 4x_1x_2 + 4x_1x_2 - 2x_2^4 = -4x_1^2 - 2x_2^4,$$
which is negative definite and shows global asymptotic stability. Notice, however, that in order to get this result, this Lyapunov function sacrifices the local exponential stability estimate. Roughly, the reason is that the local result is due to the $2x_1$ coupling term in the second ODE, but the global attractiveness comes from the $-x_2^3$ term.

Problem 4.2 Let $V(x) = \frac{1}{2}x^2$. Then $\dot V = ax^{p+1} + xg(x)$, with $|xg(x)| \le k|x|^{p+2}$.
1. When $p$ is odd and $a < 0$,
$$\dot V \le \tfrac{a}{2}x^{p+1} + \tfrac{a}{2}x^{p+1}\left(1 - \tfrac{2k|x|}{|a|}\right) \le \tfrac{a}{2}x^{p+1} < 0$$
in $\{0 < |x| < |a|/2k\}$ (note $x^{p+1} > 0$ for $p$ odd). Hence, $0$ is AS.
2. When $p$ is odd and $a > 0$, similarly $\dot V \ge \tfrac{a}{2}x^{p+1} > 0$ in $\{0 < |x| < a/2k\}$. Hence, $0$ is unstable.
3. When $p$ is even and $a \neq 0$, let $V(x) = ax$, for which $\{x: V > 0\}$ has $0$ on its boundary and contains points arbitrarily close to $0$. Then
$$\dot V = a^2x^p + a\,g(x) \ge \tfrac{a^2}{2}x^p + \tfrac{a^2}{2}x^p\left(1 - \tfrac{2k|x|}{|a|}\right) \ge \tfrac{a^2}{2}x^p > 0$$
in $\{0 < |x| < |a|/2k\}$. Hence, $0$ is unstable (Chetaev).

Problem 4.6

1. The system is
$$\dot x_1 = x_2^2, \qquad \dot x_2 = -x_2 + x_1^2 + x_1x_2.$$
The linearization around the origin produces
$$A = \frac{\partial f}{\partial x}\bigg|_{x=0} = \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix},$$
which has an eigenvalue equal to zero and another equal to $-1$. With $x_2 = h(x_1)$ on the center manifold, the center manifold equation reads
$$\frac{\partial h}{\partial x_1}(x_1)\,h^2(x_1) + h(x_1) - x_1^2 - x_1h(x_1) = 0.$$
Let $h(x_1) = ax_1^2$ and substitute in the center manifold equation:
$$x_1^2(a - 1) - ax_1^3 + 2a^3x_1^5 = 0 \;\Rightarrow\; a = 1, \qquad h(x_1) = x_1^2 + O(|x_1|^3).$$
Then substitute $x_2 = h(x_1)$ in the reduced system equation:
$$\dot x_1 = h^2(x_1) = x_1^4 + O(|x_1|^5).$$
The origin of the reduced system is unstable ($\dot x_1 > 0$ on both sides of $0$), hence the origin of the original system is an unstable equilibrium.
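The candidate $h(x_1) = x_1^2$ can be checked by plugging it into the center-manifold equation: the residual is $2x_1^5 - x_1^3 = O(|x_1|^3)$, confirming the $O(|x_1|^3)$ accuracy claimed above.

```python
# Residual of the Problem 4.6 center-manifold equation
#   h'(x) h(x)^2 + h(x) - x^2 - x h(x) = 0
# for the candidate h(x) = x^2.  The exact residual is 2 x^5 - x^3.
def residual(x):
    h, dh = x * x, 2.0 * x
    return dh * h * h + h - x * x - x * h
```

The residual divided by $x^3$ tends to $-1$ as $x \to 0$, i.e., the leading error term is cubic, as claimed.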
> > > 1 _ _ <0 Problem 4.7 Let V = 1 . Hence V 2 x P x. Then, V = x P Q = x with the obvious choice P = Q and 0 is AS. _ < 0 globally. This will be the case provided that yi (y ) > 0 for all y . Next, V is RU. So 0 will be GAS if V 2 For the given functions, y1 (y ) = y y 3 > 0 for small y only. y2 (y ) = y 2 + y 4 > 0 for all y . Hence, 0 is locally AS.

Problem 4.8 The function V = x₁²/(1 + x₁²) + x₂² is locally a pdf but not RU. Taking the derivative along trajectories, one finds V̇ < 0 in a neighborhood of the origin, implying local AS.
Considering the hyperbola x₂ = 2/(x₁ − √2), rewrite its equation as g(x) = 0 and form the gradient ∇g = [2/(x₁ − √2)², 1]ᵀ. Then we want to show that the inner product ⟨f, ∇g⟩ > 0 on the hyperbola. While the substitution is straightforward, the expressions are quite long; here a Matlab evaluation is helpful to show that the inner product is positive, meaning that the vector field always points to the right of the hyperbola. This observation implies that initial conditions starting to the right of the hyperbola remain there. Hence 0 cannot be GAS.

Problem 4.9 a. If x₁ = 0, V(x) = x₂²/(1 + x₂²) + x₂² → ∞ as |x₂| → ∞. If x₂ = 0, V(x) = x₁²/(1 + x₁²) + x₁² → ∞ as |x₁| → ∞.
b. On the line x₁ = x₂, V(x) = 4x₁²/(1 + 4x₁²) → 1 as |x₁| = |x₂| → ∞. Hence, V is not RU.

Problem 4.10 Using the suggested expression,
  xᵀPf(x) + f(x)ᵀPx = xᵀ ( ∫₀¹ [P (∂f/∂x)(σx) + (∂f/∂x)ᵀ(σx) P] dσ ) x ≤ −xᵀx

Next, V = fᵀPf ≥ 0 for any f. It is pd iff V = 0 ⇔ x = 0, which is equivalent to f(x) = 0 ⇔ x = 0, i.e., 0 being the unique equilibrium of ẋ = f(x). We show this by contradiction: suppose there is q ≠ 0 such that f(q) = 0. Then, from Part 1, qᵀPf(q) + f(q)ᵀPq ≤ −qᵀq. But the left-hand side is 0, so q = 0, which is a contradiction.

Next, we need to show that V is RU; that is, as |x| → ∞, so must |f(x)|. Again, suppose it is not so, and |f(x)| ≤ c as |x| grows. From Part 1, (xᵀPf)/(xᵀx) ≤ −1/2. But if f were bounded, the fraction would satisfy |xᵀPf|/(xᵀx) ≤ |P|c/|x| → 0, which is a contradiction.

Thus, we have that V is pd and RU. Furthermore, V̇ = fᵀ(P ∂f/∂x + (∂f/∂x)ᵀP)f ≤ −fᵀf < 0 for f ≠ 0. So 0 is GAS.

Problem 4.13
1. Let V = ½(x₁² − x₂²). Computing V̇ along the trajectories and completing squares (treating the regions |x| ≤ 1/4 and |x| ≤ 1/√2 separately), one obtains V̇ ≥ 0 whenever |x| ≤ 1/4. Furthermore, V > 0 in U = {x : |x₁| > |x₂|}, which has zero on its boundary (0 ∈ ∂U). Hence, by Chetaev's theorem, 0 is an unstable equilibrium.
2. (corrected) This solution is based on more elementary observations. Consider the set {x : x₂ − x₁³ ≥ 0}; on its boundary in the first quadrant, ẋ₂ ≥ 0. Also, consider the set {x : x₁ − x₂³ ≥ 0}; on its boundary in the first quadrant, ẋ₁ ≥ 0. The intersection of these two sets is therefore a positively invariant set. Since both elements of ẋ are positive inside it, any trajectory starting inside it moves towards higher values (the equilibrium at (1,1)). Since the same is true for trajectories starting arbitrarily close to 0, we conclude that 0 is unstable.

Problem 4.15 At an equilibrium, ẋ = 0; the three state equations then force x₂ = 0, x₃ = 0, and h₁(x₁) = 0, hence x₁ = 0, so the origin is the only equilibrium. Now, V(x) is locally a pdf since V > 0 for any x ≠ 0. We find V̇ = −x₂² − h₂(x₃)x₃ ≤ 0, hence the equilibrium is US. Next, using LaSalle, V̇ = 0 for x₂ = 0, x₃ = 0. Solutions of the ODE in {x : V̇ = 0} have x₂ ≡ 0, hence ẋ₂ = 0, implying that h₁(x₁) = 0 and, therefore, x₁ = 0, together with x₂ = 0, x₃ = 0. Hence, only 0 is a trajectory inside {x : V̇ = 0}, implying that the equilibrium is UAS.
For global AS, we also need V to be RU; that is, both ∫₀^{x₁} h₁(s) ds and ∫₀^{x₃} h₂(s) ds → ∞ as |x₁|, |x₃| → ∞.

Problem 4.16 Let V = ¼x₁⁴ + ½x₂². Then
  V̇ = x₁³x₂ + x₂(−x₁³ − x₂³) = −x₂⁴ ≤ 0
This implies that 0 is uniformly stable. Next, using LaSalle, V̇ = 0 ⇒ x₂ = 0. In the set where x₂ ≡ 0, we have ẋ₂ = 0 and therefore x₁ = 0. Hence, the only trajectory in this set is 0, so the equilibrium is UAS. Furthermore, since V is RU, 0 is GUAS.

Problem 4.17 At an equilibrium of the system
  ẋ₁ = x₂
  ẋ₂ = −g(x₁) − h(x₁)x₂

we have x₂ = 0 and g(x₁) = 0. To have an isolated equilibrium, g must have an isolated root at 0.

Choosing V = ∫₀^{x₁} g(s) ds + ½x₂², we get
  V̇ = g(x₁)x₂ + x₂(−g(x₁) − h(x₁)x₂) = −h(x₁)x₂²

Hence, sufficient conditions for asymptotic stability are:
1. sg(s) > 0 (class K) in a neighborhood of 0, for V to be a (locally) pdf.
2. h(s) > 0 in a neighborhood of 0, so that locally V̇ ≤ 0. We may have h(0) = 0 provided that h is a class K function (e.g., h(s) = s²).
Applying LaSalle's Theorem, V̇ = 0 if either x₂ = 0 or h(x₁) = 0. The first yields system trajectories that have x₂ ≡ 0, so ẋ₂ = 0; hence, g(x₁) = 0, but this only happens at the origin, so 0 is UAS. The second, if possible, can only have an isolated root at 0, so it yields a limit set with x₁ ≡ 0; from this, x₂ must be 0, hence again 0 is UAS.
Next, we choose V = ∫₀^{x₁} g(s) ds + ½(x₂ + ∫₀^{x₁} h(s) ds)², from which we get
  V̇ = g(x₁)x₂ + (x₂ + ∫₀^{x₁} h(s) ds)(−g(x₁) − h(x₁)x₂ + h(x₁)x₂)
    = −g(x₁) ∫₀^{x₁} h(s) ds
With the same conditions as before, we have that V is lpdf and V̇ is lnsdf. Furthermore, V̇ = 0 implies that x₁ = 0 in a neighborhood of 0. By LaSalle, trajectories will converge to solutions in the limit set x₁ ≡ 0. Since only 0 is such a solution, 0 is UAS.
Remark: The interesting observation of this analysis is that by adding the two Lyapunov functions, the overall derivative is lndf (for a non-vanishing h), containing both x₁ and x₂ sign-definite terms.

Problem 4.17*
1. Using V(x) = 5x₁² + 2x₁x₂ + 2x₂², we have
  V̇(x) = 10x₁ẋ₁ + 2ẋ₁x₂ + 2x₁ẋ₂ + 4x₂ẋ₂
A direct computation (grouping terms as −2(x₁² + x₂²) plus terms carrying the factor (1 − x₂²)) shows V̇(x) < 0 for all x ∈ D \ {0}, with D = {x ∈ ℝ² : |x₂| < 1}.
2. The analysis follows Example 4.11 and is an application of Theorem 3.4. The main idea is to show that the vector field on the boundary of
  S = {x ∈ ℝ² : V(x) ≤ 5} ∩ {x ∈ ℝ² : |x₂| ≤ 1} = S₁ ∩ S₂
points towards the interior of the set (see Figure 1). This, and the fact that all such sets are compact (V is radially unbounded), imply the positive invariance needed in the theorem. Analytically, the (outward) normal vectors on the boundaries of S₁ and S₂ are
  ∂S₁: n₁ = Px = [5x₁ + x₂, x₁ + 2x₂]ᵀ
  ∂S₂: n₂ = [0, sign(x₂)]ᵀ
If the projection of the vector field on these normals is negative, then trajectories that start on the boundary move towards the interior of the set. Indeed, for ∂S₂ we have fᵀn₂ = sign(x₂)ẋ₂, which evaluates to −x₁ − 1 at x₂ = 1 and to x₁ − 1 at x₂ = −1, yielding the conditions x₁ > −1 at x₂ = 1 and x₁ < 1 at x₂ = −1 (for fᵀn₂ < 0); both hold on ∂S₂ ∩ S₁. For ∂S₁, a computation gives
  fᵀn₁ = −xᵀ [[2 − x₂², 1 − 2x₂²], [1 − 2x₂², 5 − 4x₂²]] x
which is negative for x₂² < 1.
Now, the only trajectory in the subset of S where V̇(x) = 0 is the origin, hence the set S is an estimate of the region of attraction. It is also quite interesting to perform a reverse-time simulation starting from points that are known to belong to the region of attraction: in this case, all such trajectories converge to a limit cycle that defines the RoA boundary.

Figure 1: Vector field directions in the set S.

Problem 4.18 From the linearization around the origin we find that the system is asymptotically stable, since Re{λ(A)} < 0, with
  A = ∂f/∂x |₀ = [[0, 1], [−1, −1]]
For an estimate of the region of attraction, we solve the Lyapunov equation PA + AᵀP = −I, from which
  P = [[1.5, 0.5], [0.5, 1]]
Then, using V(x) = xᵀPx, we find the largest level set contained in {V̇ < 0}. That is, we find the maximum constant c such that {x : V(x) ≤ c} ⊂ {x : V̇(x) < 0}. Here
  V̇(x) = ẋᵀPx + xᵀPẋ = −x₁² − x₂² + x₁⁴ + 2x₁³x₂
Letting |x₁| ≤ δ, we have
  V̇(x) ≤ −(1 − δ²)x₁² + 2δ²|x₁||x₂| − x₂² = −[|x₁|, |x₂|] [[1 − δ², −δ²], [−δ², 1]] [|x₁|; |x₂|]
Hence, for δ < 0.7861, V̇(x) < 0.
Next, to find the largest level set in |x₁| ≤ δ, we find the point where the normal to the level set is aligned with the normal to the constraint (see Figure 2). That is, Px = λ[1, 0]ᵀ for a constant λ to be determined. From this, x = λ[0.8, −0.4]ᵀ. For the level set to be tangent to the constraint, λ = δ/0.8. This vector x is on the boundary of the largest level set, so c = V(x) = xᵀPx = λx₁ = δ²/0.8. Thus, the estimate of the region of attraction is S = {x ∈ ℝ² : V(x) ≤ δ²/0.8}.
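The Lyapunov solve and the tangency construction above are easy to verify numerically. The following NumPy sketch solves PA + AᵀP = −I by vectorization (so no control toolbox is needed) and checks that the level constant c = δ²/0.8 makes the ellipse exactly tangent to |x₁| = δ:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
n = A.shape[0]
# Solve the Lyapunov equation A^T P + P A = -I via vectorization:
# (A^T ⊗ I + I ⊗ A^T) vec(P) = vec(-I)   (row-major vec convention)
lhs = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(lhs, -np.eye(n).flatten()).reshape(n, n)

delta = 0.7861              # bound on |x1| for negative definiteness
c = delta ** 2 / 0.8        # level constant from the tangency construction
# max of x1 on {x^T P x <= c} is sqrt(c * (P^{-1})_{11}); should equal delta
x1_max = np.sqrt(c * np.linalg.inv(P)[0, 0])
```

The computed P reproduces the matrix [[1.5, 0.5], [0.5, 1]] quoted in the solution, and x1_max recovers δ.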

Figure 2: Vector field for S.

Problem 4.23 Letting V = xᵀPx, we get
  V̇ = xᵀ(AᵀP + PA − 2PBR⁻¹BᵀP)x = −xᵀQx − uᵀRu,  for u = −R⁻¹BᵀPx
Hence, by LaSalle, V̇ ≤ 0, implying US, and the limit set consists of trajectories in V̇ = 0, i.e., in xᵀQx ≡ 0 and uᵀRu ≡ 0.
1. If Q > 0, then V̇ = 0 implies x = 0, so 0 is UAS. (Alt.: V̇ is ndf.)
2. Since R > 0, we now have V̇ = 0 when u = 0 and Cx = 0 (here Q = CᵀC). Substituting, we get that the limit set consists of trajectories of the system ẋ = Ax with Cx ≡ 0. If (A, C) is observable, then on any interval [t₀, t₀ + T], x(t₀) = L[y_{[t₀, t₀+T]}], where L is a linear transformation. That means the only such solution is x ≡ 0, hence 0 is UAS.

Problem 4.24 Letting V be the solution of the Hamilton-Jacobi-Bellman equation, Vₓf + q − ¼VₓGR⁻¹GᵀVₓᵀ = 0, and u = −KR⁻¹GᵀVₓᵀ, we get
  V̇ = Vₓf − KVₓGR⁻¹GᵀVₓᵀ = −q − (K − ¼)VₓGR⁻¹GᵀVₓᵀ
where Vₓ = ∂V/∂x. Hence, by LaSalle, V̇ ≤ 0 for K ≥ ¼, implying US; the limit set consists of trajectories in V̇ = 0, i.e., in q ≡ 0 and uᵀRu ≡ 0, with u = −R⁻¹GᵀVₓᵀ.
1. If q(x) > 0 (pdf) and K > ¼, then V̇ = 0 implies x = 0, so 0 is UAS. (Alt.: V̇ is ndf.)
2. Since R > 0, we now have V̇ = 0 when u = 0 and q(x) = 0. Substituting, we get that the limit set consists of trajectories of the system ẋ = f(x) with q(x) ≡ 0. Since the only such solution is x ≡ 0, it follows that 0 is UAS.
Global results are obtained if the assumptions hold globally and V is, in addition, a radially unbounded function.

Problem 4.25 Controllability of (A, B) implies that any initial state can be transferred to the origin in time τ > 0, so the finite-interval [0, τ] gramian is nonsingular, and therefore positive definite. (See any text on Linear System Theory for this.) Hence, its inverse is also positive definite.

Viewing the Lyapunov equation as the solution of a differential equation, with W = ∫₀^τ e^{−At}BBᵀe^{−Aᵀt} dt we get
  AW + WAᵀ = −∫₀^τ (d/dt)[e^{−At}BBᵀe^{−Aᵀt}] dt = BBᵀ − e^{−Aτ}BBᵀe^{−Aᵀτ}
Hence, with K = BᵀW⁻¹,
  (A − BK)W + W(A − BK)ᵀ = AW + WAᵀ − 2BBᵀ = −BBᵀ − e^{−Aτ}BBᵀe^{−Aᵀτ}
Letting V(x) = xᵀW⁻¹x, we have
  V̇ = xᵀW⁻¹[(A − BK)W + W(A − BK)ᵀ]W⁻¹x = −xᵀW⁻¹[BBᵀ + e^{−Aτ}BBᵀe^{−Aᵀτ}]W⁻¹x ≤ 0
So the origin is a stable equilibrium. For asymptotic stability, we need to show that no eigenvalue of A − BK can have a zero real part. Suppose there is one, and let ν be the corresponding left eigenvector, ν*(A − BK) = λν*. Taking the quadratic form of the Lyapunov identity above with ν, we get
  ν*[(A − BK)W + W(A − BK)ᵀ]ν = 2Re[λ] ν*Wν = −ν*[BBᵀ + e^{−Aτ}BBᵀe^{−Aᵀτ}]ν
But Re[λ] = 0, so Bᵀν = 0, which implies that (A − BK, B) is not controllable. Since controllability is invariant under state feedback, this is a contradiction.

Problem 4.28 At an equilibrium of the system
  ẋ₁ = −x₁
  ẋ₂ = (x₁x₂ − 1)x₂³ + (x₁x₂ − 1 + x₁²)x₂

we have x₁ = 0 and x₂ = 0: the terms in the parentheses cannot be zero (they would require x₁x₂ = 1 or x₁x₂ = 1 + x₁²), since x₁ = 0. The linearization has system matrix A = −I, so the origin is ES. Letting V = x₁x₂ we get
  V̇ = −x₁x₂ + x₁x₂³(x₁x₂ − 1) + x₁x₂(x₁x₂ − 1 + x₁²)
    = −2x₁x₂ + x₁x₂³(x₁x₂ − 1) + (x₁x₂)² + x₁²(x₁x₂)
On the hyperbola x₁x₂ = 2 this gives
  V̇ = 2(2 − 2) + 2[(2 − 1)x₂² + x₁²] ≥ 0
Hence {x : x₁x₂ ≥ 2} is a positively invariant set. Since it does not contain 0, the origin cannot be GAS.

Problem 4.29 At an equilibrium of the system
  ẋ₁ = x₁ − x₁³ + x₂
  ẋ₂ = 3x₁ − x₂

we have x₂ = 3x₁ and x₁ − x₁³ + 3x₁ = 0. Hence, there are three equilibria, at (0,0), (2,6), (−2,−6). The linearization matrix is
  A = [[1 − 3x₁², 1], [3, −1]]
Its eigenvalues at (0,0) are 2, −2, implying that the origin is an unstable equilibrium (saddle). At the other two equilibria, the eigenvalues of A are −11.29 and −0.7085, so these are stable nodes. To estimate the region of attraction of (2,6) (and similarly for the other equilibrium at (−2,−6)), we first shift the state space so that this equilibrium appears at the origin in the new coordinates. That is, we define z₁ = x₁ − 2 and z₂ = x₂ − 6. Then, we have
  ż₁ = z₂ − z₁³ − 6z₁² − 11z₁
  ż₂ = 3z₁ − z₂
For a quadratic Lyapunov candidate, V = zᵀPz, where
  P = [[1, p], [p, q]]
Without loss of generality, the first element is normalized to 1. The condition for P > 0 is q > p². The computation of the derivative is lengthy but straightforward:
  V̇ = [−z₁⁴ − (11 − 3p)z₁² − (q − p)z₂²] − (12p − 3q − 1)z₁z₂ − [(6pz₁)z₁z₂ + (pz₁²)z₁z₂] − 6z₁³
The first group represents negative-definite terms. The second is a cross-product to be handled by completion of squares. The third group contains cross-products with state-dependent coefficients; these can also be handled by completion of squares, with an additional constraint bounding the coefficient, which for this problem is simply a bound on |z₁|, say k. The last term should be handled by subtracting from the z₁² term, with an additional magnitude constraint on z₁. Thus, one approach would be to split the quadratic terms in four parts and use each one to bound each of the indefinite terms. This is a straightforward but inefficient procedure that is good only for analysis and existence results. For more quantitative analysis, all terms should be handled together, setting up constraints and evaluating them numerically. For example, after fixing k, the constraints for V̇ ≤ 0 can be evaluated on a grid of (q, p). After selecting the "best" parameters q, p, the procedure is repeated for a different k until no more improvement is possible.
The more interesting part of this problem is the specification of the optimization objective. This is important to achieve some form of normalization, to compare the ellipsoids for different parameters q, p. While many possibilities exist, an attractive one is to maximize the volume of the ellipsoid level set. For an ellipsoid zᵀPz ≤ c, the volume is proportional to det(cP⁻¹). Now, for a given set of parameters q, p and k, the largest-volume ellipsoid zᵀPz ≤ c that is contained in the constraint set {z : |z₁| ≤ k} corresponds to c = k²/[P⁻¹]₍₁,₁₎ (obvious after the change of coordinates w = P^{1/2}z). Maximizing this volume, proportional to k⁴/([P⁻¹]₍₁,₁₎² det(P)), we get a solution which is approximately
  k = 1.4, q = 0.388, p = 0.09, c = 0.5, Vol = 0.94
After specifying the ellipsoid matrix P, it is also easy to plot the contour of V̇ = 0 and then plot contours of V = c until the level set grows beyond the boundary of V̇ = 0. That set will be a (usually much better) estimate of the region of attraction (RoA). In the attached figures we show the "optimal" level set estimate of the RoA obtained from the above iterative method. Notice that the refinement with the Lyapunov level sets is quite impressive, yielding a set that comes near the saddle equilibrium, which represents a hard bound (optimal c = 6, an order of magnitude improvement). In contrast to that, the similarly refined estimate of the RoA using the Lyapunov function corresponding to the linearized system is still fairly small. In this case, the local system behavior is significantly different, so the level sets and trajectories near the equilibrium have different directionality than far from it.
It goes without saying that the refinement idea using contour plots is feasible (and quite easy) because the system has only two states. For higher-order systems this approach would no longer be feasible, and we would need to rely more on analytical methods to bring the problem to a computable form.

Problem 4.43 With the given control, the closed-loop system is ẋ = f(x) − G(x)Gᵀ(x)Px. Using V = xᵀPx as a Lyapunov function candidate,
  V̇ = 2xᵀPẋ = 2xᵀPf(x) − 2xᵀPG(x)Gᵀ(x)Px ≤ −xᵀPx − W(x) ≤ −V(x)
Hence, the origin is ES.

Problem 4.44 Let V = ½(x₁² + x₂²). Then
  V̇ = −x₁² + x₁x₂ + x₁(x₁² + x₂²) sin t − x₁x₂ − x₂² + x₂(x₁² + x₂²) cos t
    ≤ −|x|² + |x|²·|x₁ sin t + x₂ cos t| ≤ −|x|² + |x|³ ≤ −(1 − r)|x|²,  for all |x| ≤ r, r < 1
Hence, the origin is ES and, since V = ½|x|², an estimate of the region of attraction is {x : |x| < 1}.
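A sample-based check of the inequality V̇ ≤ −|x|² + |x|³ used above. The state equations here are our reconstruction from the displayed derivative (ẋ₁ = −x₁ + x₂ + (x₁² + x₂²) sin t, ẋ₂ = −x₁ − x₂ + (x₁² + x₂²) cos t), so treat them as an assumption:

```python
import math
import random

# With V = (x1^2 + x2^2)/2 the derivative is
#   Vdot = -|x|^2 + |x|^2 (x1 sin t + x2 cos t) <= -|x|^2 + |x|^3
# Sample random states and times and record the worst violation margin.
random.seed(0)
worst = -float("inf")
for _ in range(5000):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    t = random.uniform(0.0, 2.0 * math.pi)
    n2 = x1 * x1 + x2 * x2
    f1 = -x1 + x2 + n2 * math.sin(t)
    f2 = -x1 - x2 + n2 * math.cos(t)
    vdot = x1 * f1 + x2 * f2
    worst = max(worst, vdot - (-n2 + n2 ** 1.5))
```

The margin `worst` stays nonpositive, since x₁ sin t + x₂ cos t ≤ |x| by Cauchy-Schwarz.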

Problem 4.54
1. ẋ = −(1 + u)x³. Let u = −2, x(0) > 0. Then x diverges; the system is not ISS.
2. ẋ = −(1 + u)x³ − x⁵. Let V = x²/2. Then V̇ = −x⁶ − x⁴ − ux⁴ ≤ −x⁴ for |x| ≥ √|u|. The system is ISS.
3. ẋ = −x + x²u. Let u = 1, x(0) > 1. Then x diverges; the system is not ISS.
4. ẋ = x − x³ + u. The zero equilibrium for u = 0 is unstable and, therefore, the system is not ISS. (Solutions are nevertheless ultimately bounded in terms of |u|.)

Problem 4.55
1. For the system
  ẋ₁ = −x₁ + x₁²x₂
  ẋ₂ = −x₁³ − x₂ + u
we choose V = ½(x₁² + x₂²) to cancel the cross terms and get V̇ = −x₁² − x₂² + x₂u ≤ −|x|² + |x||u|. Hence, V̇ is ndf for |x| > ρ(|u|) = |u|, and the system is ISS.
2. For the system
  ẋ₁ = −x₁ + x₂
  ẋ₂ = −x₁³ − x₂ + u
we choose V = ¼x₁⁴ + ½x₂² to cancel the cross terms and get V̇ = −x₁⁴ − x₂² + x₂u ≤ −½min(|x|², |x|⁴) + |x||u|. (Note: x₁⁴ + x₂² ≥ ½min(|x|², |x|⁴), by considering separately the cases |x₁| ≤ 1 and |x₁| > 1.) Hence, V̇ is ndf for |x| > ρ(|u|) = max(|2u|^{1/3}, |2u|), and the system is ISS.
3. The system now is
  ẋ₁ = x₂
  ẋ₂ = −x₁³ − x₂ + u

This is a good example to investigate the use of more general Lyapunov functions: 4.55.1 is the simpler case of a nonlinear system where a decoupled (no cross terms) quadratic Lyapunov function is appropriate; 4.55.2 is more complicated in the sense that the Lyapunov function is a mix of quadratic and quartic terms, but is still decoupled. This example requires both: a non-quadratic Lyapunov function with cross terms. An appropriate change of coordinates achieves the elimination of the cross terms, as follows. With the transformation (x₁, x₂) ↦ (z₁, z₂) = (x₁, x₁ + x₂), the system takes a form similar to 4.55.2, for which a Lyapunov function V = z₁² + z₁⁴ + z₂² can easily be found to produce the same result.
On the other hand, a useful, more general approach is suggested in Pr. 4.17. Identifying g(x₁) = x₁³ and h(x₁) = 1, all the conditions of that problem are met. We construct the Lyapunov function candidate as the sum of the two energy forms:
  V = ½x₂² + ∫₀^{x₁} g(s) ds + ½(x₂ + ∫₀^{x₁} h(s) ds)² + ∫₀^{x₁} g(s) ds = ½x₂² + ½(x₂ + x₁)² + ½x₁⁴
Its derivative is
  V̇ = −x₁⁴ − x₂² + (x₁ + 2x₂)u
which is negative when |x| > ρ(|u|) = max(|6u|^{1/3}, |6u|). Hence, the system is ISS.
4. For u = 0 the system has an equilibrium set {x₁² = 1}; hence, the origin is not GAS and the system is not ISS.
5. For u = 0 the system has equilibrium points (0,0), (1,1), (−1,−1); hence, the origin is not GAS and the system is not ISS.

Problem 4.56 The system ẋ₁ = −x₁³ + x₂, with x₂ viewed as an input, is ISS. The system ẋ₂ = −x₂³ has the origin as a (globally) UAS equilibrium. By Lemma 4.7, the origin of the cascade system is GUAS.

Problem 4.59 The system is ISS (Example 4.25). Since the input goes to zero: for any ε₁ > 0, there exists T such that e^{−t} < ε₁ for t > T. Choose ε₁ such that γ(ε₁) < ε/2 and t₀ > T. Also, the effect of the initial conditions decays as β(c, t − t₀), where c > 0 is a positive constant, because x is bounded. Since β is class KL, there exists T′ > 0 such that β(c, t − t₀) < ε/2 for all t > T′. Then |x(t)| ≤ ε for all t > max{T, T′}. This implies that x(t) → 0 as t → ∞.

Problem 4.64 Let V_{k+1} = V(f(x_k)) and V_k = V(x_k). Then V_{k+1} − V_k ≤ −c₃|x_k|² ≤ −(c₃/c₂)V_k. Note that V ≥ 0 implies θ = c₃/c₂ ≤ 1. Thus, V_{k+1} ≤ (1 − θ)V_k ≤ (1 − θ)²V_{k−1} ≤ … ≤ (1 − θ)^{k+1}V₀. Bringing in the left-hand side of the Lyapunov function inequality, |x_{k+1}| ≤ √(c₂/c₁) (√(1 − θ))^{k+1} |x₀|, from which we conclude the exponential stability of the equilibrium.

Simulink code for Pr. 4.29

EEE 586

HW # 5 SOLUTIONS

Problem 5.2
1.

For |v_i| ≤ L, SAT(v_i/L) = v_i/L, which implies h_i(v) = L·SAT(v_i/L) − v_i = 0. When |v_i| > L, SAT(v_i/L) = sign(v_i), and L > 0 implies
  |h_i(v)| = |L sign(v_i) − v_i| = |v_i| − L
For L < |v_i| ≤ L(1 + ε), we have L ≥ |v_i|/(1 + ε), hence
  |h_i(v)| ≤ |v_i| − |v_i|/(1 + ε) = (ε/(1 + ε))|v_i|

2. For ẋ = (A + BF)x + Bh(Fx), let V = xᵀPx, with P(A + BF) + (A + BF)ᵀP = −I; A + BF Hurwitz implies P = Pᵀ > 0. Then V is a positive definite, decrescent function, and
  V̇ = xᵀ[P(A + BF) + (A + BF)ᵀP]x + 2xᵀPBh(Fx) ≤ −xᵀx + 2‖PB‖‖x‖‖h(Fx)‖
But ‖h(Fx)‖ ≤ (ε/(1 + ε))‖Fx‖ ≤ (ε/(1 + ε))‖F‖‖x‖ for x ∈ E. Hence,
  V̇ ≤ −‖x‖² (1 − 2‖PB‖‖F‖ ε/(1 + ε)),  x ∈ E
Now, suppose ε/(1 + ε) < 1/(2‖PB‖‖F‖). Then there exists γ > 0 such that 1 − 2‖PB‖‖F‖ε/(1 + ε) ≥ γ. This implies that V̇ ≤ −γ‖x‖² is a locally negative definite bound (∃ B_r(0) : B_r ⊂ E), where E = {x : |(Fx)_i| ≤ L(1 + ε)} is an intersection of half-spaces. To see this, let f_i denote the rows of F (n-dimensional row vectors). Then (Fx)_i = f_i x, and each of f_i x ≤ c, f_i x ≥ −c (with c = L(1 + ε)) defines a half-space, so each pair of inequalities defines a set
  E_i = {x : f_i x ≤ c} ∩ {x : f_i x ≥ −c},  and E = ∩_i E_i
Notice that E is not necessarily bounded, but it contains an open ball around the origin, e.g., B_r(0) ⊂ E with r ≤ L(1 + ε)/‖F‖. So, V̇ being locally negative definite implies that the origin is uniformly asymptotically stable (locally).

3. For an estimate of the region of attraction, we need to find the largest level set contained in the set where V̇ < 0. Hence the region-of-attraction estimate would be a set R_c = {x : V(x) = xᵀPx ≤ c}, where c > 0 is defined as the solution of the optimization problem: max c subject to R_c ⊂ {x : |(Fx)_i| ≤ L(1 + ε)}.
To solve this problem, note that P = Pᵀ > 0 implies there exists a symmetric positive definite P^{1/2} with P^{1/2}P^{1/2} = P, and x ∈ R_c ⇔ ‖P^{1/2}x‖² ≤ c. Now (Fx)_i = (f_i P^{−1/2})(P^{1/2}x). Since P^{1/2}x is an arbitrary vector of norm at most √c, we have R_c ⊂ E iff √c ‖f_i P^{−1/2}‖ ≤ L(1 + ε) for all i, implying that
  c = min_i [ L(1 + ε) / ‖f_i P^{−1/2}‖ ]²

4. Using the above procedure, we have P = ½I and P^{−1/2} = √2 I, so ‖f₁P^{−1/2}‖ = √2 and ‖f₂P^{−1/2}‖ = √(2(1 + 1.9²)); hence c = [L(1 + ε)]² / (2(1 + 1.9²)).
Problem 5.5 Let V (x) = x> P x, then _ (x) = x _ V _ > P x + x> P x > > _ V (x) = x P A + A P P BB > P + g (x); B > P x + B > P x; g (x) B > P x; B > P x _ (x) = x> (Q 2P ) x + 2 g (x); B > P x B > P x; B > P x + hg (x); g (x)i hg (x); g (x)i V _ (x) = x> (Q 2P ) x B > P x g (x)2 + kg (x)k2 V _ (x) x> (Q 2P ) x + k2 x> x V _ (x) x> Q k2 I x 2x> P x V _ (x) 2x> P x < 0 V The derivative of the Lyapunov function V (x) along the trajectories of the perturbed system is negative denite, hence the origin is globally exponentially stable. Problem 5.10 p 2 4 4 6 4 _ 2. Using V = 1 8jxj u. Then, by Theorem 4.19, the system 2 x , we have V = x + ux x x ; is I2S stable. Then, by Theorem 5.3, we conclude that the system is L1 stable . The origin of the unforced system is not exponentially stable. However, the origin of the forced system is asymptotically stable for juj < 1, which implies that jy (t)j (jx0 ); t) + ju(t)j. Therefore, the system is small-signal, nite-gain L1 stable. Problem 5.11 p 2 2 _ 1. Using V = 1 V u. Then the system is I2S stable and, 2 (x1 + x2 ), we get V 2V + x2 u 2V + by Theorem 5.1, it is L1 stable. With u = 0, the same derivation shows that the origin is GES. Since all assumptions of Theorem 5.1 are satised globally, the system is nite gain L1 stable. 2. For the system x _ 1 = x1 + x2 ; x _ 2 = x3 y = x2 1 x2 + u; we have a linearization matrix with eigenvalues in the LHP. Corollary 5.1 is applicable, showing s.s.-Lp -stability with nite gain. p 1 2 1 4 4 2 _ For a global result, take V (x) = 1 4 x1 + 2 x2 , yielding V = x1 x2 + x2 u 4 V + 2V juj. Thm.5.2 shows L1 -stability, but not with nite gain. In fact, while Thm.5.1 is not applicable because V is not bounded by _ quadratics, our last expression that 0 is ES (globally). Proceeding as in the proof p V cV (for u1= 0) shows p _ W + 2juj. The comparison Lemma together with the bound of Thm.5.1, we dene W = V to get W 8 p jy j 2W show that the system is L1 -stable with nite gain. 
Notice that our Lyapunov function did not meet all the general requirements of Thm.5.1, but (1) it showed ES and (2) it provided suitable bounds of the input and output vectors/functions. This is an example of how, in nonlinear systems, a technique of a proof may be applicable, while the theorem itself is not. (This is so that those theorems are easier to state.) 2 2 _ 3. Using V = 1 2 (x1 + x2 ), with u = 0, we get V = 2V (2V 1). So for V (0) > 1=2, V grows unbounded. Hence, the system is not L1 stable. However, using linearization we nd that the origin is locally ES and all assumptions of Theorem 5.1 are satised locally. Hence the system is small signal, nite gain L1 stable. Problem 5.15 2

1. For the system x _ 1 = x2 ; we have a linearization matrix with eigenvalues in the LHP. Corollary 5.1 is applicable, showing s.s.-Lp -stability with nite gain. A lower bound of the L2 gain is the H1 norm of the linearization transfer function: G(s) = s !2 1 2 s2 +ks+a . Its magnitude on the j! -axis satises jG(j! )j = (!2 +a)2 +(k! )2 = (a=! 2 1)2 +(k)2 . Hence, the maximum jG(j! )j is 1=k. 2 To obtain an upper bound we use V (x) = 1 2 x2 + a(1 cos x1 ) (Recall Problem 4.17) as a test function for the HJ inequality. With a scaling factor (to be determined), we compute 1 2 2 1 2 2 2kx2 + 2 x2 + x2 H= 2 The minimum of H occurs when = 2 k, where = 1=k. Hence, 1=k is an upper bound of the L2 -gain of the system. Since it is also a lower bound, it is equal to the L2 -gain of the system. Notice that stability here is \small-signal" since V is only an lpdf. Its domain extends up to x1 < 2 . Other test functions may yield the same L2 -gain (e.g., the quadratic V corresponding to the linearization) but with dierent domains. > 2. Let W (x) = 1 2 x x. 2 3 x2 @W 2 2 2 2 5 f = [x1 ; x2 ; x3 ] 4 x1 x2 SAT (x2 = (x2 2 x3 ) 2 x3 )SAT (x2 x3 ) = hSAT (h) @x x SAT (x2 x2 )
3 2 3

x _ 2 = a sin x1 kx2 + u;

y = x2

@W G = [x1 ; x2 ; x3 ] 4 @x

Let D = fx : jh(x)j 1g. W (x) satises (5.32-33) in D with k = 1. Taking V = W and = 1, it follows that _ 2 0 ) x1 (5.28) is satised in D. Next, for the unforced system, h(x) 0 ) x _ 3 0 ) x3 = const. ) x 0)x _ 1 0 ) x2 0 ) x3 0. Using Lemma 5.2 we conclude that, for suciently small kx0 k, the system is small-signal nite-gain L2 stable, with gain less than or equal to 1. 3. In the set D = fx : j2x1 + x2 j < 1g, the stsrem is linear, with transfer function H (s) = s2 +1 s+1 . Its p L2 -gain is the peak magnitude of the frequency response which is 2 = 3. Hence, for suciently small x0 , the p system is small signal, nite gain L2 stable with gain 2= 3. Problem 5.17 @V @V f+ Gu = @x @x @V @V 1 f+ Gu (L + W u)> (L + W u) @x @x 2 @V 1 > 1 > 1 > f + L L + h h ::: = (L + W u) (L + W u) + 2 @x 2 2 @V @V 1 > 1 h h+ Gu h> Ju Gu + u> ( 2 I J > J )u 2 @x @x 2 1 1 1 1 = (L + W u)> (L + W u) + H h> h h> Ju + 2 u> u u> J > Ju 2 2 2 2 1 1 1 = (L + W u)> (L + W u) + H + 2 u> u y > y 2 2 2
1 > 2 > Gu 1 2 u u 2 y y , from which we proceed as in the proof of Theorem 5.5.

3 0 2 x2 5 = (x2 2 x3 ) = h x3

H 0 implies

@V @x

f+

@V @x

Problem 5.18 For the system
  ẋ = f(x) + G(x)u + K(x)w,  y = h(x)
we use u = −Gᵀ(x)Vₓᵀ and the stacked output Y = H(x) = [h; GᵀVₓᵀ] to get the closed-loop system
  ẋ = F(x) + K(x)w,  Y = H(x)
We now apply the HJ inequality to compute an upper bound for the L2-gain of w ↦ Y, with the test function V:
  H = 2VₓF + (1/γ²)VₓKKᵀVₓᵀ + HᵀH
    = 2Vₓf − 2VₓGGᵀVₓᵀ + (1/γ²)VₓKKᵀVₓᵀ + hᵀh + VₓGGᵀVₓᵀ
    = 2Vₓf − VₓGGᵀVₓᵀ + (1/γ²)VₓKKᵀVₓᵀ + hᵀh
    ≤ 0  (from the given inequality for V)

Problem 5.24 First, the equilibrium x₂ = 0 of ẋ₂ = −x₂³ is globally asymptotically stable. Considering x₂ as an input for the equation ẋ₁ = −x₁ + x₂, from Lemma 5.4 we conclude that this subsystem is input-to-state stable; using Lemma 5.6, the overall (cascade) system is globally uniformly asymptotically stable.
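The cascade argument of Problem 5.24 can be illustrated with a forward-Euler simulation; the step size, horizon, and initial conditions below are arbitrary choices:

```python
# Cascade of Problem 5.24:
#   x2' = -x2^3  (GAS), driving  x1' = -x1 + x2  (ISS with respect to x2).
# The combined state should converge to the origin, illustrating Lemma 5.6.
dt, steps = 1e-3, 200_000          # horizon t = 200
x1, x2 = 5.0, -3.0
for _ in range(steps):
    x1, x2 = x1 + dt * (-x1 + x2), x2 + dt * (-(x2 ** 3))
```

After t = 200 the slow subsystem has decayed like 1/√(2t) and x₁ tracks it, so both states are near zero.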

EEE 586

HW # 6 SOLUTIONS

Problem 6.1 Define y_o = y − K₁u as the output of the transformation and u_o as the input, so that u, the input to the nonlinearity, is u = K⁻¹(u_o + y_o), with K = K₂ − K₁. From the sector condition ⟨h(t, u) − K₁u, h(t, u) − K₂u⟩ ≤ 0, we have
  ⟨y_o, y_o − Ku⟩ ≤ 0  ⇒  ⟨y_o, y_o − (u_o + y_o)⟩ ≤ 0  ⇒  ⟨y_o, u_o⟩ ≥ 0
i.e., the transformed system is passive.

Problem 6.2
  V̇ = a h(x)ẋ = h(x)[−x + (1/k)h(x) + (1/k)u] = (1/k)h(x)[h(x) − kx] + (1/k)h(x)u ≤ yu
since the sector condition gives h(x)[h(x) − kx] ≤ 0. Hence, the system is passive.

Problem 6.3 To show passivity for the system
  ẋ₁ = x₂
  ẋ₂ = −h(x₁) − ax₂ + u,  y = cx₁ + x₂
with 0 < c < a and h in the sector (0, ∞], it is convenient to rewrite the state equations in terms of the output (i.e., in the new coordinates x₁, y):
  ẋ₁ = −cx₁ + y
  ẏ = −h(x₁) + c(a − c)x₁ − (a − c)y + u,  y = cx₁ + x₂
or, letting b = a − c > 0 and renaming the states (z₁, z₂) = (x₁, y):
  ż₁ = −cz₁ + z₂
  ż₂ = −h(z₁) + cbz₁ − bz₂ + u,  y = z₂
Now, choose V(z) = ½z₂² + ∫₀^{z₁} h(s) ds + (α/2)z₁² to get
  V̇ = −bz₂² − αcz₁² + (α + cb)z₁z₂ + yu − cz₁h(z₁)
The first three terms form a (negative) quadratic when α = cb. With this choice,
  V̇ = −H(z) + yu,  where H(z) = b(z₂ − cz₁)² + cz₁h(z₁)
which is a pdf, since z₁h(z₁) > 0. Hence, the system is strictly passive.
Note: In the original coordinates, V(x) = ½[(x₂ + cx₁)² + cbx₁²] + ∫₀^{x₁} h(σ) dσ.

Problem 6.4 It follows that P is positive definite. Then,
  V̇ = k h(x₁)ẋ₁ + 2xᵀPẋ = 2p₁₂x₂² − 2p₁₂x₁h(x₁) + 2p₁₂x₁u − kax₂² + kx₂u

Hence,
  yu + u² = V̇ + u² − 2p₁₂x₂² + 2p₁₂x₁h(x₁) − 2p₁₂x₁u + kax₂²
    = V̇ + (u − p₁₂x₁)² − p₁₂²x₁² + 2p₁₂x₁h(x₁) + (ka − 2p₁₂)x₂²
    ≥ V̇ + ψ(x),  ψ(x) = (2α₁ − p₁₂)p₁₂x₁² + (ka − 2p₁₂)x₂²
where we used the sector bound x₁h(x₁) ≥ α₁x₁². Since p₁₂ < min(2α₁, ak/2), ψ(·) is positive definite. Hence the system is strictly passive.

Problem 6.8

Considering frequencies that approach infinity, it follows that SPR implies D + Dᵀ ≥ 0. Hence, nonsingularity implies that D + Dᵀ > 0, and W is square and nonsingular. Thus,
  LᵀL = LᵀW(D + Dᵀ)⁻¹WᵀL
From (6.15), LᵀL = (C − BᵀP)ᵀ(D + Dᵀ)⁻¹(C − BᵀP). Substituting in (6.14) we get
  P(A + (ε/2)I) + (A + (ε/2)I)ᵀP + (C − BᵀP)ᵀ(D + Dᵀ)⁻¹(C − BᵀP) = 0

Factoring out the quadratic term, we obtain the given Riccati equation.

Problem 6.10 a. The equations of motion are
  M(q)q̈ + C(q, q̇)q̇ + Dq̇ + g(q) = u,  y = q̇
The derivative of V = ½q̇ᵀM(q)q̇ + P(q) is given by
  V̇ = q̇ᵀM(q)q̈ + ½q̇ᵀṀ(q)q̇ + (∂P/∂q)q̇
    = q̇ᵀ[u − C(q, q̇)q̇ − Dq̇ − g(q)] + ½q̇ᵀṀq̇ + gᵀ(q)q̇
    = yᵀu − yᵀDy ≤ yᵀu
where we used the property that Ṁ − 2C is a skew-symmetric matrix. Hence, the system is passive.
b. In this case, V̇ ≤ yᵀ(−K_d y + v), from which it follows that v ↦ y is output strictly passive.
c. With v = 0, V̇ ≤ −yᵀK_d y = −q̇ᵀK_d q̇ ≤ 0. Then,
  V̇ ≡ 0 ⇒ q̇ ≡ 0 ⇒ q̈ ≡ 0 ⇒ g(q) ≡ 0 ⇒ q ≡ 0
Hence, the origin is AS. It will be GAS if q = 0 is the unique root of g(q) and P(q) is RU.

Problem 6.11
  H(jω) = ∫₀^∞ h(t)e^{−jωt} dt  ⇒  |H(jω)| = |∫₀^∞ h(t)e^{−jωt} dt| ≤ ∫₀^∞ |h(t)| dt
  ⇒  sup_{ω∈ℝ} |H(jω)| ≤ ∫₀^∞ |h(t)| dt = ‖h‖_{L1}
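The bound sup_ω |H(jω)| ≤ ‖h‖_{L1} is easy to test numerically; for the example h(t) = e^{−t} (our choice, with H(s) = 1/(s+1)) both sides equal 1:

```python
import numpy as np

# ||h||_L1 by the trapezoidal rule for h(t) = exp(-t) (exact value 1),
# and the peak of |H(jw)| = 1/sqrt(1 + w^2) (exact value 1, at w = 0).
t = np.linspace(0.0, 40.0, 400001)
h = np.exp(-t)
l1 = float(np.sum((h[1:] + h[:-1]) * np.diff(t)) / 2.0)
w = np.linspace(0.0, 50.0, 50001)
hinf = float(np.max(np.abs(1.0 / (1j * w + 1.0))))
```

Here the inequality is tight because h does not change sign; a sign-changing impulse response gives a strict gap.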

Note: Alternative proofs could use the fact that ‖h‖_{L1} is the induced L∞ gain.

Problem 6.11 (rigid body)
(a) Let V = ½(J₁ω₁² + J₂ω₂² + J₃ω₃²). Then
  V̇ = (J₂ − J₃)ω₁ω₂ω₃ + ω₁u₁ + (J₃ − J₁)ω₁ω₂ω₃ + ω₂u₂ + (J₁ − J₂)ω₁ω₂ω₃ + ω₃u₃ = ωᵀu
This shows that the system is lossless.
(b) With u = −Kω + v we have V̇ = −ωᵀKω + vᵀω, hence vᵀω ≥ V̇ + λ_min(K)‖ω‖². Hence, the map v ↦ ω is finite-gain L2 stable with gain at most 1/λ_min(K).
(c) With u = −Kω we have V̇ = −ωᵀKω; V is pdf and RU, and V̇ is ndf. Hence, the origin is GAS.

Problem 6.13

Using the hint, we expand 1 1 [L + W u]> [L + W u] = L> L + L> W u + u> W > L + u> W > W u 2 2 > 1 > 2 @V @V 1 > @V 1 > > h J+ = h J+ G I J J G + h J+ G u 2 @x @x 2 @x > @V 1 1 > > u h J+ G u> 2 I J > J u + 2 @x 2 Furthermore, 1 1 1 1 1 y > y = h> h h> Ju u> J > h u> J > Ju 2 2 2 2 2 Then 1 2 1 @V @V [L + W u]> [L + W u] + u> u y > y + H = f+ Gu; 2 2 2 @x @x 8u 2 I Rm

Next, assume that there exists a positive denite continuously dierentiable V (x) satisfying H 0. Then _ V = Integrating both sides, Z Z 2 t 1 t 2 V (x(t)) V (x0 ) ju( )j d jy ( )j2 d 2 0 2 0 Z Z 1 t 2 t 2 jy ( )j d ju( )j2 d + V (x0 ) V (x(t)) 2 0 2 0 p k(y )t kL2 k(u)t kL2 + 2V (x0 ) Finally, taking the supremum over the truncation intervals ky kL2 kukL2 + Problem 6.14 For the system H1 : x _1 x _2
> with V = 1 2 x x we get

@V 2 1 @V 1 f+ Gu = [L + W u]> [L + W u] + u> u y > y + H @x @x 2 2 2 2 2 1 2 juj jy j 2 2

p 2V (x0 )

= x2 = x1 h1 (x2 ) + u;

y = x2

Since h1 2 (0; 1], it follows that H1 output strictly passive. Furthermore, with u 0 and y 0, the only solution is x 0, so it is zero-state observable. For the system H2 : x _3 with W = R x3
0

_ = x1 x2 x1 x2 h1 (x2 )x2 + ux2 = yu yh1 (y ) V

= x3 + u;

y = h2 (x3 )

h2 (z )dz we get _ = x3 h2 (x3 ) + uh2 (x3 ) = x3 h2 (x3 ) + uy W 3

Since h2 2 (0; 1], it follows that H2 strictly passive. Since both H1 ; H2 are passive, their feedback connection is also passive. The origin is also asymptotically stable since one is strictly passive and the other is output strictly passive and zero-state observable. To show global asymptotic stability, the only thin remaining to show is that the storage function (V + W ) jz j 1 is radially unbounded. Obviously, V is RU in x1 ; x2 . W is also RU in x3 since h2 (z ) 1+ jz j2 ' jz j for large z , hence the integral diverges. Hence, 0 is GAS.

Problem 6.15
(a) Let V1 = ¼x1⁴ + ½x2². Then V̇1 = −x1⁴ − x2⁴ + y1e1. This shows that H1 is strictly passive.
Let V2 = ¼x3⁴. Then V̇2 = −x3⁴ + y2e2. Hence, H2 is strictly passive. It now follows from Thm. 6.1 that the feedback interconnection is passive.
(b) Since both systems are strictly passive with RU storage functions, it follows from Thm. 6.3 that the origin is GAS.

EEE 586

HW # 7 SOLUTIONS

Problem 7.1
4. The transfer function has poles on the jω-axis. The corresponding semicircles at infinity should be drawn for the indentations of the Nyquist path, from which the allowed disks are to be contained in the approximate sector [135°, 225°] centered at −1. (E.g., verify the semicircles at infinity in Matlab by drawing the Nyquist plot of Ga(s) = 1/(s² + as + 1) and sending a to zero.) One may now select different tradeoffs and fit a disk in this sector, from which to obtain the sector for absolute stability. For example, taking the Nyquist plot of G together with the shifted unit circle (nyquist(G,UC/A-B), UC = tf([1 -1],[1 1]), and after some iterations on A, B), we find that the disk D(1/9.7, 1/2.3) is permissible, and that corresponds to the sector [0.103, 0.435] where absolute stability is guaranteed. A different option would be to use the loop transformation theorem with gain a so that the new loop transfer function would have a minimum real part, for which the transformed system would be absolutely stable in a sector [0, k] and the original would be A.S. in [a, k+a].
5. The transfer function is Hurwitz and, therefore, Cases 2 and 3 of the Circle Criterion apply.
Case 2: From the Nyquist plot, min{Re[G(jω)]} > −0.6. So we choose β = 1/0.6 = 1.67 and conclude that the system is absolutely stable for the sector [0, 1.67].
Case 3: The Nyquist plot should be inside the disk D(α, β). One can define several possible disks. For example, starting with the center at −0.2, the radius to achieve inclusion is found as 0.9. Thus, −1/β = −1.1 → β = 0.91 and −1/α = 0.7 ⇒ α = −1.43. Hence, the system is absolutely stable for the sector [−1.43, 0.91]. Other meaningful choices, e.g., to maximize the radius of the disk, would involve an iterative solution.
7. The Nyquist plot is to the right of the vertical line at −0.341. Then the system is absolutely stable for the sector [0, 2.93]. Alternatively, the Nyquist plot lies inside the disk D(-1/1.1, 1/0.5), so the system is A.S.
in the sector [−0.91, 2].

Problem 7.10
4. For a ≤ A, ψ(a sinθ) = 0, and hence Ψ(a) = 0. For a > A,

ψ(a sinθ) = 0 for θ ≤ β or θ ≥ π − β,  ψ(a sinθ) = B for β < θ < π − β,

where β = sin⁻¹(A/a). Thus,

Ψ(a) = (4/aπ) ∫_β^{π/2} B sinθ dθ = (4B/aπ)[−cosθ]_β^{π/2} = (4B/aπ) cosβ = (4B/aπ) √(1 − A²/a²).

Problem 7.11
4. Im[G(jω)] = 0 ⇒ ω = 1. Then Re[G(j1)] = −1 and the equation 1 + Ψ(a)Re[G] = 0 has the unique solution a = (8/5)^{1/4}. Thus, we expect the system to exhibit a periodic solution with amplitude approximately a = (8/5)^{1/4} and frequency approximately 1 rad/s.
5.

G(jω) = jω/(1 − ω² − jω) = (−ω² + jω(1 − ω²)) / ((1 − ω²)² + ω²).

Im[G(jω)] = 0 ⇒ ω = 1. Then Re[G(j1)] = −1 and the equation 1 + Ψ(a)Re[G] = 0 has the unique solution a = (8/5)^{1/4} (see Problem 5.10.1). Thus, we expect the system to exhibit a periodic solution with amplitude approximately a = (8/5)^{1/4} and frequency approximately 1 rad/s.

EEE 586

HW # 8+ SOLUTIONS

Problem 8.15
a. Using V(x) = 5x1² + 2x1x2 + 2x2², we have

V̇ = (10x1 + 2x2)x2 + (2x1 + 4x2)[−x1 − x2 − (2x2 + x1)(1 − x2²)]
 = −2x1² + 4x1x2 − 2x2² − 2(x1 + 2x2)²(1 − x2²).

For |x2| ≤ 1 we have V̇ ≤ −2(x1 − x2)² ≤ 0. Thus, V̇ = 0 ⇒ x1 ≡ x2 ⇒ ẋ1 − ẋ2 ≡ 0 ⇒ 3x2(2 − x2²) ≡ 0 ⇒ x2 ≡ 0. Hence, by LaSalle's theorem, we conclude that the origin is asymptotically stable.
b. Observe first that V̇ ≤ 0 for all x ∈ S. To show that S is a region of attraction, we should show that it is positively invariant. For this, it is enough to show that trajectories at the boundary of S do not leave S. For the boundaries that belong to the Lyapunov level set V(x) = 5 this is automatic since V̇ ≤ 0. For the boundaries that belong to the absolute-value condition |x2| ≤ 1 we evaluate the derivative of x2² on the constraint:

(d/dt) x2² |_{|x2|=1} = 2x2 ẋ2 = −2x2(x1 + x2).

It follows that the right-hand side is nonpositive on the boundaries, so trajectories that start in S stay in S. Since the set is compact and V̇ ≤ 0 in S, by LaSalle's theorem all trajectories starting inside it must approach the largest invariant set in {x ∈ S | V̇ = 0}. From part (a), the largest invariant set is the origin, hence S is an estimate of the region of attraction.

Problem 8.16
Consider the Lyapunov function candidate

V(x) = ½x2² + ∫₀^{x1} (y − y³) dy = ½x2² + ½x1² − ¼x1⁴,

which is positive definite in the region |x1| < √2. Then V̇ = −x2², which is negative semidefinite, and V̇ ≡ 0 ⇒ x2 ≡ 0 ⇒ x1 − x1³ ≡ 0 ⇒ x1 ≡ 0 for |x1| < 1.
By Corollary 4.1, the origin is AS with an estimate of the region of attraction Ω_c = {x ∈ ℝ² | V(x) ≤ c} ⊂ {x ∈ ℝ² | |x1| < 1}. This is satisfied for c = 1/4, for which V is a pdf and the surfaces V(x) = r, r ≤ c, are closed.
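The algebra above can be spot-checked numerically; the sketch below assumes the dynamics ẋ1 = x2, ẋ2 = −x2 − x1 + x1³ (a hypothetical reconstruction, chosen only to be consistent with V̇ = −x2² for the chosen V):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):
    # assumed dynamics (hypothetical reconstruction consistent with Vdot = -x2^2)
    return x2, -x2 - x1 + x1**3

def V(x1, x2):
    return 0.5 * x2**2 + 0.5 * x1**2 - 0.25 * x1**4

# sample points and check that grad(V) . f equals -x2^2
x1 = rng.uniform(-1.2, 1.2, 1000)
x2 = rng.uniform(-2.0, 2.0, 1000)
dx1, dx2 = f(x1, x2)
Vdot = (x1 - x1**3) * dx1 + x2 * dx2
max_err = np.max(np.abs(Vdot + x2**2))   # should be ~0
```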

Problem 8.17
Since V̇ is negative, the vector field must point to the interior of the level set V(x) = c. Hence, directions (2), (3) are possible, while (1), (4) are impossible.

Problem 9.5
The closed loop is

ẋ = (A − ½BBᵀP)x + Dg(t, y).

Using V = xᵀPx and the growth bound ‖g(t, y)‖ ≤ k‖y‖, we get

V̇ = xᵀ[P(A − ½BBᵀP) + (A − ½BBᵀP)ᵀP]x + 2xᵀPDg(t, y)
 = −(1/ε)xᵀPDDᵀPx − xᵀCᵀCx − xᵀQx + 2xᵀPDg(t, y)
 ≤ −(1/ε)xᵀPDDᵀPx − ‖y‖² − xᵀQx + 2‖xᵀPD‖ k‖y‖
 = −(1/ε)‖z‖² − ‖y‖² − xᵀQx + 2k‖z‖‖y‖,  after setting z = DᵀPx
 ≤ −xᵀQx,  for ε < 1/k².

Hence, the origin is GAS.

Problem 10.6 (1–4)
1. H(s) = s/(s² − s + 1). Since H(s) is not Hurwitz we have the case that the nonlinearity sector [α, β] should have 0 < α < β. Using the loop transformation with α > 1, G_T(s) = G(s)/(1 + αG(s)), we need to find K such that Z_T(s) = I + KG_T(s) is strictly positive real. In this case, G_T is PR and therefore KG_T is PR for any K > 0. Hence, Z_T is SPR for any K > 0. So the nonlinearity sector would be [α, β] for any fixed α > 1 and β arbitrarily large.
Alternatively, using the small-gain version of the circle criterion (Nyquist plot contained in the circle), let the feedback gain be c. The transformed linear system is H_c(s) = s/(s² + (c−1)s + 1). For large c, the L2 gain of H_c (peak magnitude) is slightly larger than 1/(c−1), say 1/(c−1−δ), where δ > 0 signifies a small constant. Hence the sector of the transformed nonlinearity is [−c+1+δ, c−1−δ], yielding an original nonlinearity sector [1+δ, 2c−1−δ], which is identical to the previous result.
2. H(s) = 1/((s+1)(s+2)). In this case H(s) is Hurwitz and we may use nonlinearities of the type 0 = α < β. We compute min_ω Re{H(jω)} ≈ −0.0572 = −1/17.48, yielding a nonlinearity sector [0, 17.48].

Alternatively, using the small-gain version of the circle criterion, the transformed linear system is H_c(s) = 1/(s² + 3s + 2 + c). Its L2 gain is 1/(2 + c) for −2 < c ≤ 2.5. Hence the sector of the transformed nonlinearity is [−c−2+δ, c+2−δ], yielding an original nonlinearity sector [−2+δ, 2c+2−δ]. From this, the largest-radius sector for the nonlinearity is [−2+δ, 7−δ].

Larger gains c produce peak magnitudes given by 2/(3√(4c−1)). The corresponding nonlinearity sectors are [c − (3/2)√(4c−1) + δ, c + (3/2)√(4c−1) − δ] for c > 2.5. Notice that the lower limit is zero when c = 8.74, for which the sector is [0, 17.48], same as before.
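Both routes in part 2 can be reproduced numerically; the sketch below (illustrative, using the resonant-peak formula 2/(3√(4c−1)) derived here) checks min_ω Re H(jω) and the sector endpoints at c = 8.74:

```python
import numpy as np

w = np.linspace(0.01, 50.0, 500000)
H = 1.0 / ((1j * w + 1.0) * (1j * w + 2.0))

# Case 2 of the circle criterion: sector [0, -1/min Re H]
min_re = H.real.min()               # ~ -0.0572
beta = -1.0 / min_re                # ~ 17.48

# small-gain version: peak gain of Hc(s) = 1/(s^2+3s+2+c) is 2/(3*sqrt(4c-1))
# for c > 2.5; the lower sector limit c - 1.5*sqrt(4c-1) hits zero near c = 8.74
c = 8.74
radius = 1.5 * np.sqrt(4 * c - 1)   # ~ 8.74, giving a sector ~ [0, 17.48]
```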

3. H(s) = 1/(s+1)². Using the same procedure as in part 2, the nonlinearity sectors are any closed subsector of (−1, 2c+1), for −1 < c ≤ 1, or of (c − 2√c, c + 2√c) for c > 1.

Notice that in these simple problems it is possible to derive analytical expressions for the L2 gains and, hence, the sectors in terms of the loop-transformation constant. In general, we only expect a numerical evaluation of such expressions.

4. H(s) = (s² − 0.5)/((s+1)(s² + 1)). After a loop transformation the transformed system would be stable for 0 < c < 2 (Routh). Using the following MATLAB script, we can evaluate the sector bounds for different values of c.

h=tf([1 0 -.5],conv([1 1],[1 0 1]));
c=[0:.1:2]; lb=0*c; hb=0*c;
for i=1:length(c)
  mag=bode(feedback(h,c(i)));
  m=1/max(mag);
  lb(i)=c(i)-m; hb(i)=c(i)+m;
end
plot(c,lb,c,hb)

The largest radius sector occurs at c ≈ 1.2 and is (0.6880, 1.7120), and the system will be absolutely stable for nonlinearities in any closed subsector. On the other hand, to find the sector with the largest upper limit, we observe that when the peak magnitude is at DC, the sector takes the form (−2 + 2c, 2). The smallest c for this is approximately 1.7, yielding the sector (1.4, 2). (Keep in mind that these values are produced numerically and small discrepancies may occur, especially near the stability boundary.)

Problem 10.11

From equation (10.10), PA + AᵀP + εP = −LᵀL, so

P(A + (ε/2)I) + (A + (ε/2)I)ᵀP = −LᵀL.

Since D is nonsingular this implies that D + Dᵀ is nonsingular, hence WᵀW is nonsingular (from (10.12)), and W is nonsingular. Then

P(A + (ε/2)I) + (A + (ε/2)I)ᵀP = −LᵀW W⁻¹ W⁻ᵀ WᵀL
 = −LᵀW (WᵀW)⁻¹ WᵀL
 = −LᵀW (D + Dᵀ)⁻¹ WᵀL.

Using (10.11), LᵀW = Cᵀ − PB, we obtain

P(A + (ε/2)I) + (A + (ε/2)I)ᵀP + (Cᵀ − PB)(D + Dᵀ)⁻¹(C − BᵀP) = 0.

Problem 10.12
(a) The original system is given by

ẋ = Ax + BL sat(L⁻¹Fx).

Adding and subtracting BFx we get

ẋ = Ax + BL sat(L⁻¹Fx) + BFx − BFx
ẋ = (A + BF)x + B[L sat(L⁻¹Fx) − Fx].

Next, defining u1 = −[L sat(L⁻¹y) − y] with y = Fx, the system becomes

ẋ = (A + BF)x + Bu,  y = Fx,  u = r − u1,  r ≡ 0,

which has the desired form.
(b) Starting with (1), we can write the closed-loop system as an interconnection of a transfer matrix given by G(s) = [A, BL, L⁻¹F, 0] and the nonlinearity ψ(y) = sat(y) with a negative feedback. The sector for the saturation nonlinearity is [0, 1] but, since G is unstable, it is clear that 0 is not admissible and we can only expect a local stability result (absolute stability with finite domain). To show the feasibility of such a result we can make the following argument. A condition for asymptotic stability for any nonlinearity in the sector [kmin, kmax] would be "Z_T(s) is SPR," where

Z_T(s) = I + (kmax − kmin) G(s) [I + kmin G(s)]⁻¹.   (1)

Since F is a stabilizing feedback gain, G(s)[I + kmin G(s)]⁻¹ represents a stable system with finite gain for kmin sufficiently close to 1. Moreover, choosing kmax − kmin sufficiently small, i.e., kmax also close to 1, the whole second term can be made arbitrarily small and therefore Z_T should be SPR. The issue is now to choose kmax, kmin to maximize the domain of absolute stability. For this, kmax should be larger than 1 and kmin should be as small as possible.

Alternatively, using a loop transformation, we can write G_T(s) = G(s)[I + τG(s)]⁻¹, where G_T(s) is stable. G_T(s) is absolutely stable for nonlinearities in a sector [−1/γ, 1/γ], where γ is the gain of G_T(s) and depends on τ. Then the original system G(s) is absolutely stable for nonlinearities in a sector [τ − 1/γ, τ + 1/γ]. To find a suitable τ we could solve the minimization problem

min_τ  τ − 1/γ(τ)
s.t.   τ − 1/γ(τ) < 1
       τ + 1/γ(τ) > 1

It should be kept in mind that the solutions of these optimization problems may not be unique and/or the minimizers may not be admissible. Still, a feasible solution can be selected within a prescribed tolerance from the minimum (infimum). Then, using Theorem 10.2 and Problem 10.11, we can compute the matrix P that defines the associated Lyapunov function and describe the region of attraction as the maximal level set contained in the region where the Lyapunov derivative is negative definite.
(c) Using the matrices of Problem 5.2.d and the procedure described above, we find that τ = 0.8037 and γ = 5.0929, corresponding to the sector [0.6074, 1]. It now follows that the Lyapunov derivative is negative in the region {x : ‖Fx‖∞ ≤ 1/0.6074² = 2.7105}. The matrix P, defining the Lyapunov function, is computed by solving the Riccati equation of Problem 10.11 with

A = [1 0.6074; 0.6074 2.054],  B = [1.414 0; 0 1.414],  C = [0 0.2776; 0.2776 0.5275],  D = I,

P = [1.0183E−2 1.1187E−2; 1.1187E−2 3.2676E−2].

The maximal level set that is contained in the constraint set {x : ‖Fx‖∞ ≤ 2.7105} (a polytope) can now be computed along the lines of Problem 5.2 (HW 5). Here we adopt a computational approach, employing the contour function of Matlab to draw the constraint set and level sets for various constants. An example of such a procedure is:

p=[1.0183 1.1187;1.1187 3.2676]*1e-2;
f=[0 1;1 -1.9];
[X,Y] = meshgrid(-4:.1:4, -4:.1:4);
V=X.^2*p(1,1)+X.*Y*2*p(1,2)+Y.^2*p(2,2);
y1=f(1,1)*X+f(1,2)*Y; y1=abs(y1);
y2=f(2,1)*X+f(2,2)*Y; y2=abs(y2);
clf, hold off
contour(X,Y,y1,[2.71 2.71]), hold on
contour(X,Y,y2,[2.71,2.71])
contour(X,Y,V,[1 1]*0.01)

Repeating the last command with different multipliers we find the estimate for the region of attraction as {x ∈ ℝ² | V(x) ≤ 0.013}.

Problem 10.16
Let N = {τ ∈ ℝᵖ : |τk| ≤ εk, k = 1, …, p} be a convex parallelepiped. Any element τ ∈ N can be written as a convex combination of the 2ᵖ vertices:

τ = Σ_{k=1}^{2ᵖ} αk τ^(k),  Σ_{k=1}^{2ᵖ} αk = 1,  0 ≤ αk ≤ 1, ∀k = 1, …, 2ᵖ,

with τ^(k) being the k-th vertex.¹ Then we can write every element aij(τ) as

aij(τ) = Σ_{k=1}^{2ᵖ} αk aij(τ^(k)).

With this, we can write the matrix A(τ) as

A(τ) = Σ_{k=1}^{2ᵖ} αk A_k,

where A_k ≜ A(τ^(k)) is the value of A for the k-th vertex.
Next, let V(x) = xᵀPx. Then

V̇(x) = xᵀ[Aᵀ(τ)P + PA(τ)]x = xᵀ[(Σ_{k=1}^{2ᵖ} αk A_k)ᵀ P + P(Σ_{k=1}^{2ᵖ} αk A_k)]x.

Using the assumption A_kᵀP + PA_k ≤ −I, we multiply the k-th inequality by αk and perform a summation to obtain

Σ_{k=1}^{2ᵖ} αk A_kᵀ P + P Σ_{k=1}^{2ᵖ} αk A_k ≤ −Σ_{k=1}^{2ᵖ} αk I = −I.

It now follows that

V̇(x) ≤ −xᵀx,

which is negative definite. Therefore A(τ) is Hurwitz for any τ ∈ N and, more important, all systems admit the same Lyapunov function.
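The vertex argument can be illustrated with a small numerical sketch; the matrices A0 and E below are hypothetical data, chosen only so that the vertex inequalities A_kᵀP + PA_k ≤ −I hold for a common P:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical nominal matrix and vertex perturbation (illustrative data)
A0 = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = solve_continuous_lyapunov(A0.T, -2.0 * np.eye(2))   # A0'P + P A0 = -2I
E = np.array([[0.0, 0.0], [0.3, 0.0]])
vertices = [A0 + E, A0 - E]

def lyap_ok(A):
    # check A'P + PA <= -I, i.e. all eigenvalues of A'P + PA + I are <= 0
    M = A.T @ P + P @ A + np.eye(2)
    return np.linalg.eigvalsh(M).max() <= 1e-9

all_vertices_ok = all(lyap_ok(A) for A in vertices)

# every convex combination inherits the same Lyapunov matrix P
rng = np.random.default_rng(1)
combos_ok = all(
    lyap_ok(a * vertices[0] + (1 - a) * vertices[1]) for a in rng.uniform(0, 1, 50)
)
```

Since A ↦ AᵀP + PA is affine, checking the inequality at the vertices suffices, exactly as in the proof above.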

Problem 10.33 (1,3)
For odd, time-invariant, memoryless nonlinearities,

Ψ(a) = (2/aπ) ∫₀^π ψ(a sinθ) sinθ dθ.

1. ψ(y) = y⁵:

Ψ(a) = (2/aπ) ∫₀^π a⁵ sin⁶θ dθ = (2a⁴/π) · (5·3·1)/(6·4·2) · π = 5a⁴/8,

using the standard reduction ∫₀^π sin⁶θ dθ = 5π/16.

¹ It is straightforward to show that any summation of the above form results in an element of N. On the other hand, any element of N can be written as a linear combination of the vertices with nonnegative coefficients whose sum is 1. To show this, consider an element τ ∈ N and a vertex τ^(1). Let τ₁ be the intersection of the boundary of N and the line connecting τ and τ^(1). Then τ is a convex combination of τ^(1) and τ₁, i.e., τ = α₁τ^(1) + α₁₁τ₁, with α₁ + α₁₁ = 1 and α₁, α₁₁ ≥ 0. Next, repeat this expansion for τ₁, starting with the next vertex τ^(2), etc., until all the vertices are used. While the properties of the expansion coefficients follow easily from the fact that each step is a convex combination, a complete proof can be obtained by an induction argument. Notice that such expansions are not unique.

3. From the graph of the function we can write

ψ(y) = A + ky for y ≥ 0,  ψ(y) = −A + ky for y < 0.

Then

Ψ(a) = (2/aπ) ∫₀^π (A + ka sinθ) sinθ dθ = (2/aπ)[−A cosθ + ka(θ/2 − ¼ sin 2θ)]₀^π
 = (2/aπ)(2A + kaπ/2)
 = 4A/(πa) + k.
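Both closed forms can be verified by direct quadrature of the defining integral (an illustrative sketch; the amplitudes and parameters below are arbitrary test values):

```python
import numpy as np

def describing_function(psi, a, n=100000):
    # odd memoryless nonlinearity: Psi(a) = (2/(pi a)) int_0^pi psi(a sin t) sin t dt
    t = (np.arange(n) + 0.5) * np.pi / n          # midpoint rule
    return 2.0 / (np.pi * a) * np.sum(psi(a * np.sin(t)) * np.sin(t)) * (np.pi / n)

a = 1.7
quintic = describing_function(lambda y: y**5, a)            # expect 5 a^4 / 8
A, k = 0.8, 2.0
offset_linear = describing_function(
    lambda y: np.sign(y) * A + k * y, a)                    # expect 4A/(pi a) + k

err1 = abs(quintic - 5 * a**4 / 8)
err2 = abs(offset_linear - (4 * A / (np.pi * a) + k))
```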

Problem 10.39
We need to find Ψ(a) for ψ(y) = sat(y). We distinguish two cases, a ≤ 1 and a > 1.
Case a ≤ 1:

Ψ(a) = (2/aπ) ∫₀^π a sin²θ dθ = (2/π)[θ/2 − ¼ sin 2θ]₀^π = 1.

Case a > 1: with β = arcsin(1/a),

Ψ(a) = (2/aπ) ∫₀^π sat(a sinθ) sinθ dθ = (4/aπ) ∫₀^{π/2} sat(a sinθ) sinθ dθ
 = (4/aπ) [∫₀^β a sin²θ dθ + ∫_β^{π/2} sinθ dθ]
 = (4/aπ) [a(θ/2 − ¼ sin 2θ)|₀^β + (−cosθ)|_β^{π/2}]
 = (4/aπ) [aβ/2 − (a/4) sin 2β + cosβ]
 = (2/π) [arcsin(1/a) + (1/a)√(1 − 1/a²)]
 = 1 − (2/π) [arccos(1/a) − (1/a)√(1 − 1/a²)],

where we used arccos x = π/2 − arcsin x to obtain the last expression. Notice that 0 < Ψ(a) < 1 for 1 < a < ∞.

Solving equations (10.75–10.76) with G(s) = 2bs/(s² − bs + 1), we find that ω = 1 and Ψ(a) = 1/2 ⇒ a ≈ 2.4575. This suggests the possibility of a periodic solution with frequency ω = 1 and amplitude a ≈ 2.4575.
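The amplitude can be recovered numerically from the saturation describing function derived above (a sketch added here; the root agrees with the quoted a ≈ 2.4575 to about two decimal places):

```python
import numpy as np
from scipy.optimize import brentq

def sat_df(a):
    # describing function of sat(y); equals 1 for a <= 1
    if a <= 1.0:
        return 1.0
    return (2.0 / np.pi) * (np.arcsin(1.0 / a)
                            + (1.0 / a) * np.sqrt(1.0 - 1.0 / a**2))

# harmonic balance at the crossing frequency w = 1: Psi(a) = 1/2
a_star = brentq(lambda a: sat_df(a) - 0.5, 1.0 + 1e-9, 50.0)
```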

For a more detailed investigation and to establish the existence of a periodic solution, we may follow the procedure described in Example 10.24. First, we find the frequency range for which the inverse Nyquist plot is contained inside the "critical circle," given by (−b + √(b²+4))/2 < ω < (b + √(b²+4))/2. We then compute the function ρ(ω) = (1 − (3ω)²)/(6bω) and find the band of uncertainty around the inverse Nyquist plot. The size of b plays a role when we calculate ρ(ω): if b is small enough (ρ(ω) ≠ 0) it is possible for the band of uncertainty to become large enough that it is not contained inside the "critical circle." For b = 0.1, it turns out that the tangent circles to the real axis are centered around ω1 = 0.9982 and ω2 = 1.0032 and have radii σ1 = 0.0194 and σ2 = 0.0195. These are contained inside the "critical circle" and by Theorem 10.9 we expect to have a periodic solution with frequency around ω = 1 and magnitude around 2.4575.

Problem 12.2
1. Linearization at the origin yields

A = [1 1; 1 0],  B = [0; 1],  C = (0 1).

The state feedback u = −Kx = −[7 4]x assigns the closed-loop eigenvalues at −1, −2. Using the observer-based controller

x̂˙ = (A − BK − HC)x̂ + Hy,  u = −Kx̂,

where H = [43; 12], which assigns the observer eigenvalues at −5, −6, stabilizes the origin, at least locally.
(ROA computation can be performed as usual, by finding the largest level set V(x) = c inside V̇(x) ≤ 0. The latter is guaranteed to hold in a neighborhood of the origin since the linearization of the feedback system is exponentially stable. One can now try various initial conditions to obtain insight on the conservatism of this estimate. Also, by the finite-gain property of the exponentially stable linearization, the system will have bounded response to small commands.)
2. Linearization at the origin yields

A = [1 1 0; −1 0 1; 0 0 0],  B = [0; 0; 1],  C = (0 1 0).

The state feedback u = −Kx = −[16 17 7]x assigns the closed-loop eigenvalues at −1, −2, −3. Using the observer-based controller

x̂˙ = (A − BK − HC)x̂ + Hy,  u = −Kx̂,

where H = [−335; 19; −210], which assigns the observer eigenvalues at −5, −6, −7, stabilizes the origin, at least locally.
3. Linearization at the origin yields

A = [−1 1 0; 1 −1 0; 0 0 −2],  B = [0; 1; 0],  C = (1 0 0).

The state feedback u = −Kx = −[1 2 0]x assigns the closed-loop eigenvalues at −1, −2, −3. Using the observer-based controller

x̂˙ = (A − BK − HC)x̂ + Hy,  u = −Kx̂,

where H = [9; 21; 1], which assigns the observer eigenvalues at −5, −6, −2 and, hence, stabilizes the origin, at least locally.
(ROA computation can be performed as usual, by finding the largest level set V(x) = c inside V̇(x) ≤ 0. The latter is guaranteed to hold in a neighborhood of the origin since the linearization of the feedback system is exponentially stable. One can now try various initial conditions to obtain insight on the conservatism of this estimate. Also, by the finite-gain property of the exponentially stable linearization, the system will have bounded response to small commands.)
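The eigenvalue assignments can be verified directly; a quick sketch for parts 1 and 2, using the matrices as reconstructed in this solution (sign conventions chosen to match the stated eigenvalue assignments):

```python
import numpy as np

def closed_loop_eigs(A, B, C, K, H):
    # state-feedback and observer spectra for u = -K x and observer gain H
    return (np.sort(np.linalg.eigvals(A - B @ K).real),
            np.sort(np.linalg.eigvals(A - H @ C).real))

# Part 1
A1 = np.array([[1.0, 1.0], [1.0, 0.0]])
B1 = np.array([[0.0], [1.0]])
C1 = np.array([[0.0, 1.0]])
K1 = np.array([[7.0, 4.0]])
H1 = np.array([[43.0], [12.0]])
fb1, ob1 = closed_loop_eigs(A1, B1, C1, K1, H1)   # {-1,-2} and {-5,-6}

# Part 2
A2 = np.array([[1.0, 1.0, 0.0], [-1.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
C2 = np.array([[0.0, 1.0, 0.0]])
K2 = np.array([[16.0, 17.0, 7.0]])
H2 = np.array([[-335.0], [19.0], [-210.0]])
fb2, ob2 = closed_loop_eigs(A2, B2, C2, K2, H2)   # {-1,-2,-3} and {-5,-6,-7}
```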
