
Solution Manual
for
Adaptive Control, Second Edition

Karl Johan Åström
Björn Wittenmark

Preface

This Solution Manual contains solutions to selected problems in the second
edition of Adaptive Control, published by Addison-Wesley 1995,
ISBN 0-201-55866-1.
PROBLEM SOLUTIONS

SOLUTIONS TO CHAPTER 1

1.5 Linearization of the valve shows that

    Δv = 4v0³ Δu

The loop transfer function is then

    G0(s) GPI(s) 4v0³

where GPI is the transfer function of a PI controller, i.e.

    GPI(s) = K (1 + 1/(s Ti))

The characteristic equation for the closed-loop system is

    s Ti (s + 1)³ + K · 4v0³ (s Ti + 1) = 0

With K = 0.15 and Ti = 1 we get

    (s + 1) (s(s + 1)² + 0.6 v0³) = 0
    (s + 1) (s³ + 2s² + s + 0.6 v0³) = 0

The root locus of this equation with respect to v0 is sketched in Fig. 1.
According to the Routh-Hurwitz criterion the critical case is

    0.6 v0³ = 2   ⇒   v0 = (10/3)^(1/3) = 1.49

Since the plant G0 has unit static gain and the controller has integral
action, the steady-state valve output v0 equals the set point yr. The
closed-loop system is therefore stable for yr = uc = 0.3 and 1.1 but unstable for
yr = uc = 5.1. Compare with Fig. 1.9.
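The Routh-Hurwitz boundary and the two stability claims can be checked numerically; a minimal sketch using numpy, with the polynomial coefficients derived above:

```python
import numpy as np

def cubic_roots(v0):
    # Cubic factor of (s+1)(s^3 + 2 s^2 + s + 0.6 v0^3) = 0;
    # the factor (s+1) is always stable, so the cubic decides stability.
    return np.roots([1.0, 2.0, 1.0, 0.6 * v0 ** 3])

v_crit = (10.0 / 3.0) ** (1.0 / 3.0)          # Routh-Hurwitz: 0.6 v0^3 = 2
stable_11 = max(r.real for r in cubic_roots(1.1)) < 0    # set point 1.1
stable_51 = max(r.real for r in cubic_roots(5.1)) < 0    # set point 5.1
```

At the critical gain the cubic factors as (s + 2)(s² + 1), so a conjugate pair sits exactly on the imaginary axis.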

Figure 1. Root locus in Problem 1.5.

1.6 Tune the controller using the Ziegler-Nichols closed-loop method. The
frequency ωu where the process has 180° phase lag is first determined.
The controller parameters are then given by Table 8.2 on page 382, where

    Ku = 1 / |G0(iωu)|

With

    G0(s) = e^(−s/q) / (1 + s/q)

we have

    arg G0(iω) = −ω/q − arctan(ω/q) = −π

    q     ωu     |G0(iωu)|    K    Ti
    0.5   1.0    0.45         1    5.24
    1     2.0    0.45         1    2.62
    2     4.1    0.45         1    1.3

A simulation of the system obtained when the controller is tuned for the
smallest flow q = 0.5 is shown in Fig. 2. The Ziegler-Nichols method is not
the best tuning method in this case. In Fig. 3 we show results for the
Figure 2. Simulation in Problem 1.6. Process output and control signal
are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The
controller is designed for q = 0.5.

controller designed for q = 1, and in Fig. 4 when the controller is designed
for q = 2.
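The table entries can be reproduced by solving the phase condition by bisection; a sketch, where x = ω/q is the normalized crossover frequency obtained from x + arctan x = π:

```python
import math

def ultimate(q):
    # Solve arg G0(i w) = -w/q - atan(w/q) = -pi for w, with x = w/q.
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid + math.atan(mid) < math.pi:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return x * q, 1.0 / math.sqrt(1.0 + x * x)   # (w_u, |G0(i w_u)|)

w05, g05 = ultimate(0.5)
w2, g2 = ultimate(2.0)
```

The crossover scales linearly with q, which is why the Ti values in the table halve as q doubles.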
1.7 Introducing the feedback

    u2 = −k2 y2

the system becomes

    dx/dt = [ −1      0     0     ]       [ 1 ]
            [ −2k2   −3    −2k2   ]  x  + [ 0 ] u1
            [ −k2     0    −1−k2  ]       [ 0 ]

    y1 = [ 1  1  0 ] x

The transfer function from u1 to y1 is

    G(s) = [ 1  1  0 ] [ s+1    0      0      ]⁻¹ [ 1 ]
                       [ 2k2    s+3    2k2    ]   [ 0 ]
                       [ k2     0      s+1+k2 ]   [ 0 ]

         = (s² + (4 − k2)s + 3 + k2) / ((s + 1)(s + 3)(s + 1 + k2))
Figure 3. Simulation in Problem 1.6. Process output and control signal
are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The
controller is designed for q = 1.

The static gain is

    G(0) = (3 + k2) / (3(1 + k2))
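The transfer function can be cross-checked numerically from the state-space form; a sketch assuming the closed-loop matrices as reconstructed above:

```python
import numpy as np

def G(s, k2):
    # Closed-loop system after the feedback u2 = -k2*y2
    A = np.array([[-1.0,       0.0,  0.0],
                  [-2.0 * k2, -3.0, -2.0 * k2],
                  [-k2,        0.0, -1.0 - k2]])
    B = np.array([1.0, 0.0, 0.0])
    C = np.array([1.0, 1.0, 0.0])
    return C @ np.linalg.solve(s * np.eye(3) - A, B)

k2 = 2.0
num = lambda s: s ** 2 + (4 - k2) * s + 3 + k2
den = lambda s: (s + 1) * (s + 3) * (s + 1 + k2)
```

Evaluating both forms at a few frequencies confirms the factored expression, including the static gain (3 + k2)/(3(1 + k2)).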
Figure 4. Simulation in Problem 1.6. Process output and control signal
are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The
controller is designed for q = 2.

SOLUTIONS TO CHAPTER 2

2.1 The function V can be written as

    V(x1, ..., xn) = Σ_{i,j=1}^{n} x_i x_j (a_ij + a_ji)/2 + Σ_{i=1}^{n} b_i x_i + c

Taking the derivative with respect to x_i we get

    ∂V/∂x_i = Σ_{j=1}^{n} (a_ij + a_ji) x_j + b_i

In vector notation this can be written as

    grad_x V(x) = (A + Aᵀ)x + b
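The gradient formula is easy to verify against central finite differences; a sketch with random A, b, c (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))          # not necessarily symmetric
b = rng.normal(size=n)
c = 1.7
V = lambda x: x @ A @ x + b @ x + c
grad = lambda x: (A + A.T) @ x + b   # formula derived above

x0 = rng.normal(size=n)
h = 1e-6
num_grad = np.array([(V(x0 + h * np.eye(n)[i]) - V(x0 - h * np.eye(n)[i])) / (2 * h)
                     for i in range(n)])
```

Central differences are exact for quadratics up to rounding, so the two gradients agree to high precision.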

2.2 The model is

    y_t = φ_tᵀθ + e_t = [ u_t   u_{t-1} ] [ b0 ]  + e_t
                                          [ b1 ]

The least squares estimate is given as the solution of the normal equation

    θ̂ = (ΦᵀΦ)⁻¹ΦᵀY = [ Σu_t²           Σu_t u_{t-1} ]⁻¹ [ Σu_t y_t     ]
                      [ Σu_t u_{t-1}    Σu_{t-1}²    ]   [ Σu_{t-1} y_t ]
(a) The input is a unit step

    u_t = 1 for t ≥ 0,  u_t = 0 otherwise

Evaluating the sums we get

    θ̂ = [ N      N−1 ]⁻¹ [ Σ_{t=1}^{N} y_t ]   [  1    −1       ] [ Σ_{t=1}^{N} y_t ]
         [ N−1    N−1 ]   [ Σ_{t=2}^{N} y_t ] = [ −1    N/(N−1) ] [ Σ_{t=2}^{N} y_t ]

      = [ y_1 ]
        [ (1/(N−1)) Σ_{t=2}^{N} y_t − y_1 ]

The estimation error is

    θ̂ − θ = [ e_1 ]
             [ (1/(N−1)) Σ_{t=2}^{N} e_t − e_1 ]

Hence

    E(θ̂ − θ)(θ̂ − θ)ᵀ = (ΦᵀΦ)⁻¹ · 1 = [  1    −1       ]  →  [  1   −1 ]
                                        [ −1    N/(N−1) ]     [ −1    1 ]

when N → ∞. Notice that the variances of the estimates do not go to
zero as N → ∞. Consider, however, the estimate of b0 + b1:

    E(b̂0 + b̂1 − b0 − b1)² = [ 1  1 ] [  1    −1       ] [ 1 ]  =  1/(N − 1)
                                      [ −1    N/(N−1) ] [ 1 ]

With a step input it is thus possible to determine the combination
b0 + b1 consistently. The individual values of b0 and b1 can, however,
not be determined consistently.
(b) The input u is white noise with Eu² = 1, and u is independent of e:

    Eu_t² = 1    Eu_t u_{t-1} = 0

    cov(θ̂) ≈ (E ΦᵀΦ)⁻¹ = [ 1/N    0       ]
                          [ 0      1/(N−1) ]

In this case it is thus possible to determine both parameters consistently.
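The step-input covariance expressions can be reproduced exactly; a sketch using the convention that the first regression row sees u_0 = 0 (matching the sums above):

```python
import numpy as np

N = 50
u = np.ones(N)                               # unit step
u_prev = np.concatenate(([0.0], u[:-1]))     # lagged input, first entry 0
Phi = np.column_stack([u, u_prev])
C = np.linalg.inv(Phi.T @ Phi)               # covariance of estimates for var(e) = 1

ones = np.array([1.0, 1.0])
var_sum = ones @ C @ ones                    # variance of b0_hat + b1_hat
```

The individual variances stay near 1 however large N is, while the variance of the sum is exactly 1/(N − 1).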
2.3 Data generating process:

    y(t) = b0 u(t) + b1 u(t−1) + e(t) = φᵀ(t)θ0 + ē(t)

where

    φᵀ(t) = u(t),    θ0 = b0

and

    ē(t) = b1 u(t−1) + e(t)

Model:

    ŷ(t) = b̂ u(t)

or

    y(t) = b̂ u(t) + ε(t) = φᵀ(t)θ̂ + ε(t)

where ε(t) = y(t) − ŷ(t). The least squares estimate is given by

    ΦᵀΦ (θ̂ − θ0) = Φᵀ E_d,    E_d = [ ē(1)  ...  ē(N) ]ᵀ

Furthermore

    (1/N) ΦᵀΦ = (1/N) Σ u(t)²  →  Eu²   as N → ∞

    (1/N) Φᵀ E_d = (1/N) Σ u(t) ē(t) = (1/N) Σ u(t) (b1 u(t−1) + e(t))
                 →  b1 E(u(t)u(t−1)) + E(u(t)e(t))   as N → ∞

(a)

    u(t) = 1 for t ≥ 1,  u(t) = 0 for t < 1

    E(u²) = 1    E(u(t)u(t−1)) = 1    E(u(t)e(t)) = 0

Hence

    b̂ = θ̂  →  θ0 + b1 = b0 + b1   as N → ∞

i.e. b̂ converges to the stationary gain.

(b)

    u(t) ∈ N(0, σ)  ⇒  Eu² = σ²    E(u(t)u(t−1)) = 0    E(u(t)e(t)) = 0

Hence

    b̂  →  b0   as N → ∞
2.6 The model is

    y_t = φ_tᵀθ + ε_t = [ −y_{t-1}   u_{t-1} ] [ a ]  + e_t + c e_{t-1}
                                               [ b ]

with φ_tᵀ = [ −y_{t-1}  u_{t-1} ], θ = [ a  b ]ᵀ, and ε_t = e_t + c e_{t-1}.
The least squares estimate is given by the solution to the normal equation
(2.5). The estimation error is

    θ̂ − θ = (ΦᵀΦ)⁻¹ Φᵀε
           = [ Σy_{t-1}²            −Σy_{t-1}u_{t-1} ]⁻¹ [ −Σy_{t-1}e_t − c Σy_{t-1}e_{t-1} ]
             [ −Σy_{t-1}u_{t-1}     Σu_{t-1}²        ]   [ Σu_{t-1}e_t + c Σu_{t-1}e_{t-1}  ]

Notice that Φᵀ and ε are not independent: u_t and e_t are independent, but y_t
depends on e_t, e_{t-1}, e_{t-2}, ... and on u_{t-1}, u_{t-2}, ....
Taking mean values we get

    E(θ̂ − θ) ≈ E(ΦᵀΦ)⁻¹ E(Φᵀε)

To evaluate this expression we calculate

    E [ Σy_{t-1}²            −Σy_{t-1}u_{t-1} ]  =  N [ Ey_t²    0     ]
      [ −Σy_{t-1}u_{t-1}     Σu_{t-1}²        ]       [ 0        Eu_t² ]

and

    E [ −Σy_{t-1}e_t − c Σy_{t-1}e_{t-1} ]  =  [ −cN Ey_{t-1}e_{t-1} ]
      [ Σu_{t-1}e_t + c Σu_{t-1}e_{t-1}  ]     [ 0                   ]

    Ey_{t-1}e_{t-1} = E(−a y_{t-2} + b u_{t-2} + e_{t-1} + c e_{t-2}) e_{t-1} = σ²

Since

    y_t = (b/(q + a)) u_t + ((q + c)/(q + a)) e_t

    Ey_t² = (b² + (1 − 2ac + c²)σ²) / (1 − a²)

    Eu_t² = 1    Ee_t² = σ²

we get

    E(â − a) = − c(1 − a²)σ² / (b² + (1 − 2ac + c²)σ²)

    E(b̂ − b) = 0
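The bias formula can be checked by Monte Carlo simulation; a sketch with illustrative values a = 0.5, b = 1, c = 0.5, σ = 1 (not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, sigma = 0.5, 1.0, 0.5, 1.0
N, runs = 2000, 200
errs = []
for _ in range(runs):
    u = rng.normal(size=N)                     # white input, unit variance
    e = sigma * rng.normal(size=N + 1)
    y = np.zeros(N + 1)
    for t in range(1, N + 1):
        y[t] = -a * y[t - 1] + b * u[t - 1] + e[t] + c * e[t - 1]
    Phi = np.column_stack([-y[:-1], u])        # phi_t = (-y_{t-1}, u_{t-1})
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    errs.append(theta - np.array([a, b]))
bias = np.mean(errs, axis=0)

# Asymptotic bias of a_hat predicted by the formula above
pred = -c * (1 - a ** 2) * sigma ** 2 / (b ** 2 + (1 - 2 * a * c + c ** 2) * sigma ** 2)
```

The empirical bias of â lands close to the predicted −0.214 while b̂ remains essentially unbiased.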

2.8 The model is

    y(t) = a + bt + e(t) = φᵀθ + e(t)

    φ = [ 1 ]    θ = [ a ]
        [ t ]        [ b ]

According to Theorem 2.1 the solution is given by equation (2.6), i.e.

    θ̂ = (ΦᵀΦ)⁻¹ΦᵀY

where

    Φᵀ = [ 1  1  1  ...  1 ]    Y = [ y(1)  y(2)  ...  y(N) ]ᵀ
         [ 1  2  3  ...  N ]

Hence

    θ̂ = [ N              Σ_{t=1}^{N} t  ]⁻¹ [ Σ_{t=1}^{N} y(t)   ]
         [ Σ_{t=1}^{N} t  Σ_{t=1}^{N} t² ]   [ Σ_{t=1}^{N} t y(t) ]

       = [ 2 ((2N + 1)s0 − 3s1) / (N(N − 1)) ]
         [ 6 (−(N + 1)s0 + 2s1) / (N(N + 1)(N − 1)) ]

where we have made use of

    Σ_{t=1}^{N} t = N(N + 1)/2    Σ_{t=1}^{N} t² = N(N + 1)(2N + 1)/6

and introduced

    s0 = Σ_{t=1}^{N} y(t)    s1 = Σ_{t=1}^{N} t y(t)
The covariance of the estimate is given by

    cov(θ̂) = σ²(ΦᵀΦ)⁻¹ = (12σ² / (N(N + 1)(N − 1))) [ (N + 1)(2N + 1)/6   −(N + 1)/2 ]
                                                      [ −(N + 1)/2           1         ]

Notice that the variance of b̂ decreases as N⁻³ for large N but the
variance of â decreases as N⁻¹. The reason for this is that the regressor
associated with a is 1 but the regressor associated with b is t. Notice also that
there are numerically better methods to solve for θ̂!
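The covariance expression and the N⁻¹ versus N⁻³ scaling can be confirmed directly; a sketch that compares the numerical inverse with the closed form:

```python
import numpy as np

def cov(N, sigma2=1.0):
    t = np.arange(1, N + 1, dtype=float)
    Phi = np.column_stack([np.ones(N), t])
    return sigma2 * np.linalg.inv(Phi.T @ Phi)

C1, C2 = cov(200), cov(400)
ratio_a = C1[0, 0] / C2[0, 0]   # var(a_hat) ~ N^-1, so doubling N halves it
ratio_b = C1[1, 1] / C2[1, 1]   # var(b_hat) ~ N^-3, so doubling N divides by 8
```

The (2,2) entry also matches the closed form 12σ²/(N(N² − 1)) exactly.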

2.17
(a) The following derivation gives a formula for the asymptotic LS estimate:

    b̂ = (ΦᵀΦ)⁻¹ΦᵀY = ( Σ_{k=1}^{N} φ(k−1)² )⁻¹ Σ_{k=1}^{N} φ(k−1) ȳ(k)

       = ( (1/N) Σ_{k=1}^{N} u(k−1)² )⁻¹ ( (1/N) Σ_{k=1}^{N} u(k−1) ȳ(k) )

       →  E(u(k−1)²)⁻¹ E(u(k−1)ȳ(k)),   as N → ∞

The equations for the closed-loop system are

    u(k) = K(uc(k) − y(k))
    ȳ(k) = y(k) + a y(k−1) = b u(k−1)

The signals u(k) and y(k) are stationary signals. This follows since
the controller gain is chosen so that the closed-loop system is stable.
It then follows that E(u(k−1)²) = E(u(k)²) and E(u(k−1)ȳ(k)) =
E(b u(k−1)²) = b E(u(k)²) exist, and the asymptotic LS estimate
becomes b̂ = b, i.e. we have an unbiased estimate.
Figure 5. The system redrawn.

(b) Similarly to (a), we get

    ( (1/N) Σ_{k=1}^{N} u(k−1)² )⁻¹ ( (1/N) Σ_{k=1}^{N} u(k−1) ȳ(k) )
    →  ( u²(k−1) )₀⁻¹ ( u(k−1) ȳ(k) )₀,   as N → ∞

where (·)₀ denotes the stationary value of the argument. We have

    ( u²(k−1) )₀ = ( (u(k))₀ )²
    ( u(k−1) ȳ(k) )₀ = (u(k))₀ b ( (u(k))₀ + d₀ )

    (u(k))₀ = H_ud(1) d₀ = − (Kb / (1 + a + Kb)) d₀

and the asymptotic LS estimate becomes

    b̂ = ( (u²(k−1))₀ )⁻¹ ( u(k−1) ȳ(k) )₀ = b ( 1 + d₀/(u(k))₀ )

      = b ( 1 − (1 + a + Kb)/(Kb) ) = − (1 + a)/K

How do we interpret this result? The system may be redrawn as in
Figure 5. Since uc = 0, we have u = −(Kq/(q + a)) ȳ, and we can regard
Kq/(q + a) as the controller for the system in Figure 5. It is then obvious
that we have estimated the negative inverse of the static controller gain.
(c) Introduction of high-pass regressor filters as in Figure 6 eliminates, or
at least reduces, the influence of the disturbance d on the estimate
of b. One choice of regressor filter is H_f(q⁻¹) = 1 − q⁻¹, i.e. a
differentiator. Another possibility is to introduce a constant in
the regressor and then estimate both b and bd. The regression model
is in this case

    ȳ(t) = [ u(t−1)  1 ] [ b  ]  = φ(t)ᵀθ
                         [ bd ]

Figure 6. Introduction of regressor filters.
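The biased estimate −(1 + a)/K in (b) can be reproduced by a deterministic closed-loop simulation; a sketch with illustrative values a = 0.5, b = 1, K = 0.4 and constant disturbance d = 1:

```python
import numpy as np

a, b, K, d = 0.5, 1.0, 0.4, 1.0
N = 500
y = np.zeros(N + 1)
u = np.zeros(N + 1)
for k in range(N):
    u[k] = -K * y[k]                        # u_c = 0
    y[k + 1] = -a * y[k] + b * (u[k] + d)   # (q + a) y = b (u + d)
u[N] = -K * y[N]

ybar = y[1:] + a * y[:-1]                   # ybar(k+1) = b u(k) + b d
# Least squares of ybar on the lagged input, transient discarded
b_hat = np.sum(u[100:N] * ybar[100:]) / np.sum(u[100:N] ** 2)
```

After the transient dies out (closed-loop pole at −(a + bK) = −0.9), the regression picks up the static ratio ȳ₀/u₀ = −(1 + a)/K = −3.75 instead of the true gain b = 1.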

2.18 The equations for recursive least squares are, for data generated by
y(t) = φᵀ(t−1)θ0,

    ε(t) = y(t) − φᵀ(t−1)θ̂(t−1)
    θ̂(t) = θ̂(t−1) + K(t)ε(t)
    K(t) = P(t)φ(t−1) = P(t−1)φ(t−1) / (λ + φᵀ(t−1)P(t−1)φ(t−1))
    P(t) = (I − K(t)φᵀ(t−1)) P(t−1) / λ

Since the quantity P(t)φ(t−1) appears in many places it is convenient
to introduce it as an auxiliary variable w = Pφ. The following computer
code is then obtained:

Input u, y : real
Parameter lambda : real
State phi, theta : vector
      P : symmetric matrix
Local variables w : vector, den : real

"Compute residual
e = y - phi^T*theta
"Update estimate
w = P*phi
den = w^T*phi + lambda
theta = theta + w*e/den
"Update covariance matrix
P = (P - w*w^T/den)/lambda
"Update regression vector
phi = shift(phi)
phi(1) = -y
phi(n+1) = u
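The pseudocode translates directly to Python; a sketch, where the first-order test system and λ = 0.99 are illustrative assumptions:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares update (mirrors the pseudocode above)."""
    e = y - phi @ theta                      # residual
    w = P @ phi                              # auxiliary variable w = P*phi
    den = w @ phi + lam
    theta = theta + w * e / den              # update estimate
    P = (P - np.outer(w, w) / den) / lam     # update covariance matrix
    return theta, P

# Estimate y(t) = -a*y(t-1) + b*u(t-1): theta = (a, b), phi = (-y(t-1), u(t-1))
rng = np.random.default_rng(0)
a_true, b_true = 0.7, 2.0
theta, P = np.zeros(2), 1000.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(400):
    u = rng.normal()
    y = -a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, np.array([-y_prev, u_prev]), y)
    y_prev, u_prev = y, u
```

With a persistently exciting input the estimates settle close to (a, b) within a few hundred samples.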

SOLUTIONS TO CHAPTER 3

3.1 Given the process


B ( z) z + 1.2
H ( z) =
A( z)
= 2
z −
z + 0.25
Design specification: The closed system should have a pole that corre-
spond to following characteristic polynomial in continuous time

s2 + 2s + 1 = ( s + 1) 2

This corresponds to
Am ( z) = z2 + am1 z + am2
with 
am1 = −( e− 1
+ e−1 ) = −2e− 1

am2 = e−2
( a) Determine an indirect STR of minimal order. The controller should
have an integrator and the stationary gain should be 1.
Solution:
Choose Bm such that
B m ( 1)
= 1
Am ( 1)
The integrator condition gives

R = R ′( z − 1)
We get the following conditions

( 1) B T = B m Ao
( 2) AR + B S = Am Ao

As B is unstable must Bm = B Bm ′ . This makes ( 1) ⇔ B T =


′ ′ ′
B Bm Ao ⇔ T = Bm Ao . Choose Bm such that
′ ( 1)
B ( 1) B m ′ ( 1) = A( 1)
= 1 ⇒ Bm
A( 1) B ( 1)
The simplest way is to choose

′ = b′ = A( 1) 0.25
Bm =
m
B ( 1) 2.2
Further we have

( z2 + a1 z + a2)( z 1)( z + r) + ( b0z + b1)( s0z2 + s1 z + s2)
= ( z2 + am1 z + am2 )( z2 + ao1 z + ao2 )


with a1 = −1, a2 = 0.25, and ao1 and ao2 chosen so that Ao is stable.
Equating coefficients gives

    r − 1 + a1 + b0 s0 = ao1 + am1
    −r + a1(r − 1) + a2 + b0 s1 + b1 s0 = ao2 + am1 ao1 + am2
    −a1 r + a2(r − 1) + b0 s2 + b1 s1 = am1 ao2 + am2 ao1
    −a2 r + b1 s2 = am2 ao2

or

    [ 1         b0   0    0  ] [ r  ]   [ ao1 + am1 + 1 − a1              ]
    [ a1 − 1    b1   b0   0  ] [ s0 ]   [ ao2 + am1 ao1 + am2 + a1 − a2   ]
    [ a2 − a1   0    b1   b0 ] [ s1 ] = [ am1 ao2 + am2 ao1 + a2          ]
    [ −a2       0    0    b1 ] [ s2 ]   [ am2 ao2                         ]

Now choose to estimate

    θ = [ b0  b1  a1  a2 ]ᵀ

by equation 3.22 in the textbook.
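The 4×4 linear system can be solved numerically and checked against the Diophantine identity; a sketch assuming a1 = −1, a2 = 0.25 as above and a deadbeat observer Ao(z) = z² (an arbitrary stable choice):

```python
import numpy as np

a1, a2 = -1.0, 0.25                    # A(z) = z^2 - z + 0.25
b0, b1 = 1.0, 1.2                      # B(z) = z + 1.2
am1, am2 = -2 * np.exp(-1.0), np.exp(-2.0)
ao1, ao2 = 0.0, 0.0                    # deadbeat observer (illustrative)

M = np.array([[1.0,      b0,  0.0, 0.0],
              [a1 - 1.0, b1,  b0,  0.0],
              [a2 - a1,  0.0, b1,  b0],
              [-a2,      0.0, 0.0, b1]])
rhs = np.array([ao1 + am1 + 1 - a1,
                ao2 + am1 * ao1 + am2 + a1 - a2,
                am1 * ao2 + am2 * ao1 + a2,
                am2 * ao2])
r, s0, s1, s2 = np.linalg.solve(M, rhs)

# Verify A(z)(z-1)(z+r) + B(z)(s0 z^2 + s1 z + s2) = Am(z) Ao(z)
lhs = np.polyadd(np.polymul(np.polymul([1.0, a1, a2], [1.0, -1.0]), [1.0, r]),
                 np.polymul([b0, b1], [s0, s1, s2]))
target = np.polymul([1.0, am1, am2], [1.0, ao1, ao2])
```

The system is solvable because B(z) and A(z)(z − 1) have no common root (B vanishes only at z = −1.2).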


(b) As H(z) is not minimum phase, the common factor B⁻ = B must be cancelled
between R̄ and S̄. This is difficult, see page 118 in the textbook. An indirect STR is
given by Eq. 3.24:

    Ao Am y = R̄ u + S̄ y

with R̄ = B⁻R, S̄ = B⁻S, T = Bm′ Ao. Furthermore we have

    Ao Am ym = Ao Bm uc = Ao B Bm′ uc = B T uc = T̄ uc

    y = R̄ (1/(Ao Am)) u + S̄ (1/(Ao Am)) y = R̄ u_f + S̄ y_f

    ym = T̄ (1/(Ao Am)) uc = T̄ u_cf

    ε = y − ym = R̄ u_f + S̄ y_f − T̄ u_cf

Now estimate R̄, S̄ and T̄ with a recursive method in the above
equation. Then cancel B and calculate the control signal.

(c) Take a = 0 in Example 5.7, page 206 in the textbook. This gives

    dt0/dt = −γ uc e
    ds0/dt = γ y e

with e = y − ym = y − Gm uc.

3.3 The process has the transfer function

    G(s) = (b/(s + a)) · (q/(s + p))

where p and q are known. The desired closed-loop system has the transfer
function

    Gm(s) = ω² / (s² + 2ζωs + ω²)

Since a discrete-time controller is used, the transfer functions are sampled.
We get

    H(z) = (b0 z + b1) / (z² + a1 z + a2)
    Hm(z) = (bm0 z + bm1) / (z² + am1 z + am2)

The fact that p and q are known implies that one of the poles of H is
known. This information will be disregarded. In the following we will
assume

    G(s) = 1/(s(s + 1))

With h = 0.2 this gives

    H(z) = 0.0187 (z + 0.936) / ((z − 1)(z − 0.819))

Furthermore we will assume that ω = 2 and ζ = 0.7.
Indirect STR:
The parameters of a general pulse transfer function of second order are
estimated by recursive least squares (see page 51). We have

    θ = [ b0  b1  a1  a2 ]ᵀ
    φ(t) = [ u(t−1)  u(t−2)  −y(t−1)  −y(t−2) ]

The controller is calculated by solving the Diophantine equation. We look
at two cases.

1. B canceled:

    (z² + a1 z + a2) · 1 + b0 (s0 z + s1) = z² + am1 z + am2

    z¹:  a1 + b0 s0 = am1   ⇒   s0 = (am1 − a1)/b0
    z⁰:  a2 + b0 s1 = am2   ⇒   s1 = (am2 − a2)/b0
The controller is thus given by

    R(z) = z + b1/b0
    S(z) = s0 z + s1
    T(z) = t0 z   where   t0 = (1 + am1 + am2)/b0

2. B not canceled:
The design equation becomes

    (z² + a1 z + a2)(z + r1) + (b0 z + b1)(s0 z + s1) = (z² + am1 z + am2)(z + ao1)

Identification of coefficients of equal powers of z gives

    z²:  a1 + r1 + b0 s0 = am1 + ao1
    z¹:  a2 + a1 r1 + b1 s0 + b0 s1 = am1 ao1 + am2
    z⁰:  a2 r1 + b1 s1 = am2 ao1

The solution to these linear equations is

    r1 = (b1² n1 − b0 b1 n2 + ao1 am2 b0²) / (b0² a2 − a1 b0 b1 + b1²)

    s0 = (n1 − r1)/b0

    s1 = (b0 n2 − b1 n1 − r1 (a1 b0 − b1)) / b0²

where

    n1 = am1 + ao1 − a1
    n2 = am1 ao1 + am2 − a2

The solution exists if the denominator of r1 is different from zero, which
means that there is no pole-zero cancellation. It is helpful to have access
to computer algebra for problems like this, e.g. Macsyma, Maple, or
Mathematica! Figure 7 shows a simulation of the controller obtained when the
polynomial B is canceled. Notice the "ringing" of the control signal, which
is typical for cancellation of a poorly damped zero; in this case the zero is
z = −0.936. In Figure 8 we show a simulation of the controller with no
cancellation. This is clearly the way to solve the problem.
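The closed-form expressions for r1, s0, s1 can be verified against a direct linear solve; a sketch with the numerical plant above and an illustrative observer pole ao1 = −0.5:

```python
import numpy as np

b0, b1 = 0.0187, 0.0187 * 0.936          # B(z) = 0.0187 (z + 0.936)
a1, a2 = -1.819, 0.819                   # (z - 1)(z - 0.819)
h, w, zeta = 0.2, 2.0, 0.7
am1 = -2 * np.exp(-zeta * w * h) * np.cos(w * np.sqrt(1 - zeta ** 2) * h)
am2 = np.exp(-2 * zeta * w * h)
ao1 = -0.5                               # observer pole (a choice)

n1 = am1 + ao1 - a1
n2 = am1 * ao1 + am2 - a2
r1 = (b1 ** 2 * n1 - b0 * b1 * n2 + ao1 * am2 * b0 ** 2) \
     / (b0 ** 2 * a2 - a1 * b0 * b1 + b1 ** 2)
s0 = (n1 - r1) / b0
s1 = (b0 * n2 - b1 * n1 - r1 * (a1 * b0 - b1)) / b0 ** 2

# Direct solve of the same coefficient equations
M = np.array([[1.0, b0,  0.0],
              [a1,  b1,  b0],
              [a2,  0.0, b1]])
rhs = np.array([am1 + ao1 - a1, am1 * ao1 + am2 - a2, am2 * ao1])
sol = np.linalg.solve(M, rhs)
```

Agreement between the two routes confirms the algebra, for any instance without pole-zero cancellation.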
Direct STR:
To obtain a direct self-tuning regulator we start with the design equation

    AR + BS = Am Ao B⁺

Hence

    B⁺ Am Ao y = A R y + B S y = B R u + B S y

    y = R (B⁻/(Ao Am)) u + S (B⁻/(Ao Am)) y = R u_f + S y_f
Figure 7. Simulation in Problem 3.3. Process output and control signal
are shown for the indirect self-tuning regulator when the process zero is
canceled.

From this model R and S can be estimated. The polynomial T is then
given by

    T = t0 Ao Bm / B⁻

where t0 is chosen to give the correct steady-state gain. Again we separate
two cases.

1. Cancel B:
If the polynomial B is canceled we have B⁺ = z + b1/b0, B⁻ = b0. From
the analysis of the indirect STR we know that no observer is needed in
this case and that the controller has the structure deg R = deg S = 1.
Hence

    y(t) = R (b0/Am) u(t) + S (b0/Am) y(t)

Since b0 is not known we include it in the polynomials R and S and
estimate it. The polynomial R then is not monic. We have

    y(t) = (1/Am)(r0 q + r1) u(t) + (1/Am)(s0 q + s1) y(t)

To obtain a direct STR we thus estimate

    θ = [ r0  r1  s0  s1 ]ᵀ
Figure 8. Simulation in Problem 3.3. Process output and control signal
are shown for the indirect self-tuning regulator when the process zero is
not canceled.

by RLS. The case r0 = 0 must be taken care of separately. Furthermore
T has the form T(q) = t0 q, where the closed-loop transfer function is

    B T / (AR + BS) = b0 B⁺ t0 q / (b0 B⁺ Am) = t0 q / Am

To get unit steady-state gain, choose

    t0 = Am(1)

A simulation of the system is shown in Fig. 9. We see the typical ringing
phenomenon obtained with a controller that cancels a poorly damped zero.
To avoid this we will develop an algorithm where the process zero is not
canceled.

2. No cancellation of process zero:
We then have B⁺ = 1 and B⁻ = b0 q + b1. From the analysis of the indirect
STR we know that a first-order observer is required, i.e. Ao = q + ao1. We
have as before

    y = R (B⁻/(Ao Am)) u + S (B⁻/(Ao Am)) y = R u_f + S y_f    (*)
Figure 9. Simulation in Problem 3.3. Process output and control signal
are shown for the direct self-tuning regulator when the process zero is
canceled.

Since B⁻ is not known we cannot, however, calculate u_f and y_f. One
possibility is to rewrite (*) as

    y = R′ (1/(Ao Am)) u + S′ (1/(Ao Am)) y,    R′ = R B⁻,  S′ = S B⁻

and to estimate R′ and S′ as second-order polynomials and then cancel
the common factor B⁻ from the estimated polynomials. This is difficult
because there will not be an exact cancellation. Another possibility is to
use some estimate of B⁻. A third possibility is to try to estimate B⁻R
and B⁻S as a bilinear problem. In Figs. 10-13 we show simulations when
the model (*) is used with

    B⁻ = 1
    B⁻ = q
    B⁻ = (q + 0.4)/1.4
    B⁻ = (q − 0.4)/0.6
Figure 10. Simulation in Problem 3.3. Process output and control signal
are shown for the direct self-tuning regulator when the process zero is not
canceled and when B⁻ = 1.

3.4 The process has the transfer function

    G(s) = b/(s(s + 1))

With proportional feedback

    u = k(uc − y)

we get the closed-loop transfer function

    Gcl(s) = kb / (s² + s + kb)

The gain k = 1/b gives the desired result. Idea for STR: estimate b and
use k = 1/b̂. To estimate b, introduce

    s(s + 1) y = b u

    y_f = (s(s + 1)/(s + a)²) y = b (1/(s + a)²) u = b φ
Figure 11. Simulation in Problem 3.3. Process output and control signal
are shown for the direct self-tuning regulator when the process zero is not
canceled and when B⁻ = q.

The equations on page 56 give

    db̂/dt = P φ e = P φ (y_f − b̂ φ)

    dP/dt = α P − P φ φᵀ P = α P − P² φ²

With b̂(0) = 1, P(0) = 100, α = −0.1, and a = 1 we get the result shown
in Fig. 14.
Figure 12. Simulation in Problem 3.3. Process output and control signal
are shown for the direct self-tuning regulator when the process zero is not
canceled and when B⁻ = (q + 0.4)/1.4.
Figure 13. Simulation in Problem 3.3. Process output and control signal
are shown for the direct self-tuning regulator when the process zero is not
canceled and when B⁻ = (q − 0.4)/0.6.
Figure 14. Simulation in Problem 3.4. Process output, control signal,
and estimated parameter b are shown for the indirect continuous-time
self-tuning regulator.

SOLUTIONS TO CHAPTER 4

4.1 The estimate b̂ may be small because of a poor estimate. One possibility
is to use a projection algorithm where the estimate is restricted to be in
a given range, b0 ≤ b̂ ≤ b1. This requires prior information about the
values of b. Another possibility is to replace

    1/b̂    by    b̂/(b̂² + P)

where P is the variance of the estimate. Compare with the discussion of
cautious control on pages 356-358.
4.10 Using (4.21) the output can be written as

    y_{t+d} = (R*/C*) u_t + (S*/C*) y_t + (R1*/C*) e_{t+d}
            = r0 u_t + f_t    (*)

where r0 is the leading coefficient of R* and f_t collects all terms that do
not depend on u_t. Consider minimization of

    J = y²_{t+d} + ρ u²_t    (+)

Introducing the expression (*),

    J = (r0 u_t + f_t)² + ρ u²_t
      = (r0² + ρ) u²_t + 2 r0 u_t f_t + f²_t
      = (r0² + ρ) ( u_t + r0 f_t / (r0² + ρ) )² + f²_t − r0² f²_t / (r0² + ρ)

Hence

    J = (r0²/(r0² + ρ)) ( f_t + ((r0² + ρ)/r0) u_t )² + (ρ/(r0² + ρ)) f²_t
      = (r0²/(r0² + ρ)) ( f_t + (r0 + ρ/r0) u_t )² + (ρ/(r0² + ρ)) f²_t
      = (r0²/(r0² + ρ)) ( y_{t+d} + (ρ/r0) u_t )² + (ρ/(r0² + ρ)) f²_t

Since r0² + ρ is a constant we find that minimizing (+) is the same as
minimizing

    J1 = ( y_{t+d} + (ρ/r0) u_t )²,   where   y_{t+d} + (ρ/r0) u_t = f_t + (r0 + ρ/r0) u_t
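The completing-the-square steps can be checked numerically; a sketch where r0, ρ and the sample points (u, f) are arbitrary illustrative values:

```python
def square_gap(r0, rho, u, f):
    # y_{t+d} = r0*u_t + f_t; compare J with its completed-square form.
    y = r0 * u + f
    J = y ** 2 + rho * u ** 2
    J2 = (r0 ** 2 / (r0 ** 2 + rho)) * (y + (rho / r0) * u) ** 2 \
         + (rho / (r0 ** 2 + rho)) * f ** 2
    return abs(J - J2)
```

The gap vanishes identically, so the two cost expressions differ only by the u_t-independent term, as claimed.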

SOLUTIONS TO CHAPTER 5

5.1 The plant is

    G(s) = 1/(s(s + a)) = B/A

The desired response is

    Gm(s) = ω² / (s² + 2ζωs + ω²) = Bm/Am

(a) Gradient method or MIT rule. Use formulas similar to (5.9). In this
case B⁺ = 1 and Ao is of first order. The regulator structure is

    R = s + r1    S = s0 s + s1    T = t0 Ao

This gives the updating rules

    dr1/dt = γ e (1/(Ao Am)) u
    ds0/dt = γ e (p/(Ao Am)) y
    ds1/dt = γ e (1/(Ao Am)) y
    dt0/dt = −γ e (1/(Ao Am)) uc

(b) Stability theory, approach 1. First derive the error equation. If all
process zeros are cancelled we have

    Ao Am y = A R y + b0 S y = B R u + b0 S y = b0 (R u + S y)

Further

    Ao Am ym = Ao Bm uc = b0 T uc

Hence

    Ao Am e = Ao Am (y − ym) = b0 (R u + S y − T uc)

    e = (b0/(Ao Am)) (R u + S y − T uc)

Since 1/(Ao Am) is not SPR we introduce a polynomial D such that
D/(Ao Am) is SPR. We then get

    e = (b0 D/(Ao Am)) (R u_f + S y_f − T u_cf)

where

    u_f = (1/D) u    y_f = (1/D) y    u_cf = (1/D) uc
D D D

(c) Stability theory, approach 2. In this case we assume that all states
are measurable. Process model:

    dx/dt = Ap x + Bp u,    y = C x

with

    Ap = [ −a  0 ]    Bp = [ 1 ]    C = [ 0  1 ]
         [  1  0 ]         [ 0 ]

Control law:

    u = Lr uc − L x = θ3 uc − θ1 x1 − θ2 x2

The closed-loop system is

    dx/dt = (Ap − Bp L) x + Bp Lr uc = A(θ) x + B(θ) uc
    y = C x

where

    A(θ) = Ap − Bp L = [ −a − θ1   −θ2 ]    B(θ) = Bp Lr = [ θ3 ]
                       [   1         0 ]                   [ 0  ]

The desired response is given by

    dxm/dt = Am xm + Bm uc

where

    Am = [ −2ζω   −ω² ]    Bm = [ ω² ]
         [   1      0 ]        [ 0  ]

We have

    (A − Am)ᵀ = [ 2ζω − a − θ1   0 ]    (B − Bm)ᵀ = [ θ3 − ω²   0 ]
                [ ω² − θ2        0 ]

Introduce the state error e = x − xm. We get

    ė = ẋ − ẋm = A x + B uc − Am xm − Bm uc
      = Am e + (A − Am) x + (B − Bm) uc    (*)

The error goes to zero if Am is stable and

    A(θ) − Am = 0,    B(θ) − Bm = 0    (+)

It is thus necessary that A(θ) and B(θ) are such that there is a θ
for which (+) holds. Introduce the Lyapunov function

    V = eᵀPe + tr((A − Am)ᵀ Qa (A − Am)) + tr((B − Bm)ᵀ Qb (B − Bm))

Using the identities

    tr(A + B) = tr A + tr B
    xᵀAx = tr(xxᵀA) = tr(Axxᵀ)
    tr(AB) = tr(BA)

we get

    dV/dt = tr( P ė eᵀ + P e ėᵀ + Ȧᵀ Qa (A − Am) + (A − Am)ᵀ Qa Ȧ
                + Ḃᵀ Qb (B − Bm) + (B − Bm)ᵀ Qb Ḃ )

But from (*)

    P ė eᵀ = P (Am e + (A − Am)x + (B − Bm)uc) eᵀ
    P e ėᵀ = P e (Am e + (A − Am)x + (B − Bm)uc)ᵀ

Introducing this into dV/dt and collecting the terms proportional to
(A − Am)ᵀ we find that they are

    2 tr( (A − Am)ᵀ (Qa Ȧ + P e xᵀ) )

Similarly we find that the terms proportional to (B − Bm)ᵀ are

    2 tr( (B − Bm)ᵀ (Qb Ḃ + P e ucᵀ) )

Hence

    dV/dt = eᵀ P Am e + eᵀ Amᵀ P e
            + 2 tr( (A − Am)ᵀ (Qa Ȧ + P e xᵀ) )
            + 2 tr( (B − Bm)ᵀ (Qb Ḃ + P e ucᵀ) )

Hence if the symmetric matrix P is chosen so that

    Amᵀ P + P Am = −Q

where Q is positive definite (this can always be done if Am is stable!), and the
parameters are updated so that

    Qa Ȧ + P e xᵀ = 0
    Qb Ḃ + P e ucᵀ = 0

we get

    dV/dt = −eᵀ Q e

The equations for updating the parameters derived above can now be
used. Since

    Ȧ = [ −θ̇1  −θ̇2 ]    Ḃ = [ θ̇3 ]
        [  0     0  ]        [ 0  ]

this gives, with Qa = I and Qb = 1,

    dθ1/dt = p11 e1 x1 + p12 e2 x1 = (p11 e1 + p12 e2) x1
    dθ2/dt = p11 e1 x2 + p12 e2 x2 = (p11 e1 + p12 e2) x2
    dθ3/dt = −(p11 e1 + p12 e2) uc

where e1 = x1 − xm1 and e2 = x2 − xm2. It now remains to determine
P such that

    Amᵀ P + P Am = −Q

Choosing ζ = 0.707, ω = 2 and

    Q = [ 41.2548   11.3137 ]
        [ 11.3137   16.0000 ]

we get

    P = [ 4    2 ]
        [ 2   16 ]

The parameter update laws become

    dθ1/dt = (4e1 + 2e2) x1
    dθ2/dt = (4e1 + 2e2) x2
    dθ3/dt = −(4e1 + 2e2) uc

A simulation of the system is given in Fig. 15.


5.2 The block diagram of the system is shown in Fig. 16. The PI version of
the SPR rule is

    dθ/dt = −γ1 d(uc e)/dt − γ2 uc e    (*)
Figure 15. Simulation in Problem 5.1. Top: Process and model
states, x1 (full), xm1 (dashed), x2 (dotted), and xm2 (dash-dotted).
Bottom: Controller parameters θ3 (full), θ1 (dashed), and θ2 (dotted).

Figure 16. Block diagram in Problem 5.2.

To derive the error equation we notice that

    dym/dt = θ0 uc
    dy/dt = θ uc

Hence

    de/dt = (θ − θ0) uc

and

    d²e/dt² = (dθ/dt) uc + (θ − θ0) duc/dt

Inserting the parameter update law (*) into this we get

    d²e/dt² = −γ1 ( (duc/dt) e + uc (de/dt) ) uc − γ2 uc² e + (θ − θ0) duc/dt

Hence

    d²e/dt² + γ1 uc² de/dt + ( γ1 uc duc/dt + γ2 uc² ) e = (θ − θ0) duc/dt

Assuming that uc is constant we get the following error equation:

    d²e/dt² + γ1 uc² de/dt + γ2 uc² e = 0

Assuming that we want this to be a second-order system with parameters ω and ζ,
we get

    γ1 uc² = 2ζω   ⇒   γ1 = 2ζω/uc²
    γ2 uc² = ω²    ⇒   γ2 = ω²/uc²

This gives an indication of how the parameters γ1 and γ2 should be
selected. The analysis was based on the assumption that uc is constant.
To get some insight into what happens when uc changes we give
a simulation where uc is a triangular wave with varying period. The
adaptation gains are chosen for different ω and ζ. Figure 17 shows what
happens when the period of the triangular wave is 20 and ω = 0.5, 1, and 2,
corresponding to the natural periods 12, 6, and 3. Figure 18 shows what
happens when uc is changed more rapidly.
5.6 The transfer function is

    G(s) = (b0 s² + b1 s + b2) / (s² + a1 s + a2)

The transfer function has no poles or zeros in the right half-plane if
a1 ≥ 0, a2 ≥ 0, b0 ≥ 0, b1 ≥ 0, and b2 ≥ 0. Consider

    G(iω) = (B(iω)/A(iω)) · (A(−iω)/A(−iω))

The condition Re G(iω) ≥ 0 is equivalent to Re(B(iω)A(−iω)) ≥ 0. But

    g(ω) = Re( (−b0ω² + iωb1 + b2)(−ω² − iωa1 + a2) )
         = b0 ω⁴ + (a1 b1 − b0 a2 − b2) ω² + a2 b2
Figure 17. Simulation in Problem 5.2 for a triangular wave of period 20.
Left top: Process and model outputs. Left bottom: Estimated parameter θ
when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top:
Process and model outputs. Right bottom: Estimated parameter θ when
ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

Completing the square, the function can be written as

    g(ω) = b0 ( ω² + (a1 b1 − b0 a2 − b2)/(2b0) )² + a2 b2 − (a1 b1 − b0 a2 − b2)²/(4b0)

When b0 = 0 the condition for g to be nonnegative is that

    a1 b1 − b0 a2 − b2 ≥ 0    (i)

If b0 > 0 the function g(ω) is nonnegative for all ω if either (i) holds, or if

    a1 b1 − b0 a2 − b2 < 0    and    a2 b2 > (a1 b1 − b0 a2 − b2)² / (4b0)

Example 1. Consider

    G(s) = (s² + 6s + 8) / (s² + 4s + 3)

We have a1 b1 − b0 a2 − b2 = 24 − 3 − 8 = 13 > 0. Hence the transfer
function G(s) is SPR.

Example 2. Consider

    G(s) = (3s² + s + 1) / (s² + 3s + 4)
Figure 18. Simulation in Problem 5.2 for a triangular wave of period 5.
Left top: Process and model outputs. Left bottom: Estimated parameter θ
when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top:
Process and model outputs. Right bottom: Estimated parameter θ when
ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

We have a1 b1 − a2 b0 − b2 = 3 − 12 − 1 = −10. Furthermore

    a2 b2 = 4  <  (a1 b1 − a2 b0 − b2)² / (4b0) = 100/12

Hence the transfer function G(s) is neither PR nor SPR.
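The test above is mechanical enough to code; a sketch, where is_spr2 applies only the nonnegativity condition on g(ω) derived here (for second-order transfer functions with nonnegative coefficients):

```python
import numpy as np

def is_spr2(b0, b1, b2, a1, a2):
    # g(w) = b0 w^4 + m w^2 + a2 b2 with m = a1 b1 - b0 a2 - b2
    m = a1 * b1 - b0 * a2 - b2
    if b0 == 0:
        return m >= 0
    return m >= 0 or a2 * b2 > m * m / (4 * b0)

ex1 = is_spr2(1, 6, 8, 4, 3)    # (s^2 + 6s + 8)/(s^2 + 4s + 3)
ex2 = is_spr2(3, 1, 1, 3, 4)    # (3s^2 + s + 1)/(s^2 + 3s + 4)

# Cross-check Example 1 on a frequency grid
w = np.linspace(0.0, 50.0, 1001)
reG1 = ((-(w ** 2) + 6j * w + 8) / (-(w ** 2) + 4j * w + 3)).real
```

For Example 1, g(ω) = ω⁴ + 13ω² + 24 > 0 everywhere, so Re G(iω) stays positive along the whole grid.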
5.7 Consider the system

    dx/dt = A x + B1 u
    y = C1 x

where

    B1 = [ 1  0  ...  0 ]ᵀ

Let Q be positive definite and let P be the solution of the Lyapunov equation

    Aᵀ P + P A = −Q    (*)

Define C1 as

    C1 = B1ᵀ P = [ p11  p12  ...  p1n ]

According to the Kalman-Yakubovich lemma the transfer function

    G1(s) = C1 (sI − A)⁻¹ B1

is then strictly positive real. With A in controllable companion form this
transfer function can be written as

    G(s) = (p11 s^{n−1} + p12 s^{n−2} + ... + p1n) / (s^n + a1 s^{n−1} + ... + an)

Since there are good numerical algorithms for solving the Lyapunov equation
we can use this result to construct transfer functions that are SPR.
The method is straightforward.

1. Choose A stable.
2. Solve (*) for a given positive definite Q.
3. Choose the numerator as

    B(s) = p11 s^{n−1} + p12 s^{n−2} + ... + p1n
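The three-step recipe can be sketched directly; the denominator polynomial and Q = I below are arbitrary illustrative choices, and the Lyapunov equation is solved in vectorized (Kronecker) form:

```python
import numpy as np

n = 3
# Companion A for s^3 + 3 s^2 + 4 s + 2 = (s+1)(s^2+2s+2), stable
A = np.zeros((n, n))
A[0, :] = [-3.0, -4.0, -2.0]
A[1, 0] = 1.0
A[2, 1] = 1.0
B1 = np.array([1.0, 0.0, 0.0])

Q = np.eye(n)
# Solve A^T P + P A = -Q: (kron(I, A^T) + kron(A^T, I)) vec(P) = -vec(Q)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
C1 = B1 @ P                      # numerator coefficients p11, p12, p13

ws = np.linspace(-40.0, 40.0, 1601)
G = np.array([C1 @ np.linalg.solve(1j * w * np.eye(n) - A, B1) for w in ws])
```

Since A is stable and Q > 0, P comes out symmetric positive definite and Re G(iω) > 0 at every grid point, as the lemma promises.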

5.11 Let us first solve the underlying design problem for systems with known
parameters. This can be done using pole placement. Let the plant dy-
namics be
B
y = u
A
and let the controller be
Ru = Tuc − sy
The basic design equation is then

AR + B S = Am Ao ( ∗)
In this case we have
A = ( s + a)( s + p)
B = bq
Am = s2 + 2ζ ω s + ω 2
We need an observer of at least first order. The design equation ( *) then
becomes

( s + a)( s + p)( s + r1 ) + bq( s0s + s1) = ( s2 + 2ζ ω s + ω 2 )( s + a0) ( +)


where Ao = s + ao is the observer polynomial. The controller is thus of
the form
du
dt
+ r1 u = t0 uc s0 y s1− −
dy
dt
It has four parameters that have to be updated: r1, t0, s0, and s1. If no prior knowledge is available we thus have to update these four parameters. When the parameter p is known it follows from the design equation (+) that there is a relation between the parameters, and it then suffices to estimate three parameters. This is particularly easy to see when the observer polynomial is chosen as Ao(s) = s + p. Putting s = −p in (+) gives

    −s0 p + s1 = 0

Hence

    s1 = p s0    (**)

In this particular case we can thus update t0, s0, and r1 and compute s1 from (**). Notice, however, that knowledge of q is of no value since q always appears in combination with the unknown parameter b. The equations for updating the parameters are derived in the usual way. With ao = p equation (+) simplifies to

    (s + a)(s + r1) + bq s0 = s² + 2ζωs + ω²
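As a quick numerical consistency check (an addition, with a, p, b, q, ζ, ω as arbitrary illustrative values): solving the reduced second-order equation and setting s1 = p·s0 should reproduce the full third-order design equation (+) with ao = p.

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def polyadd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [x + y for x, y in zip(p, q)]

# Arbitrary numerical values (illustration only).
a, p, b, q = 1.0, 2.0, 0.5, 3.0
zeta, w = 0.7, 1.0

# Reduced equation (s+a)(s+r1) + bq*s0 = s^2 + 2*zeta*w*s + w^2:
r1 = 2 * zeta * w - a                # coefficient of s
s0 = (w**2 - a * r1) / (b * q)       # constant term
s1 = p * s0                          # the relation (**)

# Full design equation (+) with ao = p:
lhs = polyadd(polymul([1, a], polymul([1, p], [1, r1])),
              [b * q * s0, b * q * s1])
rhs = polymul([1, 2 * zeta * w, w**2], [1, p])
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```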
Introducing

    A′(s) = s + a,   S′(s) = s0,   T′ = t0

we get

    y = (BT′ / (A′R + BS′)) uc

The sensitivity derivatives become

    ∂e/∂t0 = (B / (A′R + BS′)) uc ≈ (B/Am) uc

    ∂e/∂s0 = −(B²T′ / (A′R + BS′)²) uc = −(B / (A′R + BS′)) y ≈ −(B/Am) y

    ∂e/∂r1 = −(A′BT′ / (A′R + BS′)²) uc = −(A′ / (A′R + BS′)) y
           = −(A′/A)(B / (A′R + BS′)) u ≈ −(B / ((s + p)Am)) u

where we have used y = (B/A)u and A = A′(s + p). The MIT rule then gives

    dr1/dt = γ [ (1/((s + p)Am)) u ] e

    ds0/dt = γ [ (1/Am) y ] e

    dt0/dt = −γ [ (1/Am) uc ] e

A simulation of the controller is given in Fig. 19.



Figure 19. Simulation in Problem 5.11. Top: Process (full) and model (dashed) output. Bottom: Estimated parameters r1 (full), s0 (dashed), and t0 (dotted).

5.12 The closed-loop transfer function is

    Gcl(s) = kb / (s² + s + kb)

This is compatible with the desired dynamics. With p = d/dt, the error is

    e = y − ym = (kb / (p² + p + kb)) uc − ym

Hence

    ∂e/∂k = (b / (p² + p + kb)) uc − (b²k / (p² + p + kb)²) uc
          = (b / (p² + p + kb)) (uc − y)
          ≈ (b / (p² + p + 1)) (uc − y)

where kb has been replaced by its desired value 1 in the denominator. The following adjustment rule is obtained from the MIT rule

    dk/dt = −γ′ (∂e/∂k) e ≈ −γ [ (1/(p² + p + 1)) (uc − y) ] e,   γ = γ′b

Figure 20. Simulation in Problem 5.12 for γ = 0.05, 0.3, and 1. Left: Process (full) and model (dashed) output. Right: Estimated parameter k.

A simulation of the system is given in Fig. 20. It shows the behavior of the system when uc is a square wave.

SOLUTIONS TO CHAPTER 6

6.1 The process is given by

    G(s) = b / (s(s + a))

and the regressor filter should be

    Gf(s) = 1/Af(s) = 1/Am(s) = 1 / (s² + 2ζωs + ω²)

The controller is given by

    U(s) = −((s0 s + s1)/(s + r1)) Y(s) + (t0(s + ao)/(s + r1)) Uc(s)
For the estimation of the process parameters we use a continuous-time RLS algorithm. The process is of second order and the controller is of first order. The regressor filter is of second order and both inputs and outputs should be filtered. Hence we need seven states in ξ. The process parameters are contained in θ, and the controller parameters in ϑ. The relation between these is given by

        ( r1 )   ( 2ζω + ao − â              )
    ϑ = ( s0 ) = ( (2ζω·ao + ω² − â·r1)/b̂   ) = χ(θ)
        ( s1 )   ( ω²·ao/b̂                  )
        ( t0 )   ( ω²/b̂                     )
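The mapping χ(θ) can be checked numerically (an addition, with â, b̂, ζ, ω, ao as arbitrary illustrative values): the controller parameters it produces should satisfy the pole-placement identity (s² + âs)(s + r1) + b̂(s0 s + s1) = Am·Ao for the plant denominator s(s + â).

```python
def polymul(p, q):
    """Multiply coefficient lists, highest power first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Assumed numerical values (illustration only).
ahat, bhat = 2.0, 1.0           # estimated process parameters
zeta, w, ao = 0.5, 2.0, 3.0     # design parameters

# The mapping chi(theta) above
r1 = 2 * zeta * w + ao - ahat
s0 = (2 * zeta * w * ao + w**2 - ahat * r1) / bhat
s1 = w**2 * ao / bhat
t0 = w**2 / bhat                # makes the closed-loop static gain one

# Design identity: (s^2 + ahat*s)(s + r1) + bhat*(s0*s + s1) = Am*Ao
lhs = polymul([1, ahat, 0], [1, r1])
lhs = [lhs[0], lhs[1], lhs[2] + bhat * s0, lhs[3] + bhat * s1]
rhs = polymul([1, 2 * zeta * w, w**2], [1, ao])
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```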

To find A(ϑ), B(ϑ), C(ϑ) and D(ϑ) we start by finding realizations for y, yf, uf and the controller. We get

    d/dt ( ẏ )   ( −a  0 ) ( ẏ )   ( b )
         ( y ) = (  1  0 ) ( y ) + ( 0 ) u

    d/dt ( ẏf )   ( −2ζω  −ω² ) ( ẏf )   ( 1 )
         ( yf ) = (   1    0  ) ( yf ) + ( 0 ) y

and

    d/dt ( u̇f )   ( −2ζω  −ω² ) ( u̇f )   ( 1 )
         ( uf ) = (   1    0  ) ( uf ) + ( 0 ) u

and the control law can be rewritten as

    u = −s0 y + t0 uc + ((−s1 + r1 s0)/(p + r1)) y + ((ao t0 − r1 t0)/(p + r1)) uc

We need one state for the controller and it can be realized as

    ẋ = −r1 x + (−s1 + r1 s0) y + (ao t0 − r1 t0) uc
    u = x − s0 y + t0 uc
Combining the states in ξ = ( ẏ  y  ẏf  yf  u̇f  uf  x )ᵀ results in

    dξ/dt = A(ϑ)ξ + B(ϑ)uc

with

           ( −a  −b·s0         0    0      0    0     b  )          ( b·t0          )
           (  1    0           0    0      0    0     0  )          (  0            )
           (  0    1        −2ζω  −ω²     0    0     0  )          (  0            )
    A(ϑ) = (  0    0           1    0      0    0     0  )   B(ϑ) = (  0            )
           (  0  −s0           0    0   −2ζω  −ω²    1  )          (  t0           )
           (  0    0           0    0      1    0     0  )          (  0            )
           (  0  −s1 + r1·s0   0    0      0    0   −r1  )          ( ao·t0 − r1·t0 )
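As a sanity check (an addition with assumed numerical values): the two filter blocks do not feed back into the plant or controller states, so the eigenvalues of A(ϑ), with ϑ = χ(θ), should be the closed-loop poles (the roots of Am·Ao) together with two extra copies of the filter poles (the roots of Am).

```python
import numpy as np

# Assumed numerical values (illustration only).
a, b = 1.0, 0.5
zeta, w, ao = 0.7, 1.0, 2.0

# Controller parameters from the relation chi(theta)
r1 = 2 * zeta * w + ao - a
s0 = (2 * zeta * w * ao + w**2 - a * r1) / b
s1 = w**2 * ao / b
t0 = w**2 / b

A = np.array([
    [-a, -b * s0,           0,     0,            0,     0,    b],
    [ 1,  0,                0,     0,            0,     0,    0],
    [ 0,  1,      -2 * zeta * w, -w**2,          0,     0,    0],
    [ 0,  0,                1,     0,            0,     0,    0],
    [ 0, -s0,               0,     0,  -2 * zeta * w, -w**2,  1],
    [ 0,  0,                0,     0,            1,     0,    0],
    [ 0, -s1 + r1 * s0,     0,     0,            0,     0,  -r1],
])

# Expected: roots of Am three times (closed loop + two filters) plus -ao.
am = list(np.roots([1, 2 * zeta * w, w**2]))
expected = am * 3 + [-ao + 0j]

def keyed(vals):
    # sort by rounded real/imag parts so conjugate pairs line up robustly
    return sorted(vals, key=lambda z: (round(z.real, 6), round(z.imag, 6)))

got = keyed(np.linalg.eigvals(A))
assert all(abs(g - e) < 1e-6 for g, e in zip(got, keyed(expected)))
```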
Now we need to express e and ϕ in the states so that we find the C and D matrices. The estimator tries to find the parameters in

    yf = (b / (p(p + a))) uf

which is rewritten as

    p² yf = ( −p·yf   uf ) ( a )  = ϕᵀθ
                           ( b )

Clearly

    ϕ = ( 0  0  −1  0  0  0  0 ) ξ
        ( 0  0   0  0  0  1  0 )

and

    e = p² yf − ϕᵀθ̂ = −2ζω ẏf − ω² yf + y + â ẏf − b̂ uf
If we use the relations â = −r1 + 2ζω + ao and b̂ = ω²/t0, then e can be written as

    e = −2ζω ẏf − ω² yf + y + (−r1 + 2ζω + ao) ẏf − (ω²/t0) uf
      = ( 0  1  ao − r1  −ω²  0  −ω²/t0  0 ) ξ

Combining the expressions for e and ϕ gives

    ( e )   ( 0  1  ao − r1  −ω²  0  −ω²/t0  0 )
    ( ϕ ) = ( 0  0    −1      0   0     0    0 ) ξ = C(ϑ)ξ
            ( 0  0     0      0   0     1    0 )

i.e. D(ϑ) = 0. As given in the problem description, the estimator is defined by

    dθ̂/dt = P ϕ e
    dP/dt = α P − P ϕ ϕᵀ P

where P is a 2 × 2 matrix and e and ϕ are given above.

6.3 The averaged equations for the parameter estimates are given by (6.54) on page 303. In this particular case we have

    G(s) = ab² / ((s + a)(s + b)²)
    Gm(s) = a / (s + a)
To use the averaged equations we need

    avg((Gm uc)(G uc)) = avg( v · (b²/(p + b)²) v )
                       = (1/2) v0² · (b²/(b² + ω²)) cos 2ϕ
                       = (a² u0² b² / (2(a² + ω²)(b² + ω²))) cos 2ϕ

Since

    cos 2ϕ = 2cos²ϕ − 1 = 2b²/(b² + ω²) − 1 = (b² − ω²)/(b² + ω²)

this gives

    avg((Gm uc)(G uc)) = a²b²u0²(b² − ω²) / (2(a² + ω²)(b² + ω²)²)

where we have introduced

    uc = u0 sin ωt
    v = (a/(p + a)) uc
    ϕ = arctan(ω/b)

and v0 is the amplitude of v. Similarly we have

    avg(uc (G uc)) = (u0²/2) |G(iω)| cos(2ϕ + ϕ1)
                   = (u0²/2) · (ab² / (√(a² + ω²)(b² + ω²))) (cos 2ϕ cos ϕ1 − sin 2ϕ sin ϕ1)
                   = u0² ab² (ab² − ω²(a + 2b)) / (2(a² + ω²)(b² + ω²)²)

where

    ϕ = arctan(ω/b),   ϕ1 = arctan(ω/a)

It follows from the analysis on pages 302–304 that the MIT rule gives a stable system as long as ω < b, while the stability condition for the SPR rule is

    ω < b √(a/(a + 2b))

With b = 10a we get

    ωMIT = 10a
    ωSPR = 2.18a
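The two closed-form averages can be checked numerically (an addition, with a, b, u0, ω as arbitrary illustrative values): for two sinusoids of the same frequency, avg(x1·x2) = ½ Re(X1·conj(X2)) in terms of their phasors, so the expressions above should agree with a direct phasor computation.

```python
import cmath
import math

a, b, u0, w = 1.0, 2.0, 1.5, 0.8
s = 1j * w
G = a * b**2 / ((s + a) * (s + b) ** 2)
Gm = a / (s + a)

# Phasor averages: avg(x1*x2) = 0.5 * Re(X1 * conj(X2))
avg1 = 0.5 * u0**2 * (Gm * G.conjugate()).real   # avg((Gm uc)(G uc))
avg2 = 0.5 * u0**2 * G.real                      # avg(uc (G uc)); uc has real phasor u0

# Closed-form expressions from the solution
f1 = a**2 * b**2 * u0**2 * (b**2 - w**2) / (2 * (a**2 + w**2) * (b**2 + w**2) ** 2)
f2 = u0**2 * a * b**2 * (a * b**2 - w**2 * (a + 2 * b)) / (2 * (a**2 + w**2) * (b**2 + w**2) ** 2)

assert abs(avg1 - f1) < 1e-12 and abs(avg2 - f2) < 1e-12

# Stability bounds for b = 10a (taking a = 1): the SPR rule tolerates a
# much lower excitation frequency than the MIT rule.
w_mit = 10.0
w_spr = 10.0 * math.sqrt(1.0 / 21.0)   # about 2.18
```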
6.10 The adaptive system was designed for a process with the transfer function

    Ĝ(s) = b/(s + a)    (1)

The controller has the structure

    u = θ1 uc − θ2 y    (2)
The desired response is

    Gm(s) = bm/(s + am)    (3)

Combining (1) and (2) gives the closed loop transfer function

    Gcl = bθ1 / (s + a + bθ2)

Equating this with Gm(s) given by (3) gives

    bθ1 = bm
    a + bθ2 = am

If these equations are solved for θ1 and θ2 we obtain the controller parameters that give the desired closed loop system. Conversely, if the equations are solved for a and b we obtain the parameters of the process model that corresponds to given controller parameters. This gives

    a = am − bm θ2/θ1
    b = bm/θ1

The parameters a and b can thus be interpreted as the parameters of the model the controller believes in. Inserting the expressions for θ1 and θ2 from page 318 we get

    a(ω) = (229 − 31ω²)/(259 − ω²)
    b(ω) = 458/(259 − ω²)    (+)

When

    ω = √(229/31) = 2.7179

we get a(ω) = 0. The reason for this is that the value of the plant transfer function

    G(s) = 458 / ((s + 1)(s² + 30s + 229))    (*)

is G(2.7179i) = −0.6697i. The transfer function of the plant is thus purely imaginary at this frequency. The only way to obtain a purely imaginary value of the transfer function

    Ĝ(s) = b/(s + a)    (**)

is to make a = 0. Also notice that b(2.7179) = 1.8203, which gives Ĝ(2.7179i) = −0.6697i. When ω = √259 = 16.09 we get infinite values of a and b. Notice that G(i√259) = −0.0587, which is real and negative. The only way to make Ĝ(iω) real and negative is to have infinitely large values of a and b. It is thus easy to explain the behavior of the algorithm from the system identification point of view. The controller can be interpreted as fitting a model (**) to the process dynamics (*). With a sinusoidal input it is possible to get a perfect fit, and the parameters are given by (+).
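The single-frequency fit can be verified numerically (an addition): expanding the denominator of G(iω) gives (229 − 31ω²) + iω(259 − ω²), so the parameters (+) make Ĝ(iω) = G(iω) exactly at every excitation frequency.

```python
import math

def G(s):
    """Plant transfer function (*)."""
    return 458 / ((s + 1) * (s**2 + 30 * s + 229))

def Ghat(wc):
    """First-order model (**) fitted at excitation frequency wc, using (+)."""
    a = (229 - 31 * wc**2) / (259 - wc**2)
    b = 458 / (259 - wc**2)
    return b / (a + 1j * wc)

# Perfect fit at each excitation frequency
for wc in (0.5, 1.0, 2.7179, 5.0, 10.0):
    assert abs(G(1j * wc) - Ghat(wc)) < 1e-9

# The special frequencies discussed above
w1 = math.sqrt(229 / 31)       # a(w1) = 0: G purely imaginary
assert abs(G(1j * w1).real) < 1e-12
assert abs(G(1j * w1).imag + 0.6697) < 1e-3

w2 = math.sqrt(259)            # a, b -> infinity: G real and negative
assert abs(G(1j * w2).imag) < 1e-9
assert G(1j * w2).real < 0
```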
