Automatic Control 1

Integral action in state feedback control

Prof. Alberto Bemporad

University of Trento

Academic year 2010-2011


Reference tracking
Assume the open-loop system is completely reachable and observable
We know that with state feedback we can bring the output y(k) to zero asymptotically
How to make the output y(k) track a generic constant set-point r(k) ≡ r ?
Solution: set u(k) = Kx(k) + v(k), with v(k) = Fr(k)
We need to choose the gain F properly to ensure reference tracking
[Block diagram: the reference r(k) passes through the gain F to give v(k); v(k) is added to the state feedback Kx(k) to form u(k), which drives the dynamical process (A, B) with state x(k) and output y(k); the controller consists of F and K]

Closed-loop system:

x(k + 1) = (A + BK)x(k) + BFr(k)
y(k) = Cx(k)

Reference tracking

To have y(k) → r we need a unit DC-gain from r to y

C(I − (A + BK))^{-1}BF = I

Assume we have as many inputs as outputs (example: u, y ∈ R)


Assume the DC-gain from u to y is invertible, that is, C Adj(I − A)B is invertible
Since state feedback does not change the zeros of the closed-loop system,

C Adj(I − A − BK)B = C Adj(I − A)B

so C Adj(I − A − BK)B is also invertible


Set

F = ( C(I − (A + BK))^{-1}B )^{-1}
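
As a quick numerical illustration, here is a minimal sketch of this DC-gain correction, assuming Python with NumPy (the matrices and the gain K are those of the example below):

```python
import numpy as np

# Open-loop model and a stabilizing state-feedback gain (from the example below)
A = np.array([[1.1, 1.0],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])        # control law u(k) = K x(k) + F r(k)

# DC-gain from r to y when F = I: C (I - (A + B K))^(-1) B
dc_gain = C @ np.linalg.inv(np.eye(2) - (A + B @ K)) @ B

# Choose F so that the closed-loop DC-gain becomes the identity
F = np.linalg.inv(dc_gain)
print(F)                              # about [[0.08]] for this example
```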


Example

The open-loop system

x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] u(k)
y(k) = [1 0] x(k)

is controlled by state feedback with poles placed in 0.8 ± 0.2j. The resulting closed-loop control law is

u(k) = [−0.13 −0.3] x(k) + 0.08 r(k)

The transfer function G(z) from r to y is G(z) = 2 / (25z^2 − 40z + 17), and G(1) = 1

[Plot: unit step response of the closed-loop system (= evolution of the system from initial condition x(0) = [0 0]' and reference r(k) ≡ 1, ∀k ≥ 0); the output converges to 1 within about 40 sample steps]
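
A simulation sketch that reproduces this step response and the DC-gain check (NumPy assumed, numbers as above):

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])
F = 0.08
r = 1.0                                   # constant reference

Acl = A + B @ K                           # closed-loop state-update matrix
x = np.zeros((2, 1))                      # x(0) = [0 0]'
for k in range(40):
    x = Acl @ x + B * (F * r)
print((C @ x).item())                     # approaches 1: the reference is tracked

# DC-gain check: G(1) = C (I - Acl)^(-1) B F = 1
print((C @ np.linalg.solve(np.eye(2) - Acl, B)).item() * F)
```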


Reference tracking

Problem: we have no direct feedback on the tracking error e(k) = y(k) − r(k)
Will this solution be robust with respect to model uncertainties and exogenous disturbances?
Consider an input disturbance d(k) (modeling for instance a non-ideal actuator, or an unmeasurable disturbance)
[Block diagram: as before, but an input disturbance d(k) is now added to u(k) at the input of the dynamical process (A, B); the controller is unchanged, u(k) = Kx(k) + Fr(k)]


Example (cont’d)

Let the input disturbance d(k) = 0.01, ∀k = 0, 1, . . .


[Plot: closed-loop response with the input disturbance d(k) = 0.01; the output settles at a steady-state value different from 1]

The reference is not tracked!


The unmeasurable disturbance d(k) has modified the nominal conditions for which we designed our controller
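
A sketch of the same closed loop with the input disturbance included (NumPy assumed), which reproduces the steady-state offset:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])
F, r, d = 0.08, 1.0, 0.01

x = np.zeros((2, 1))
for k in range(200):
    u = (K @ x).item() + F * r        # the controller does not see d(k)
    x = A @ x + B * (u + d)           # d(k) enters at the plant input
print((C @ x).item())                 # settles near 1.125, not at 1
```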


Integral action for disturbance rejection


Consider the problem of regulating the output y(k) to r(k) ≡ 0 under the action of the input disturbance d(k)
Let’s augment the open-loop system with the integral of the output vector:

q(k + 1) = q(k) + y(k)    (integral action)

The augmented system is

[x(k + 1); q(k + 1)] = [A 0; C I] [x(k); q(k)] + [B; 0] u(k) + [B; 0] d(k)
y(k) = [C 0] [x(k); q(k)]

Design a stabilizing feedback controller for the augmented system:

u(k) = [K H] [x(k); q(k)]
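
A design sketch for this augmentation, assuming SciPy's place_poles is used for the pole placement (it returns a gain G such that the eigenvalues of A − BG are the requested poles, hence the sign change below); the pole locations are those used in the example that follows:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# Augmented state [x; q] with q(k+1) = q(k) + y(k)
A_aug = np.block([[A, np.zeros((n, p))],
                  [C, np.eye(p)]])
B_aug = np.vstack([B, np.zeros((p, B.shape[1]))])

# Feedback gain in u(k) = [K H] [x(k); q(k)] is the negative of place_poles' gain
desired = [0.8 + 0.2j, 0.8 - 0.2j, 0.3]
G = place_poles(A_aug, B_aug, desired).gain_matrix
K, H = -G[:, :n], -G[:, n:]
print(K, H)        # about [[-0.48, -1.0]] and [[-0.056]] for this example
```

With a single input the placed gain is unique, so it matches the numbers used on the following slides up to rounding.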


Rejection of constant disturbances

[Block diagram: the input disturbance d(k) enters at the input of the dynamical process (A, B); the control input u(k) is the sum of the state feedback Kx(k) and the integral action Hq(k), where q(k) is the integral of the output]

Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = 0 for all constant disturbances d(k) ≡ d


Proof:
The state-update matrix of the closed-loop system is

[A 0; C I] + [B; 0] [K H]

The matrix has asymptotically stable eigenvalues by construction
For a constant excitation d(k), the extended state [x(k); q(k)] converges to a steady-state value, in particular lim_{k→∞} q(k) = q̄
Hence, lim_{k→∞} y(k) = lim_{k→∞} (q(k + 1) − q(k)) = q̄ − q̄ = 0 ∎

Example (cont’d) – Now with integral action


Poles placed in (0.8 ± 0.2j, 0.3) for the augmented system. Resulting closed-loop:
   
x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k + 1) = q(k) + y(k)
y(k) = [1 0] x(k)
u(k) = [−0.48 −1] x(k) − 0.056 q(k)

Closed-loop simulation for x(0) = [0 0]', d(k) ≡ 1:
[Plot: the output is driven away from 0 by the disturbance, then converges back to 0 within about 40 sample steps]
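
A simulation sketch of this closed loop (NumPy assumed), reproducing the rejection of the constant disturbance:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056
d = 1.0

x = np.zeros((2, 1))
q = 0.0
for k in range(60):
    y = (C @ x).item()
    u = (K @ x).item() + H * q        # state feedback + integral action
    x = A @ x + B * (u + d)           # constant input disturbance
    q = q + y                         # q(k+1) = q(k) + y(k)
print((C @ x).item())                 # tends to 0 despite d = 1
print(q)                              # q converges to a constant value
```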

Integral action for set-point tracking

[Block diagram: the tracking error y(k) − r(k) is integrated to give q(k); the control input is u(k) = Kx(k) + Hq(k), and the input disturbance d(k) still enters at the input of the dynamical process (A, B)]
Idea: use the same feedback gains (K, H) designed earlier, but instead of feeding back the integral of the output, feed back the integral of the tracking error:

q(k + 1) = q(k) + (y(k) − r(k))    (integral action)


Example (cont’d)

x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k + 1) = q(k) + (y(k) − r(k))    (integral of the tracking error)
y(k) = [1 0] x(k)
u(k) = [−0.48 −1] x(k) − 0.056 q(k)

[Plot: closed-loop response for x(0) = [0 0]', d(k) ≡ 1, r(k) ≡ 1; the output converges to the set-point r = 1]
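
The corresponding simulation sketch (NumPy assumed), now feeding back the integral of the tracking error:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056
r, d = 1.0, 1.0

x = np.zeros((2, 1))
q = 0.0
for k in range(60):
    y = (C @ x).item()
    u = (K @ x).item() + H * q
    x = A @ x + B * (u + d)
    q = q + (y - r)                   # integral of the tracking error
print((C @ x).item())                 # tends to 1: set-point tracked despite d
```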

Looks like it’s working... but why?


Tracking & rejection of constant disturbances/set-points

Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = r for all constant disturbances d(k) ≡ d and set-points r(k) ≡ r

Proof:
The closed-loop system
[x(k + 1); q(k + 1)] = [A + BK  BH; C  I] [x(k); q(k)] + [B  0; 0  −I] [d(k); r(k)]
y(k) = [C 0] [x(k); q(k)]

has input [d(k); r(k)] and is asymptotically stable by construction
For a constant excitation [d(k); r(k)], the extended state [x(k); q(k)] converges to a steady-state value, in particular lim_{k→∞} q(k) = q̄
Hence, lim_{k→∞} (y(k) − r(k)) = lim_{k→∞} (q(k + 1) − q(k)) = q̄ − q̄ = 0 ∎

Integral action for continuous-time systems


The same reasoning can be applied to continuous-time systems
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)
Augment the system with the integral of the output q(t) = ∫_0^t y(τ) dτ, i.e.,

q̇(t) = y(t) = Cx(t)    (integral action)

The augmented system is

d/dt [x(t); q(t)] = [A 0; C 0] [x(t); q(t)] + [B; 0] u(t)
y(t) = [C 0] [x(t); q(t)]
Design a stabilizing controller [K H] for the augmented system
Implement

u(t) = Kx(t) + H ∫_0^t (y(τ) − r(τ)) dτ
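
A design sketch for the continuous-time case; the plant matrices and pole locations below are hypothetical (not from the lecture), chosen only to illustrate the augmentation, and SciPy's place_poles is assumed for the pole placement:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical continuous-time plant: x' = A x + B u, y = C x
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# Augmented system with q'(t) = y(t) = C x(t)
A_aug = np.block([[A, np.zeros((n, p))],
                  [C, np.zeros((p, p))]])
B_aug = np.vstack([B, np.zeros((p, B.shape[1]))])

# Assumed closed-loop poles; [K H] = -G with eig(A_aug - B_aug G) = desired
desired = [-2.0 + 1.0j, -2.0 - 1.0j, -3.0]
G = place_poles(A_aug, B_aug, desired).gain_matrix
K, H = -G[:, :n], -G[:, n:]
print(K, H)
# Control law: u(t) = K x(t) + H * integral from 0 to t of (y(tau) - r(tau)) dtau
```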

English-Italian Vocabulary

reference tracking: inseguimento del riferimento
steady state: regime stazionario
set point: livello di riferimento

Translation is obvious otherwise.
