Automatic Control 1
University of Trento
Prof. Alberto Bemporad (University of Trento) Automatic Control 1 Academic year 2010-2011 1 / 15
Lecture: Integral action in state feedback control Adjustment of DC-gain for reference tracking
Reference tracking
Assume the open-loop system is completely reachable and observable
We know that by state feedback we can bring the output y(k) to zero asymptotically
How can we make the output y(k) track a generic constant set-point r(k) ≡ r ?
Solution: set u(k) = Kx(k) + v(k), with v(k) = Fr(k)
We need to choose the gain F properly to ensure reference tracking
[Block diagram: controller with gains F and K in closed loop with the dynamical process]
Reference tracking
Impose that the DC-gain from r to y equals the identity:

C(I − (A + BK))⁻¹BF = I

that is, F = [C(I − (A + BK))⁻¹B]⁻¹ (a scalar in the single-input single-output case)
Example

x(k + 1) = [1.1  1; 0  0.8] x(k) + [0; 1] u(k)
y(k) = [1  0] x(k)
u(k) = [−0.13  −0.3] x(k) + 0.08 r(k)

[Plot: output y(k) vs. sample steps 0–40, converging to the set-point]
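The feedforward gain used in this example can be checked numerically from the DC-gain condition. The following sketch (NumPy, with the matrices taken from this slide) recovers F = 0.08:

```python
import numpy as np

# Example data from the slide
A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])

# DC-gain from v to y of the closed loop x(k+1) = (A+BK)x(k) + Bv(k):
# C (I - (A+BK))^-1 B; choose F as its inverse so the r -> y DC-gain is 1
Acl = A + B @ K
dc = (C @ np.linalg.inv(np.eye(2) - Acl) @ B)[0, 0]
F = 1.0 / dc

print(F)   # 0.08, the value used on the slide
```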
Reference tracking
Problem: we have no direct feedback on the tracking error e(k) = y(k) − r(k)
Will this solution be robust with respect to model uncertainties and exogenous disturbances?
Consider an input disturbance d(k) (modeling, for instance, a non-ideal actuator or an unmeasurable disturbance)
[Block diagram: input disturbance d(k) entering at the plant input; controller u(k) = Kx(k) + Fr(k) in closed loop with the dynamical process (A, B, C)]
Example (cont’d)

[Plot: output y(k) vs. sample steps 0–40 in the presence of the input disturbance; y(k) no longer settles at the set-point]
Lecture: Integral action in state feedback control Integral action
Augment the state with the integral of the output, q(k + 1) = q(k) + y(k). The extended system is

[x(k + 1); q(k + 1)] = [A  0; C  I] [x(k); q(k)] + [B; 0] u(k) + [B; 0] d(k)

y(k) = [C  0] [x(k); q(k)]
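For the example system of the previous slides one can verify that the augmented pair is completely reachable, so a stabilizing gain for the extended state exists. A NumPy sketch (plant matrices taken from the example):

```python
import numpy as np

# Plant from the example, augmented with the output integrator
A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Extended matrices for x_ext = [x; q], with q(k+1) = q(k) + y(k)
Aext = np.block([[A, np.zeros((2, 1))], [C, np.eye(1)]])
Bext = np.vstack([B, np.zeros((1, 1))])

# Reachability matrix [B, AB, A^2 B] of the augmented pair
R = np.hstack([Bext, Aext @ Bext, Aext @ Aext @ Bext])
print(np.linalg.matrix_rank(R))   # 3 = full rank -> reachable
```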
[Block diagram: integral action — the output y(k) is integrated into the state q(k), which is fed back through gain H together with the state feedback Kx(k) onto the plant input, where the disturbance d(k) also enters]
Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = 0 for all constant disturbances d(k) ≡ d
Proof:
The state-update matrix of the closed-loop system is

[A  0; C  I] + [B; 0] [K  H] = [A + BK  BH; C  I]

The matrix has asymptotically stable eigenvalues by construction
For a constant excitation d(k) ≡ d the extended state [x(k); q(k)] converges to a steady-state value; in particular lim_{k→∞} q(k) = q̄
Hence, lim_{k→∞} y(k) = lim_{k→∞} (q(k + 1) − q(k)) = q̄ − q̄ = 0 ∎
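The theorem can be checked in simulation. The sketch below reuses the plant of the example and the gains K = [−0.48 −1], H = −0.056 given later in the lecture (assumed here for illustration), applies a constant disturbance d ≡ 1, and confirms that the output is driven back to zero:

```python
import numpy as np

# Plant and gains from the lecture's example (assumed for illustration)
A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056
d = 1.0                      # constant input disturbance

x = np.zeros((2, 1))
q = 0.0                      # integrator of the output
for k in range(300):
    y = (C @ x)[0, 0]
    u = (K @ x)[0, 0] + H * q
    x = A @ x + B * (u + d)
    q = q + y                # q(k+1) = q(k) + y(k)

print(abs((C @ x)[0, 0]) < 1e-6)   # True: y(k) -> 0 despite d
```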
[Plot: closed-loop response with integral action, sample steps 0–40; the output recovers from the constant disturbance]
[Block diagram: the tracking error y(k) − r(k) is integrated into the state q(k) and fed back through gain H, together with the state feedback Kx(k), onto the plant input with disturbance d(k)]
Idea: Use the same feedback gains (K, H) designed earlier, but instead of feeding
back the integral of the output, feed back the integral of the tracking error
Example (cont’d)

x(k + 1) = [1.1  1; 0  0.8] x(k) + [0; 1] (u(k) + d(k))
q(k + 1) = q(k) + (y(k) − r(k))    (integral of the tracking error)
y(k) = [1  0] x(k)
u(k) = [−0.48  −1] x(k) − 0.056 q(k)

[Plot: response for x(0) = [0 0]′, d(k) ≡ 1, r(k) ≡ 1; y(k) converges to the set-point r = 1]
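A direct simulation of these equations reproduces the plotted behaviour: starting from x(0) = [0 0]′ with d ≡ 1 and r ≡ 1, the output converges to the set-point. A NumPy sketch of the slide's closed loop:

```python
import numpy as np

# Data from the example
A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056
d, r = 1.0, 1.0              # constant disturbance and set-point

x = np.zeros((2, 1))         # x(0) = [0 0]'
q = 0.0                      # integrator of the tracking error
for k in range(300):
    y = (C @ x)[0, 0]
    u = (K @ x)[0, 0] + H * q
    x = A @ x + B * (u + d)
    q = q + (y - r)          # q(k+1) = q(k) + (y(k) - r(k))

print(abs((C @ x)[0, 0] - r) < 1e-6)   # True: y(k) -> r = 1
```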
Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = r for all constant disturbances d(k) ≡ d and set-points r(k) ≡ r
Proof:
The closed-loop system

[x(k + 1); q(k + 1)] = [A + BK  BH; C  I] [x(k); q(k)] + [B  0; 0  −I] [d(k); r(k)]

y(k) = [C  0] [x(k); q(k)]

has input [d(k); r(k)] and is asymptotically stable by construction
For a constant excitation [d(k); r(k)] ≡ [d; r] the extended state [x(k); q(k)] converges to a steady-state value; in particular lim_{k→∞} q(k) = q̄
Hence, lim_{k→∞} (y(k) − r(k)) = lim_{k→∞} (q(k + 1) − q(k)) = q̄ − q̄ = 0 ∎
English-Italian Vocabulary