Output regulation
Michal Kvasnica
State regulation

Penalize deviations of predicted states and inputs from zero:

    min   sum_{k=0}^{N-1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p )
    s.t.  (prediction model and constraints)

equivalently written as

    min   sum_{k=0}^{N-1} ( ||Q_x (x_{t+k} - 0)||_p + ||Q_u (u_{t+k} - 0)||_p )
    s.t.  (prediction model and constraints)
Output regulation

Penalize deviations of predicted outputs from zero, where y_{t+k} = C x_{t+k}:

    min   sum_{k=0}^{N-1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p )
    s.t.  (prediction model and constraints)
In the quadratic case, ||Q_x x_{t+k}||^2 = x_{t+k}^T Q_x x_{t+k}. Choosing Q_x = C^T Q_y C gives

      sum_{k=0}^{N-1} ( x_{t+k}^T (C^T Q_y C) x_{t+k} + u_{t+k}^T Q_u u_{t+k} )
    = sum_{k=0}^{N-1} ( y_{t+k}^T Q_y y_{t+k} + u_{t+k}^T Q_u u_{t+k} )

so output regulation reduces to state regulation with a modified state weight.

Constraints are given as polytopes:

    U = { u : H_u u <= K_u },   Y = { y : H_y y <= K_y },   X = { x : H_x x <= K_x }
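The weight substitution Q_x = C^T Q_y C can be sanity-checked numerically. A minimal Python/numpy sketch, where the matrices C and Q_y are made up for illustration:

```python
import numpy as np

# Hypothetical output map and output weight (illustration only)
C = np.array([[1.0, 0.5],
              [0.0, 2.0]])
Qy = np.diag([3.0, 1.0])

# Equivalent state weight for output regulation
Qx = C.T @ Qy @ C

x = np.array([0.7, -1.2])
y = C @ x

# y' Qy y equals x' (C' Qy C) x, so penalizing the output is
# the same as penalizing the state with the modified weight
print(y @ Qy @ y, x @ Qx @ x)
```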
Vectorized formulation: stack the predictions over the horizon,

    X = [x_t^T, x_{t+1}^T, ..., x_{t+N-1}^T]^T
    U = [u_t^T, u_{t+1}^T, ..., u_{t+N-1}^T]^T
    Y = [y_t^T, y_{t+1}^T, ..., y_{t+N-1}^T]^T

The dynamics x_{t+k+1} = A x_{t+k} + B u_{t+k}, y_{t+k} = C x_{t+k} + D u_{t+k} then become

    X = A~ X + B~ U + E~ x(t),    Y = C~ X + D~ U

where A~ (resp. B~) carries A (resp. B) on its first block subdiagonal and zeros elsewhere, C~ = diag(C, ..., C), D~ = diag(D, ..., D), and E~ = [I, 0, ..., 0]^T injects the measured state into the first block row. The constraints are stacked in the same fashion:

    H~_x = diag(H_x, ..., H_x),  K~_x = [K_x^T, ..., K_x^T]^T
    H~_u = diag(H_u, ..., H_u),  K~_u = [K_u^T, ..., K_u^T]^T
    H~_y = diag(H_y, ..., H_y),  K~_y = [K_y^T, ..., K_y^T]^T

Compact problem (quadratic case):

    min   Y^T Q~_y Y + U^T Q~_u U
    s.t.  X = A~ X + B~ U + E~ x(t)
          Y = C~ X + D~ U
          H~_x X <= K~_x,  H~_u U <= K~_u,  H~_y Y <= K~_y
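The block matrices of the batch formulation are mechanical to assemble. A Python/numpy sketch (the 2-state system below is hypothetical, not the lecture's example) that builds A~, B~, E~ and verifies X = A~X + B~U + E~x(t) against a step-by-step simulation:

```python
import numpy as np

# Hypothetical system (illustration only)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.1], [0.5]])
nx, nu, N = 2, 1, 4

# Atil has A on the first block subdiagonal, Btil has B there,
# Etil injects the measured state x(t) into the first block row.
Atil = np.zeros((N*nx, N*nx))
Btil = np.zeros((N*nx, N*nu))
for k in range(1, N):
    Atil[k*nx:(k+1)*nx, (k-1)*nx:k*nx] = A
    Btil[k*nx:(k+1)*nx, (k-1)*nu:k*nu] = B
Etil = np.zeros((N*nx, nx))
Etil[:nx, :] = np.eye(nx)

x0 = np.array([1.0, -1.0])
U = np.array([0.5, -0.2, 0.1, 0.0]).reshape(-1, 1)

# X = Atil X + Btil U + Etil x0  =>  (I - Atil) X = Btil U + Etil x0
X = np.linalg.solve(np.eye(N*nx) - Atil,
                    Btil @ U + (Etil @ x0).reshape(-1, 1))

# Cross-check with a plain forward simulation
x = x0.copy()
Xsim = [x]
for k in range(N - 1):
    x = A @ x + B.flatten() * U[k, 0]
    Xsim.append(x)
Xsim = np.concatenate(Xsim).reshape(-1, 1)
print(np.allclose(X, Xsim))  # True
```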
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        y{k}'*Qy*y{k} + u{k}'*Qu*u{k};
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints+[x{1}==xt], objective);
Eliminating the equalities into one linear system, the problem reads

    min   Y^T Q~_y Y + U^T Q~_u U
    s.t.  [ (I - A~)   0   -B~ ] [X]   [ E~ x(t) ]
          [   -C~      I   -D~ ] [Y] = [    0    ]
                                 [U]
          H~_x X <= K~_x,  H~_u U <= K~_u,  H~_y Y <= K~_y
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*y{k}, p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints+[x{1}==xt], objective);
[Example slide: regulation of a system with x1 = y; the extracted fragments give input-matrix entries 1.42 and 0.40 and the constraint -5 <= x2(t) <= 5; plots not recoverable.]

Agenda

Output regulation
Tracking of non-zero references
Advanced constraints
- slew-rate constraints
- soft constraints
- move blocking
Tracking

Penalize deviations of predicted states from a non-zero reference:

    min   sum_{k=0}^{N-1} ( ||Q_x (x_{t+k} - x_ref)||_p + ||Q_u u_{t+k}||_p )
    s.t.  (prediction model and constraints)
xref = [2; 0.5];
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N, u{k} = sdpvar(nu, 1); end
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-xref), p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax ];
end
info = solvesdp(constraints+[x{1}==xt], objective);
For output tracking, penalize deviations of predicted outputs from the reference:

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_u u_{t+k}||_p )
    s.t.  (prediction model and constraints)
Example

    G(s) = 1 / ((s + 1)(s + 2))

discretized to

    [x1(t+1)]   [0.10  0.47] [x1(t)]   [0.23]
    [x2(t+1)] = [0.23  0.60] [x2(t)] + [0.20] u(t)

    y(t) = x2(t)

Constraints and weights: -1 <= x1(t) <= 1, -1 <= x2(t) <= 1, -1 <= y(t) <= 1, Q_y = 1.
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints+[x{1}==xt], objective);
-1 <= u(t) <= 1,  Q_u = 1,  N = 5,  y_ref = 0.4

Steady-state offset: the objective

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_u u_{t+k}||_p )

penalizes the input itself, while a non-zero input is needed to hold y = y_ref. The result is a steady-state offset: a large Q_u increases the offset, a large Q_y reduces it, but the offset does not vanish.
Solution #1

Minimize the difference between predicted inputs and the corresponding steady-state value u_ss:

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_u (u_{t+k} - u_ss)||_p )

where (x_ss, u_ss) solves the steady-state conditions x_ss = A x_ss + B u_ss, y_ref = C x_ss + D u_ss, i.e.

    [ (I - A)  -B ] [x_ss]   [   0   ]
    [    C      D ] [u_ss] = [ y_ref ]

Assumption: the matrix has full column rank. Can be used for state tracking as well:

    min   sum_{k=0}^{N-1} ( ||Q_x (x_{t+k} - x_ss)||_p + ||Q_u (u_{t+k} - u_ss)||_p )
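The steady-state pair can be computed exactly as in the YALMIP code further below, via the pseudo-inverse of M = [(I - A) -B; C D]. An illustrative Python/numpy sketch, using made-up system matrices rather than the lecture's numerical example:

```python
import numpy as np

# Hypothetical system matrices (illustration only)
A = np.array([[0.5, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])
nx, nu = 2, 1
yref = 0.4

# Solve M [x_ss; u_ss] = [0; yref]
M = np.block([[np.eye(nx) - A, -B], [C, D]])
ss = np.linalg.pinv(M) @ np.concatenate([np.zeros(nx), [yref]])
x_ss, u_ss = ss[:nx], ss[nx:]

# Verify: x_ss is a fixed point and the output equals yref
print(np.allclose(A @ x_ss + B @ u_ss, x_ss))        # True
print(np.allclose(C @ x_ss + D @ u_ss, yref))        # True
```

Because M here is square and nonsingular (full column rank, as the slide's assumption requires), the pseudo-inverse gives the unique steady state.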
Example (Solution #1): same system G(s) = 1/((s + 1)(s + 2)) and constraints as above, with Q_y = 1, Q_u = 1, N = 5, y_ref = 0.4 and computed steady-state input u_ss = 0.8. The objective now pushes ||Q_u (u_{t+k} - u_ss)||_p towards zero, so the input can settle at u_ss and the steady-state offset disappears.
YALMIP implementation
yref = 0.5;
M = [(eye(nx)-A) -B; C D];
ss = pinv(M)*[zeros(nx, 1); yref];
u_ss = ss(nx+1:end);
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N, u{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints+[x{1}==xt], objective);
Checking the assumption: the matrix [(I - A) -B; C D] has n_x + n_u = 3 columns, so full column rank means rank = 3; then (x_ss, u_ss) is well defined for any y_ref. For a different output matrix, e.g. C = [0 1] in the slide's second example, the rank drops to 2, the condition fails, and the steady state cannot be determined uniquely.
YALMIP implementation
yref = 0.5;
M = [(eye(nx)-A) -B; C D];
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N, u{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
x_ss = sdpvar(nx, 1); u_ss = sdpvar(nu, 1);
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-x_ss), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
constraints = constraints + ...
    [ M*[x_ss; u_ss] == [zeros(nx, 1); yref] ];
info = solvesdp(constraints+[x{1}==xt], objective);
M = [(eye(nx)-A) -B; C D];
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N, u{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
x_ss = sdpvar(nx, 1); u_ss = sdpvar(nu, 1);
yref_symb = sdpvar(ny, 1);
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-x_ss), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
constraints = constraints + ...
    [ M*[x_ss; u_ss] == [zeros(nx, 1); yref_symb] ];
info = solvesdp(constraints + [x{1}==xt] + ...
    [yref_symb == yref], objective);
[Block diagrams: controller-plant feedback loops, without and with an integrator 1/s in the loop.]
Solution #2

When at steady state, the inputs should not change. Penalize the input increments

    Δu_{t+k} = u_{t+k} - u_{t+k-1}

in the objective:

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )

This introduces an integrator into the closed-loop system. The standard formulation is recovered by state augmentation with x~_{t+k} = [x_{t+k}; u_{t+k-1}], where Δu_{t+k} becomes the new optimization variable:

    [x_{t+k+1}]   [A  B] [x_{t+k}  ]   [B]
    [u_{t+k}  ] = [0  I] [u_{t+k-1}] + [I] Δu_{t+k}

    y_{t+k} = [C  D] [x_{t+k}; u_{t+k-1}] + D Δu_{t+k}

The original constraints H_x x_{t+k} <= K_x and H_u u_{t+k} <= K_u translate into constraints on the augmented state:

    [H_x   0 ]             [K_x]
    [ 0  H_u ] x~_{t+k} <= [K_u]

i.e. H~_xu x~_{t+k} <= K~_xu.
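The augmentation can be verified numerically: simulating the augmented model driven by Δu must reproduce the original model driven by u. A Python/numpy sketch with hypothetical matrices:

```python
import numpy as np

# Hypothetical plant (illustration only)
A = np.array([[0.8, 0.1], [0.0, 0.9]])
B = np.array([[0.2], [0.4]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
nx, nu = 2, 1

# Augmented model: state [x; u_prev], input du = u - u_prev
At = np.block([[A, B], [np.zeros((nu, nx)), np.eye(nu)]])
Bt = np.vstack([B, np.eye(nu)])
Ct = np.hstack([C, D])

x = np.array([1.0, -0.5])
u_prev = 0.0
z = np.concatenate([x, [u_prev]])   # augmented state

u_seq = [0.3, 0.1, -0.2]
for u in u_seq:
    du = u - u_prev
    # original model
    y = C @ x + D.flatten() * u
    x = A @ x + (B * u).flatten()
    # augmented model (output first, then state update)
    y_aug = Ct @ z + D.flatten() * du
    z = At @ z + (Bt * du).flatten()
    u_prev = u
    assert np.allclose(y, y_aug)
    assert np.allclose(z[:nx], x)
print("augmented model consistent")
```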
Mathematical formulation

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )
    s.t.  (augmented prediction model and constraints)
At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N, du{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + ...
    [xt{1}==[x0; uprev]; yref==yr], ...
    objective);
YALMIP implementation

x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
yr = [...];
for t = 1:simsteps
    solvesdp(constraints + [xt{1} == x0; yref == yr], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end
Example

Same system G(s) = 1/((s + 1)(s + 2)) and constraints as before (-1 <= x1(t), x2(t), y(t), u(t) <= 1), with Q_y = 1, Q_u = 1, N = 5, y_ref = 0.4.
Time-varying reference

Same system and constraints as before; the reference now changes during the simulation, e.g. y_ref = 0.4 for t < 10 and y_ref = -0.4 afterwards.
Without preview:
x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
for t = 1:simsteps
    if t < 10
        yr = 0.4;
    else
        yr = -0.4;
    end
    solvesdp(constraints + [xt{1} == x0; yref == yr], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end
Preview - Example

[Plots: y(t), r(t) and u(t) over t [s], without preview.]
Mathematical formulation

With a time-varying reference signal y_ref,t+k known in advance (preview):

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref,t+k)||_p + ||Q_Δu Δu_{t+k}||_p )
    s.t.  (augmented prediction model and constraints)

[Plots: y(t), r(t) and u(t) over t [s], with preview.]
Preview can greatly improve performance. It is easy to incorporate in MPC, but not straightforward with PID/LQR. Compare driving while looking at the road ahead versus following the road by looking through the side window!
YALMIP implementation
At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N, du{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
for k = 1:N, yref{k} = sdpvar(ny, 1); end
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref{k}), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + ...
    [xt{1}==[x0; uprev]; ...
     yref{1}==yr(1); yref{2}==yr(2); ...
     ...], objective);
YALMIP implementation

N = 4;
x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
yr = [0.4; 0.4; 0.5; 0.5; 0.1; 0.1; 0.1; 0.1];
yr = [yr; repmat(yr(end), N-1, 1)];
for t = 1:length(yr)-(N-1)
    solvesdp(constraints + [xt{1} == x0] + ...
        [yref{1}==yr(t); yref{2}==yr(t+1); ...
         yref{3}==yr(t+2)], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end
Agenda

Output regulation
Tracking of non-zero references
Advanced constraints
- slew-rate constraints
- soft constraints
- move blocking
Example
Abrupt changes of the control signal are not good for plant operation
Mathematical formulation

    min   sum_{k=0}^{N-1} ( ||Q_y (y_{t+k} - y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )
    s.t.  u_min <= u_{t+k} <= u_max
          Δu_min <= Δu_{t+k} <= Δu_max
          (augmented prediction model and remaining constraints)
YALMIP implementation
At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N, du{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax; ...
          dumin <= du{k} <= dumax ];
end
info = solvesdp(constraints + ...
    [xt{1}==[x0; uprev]; yref==yr], ...
    objective);
YALMIP implementation
At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N, du{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax; ...
          dumin <= du{k} <= dumax ];
    if k < N
        % output slew-rate constraint (y{N+1} does not exist)
        constraints = constraints + ...
            [ dymin <= y{k+1}-y{k} <= dymax ];
    end
end
info = solvesdp(constraints + ...
    [xt{1}==[x0; uprev]; yref==yr], ...
    objective);
[Plots: elevator command u(t) [deg] and flaps command over time [s].]
Improved tuning: the system is stable, with efficient use of control authority. The MPC controller understands that the elevator will saturate and automatically uses more flap to control the system. However, the angle of attack is too large during the initial transient.
Agenda

Output regulation
Tracking of non-zero references
Advanced constraints
- slew-rate constraints
- soft constraints
- move blocking
Constraints in MPC

Hard constraints
- must be satisfied at all times
- violations are not permitted
- examples: input constraints (a valve cannot be opened to more than 100%), safety limits (temperature, pressure)

Soft constraints
- can be violated sometimes
- violations of the constraints are penalized
- examples:
  - qualitative variables (e.g. a concentration should not drop below a certain threshold)
  - control performance (e.g. maximal overshoot; don't use too much input authority unless it's necessary)
Mathematical formulation

Hard-constrained problem:

    min   sum_{k=0}^{N-1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p )
    s.t.  x_{t+1} = A x_t + B u_t
          y_t = C x_t + D u_t
          x_t = x(t)
          y_min <= y_t <= y_max
          u_min <= u_t <= u_max

Soft constraints: relax the bounds by non-negative softening variables s_{y,t}, s_{u,t}, which are penalized in the objective:

    min   sum_{k=0}^{N-1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p + Q_{s,y} s_{y,t+k} + Q_{s,u} s_{u,t+k} )
    s.t.  x_{t+1} = A x_t + B u_t
          y_t = C x_t + D u_t
          x_t = x(t)
          y_min - s_{y,t} <= y_t <= y_max + s_{y,t}
          u_min - s_{u,t} <= u_t <= u_max + s_{u,t}
          s_{y,t} >= 0,  s_{u,t} >= 0
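The effect of softening can be seen on a toy one-step problem. A scipy sketch with made-up numbers: the "output" y = 0.2*u + 1.0 with u in [-1, 1] can never satisfy the hard constraint y <= 0.4 (the best achievable y is 0.8), but adding a penalized slack s >= 0 restores feasibility:

```python
from scipy.optimize import linprog

# Toy data (hypothetical): y = 0.2*u + 1.0, want y <= 0.4, u in [-1, 1].
# Soften: 0.2*u + 1.0 <= 0.4 + s, s >= 0, and minimize penalty*s.
# Decision variables z = [u, s]; constraint rewritten as 0.2*u - s <= -0.6.
c = [0.0, 1000.0]                 # penalize the slack only
A_ub = [[0.2, -1.0]]
b_ub = [0.4 - 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1.0, 1.0), (0.0, None)])

u_opt, s_opt = res.x
print(res.status, u_opt, s_opt)   # 0 (optimal), u = -1, slack s = 0.4
```

The optimizer drives u to its bound and takes exactly the slack needed (s = 0.4), mirroring how a softened MPC problem stays feasible while keeping the violation minimal.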
YALMIP implementation
x = {}; u = {}; y = {}; sy = {}; su = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N, u{k} = sdpvar(nu, 1); end
for k = 1:N, y{k} = sdpvar(ny, 1); end
for k = 1:N, sy{k} = sdpvar(ny, 1); end
for k = 1:N, su{k} = sdpvar(nu, 1); end
objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*x{k}, p) + norm(Qu*u{k}, p) + ...
        Qsy*sy{k} + Qsu*su{k};
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          (ymin - sy{k}) <= y{k} <= (ymax + sy{k}); ...
          (umin - su{k}) <= u{k} <= (umax + su{k}); ...
          0 <= sy{k} <= symax; 0 <= su{k} <= sumax ];
end
info = solvesdp(constraints + [x{1}== x0], ...
    objective);
Setup: x(t) = [4 0]^T, -1 <= u_t <= 1, N = 10, Q_x = I, Q_u = 1; quadratic performance objective, state regulation, soft output constraint y_t <= 0.4.

The problem: no control input from [-1, 1] can satisfy the output constraint at this state, so the hard-constrained MPC problem is infeasible:

    u(t) = u_max  =>  C(Ax(t) + Bu(t)) = 0.804
    u(t) = u_min  =>  C(Ax(t) + Bu(t)) = 0.439

With softened output constraints the problem becomes feasible. Predicted sequences (U, Y):

    U = -0.0082, -0.0454, -0.0230, -0.0097, -0.0039, -0.0016, -0.0006, -0.0002, -0.0001, -0.0000, -0.0000, -0.0000, -0.0000, -0.0000
    Y =  0.6199,  0.5048,  0.2304,  0.0948,  0.0379,  0.0151,  0.0060,  0.0024,  0.0009,  0.0004,  0.0001,  0.0001,  0.0000,  0.0000

    Q_s,y = 1:
    U = -0.0297, -0.0636, -0.0233, -0.0099, -0.0040, -0.0016, -0.0006, -0.0003, -0.0001, -0.0000, -0.0000, -0.0000, -0.0000, -0.0000
    Y =  0.6160,  0.5049,  0.2351,  0.0972,  0.0390,  0.0155,  0.0061,  0.0024,  0.0010,  0.0004,  0.0002,  0.0001,  0.0000,  0.0000

    Q_s,y = 1000:
    U = -0.8634, -1.0000, -0.3335, -0.0241, -0.0114, -0.0047, -0.0019, -0.0008, -0.0003, -0.0001, -0.0000, -0.0000, -0.0000, -0.0000
    Y =  0.4637,  0.4665,  0.4017,  0.2582,  0.1139,  0.0465,  0.0186,  0.0074,  0.0029,  0.0012,  0.0005,  0.0002,  0.0001,  0.0000

A large penalization Q_s,y keeps the violation of y_t <= 0.4 small and short-lived.
Soft constraints

What are they good for:
- preventing infeasibility when output and/or state constraints are present (input constraints alone cannot lead to infeasibility!)

Softening the input constraints instead (same setup, hard output constraint y_t <= 0.4, Q_s,u = 1000):

    U = -1.2124, -1.6684, -1.0953, -0.3078, -0.0238, -0.0112, -0.0047, -0.0019, -0.0007, -0.0003, -0.0001, -0.0000, -0.0000, -0.0000
    Y =  0.4000,  0.4000,  0.4000,  0.4000,  0.2543,  0.1120,  0.0456,  0.0182,  0.0072,  0.0029,  0.0011,  0.0004,  0.0002,  0.0001

The input bound is violated during the transient so that the output constraint can hold exactly.

Tuning:
- a large penalization minimizes the constraint violation
Agenda

Output regulation
Tracking of non-zero references
Advanced constraints
- slew-rate constraints
- soft constraints
- move blocking
Complexity of MPC

    min   sum_{k=0}^{N-1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p )
    s.t.  (prediction model and constraints)

Optimization variables: [u_0^T, u_1^T, ..., u_{N-1}^T]^T, i.e. N * n_u variables in total.

The problem: the optimization gets more complex as the prediction horizon grows. Possible directions for complexity reduction:
- decrease the prediction horizon (not good for performance)
- restrict the number of optimization variables (the way to go)
Move blocking

Optimize only the first N_c control moves and keep the input constant afterwards:

    min   sum_{k=0}^{N_c-1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p ) + sum_{k=N_c}^{N-1} ||Q_x x_{t+k}||_p

    u_{t+N_c+k} = u_{t+N_c-1},   k = 0, ..., N - N_c - 1

Alternatives:
- block inputs to zero: u_{t+N_c+k} = 0
- block inputs by a state feedback: u_{t+N_c+k} = K x_{t+N_c+k}
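Move blocking can be implemented by expressing the full input sequence as U = M * Ubar, where Ubar contains only the N_c free moves and the blocking matrix M repeats the last one. An illustrative Python/numpy sketch (the horizon sizes are made up):

```python
import numpy as np

N, Nc, nu = 6, 2, 1   # prediction horizon, blocking horizon, input size

# Blocking matrix: the first Nc moves are free,
# all later moves repeat move number Nc-1
M = np.zeros((N*nu, Nc*nu))
for k in range(N):
    j = min(k, Nc - 1)
    M[k*nu:(k+1)*nu, j*nu:(j+1)*nu] = np.eye(nu)

Ubar = np.array([0.5, -0.3])   # the Nc free moves
U = M @ Ubar
print(U)   # [ 0.5 -0.3 -0.3 -0.3 -0.3 -0.3]
```

The optimizer then works with the N_c * n_u entries of Ubar instead of the N * n_u entries of U, which is exactly the "restrict the number of optimization variables" direction mentioned above.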