
Model Predictive Control
Part 2: Advanced Techniques

Michal Kvasnica

Agenda

- Output regulation
- Tracking of non-zero references
- Advanced constraints
  - slew-rate constraints
  - soft constraints
  - move blocking

State regulation

min Σ_{k=0}^{N−1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U

Equivalently, the cost can be read as penalizing deviations from zero,
||Q_x (x_{t+k} − 0)||_p + ||Q_u (u_{t+k} − 0)||_p: regulation drives the
predicted states and inputs to the origin.

Output regulation

Penalize deviations of the predicted outputs from zero. With a quadratic cost and D = 0:

min Σ_{k=0}^{N−1} ( y_{t+k}^T Q_y y_{t+k} + u_{t+k}^T Q_u u_{t+k} )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U

The output penalty can be rewritten in terms of the states:

y_{t+k}^T Q_y y_{t+k} = (x_{t+k}^T C^T) Q_y (C x_{t+k}) = x_{t+k}^T (C^T Q_y C) x_{t+k}

In the general case (arbitrary norm, D ≠ 0):

min Σ_{k=0}^{N−1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U

The output equation is just a function of states and inputs, so this is still state feedback!

Conversion into a state regulation problem (quadratic cost, D = 0): define Q_x = C^T Q_y C, then

min Σ_{k=0}^{N−1} ( x_{t+k}^T (C^T Q_y C) x_{t+k} + u_{t+k}^T Q_u u_{t+k} )
  = min Σ_{k=0}^{N−1} ( x_{t+k}^T Q_x x_{t+k} + u_{t+k}^T Q_u u_{t+k} )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U
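The conversion can be checked numerically. A minimal sketch with hypothetical C and Q_y (illustrative values, not from the lecture):

```python
import numpy as np

# Check that y'*Qy*y == x'*(C'*Qy*C)*x when y = C x (D = 0).
C = np.array([[1.0, -2.0]])      # 1 output, 2 states (hypothetical)
Qy = np.array([[3.0]])
Qx = C.T @ Qy @ C                # equivalent state penalty

x = np.array([0.7, 1.5])
y = C @ x
assert np.isclose(y @ Qy @ y, x @ Qx @ x)
print("output penalty rewritten as state penalty")
```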

General case, vector formulation

min Σ_{k=0}^{N−1} ( y_{t+k}^T Q_y y_{t+k} + u_{t+k}^T Q_u u_{t+k} )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U,  y_{t+k} ∈ Y

Stack the predictions over the horizon into vectors

X = [x_t^T, x_{t+1}^T, ..., x_{t+N−1}^T]^T
U = [u_t^T, u_{t+1}^T, ..., u_{t+N−1}^T]^T
Y = [y_t^T, y_{t+1}^T, ..., y_{t+N−1}^T]^T

Then the dynamics become

X = Ā X + B̄ U + Ē x(t)
Y = C̄ X + D̄ U

where Ā has A on its first block sub-diagonal and zeros elsewhere, B̄ has B on its first block sub-diagonal, Ē = [I; 0; ...; 0] injects the measured state into the first block row, and C̄ = blkdiag(C, ..., C), D̄ = blkdiag(D, ..., D).

The constraints stack accordingly:

H̄_x X ≤ K̄_x,  H̄_u U ≤ K̄_u,  H̄_y Y ≤ K̄_y

with H̄_x = blkdiag(H_x, ..., H_x), K̄_x = [K_x; ...; K_x], and analogously H̄_u, K̄_u for the inputs and H̄_y, K̄_y for the outputs.
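The stacked prediction equation can be sanity-checked numerically. A minimal sketch with hypothetical system matrices and N = 3 (illustrative values, not the lecture's example):

```python
import numpy as np

# Sketch of the stacked prediction equation X = Abar X + Bbar U + E x(t)
# for a hypothetical 2-state, 1-input system and horizon N = 3.
A = np.array([[0.5, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
nx, nu, N = 2, 1, 3

# Abar / Bbar carry A and B on the first block sub-diagonal;
# E injects the measured state x(t) into the first block row.
Abar = np.zeros((N*nx, N*nx))
Bbar = np.zeros((N*nx, N*nu))
for k in range(1, N):
    Abar[k*nx:(k+1)*nx, (k-1)*nx:k*nx] = A
    Bbar[k*nx:(k+1)*nx, (k-1)*nu:k*nu] = B
E = np.vstack([np.eye(nx)] + [np.zeros((nx, nx))] * (N-1))

# Simulate the recursion x_{t+k+1} = A x_{t+k} + B u_{t+k} ...
x0 = np.array([1.0, -2.0])
U = np.array([0.3, -0.1, 0.7])
X = np.zeros(N*nx)
X[:nx] = x0
for k in range(1, N):
    X[k*nx:(k+1)*nx] = A @ X[(k-1)*nx:k*nx] + B.flatten() * U[k-1]

# ... and check it satisfies the stacked equation.
assert np.allclose(X, Abar @ X + Bbar @ U + E @ x0)
print("stacked prediction equation verified")
```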

In compact form the MPC problem becomes

min  Y^T Q̄_y Y + U^T Q̄_u U
s.t. X = Ā X + B̄ U + Ē x(t)
     Y = C̄ X + D̄ U
     H̄_x X ≤ K̄_x,  H̄_u U ≤ K̄_u,  H̄_y Y ≤ K̄_y

or, collecting the equality constraints into one linear system,

[ (I − Ā)  −B̄  0 ] [X]   [ Ē x(t) ]
[   −C̄    −D̄  I ] [U] = [   0    ]
                    [Y]

General case, YALMIP implementation

x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        y{k}'*Qy*y{k} + u{k}'*Qu*u{k};
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + [x{1} == xt], objective);

Problem of output regulation

The controller is only able to regulate signals which are penalized in the cost function:

min Σ_{k=0}^{N−1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U,  y_{t+k} ∈ Y

YALMIP implementation with norm-based penalties:

x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*y{k}, p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + [x{1} == xt], objective);

If there are no state constraints, some states could run away!

Example: unstable system

G(s) = 1 / ((s − 0.2)(s + 1))

[x_1(t+1); x_2(t+1)] = [0.51 0.29; 0.36 1.08] [x_1(t); x_2(t)] + [1.42; 0.40] u(t)
y(t) = x_1(t)

Constraints: −1 ≤ u(t) ≤ 1, −5 ≤ y(t) ≤ 5.

Only y = x_1 is penalized and constrained, so the other state x_2 runs away. The remedy is an added state constraint:

−5 ≤ x_2(t) ≤ 5

[Figure: trajectories of x_1 = y and x_2 with and without the added state constraint.]

Tracking

State reference tracking: penalize deviations of the predicted states from a reference x_ref:

min Σ_{k=0}^{N−1} ( ||Q_x (x_{t+k} − x_ref)||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U

State tracking, YALMIP implementation

xref = [2; 0.5];
x = {}; u = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-xref), p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax ];
end
info = solvesdp(constraints + [x{1} == xt], objective);

Output reference tracking:

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U,  y_{t+k} ∈ Y

Output tracking, YALMIP implementation

yref = 0.5;
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qu*u{k}, p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + [x{1} == xt], objective);

Example

G(s) = 1 / ((s + 1)(s + 2))

[x_1(t+1); x_2(t+1)] = [0.10 −0.47; 0.23 0.60] [x_1(t); x_2(t)] + [0.23; 0.20] u(t)
y(t) = x_2(t)

Constraints: −1 ≤ x_1(t) ≤ 1, −1 ≤ x_2(t) ≤ 1, −1 ≤ u(t) ≤ 1, −1 ≤ y(t) ≤ 1.
Tuning: Q_y = 1, Q_u = 1, N = 5, y_ref = 0.4.

Result: the output settles with a steady-state offset from the reference.

The reason for steady-state offset

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U,  y_{t+k} ∈ Y

The input penalty ||Q_u u_{t+k}||_p pushes this quantity towards zero, but holding a non-zero reference generally requires a non-zero steady-state input. The two terms fight each other:
- Q_u large: even larger offset
- Q_y large: smaller offset, but still non-zero

Solution #1

Minimize the difference between the predicted inputs and the corresponding steady-state values:

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_u (u_{t+k} − u_ss)||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U,  y_{t+k} ∈ Y

Now the quantity pushed towards zero is u_{t+k} − u_ss, which is exactly what should vanish at steady state.

Steady-state computation:

[ I − A   −B ] [x_ss]   [   0   ]
[   C      D ] [u_ss] = [ y_ref ]

Assumption: the matrix has full column rank.

Can be used for state tracking as well:

min Σ_{k=0}^{N−1} ( ||Q_x (x_{t+k} − x_ss)||_p + ||Q_u (u_{t+k} − u_ss)||_p )

Example (Solution #1)

Same system as before: G(s) = 1 / ((s + 1)(s + 2)), y(t) = x_2(t), with
−1 ≤ x_1(t) ≤ 1, −1 ≤ x_2(t) ≤ 1, −1 ≤ u(t) ≤ 1, −1 ≤ y(t) ≤ 1.
Tuning: Q_y = 1, Q_u = 1, N = 5, y_ref = 0.4.

The steady-state computation gives u_ss = 0.8. Penalizing u_{t+k} − u_ss instead of u_{t+k} yields zero steady-state offset.

Example: double integrator

[p(t+1); v(t+1)] = [1 1; 0 1] [p(t); v(t)] + [1; 0.5] a(t)

Tracking the position (C = [1 0]):

[ I − A  −B ]   [ 0 −1 −1   ]
[   C     D ] = [ 0  0 −0.5 ]   → rank = 3 (full column rank)
                [ 1  0  0   ]

Tracking the velocity (C = [0 1]):

[ I − A  −B ]   [ 0 −1 −1   ]
[   C     D ] = [ 0  0 −0.5 ]   → rank = 2 (rank deficient: a non-zero
                [ 0  1  0   ]     velocity reference cannot be held at steady state)

YALMIP implementation (explicit steady state via pseudo-inverse)

yref = 0.5;
M = [(eye(nx)-A) -B; C D];
ss = pinv(M)*[zeros(nx, 1); yref];
u_ss = ss(nx+1:end);
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + [x{1} == xt], objective);

YALMIP implementation (steady state as optimization variables)

yref = 0.5;
M = [(eye(nx)-A) -B; C D];
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
x_ss = sdpvar(nx, 1); u_ss = sdpvar(nu, 1);

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-x_ss), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
constraints = constraints + ...
    [ M*[x_ss; u_ss] == [zeros(nx, 1); yref] ];
info = solvesdp(constraints + [x{1} == xt], objective);

YALMIP implementation (symbolic reference)

M = [(eye(nx)-A) -B; C D];
x = {}; u = {}; y = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
x_ss = sdpvar(nx, 1); u_ss = sdpvar(nu, 1);
yref_symb = sdpvar(ny, 1);

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*(x{k}-x_ss), p) + ...
        norm(Qu*(u{k}-u_ss), p);
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          xmin <= x{k} <= xmax; ...
          umin <= u{k} <= umax; ...
          ymin <= y{k} <= ymax ];
end
constraints = constraints + ...
    [ M*[x_ss; u_ss] == [zeros(nx, 1); yref_symb] ];
info = solvesdp(constraints + [x{1} == xt] + ...
    [yref_symb == yref], objective);

Why does a PI controller provide offset-free behavior?

Because the loop contains an integrator. The same offset-free effect can be obtained with MPC by placing an integrator somewhere in the loop:

[Block diagrams: an integrator 1/s inside the plant; an integrator on the states; an integrator on the control inputs.]

Solution #2

Penalize the change of the control inputs instead of the inputs themselves:

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )

where Δu_{t+k} = u_{t+k} − u_{t+k−1} is the new optimization variable and Q_Δu the new penalty matrix.

- When at steady state, the inputs should not change, so Δu_{t+k} is exactly the quantity pushed towards zero.
- Introduces an integrator into the closed-loop system.
- Standard formulation achieved by state augmentation with x̃_{t+k} = [x_{t+k}; u_{t+k−1}]:

x̃_{t+k+1} = [A B; 0 I] x̃_{t+k} + [B; I] Δu_{t+k} =: Ã x̃_{t+k} + B̃ Δu_{t+k}
y_{t+k} = [C D] x̃_{t+k} + D Δu_{t+k} =: C̃ x̃_{t+k} + D Δu_{t+k}

The state and input constraints H_x x_{t+k} ≤ K_x and H_u u_{t+k} ≤ K_u stack accordingly:

[H_x 0; 0 H_u] x̃_{t+k} ≤ [K_x; K_u],  i.e.  H̃_xu x̃_{t+k} ≤ K̃_xu
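A quick numerical check of the augmentation, with hypothetical placeholder matrices (not the lecture's example):

```python
import numpy as np

# Sketch of the Delta-u state augmentation from Solution #2: the
# augmented state is [x; u_prev] and the new input is du = u - u_prev.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
nx, nu = 2, 1

At = np.block([[A, B], [np.zeros((nu, nx)), np.eye(nu)]])
Bt = np.vstack([B, np.eye(nu)])
Ct = np.hstack([C, D])

# One augmented step must reproduce x+ = A x + B u with u = uprev + du.
x = np.array([1.0, -1.0]); uprev = np.array([0.3]); du = np.array([0.2])
xt = np.concatenate([x, uprev])
xt_next = At @ xt + Bt @ du

u = uprev + du
assert np.allclose(xt_next[:nx], A @ x + B @ u)      # state block
assert np.allclose(xt_next[nx:], u)                  # stored-input block
assert np.allclose(Ct @ xt + D @ du, C @ x + D @ u)  # output equation
```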

Mathematical formulation

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )
s.t.  x̃_{t+k+1} = Ã x̃_{t+k} + B̃ Δu_{t+k}
      y_{t+k} = C̃ x̃_{t+k} + D Δu_{t+k}
      x̃_t = [x(t); u(t−1)]      (extended state feedback)
      H̃_xu x̃_{t+k} ≤ K̃_xu
      H_y y_{t+k} ≤ K_y

YALMIP implementation

At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N,   du{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + ...
    [xt{1} == [x0; uprev]; yref == yr], ...
    objective);

Closed-loop simulation:

x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
yr = [...];
for t = 1:simsteps
    solvesdp(constraints + [xt{1} == x0; yref == yr], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;   % reconstruct the actual input from the increment
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end

Example

Same system as before: G(s) = 1 / ((s + 1)(s + 2)), y(t) = x_2(t), with
−1 ≤ x_1(t) ≤ 1, −1 ≤ x_2(t) ≤ 1, −1 ≤ u(t) ≤ 1, −1 ≤ y(t) ≤ 1,
Q_y = 1, Q_Δu = 1, N = 5, y_ref = 0.4. The Δu formulation tracks the reference without steady-state offset.

Time-varying reference

The same controller tracks a reference that changes during the simulation; only the value yr passed to the solver changes:

x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
for t = 1:simsteps
    if t < 10
        yr = 0.4;
    else
        yr = -0.4;
    end
    solvesdp(constraints + [xt{1} == x0; yref == yr], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end

[Figure: closed-loop response y(t), r(t) and input u(t); the reference steps from 0.4 to −0.4 at t = 10 s.]

Preview - Example

Mathematical formulation with a time-varying reference signal y_{ref,t+k}:

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_{ref,t+k})||_p + ||Q_Δu Δu_{t+k}||_p )
s.t.  x̃_{t+k+1} = Ã x̃_{t+k} + B̃ Δu_{t+k}
      y_{t+k} = C̃ x̃_{t+k} + D Δu_{t+k}
      x̃_t = [x(t); u(t−1)]
      H̃_xu x̃_{t+k} ≤ K̃_xu,  H_y y_{t+k} ≤ K_y

[Figure: y(t), r(t) and u(t), without preview vs. with trajectory preview; with preview the controller starts moving before the reference changes.]

Preview:
- The controller can exploit knowledge of future references.
- Preview can greatly improve performance.
- Easy with MPC, not straightforward with PID/LQR.
- Compare driving on a road looking forward vs. following the road by looking through the side window!

YALMIP implementation

At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {}; yref = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N,   du{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
for k = 1:N,   yref{k} = sdpvar(ny, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref{k}), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax ];
end
info = solvesdp(constraints + ...
    [xt{1} == [x0; uprev]; ...
     yref{1} == yr(1); yref{2} == yr(2); ...
     yref{3} == yr(3); yref{4} == yr(4)], ...
    objective);

Closed-loop simulation with preview (one reference equality per preview step; the reference vector is padded so the preview window never runs past its end):

N = 4;
x = [...]; uprev = zeros(nu, 1);
x0 = [x; uprev];
yr = [0.4; 0.4; 0.5; 0.5; 0.1; 0.1; 0.1; 0.1];
yr = [yr; repmat(yr(end), N-1, 1)];   % pad with the last value
for t = 1:length(yr)-N+1
    solvesdp(constraints + [xt{1} == x0] + ...
        [yref{1} == yr(t); yref{2} == yr(t+1); ...
         yref{3} == yr(t+2); yref{4} == yr(t+3)], objective);
    duopt = double(du{1});
    uopt = duopt + uprev;
    xnext = A*x + B*uopt;
    x = xnext;
    uprev = uopt;
    x0 = [xnext; uprev];
end


Input slew-rate constraints

Abrupt changes of the control signal are not good for plant operation. With the Δu formulation, slew-rate constraints are simply bounds on the new optimization variable:

min Σ_{k=0}^{N−1} ( ||Q_y (y_{t+k} − y_ref)||_p + ||Q_Δu Δu_{t+k}||_p )
s.t.  x̃_{t+k+1} = Ã x̃_{t+k} + B̃ Δu_{t+k}
      y_{t+k} = C̃ x̃_{t+k} + D Δu_{t+k}
      x̃_t = [x(t); u(t−1)]
      H̃_xu x̃_{t+k} ≤ K̃_xu,  H_y y_{t+k} ≤ K_y
      Δu_min ≤ Δu_{t+k} ≤ Δu_max      (slew-rate constraint)

YALMIP implementation

At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N,   du{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax; ...
          dumin <= du{k} <= dumax ];
end
info = solvesdp(constraints + ...
    [xt{1} == [x0; uprev]; yref == yr], ...
    objective);

Output slew-rate constraints

The same idea applies to the outputs: bound the change of the output between consecutive prediction steps,

Δy_min ≤ y_{t+k+1} − y_{t+k} ≤ Δy_max      (slew-rate constraint on outputs)

YALMIP implementation

At = [A B; zeros(nu, nx) eye(nu)];
Bt = [B; eye(nu)];
Ct = [C D];
xt = {}; du = {}; y = {};
for k = 1:N+1, xt{k} = sdpvar(nx+nu, 1); end
for k = 1:N,   du{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
yref = sdpvar(ny, 1);

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qy*(y{k}-yref), p) + norm(Qdu*du{k}, p);
    constraints = constraints + ...
        [ xt{k+1} == At*xt{k} + Bt*du{k}; ...
          y{k} == Ct*xt{k} + D*du{k}; ...
          [xmin; umin] <= xt{k} <= [xmax; umax]; ...
          ymin <= y{k} <= ymax; ...
          dumin <= du{k} <= dumax ];
end
% output slew-rate constraints couple consecutive predicted outputs
for k = 1:N-1
    constraints = constraints + [ dymin <= y{k+1}-y{k} <= dymax ];
end
info = solvesdp(constraints + ...
    [xt{1} == [x0; uprev]; yref == yr], ...
    objective);

Example: AFTI F-16, output slew-rate constraints

[Figure: pitch angle and attack angle responses y(t), r(t) [deg] and elevator/flaps commands u(t) [deg] over 0-2 s, without an output slew-rate constraint (left) vs. with a limited output slew-rate (right).]

Initial MPC results: the system is stable and makes efficient use of the control authority. The MPC controller understands that the elevator will saturate and automatically uses more flaps to control the system. However, the attack angle is too large during the initial transient.

Improved tuning: an explicit constraint added on the attack angle keeps the initial transient within limits.

Constraints in MPC

Hard constraints
- must be satisfied at all times
- violations are not permitted
- examples: input constraints (a valve cannot be open to more than 100 %), safety limits (temperature, pressure)

Soft constraints
- can be violated sometimes
- violation of constraints is penalized
- examples:
  - qualitative variables (e.g. concentration should not drop below a certain threshold)
  - control performance (e.g. maximal overshoot; don't use too much input authority unless it's necessary)

Soft constraints, mathematical formulation

Hard-constrained problem:

min Σ_{k=0}^{N−1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      y_min ≤ y_{t+k} ≤ y_max
      u_min ≤ u_{t+k} ≤ u_max

Softened problem: introduce non-negative softening (slack) variables s_{y,t+k}, s_{u,t+k}, relax the bounds by them, and heavily penalize the violation of the constraints in the cost:

min Σ_{k=0}^{N−1} ( ||Q_y y_{t+k}||_p + ||Q_u u_{t+k}||_p + Q_{s,y} s_{y,t+k} + Q_{s,u} s_{u,t+k} )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      y_{t+k} = C x_{t+k} + D u_{t+k}
      x_t = x(t)
      y_min − s_{y,t+k} ≤ y_{t+k} ≤ y_max + s_{y,t+k}
      u_min − s_{u,t+k} ≤ u_{t+k} ≤ u_max + s_{u,t+k}
      s_{y,t+k} ≥ 0,  s_{u,t+k} ≥ 0

YALMIP implementation

x = {}; u = {}; y = {}; sy = {}; su = {};
for k = 1:N+1, x{k} = sdpvar(nx, 1); end
for k = 1:N,   u{k} = sdpvar(nu, 1); end
for k = 1:N,   y{k} = sdpvar(ny, 1); end
for k = 1:N,   sy{k} = sdpvar(ny, 1); end
for k = 1:N,   su{k} = sdpvar(nu, 1); end

objective = 0; constraints = [];
for k = 1:N
    objective = objective + ...
        norm(Qx*x{k}, p) + norm(Qu*u{k}, p) + ...
        Qsy*sy{k} + Qsu*su{k};
    constraints = constraints + ...
        [ x{k+1} == A*x{k} + B*u{k}; ...
          y{k} == C*x{k} + D*u{k}; ...
          (ymin - sy{k}) <= y{k} <= (ymax + sy{k}); ...
          (umin - su{k}) <= u{k} <= (umax + su{k}); ...
          0 <= sy{k} <= symax; 0 <= su{k} <= sumax ];
end
info = solvesdp(constraints + [x{1} == x0], objective);
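Why softening restores feasibility can be seen on a toy one-step problem. A sketch with a hypothetical scalar model (the numbers below are illustrative, not the slides'):

```python
import numpy as np

# Hypothetical scalar model y = 0.6 + 0.1*u with u in [-1, 1] and the
# output bound y <= 0.4.
def y(u):
    return 0.6 + 0.1 * u

# Hard constraint: infeasible, since even u = -1 gives y = 0.5 > 0.4.
assert min(y(u) for u in np.linspace(-1, 1, 201)) > 0.4

# Soft constraint y <= 0.4 + s with s >= 0: for a given u the cheapest
# slack is the violation itself, s(u) = max(0, y(u) - 0.4). Penalizing
# Qs*s then drives u to the least-violating input u = -1 with s = 0.1.
slack = lambda u: max(0.0, y(u) - 0.4)
u_grid = np.linspace(-1, 1, 201)
u_best = u_grid[np.argmin([slack(u) for u in u_grid])]
assert np.isclose(u_best, -1.0)
assert np.isclose(slack(u_best), 0.1)
```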

Non-minimum phase system

G(s) = (s − 0.25) / (s^2 + 3s + 2)

x(t+1) = [0.097 0.465; −0.233 0.600] x(t) + [0.233; 0.200] u(t)
y(t) = [1 −0.25] x(t)

Setup: x(t) = [4 0]^T, −1 ≤ u_t ≤ 1, N = 10, Q_x = I, Q_u = 1, quadratic performance objective, state regulation.

Without an output constraint, the optimal open-loop solution is

[U Y] =
  -0.0082   0.6199
  -0.0454   0.5048
  -0.0230   0.2304
  -0.0097   0.0948
  -0.0039   0.0379
  -0.0016   0.0151
  -0.0006   0.0060
  -0.0002   0.0024
  -0.0001   0.0009
  -0.0000   0.0004
  -0.0000   0.0001
  -0.0000   0.0001
  -0.0000   0.0000
  -0.0000   0.0000

Now add the output constraint y_t ≤ 0.4. The problem: no control input from [−1, 1] can satisfy the output constraint! With x(t) = [4 0]^T,

u(t) = u_max:  C(Ax(t) + Bu(t)) = 0.804
u(t) = u_min:  C(Ax(t) + Bu(t)) = 0.439

so the predicted output exceeds 0.4 for every admissible input, and the MPC problem is infeasible.

Soft output constraints

Same system and tuning, with the output constraint y_t ≤ 0.4 softened.

With Q_{s,y} = 1 the penalty is too weak and the solution stays close to the unconstrained one:

[U Y] =
  -0.0297   0.6160
  -0.0636   0.5049
  -0.0233   0.2351
  -0.0099   0.0972
  -0.0040   0.0390
  -0.0016   0.0155
  -0.0006   0.0061
  -0.0003   0.0024
  -0.0001   0.0010
  -0.0000   0.0004
  -0.0000   0.0002
  -0.0000   0.0001
  -0.0000   0.0000
  -0.0000   0.0000

With Q_{s,y} = 1000 the violation is kept small:

[U Y] =
  -0.8634   0.4637
  -1.0000   0.4665
  -0.3335   0.4017
  -0.0241   0.2582
  -0.0114   0.1139
  -0.0047   0.0465
  -0.0019   0.0186
  -0.0008   0.0074
  -0.0003   0.0029
  -0.0001   0.0012
  -0.0000   0.0005
  -0.0000   0.0002
  -0.0000   0.0001
  -0.0000   0.0000

Soft input constraints

Softening the input constraint −1 ≤ u_t ≤ 1 instead (Q_{s,u} = 1000) lets the controller exceed the input bounds in order to keep the hard output constraint y_t ≤ 0.4:

[U Y] =
  -1.2124   0.4000
  -1.6684   0.4000
  -1.0953   0.4000
  -0.3078   0.4000
  -0.0238   0.2543
  -0.0112   0.1120
  -0.0047   0.0456
  -0.0019   0.0182
  -0.0007   0.0072
  -0.0003   0.0029
  -0.0001   0.0011
  -0.0000   0.0004
  -0.0000   0.0002
  -0.0000   0.0001

Soft constraints, summary

What are they good for:
- prevent infeasibility if we have output and/or state constraints (having just input constraints cannot lead to infeasibility!)

When are they typically used:
- non-minimum phase systems
- systems with time delay
- unstable systems

Tuning:
- large penalization minimizes constraint violation


Complexity of MPC

min Σ_{k=0}^{N−1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p )
s.t.  x_{t+k+1} = A x_{t+k} + B u_{t+k}
      x_t = x(t)
      x_{t+k} ∈ X,  u_{t+k} ∈ U

Optimization variables: u_0^T, u_1^T, ..., u_{N−1}^T
Number of optimization variables: N · nu

The problem: optimization gets more complex with increasing value of the prediction horizon. Possible directions to complexity reduction:
- decrease the prediction horizon (not good for performance)
- restrict the number of optimization variables (the way to go)

[Figure: open-loop input sequence vs. blocked open-loop input sequence; after Nc free moves the input is held constant.]

Move blocking

min Σ_{k=0}^{Nc−1} ( ||Q_x x_{t+k}||_p + ||Q_u u_{t+k}||_p ) + Σ_{k=Nc}^{N−1} ||Q_x x_{t+k}||_p
s.t.  x_t = x(t)
      x_{t+k+1} = A x_{t+k} + B u_{t+k},  k = 0, ..., N−1
      x_{t+k} ∈ X,  k = 0, ..., N−1
      u_{t+k} ∈ U,  k = 0, ..., N−1
      u_{t+Nc+k} = u_{t+Nc−1},  k = 0, ..., N−Nc−1

Only the first Nc moves are free; the remaining inputs are fixed to the last free move.

Alternatives:
- block inputs to zero: u_{t+Nc+k} = 0
- block inputs by a state feedback: u_{t+Nc+k} = K x_{t+Nc+k}
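The blocking constraint is a linear map from the free moves to the full input sequence, which is how the variable count drops. A sketch with toy dimensions (nu = 1; values are illustrative):

```python
import numpy as np

# Move blocking: only the first Nc moves are free; the remaining
# N - Nc inputs are held equal to the last free move. The full input
# sequence is U = T @ Ufree, so the problem has Nc*nu variables
# instead of N*nu.
N, Nc = 10, 3
T = np.zeros((N, Nc))
T[:Nc, :Nc] = np.eye(Nc)
T[Nc:, Nc-1] = 1.0          # hold u_{t+Nc-1} over the tail of the horizon

Ufree = np.array([0.5, -0.2, 0.1])
U = T @ Ufree
assert np.allclose(U[:Nc], Ufree)        # free moves pass through
assert np.allclose(U[Nc:], Ufree[-1])    # blocked tail repeats the last move
print("variables reduced from", N, "to", Nc)
```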
