Automatica, Vol. 31, No. 10, pp. 1471-1482, 1995
Elsevier Science Ltd. Printed in Great Britain
Pergamon 0005-1098(95)00053-4
0005-1098/95 $9.50 + 0.00
Brief Paper

Internal Model Predictive Control (IMPC)*

ERIC COULIBALY,† SANDIP MAITI‡ and COLEMAN BROSILOW§

Key Words-Predictive control; multivariable control; saturation compensation; model-based control.

Abstract-A computationally simple model predictive control algorithm incorporates the attractive features of the internal model control (IMC) law. The algorithm first computes the IMC control effort via a model state feedback implementation that automatically compensates for past control effort saturation. Before applying the calculated control, the algorithm checks to see if this control effort, when applied over a single sampling interval and followed by a control effort at the opposite limit (relative to its steady-state level), will cause the model output to exceed its desired trajectory. If not, the calculated control is applied. Otherwise the control is reduced appropriately. Application of the new algorithm to a variety of linear single-input single-output systems shows a smooth, rapid response without significant overshoot. Comparisons with a QDMC algorithm, tuned to give the same unconstrained behavior as the IMC system and the best possible constrained performance, favor the IMPC system. Application of the new algorithm to a simple multivariable problem drawn from web control in film manufacturing demonstrates the flexibility of the algorithm in dealing with control effort saturation in multivariable systems.

* Received 21 December 1992; revised 20 March 1994; revised 3 January 1995; revised 27 January 1995; received in final form 15 March 1995. This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Wayne Bequette under the direction of Editor Yaman Arkun. Corresponding author Professor Coleman B. Brosilow. Tel. +1 216 368 3810; Fax +1 216 368 3016; E-mail cbb@po.cwru.edu.
† Nordson Corporation, Amherst, Ohio, U.S.A.
‡ ControlSoft Corp., Cleveland, Ohio, U.S.A.
§ Chemical Engineering Department, A. W. Smith Building, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106-7217, U.S.A.

1. Introduction
Most feedback control algorithms that are designed to accommodate control effort saturation can be grouped into either anti-windup-type algorithms (Walgama et al., 1992; Walgama and Sternby, 1993; Kothare et al., 1993) or model predictive control (MPC) algorithms such as DMC (Cutler and Ramaker, 1980) and QDMC (Garcia and Morshedi, 1986). The MPC algorithms are quite popular in the chemical and petroleum industries, whereas the anti-windup algorithms are most often used in the control of mechanical systems and aircraft, where the time scales for control are much shorter than in typical chemical processes. The anti-windup algorithms are usually adaptations of linear control algorithms, and hence have the advantage of allowing the control system designer to:

(i) tailor the output response as needed in the linear region (i.e. away from the control constraints);

(ii) accommodate anticipated modeling errors using recently developed frequency- or time-domain methods (Doyle, 1985; Laiseca and Brosilow, 1992, 1993; Boyd et al., 1994);

(iii) implement the control system efficiently in both hardware and software, and thereby achieve short computational times.

On the other hand, anti-windup algorithms suffer from what Walgama et al. (1992) termed a short-sightedness property that can seriously degrade control system performance. Qualitatively, this short-sightedness arises from computing the current control effort based only on the current state of the process and/or process model. This can result in the application of a control effort that drives the system harder than can be compensated for later owing to controller saturation, thereby degrading performance. MPC algorithms, such as QDMC, do not suffer from this defect, because they calculate the current control effort based on predictions of future outputs. The penalty for the prediction in QDMC is that the control system must solve a fairly large quadratic program at each sampling time.

To overcome the short-sightedness problem in anti-windup controllers, we propose adjusting the computed control effort based on a prediction of future outputs. The anti-windup algorithm that we use is a model state feedback (MSF) implementation of internal model control (IMC). The MSF implementation overcomes the potentially sluggish behavior of lead-lag IMC controllers in the presence of control effort saturation, because the model states reflect the control effort that was actually applied, and so the controller has an estimate of where the process is, as opposed to assuming that computed controls are applied, as is the case with a lead-lag IMC implementation. Also, the model state feedback implementation of IMC is simpler to implement than lead-lag IMC controllers, and facilitates transfers from manual to automatic control and vice versa. The reasons for choosing an IMC controller are that:

(i) there is an extensive literature on the design and tuning of IMC controllers in the absence of control effort constraints (Morari and Zafiriou, 1989);

(ii) the desired future trajectory of the system is easy to compute, because it is the output of the IMC filter whose initial state is the current model state.

The model state feedback algorithm presented in Sections 2 and 6 can be shown to yield the same controls under saturation as the saturation compensation algorithms of Campo and Morari (1990) for appropriate choices of parameters (Kothare et al., 1994). However, our version of model state feedback differs from that in the aforementioned references in that the algorithm is explicit in terms of the model states and the controller tuning parameters. This feature of our algorithm is important in stabilizing it and in extending it to multivariable systems, as discussed in Sections 3 and 6. Our implementations are also somewhat simpler than those of Campo and Morari (1990).

When the model state feedback implementation of IMC is combined with the predictive algorithm of Section 3, it becomes what we consider to be a model predictive algorithm, even though there is no integral performance measure that is minimized. When the control constraints are not active, the new IMPC algorithm behaves just like the IMC algorithm. When the control constraints are active, it behaves as an MPC algorithm, as shown in Section 5.

Why do we need yet another MPC algorithm? While IMPC will be seen to be computationally less expensive than


QDMC, the real potential advantage of IMPC is in the relative ease with which it can be designed to achieve desired behavior (e.g. non-interaction) and tuned to accommodate anticipated modeling errors. Further, as we show in Section 5, the performance of IMPC compares very favorably with QDMC for systems without right-half-plane zeros.

The models used in this paper are all parametric. However, both IMC and IMPC can be extended to include the use of non-parametric models such as step responses (Morari and Lee, 1991).

2. Model state feedback (MSF)
A model state feedback controller forms the IMC control effort from a linear combination of the states of the model rather than from the output of a lead-lag controller as in a standard implementation. The advantage of such an implementation is that it makes use of the model's knowledge of past controls, which is contained in its state. Thus a model state feedback controller automatically compensates for past control effort saturation, unlike the standard lead-lag IMC controller implementation, which computes its output undeterred by whether or not the computed controls are actually applied.

The plant to be controlled is assumed to be linear, time-invariant and continuous. We consider a single-input single-output model described by the transfer function

    G(s) = [N(s)/D(s)] e^{-Bs},    (1)

where N(s) is a polynomial of degree m, D(s) is a polynomial of degree n (m < n) and B is the model dead-time.

2.1. Single-degree-of-freedom controllers. The controller is designed based on the discrete-time zeroth-order-hold (ZOH) transfer function G(z) obtained from G(s) given by (1):

    G(z) = [N(z)/D(z)] z^{-θ},    (2)

where N(z) is the numerator polynomial of degree n - 1 and D(z) is the denominator polynomial of degree n. θ = B/T is the discrete dead-time multiple of the sampling time T, B is the continuous dead-time and T is the sampling time.

The design objective is to compute the discrete IMC control law using the states of the model. The discrete IMC controller is given by

    u(z) = G_M^{-1}(z) F_M(z) [r(z) - d̂(z)].    (3)

G_M(z) is the minimum-phase portion of G(z), with all the zeros with negative real parts substituted by zeros at the origin (Morari and Zafiriou, 1989), and F_M(z) is a filter as given below in (6).

The minimum-phase portion of G(z) can be written as

    G_M(z) = N_M(z)/D(z),    (4)

where N_M(z) is a polynomial of degree n - 1. It contains some or all of the zeros of N(z) inside the unit circle, with those outside replaced by their reciprocals. All the zeros with negative real parts are replaced by zeros at the origin to avoid intersample ripple in the output caused by ringing of the control effort. The steady-state gain of N_M(z) is equal to the steady-state gain of N(z).

The discrete IMC controller q(z) is chosen to force the discrete model G_M(z) to track a desired linear system, the filter F_M(z). That is,

    q(z) = G_M^{-1}(z) F_M(z).    (5)

Morari and Zafiriou (1989) suggested the use of a filter of the form (1 - α)z/(z - α). However, such a choice of filter leads to growing control efforts as the sampling time is reduced, and the output from the discrete controller does not approach that of the continuous controller as the sampling time tends to zero. To obtain a discrete controller that does approach the continuous controller for small sampling times, we recommend a filter of the form

    F_M(z) = Q_M(z)/(z - α)^r,    (6)

where Q_M(z) is a polynomial of order r - 1 and r is the relative order of the continuous process model G(s). The form of F_M(z) given by (6) is similar to that obtained from a zeroth-order-hold z transform of the filter for the continuous IMC controller that is of the form 1/(εs + 1)^r. Unfortunately, the z transform of such a filter can have a numerator polynomial Q_M(z) that has zeros outside the unit circle. Therefore we recommend either

(a) forming Q_M(z) by replacing the zeros outside the unit circle at z_j with zeros at 1/z_j, or

(b) forming Q_M(z) by replacing all the zeros of Q_M(z) with zeros at the origin, that is, setting Q_M(z) = (1 - α)^r z^{r-1}.

Choice (a) yields an F_M(z) whose step response closely approximates that of the continuous filter. Choice (b) yields an F_M(z) whose step response is not generally as close to the continuous filter step response as that of (a), but the resulting F_M(z) now has only a single tuning parameter, α, and so this filter is much easier to tune in the z domain than the F_M(z) given by (a).

The IMC controller using (5) gives the following control law:

    u(z) = [D(z)Q_M(z) / (N_M(z)(z - α)^r)] [r(z) - d̂(z)].    (7)

This law can be converted into model state feedback form by introducing the extended discrete state x_E(z) as

    x_E(z) = [1/V(z)] u(z),    (8)

where V(z) is the polynomial defined below by (12). The discrete model state feedback law relating u(z) and x_E(z) is

    u(z) = -K(z) x_E(z) + k_sp [r(z) - d̂(z)],    (9)

where K(z) is the discrete state feedback polynomial. Some manipulations of (8) and (9) yield the following expression for u(z):

    u(z) = k_sp [V(z)/(V(z) + K(z))] [r(z) - d̂(z)].    (10)

For the two expressions of u(z) in (7) and (10) to be equivalent, K(z) and V(z) should be defined as follows:

    K(z) = k_sp N_M(z)(z - α)^r - D(z)Q_M(z),    (11)

    V(z) = D(z)Q_M(z).    (12)

By (12), the discrete states defined in (8) are

    x_E(z) = [1/(D(z)Q_M(z))] u(z).    (13)

The above states are called extended states because the model states defined by 1/D(z) are augmented by the states of 1/Q_M(z).

K(z) has the form

    K(z) = Σ_{i=0}^{n+r-2} k_i z^i.    (14)

k_sp in (11) is chosen so that k_{n+r-1} = 0.

u(j) is computed in terms of the extended states as follows:

    u(j) = -Σ_{i=0}^{n+r-2} k_i x_E(j + i) + k_sp [r(j) - d̂(j)],    (15)

where j is the discrete time. The values of x_E required by (15) are available directly from the states of D^{-1}(z) and Q_M^{-1}(z) if the model simulation is carried out by cascading delays (i.e. by z^{-1} operators).

Figure 1 represents the conceptual diagram of the model state feedback control for discrete time.

2.2. Two-degree-of-freedom controllers. The model state feedback algorithm of Section 2.1 extends readily to

Fig. 1. Discrete-time model state feedback implementation of IMC.

two-degree-of-freedom control systems (Morari and Zafiriou, 1989). However, in predicting the output into the future, as described in the next section, we recommend assuming that the current disturbance estimate (i.e. d̂) is constant into the future, rather than assuming that the difference between the process and model outputs remains constant.

3. Predictive control algorithm for IMPC
The following predictive control is added to the model state feedback algorithm so as to stabilize it in the presence of control effort constraints. The concept is similar to that of QDMC. By projecting into the future and minimizing an integral objective function, the QDMC algorithm ensures that the control effort applied at the current sample will not cause future problems. Similarly, the IMPC algorithm projects the output into the future to determine whether the control effort computed by the model state feedback algorithm can be applied without causing future problems. The projection is carried out by predicting the minimum-phase portion of the model output (i.e. the output of G_M(z) of (4)), with the calculated control effort applied over the current sampling interval and all future controls applied at the opposite constraint boundary. If the minimum-phase portion of the process output predicted using this control effort trajectory does not overshoot the desired output trajectory (i.e. the future output of F_M(z) given by (6)), then it is safe to apply the calculated model state feedback control effort. If overshoots are predicted, the algorithm solves for the control effort to be applied at the current sampling interval to ensure that the predicted model output will not exceed the desired unconstrained output trajectory. This procedure is repeated every sampling period. The steps in the computation are as follows.

Step 1: compute the model state feedback control. Compute the current control effort using the model state feedback implementation of IMC.

Step 2: predict the model output. Predict the minimum-phase portion of the model output y_M(k + i | k) over the prediction horizon p:

    y_M(k + i | k) = y_M^0(k + i | k) + h_i u(k),  i = 1, ..., p.    (16)

Here y_M(k + i | k) is the prediction of the minimum-phase portion of the model output i units into the future, starting from time k; y_M^0(k + i | k) is the prediction of the minimum-phase portion of the model output i units into the future with u(k) set to zero and u(k + i | k) = u_opp for i = 1, ..., p; h_i is the ith impulse response coefficient of N_M(z)/D(z); u(k) is the value of the control effort computed from (15) at time t = kT, or is taken as the upper or lower limit if u(k) from (15) exceeds these limits.

The output y_M^0(k + i | k) is obtained by solving for x*(k + i | k) from (17) below and forming y_M^0(k + i | k) as the linear combination of the states of x*(k + i | k) given by (18):

    D(z^{-1}) x*(k + i | k) = u_f(k + i),  i = 1, ..., p,    (17)

with

    x*(k + i | k) = x(k + i),  i = 1, ..., n,
    u_f(k) = 0,
    u_f(k + i) = u_opp,  i = 1, ..., p.

y_M^0(k + i | k) is obtained from

    y_M^0(k + i | k) = N_M(z) x*(k + i | k),  i = 1, ..., p.    (18)

The opposite constraint boundary is always computed relative to the steady-state control effort required to achieve the current set point, assuming that the current disturbance estimate remains constant. For example, if the current control effort is greater than the steady-state control effort, then the opposite constraint boundary will be the lower-bound constraint on the control effort.

Step 3: generate the desired output trajectory. Compute the desired unconstrained output trajectory with the initial states of the filter F_M(z) at the current model states. The prediction horizon is the same as for the predicted model output. The desired output trajectory y_D(k + i | k) is calculated from

    (z - α)^r y_D(k + i | k) = Q_M(z)[r(k + i | k) - d̂(k + i | k)],  i = 0, 1, 2, ..., p,    (19)

with

    y_D(k + i | k) = x(k + i),  i = 0, 1, ..., r - 1,    (20)

    r(k + i | k) = r(k),  i = 0, 1, 2, ..., p,    (21a)

    d̂(k + i | k) = d̂(k),  i = 0, 1, 2, ..., p,    (21b)

where r(k + i | k) is the projection of the set point i units into the future from t = kT, and d̂(k + i | k) is the estimated disturbance i units into the future from t = kT. Generally, r = n, so that the initial state of the desired trajectory y_D(k + i | k) is the same as the model state, x(k + i).

Step 4: adjust the current control effort if necessary. We want the predicted trajectory computed in Step 2 (i.e. y_M(k + i | k), i = 0, 1, ..., p) not to overshoot the desired trajectory computed in Step 3 (i.e. y_D(k + i | k), i = 0, 1, ..., p). If r(k) - d̂(k) > x(k), then not overshooting means that the quantity y_D(k + i | k) - y_M(k + i | k) should be zero or positive for all i ≤ p. Conversely, if r(k) - d̂(k) < x(k), then not overshooting y_D(k + i | k) means that the quantity y_D(k + i | k) - y_M(k + i | k) should be zero or negative. If no overshoot occurs, then we apply the value of u(k) computed by the model state feedback law given by (15). Otherwise, for each value i = j where there is an overshoot, we set y_M(k + j | k) = y_D(k + j | k) and solve for u_j(k) from (16). That is,

    u_j(k) = [y_D(k + j | k) - y_M^0(k + j | k)]/h_j,    (22)

or u_j(k) is taken as the upper or lower limit if u_j(k) computed from (22) exceeds these limits.

The desired current control u(k) is

    u(k) = the value of u_j(k) that maximizes |u_m(k) - u_j(k)| over all j,    (23)

where u_m(k) is the value of u(k) computed by model state feedback (i.e. by (15)). The policy given by (23) allows for positive or negative process gains and for positive or negative values of u.

Step 4 assures that the applied control is not overly aggressive. It does not constitute a constraint on the output, which may or may not exceed the set point, depending on modeling error, controller tuning, and/or disturbances. Note that the above calculations are based only on the model, and,

as is typical with IMC controllers, modeling errors are accommodated by adjusting the filter time constant.

4. Stability of the IMPC algorithm
4.1. Perfect model. In the following we show that the IMPC algorithm generates a trajectory that converges to the desired steady state by showing that the IMPC trajectory lies either on or between two other trajectories, both of which converge to the steady state.

The portion of the process model that is being driven to steady state is that given by (4). The desired trajectory is always obtained as the output of the linear system given by (6) and driven by r(z), where r(z) is the z transform of the desired set point (a constant). Without loss of generality, we take the initial state of the model and filter as the zero state. To keep the discussion simple, we also assume, again without loss of generality, that the set point r and model gain are positive, that the lower bound U̲ on the control effort is less than zero, and that the upper bound Ū is such that the set point is achievable.

We adopt the following notation. y_j^{f,·} is the filter output trajectory j time units beyond the current time (·), starting from the state of the model at time (·); the filter time constant ε is the specified filter time constant. y_j^{f*,·} is the filter output trajectory j units beyond the current time (·), with y_0^{f*,·} = y_0^{f,·}; this filter has a time constant ε* that is greater than ε. y_j^{m,·} is the model output trajectory j units beyond the current time (·) computed by first applying the control u_0^{m,·}, followed by applying the control u̲ for all future times. That is, u_j^{m,·} is the applied control up to time (·), with

    u_j^{m,·} = u_0^{m,·} for j = 0;  u̲ for j = 1, ..., ∞,

and u_j^{f*,·} is the control that makes y_j^{m,·} track y_j^{f*,·} for j = 1, ..., ∞. Recall that u_0^{m,·} is computed so that either y_j^{m,·} = y_j^{f,·}, or the trajectory of y_j^{m,·} just touches the trajectory y_j^{f,·} at some point p in the future (i.e. y_j^{m,·} ≤ y_j^{f,·}, j = 1, 2, ..., p - 1, and y_p^{m,·} = y_p^{f,·}).

At the first time step, the initial state of the model and filter are zero, and there are three possibilities.

1. The control does not saturate (i.e. u_0^{m,0} < Ū), and applying the calculated control u_j^{f,0}, j = 0, ..., ∞, yields y_j^{m,0}, j = 1, ..., ∞, that does not cross y_j^{f,0}, j = 1, ..., ∞.

2. The control saturates at the upper limit (i.e. u_0^{m,0} = Ū), but applying u_j^{m,0}, j = 0, 1, ..., ∞, yields a model response y_j^{m,0}, j = 1, ..., ∞, that does not cross y_j^{f,0}, j = 1, ..., ∞.

3. Irrespective of whether or not the control saturates (at the upper limit), applying u̲ for all future controls shows that y_j^{m,0}, j = 1, ..., ∞, crosses y_j^{f,0}, j = 1, ..., ∞.

In Case 1 the model output y_1^{m,0} = y_1^{f,0} at the next time step. That is, the model output tracks the desired trajectory starting from the zero state for one time step.

In Case 2 the model output at the next time step, y_1^{m,0}, will lie below the filter output. That is,

    y_1^{m,0} < y_1^{f,0}.    (24)

However, the model output y_1^{m,0} must lie above zero, and is given by

    y_1^{m,0} = y_1^{f*,0} > 0.    (25)

The equality in (25) follows from the fact that there must exist a filter time constant less than infinity, but greater than the specified filter time constant, for which the computed control will lie at the upper extreme. Applying the upper extreme control will therefore cause the model output to track the trajectory of the filter with this larger time constant.

In Case 3 the control effort is reduced so that at some point p, y_p^{m,0} = y_p^{f,0} and y_j^{m,0} ≤ y_j^{f,0}, j ≤ p. Since we are starting from the zero state, it will also be true that y_p^{m,0} = y_p^{f*,0}. Note that the controls u_l^{m,0}, l = 1, 2, ..., can lie at u_opp for at most p future time units, because y_{p+1}^{m,0} < y_{p+1}^{f,0}, and the algorithm requires that at every time step u_0^{m,·} be computed to make y_j^{m,·} track y_j^{f,·}.

From the above, we see that at time step zero the actual model trajectory y_j^{m,0} is bounded above by y_j^{f,0} and below by y_j^{f*,0}, the output of a filter whose time constant ε* is greater than the specified filter time constant ε.

The model state at time l must either have evolved from the model state at l - 1 according to a filter trajectory y_j^{f*,l-1} with ε ≤ ε* < ∞, or from the initial state y_0^{f*,l-q} along a trajectory y_j^{f*,l-q}, j = 1, ..., q, ..., p, where y_0^{f*,l-q} = y_0^{f,l-q}. Since both y_j^{f*,l-1} and y_j^{f*,l-q} approach r as j → ∞, y_0^{m,l} must approach r as l → ∞. Further, there must be a point n such that y_0^{m,n} is close enough to the set point r that u_j^{m,n} lies within the control constraints for all j ≥ 0, and therefore y_j^{m,n} = y_j^{f,n} for all j ≥ 0. This completes the proof. □

4.2. Imperfect models. As before, we consider a linear, stable, minimum-phase system with no disturbance and a zero set point. Now, however, we add the restriction that the IMC filter time constant must be large enough that the IMC control system is stable over all possible processes in the absence of control effort constraints. If this latter restriction is satisfied, then we claim that the IMPC algorithm will be stable with constraints on the control.

To justify our claim, we note that the difference between the current case and the previous case is that the disturbance estimate d̂(k) (cf. Fig. 1) is not necessarily zero, even though d(k) = 0. Therefore the control effort u(k) now contains a term k_sp d̂(k) in addition to the linear combination of the states of the model. However, this additional term has no effect on the IMPC algorithm. The model output is projected into the future as before. The current control effort is still adjusted to prevent the model output from exceeding the filter output when the model is driven by the current control effort for one sampling interval followed by the minimum control effort for all future sampling intervals. The final result of the algorithm is still that the model output follows a sequence of arcs defined by the IMC filter output, with filter time constants equal to or greater than the filter time constant that stabilizes the unconstrained system. Therefore the algorithm is again stable in the presence of control effort constraints.

5. Performance comparison of IMPC with QDMC
We compare the IMPC algorithm with the quadratic dynamic matrix control (QDMC) method (Garcia and Morshedi, 1986) in the presence of hard constraints on the control effort and for perfect models. The assumption of no modeling error allows us to tune the IMC and QDMC controllers to provide relatively rapid responses to set point changes. Control effort constraints are more frequently active, and cause more difficulties, when the controller drives the process hard than when the controller is tuned less aggressively, as it must be when there is modeling error. Further, tuning to accommodate anticipated modeling error is generally accomplished in the linear region, where control effort constraints are not active. Finally, modeling error does not play a direct role in the computation within an MPC algorithm, because the computations are always performed for the model and not the process.

While it is our aim to tune the MPC controllers relatively aggressively, we do not wish to be so aggressive as to amplify process noise excessively. Therefore we have limited noise amplification to a factor of 20, as is done for the derivative action of PID controllers. The exact way in which this is accomplished is given in Section 5.2.

5.1. Tuning the IMPC and QDMC algorithms. For a SISO system, we need to adjust at least three tuning parameters in the QDMC algorithm (prediction horizon, control horizon, and control effort penalty), compared with two in IMPC (filter time constant ε and prediction horizon).

The IMC filter time constant ε is chosen based on either a noise rejection criterion or on the degree of modeling uncertainty. The prediction horizon is chosen long enough to ensure that the controlled process reaches steady state. A safe choice of the prediction horizon is the open-loop settling time (Coulibaly, 1992).

The QDMC simulations were generated using the Matlab MPC toolbox (Morari and Ricker, 1992). Additional

available tuning parameters in the simulator that were not used are the output penalty and control effort move suppression. The tuning method presented below for QDMC allows us to compare the performance of QDMC with IMPC.

Step 1. Select a large prediction horizon (the open-loop settling time, to be safe).

Step 2. Adjust the control horizon to produce the same response as IMPC without control effort constraints and with the control effort weight (or penalty) parameter set to zero.

Step 3. If in Step 2 the desired response cannot be met by adjustments of the prediction and control horizons alone, increase the weight on the control effort from zero and repeat Step 2.

The above tuning method ensures that the speed of response and the control effort are nearly the same for both the IMPC and QDMC algorithms in the absence of constraints. Thus it enables one to observe the performance of the algorithms when they handle constraints, and make qualitative comparisons.

5.2. Simulation results. In each of the following simulations the IMPC controller was formed from a filter given by (6) using choice (a) from Section 2.1 to find Q_M(z). That is, the IMPC filter is selected as the zeroth-order-hold z transform of the continuous filter. The time constant ε of the continuous controller q(s) was chosen as the smallest value satisfying the following criterion:

    choose ε such that max_ω |q(iω)/q(0)| ≤ 20.    (27)

This criterion limits the noise amplification in the continuous controller to a factor of 20.

The sampling time for both the IMPC and QDMC is one time unit. This is a relatively large sampling time compared with the process dynamics. Thus, except for Process 2, the noise amplification in the sampled controller is less than 20. By analogy with (27), we define the discrete controller noise amplification factor (naf) as

    naf = max_ω |q(e^{iωT})/q(1)|.    (28)

The naf is given in each of the following simulations as a measure of how hard the controller drives the process. Because any process dead time is outside the model state feedback loop, as shown in Fig. 1, an arbitrary dead time can be added to any of the following examples without changing the algorithm or the results, except to shift all curves to the right by the amount of the dead time. Thus the following examples are quite typical of those found in practice.

Process 1. This is a second-order underdamped minimum-phase process with

    G(s) = 5/(25s² + s + 1) = G_M(s),

    G(z) = (0.098z + 0.097)/(z² - 1.92z + 0.96),    (29)

    G_M(z) = 0.195z/(z² - 1.92z + 0.96) = N_M(z)/D(z),

    F(s) = 1/(1.2s + 1)²,
    F_M(z) = 0.203(z + 0.573)/(z - 0.435)² = Q_M(z)/(z - α)²,    (30)

    k_sp = 1.04,  K(z) = 2.36z² + 1.62z - 2.7,  naf = 9.8.    (31)

Constraints on control output (CO) for IMPC and QDMC:
constraint set 1: 0.1 ≤ CO ≤ 1.1;
constraint set 2: 0.1 ≤ CO ≤ 0.3.

IMPC tuning parameters:
filter time constant ε = 1.2;
prediction horizon P = 30.

QDMC tuning parameters:
prediction horizon P = 30;
control horizon M = 3;
control weight uwt = 0.44.

Figure 2 compares the discrete IMC lead-lag implementation using the controller given by G_M^{-1}(z)F_M(z) with the model state feedback implementation of IMC using K(z) as given above, and with the results from the IMPC implementation.

To compare IMPC with QDMC, an initially large prediction horizon was chosen as P = 30. As described earlier, the unconstrained speed of response was decided by a suitable choice of ε for the IMPC algorithm. The QDMC algorithm was tuned in the absence of constraints, and the control horizon M and control weight (uwt) were chosen by trial and error to match the unconstrained response of the process for both control algorithms. The settling time of the process is found to be 10 time units, and the control efforts for the two algorithms are virtually identical.

When constraint set 1 is imposed, the QDMC algorithm generates a response with no overshoot and settling time t = 18 units, while the IMPC algorithm produces a faster response with no overshoot in 13 units (Fig. 3). On imposing constraint set 2, the response time of QDMC is 19 units and that of the IMPC 12 units (Fig. 4).

Fig. 2. Process 1 response under constraint set 1: IMC versus IMPC.
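The gap between the lead-lag and MSF responses in Fig. 2 comes down to which control signal the states see. For a first-order model G(z) = b/(z - a) with filter F_M(z) = (1 - α)/(z - α), the construction of (11)-(15) collapses to scalars: k_sp = (1 - α)/b and K(z) = (1 - α)(a - α). A minimal sketch of this scalar case (all numerical values are illustrative assumptions, not taken from Process 1):

```python
# Scalar model state feedback (MSF) sketch for a hypothetical first-order
# model G(z) = b/(z - a) with IMC filter F_M(z) = (1 - alpha)/(z - alpha).
# For this case (11)-(15) reduce to scalars:
#   k_sp = (1 - alpha)/b,   K(z) = (1 - alpha)*(a - alpha).
# The extended state x_E = u/[(z - a)(1 - alpha)] is updated with the control
# that was ACTUALLY applied, which is what compensates for past saturation.

a, b, alpha = 0.9, 0.1, 0.5      # model pole, model gain, filter pole (assumed)
u_lo, u_hi = -2.0, 1.5           # hard limits on the control effort (assumed)
r = 1.0                          # set point; no disturbance, perfect model

k_sp = (1.0 - alpha) / b
k0 = (1.0 - alpha) * (a - alpha)

x_e = 0.0                        # extended model state, zero initial condition
for k in range(200):
    u = -k0 * x_e + k_sp * r                 # MSF law (15), scalar case
    u_applied = min(max(u, u_lo), u_hi)      # saturate the computed control
    # The state update uses u_applied, not u: no windup accumulates.
    x_e = a * x_e + u_applied / (1.0 - alpha)

y = b * (1.0 - alpha) * x_e      # model output recovered from the extended state
```

Because x_E is driven by the applied control rather than the computed one, past saturation is reflected in the state, which is the windup compensation described in Section 2; the output still settles at the set point once the control leaves its limit.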



Fig. 3. Process 1 response under constraint set 1.
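The naf quoted in (31) can be checked numerically from (28) with T = 1. Constant gains cancel in the ratio, so only the pole and zero locations of q(z) = G_M^{-1}(z)F_M(z) matter; the frequency-grid density below is an arbitrary choice:

```python
import numpy as np

# Discrete noise amplification factor (28) for Process 1, T = 1.
# q(z) = D(z)/N_M(z) * F_M(z); the constant gains of N_M and Q_M cancel
# in the ratio |q(e^{iw})|/|q(1)|, so they are omitted here.
def q_mag(z):
    D = z**2 - 1.92 * z + 0.96            # D(z) from (29)
    NM = z                                 # N_M(z) up to a gain (zero at origin)
    FM = (z + 0.573) / (z - 0.435)**2      # F_M(z) up to a gain, from (30)
    return np.abs(D / NM * FM)

w = np.linspace(1e-4, np.pi, 20001)        # grid density is arbitrary
naf = q_mag(np.exp(1j * w)).max() / q_mag(1.0)
```

With the rounded coefficients printed in (29)-(30), the scan lands close to the reported naf = 9.8; the small discrepancy is consistent with the coefficients being quoted to only two or three significant figures.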

Fig. 4. Process 1 response under constraint set 2.
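The constrained IMPC responses in Figs 3 and 4 are produced by Step 4 of Section 3, whose core arithmetic is small enough to sketch directly. The function below is our own illustrative rendering of (16), (22) and (23) for the case of a positive process gain; its inputs are the MSF control, the zero-input/opposite-limit prediction y0, the impulse coefficients h and the desired trajectory yd:

```python
# Step 4 of the IMPC algorithm in miniature (positive process gain assumed).
# y0[i] : prediction y_M^0(k+i|k) with u(k) = 0 and future controls at u_opp
# h[i]  : impulse response coefficients h_{i+1} of N_M(z)/D(z)
# yd[i] : desired trajectory y_D(k+i|k) from Step 3
def impc_step4(u_msf, y0, h, yd, u_lo, u_hi):
    u = min(max(u_msf, u_lo), u_hi)          # saturated MSF control (Step 1)
    candidates = [u]
    for i in range(len(h)):
        if y0[i] + h[i] * u > yd[i]:         # overshoot predicted by (16)
            uj = (yd[i] - y0[i]) / h[i]      # solve (16) for u(k), i.e. (22)
            candidates.append(min(max(uj, u_lo), u_hi))
    # (23): keep the candidate farthest from the unconstrained MSF value,
    # i.e. the most conservative cutback over all overshooting steps.
    return max(candidates, key=lambda c: abs(u_msf - c))
```

If no overshoot is predicted, the saturated MSF control is returned unchanged, exactly as Step 4 prescribes; otherwise the most conservative of the per-step solutions of (22) is applied.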


Process 2. This is a fourth-order overdamped minimum-phase process with

G(s) = 1 / [(16s + 1)(6.2s + 1)(3.4s + 1)(2.8s + 1)],   (32a)

G(z) = (3.71z^3 + 34.3z^2 + 28.8z + 2.19) × 10^-5 / (z^4 - 3.235z^3 + 3.908z^2 - 2.089z + 0.4168),   (32b)

G̃(z) = 6.906 × 10^-4 / (z^4 - 3.235z^3 + 3.908z^2 - 2.089z + 0.4168),   (32c)

F(s) = 1 / (2.35s + 1)^4,   (33)

η_M(z) = (6.89z^3 + 6.37z^2 + 1.13z + 0.0496) × 10^-3 / (z - 0.653)^4,

K(z) = -0.3026z^6 + 1.483z^5 - 2.123z^4 + 1.0877z^3 - 0.07592z^2 - 0.05232z - 0.002999,   (34a)

k_p = 0.982,  naf = 20.2.   (34b)

Constraints on control output (CO) for IMPC and QDMC:
constraint set 1: 0 ≤ CO ≤ 1.5;
constraint set 2: 1 ≤ CO ≤ 1.5.

IMPC tuning parameters:
filter time constant ε = 2.35;
prediction horizon P = 40.

QDMC tuning parameters:
prediction horizon P = 50;
control horizon M = 2;
control weight uwt = 0.080.

For the IMPC algorithm, we select the unconstrained speed of response by a suitable choice of ε, satisfying the noise rejection criterion for the controller given by (27). We pick the prediction horizon as the open-loop settling time. Similarly, for the QDMC algorithm, a large prediction horizon is selected initially as P = 50. The choice of control horizon M and control weight (uwt) is made by trial and error, while attempting to match the unconstrained response of the process to the response of the IMPC algorithm. The settling time of the controlled process in the absence of control effort constraints is found to be around 25 units (Fig. 5), and the control effort plots are similar.

When constraint set 1 is imposed, the QDMC algorithm generates a response with 3% overshoot and settling time t = 20 units, while the IMPC algorithm generates a zero-overshoot response with about the same settling time (Fig. 6). On imposing constraint set 2, the QDMC algorithm gives a response with an overshoot of about 18%, while the IMPC algorithm again yields no overshoot, with about the same settling time (Fig. 7).

In addition to the IMPC and QDMC responses, Fig. 7 also shows that the model state feedback (MSF) response (no prediction) yields an overshoot of about 21%. These results are perhaps not surprising, since the lower bound in constraint set 2 is the steady-state control effort. Thus, driving the system relatively hard initially, as do QDMC and MSF, cannot help but lead to an undesired overshoot.

Process 3. This is a third-order oscillatory non-minimum-phase process with

G(s) = 5(1 - 4s) / [(25s^2 + s + 1)(10s + 1)] = G_M(s),   (35)

G(z) = (-0.032z^2 + 0.01z + 0.04) / (z^3 - 2.83z^2 + 2.70z - 0.87),

G̃(z) = (0.841z^2 - 0.0655z) / (z^3 - 2.83z^2 + 2.70z - 0.869) = N_H(z)/D(z),   (36)

F(s) = 1 / (1.8s + 1)^2,   (37)

η_M(z) = 0.108(z + 0.690)^2 / (z - 0.574)^2 = Q_M(z) / (z - a)^2,

k_p = 1.28,  K(z) = 1.95z^3 + 4.41z^2 - 11.6z + 5.58,  naf = 12.6.   (38)
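Process 2's open-loop behavior and the role of the CO constraints can be reproduced directly from (32a) by cascading the four first-order lags. A minimal sketch (the Euler step size and simulation horizon are our assumptions; the time constants are those of (32a) as reconstructed above):

```python
import numpy as np

def simulate_process2(u_seq, dt=0.05):
    """Euler simulation of 1/((16s+1)(6.2s+1)(3.4s+1)(2.8s+1))
    as a cascade of four first-order lags, all states starting at zero."""
    taus = [16.0, 6.2, 3.4, 2.8]
    x = np.zeros(4)
    y = np.zeros(len(u_seq))
    for k, u in enumerate(u_seq):
        v = u
        for i, tau in enumerate(taus):
            x[i] += dt * (v - x[i]) / tau   # one lag stage
            v = x[i]                        # feeds the next stage
        y[k] = x[-1]
    return y

def clamp(u, lo, hi):
    """Saturation applied to the control output (CO)."""
    return min(max(u, lo), hi)

# Constraint set 1 for Process 2: 0 <= CO <= 1.5. A constant unit control is
# feasible, and the unit steady-state gain of (32a) drives the output to 1.
u = np.array([clamp(1.0, 0.0, 1.5)] * 4000)
y = simulate_process2(u)
print(round(y[-1], 2))
```

Note that under constraint set 2 the lower bound (CO = 1) is exactly this steady-state control effort, which is why aggressive initial moves cannot be "undone" later, as discussed above.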

Fig. 5. Process 2 unconstrained response.

Fig. 6. Process 2 response under constraint set 1.


Fig. 7. Process 2 response under constraint set 2.



Constraints on control output (CO) for IMPC and QDMC:
constraint set 1: 0.1 ≤ CO ≤ 1.5;
constraint set 2: 0.1 ≤ CO ≤ 0.3.

IMPC tuning parameters:
filter time constant ε = 1.8;
prediction horizon P = 30.

QDMC tuning parameters:
prediction horizon P = 30;
control horizon M = 4;
control weight uwt = 0.39.

A large prediction horizon is chosen initially as P = 30, and the unconstrained responses under both algorithms are matched by a suitable choice of the control horizon M and control weight (uwt) parameters in the QDMC algorithm. The speed of the IMPC response is decided by a suitable choice of ε, based on the noise rejection criterion described earlier.

The settling time of the controlled process in the absence of control effort constraints is found to be 26 units, and the process variable and control effort responses are almost identical. On imposing constraint set 1, the settling time for the QDMC and IMPC is 29 units, and again the responses are almost identical. On imposing constraint set 2, the QDMC response is faster than that of the IMPC algorithm, and it too shows no overshoot (Fig. 8).

6. Application of model state feedback to multivariable systems
The block diagram of Fig. 1 extends immediately to multivariable systems where the process model is a finite-dimensional multivariable system cascaded with a diagonal matrix of dead times. In this case the terms N(s) and D^-1(s) can be viewed as matrix fraction representations of the multivariable system (Kailath, 1980), and K(s) is a matrix of polynomials in s.

The coefficients of the polynomials in K(s) depend on the filter time constants. This fact is important because it allows us to modify the size and direction of the IMC control vector by adjusting filter time constants. To illustrate the importance of this flexibility, we shall apply model state feedback to a two-variable process studied by Campo and Morari (1990). Their process is a simple version of a model given by Laughlin et al. (1993) for cross-directional basis weight control in paper machines. Laughlin's model is given as

w(s) = AD^-1(s)e^(-θs)u(s),   (39)

where w(s) is the n-vector of basis weights at n positions across the paper, u(s) is the n-vector of slice openings, A is an n × n tridiagonal matrix of constants with positive diagonal elements and negative off-diagonal elements, D(s) is an n × n diagonal matrix of the form (τs + 1)I, and θ is the process dead time. The number of adjustable slice openings, n, ranges from 20 to about 100. The IMC controller for the process given by (39) is

q(s) = D(s)F(ε, s)A^-1,   (40)

where q(s) is the IMC controller, F(ε, s) is a diagonal matrix with elements 1/(ε_i s + 1), i = 1, ..., n, and ε_i is the filter time constant for output i. The continuous model state feedback implementation of (40) is given in Fig. 9, in which

G(s) = AD^-1(s)e^(-θs),   (41a)
K(s) = A^-1 Ω^-1 A - I,   (41b)
k_p = A^-1 Ω^-1,   (41c)

where Ω is a diagonal matrix with elements α_i, α_i = ε_i/τ.
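Equations (41b) and (41c) can be exercised numerically. The sketch below builds K = A^-1 Ω^-1 A - I and k_p = A^-1 Ω^-1 for a small tridiagonal A of the Laughlin type; the 3 × 3 matrix and the filter values are illustrative assumptions, and the check uses the special case that equal filter time constants collapse K to (τ/ε - 1)I:

```python
import numpy as np

tau = 10.0                          # time constant of D(s) = (tau*s + 1)I
A = np.array([[ 4.0, -1.0,  0.0],   # hypothetical tridiagonal interaction
              [-1.0,  4.0, -1.0],   # matrix: positive diagonal, negative
              [ 0.0, -1.0,  4.0]])  # off-diagonals, as in (39)

def msf_matrices(eps):
    """K and k_p of (41b), (41c) for filter time constants eps_i."""
    Omega = np.diag(np.asarray(eps) / tau)           # Omega_ii = eps_i / tau
    Ainv = np.linalg.inv(A)
    K = Ainv @ np.linalg.inv(Omega) @ A - np.eye(len(eps))
    kp = Ainv @ np.linalg.inv(Omega)
    return K, kp

# Equal filters: Omega = (eps/tau) I, so K = (tau/eps - 1) I.
K, kp = msf_matrices([2.0, 2.0, 2.0])
print(np.allclose(K, (tau / 2.0 - 1.0) * np.eye(3)))
```

With unequal ε_i the similarity transform no longer commutes and K becomes a full matrix, which is exactly the mechanism the text exploits to reshape the direction of the IMC control vector.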

Fig. 8. Process 3 response under constraint set 2.

Fig. 9. Model state feedback implementation of IMC.

That the above substitutions result in the same control effort as (40), in the absence of constraints, can be verified by collapsing the feedback loop through K(s) in Fig. 9. The reason that K(s) in (41b) does not involve s is that the process is first-order (i.e. D^-1(s) = (τs + 1)^-1 I).

Campo and Morari's version of the process given by (39) is

A = [ 4  -5
     -3   4 ],  τ = 10,  θ = 0.   (42)

Also, they take

-15 ≤ u_i ≤ 15,  i = 1, 2,   (43a)
ε_i = 1,  i = 1, 2,   (43b)
r(t) = [0.61, 0.79]ᵀ for t > 0,  r(0) = 0,  w(0) = 0,   (43c)

where r(t) is the set point (Fig. 9).

Applying the step set-point change given above to the model state feedback system of Fig. 9, with the process given by (39), (42) and (43), gives the output and controls shown in Fig. 10. The output responses deviate substantially from the ideal loop responses, which are first-order lags with time constants of one unit (i.e. G(s)q(s) = (s + 1)^-1 I). Campo and Morari (1990) obtain the same results as shown in Fig. 10 using a different control structure, but one that yields the identical control effort for the above example. They ascribe the poor performance of the control system to the distortion of directionality that occurs when the computed controls are truncated by saturation. To maintain directionality, they propose an algorithm that shortens the control vector by the least amount that brings all components within the constraint set. This algorithm gives good results for the example problem.

An alternate view of the source of the poor responses in Fig. 10 is that saturation results in the application of control efforts that are not consistent with the computed IMC control efforts. To obtain controls that are consistent with an IMC control, we suggest temporarily increasing one or more filter time constants to bring the IMC control vector within the constraint set. For the example problem, when both filter time constants are increased by the same fraction, the control efforts are indistinguishable from those found by Campo and Morari (1990) using their algorithm that maintains the control vector directionality. The two algorithms are not identical, however, and may behave quite differently on other examples. Further, the simple policy of increasing both filter time constants proportionally is not the only possible
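For this example the proportional filter-increase policy has a simple closed form. Applying the initial-value theorem to (40) gives the initial IMC control after a step r as u(0+) = diag(τ/ε_i) A^-1 r (this derivation is ours; the paper does not spell it out), so scaling both ε_i by a common factor α shrinks u(0+) by 1/α without changing its direction. A sketch using the data of (42) and (43) as reconstructed above:

```python
import numpy as np

A = np.array([[ 4.0, -5.0],
              [-3.0,  4.0]])        # interaction matrix of (42)
tau, eps = 10.0, np.array([1.0, 1.0])
r = np.array([0.61, 0.79])          # step set point of (43c), as reconstructed
u_max = 15.0                        # |u_i| <= 15 from (43a)

def u0(alpha):
    """Initial IMC control diag(tau/(alpha*eps)) A^-1 r -- our derivation
    via the initial-value theorem applied to (40)."""
    return (tau / (alpha * eps)) * np.linalg.solve(A, r)

# Smallest alpha >= 1 that brings the initial control inside the box:
alpha = max(1.0, np.max(np.abs(u0(1.0))) / u_max)
print(np.max(np.abs(u0(alpha))) <= u_max + 1e-9)    # feasible after scaling
print(u0(alpha) / np.linalg.norm(u0(alpha)))        # direction is unchanged
```

Because both components scale identically, the shortened vector points the same way as the unconstrained one, which is why this policy reproduces Campo and Morari's direction-preserving results for this example.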

Fig. 10. Response of example problem using uncompensated model state feedback.


Fig. 11. Comparison of responses for alternate filter time constant adjustment policies using compensated model state feedback.
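The 'priority for output 1' policy compared in Fig. 11 can be sketched per output: since u_i(0+) = (τ/ε_i)(A^-1 r)_i (our derivation from (40)), increasing only ε_2 shrinks control component 2 alone. The doubling search and the cap standing in for infinity below are our illustrative stand-ins, using the data of (42) and (43) as reconstructed:

```python
import numpy as np

A = np.array([[4.0, -5.0], [-3.0, 4.0]])        # interaction matrix of (42)
tau, r, u_max = 10.0, np.array([0.61, 0.79]), 15.0

def u0(eps):
    # Initial IMC control for filter time constants eps (our derivation).
    return (tau / np.asarray(eps)) * np.linalg.solve(A, r)

eps = np.array([1.0, 1.0])
# Step 1: grow eps_2 (the deprioritized output) up to a cap playing
# the role of infinity, as in the text's "we took 1000 = infinity".
while np.max(np.abs(u0(eps))) > u_max and eps[1] < 1000.0:
    eps[1] *= 2.0
# Step 2: only if the control is still infeasible, grow eps_1 as well.
while np.max(np.abs(u0(eps))) > u_max:
    eps[0] *= 2.0
print(eps)
```

In this example step 1 alone cannot succeed, because u_1(0+) does not depend on ε_2; ε_2 is driven to its cap (freezing output 2 at its initial set point) and a modest increase of ε_1 then makes the control feasible.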

policy. When process variable 1 is more important than process variable 2, it makes sense to give priority to process variable 1 by first increasing filter time constant 2, and only if this fails to bring the control vector within constraints, increasing filter time constant 1. The results of this policy for the same problem and set-point changes are shown in Fig. 11, labeled 'priority for output 1'. Up to about one unit of time, filter time constant 2 is increased to infinity (we took 1000 = infinity), thus causing process variable 2 to remain at its initial set point. Notice that process variable 1 responds much more rapidly under this new policy, while the actual settling time of process variable 2 is not increased appreciably.

The policy of increasing filter time constants to shrink the control vector works for the example because the computed controls always exceed their steady-state levels. That is, the available controls are always driving the process hard to match a response with a (filter) time constant of unity. Therefore, by asking for a slower response (i.e. by increasing filter time constants), one obtains a less aggressive IMC control. In higher-order processes, however, it can happen that the process gets to a state where the IMC control effort must slow down the process in order to track the desired response. If the control effort required to slow the process lies outside the constraint set then, in order to bring the vector within constraints, the filter time constants will have to be decreased. That is, we need to request a faster response. But considerations relating to modeling errors and/or noise amplification may prohibit decreasing the filter time constants. The solution to this problem is never to allow control actions to bring the process to a state where the only way to bring the control vector into the constraint set is to decrease filter time constants below those consistent with modeling errors and noise. In addition, the control algorithm has to prevent controller-induced instabilities due to limit cycles. Thus, for multivariable systems, we need to develop model predictive algorithms like the IMPC algorithm presented in this paper for single-input single-output systems.

7. Conclusions
In our SISO examples the IMPC algorithm compensates for control effort constraints as well as, or better than, QDMC for processes without right-half-plane zeros. For a process with right-half-plane zeros, QDMC performed as well as or better than IMPC when encountering control effort constraints. The aforementioned results are perhaps not very surprising (at least in hindsight). The qualitatively best response of a process with no right-half-plane zeros is one that achieves its set point rapidly and without overshoot. Such a response can be obtained from a quadratic objective functional minimization only by penalizing the control effort, through either specifying a small number of allowable control moves and/or a substantial weight on the square of the size of the control moves. Both of these actions make the performance measure, and the actual performance, sensitive to the location and size of the control effort constraints. On the other hand, the best performance for processes with right-half-plane zeros is obtained by minimizing a performance criterion that is the same as that used by QDMC with no weight given to the control effort. Thus QDMC handles processes with right-half-plane zeros better than does IMPC. The converse is true for processes without right-half-plane zeros.

The IMPC algorithm is substantially easier to tune than the QDMC algorithm, since it has only one tuning parameter, the filter time constant. A substantial effort was required to adjust the QDMC algorithm to give the desired unconstrained response. A similar effort is required to obtain
any desired response. Further, there is no published method for adjusting the QDMC response to accommodate general process uncertainty descriptions (an algorithm is suggested by Prett and Garcia (1988) for process gain uncertainties). On the other hand, several algorithms are available for finding an IMC filter time constant that accommodates general uncertainty descriptions (Morari and Zafiriou, 1989; Laiseca and Brosilow, 1992).

Extension of model state feedback to multivariable systems that have simple matrix fraction representations leads to new insights and new methods for accommodating control effort saturation. The discussion points out, however, that it will be necessary to combine MSF with predictive algorithms, perhaps similar to that presented here for SISO systems, in order to assure good performance for even relatively simple multivariable systems.

Finally, while this work has dealt only with linear processes, the methods presented for dealing with control effort constraints should extend in a straightforward way to geometric nonlinear control systems (Daoutidis and Kravaris, 1989).

Acknowledgement. The authors are indebted to their colleague Professor Ken Loparo for his assistance with the stability proof.

References
Boyd, S., L. El Ghaoui, E. Feron and V. Balakrishnan (1994). Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, PA.
Campo, P. J. and M. Morari (1990). Robust control of processes subject to saturation nonlinearities. Comput. Chem. Engng, 14, 343-358.
Coulibaly, E. (1992). Internal model predictive control using model state feedback. MS thesis, Case Western Reserve University, Cleveland, OH.
Cutler, C. R. and B. C. Ramaker (1980). Dynamic matrix control - a computer control algorithm. In Proc. Joint Automatic Control Conf., San Francisco, CA, Paper WP-5.
Daoutidis, P. and C. Kravaris (1989). Synthesis of feedforward/state feedback controllers for nonlinear processes. AIChE J., 35, 1602-1616.
Doyle, J. C. (1985). Structured uncertainty in control system design. In Proc. 24th IEEE Conf. on Decision and Control, Ft Lauderdale, FL.
Garcia, C. E. and A. M. Morshedi (1986). Quadratic programming solution of dynamic matrix control (QDMC). Chem. Engng Commun., 46, 73-87.
Kailath, T. (1980). Linear Systems. Prentice-Hall, Englewood Cliffs, NJ.
Kothare, M. V., P. J. Campo, M. Morari and C. N. Nett (1994). A unified framework for the study of anti-windup designs. Automatica, 30, 1869-1883.
Laiseca, M. and C. Brosilow (1992). Tuning control systems for parametric uncertainty. Presented at Automatic Control Conf., Chicago, IL.
Laughlin, S., M. Morari and R. Braatz (1993). Robust performance of cross directional basis-weight control in paper machines. Automatica, 29, 1395-1410.
Morari, M. and J. H. Lee (1991). Model predictive control: the good, the bad and the ugly. In Proc. Conf. on Chemical Process Control, CPC-IV, pp. 419-444. AIChE, New York.
Morari, M. and L. Ricker (1992). CACHE Model Predictive Control Toolbox, V1.0. CACHE Corp.
Morari, M. and E. Zafiriou (1989). Robust Process Control. Prentice-Hall, Englewood Cliffs, NJ.
Prett, D. M. and C. E. Garcia (1988). Fundamental Process Control. Butterworths, Stoneham, MA.
Walgama, K. S. and J. Sternby (1993). Conditioning technique for multiinput multioutput processes with input saturation. IEE Proc., Pt D, 140, 231-241.
Walgama, K. S., S. Rönnbäck and J. Sternby (1992). Generalization of conditioning technique for anti-windup compensators. IEE Proc., Pt D, 139, 109-118.
