
Lecture Notes for the Course SC4060

Model Predictive Control

Faculty of Mechanical, Maritime and Materials Engineering

January 2013

dr.ir. Ton J.J. van den Boom

Delft University of Technology

Article number 06917690017

Model Predictive Control

dr.ir. Ton J.J. van den Boom


Delft Center for Systems and Control
Delft University of Technology
Mekelweg 2, NL-2628 CD Delft, The Netherlands
Tel. (+31) 15 2784052
Email: A.J.J.vandenBoom@tudelft.nl

Contents

1 Introduction . . . 9
  1.1 Model predictive control . . . 9
  1.2 History of Model Predictive Control . . . 10

2 The basics of model predictive control . . . 13
  2.1 Introduction . . . 13
  2.2 Process model and disturbance model . . . 13
      2.2.1 Input-output (IO) models . . . 15
      2.2.2 Other model descriptions . . . 20
  2.3 Performance index . . . 21
  2.4 Constraints . . . 25
  2.5 Optimization . . . 28
  2.6 Receding horizon principle . . . 29

3 Prediction . . . 31
  3.1 Noiseless case . . . 32
  3.2 Noisy case . . . 33
  3.3 The use of a priori knowledge in making predictions . . . 36

4 Standard formulation . . . 39
  4.1 The performance index . . . 39
  4.2 Handling constraints . . . 44
      4.2.1 Equality constraints . . . 46
      4.2.2 Inequality constraints . . . 48
  4.3 Structuring the input with orthogonal basis functions . . . 50
  4.4 The standard predictive control problem . . . 51
  4.5 Examples . . . 52

5 Solving the standard predictive control problem . . . 55
  5.1 The finite horizon SPCP . . . 56
      5.1.1 Unconstrained standard predictive control problem . . . 56
      5.1.2 Equality constrained standard predictive control problem . . . 58
      5.1.3 Full standard predictive control problem . . . 62
  5.2 Infinite horizon SPCP . . . 65
      5.2.1 Steady-state behavior . . . 66
      5.2.2 Structuring the input signal for infinite horizon MPC . . . 67
      5.2.3 Unconstrained infinite horizon SPCP . . . 69
      5.2.4 The Infinite Horizon Standard Predictive Control Problem with control horizon constraint . . . 72
      5.2.5 The Infinite Horizon Standard Predictive Control Problem with structured input signals . . . 76
  5.3 Implementation . . . 89
      5.3.1 Implementation and computation in LTI case . . . 89
      5.3.2 Implementation and computation in full SPCP case . . . 91
  5.4 Feasibility . . . 96
  5.5 Examples . . . 97

6 Stability . . . 101
  6.1 Stability for the LTI case . . . 101
  6.2 Stability for the inequality constrained case . . . 105
  6.3 Modifications for guaranteed stability . . . 105
  6.4 Relation to IMC scheme and Youla parametrization . . . 116
      6.4.1 The IMC scheme . . . 116
      6.4.2 The Youla parametrization . . . 121
  6.5 Robustness . . . 124

7 MPC using a feedback law, based on linear matrix inequalities . . . 131
  7.1 Linear matrix inequalities . . . 132
  7.2 Unconstrained MPC using linear matrix inequalities . . . 134
  7.3 Constrained MPC using linear matrix inequalities . . . 137
  7.4 Robustness in LMI-based MPC . . . 140
  7.5 Extensions and further research . . . 146

8 Tuning . . . 151
  8.1 Initial settings for the parameters . . . 152
      8.1.1 Initial settings for the summation parameters . . . 152
      8.1.2 Initial settings for the signal weighting parameters . . . 154
  8.2 Tuning as an optimization problem . . . 158
  8.3 Some particular parameter settings . . . 163

Appendix A: Quadratic Programming . . . 167

Appendix B: Basic State Space Operations . . . 177

Appendix C: Some results from linear algebra . . . 181

Appendix D: Model descriptions . . . 183
  D.1 Impulse and step response models . . . 183
  D.2 Polynomial models . . . 191

Appendix E: Performance indices . . . 201

Appendix F: Manual MATLAB Toolbox . . . 205
  Contents . . . 206
  Introduction . . . 207
  Reference . . . 209

Index . . . 257

References . . . 259

Abbreviations

CARIMA model   controlled autoregressive integrated moving average
CARMA model    controlled autoregressive moving average
DMC            Dynamic Matrix Control
EPSAC          Extended Prediction Self-Adaptive Control
GPC            Generalized Predictive Control
IDCOM          Identification Command
IIO model      Increment-Input Output model
IO model       Input Output model
LQ control     Linear-Quadratic control
LQPC           Linear-Quadratic Predictive Control
MAC            Model Algorithmic Control
MBPC           Model Based Predictive Control
MHC            Moving Horizon Control
MIMO           Multiple-Input Multiple-Output
MPC            Model Predictive Control
PFC            Predictive Functional Control
QDMC           Quadratic Dynamic Matrix Control
RHC            Receding Horizon Control
SISO           Single-Input Single-Output
SPC            Standard Predictive Control
UPC            Unified Predictive Control

Chapter 1

Introduction

1.1 Model predictive control

The process industry is characterized by product quality specifications that become tighter and tighter, increasing productivity demands, new environmental regulations and fast changes in the economical market. In the last decades Model Predictive Control (MPC), also referred to as Model Based Predictive Control (MBPC), Receding Horizon Control (RHC) and Moving Horizon Control (MHC), has shown to be able to respond in an effective way to these demands in many practical process control applications and is therefore widely accepted in the process industry. MPC is more of a methodology than a single technique. The difference between the various methods is mainly the way the problem is translated into a mathematical formulation, so that the problem becomes solvable in a limited amount of time. However, in all methods five important items are recognizable in the design procedure:

Process and disturbance model: The process and disturbance models are mostly chosen linear. Extensions to nonlinear models can be made, but will not be considered in this course. On the basis of the model a prediction of the process signals over a specified horizon is made. A good performance in the presence of disturbances can be obtained by using a prediction of the effect of the disturbance signal, given its characteristics in the form of a disturbance model.

Performance index: A quadratic (2-norm) performance index or cost criterion is formulated, reflecting the reference tracking error and the control action.

Constraints: In practical situations, there will be constraints on control, state and output signals, motivated by safety and environmental demands, economic requirements such as cost, and equipment limitations.

Optimization: An optimization algorithm will be applied to compute a sequence of future control signals that minimizes the performance index subject to the given constraints. For linear models with linear constraints and a quadratic performance index the solution can be found using quadratic programming algorithms.

Receding horizon principle: Predictive control uses the so-called receding horizon principle. This means that after computation of the optimal control sequence, only the first control sample will be implemented; subsequently the horizon is shifted one sample and the optimization is restarted with new information from the measurements.

1.2 History of Model Predictive Control

Since 1970 various techniques have been developed for the design of model based control systems for robust multivariable control of industrial processes ([10], [24], [25], [29], [43], [46]). Predictive control was pioneered simultaneously by Richalet et al. [53], [54] and Cutler & Ramaker [18]. The first implemented algorithms and successful applications were reported in the referenced papers. Model Predictive Control technology has evolved from a basic multivariable process control technology to a technology that enables operation of processes within well defined operating constraints ([2], [6], [50]). The main reasons for the increasing acceptance of MPC technology by the process industry since 1985 are clear:

MPC is a model based controller design procedure, which can easily handle processes with large time-delays, non-minimum phase behavior and unstable dynamics.

It is an easy-to-tune method: in principle there are only three basic parameters to be tuned.

Industrial processes have their limitations in, for instance, valve capacity and other technological requirements, and are supposed to deliver output products with some pre-specified quality specifications. MPC can handle these constraints in a systematic way during the design and implementation of the controller.

Finally, MPC can handle structural changes, such as sensor and actuator failures and changes in system parameters and system structure, by adapting the control strategy on a sample-by-sample basis.

However, the main reasons for its popularity are the constraint-handling capabilities, the easy extension to multivariable processes and, most of all, the increased profit discussed in the first section. From the academic side the interest in MPC mainly came from the field of self-tuning control. The problem of Minimum Variance control (Astrom & Wittenmark [3]) was studied while minimizing the performance index J(u,k) = E{ (r(k+d) - y(k+d))^2 } at time k, where y(k) is the process output signal, u(k) is the control signal, r(k) is the reference signal, E(.) stands for expectation and d is the process dead-time. To overcome stability problems with non-minimum phase plants, the performance index was modified by adding a penalty on the control signal u(k). Later this u(k) in the performance index was replaced by the increment of the control signal Δu(k) = u(k) - u(k-1) to guarantee


a zero steady-state error. To handle a wider class of unstable and non-minimum phase systems, and systems with poorly known delay, the Generalized Predictive Control (GPC) scheme (Clarke et al. [13],[14]) was introduced with a quadratic performance index.
In GPC mostly polynomial based models are used. For instance, Controlled AutoRegressive Moving Average (CARMA) models or Controlled AutoRegressive Integrated Moving Average (CARIMA) models are popular. These models describe the process using a minimum number of parameters and therefore lead to effective and compact algorithms. Most GPC literature in this area is based on Single-Input Single-Output (SISO) models. However, the extension to Multiple-Input Multiple-Output (MIMO) systems is straightforward, as was shown by De Vries & Verbruggen [21] using a MIMO polynomial model, and by Kinnaert [34] using a state-space model.
This text covers state-of-the-art technologies for model predictive process control that are good candidates for future generations of industrial model predictive control systems. Like all other controller design methodologies, MPC also has its drawbacks:

A detailed process model is required. This means that either one must have good insight in the physical behavior of the plant, or system identification methods have to be applied to obtain a good model.

The methodology is open, and many variations have led to a large number of MPC methods. We mention IDCOM (Richalet et al. [54]), DMC (Cutler & Ramaker [17]), EPSAC (De Keyser and Van Cauwenberghe [20]), MAC (Rouhani & Mehra [57]), QDMC (Garcia & Morshedi [27]), GPC (Clarke et al. [14],[15]), PFC (Richalet et al. [52]) and UPC (Soeterboek [61]).

Although, in practice, stability and robustness are easily obtained by accurate tuning, theoretical analysis of stability and robustness properties is difficult to derive.

Still, in industry, for supervisory optimizing control of multivariable processes MPC is often preferred over other controller design methods, such as PID, LQ and H-infinity control. A PID controller is also easily tuned, but can only be straightforwardly applied to SISO systems. LQ and H-infinity control can be applied to MIMO systems, but cannot handle signal constraints in an adequate way. These techniques also exhibit difficulties in realizing robust performance for varying operating conditions. Essential in model predictive control is the explicit use of a model that can simulate the dynamic behavior of the process at a certain operating point. In this respect, model predictive control differs from most of the model based control technologies that have been studied in academia in the sixties, seventies and eighties. Academic research has been focusing on the use of models for controller design and robustness analysis of control systems for quite some time. With their initial work on internal model based control, Garcia and Morari [26] made a first step towards bridging academic research in the area of process control and industrial developments in this area. Significant progress has been made in understanding the behavior of model predictive control systems, and many results have been obtained on stability, robustness and performance of MPC (Soeterboek [61], Camacho and Bordons [11], Maciejowski [44], Rossiter [56]).


Since the pioneering work at the end of the seventies and early eighties, MPC has become
the most widely applied supervisory control technique in the process industry. Many papers
report successful applications (see Richalet [52], and Qin and Badgewell [50]).

Chapter 2

The basics of model predictive control

2.1 Introduction

MPC is more of a methodology than a single technique. The difference between the various methods is mainly the way the problem is translated into a mathematical formulation, so that the problem becomes solvable in the limited time interval available for calculation of adequate process manipulations in response to external influences on the process behavior (disturbances). However, in all methods five important items are part of the design procedure:
1. Process model and disturbance model
2. Performance index
3. Constraints
4. Optimization
5. Receding horizon principle
In the following sections these five ingredients of MPC will be discussed.

2.2 Process model and disturbance model

The models applied in MPC serve two purposes:


Prediction of the behavior of the future output of the process on the basis of inputs
and known disturbances applied to the process in the past
Calculation of the input signal to the process that minimizes the given objective
function

The models required for these tasks do not necessarily have to be the same. The model applied for prediction may differ from the model applied for calculation of the next control action. In practice, though, both models are almost always chosen to be the same.
As the models play such an important role in model predictive control, they are discussed in this section. The models applied are so-called Input-Output (IO) models. These models describe the input-output behavior of the process. The MPC controller discussed in the sequel explicitly assumes that the superposition theorem holds. This requires that all models used in the controller design are chosen to be linear. Two types of IO models are applied:

Direct Input-Output models (IO models), in which the input signal is directly applied to the model.

Increment Input-Output models (IIO models), in which the increments of the input signal are applied to the model instead of the input directly.

The following assumptions are made with respect to the models that will be used:
1. Linear
2. Time invariant
3. Discrete time
4. Causal
5. Finite order
Linearity is assumed to allow the use of the superposition theorem. The second assumption, time invariance, enables the use of a model made on the basis of observations of process behavior at a certain time for simulation of process behavior at another arbitrary time instant. The discrete time representation of process dynamics supports the sampling mechanisms required for calculating control actions with an MPC algorithm implemented in a sequential operating system like a computer. Causality implies that the model does not anticipate future changes of its inputs. This aligns quite well with the behavior of physical, chemical and biological systems. The assumption that the order of the model is finite implies that models and model predictions can always be described by a set of explicit equations. Of course, these assumptions have to be validated against the actual process behavior as part of the process modeling or process identification phase.
In this chapter only discrete time models will be considered. The control algorithm will always be implemented on a digital computer, so the design of a discrete time controller is the obvious choice. Furthermore, a linear time-invariant continuous time model can always be transformed into a linear time-invariant discrete time model using a zero-order hold z-transformation.
The most general description of linear time-invariant systems is the state space description. Models given in an impulse/step response or transfer function structure can easily be converted into state space models. Computations can be done in a numerically more reliable way if based on (balanced) state space models instead of impulse/step response or transfer function models, especially for multivariable systems (Kailath [33]). Also system identification, using the input-output data of a system, can be done using reliable and fast algorithms (Ljung [41]; Verhaegen & Dewilde [71]; Van Overschee [70]).

2.2.1 Input-output (IO) models

In this course we consider causal, discrete time, linear, finite-dimensional, time-invariant systems given by
\[
y(k) = G_o(q)\,u(k) + F_o(q)\,d_o(k) + H_o(q)\,e_o(k)  \tag{2.1}
\]
in which G_o(q) is the process model, F_o(q) is the disturbance model, H_o(q) is the noise model, y(k) is the output signal, u(k) is the input signal, d_o(k) is a known disturbance signal, e_o(k) is zero-mean white noise (ZMWN) and q is the shift operator, q^{-1} y(k) = y(k-1). We assume G_o(q) to be strictly proper, which means that y(k) does not depend on the present value of u(k), but only on past values u(k-j), j > 0.
[Figure 2.1: Input-Output (IO) model]


A state space representation for this system can be given as
\[
x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)  \tag{2.2}
\]
\[
y(k) = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)  \tag{2.3}
\]
and so the transfer functions G_o(q), F_o(q) and H_o(q) can be given as
\[
G_o(q) = C_o (qI - A_o)^{-1} B_o  \tag{2.4}
\]
\[
F_o(q) = C_o (qI - A_o)^{-1} L_o + D_F  \tag{2.5}
\]
\[
H_o(q) = C_o (qI - A_o)^{-1} K_o + D_H  \tag{2.6}
\]

[Figure 2.2: State space representation of the model]

Increment-input-output (IIO) models

Sometimes it is useful not to work with the input signal u(k) itself, but with the input increment instead, defined as
\[
\Delta u(k) = u(k) - u(k-1) = (1 - q^{-1})\,u(k) = \Delta(q)\,u(k)
\]
where \Delta(q) = (1 - q^{-1}) is denoted as the increment operator. Using the increment of the input signal implies that the model keeps track of the actual value of the input signal. The model needs to integrate the increments of the inputs to calculate the output corresponding with the input signals actually applied to the process. We obtain an increment-input-output (IIO) model:
\[
y(k) = G_i(q)\,\Delta u(k) + F_i(q)\,d_i(k) + H_i(q)\,e_i(k)  \tag{2.7}
\]

where d_i(k) is a known disturbance signal and e_i(k) is ZMWN. A state space representation for this system is given by
\[
x_i(k+1) = A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i \Delta u(k)  \tag{2.8}
\]
\[
y(k) = C_i x_i(k) + D_H e_i(k) + D_F d_i(k)  \tag{2.9}
\]
The transfer functions G_i(q), F_i(q) and H_i(q) become
\[
G_i(q) = C_i (qI - A_i)^{-1} B_i  \tag{2.10}
\]
\[
F_i(q) = C_i (qI - A_i)^{-1} L_i + D_F  \tag{2.11}
\]
\[
H_i(q) = C_i (qI - A_i)^{-1} K_i + D_H  \tag{2.12}
\]


Relation between IO and IIO model

Given an IO model with state space realization
\[
x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)
\]
\[
y(k) = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)
\]
Define the system matrices
\[
A_i = \begin{bmatrix} I & C_o \\ 0 & A_o \end{bmatrix}, \quad
B_i = \begin{bmatrix} 0 \\ B_o \end{bmatrix}, \quad
K_i = \begin{bmatrix} D_H \\ K_o \end{bmatrix}, \quad
L_i = \begin{bmatrix} D_F \\ L_o \end{bmatrix}, \quad
C_i = \begin{bmatrix} I & C_o \end{bmatrix}
\]
the disturbance and noise signals
\[
d_i(k) = \Delta d_o(k) = d_o(k) - d_o(k-1), \qquad
e_i(k) = \Delta e_o(k) = e_o(k) - e_o(k-1)
\]
and the new state
\[
x_i(k) = \begin{bmatrix} y(k-1) \\ \Delta x_o(k) \end{bmatrix}
\]
where \Delta x_o(k) = x_o(k) - x_o(k-1) is the increment of the original state. Then the state space realization, given by
\[
x_i(k+1) = A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i \Delta u(k)
\]
\[
y(k) = C_i x_i(k) + D_H e_i(k) + D_F d_i(k)
\]
looks like an IIO model of the original IO model. However, there is one crucial difference. In the IO model the noise e_o is ZMWN, which implies that e_i as defined above is differenced ZMWN rather than white noise. However, for the IIO model it is customary to assume that e_i itself is ZMWN, and it is common to use the above model with this assumption. This changes the model compared to the original IO model (the assumption corresponds to an IO model driven by integrated ZMWN), but the advantage is that the resulting IIO model is better able to handle drift terms in the noise.
An interesting observation in comparing the IO model with the corresponding IIO model is the increase of the number of states by the number of outputs of the system. As can be seen from the state matrix A_i of the IIO model, this increase in the number of states compared to the state matrix A_o of the IO model is related to the integrators required for calculating the actual process outputs on the basis of input increments. The additional eigenvalues of A_i are all integrators: eigenvalues λ_i = 1.
Proof:
From (2.2) we derive
\[
\Delta x_o(k+1) = A_o \Delta x_o(k) + K_o \Delta e_o(k) + L_o \Delta d_o(k) + B_o \Delta u(k)
\]
\[
\Delta y(k) = C_o \Delta x_o(k) + D_H \Delta e_o(k) + D_F \Delta d_o(k)
\]
and so for the output signal we derive
\[
y(k) = y(k-1) + \Delta y(k) = y(k-1) + C_o \Delta x_o(k) + D_H \Delta e_o(k) + D_F \Delta d_o(k)
\]
so based on the above equations we get:
\[
\begin{bmatrix} y(k) \\ \Delta x_o(k+1) \end{bmatrix}
= \begin{bmatrix} I & C_o \\ 0 & A_o \end{bmatrix}
  \begin{bmatrix} y(k-1) \\ \Delta x_o(k) \end{bmatrix}
+ \begin{bmatrix} D_H \\ K_o \end{bmatrix} \Delta e_o(k)
+ \begin{bmatrix} D_F \\ L_o \end{bmatrix} \Delta d_o(k)
+ \begin{bmatrix} 0 \\ B_o \end{bmatrix} \Delta u(k)
\]
\[
y(k) = \begin{bmatrix} I & C_o \end{bmatrix}
       \begin{bmatrix} y(k-1) \\ \Delta x_o(k) \end{bmatrix}
+ D_H \Delta e_o(k) + D_F \Delta d_o(k)
\]
By substitution of the variables x_i(k), d_i(k) and e_i(k) and the matrices A_i, B_i, L_i, K_i and C_i the state space IIO model is found.
□ End Proof
The transfer functions of the IO model and the IIO model are related by
\[
G_i(q) = G_o(q)\,\Delta^{-1}(q), \qquad
F_i(q) = F_o(q)\,\Delta^{-1}(q), \qquad
H_i(q) = H_o(q)\,\Delta^{-1}(q)  \tag{2.13}
\]
where \Delta(q) = (1 - q^{-1}). In Appendix D we will show how the other models, like impulse response models, step response models and polynomial models, relate to state space models.
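As an illustration of the IO-to-IIO construction above, the following sketch (plain numpy, not the course MATLAB toolbox of Appendix F) assembles the IIO realization from an IO realization using the block definitions of A_i, B_i, K_i, L_i and C_i, and checks the added integrator eigenvalues at 1. The numerical values are those of the second-order example used later in Chapter 3.

    import numpy as np

    def io_to_iio(Ao, Bo, Ko, Lo, Co, DH, DF):
        """Build the IIO realization (Ai, Bi, Ki, Li, Ci) from an IO realization,
        following Ai = [[I, Co], [0, Ao]], Bi = [0; Bo], Ki = [DH; Ko], etc."""
        n = Ao.shape[0]          # number of IO states
        m = Co.shape[0]          # number of outputs
        Ai = np.block([[np.eye(m), Co], [np.zeros((n, m)), Ao]])
        Bi = np.vstack([np.zeros((m, Bo.shape[1])), Bo])
        Ki = np.vstack([DH, Ko])
        Li = np.vstack([DF, Lo])
        Ci = np.hstack([np.eye(m), Co])
        return Ai, Bi, Ki, Li, Ci

    # IO model of the second-order example from Chapter 3
    Ao = np.array([[0.7, 1.0], [-0.1, 0.0]])
    Bo = np.array([[1.0], [0.0]])
    Ko = np.array([[1.0], [-0.2]])
    Lo = np.array([[0.1], [-0.05]])
    Co = np.array([[1.0, 0.0]])
    DH = np.array([[1.0]])
    DF = np.array([[0.0]])

    Ai, Bi, Ki, Li, Ci = io_to_iio(Ao, Bo, Ko, Lo, Co, DH, DF)
    print(np.linalg.eigvals(Ao))   # eigenvalues of the IO model: 0.5 and 0.2
    print(np.linalg.eigvals(Ai))   # the same eigenvalues plus one eigenvalue at 1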

The advantage of using an IIO model

The main reason for using an IIO model is the good steady-state behavior of the controller designed on the basis of this IIO model. As indicated above, the IIO model simulates process outputs on the basis of input increments. This implies that the model must have integrating action to describe the behavior of processes that have steady-state gains unequal to zero. The integrating behavior of the model also implies that the model output can take any value with inputs equal to zero. This property of the IIO model appears to be very attractive for good steady-state behavior of the control system, as we will see.

Steady-state behavior of an IO model: Consider an IO model G_o(q) without a pole in q = 1, so G_o(1) < ∞. Further assume e_o(k) = 0 and d_o(k) = 0 for all k. Moreover, assume that our controller stabilizes the system. In that case, if u → u_ss for k → ∞, then y → y_ss = G_o(1) u_ss for k → ∞. This process is controlled by a predictive controller, minimizing the IO performance index
\[
J(k) = \sum_{j=1}^{N_2} \| y(k+j) - r(k+j) \|^2 + \lambda^2 \| u(k+j-1) \|^2
\]


Let the reference be constant, r(k) = r_ss ≠ 0 for large k, and let k → ∞. Then u and y will reach their steady state and so J(k) → N_2 J_ss for k → ∞, where
\[
\begin{aligned}
J_{ss} &= \| y_{ss} - r_{ss} \|^2 + \lambda^2 \| u_{ss} \|^2 \\
       &= \| G_o(1) u_{ss} - r_{ss} \|^2 + \lambda^2 \| u_{ss} \|^2 \\
       &= u_{ss}^T \big( G_o^T(1) G_o(1) + \lambda^2 I \big) u_{ss} - 2\, u_{ss}^T G_o^T(1)\, r_{ss} + r_{ss}^T r_{ss}
\end{aligned}
\]
Minimizing J_ss over u_ss means that
\[
\partial J_{ss} / \partial u_{ss} = 2 \big( G_o^T(1) G_o(1) + \lambda^2 I \big) u_{ss} - 2\, G_o^T(1)\, r_{ss} = 0
\]
so
\[
u_{ss} = \big( G_o^T(1) G_o(1) + \lambda^2 I \big)^{-1} G_o^T(1)\, r_{ss}
\]
The steady-state output becomes
\[
y_{ss} = G_o(1) \big( G_o^T(1) G_o(1) + \lambda^2 I \big)^{-1} G_o^T(1)\, r_{ss}
\]
It is clear that y_ss ≠ r_ss for λ > 0, so for this IO model there will always be a steady-state error for r_ss ≠ 0 and λ > 0.
Steady-state behavior of an IIO model:
Consider the above model in IIO form, so G_i(q) = \Delta^{-1}(q) G_o(q), under the same assumptions as before, i.e. G_o(1) is well-defined. Moreover, e_o = 0 and d_o = 0, which implies that e_i = 0 and d_i = 0. Let it be controlled by a stabilizing predictive controller, minimizing the IIO performance index
\[
J(k) = \sum_{j=1}^{N_2} \| y(k+j) - r(k+j) \|^2 + \lambda^2 \| \Delta u(k+j-1) \|^2
\]
In steady state the output is given by y_ss = G_o(1) u_ss and the increment input becomes Δu_ss = 0, because the input signal u = u_ss has become constant. In steady state we will reach the situation that the performance index becomes J(k) → N_2 J_ss for k → ∞, where
\[
J_{ss} = \| y_{ss} - r_{ss} \|^2
\]
The optimum J_ss = 0 is obtained for y_ss = r_ss, which means that no steady-state error occurs. It is clear that, for good steady-state behavior in the presence of a reference signal, it is necessary to design the predictive controller on the basis of an IIO model.
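A small numerical check of the two steady-state expressions derived above (an illustrative sketch, not part of the original derivation): for an assumed scalar system with gain G_o(1) = 2, weighting λ = 1 and reference r_ss = 1, the IO-based criterion leaves a tracking offset, while the IIO-based criterion gives y_ss = r_ss by construction.

    import numpy as np

    Go1 = np.array([[2.0]])      # assumed steady-state gain G_o(1)
    lam = 1.0                    # control weighting lambda
    rss = np.array([[1.0]])      # constant reference r_ss

    # IO-based criterion: u_ss = (Go(1)^T Go(1) + lam^2 I)^{-1} Go(1)^T r_ss
    uss = np.linalg.solve(Go1.T @ Go1 + lam**2 * np.eye(1), Go1.T @ rss)
    print(Go1 @ uss)             # y_ss = 0.8, i.e. a steady-state error of 0.2

    # IIO-based criterion: Delta u_ss = 0 and the optimum J_ss = 0 gives y_ss = r_ss
    print(rss)                   # y_ss = 1.0, no steady-state error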


The standard model

In this course we consider the following standard model:
\[
x(k+1) = A\,x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)  \tag{2.14}
\]
\[
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)  \tag{2.15}
\]
where x(k) is the state, e(k) is zero-mean white noise (ZMWN), v(k) is the control signal, w(k) is a vector containing all known external signals, such as the reference signal and known (or measurable) disturbances, and y(k) is the measurement. The control signal v(k) can be either u(k) or Δu(k).

2.2.2 Other model descriptions

In this course we will consider the state space description (2.14)-(2.15) as the model for the system. In the literature three other model descriptions are also often used, namely impulse response models, step response models and polynomial models. In this section we will shortly describe their features and discuss the choice of model description.

Impulse and step response models
MPC has part of its roots in the process industry, where the use of detailed dynamical models is less common. Obtaining reliable dynamical models of these processes on the basis of physical laws is difficult, and it is therefore not surprising that the first models used in MPC were impulse and step response models (Richalet [54]), (Cutler & Ramaker [17]). These models are easily obtained by rather simple experiments and give good models for a sufficiently large length of the impulse/step response.
In Appendix D.1 more detailed information is given on impulse and step response models.

Polynomial models
Sometimes good models of the process can be obtained on the basis of physical laws or by parametric system identification. In that case a polynomial description of the model is preferred (Clarke et al. [14],[15]). Fewer parameters are used than in impulse or step response models and, in the case of parametric system identification, the parameters can be estimated more reliably (Ljung [41]).
In Appendix D.2 more detailed information is given on polynomial models.

Choice of model description
Whatever model is used to describe the process behavior, the resulting predictive control law will (at least approximately) give the same performance. Differences in the use of a model lie in the effort of modeling, the model accuracy that can be obtained, and the computational power that is needed to realize the predictive control algorithm.


Finite step response and finite impulse response models (non-parametric models) need more parameters than polynomial or state space models (parametric models). The larger the number of model parameters, the lower the average number of data points per parameter that is available during identification. As a consequence, the variance of the estimated parameters will be larger for the parameters of non-parametric models than for semi-parametric and parametric models. In turn, the variance of estimated parameters of semi-parametric models will be larger than the variance of the parameters of parametric models. This does not necessarily imply that the quality of the predictions with parametric models outperforms the predictions of non-parametric models! If the test signals applied for process identification have excited all relevant dynamics of the process persistently, the qualities of the predictions are generally the same despite the significantly larger variance of the estimated model parameters. On the other hand, step response models are intuitive, need less a priori information for identification, and the predictions are constructed in a natural way. The models can be understood without any knowledge of dynamical systems. Step response models are often still preferred in industry, because advanced process knowledge is usually scarce and additional experiments are expensive.
The polynomial description is very compact, gives good (physical) insight in the system's properties, and the resulting controller is compact as well. However, as we will see later, making predictions may be cumbersome and the models are far from practical for multivariable systems.
State space models are especially suited for multivariable systems, still providing a compact model description and controller. The computations are usually well conditioned and the algorithms easy to implement.

2.3 Performance index

A performance index or cost criterion is formulated, measuring the reference tracking error and the control action. Let z_1(k) be a signal reflecting the reference tracking error and let z_2(k) be a signal reflecting the control action. The following 2-norm performance index is introduced:
\[
J(v,k) = \sum_{j=N_m}^{N} \hat z_1^T(k+j-1|k)\, \hat z_1(k+j-1|k)
       + \sum_{j=1}^{N} \hat z_2^T(k+j-1|k)\, \hat z_2(k+j-1|k)  \tag{2.16}
\]
where ẑ_i(k+j-1|k), i = 1, 2, is the prediction of z_i(k+j-1) at time k. The variable N denotes the prediction horizon, the variable N_m denotes the minimum cost-horizon.
In the MPC literature and in industrial applications of MPC three performance indices often appear:

the Generalized Predictive Control (GPC) performance index,
the Linear Quadratic Predictive Control (LQPC) performance index,
the zone performance index.


These three performance indices can all be rewritten into the standard form of (2.16).

Generalized Predictive Control (GPC)
In the Generalized Predictive Control (GPC) method (Clarke et al. [14],[15]) the performance index is based on control and output signals:
\[
J(\Delta u, k) = \sum_{j=N_m}^{N} \big( \hat y_p(k+j|k) - r(k+j) \big)^T \big( \hat y_p(k+j|k) - r(k+j) \big)
 + \lambda^2 \sum_{j=1}^{N} \Delta u^T(k+j-1|k)\, \Delta u(k+j-1|k)  \tag{2.17}
\]
where
  y_p(k) = P(q) y(k)  is the weighted process output signal
  r(k)   is the reference trajectory
  y(k)   is the process output signal
  Δu(k)  is the process control increment signal
  N_m    is the minimum cost-horizon
  N      is the prediction horizon
  N_c    is the control horizon
  λ      is the weighting on the control signal
  P(q) = 1 + p_1 q^{-1} + ... + p_{n_p} q^{-n_p}  is a polynomial with desired closed-loop poles

and ŷ_p(k+j|k) is the prediction of y_p(k+j), based on knowledge up to time k, the increment input signal is Δu(k) = u(k) - u(k-1), and Δu(k+j) = 0 for j ≥ N_c. The weighting λ determines the trade-off between tracking accuracy (first term) and control effort (second term). The polynomial P(q) can be chosen by the designer and broadens the class of control objectives. It can be shown that n_p of the closed-loop poles will be placed at the location of the roots of the polynomial P(q).
Note that the GPC performance index can be translated into the form of (2.16) by choosing
\[
z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix}
     = \begin{bmatrix} y_p(k+1) - r(k+1) \\ \lambda\, \Delta u(k) \end{bmatrix}
\]
Linear Quadratic Predictive Control (LQPC)
In the Linear Quadratic Predictive Control (LQPC) method (Garcia et al. [28]) the performance index is based on control and state signals:
\[
J(u, k) = \sum_{j=N_m}^{N} \hat x^T(k+j|k)\, Q\, \hat x(k+j|k)
        + \sum_{j=1}^{N} u^T(k+j-1|k)\, R\, u(k+j-1|k)  \tag{2.18}
\]
where
  x(k)  is the state signal vector
  u(k)  is the process control signal
  Q     is the state weighting matrix
  R     is the control weighting matrix

and x̂(k+j|k) is the prediction of x(k+j), based on knowledge up to time k. Q and R are positive semi-definite matrices. In some papers the control increment signal Δu(k) is used instead of the control signal u(k). Note that the LQPC performance index can be translated into the form of (2.16) by choosing
\[
z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix}
     = \begin{bmatrix} Q^{1/2}\, x(k+1) \\ R^{1/2}\, u(k) \end{bmatrix}.
\]
Zone performance index
A performance index that is popular in industry is the zone performance index:
\[
J(\Delta u, k) = \sum_{j=N_m}^{N} \hat\varepsilon^T(k+j|k)\, \hat\varepsilon(k+j|k)
 + \lambda^2 \sum_{j=1}^{N} \Delta u^T(k+j-1|k)\, \Delta u(k+j-1|k)  \tag{2.19}
\]
where the entries ε_i (i = 1, ..., m) contribute to the performance index only if |y_i(k) - r_i(k)| > δ_max,i. To be more specific:
\[
\varepsilon_i(k) = \begin{cases}
0 & \text{for } |y_i(k) - r_i(k)| \le \delta_{\max,i} \\
y_i(k) - r_i(k) - \delta_{\max,i} & \text{for } y_i(k) - r_i(k) \ge \delta_{\max,i} \\
y_i(k) - r_i(k) + \delta_{\max,i} & \text{for } y_i(k) - r_i(k) \le -\delta_{\max,i}
\end{cases}
\]
so
\[
|\varepsilon_i(k)| = \min_{|\delta_i(k)| \le \delta_{\max,i}} \big| y_i(k) - r_i(k) + \delta_i(k) \big|
\]
The relation between the tracking error y(k) - r(k), the zone bounds and the zone performance signal ε(k) is visualized in Figure 2.3. Note that the zone performance index can be translated into the form of (2.16) by choosing
\[
z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix}
     = \begin{bmatrix} r(k+1) - y(k+1) + \delta(k+1) \\ \lambda\, \Delta u(k) \end{bmatrix}
\quad\text{and}\quad
v(k) = \begin{bmatrix} \Delta u(k) \\ \delta(k+1) \end{bmatrix}
\]
with the additional constraint that |δ(k+j)| ≤ δ_max.
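The piecewise definition of ε_i(k) above is simply a dead-zone applied to the tracking error; the following sketch (illustrative numpy, with hypothetical variable names) computes it element-wise for a vector of outputs.

    import numpy as np

    def zone_signal(y, r, delta_max):
        """Element-wise zone performance signal: zero inside the band
        |y_i - r_i| <= delta_max_i, the distance to the band outside it."""
        err = y - r
        return np.sign(err) * np.maximum(np.abs(err) - delta_max, 0.0)

    # tracking errors of -3, 0.5 and 2 with a zone half-width of 1:
    print(zone_signal(np.array([1.0, 2.5, 5.0]),
                      np.array([4.0, 2.0, 3.0]),
                      np.array([1.0, 1.0, 1.0])))   # -> [-2.  0.  1.]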

[Figure 2.3: Tracking error y(k) - r(k) and zone performance signal ε(k)]
Standard performance index
Most papers on predictive control deal with either the GPC or the LQPC performance index, which are clearly weighted squared 2-norms and can be rewritten in the form of (2.16). Now introduce the diagonal selection matrix
\[
\Gamma(j) = \begin{cases}
\begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix} & \text{for } 0 \le j < N_m - 1 \\[2mm]
\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} & \text{for } N_m - 1 \le j \le N - 1
\end{cases}  \tag{2.20}
\]
and define
\[
z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix}.
\]
The performance index can be rewritten in the standard form:
\[
J(v, k) = \sum_{j=0}^{N-1} \hat z^T(k+j|k)\, \Gamma(j)\, \hat z(k+j|k)  \tag{2.21}
\]
In this text we will mainly deal with this 2-norm standard performance index (2.21). Other performance indices can also be used; these are often based on other norms, for example the 1-norm or the ∞-norm (Genceli & Nikolaou [30], Zheng & Morari [80]).
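As a sketch of how the selection matrix Γ(j) acts in (2.21): below the minimum cost-horizon only the z_2 (control) part is counted, afterwards both parts. The helper functions below are illustrative numpy, with names that are not part of the notes.

    import numpy as np

    def gamma(j, Nm, n1, n2):
        """Selection matrix Gamma(j) of (2.20): the z1-block is switched off
        for j < Nm - 1, the z2-block is always active."""
        g1 = np.eye(n1) if j >= Nm - 1 else np.zeros((n1, n1))
        return np.block([[g1, np.zeros((n1, n2))],
                         [np.zeros((n2, n1)), np.eye(n2)]])

    def performance_index(z_pred, Nm, n1):
        """J = sum_j zhat(k+j|k)^T Gamma(j) zhat(k+j|k), with z_pred an
        N x (n1+n2) array of predicted performance signals, j = 0..N-1."""
        n2 = z_pred.shape[1] - n1
        return sum(z_pred[j] @ gamma(j, Nm, n1, n2) @ z_pred[j]
                   for j in range(z_pred.shape[0]))

    # toy example: N = 4, Nm = 2, scalar z1 and z2, all predictions equal to 1:
    print(performance_index(np.ones((4, 2)), Nm=2, n1=1))  # 3 tracking + 4 control terms = 7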

2.4 Constraints

In practice, industrial processes are subject to constraints. Specific signals must not violate specified bounds due to safety limitations, environmental regulations, consumer specifications and physical restrictions, such as minimum and/or maximum temperature, pressure, level limits in reactor tanks, flows in pipes and slew rates of valves.
Careful tuning of the controller parameters may keep these values away from the bounds. However, because of economical motives, the control system should drive the process as close to the constraints as possible without violating them: closer to limits in general means closer to maximum profit.
Therefore, predictive control employs a more direct approach by modifying the optimal unconstrained solution in such a way that constraints are not violated. This can be done using optimization techniques such as linear programming (LP) or quadratic programming (QP) techniques.
In most cases the constraints can be translated into bounds on control, state or output signals:
\[
u_{\min} \le u(k) \le u_{\max}, \quad \forall k \qquad\qquad
\Delta u_{\min} \le \Delta u(k) \le \Delta u_{\max}, \quad \forall k
\]
\[
y_{\min} \le y(k) \le y_{\max}, \quad \forall k \qquad\qquad
x_{\min} \le x(k) \le x_{\max}, \quad \forall k
\]
Under no circumstances may the constraints be violated, and the control action has to be chosen such that the constraints are satisfied. However, when we implement the controller we do not know the future state, and we replace the original constraints by constraints on the predicted state.
Besides inequality constraints we can also use equality constraints. Equality constraints are usually motivated by the control algorithm itself. An example is the control horizon, which forces the control signal to become constant:
\[
\Delta u(k+j|k) = 0 \quad \text{for } j \ge N_c
\]
This makes the control signal smooth and the controller more robust. A second example is the state end-point constraint
\[
x(k+N|k) = x_{ss}
\]
which is related to stability and forces the state at the end of the prediction horizon to its steady-state value x_ss.
To summarize, we see that there are two types of constraints (inequality and equality constraints). In this course we look at the two types in a generalized framework:

Inequality constraints:
\[
\psi(k) \le \Psi(k), \quad \forall k  \tag{2.22}
\]
Since we generally do not know future values of ψ, we often use in the optimization at time k constraints of the form
\[
\hat\psi(k+j|k) \le \Psi(k+j)  \tag{2.23}
\]

for j = 1, ..., N, where ψ̂(k+j|k) is the prediction of ψ(k+j) at time k. Note that Ψ(k) does not depend on future inputs of the system.

Equality constraints:
\[
\phi(k) = 0, \quad \forall k  \tag{2.24}
\]
Again we generally do not know future values of φ, but we also often want to use equality constraints, as noted before, to force the control signal to become constant or to enforce an endpoint penalty. Hence we use
\[
A_j\, \hat\phi(k+j|k) = 0  \tag{2.25}
\]
for j = 1, ..., N, where φ̂(k+j|k) is the prediction of φ(k+j) at time k. The matrix A_j is used to indicate that certain equality constraints are only active for specific j. For instance, in case of an endpoint penalty, we would choose A_N = I and A_j = 0 for j = 0, ..., N-1.
In the above formulas φ(k) and ψ(k) are vectors and the equalities and inequalities are meant element-wise. If there are multiple constraints, for example φ_1(k) = 0 and φ_2(k) = 0, ψ_1(k) ≤ Ψ_1(k), ..., ψ_4(k) ≤ Ψ_4(k), we can combine them by stacking them into one equality constraint and one inequality constraint:
\[
\phi(k) = \begin{bmatrix} \phi_1(k) \\ \phi_2(k) \end{bmatrix} = 0, \qquad
\psi(k) = \begin{bmatrix} \psi_1(k) \\ \vdots \\ \psi_4(k) \end{bmatrix}
\le \begin{bmatrix} \Psi_1(k) \\ \vdots \\ \Psi_4(k) \end{bmatrix} = \Psi(k)
\]
Note that if a variable is bounded two-sided,
\[
\Psi_{1,\min} \le \psi_1(k) \le \Psi_{1,\max},
\]
we can always translate that into two one-sided constraints by setting
\[
\psi(k) = \begin{bmatrix} \psi_1(k) \\ -\psi_1(k) \end{bmatrix}
\le \begin{bmatrix} \Psi_{1,\max} \\ -\Psi_{1,\min} \end{bmatrix} = \Psi(k)
\]
which corresponds to the general form of equation (2.22).
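The stacking and sign-flipping above translate directly into a few lines of code; the sketch below (illustrative numpy, hypothetical function name) rewrites a two-sided bound in the one-sided form ψ(k) ≤ Ψ(k) of (2.22).

    import numpy as np

    def two_sided_to_one_sided(psi1, psi1_min, psi1_max):
        """Rewrite psi1_min <= psi1(k) <= psi1_max as a stacked one-sided
        constraint psi(k) <= Psi(k), following the construction above."""
        psi = np.concatenate([psi1, -psi1])
        Psi = np.concatenate([psi1_max, -psi1_min])
        return psi, Psi

    # example: a scalar input value 0.5 with the bound -1 <= u <= 2
    psi, Psi = two_sided_to_one_sided(np.array([0.5]), np.array([-1.0]), np.array([2.0]))
    print(np.all(psi <= Psi))   # True: the bound is satisfied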


The capability of predictive control to cope with signal constraints is probably the main reason for its popularity. It allows operation of the process within well defined operating limits. As constraints are respected, better exploitation of the permitted operating range is feasible. In many applications of control, signal constraints are present, caused by limited capacity of liquid buffers, valves, saturation of actuators and more. By minimizing performance index (2.21) subject to these constraints, we obtain the best possible control signal within the set of admissible control signals.


Structuring the input signal

To obtain a tractable optimization problem, the input signal should be structured. In other words, the degrees of freedom in the future input signal [u(k|k), u(k+1|k), ..., u(k+N-1|k)] must be reduced. This can be done in various ways. The most common methods are using a control horizon, introducing input blocking, or parametrizing the input with (orthogonal) basis functions.

Control horizon:
When we use a control horizon, the input signal is assumed to be constant from a certain moment in the future, denoted as the control horizon N_c, i.e.
\[
u(k+j|k) = u(k+N_c-1|k) \quad \text{for } j \ge N_c
\]

Blocking:
Instead of making the input constant beyond the control horizon, we can force the input to remain constant during some predefined (non-uniform) intervals. In that way, there is some freedom left beyond the control horizon. Define n_bl intervals with range (m_ℓ, m_{ℓ+1}). The first m_1 control variables (u(k), ..., u(k+m_1-1)) are still free. The input beyond m_1 is restricted by
\[
u(k+j|k) = u(k+m_\ell|k) \quad \text{for } m_\ell \le j < m_{\ell+1}, \quad \ell = 1, \ldots, n_{bl}
\]

Basis functions:
Parametrization of the input signal can also be done using a set of basis functions:
\[
u(k+j|k) = \sum_{i=0}^{M} S_i(j)\, \alpha_i(k)
\]
where S_i(j) are the basis functions and the scalars α_i(k), i = 0, ..., M, are the parameters to be optimized at time k.
The already mentioned structurings of the input signal, with either a control horizon or blocking, can be seen as parametrizations of the input signal with a set of basis functions. For example, by introducing a control horizon, we choose M = N_c - 1 and the basis functions become
\[
S_i(j) = \delta(j-i) \quad \text{for } i = 0, \ldots, N_c-2, \qquad
S_{N_c-1}(j) = E(j-N_c+1)
\]
where δ(j-i) is the discrete-time impulse function, being 1 for j = i and zero elsewhere, and E is the step function, i.e.
\[
E(v) = \begin{cases} 0 & v < 0 \\ 1 & v \ge 0 \end{cases}
\]
The optimization parameters are now equal to
\[
\alpha_i(k) = u(k+i|k) \quad \text{for } i = 0, \ldots, N_c-1
\]
which indeed corresponds to the N_c free parameters.
In the case of blocking, we choose M = m_1 + n_{bl} and the basis functions become
\[
S_i(j) = \delta(j-i) \quad \text{for } i = 0, \ldots, m_1-1, \qquad
S_{m_1+\ell}(j) = E(j-m_\ell) - E(j-m_{\ell+1}) \quad \text{for } \ell = 1, \ldots, n_{bl}
\]
The optimization parameters are now equal to
\[
\alpha_i(k) = u(k+i|k) \quad \text{for } i = 0, \ldots, m_1-1
\]
and
\[
\alpha_{m_1+\ell}(k) = u(k+m_\ell|k) \quad \text{for } \ell = 1, \ldots, n_{bl}
\]
which corresponds to the m_1 + n_bl free parameters. Figure 2.4 gives the basis functions in case of a control horizon approach and a blocking approach, respectively.

[Figure 2.4: Basis functions in case of a control horizon approach (a) and a blocking approach (b)]
To obtain a tractable optimization problem, we can choose a set of basis functions Si (k)
that are orthogonal. These orthogonal basis functions will be discussed in section 4.3.
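The parametrizations above can be collected in a matrix S with entries S_i(j), so that the structured input sequence is simply S times the parameter vector α; the sketch below (illustrative numpy, not from the notes) builds S for the control-horizon and blocking cases described above.

    import numpy as np

    def basis_control_horizon(N, Nc):
        """S[j, i] = S_i(j): impulses for i = 0..Nc-2 and a step E(j - Nc + 1)
        for the last basis function (control-horizon parametrization)."""
        S = np.zeros((N, Nc))
        for i in range(Nc - 1):
            S[i, i] = 1.0
        S[Nc - 1:, Nc - 1] = 1.0
        return S

    def basis_blocking(N, m):
        """Blocking with boundaries m = [m_1, ..., m_{nbl+1}] (assumes m[-1] == N):
        free impulses for j < m_1, then one block per interval [m_l, m_{l+1})."""
        m1 = m[0]
        S = np.zeros((N, m1 + len(m) - 1))
        for i in range(m1):
            S[i, i] = 1.0
        for l in range(len(m) - 1):
            S[m[l]:m[l + 1], m1 + l] = 1.0
        return S

    # u(k+j|k) = sum_i S_i(j) alpha_i(k) is then just S @ alpha:
    S = basis_control_horizon(N=8, Nc=3)
    alpha = np.array([1.0, 0.5, -0.2])
    print(S @ alpha)   # [ 1.   0.5 -0.2 -0.2 -0.2 -0.2 -0.2 -0.2]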

2.5 Optimization

An optimization algorithm will be applied to compute a sequence of future control signals that minimizes the performance index subject to the given constraints. For linear models with linear constraints and a quadratic (2-norm) performance index the solution can be found using quadratic programming algorithms. If a 1-norm or ∞-norm performance index is used, linear programming algorithms will provide the solution (Genceli & Nikolaou [30], Zheng & Morari [80]). Both types of optimization problems are convex and the algorithms show fast convergence.
In some cases the optimization problem will have an empty solution set, so the problem is not feasible. In that case we will have to relax one or more of the constraints to find a solution leading to an acceptable control signal. A prioritization among the constraints is an elegant way to tackle this problem.
Procedures for different types of MPC problems will be presented in Chapter 5.

2.6 Receding horizon principle

Predictive control uses the receding horizon principle. This means that after computation of the optimal control sequence, only the first control sample will be implemented; subsequently the horizon is shifted one sample and the optimization is restarted with new information from the measurements. Figure 2.5 explains the idea of the receding horizon. At time k the future control sequence {u(k|k), ..., u(k+N_c-1|k)} is optimized such that the performance index J(u, k) is minimized subject to the constraints. At time k the first element of the optimal sequence (u(k) = u(k|k)) is applied to the real process. At the next time instant the horizon is shifted and a new optimization at time k+1 is solved.

[Figure 2.5: The moving horizon in predictive control]
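The receding-horizon idea can be summarized in a few lines of code. The sketch below is Python-style pseudo-code only; `solve_optimal_sequence`, `observer_update` and `w_future` are hypothetical placeholders standing in for the constrained optimization of Chapter 5 and a state estimator.

    def mpc_step(x_hat, w_future, solve_optimal_sequence, Nc):
        """One receding-horizon step: optimize the future input sequence,
        then apply only its first element."""
        # u_seq = [u(k|k), ..., u(k+Nc-1|k)], minimizing J subject to the constraints
        u_seq = solve_optimal_sequence(x_hat, w_future)
        return u_seq[0]           # only u(k) = u(k|k) is sent to the process

    # at every sample: estimate the state, solve, apply, shift the horizon
    # for k in itertools.count():
    #     x_hat = observer_update(x_hat, y_meas, u_prev)
    #     u_prev = mpc_step(x_hat, w_future(k), solve_optimal_sequence, Nc)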
The predictions ŷ_p(k+j|k) in (2.17) and x̂(k+j|k) in (2.18) are based on dynamical models. Common model representations in MPC are polynomial models, step response models, impulse response models and state space models. In this course we will consider a state space representation of the model. In Appendix D it is shown how other realizations can be transformed into a state space model.

Chapter 3

Prediction

In this chapter we consider the concept of prediction. We consider the following standard model:
\[
\begin{aligned}
x(k+1) &= A\,x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k) \\
y(k)   &= C_1 x(k) + D_{11} e(k) + D_{12} w(k) \\
p(k)   &= C_p x(k) + D_{p1} e(k) + D_{p2} w(k) + D_{p3} v(k)
\end{aligned}  \tag{3.1}
\]
The prediction signal p(k) can be any signal that can be formulated as in expression (3.1) (so the state x(k), the output signal y(k) or the tracking error r(k) - y(k)). We will come back to the particular choices of the signal p in Chapter 4.
The control law in predictive control is, no surprise, based on prediction. At each time instant k we consider all signals over the horizon N. To do so we make predictions at time k of the signal p(k+j) for j = 1, 2, ..., N. These predictions, denoted by p̂(k+j|k), are based on model (3.1), using the knowledge available at time k and the future control signals v(k|k), v(k+1|k), ..., v(k+N-1|k).
At time instant k we define the signal vector p̃(k) with the predicted signals p̂(k+j|k), the signal vector ṽ(k) with the future control signals v(k+j|k) and the signal vector w̃(k) with the future known external signals w(k+j|k) in the interval 0 ≤ j ≤ N-1 as follows:
\[
\tilde p(k) = \begin{bmatrix} \hat p(k|k) \\ \hat p(k+1|k) \\ \vdots \\ \hat p(k+N-1|k) \end{bmatrix}, \quad
\tilde v(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix}, \quad
\tilde w(k) = \begin{bmatrix} w(k|k) \\ w(k+1|k) \\ \vdots \\ w(k+N-1|k) \end{bmatrix}  \tag{3.2}
\]
The goal of the prediction is to find an estimate for the prediction signal p̃(k), composed of a free-response signal p̃_0(k) and a matrix D̃_p3, such that
\[
\tilde p(k) = \tilde p_0(k) + \tilde D_{p3}\, \tilde v(k)
\]
where the term D̃_p3 ṽ(k) is the part of p̃(k) which is due to the selected control signals v(k+j|k) for j ≥ 0, and p̃_0(k) is the so-called free response. This free response p̃_0(k) is the predicted output signal when the future input signal is put to zero (v(k+j|k) = 0 for j ≥ 0) and depends on the present state x(k), the future values of the known signal w, and the characteristics of the noise signal e.

3.1 Noiseless case

The prediction mainly consists of two parts: computation of the output signal as a result of the selected control signal v(k+j|k) and the known signal w(k+j), and a prediction of the response due to the noise signal e(k+j). In this section we will assume the noiseless case (so e(k+j) = 0) and we only have to concentrate on the response due to the present and future values v(k+j|k) of the input signal and the known signal w(k+j).
Consider the model from (3.1) for the noiseless case (e(k) = 0):
\[
\begin{aligned}
x(k+1) &= A\,x(k) + B_2 w(k) + B_3 v(k) \\
p(k)   &= C_p x(k) + D_{p2} w(k) + D_{p3} v(k)
\end{aligned}
\]
For this state space model the prediction of the state is computed using successive substitution (Kinnaert [34]):
\[
\begin{aligned}
\hat x(k+j|k) &= A\,\hat x(k+j-1|k) + B_2 w(k+j-1|k) + B_3 v(k+j-1|k) \\
&= A^2 \hat x(k+j-2|k) + A B_2 w(k+j-2|k) + B_2 w(k+j-1|k) \\
&\qquad + A B_3 v(k+j-2|k) + B_3 v(k+j-1|k) \\
&\;\;\vdots \\
&= A^j x(k) + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)
\end{aligned}
\]
\[
\begin{aligned}
\hat p(k+j|k) &= C_p \hat x(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k) \\
&= C_p A^j x(k) + \sum_{i=1}^{j} C_p A^{i-1} B_2 w(k+j-i|k) + D_{p2} w(k+j|k) \\
&\qquad + \sum_{i=1}^{j} C_p A^{i-1} B_3 v(k+j-i|k) + D_{p3} v(k+j|k)
\end{aligned}
\]

Now define
\[
\tilde C_p = \begin{bmatrix} C_p \\ C_p A \\ C_p A^2 \\ \vdots \\ C_p A^{N-1} \end{bmatrix}, \qquad
\tilde D_{p2} = \begin{bmatrix}
D_{p2} & 0 & \cdots & 0 & 0 \\
C_p B_2 & D_{p2} & \cdots & 0 & 0 \\
C_p A B_2 & C_p B_2 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & D_{p2} & 0 \\
C_p A^{N-2} B_2 & C_p A^{N-3} B_2 & \cdots & C_p B_2 & D_{p2}
\end{bmatrix}  \tag{3.3}
\]
\[
\tilde D_{p3} = \begin{bmatrix}
D_{p3} & 0 & \cdots & 0 & 0 \\
C_p B_3 & D_{p3} & \cdots & 0 & 0 \\
C_p A B_3 & C_p B_3 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & D_{p3} & 0 \\
C_p A^{N-2} B_3 & C_p A^{N-3} B_3 & \cdots & C_p B_3 & D_{p3}
\end{bmatrix}  \tag{3.4}
\]
then the vector p̃(k) with the predicted output values can be written as
\[
\tilde p(k) = \tilde C_p x(k) + \tilde D_{p2} \tilde w(k) + \tilde D_{p3} \tilde v(k)
            = \tilde p_0(k) + \tilde D_{p3} \tilde v(k)
\]
In this equation p̃_0(k) = C̃_p x(k) + D̃_p2 w̃(k) is the prediction of the output if ṽ(k) is chosen equal to zero, denoted as the free-response signal, and D̃_p3 is the predictor matrix, describing the relation between the future control vector ṽ(k) and the predicted output p̃(k).
The predictor matrix D̃_p3 is a Toeplitz matrix. The matrix relates future inputs to future output signals, and contains the elements D_p3 and C_p A^j B_3, j ≥ 0, which are exactly equal to the impulse response parameters
\[
g_i = \begin{cases} D_{p3} & \text{for } i = 0 \\ C_p A^{i-1} B_3 & \text{for } i > 0 \end{cases}
\]
of the transfer function P_{p3}(q) = C_p (qI - A)^{-1} B_3 + D_{p3}, which describes the input-output relation between the signals v(k) and p(k).
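The block-Toeplitz structure of (3.3)-(3.4) is easy to generate numerically. The sketch below (plain numpy, not the course MATLAB toolbox; `prediction_matrices` is a name chosen here for illustration) builds C̃_p and a predictor matrix of this form for a given horizon N.

    import numpy as np

    def prediction_matrices(A, B, Cp, Dp, N):
        """Return (Ct, Dt): the stacked matrix [Cp; Cp A; ...; Cp A^{N-1}] and the
        lower block-triangular Toeplitz matrix with blocks Dp, Cp B, Cp A B, ...
        Dp must be a 2-D array so that the block sizes can be read from it."""
        ny, nu = Dp.shape
        # impulse response parameters g_0 = Dp, g_i = Cp A^{i-1} B
        g = [Dp] + [Cp @ np.linalg.matrix_power(A, i - 1) @ B for i in range(1, N)]
        Ct = np.vstack([Cp @ np.linalg.matrix_power(A, j) for j in range(N)])
        Dt = np.zeros((N * ny, N * nu))
        for row in range(N):
            for col in range(row + 1):
                Dt[row * ny:(row + 1) * ny, col * nu:(col + 1) * nu] = g[row - col]
        return Ct, Dt

    # p_tilde(k) = Ct x(k) + Dt2 w_tilde(k) + Dt3 v_tilde(k), with
    # Dt2 = prediction_matrices(A, B2, Cp, Dp2, N)[1] and Dt3 built from B3, Dp3.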

3.2 Noisy case

As was stated in the previous section, the second part of the prediction scheme is making a prediction of the response due to the noise signal e. If we know the characteristics of the noise signal, this information can be used for prediction of the expected noise signal over the future horizon. In case we do not know anything about the characteristics of the disturbances, the best assumption we can make is that e is zero-mean white noise. In these lecture notes we use this assumption of zero-mean white noise e, so the best prediction for future values of the signal e(k+j), j > 0, will be zero (ê(k+j) = 0 for j > 0).
Consider the model given in equation (3.1):
\[
\begin{aligned}
x(k+1) &= A\,x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k) \\
y(k)   &= C_1 x(k) + D_{11} e(k) + D_{12} w(k) \\
p(k)   &= C_p x(k) + D_{p1} e(k) + D_{p2} w(k) + D_{p3} v(k)
\end{aligned}
\]
The prediction of the state using successive substitution of the state equation is given by (Kinnaert [34]):
\[
\hat x(k+j|k) = A^j x(k) + \sum_{i=1}^{j} A^{i-1} B_1 \hat e(k+j-i|k)
 + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)
\]
\[
\hat p(k+j|k) = C_p \hat x(k+j|k) + D_{p1} \hat e(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k)
\]
The term ê(k+m|k) for m > 0 is a prediction of e(k+m), based on measurements at time k. If the signal e is assumed to be zero-mean white noise, the best prediction of e(k+m), made at time k, is
\[
\hat e(k+m|k) = 0 \quad \text{for } m > 0
\]
Substitution then results in (j > 0)
\[
\hat x(k+j|k) = A^j x(k) + A^{j-1} B_1 \hat e(k|k)
 + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)
\]
\[
\begin{aligned}
\hat p(k+j|k) &= C_p \hat x(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k) \\
&= C_p A^j x(k) + C_p A^{j-1} B_1 \hat e(k|k)
 + \sum_{i=1}^{j} C_p A^{i-1} B_2 w(k+j-i|k) + D_{p2} w(k+j|k) \\
&\qquad + \sum_{i=1}^{j} C_p A^{i-1} B_3 v(k+j-i|k) + D_{p3} v(k+j|k)
\end{aligned}
\]
\[
\hat p(k|k) = C_p x(k) + D_{p1} \hat e(k|k) + D_{p2} w(k) + D_{p3} v(k)
\]


Define
\[
\tilde D_{p1} = \begin{bmatrix} D_{p1} \\ C_p B_1 \\ C_p A B_1 \\ \vdots \\ C_p A^{N-2} B_1 \end{bmatrix}  \tag{3.5}
\]
then the vector p̃(k) with the predicted output values can be written as
\[
\tilde p(k) = \tilde C_p x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k) + \tilde D_{p3} \tilde v(k)
            = \tilde p_0(k) + \tilde D_{p3} \tilde v(k)  \tag{3.6}
\]
In this equation the free-response signal is equal to
\[
\tilde p_0(k) = \tilde C_p x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k)  \tag{3.7}
\]
Now, assuming D_11 to be invertible, we derive an expression for ê(k) from (3.1):
\[
\hat e(k|k) = D_{11}^{-1} \big( y(k) - C_1 \hat x(k) - D_{12} w(k) \big)
\]
in which y(k) is the measured process output and x̂(k) is the estimate of the state x(k) made at the previous time instant k-1. Then
\[
\tilde p_0(k) = \tilde C_p \hat x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k)  \tag{3.8}
\]
\[
= \tilde C_p \hat x(k) + \tilde D_{p1} D_{11}^{-1} \big( y(k) - C_1 \hat x(k) - D_{12} w(k) \big) + \tilde D_{p2} \tilde w(k)  \tag{3.9}
\]
\[
= \big( \tilde C_p - \tilde D_{p1} D_{11}^{-1} C_1 \big) \hat x(k) + \tilde D_{p1} D_{11}^{-1} y(k)
 + \big( \tilde D_{p2} - \tilde D_{p1} D_{11}^{-1} D_{12} E_w \big) \tilde w(k)  \tag{3.10}
\]
where E_w = [ I  0  \cdots  0 ] is a selection matrix such that
\[
w(k) = E_w \tilde w(k)
\]

Example 1: Prediction of the output signal of a system

In this example we consider the prediction of the output signal p(k) = y(k) of a second order system. We have
\[
\begin{aligned}
x_{o1}(k+1) &= 0.7\,x_{o1}(k) + x_{o2}(k) + e_o(k) + 0.1\,d_o(k) + u(k) \\
x_{o2}(k+1) &= -0.1\,x_{o1}(k) - 0.2\,e_o(k) - 0.05\,d_o(k) \\
y(k) &= x_{o1}(k) + e_o(k)
\end{aligned}
\]
where y(k) is the process output, v(k) = u(k) is the process input, w(k) = d_o(k) is a known disturbance signal and e(k) = e_o(k) is assumed to be ZMWN. We obtain the model of (3.1) by setting
\[
A = \begin{bmatrix} 0.7 & 1 \\ -0.1 & 0 \end{bmatrix}, \quad
B_1 = \begin{bmatrix} 1 \\ -0.2 \end{bmatrix}, \quad
B_2 = \begin{bmatrix} 0.1 \\ -0.05 \end{bmatrix}, \quad
B_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\]
\[
C_1 = C_p = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad
D_{11} = D_{p1} = 1, \quad
D_{12} = D_{p2} = 0, \quad
D_{13} = D_{p3} = 0
\]
For N = 4 we find:
\[
\tilde C_1 = \begin{bmatrix} C_1 \\ C_1 A \\ C_1 A^2 \\ C_1 A^3 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0.7 & 1.0 \\ 0.39 & 0.7 \\ 0.203 & 0.39 \end{bmatrix}, \qquad
\tilde D_{11} = \begin{bmatrix} D_{11} \\ C_1 B_1 \\ C_1 A B_1 \\ C_1 A^2 B_1 \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \\ 0.5 \\ 0.25 \end{bmatrix}
\]
\[
\tilde D_{12} = \begin{bmatrix}
D_{12} & 0 & 0 & 0 \\
C_1 B_2 & D_{12} & 0 & 0 \\
C_1 A B_2 & C_1 B_2 & D_{12} & 0 \\
C_1 A^2 B_2 & C_1 A B_2 & C_1 B_2 & D_{12}
\end{bmatrix}
= \begin{bmatrix}
0 & 0 & 0 & 0 \\
0.1 & 0 & 0 & 0 \\
0.02 & 0.1 & 0 & 0 \\
0.004 & 0.02 & 0.1 & 0
\end{bmatrix}
\]
\[
\tilde D_{13} = \begin{bmatrix}
D_{13} & 0 & 0 & 0 \\
C_1 B_3 & D_{13} & 0 & 0 \\
C_1 A B_3 & C_1 B_3 & D_{13} & 0 \\
C_1 A^2 B_3 & C_1 A B_3 & C_1 B_3 & D_{13}
\end{bmatrix}
= \begin{bmatrix}
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0.7 & 1 & 0 & 0 \\
0.39 & 0.7 & 1 & 0
\end{bmatrix}
\]
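The numbers of Example 1 can be verified with a few lines of numpy (a standalone check, not the course toolbox); the first two matrices are computed directly from their definitions, and the Toeplitz matrices follow the same pattern.

    import numpy as np

    A  = np.array([[0.7, 1.0], [-0.1, 0.0]])
    B1 = np.array([[1.0], [-0.2]])
    C1 = np.array([[1.0, 0.0]])
    D11 = np.array([[1.0]])

    N = 4
    C1t = np.vstack([C1 @ np.linalg.matrix_power(A, j) for j in range(N)])
    D11t = np.vstack([D11] + [C1 @ np.linalg.matrix_power(A, j) @ B1 for j in range(N - 1)])
    print(C1t)    # rows [1 0], [0.7 1], [0.39 0.7], [0.203 0.39]
    print(D11t)   # column [1, 1, 0.5, 0.25]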

3.3 The use of a priori knowledge in making predictions

In this chapter we have discussed the prediction of the signal p(k), based on the vector
\[
\tilde w(k) = \begin{bmatrix} w(k) \\ w(k+1) \\ \vdots \\ w(k+N-1) \end{bmatrix}
\]
This means that we assume future values of the reference signal or the disturbance signals to be known at time k. In practice this is often not the case, and estimates of these signals have to be made:
\[
\tilde w(k) = \begin{bmatrix} \hat w(k|k) \\ \hat w(k+1|k) \\ \vdots \\ \hat w(k+N-1|k) \end{bmatrix}
\]
The estimation can be done in various ways. One can use heuristics to compute the estimates ŵ(k+j|k), or one can use an extrapolation technique. With extrapolation we can construct the estimates of w beyond the present time instant k. It is similar to the process of interpolation, but its results are often subject to greater uncertainty. We will discuss three types of extrapolation: zero-th order, first order and polynomial extrapolation. We assume the past values of w to be known, and compute the future values of w on the basis of the past values of w.

Zero-th order extrapolation:
In a zero-th order extrapolation we assume w(k+j) to be constant for j ≥ 0, and so ŵ(k+j|k) = w(k-1) for j ≥ 0. This results in
\[
\tilde w(k) = \begin{bmatrix} w(k-1) \\ w(k-1) \\ \vdots \\ w(k-1) \end{bmatrix}
\]


Linear (first order) extrapolation:
Linear (or first order) extrapolation means creating a tangent line through the last known values of w and extending it beyond the present time. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function, or not too far beyond the known data. We define
\[
\hat w(k+j|k) = w(k-1) + (j+1)\big( w(k-1) - w(k-2) \big) \quad \text{for } j \ge 0.
\]
This results in
\[
\tilde w(k) = \begin{bmatrix} 2\,w(k-1) - w(k-2) \\ 3\,w(k-1) - 2\,w(k-2) \\ \vdots \\ (N+1)\,w(k-1) - N\,w(k-2) \end{bmatrix}
\]

Polynomial extrapolation:
A polynomial curve can be created through more than one or two past values of w. For an n-th order extrapolation we use n+1 past values of w. Note that high order polynomial extrapolation must be used with due care: the error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. Note that the zero-th and first order extrapolations are special cases of polynomial extrapolation.
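The zero-th and first-order extrapolations above can be written down directly; the sketch below (illustrative numpy, w assumed scalar per sample) builds the estimated vector w̃(k).

    import numpy as np

    def extrapolate_zero_order(w_km1, N):
        """w_hat(k+j|k) = w(k-1) for j = 0..N-1."""
        return np.array([w_km1 for _ in range(N)])

    def extrapolate_first_order(w_km1, w_km2, N):
        """Extend the tangent through w(k-2) and w(k-1):
        w_hat(k+j|k) = w(k-1) + (j+1) * (w(k-1) - w(k-2))."""
        return np.array([w_km1 + (j + 1) * (w_km1 - w_km2) for j in range(N)])

    print(extrapolate_zero_order(1.0, 4))          # [1. 1. 1. 1.]
    print(extrapolate_first_order(1.0, 0.5, 4))    # [1.5 2.  2.5 3. ]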
The quality of extrapolation
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the signal w made by the method. If the method assumes the signal is smooth,
then a non-smooth signal will be poorly extrapolated. Even for proper assumptions about
the signal, the extrapolation can diverge exponentially from the real signal. For particular problems, additional information about w may be available which may improve the
convergence.
In practice, the zero-th order extrapolation is mostly used, mainly because it never diverges,
but also due to lack of information about the future behavior of the signal w.


Chapter 4

Standard formulation

In this chapter we will standardize the predictive control setting and formulate the standard predictive control problem. First we consider the performance index, secondly the constraints.

4.1 The performance index

In predictive control various types of performance indices are presented, in which either the input signal is weighted together with the state (LQPC), or the input increment signal is weighted together with the tracking error (GPC). This leads to two different types of performance indices. We will show that both can be combined into one standard performance index:
The standard predictive control performance index:
In this course we will use the state space setting of LQPC as well as the use of a reference
signal as is done in GPC. Instead of the state x or the output y we will use a general
signal z in the performance index. We therefore adopt the standard predictive control
performance index:

$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (4.1)$$

where ẑ(k + j|k) is the prediction of z(k + j) at time k and Γ(j) is a diagonal selection matrix with ones and zeros on the diagonal. Finally, we use a generalized input signal v(k), which can either be the input signal u(k) or the input increment signal Δu(k), and we use a signal w(k), which contains all known external signals, such as the reference signal r(k) and the known disturbance signal d(k). In most cases the noise signal e is ZMWN, although in some cases we actually have integrated ZMWN. The state signal x, input signal v, output signal y, noise signal e, external signal w and performance signal z are related by the following equations:


x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)    (4.2)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)    (4.3)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)    (4.4)

We will show how both the LQPC and GPC performance index can be rewritten as a
standard performance index.
The LQPC performance index:
In the LQPC performance index (see Section 2.3) the state is measured together with the
input signal. The fact that we use the input in the performance index means that we will
use an IO-model. The LQPC performance index is now given by
$$J(u,k)=\sum_{j=N_m}^{N}\hat x_o^T(k+j|k)\,Q\,\hat x_o(k+j|k)+\sum_{j=1}^{N}u^T(k+j-1|k)\,R\,u(k+j-1|k) \qquad (4.5)$$
where xo is the state of the IO-model
xo(k + 1) = Ao xo(k) + Ko eo(k) + Lo do(k) + Bo u(k)    (4.6)
y(k) = Co xo(k) + DH eo(k) + DF do(k)    (4.7)
and N ≥ Nm ≥ 1, while Q and R are positive semi-definite.


Theorem 2 Transformation of LQPC into standard form
Given the MPC problem with LQPC performance index (4.5) for IO model (4.6),(4.7).
This LQPC problem can be translated into a standard predictive control problem with performance index (4.1) and model (4.2),(4.3),(4.4) by the following substitutions:
x(k) = xo(k),   v(k) = u(k),   w(k) = do(k),   e(k) = eo(k)
A = Ao,   B1 = Ko,   B2 = Lo,   B3 = Bo
C1 = Co,   D11 = DH,   D12 = DF
$$C_2=\begin{bmatrix}Q^{1/2}A_o\\0\end{bmatrix}\quad D_{21}=\begin{bmatrix}Q^{1/2}K_o\\0\end{bmatrix}\quad D_{22}=\begin{bmatrix}Q^{1/2}L_o\\0\end{bmatrix}\quad D_{23}=\begin{bmatrix}Q^{1/2}B_o\\R^{1/2}\end{bmatrix}$$
$$\Gamma(j)=\begin{bmatrix}\Gamma_1(j)&0\\0&I\end{bmatrix},\qquad \Gamma_1(j)=\begin{cases}0&\text{for }0\le j<N_m-1\\ I&\text{for }N_m-1\le j\le N-1\end{cases}$$


The proof of this theorem is in Appendix E.
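To make the substitutions of Theorem 2 concrete, the sketch below (an added illustration, not part of the notes; the function names are hypothetical) assembles the standard-form matrices from given LQPC data, using scipy.linalg.sqrtm for the matrix square roots of the semi-definite weights Q and R.

```python
import numpy as np
from scipy.linalg import sqrtm

def lqpc_to_standard(Ao, Bo, Co, Ko, Lo, DH, DF, Q, R):
    """Standard-form matrices for the LQPC problem of Theorem 2 (sketch)."""
    Qh = np.real(sqrtm(Q))          # Q^{1/2}
    Rh = np.real(sqrtm(R))          # R^{1/2}
    nQ, nR = Qh.shape[0], Rh.shape[0]
    A, B1, B2, B3 = Ao, Ko, Lo, Bo
    C1, D11, D12 = Co, DH, DF
    C2  = np.vstack([Qh @ Ao, np.zeros((nR, Ao.shape[1]))])
    D21 = np.vstack([Qh @ Ko, np.zeros((nR, Ko.shape[1]))])
    D22 = np.vstack([Qh @ Lo, np.zeros((nR, Lo.shape[1]))])
    D23 = np.vstack([Qh @ Bo, Rh])
    return A, B1, B2, B3, C1, D11, D12, C2, D21, D22, D23

def gamma(j, N, Nm, nQ, nR):
    """Selection matrix Gamma(j) of Theorem 2."""
    G1 = np.eye(nQ) if (Nm - 1 <= j <= N - 1) else np.zeros((nQ, nQ))
    return np.block([[G1, np.zeros((nQ, nR))],
                     [np.zeros((nR, nQ)), np.eye(nR)]])
```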


The GPC performance index:
In the GPC performance index (see Section 2.3) the tracking error signal is measured
together with the increment input signal. The fact that we use the increment input in the
performance index means that we will use an IIO-model. The GPC performance index is
now given by
$$J(\Delta u,k)=\sum_{j=N_m}^{N}\bigl(\hat y_p(k+j|k)-r(k+j)\bigr)^T\bigl(\hat y_p(k+j|k)-r(k+j)\bigr)+\lambda^2\sum_{j=1}^{N}\Delta u^T(k+j-1|k)\,\Delta u(k+j-1|k) \qquad (4.8)$$
where N ≥ Nm ≥ 1 and λ ∈ ℝ. The output y(k) and increment input Δu(k) are related by the IIO model
xi(k + 1) = Ai xi(k) + Ki ei(k) + Li di(k) + Bi Δu(k)    (4.9)
y(k) = Ci xi(k) + DH ei(k)    (4.10)
and the weighted output yp(k) = P(q) y(k) is given in state space representation
xp(k + 1) = Ap xp(k) + Bp y(k)    (4.11)
yp(k) = Cp xp(k) + Dp y(k)    (4.12)

(Note that for GPC we choose DF = 0)


Theorem 3 Transformation of GPC into standard form
Given the MPC problem with GPC performance index (4.8) for IIO model (4.9),(4.10)
and the weighting lter P (q), given by (4.11),(4.12). This GPC problem can be translated into a standard predictive control problem with performance index (4.1) and model
(4.2),(4.3),(4.4) by the following substitutions:




$$x(k)=\begin{bmatrix}x_p(k)\\x_i(k)\end{bmatrix},\quad v(k)=\Delta u(k),\quad w(k)=\begin{bmatrix}d_i(k)\\r(k+1)\end{bmatrix},\quad e(k)=e_i(k)$$
$$A=\begin{bmatrix}A_p&B_pC_i\\0&A_i\end{bmatrix}\quad B_1=\begin{bmatrix}B_pD_H\\K_i\end{bmatrix}\quad B_2=\begin{bmatrix}0&0\\L_i&0\end{bmatrix}\quad B_3=\begin{bmatrix}0\\B_i\end{bmatrix}$$
$$C_1=\begin{bmatrix}0&C_i\end{bmatrix}\qquad D_{11}=D_H\qquad D_{12}=\begin{bmatrix}0&0\end{bmatrix}$$
$$C_2=\begin{bmatrix}C_pA_p&C_pB_pC_i+D_pC_iA_i\\0&0\end{bmatrix}\qquad D_{21}=\begin{bmatrix}C_pB_pD_H+D_pC_iK_i\\0\end{bmatrix}$$
$$D_{22}=\begin{bmatrix}D_pC_iL_i&-I\\0&0\end{bmatrix}\qquad D_{23}=\begin{bmatrix}D_pC_iB_i\\ \lambda I\end{bmatrix}$$
$$\Gamma(j)=\begin{bmatrix}\Gamma_1(j)&0\\0&I\end{bmatrix},\qquad \Gamma_1(j)=\begin{cases}0&\text{for }0\le j<N_m-1\\ I&\text{for }N_m-1\le j\le N-1\end{cases}$$

The proof of this theorem is in Appendix E.


The zone performance index: The zone performance index (see Section 2.3) uses the
IO-model, and is given by
$$J(u,k)=\sum_{j=N_m}^{N}\hat\varepsilon^T(k+j|k)\,\hat\varepsilon(k+j|k)+\sum_{j=1}^{N}\lambda^2\,u^T(k+j-1|k)\,u(k+j-1|k) \qquad (4.13)$$

where the εi(k), i = 1, . . . , m, contribute to the performance index only if |yi(k) − ri(k)| > δmax,i:
$$\varepsilon_i(k)=\begin{cases}0&\text{for } |y_i(k)-r_i(k)|\le\delta_{max,i}\\ y_i(k)-r_i(k)-\delta_{max,i}&\text{for } y_i(k)-r_i(k)\ge\delta_{max,i}\\ y_i(k)-r_i(k)+\delta_{max,i}&\text{for } y_i(k)-r_i(k)\le-\delta_{max,i}\end{cases}$$
so
$$|\varepsilon_i(k)|=\min_{|\delta_i(k)|\le\delta_{max,i}}\ |y_i(k)-r_i(k)+\delta_i(k)|$$
There holds N ≥ Nm ≥ 1 and λ ∈ ℝ. The output y(k) and input u(k) are related by the IO model
xo(k + 1) = Ao xo(k) + Ko eo(k) + Lo do(k) + Bo u(k)    (4.14)
y(k) = Co xo(k) + DH eo(k) + DF do(k)    (4.15)

Theorem 4 Transformation of zone control into standard form

Given the MPC problem with zone performance index (4.13) for IO model (4.14),(4.15). This problem can be translated into a standard predictive control problem with performance index (4.1) and model (4.2),(4.3),(4.4) by the following substitutions:
$$z(k)=\begin{bmatrix}y(k+1|k)-r(k+1)+\delta(k+1)\\ \lambda\,u(k)\end{bmatrix}\qquad v(k)=\begin{bmatrix}u(k)\\ \delta(k+1)\end{bmatrix}$$
$$w(k)=\begin{bmatrix}d_o(k)\\ r(k+1)\end{bmatrix}\qquad e(k)=e_o(k)$$
$$A=A_o\qquad B_1=K_o\qquad B_2=\begin{bmatrix}L_o&0\end{bmatrix}\qquad B_3=\begin{bmatrix}B_o&0\end{bmatrix}$$
$$C_1=C_o\qquad D_{11}=D_H\qquad D_{12}=\begin{bmatrix}D_F&0\end{bmatrix}$$
$$C_2=\begin{bmatrix}C_oA_o\\0\end{bmatrix}\quad D_{21}=\begin{bmatrix}C_oK_o\\0\end{bmatrix}\quad D_{22}=\begin{bmatrix}C_oL_o&-I\\0&0\end{bmatrix}\quad D_{23}=\begin{bmatrix}C_oB_o&I\\ \lambda I&0\end{bmatrix}$$
$$C_4=\begin{bmatrix}0\\0\end{bmatrix}\quad D_{41}=\begin{bmatrix}0\\0\end{bmatrix}\quad D_{42}=\begin{bmatrix}0&0\\0&0\end{bmatrix}\quad D_{43}=\begin{bmatrix}0&I\\0&-I\end{bmatrix}\quad \Phi=\begin{bmatrix}\delta_{max}\\ \delta_{max}\end{bmatrix}$$
then:
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
φ(k) = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k) ≤ Φ    (4.16)

The proof is left to the reader. Note that we need a constraint on δ(k + 1). This can be incorporated through the techniques presented in the next section.
Both the LQPC performance index and the GPC performance index are frequently used in
practice and in literature. The LQPC performance index resembles the LQ optimal control
performance index. The choices of the matrices Q and R can be done in a similar way as
in LQ optimal control. The GPC performance index originates from the adaptive control
community and is based on SISO polynomial models.
The standard predictive control performance index:
We have found that the GPC, LQPC and the zone performance indices can be rewritten into the standard form:
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (4.17)$$

If we define the signal vector z̃(k) with the predicted performance signals ẑ(k + j|k) in the interval 0 ≤ j ≤ N − 1, and the block diagonal matrix Γ̄, as follows
$$\tilde z(k)=\begin{bmatrix}\hat z(k|k)\\ \hat z(k+1|k)\\ \vdots\\ \hat z(k+N-1|k)\end{bmatrix}\qquad
\bar\Gamma=\begin{bmatrix}\Gamma(0)&0&\cdots&0\\ 0&\Gamma(1)&&0\\ \vdots&&\ddots&\vdots\\ 0&0&\cdots&\Gamma(N-1)\end{bmatrix} \qquad (4.18)$$
then the standard predictive control performance index (4.17) can be rewritten in the following way:
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k)=\tilde z^T(k)\,\bar\Gamma\,\tilde z(k)$$
where z̃(k) can be computed using the formulas from Chapter 3:
$$\tilde z(k)=\tilde C_2\,x(k)+\tilde D_{21}\,e(k)+\tilde D_{22}\,\tilde w(k)+\tilde D_{23}\,\tilde v(k) \qquad (4.19)$$


with
$$\tilde C_2=\begin{bmatrix}C_2\\ C_2A\\ C_2A^2\\ \vdots\\ C_2A^{N-1}\end{bmatrix}\qquad
\tilde D_{21}=\begin{bmatrix}D_{21}\\ C_2B_1\\ C_2AB_1\\ \vdots\\ C_2A^{N-2}B_1\end{bmatrix}$$
$$\tilde D_{22}=\begin{bmatrix}D_{22}&0&\cdots&0&0\\ C_2B_2&D_{22}&\cdots&0&0\\ C_2AB_2&C_2B_2&\ddots&\vdots&\vdots\\ \vdots&\vdots&\ddots&D_{22}&0\\ C_2A^{N-2}B_2&C_2A^{N-3}B_2&\cdots&C_2B_2&D_{22}\end{bmatrix}$$
$$\tilde D_{23}=\begin{bmatrix}D_{23}&0&\cdots&0&0\\ C_2B_3&D_{23}&\cdots&0&0\\ C_2AB_3&C_2B_3&\ddots&\vdots&\vdots\\ \vdots&\vdots&\ddots&D_{23}&0\\ C_2A^{N-2}B_3&C_2A^{N-3}B_3&\cdots&C_2B_3&D_{23}\end{bmatrix}$$
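The block-Toeplitz structure above is easy to build numerically. The sketch below (an added illustration, not from the original notes; the function name is hypothetical) constructs C̃2, D̃21, D̃22 and D̃23 for a given horizon N with plain numpy.

```python
import numpy as np

def build_prediction_matrices(A, B1, B2, B3, C2, D21, D22, D23, N):
    """Return (C2t, D21t, D22t, D23t) such that
    ztilde = C2t x(k) + D21t e(k) + D22t wtilde + D23t vtilde."""
    # C2t = [C2; C2 A; ...; C2 A^{N-1}],  D21t = [D21; C2 B1; ...; C2 A^{N-2} B1]
    C2t  = np.vstack([C2 @ np.linalg.matrix_power(A, j) for j in range(N)])
    D21t = np.vstack([D21] + [C2 @ np.linalg.matrix_power(A, j) @ B1
                              for j in range(N - 1)])

    def toeplitz_block(B, D):
        # lower block-triangular Toeplitz with D on the diagonal
        # and C2 A^{i-j-1} B below the diagonal
        rows = []
        for i in range(N):
            row = []
            for j in range(N):
                if i == j:
                    row.append(D)
                elif i > j:
                    row.append(C2 @ np.linalg.matrix_power(A, i - j - 1) @ B)
                else:
                    row.append(np.zeros((D.shape[0], D.shape[1])))
            rows.append(np.hstack(row))
        return np.vstack(rows)

    return C2t, D21t, toeplitz_block(B2, D22), toeplitz_block(B3, D23)
```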

4.2 Handling constraints

As was already discussed in chapter 1, in practical situations there will be signal constraints. These constraints on control, state and output signals are motivated by safety
and environmental demands, economical perspectives and equipment limitations. There
are two types of constraints:
Equality constraints:
In general, we impose constraints on the actual inputs and states. Clearly, for future
constraints we often replace the actual input and state constraints by constraints
on the predicted state and input. However, in particular for equality constraints it
often occurs that we actually want to impose constraints directly on the predicted
state and input. This is due to the fact that equality constraints are usually
motivated by the control algorithm. For example, to decrease the degrees of freedom,
we introduce the control horizon Nc , and to guarantee stability we can add a state
end-point constraint to the problem.
Hence we use:
$$A_j\,\hat\psi(k + j|k)=0$$
for j = 1, . . . , N, where ψ̂(k + j|k) is the prediction of ψ(k + j) at time k. The matrix A_j is used to indicate that certain equality constraints are only active for specific j. For instance, in case of an endpoint penalty, we would choose A_N = I and A_j = 0 for j = 0, . . . , N − 1.


In this course we look at equality constraints in a generalized form. We can stack all constraints at time k on predicted inputs or states in one vector:
$$\bar\psi(k)=\begin{bmatrix}A_1\hat\psi(k+1|k)\\ A_2\hat\psi(k+2|k)\\ \vdots\\ A_N\hat\psi(k+N|k)\end{bmatrix}$$
Clearly, if some of the A_i are zero or do not have full rank, we can reduce the size of this vector by only looking at the active constraints.
Using this framework, we can express the equality constraints in the form:
$$\bar\psi(k)=\tilde C_3\,x(k)+\tilde D_{31}\,e(k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0 \qquad (4.20)$$
where w̃(k) contains all known future external signals, i.e. w̃(k) is a vector consisting of w(k), w(k + 1), . . . , w(k + N). Similarly, ṽ(k) contains the predicted future input signals, i.e. ṽ(k) is a vector consisting of v(k), v(k + 1|k), . . . , v(k + N|k).
Inequality constraints:
Inequality constraints are due to physical, economical or safety limitations. We actually prefer to work with constraints on a signal φ(k) which consists of actual inputs and states:
$$\phi(k)\le\Phi(k)\qquad\forall k$$
Since these are unknown we have to work with predictions, i.e. we have constraints of the form:
$$\hat\phi(k + j|k)\le\Phi(k + j)$$
for j = 0, . . . , N. Again, we stack all our inequality constraints:
$$\bar\phi(k)=\begin{bmatrix}\hat\phi(k|k)\\ \hat\phi(k+1|k)\\ \vdots\\ \hat\phi(k+N|k)\end{bmatrix}$$
and similarly Φ̄(k) is a stacked vector consisting of Φ(k), Φ(k + 1), . . . , Φ(k + N).
In this course we then look at inequality constraints in a generalized form:
$$\bar\phi(k)=\tilde C_4\,x(k)+\tilde D_{41}\,e(k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k) \qquad (4.21)$$
where Φ̄(k) does not depend on future input signals.


In the above formulas ψ̄(k) and Φ̄(k) are vectors and the equalities and inequalities are meant element-wise. If there are multiple constraints, for example ψ̄1(k) = 0 and ψ̄2(k) = 0, and φ̄1(k) ≤ Φ̄1(k), . . . , φ̄4(k) ≤ Φ̄4(k), we can combine them by stacking them into one equality constraint and one inequality constraint:
$$\bar\psi(k)=\begin{bmatrix}\bar\psi_1(k)\\ \bar\psi_2(k)\end{bmatrix}=0\qquad
\bar\phi(k)=\begin{bmatrix}\bar\phi_1(k)\\ \vdots\\ \bar\phi_4(k)\end{bmatrix}\le\begin{bmatrix}\bar\Phi_1(k)\\ \vdots\\ \bar\Phi_4(k)\end{bmatrix}$$
We will show how the transformation is done for some common equality and inequality
constraints.

4.2.1 Equality constraints

Control horizon constraint for IO systems


The control horizon constraint for an IO-system (with v(k) = u(k)) is given by
v(k + Nc + j|k) = v(k + Nc − 1|k)    for j = 0, . . . , N − Nc − 1
Then the control horizon constraint is equivalent with
$$\begin{bmatrix}v(k+N_c|k)-v(k+N_c-1|k)\\ v(k+N_c+1|k)-v(k+N_c-1|k)\\ \vdots\\ v(k+N-1|k)-v(k+N_c-1|k)\end{bmatrix}=
\begin{bmatrix}0&\cdots&0&-I&I&0&\cdots&0\\ 0&\cdots&0&-I&0&I&\ddots&\vdots\\ \vdots&&\vdots&\vdots&\vdots&\ddots&\ddots&0\\ 0&\cdots&0&-I&0&\cdots&0&I\end{bmatrix}
\begin{bmatrix}v(k|k)\\ \vdots\\ v(k+N_c-2|k)\\ v(k+N_c-1|k)\\ v(k+N_c|k)\\ \vdots\\ v(k+N-1|k)\end{bmatrix}=\tilde D_{33}\,\tilde v(k)=0$$
By defining C̃3 = 0, D̃31 = 0 and D̃32 = 0, the control horizon constraint can be translated into the standard form of equation (4.20).
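As a small illustration (added here; the helper name is hypothetical), the equality-constraint matrix D̃33 of the control horizon can be assembled as follows, both for the IO case (inputs frozen at v(k + Nc − 1|k)) and for the IIO case treated next (increments forced to zero).

```python
import numpy as np

def control_horizon_D33(N, Nc, nv, io_model=True):
    """Rows enforce v(k+Nc+j|k) = v(k+Nc-1|k) (IO) or v(k+Nc+j|k) = 0 (IIO),
    for j = 0, ..., N-Nc-1, acting on vtilde = [v(k|k); ...; v(k+N-1|k)]."""
    I = np.eye(nv)
    D33 = np.zeros(((N - Nc) * nv, N * nv))
    for j in range(N - Nc):
        D33[j*nv:(j+1)*nv, (Nc+j)*nv:(Nc+j+1)*nv] = I
        if io_model:
            # subtract the last "free" input v(k+Nc-1|k)
            D33[j*nv:(j+1)*nv, (Nc-1)*nv:Nc*nv] = -I
    return D33

# N = 4, Nc = 2, scalar input: two constraint rows
print(control_horizon_D33(4, 2, 1, io_model=True))
print(control_horizon_D33(4, 2, 1, io_model=False))
```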


Control horizon constraint for IIO systems

The control horizon constraint for an IIO-system (with v(k) = Δu(k)) is given by
v(k + Nc + j|k) = 0    for j = 0, . . . , N − Nc − 1
Then the control horizon constraint is equivalent with
$$\begin{bmatrix}v(k+N_c|k)\\ v(k+N_c+1|k)\\ \vdots\\ v(k+N-1|k)\end{bmatrix}=
\begin{bmatrix}0&\cdots&0&I&0&\cdots&0\\ 0&\cdots&0&0&I&\ddots&\vdots\\ \vdots&&\vdots&\vdots&\ddots&\ddots&0\\ 0&\cdots&0&0&\cdots&0&I\end{bmatrix}
\begin{bmatrix}v(k|k)\\ \vdots\\ v(k+N_c-1|k)\\ v(k+N_c|k)\\ \vdots\\ v(k+N-1|k)\end{bmatrix}=\tilde D_{33}\,\tilde v(k)=0$$
By defining C̃3 = 0, D̃31 = 0 and D̃32 = 0, the control horizon constraint can be translated into the standard form of equation (4.20).

State end-point constraint

The state end-point constraint is defined as
x̂(k + N|k) = xss
where xss is the steady state. The state end-point constraint guarantees stability of the closed loop (see chapter 6). In section 5.2.1 we will discuss steady-state behavior, and we derive that xss can be given by
xss = Dssx w̃(k)
where the matrix Dssx is related to the steady-state properties of the system. The state end-point constraint now becomes
$$\hat x(k+N|k)-D_{ssx}\tilde w(k)=A^N x(k)+A^{N-1}B_1 e(k)+\begin{bmatrix}A^{N-1}B_2&A^{N-2}B_2&\cdots&B_2\end{bmatrix}\tilde w(k)+\begin{bmatrix}A^{N-1}B_3&A^{N-2}B_3&\cdots&B_3\end{bmatrix}\tilde v(k)-D_{ssx}\tilde w(k)=0$$
By defining
$$\tilde C_3=A^N\qquad \tilde D_{31}=A^{N-1}B_1$$
$$\tilde D_{32}=\begin{bmatrix}A^{N-1}B_2&A^{N-2}B_2&\cdots&B_2\end{bmatrix}-D_{ssx}\qquad
\tilde D_{33}=\begin{bmatrix}A^{N-1}B_3&A^{N-2}B_3&\cdots&B_3\end{bmatrix}$$
the state end-point constraint can be translated into the standard form of equation (4.20).


4.2.2 Inequality constraints

For simplicity we only consider constraints of the form
p(k) ≤ pmax
i.e. we only consider upper bounds. Lower bounds such as a constraint p(k) ≥ pmin are easily translated into upper bounds by considering
−p(k) ≤ −pmin
Consider a constraint
$$\hat\phi(k + j|k)\le\Phi(k + j),\qquad\text{for } j=0,\ldots,N \qquad (4.22)$$
where φ(k) is given by
$$\phi(k)=C_4\,x(k)+D_{41}\,e(k)+D_{42}\,w(k)+D_{43}\,v(k) \qquad (4.23)$$
Linear inequality constraints on the output y(k) and the state x(k) can easily be translated into linear constraints on the input vector ṽ(k). By using the results of chapter 3, values of φ(k + j) can be predicted and we obtain the inequality constraint:

$$\bar\phi(k)=\tilde C_4\,x(k)+\tilde D_{41}\,e(k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k) \qquad (4.24)$$
where
$$\bar\phi(k)=\begin{bmatrix}\hat\phi(k|k)\\ \hat\phi(k+1|k)\\ \vdots\\ \hat\phi(k+N-1|k)\end{bmatrix}\qquad
\bar\Phi(k)=\begin{bmatrix}\Phi(k)\\ \Phi(k+1)\\ \vdots\\ \Phi(k+N-1)\end{bmatrix}$$
$$\tilde C_4=\begin{bmatrix}C_4\\ C_4A\\ C_4A^2\\ \vdots\\ C_4A^{N-1}\end{bmatrix}\qquad
\tilde D_{41}=\begin{bmatrix}D_{41}\\ C_4B_1\\ C_4AB_1\\ \vdots\\ C_4A^{N-2}B_1\end{bmatrix}$$
$$\tilde D_{42}=\begin{bmatrix}D_{42}&0&\cdots&0&0\\ C_4B_2&D_{42}&\cdots&0&0\\ C_4AB_2&C_4B_2&\ddots&\vdots&\vdots\\ \vdots&\vdots&\ddots&D_{42}&0\\ C_4A^{N-2}B_2&C_4A^{N-3}B_2&\cdots&C_4B_2&D_{42}\end{bmatrix}$$
$$\tilde D_{43}=\begin{bmatrix}D_{43}&0&\cdots&0&0\\ C_4B_3&D_{43}&\cdots&0&0\\ C_4AB_3&C_4B_3&\ddots&\vdots&\vdots\\ \vdots&\vdots&\ddots&D_{43}&0\\ C_4A^{N-2}B_3&C_4A^{N-3}B_3&\cdots&C_4B_3&D_{43}\end{bmatrix}$$
In this way we can formulate:


Inequality constraints on the output (y(k + j) ≤ ymax):
φ(k) = y(k) = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)
for C4 = C1, D41 = D11, D42 = D12, D43 = D13 and Φ(k) = ymax.
Inequality constraints on the input (v(k + j) ≤ vmax):
φ(k) = v(k) = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)
for C4 = 0, D41 = 0, D42 = 0, D43 = I and Φ(k) = vmax.
Inequality constraints on the state (x(k + j) ≤ xmax):
φ(k) = x(k) = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)
for C4 = I, D41 = 0, D42 = 0, D43 = 0 and Φ(k) = xmax.
Problems occur when φ(k) in equation (4.22) depends on past values of the input. In that case, the prediction of φ(k + j) may depend on future values of the input when we use the derived prediction formulas. Two clear cases where this happens will be discussed next:
Constraint on the increment input signal for an IO model:
Δu(k + j|k) ≤ Δumax    for j = 0, . . . , N − 1
Consider the case of an IO model, so v(k) = u(k). Then by defining
$$\bar\Phi(k)=\begin{bmatrix}\Delta u_{max}+u(k-1)\\ \Delta u_{max}\\ \vdots\\ \Delta u_{max}\end{bmatrix}\qquad
\tilde D_{43}=\begin{bmatrix}I&0&\cdots&0&0\\ -I&I&\ddots&&\vdots\\ 0&-I&\ddots&0&0\\ \vdots&\ddots&\ddots&I&0\\ 0&\cdots&0&-I&I\end{bmatrix}\qquad
\tilde C_4=0\quad \tilde D_{41}=0\quad \tilde D_{42}=0$$
the input increment constraint can be translated into the standard form of equation (4.21).
Constraint on the input signal for an IIO model:
Assume that we have the following constraint:
u(k + j|k) ≤ umax    for j = 0, . . . , N − 1
Consider the case of an IIO model, so v(k) = Δu(k). We note:
$$u(k + j)=u(k-1)+\sum_{i=0}^{j}\Delta u(k + i)$$
Then by defining
$$\bar\Phi(k)=\begin{bmatrix}u_{max}-u(k-1)\\ u_{max}-u(k-1)\\ \vdots\\ u_{max}-u(k-1)\end{bmatrix}\qquad
\tilde D_{43}=\begin{bmatrix}I&0&\cdots&0\\ I&I&\ddots&\vdots\\ \vdots&&\ddots&0\\ I&I&\cdots&I\end{bmatrix}$$
the input constraint can be translated into the standard form of equation (4.21).
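The sketch below (added here; the function name is hypothetical) builds Φ̄(k) and D̃43 for this IIO input bound; the same lower-triangular structure reappears in Example 8 later in this chapter.

```python
import numpy as np

def iio_input_bound(N, nv, u_prev, u_max):
    """Upper bound u(k+j|k) <= u_max for an IIO model (v = Delta u).

    Returns (D43t, Phi_bar) such that D43t @ vtilde <= Phi_bar.
    u_prev : last applied input u(k-1), shape (nv,)
    """
    I = np.eye(nv)
    # lower block-triangular matrix of identities: row j sums Delta u(k..k+j)
    D43t = np.kron(np.tril(np.ones((N, N))), I)
    Phi_bar = np.tile(u_max - u_prev, N)
    return D43t, Phi_bar

D43t, Phi = iio_input_bound(N=4, nv=1, u_prev=np.array([0.3]), u_max=np.array([1.0]))
print(D43t)       # lower-triangular ones
print(Phi)        # [0.7 0.7 0.7 0.7]
```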

4.3 Structuring the input with orthogonal basis functions

On page 27 we introduced a parametrization of the input signal using a set of basis functions:
$$v(k+j|k)=\sum_{i=0}^{M}S_i(j)\,\alpha_i(k)$$
where the S_i(j) are the basis functions. A special set of basis functions are orthogonal basis functions [69], which have the property:
$$\sum_{k=-\infty}^{\infty}S_i(k)\,S_j(k)=\begin{cases}0&\text{for } i\ne j\\ 1&\text{for } i=j\end{cases}$$

Lemma 5 [68] Let Ab, Bb, Cb and Db be matrices such that
$$\begin{bmatrix}A_b&B_b\\ C_b&D_b\end{bmatrix}^T\begin{bmatrix}A_b&B_b\\ C_b&D_b\end{bmatrix}=I$$
(this means that Ab, Bb, Cb and Db are the balanced system matrices of a unitary function). Define
$$A_v=\begin{bmatrix}A_b&B_bC_b&B_bD_bC_b&\cdots&B_bD_b^{n_v-2}C_b\\ 0&A_b&B_bC_b&\cdots&B_bD_b^{n_v-3}C_b\\ \vdots&&\ddots&\ddots&\vdots\\ \vdots&&&\ddots&B_bC_b\\ 0&\cdots&\cdots&0&A_b\end{bmatrix}$$
$$C_v=\begin{bmatrix}C_b&D_bC_b&D_b^2C_b&\cdots&D_b^{n_v-1}C_b\end{bmatrix}$$
and let
$$S_i(k)=C_vA_v^k\,e_{i+1}$$


where
$$e_i=\begin{bmatrix}0&\cdots&0&1&0&\cdots&0\end{bmatrix}^T$$
with the 1 at the i-th position. Then the functions S_i(k), i = 0, . . . , M, form an orthogonal basis.
When we define the parameter vector α = [α1  α2  ⋯  αM]ᵀ, the input vector is given by the expression
$$v(k+j|k)=C_vA_v^j\,\alpha \qquad (4.25)$$
The number of degrees of freedom in the input signal is now equal to dim(α) = M, the dimension of the parameter vector. One of the main reasons to apply orthogonal basis functions is to obtain a reduction of the optimization problem by reducing the degrees of freedom, so usually M will be much smaller than the horizon N.
Consider an input sequence of N future values:
$$\tilde v(k)=\begin{bmatrix}v(k|k)\\ v(k+1|k)\\ \vdots\\ v(k+N|k)\end{bmatrix}=\begin{bmatrix}C_v\\ C_vA_v\\ \vdots\\ C_vA_v^{N}\end{bmatrix}\alpha=S_v\,\alpha$$
Suppose N is such that S_v has full column-rank, and a left-complement is given by S_v^{⊥} (so S_v^{⊥} S_v = 0, see appendix C for a definition). Now we find that
$$S_v^{\perp}\,\tilde v(k)=0 \qquad (4.26)$$
We observe that the orthogonal basis function parametrization can be described by the equality constraint (4.26).
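A left-complement can be computed from the SVD of S_v. The sketch below (an added illustration; the basis matrices A_v, C_v are assumed given, e.g. constructed as in Lemma 5) builds S_v and a matrix whose rows span its left null space, so that constraint (4.26) can be appended to the other equality constraints.

```python
import numpy as np

def basis_equality_constraint(Av, Cv, N):
    """Return (Sv, Sv_perp) with Sv_perp @ Sv = 0 and Sv of full column rank."""
    Sv = np.vstack([Cv @ np.linalg.matrix_power(Av, j) for j in range(N + 1)])
    U, s, Vt = np.linalg.svd(Sv)
    r = np.linalg.matrix_rank(Sv)
    Sv_perp = U[:, r:].T          # rows span the left null space of Sv
    return Sv, Sv_perp

# any vtilde = Sv @ alpha satisfies Sv_perp @ vtilde = 0
Av = np.array([[0.6]]); Cv = np.array([[1.0]])
Sv, Sv_perp = basis_equality_constraint(Av, Cv, N=4)
alpha = np.array([2.0])
print(np.allclose(Sv_perp @ (Sv @ alpha), 0))   # True
```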

4.4 The standard predictive control problem

From the previous section we learned that the GPC and LQPC performance index can be
formulated in a standard form (equation 4.1). In fact most common quadratic performance
indices can be given in this standard form. The same holds for many kinds of linear equality
and inequality constraints, which can be formulated as (4.21) or (4.20). Summarizing we
come to the formulation of the standard predictive control problem:
Definition 6 Standard Predictive Control Problem (SPCP)
Consider a system given by the state-space realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)    (4.27)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)    (4.28)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)    (4.29)


The goal is to find a controller v(k) = K(w̃, ỹ, k) such that the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k)=\tilde z^T(k)\,\bar\Gamma\,\tilde z(k)$$
is minimized subject to the constraints
$$\bar\psi(k)=\tilde C_3\,x(k)+\tilde D_{31}\,e(k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0 \qquad (4.30)$$
$$\bar\phi(k)=\tilde C_4\,x(k)+\tilde D_{41}\,e(k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k) \qquad (4.31)$$
where
$$\tilde z(k)=\begin{bmatrix}\hat z(k|k)\\ \hat z(k+1|k)\\ \vdots\\ \hat z(k+N-1|k)\end{bmatrix}\qquad
\tilde v(k)=\begin{bmatrix}v(k|k)\\ v(k+1|k)\\ \vdots\\ v(k+N-1|k)\end{bmatrix}\qquad
\tilde w(k)=\begin{bmatrix}w(k|k)\\ w(k+1|k)\\ \vdots\\ w(k+N-1|k)\end{bmatrix}$$

How the SPCP problem is solved will be discussed in the next chapter.

4.5 Examples

Example 7 : GPC as a standard predictive control problem

Consider the GPC problem on the IIO model of example 53 in Appendix D. The purpose is to minimize the GPC performance index with
P(q) = 1 − 1.4 q⁻¹ + 0.48 q⁻²
In state space form this becomes:
$$A_p=\begin{bmatrix}0&1\\0&0\end{bmatrix}\qquad B_p=\begin{bmatrix}-1.4\\0.48\end{bmatrix}\qquad C_p=\begin{bmatrix}1&0\end{bmatrix}\qquad D_p=1$$
Further we choose N = Nc = 4 and λ = 0.001.
Following the formulas in section 4.1, the GPC performance index is transformed into the standard performance index by choosing

1.4
0 1 1.4 1.4 0
 0.48
 0 0 0.48 0.48 0



D
B
Ap Bp Ci
p
H
1
0 0
1
1
0
=
=
A=
B1 =

0
Ai
Ki
1
0 0
0
0.7 1
0.2
0 0
0
0.1 0



B2 =

C1 =




0 0
Li 0

0 Ci

0
0
0
0.1
0.05

0
0
0
0
0

0 0 1 1 0


B3 =

0
Bi

0
0
0
1
0


 
Cp Ap Cp Bp Ci + Dp Ci Ai
0 1 0.4 0.3 1
=
C2 =
0
0
0 0
0
0 0
 


0.6
Cp Bp + Dp Ci Ki
=
D21 =
0
0



 
 

Dp Ci Li I
Dp Ci Bi
1
0.1 1
=
=
D23 =
D22 =
I
0.001
0
0
0
0
The prediction matrices are constructed according to chapter 3. We obtain:

0 1 0.4
0.3
1
0.6
0 0
0
0
0
0

0 0 0.08 0.19
0.18
0.3

0 0
0
0
0
0
21 =

D
C2 =
0 0 0.08 0.183 0.19
0.21

0 0
0
0
0
0

0 0 0.08 0.1891 0.183


0.225
0 0
0
0
0
0

22
D

23
D

0.1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.02 0
0.1 1
0
0
0
0
0
0
0
0
0
0
0
0
0.004 0 0.02 0
0.1 1 0
0
0
0
0
0
0
0
0
0
0.0088 0 0.004 0 0.02 0 0.1 1
0
0
0
0
0
0
0
0

1
0
0
0
0.001
0
0
0

0.3
1
0
0

0
0.001
0
0

0.19
0.3
1
0

0
0
0.001
0

0.183 0.19
0.3 1.000
0
0
0
0.001


Example 8 : Constrained GPC as a standard predictive control problem

Now consider example 7 for the case where we have a constraint on the input signal:
u(k) ≤ 1
and a control horizon constraint
Δu(k + Nc + j|k) = 0    for j = 0, . . . , N − Nc − 1
We will compute the matrices C̃3, C̃4, D̃31, D̃32, D̃33, D̃41, D̃42, D̃43 and the vector Φ̄(k) for N = 4 and Nc = 2.
Following the formulas of section 4.2, the vector Φ̄(k) and the matrix D̃43 become:
$$\bar\Phi(k)=\begin{bmatrix}-u(k-1)+u_{max}\\ -u(k-1)+u_{max}\\ -u(k-1)+u_{max}\\ -u(k-1)+u_{max}\end{bmatrix}=\begin{bmatrix}1-u(k-1)\\ 1-u(k-1)\\ 1-u(k-1)\\ 1-u(k-1)\end{bmatrix}\qquad
\tilde D_{43}=\begin{bmatrix}1&0&0&0\\ 1&1&0&0\\ 1&1&1&0\\ 1&1&1&1\end{bmatrix}$$
Further C̃4 = 0, D̃41 = 0 and D̃42 = 0.
Next we compute the matrices C̃3, D̃31, D̃32 and D̃33 corresponding to the control horizon constraint (N = 4, Nc = 2):
$$\tilde C_3=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix}\qquad
\tilde D_{31}=\begin{bmatrix}0\\0\end{bmatrix}\qquad
\tilde D_{32}=\begin{bmatrix}0&\cdots&0\\ 0&\cdots&0\end{bmatrix}\qquad
\tilde D_{33}=\begin{bmatrix}0&0&1&0\\ 0&0&0&1\end{bmatrix}$$

Chapter 5
Solving the standard predictive control problem

In this chapter we solve the standard predictive control problem as defined in section 4.4.
We consider the system given by the state-space realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)    (5.1)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)    (5.2)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)    (5.3)

Here y contains the measurements, so that at time k we have available all past inputs v(0), . . . , v(k − 1) and all past and current measurements y(0), . . . , y(k). z contains the variables that we want to control and that hence appear in the performance index.
The performance index and the constraints are given by
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.4)$$
$$\phantom{J(v,k)}=\tilde z^T(k)\,\bar\Gamma\,\tilde z(k) \qquad (5.5)$$
$$\bar\psi(k)=\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0 \qquad (5.6)$$
$$\bar\phi(k)=\tilde C_4\,\hat x(k|k)+\tilde D_{41}\,\hat e(k|k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k) \qquad (5.7)$$

Note that the performance index at time k contains estimates of z(k + j) based on all inputs v(0), . . . , v(k + j) as well as past and current measurements y(0), . . . , y(k). Clearly v(k), . . . , v(k + j) are not yet available and need to be chosen to minimize the performance criterion subject to the constraints. As described before, ψ̄(k) and φ̄(k) are stacked vectors containing predicted states or future inputs on which we want to impose either equality or inequality constraints. Predictions of future states and inputs are clearly a function of all past measurements, but it can be shown that they can be expressed in terms of the estimates x̂(k|k) and ê(k|k) of the state x(k) and the noise e(k), respectively. Implementation issues concerning x̂(k|k) and ê(k|k) will be discussed in Section 5.3.


The standard predictive control problem is to minimize (5.5) given (5.1), (5.2), (5.3) subject
to (5.6) and (5.7).
In this chapter we consider 3 subproblems:
1. The unconstrained SPCP: Minimize (5.5).
2. The equality constrained SPCP: Minimize (5.5) subject to (5.6).
3. The (full) SPCP: Minimize (5.5) subject to (5.6) and (5.7).

Assumptions
To be able to solve the standard predictive control problem, the following assumptions are made:
1. The matrix D̃33 has full row rank.
2. The matrix D̃23ᵀ Γ̄ D̃23 has full rank, where Γ̄ = diag(Γ(0), Γ(1), . . . , Γ(N − 1)) ∈ ℝ^{nzN × nzN}.
3. The pair (A, B3) is stabilizable.
Assumption 1 basically guarantees that we have only equality constraints on the input. If the system is subject to noise it is only in rare cases possible to guarantee that equality constraints on the state are satisfied. Assumption 2 makes sure that the optimization problem is strictly convex, which guarantees a unique solution to the optimization problem. Assumption 3 is necessary to be able to control all unstable modes of the system.

5.1 The finite horizon SPCP

5.1.1 Unconstrained standard predictive control problem

In this section we consider the problem of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.8)$$
$$\phantom{J(v,k)}=\tilde z^T(k)\,\bar\Gamma\,\tilde z(k) \qquad (5.9)$$
for the system as given in (5.1), (5.2) and (5.3), and without equality or inequality constraints. In this section we consider the case for finite N. The infinite horizon case will be treated in section 5.2.


The performance index, given in (5.9), is minimized for each time instant k, and the optimal future control sequence ṽ(k) is computed. We use the prediction law
$$\tilde z(k)=\tilde C_2\,\hat x(k|k)+\tilde D_{21}\,\hat e(k|k)+\tilde D_{22}\,\tilde w(k)+\tilde D_{23}\,\tilde v(k) \qquad (5.10)$$
$$\phantom{\tilde z(k)}=\tilde z_0(k)+\tilde D_{23}\,\tilde v(k) \qquad (5.11)$$
as derived in chapter 3, where
$$\tilde z_0(k)=\tilde C_2\,\hat x(k|k)+\tilde D_{21}\,\hat e(k|k)+\tilde D_{22}\,\tilde w(k)$$
is the free-response signal given in equation (3.7). Without constraints, the problem becomes a standard least squares problem, and the solution of the predictive control problem can be computed analytically. We assume the reference trajectory w(k + j) to be known over the whole prediction horizon j = 0, . . . , N. Consider the performance index (5.9). Now substitute (5.11) and define the matrix H, vector f(k) and scalar c(k) as
$$H=2\,\tilde D_{23}^T\bar\Gamma\tilde D_{23},\qquad f(k)=2\,\tilde D_{23}^T\bar\Gamma\,\tilde z_0(k)\qquad\text{and}\qquad c(k)=\tilde z_0^T(k)\,\bar\Gamma\,\tilde z_0(k)$$
We obtain
$$J(\tilde v,k)=\tilde v^T(k)\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde v(k)+2\,\tilde v^T(k)\tilde D_{23}^T\bar\Gamma\tilde z_0(k)+\tilde z_0^T(k)\bar\Gamma\tilde z_0(k)=\tfrac{1}{2}\,\tilde v^T(k)H\tilde v(k)+\tilde v^T(k)f(k)+c(k)$$
The minimization of J(ṽ, k) has become a linear algebra problem. It can be solved by setting the gradient of J to zero:
$$\frac{\partial J}{\partial\tilde v}=H\,\tilde v(k)+f(k)=0$$
By assumption 2, H is invertible and hence the solution ṽ(k) is given by
$$\tilde v(k)=-H^{-1}f(k)=-\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\,\tilde z_0(k)
=-\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\bigl(\tilde C_2\hat x(k|k)+\tilde D_{21}\hat e(k|k)+\tilde D_{22}\tilde w(k)\bigr)$$
Because we use the receding horizon principle, only the first computed change in the control signal is implemented, and at time k + 1 the computation is repeated with the horizon moved one time interval. This means that at time k the control signal is given by
$$v(k|k)=\begin{bmatrix}I&0&\cdots&0\end{bmatrix}\tilde v(k)=E_v\,\tilde v(k)$$
We obtain
$$v(k|k)=E_v\,\tilde v(k)=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\bigl(\tilde C_2\hat x(k|k)+\tilde D_{21}\hat e(k|k)+\tilde D_{22}\tilde w(k)\bigr)=F\,\hat x(k|k)+D_e\,\hat e(k|k)+D_w\,\tilde w(k)$$



where
$$F=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde C_2$$
$$D_e=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde D_{21}$$
$$D_w=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde D_{22}$$
$$E_v=\begin{bmatrix}I&0&\cdots&0\end{bmatrix}$$
with E_v such that v(k|k) = E_v ṽ(k).
The results are summarized in the following theorem (Kinnaert [34], Lee et al. [38], Van den Boom & de Vries [67]):
Theorem 9
Consider system (5.1) - (5.3). The unconstrained (finite horizon) standard predictive control problem of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.12)$$
is solved by the control law
$$v(k)=F\,\hat x(k|k)+D_e\,\hat e(k|k)+D_w\,\tilde w(k) \qquad (5.13)$$
where
$$F=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde C_2 \qquad (5.14)$$
$$D_e=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde D_{21} \qquad (5.15)$$
$$D_w=-E_v\bigl(\tilde D_{23}^T\bar\Gamma\tilde D_{23}\bigr)^{-1}\tilde D_{23}^T\bar\Gamma\tilde D_{22} \qquad (5.16)$$
$$E_v=\begin{bmatrix}I&0&\cdots&0\end{bmatrix} \qquad (5.17)$$

Note that for the computation of the optimal input signal v(k), estimates x̂(k|k) and ê(k|k) of the state x(k) and the noise signal e(k), respectively, have to be available. This can easily be done with an observer and will be discussed in section 5.3.
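A direct numerical transcription of Theorem 9 is sketched below (an added illustration; it reuses the hypothetical build_prediction_matrices helper from Chapter 4 for the blocks C̃2, D̃21, D̃22, D̃23, and Γ̄ is assumed given as a dense matrix).

```python
import numpy as np

def unconstrained_spcp_gains(C2t, D21t, D22t, D23t, Gamma_bar, nv):
    """Gains of the unconstrained finite-horizon SPCP (Theorem 9):
    v(k) = F x^(k|k) + De e^(k|k) + Dw wtilde(k)."""
    Ev = np.hstack([np.eye(nv), np.zeros((nv, D23t.shape[1] - nv))])
    M = np.linalg.solve(D23t.T @ Gamma_bar @ D23t,
                        D23t.T @ Gamma_bar)      # (D23' Gam D23)^{-1} D23' Gam
    F  = -Ev @ M @ C2t
    De = -Ev @ M @ D21t
    Dw = -Ev @ M @ D22t
    return F, De, Dw
```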

5.1.2 Equality constrained standard predictive control problem

In this section we consider the problem of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.18)$$
for the system as given in (5.1), (5.2) and (5.3), and with equality constraints (but without inequality constraints). In this section we consider the case for finite N. In section 5.2.4 we discuss the case where N = ∞ and a finite control horizon acts as the equality constraint.

Consider the equality constrained standard problem
$$\min_{\tilde v}\ \tilde z^T(k)\,\bar\Gamma\,\tilde z(k)$$
subject to the constraint:
$$\bar\psi(k)=\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0$$
We will solve this problem by elimination of the equality constraint. We therefore give the following lemma:
Lemma 10
Consider the equality constraint
$$C\,\theta=\gamma \qquad (5.19)$$
for a full-row rank matrix C ∈ ℝ^{m×n}, γ ∈ ℝ^{m×1}, θ ∈ ℝ^{n×1}, and m ≤ n. Choose C^r to be a right-inverse of C and C^{r⊥} to be a right-complement of C. Then all θ satisfying (5.19) are given by
$$\theta=C^r\,\gamma+C^{r\perp}\,\mu$$
where μ ∈ ℝ^{(n−m)×1} is arbitrary.
□ End Lemma

Proof of lemma 10:
From appendix C we know that for
$$C=U\begin{bmatrix}\Sigma&0\end{bmatrix}\begin{bmatrix}V_1^T\\V_2^T\end{bmatrix}$$
the right-inverse C^r and right-complement C^{r⊥} are defined as
$$C^r=V_1\Sigma^{-1}U^T\qquad C^{r\perp}=V_2$$
Choose vectors ξ and μ as follows:
$$\theta=\begin{bmatrix}V_1&V_2\end{bmatrix}\begin{bmatrix}\xi\\ \mu\end{bmatrix}\qquad\text{so}\qquad \begin{bmatrix}\xi\\ \mu\end{bmatrix}=\begin{bmatrix}V_1^T\\ V_2^T\end{bmatrix}\theta$$
then there must hold
$$C\,\theta=U\begin{bmatrix}\Sigma&0\end{bmatrix}\begin{bmatrix}\xi\\ \mu\end{bmatrix}=U\Sigma\,\xi=\gamma$$
and so we have to choose ξ = Σ⁻¹Uᵀγ, while μ can be any vector (with the proper dimension). Therefore, all θ satisfying (5.19) are given by
$$\theta=V_1\Sigma^{-1}U^T\gamma+V_2\,\mu=C^r\,\gamma+C^{r\perp}\,\mu$$
□ End Proof
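Numerically, the right-inverse and right-complement used in Lemma 10 follow directly from an SVD, as in the sketch below (an added illustration; the function name is hypothetical).

```python
import numpy as np

def right_inverse_and_complement(C):
    """For full row rank C (m x n, m <= n):
    C @ Cr = I  and  C @ Crc = 0, with [Cr Crc] square and invertible."""
    U, s, Vt = np.linalg.svd(C)
    m = C.shape[0]
    V1, V2 = Vt[:m, :].T, Vt[m:, :].T
    Cr  = V1 @ np.diag(1.0 / s) @ U.T      # right-inverse
    Crc = V2                               # right-complement
    return Cr, Crc

C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
Cr, Crc = right_inverse_and_complement(C)
print(np.allclose(C @ Cr, np.eye(2)), np.allclose(C @ Crc, 0))   # True True
```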
Define D̃33^r and D̃33^{r⊥} as in the above lemma. Then all ṽ that satisfy the equality constraint ψ̄(k) = 0 are given by:
$$\tilde v(k)=-\tilde D_{33}^r\bigl(\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)\bigr)+\tilde D_{33}^{r\perp}\,\tilde\mu(k)=\tilde v_E(k)+\tilde D_{33}^{r\perp}\,\tilde\mu(k)$$
Note that by choosing ṽ(k) = ṽ_E(k) + D̃33^{r⊥} μ̃(k) the equality constraint is always satisfied, while all remaining freedom is still in the vector μ̃(k), which is of lower dimension. So by this choice the problem has been reduced in order while the equality constraint is satisfied for all μ̃(k).
Substitution of ṽ(k) = ṽ_E(k) + D̃33^{r⊥} μ̃(k) in (5.10) gives:
$$\tilde z(k)=\tilde C_2\hat x(k|k)+\tilde D_{21}\hat e(k|k)+\tilde D_{22}\tilde w(k)+\tilde D_{23}\tilde v(k)$$
$$\phantom{\tilde z(k)}=(\tilde C_2-\tilde D_{23}\tilde D_{33}^r\tilde C_3)\hat x(k|k)+(\tilde D_{21}-\tilde D_{23}\tilde D_{33}^r\tilde D_{31})\hat e(k|k)+(\tilde D_{22}-\tilde D_{23}\tilde D_{33}^r\tilde D_{32})\tilde w(k)+\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k)$$
$$\phantom{\tilde z(k)}=\tilde z_E(k)+\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k) \qquad (5.20)$$
where z̃_E is given by:
$$\tilde z_E(k)=(\tilde C_2-\tilde D_{23}\tilde D_{33}^r\tilde C_3)\hat x(k|k)+(\tilde D_{21}-\tilde D_{23}\tilde D_{33}^r\tilde D_{31})\hat e(k|k)+(\tilde D_{22}-\tilde D_{23}\tilde D_{33}^r\tilde D_{32})\tilde w(k) \qquad (5.21)$$
The performance index is given by
$$J(k)=\tilde z^T(k)\bar\Gamma\tilde z(k)=\bigl(\tilde z_E(k)+\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k)\bigr)^T\bar\Gamma\bigl(\tilde z_E(k)+\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k)\bigr)$$
$$\phantom{J(k)}=\tilde\mu^T(k)(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k)+2\,\tilde z_E^T(k)\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\tilde\mu(k)+\tilde z_E^T(k)\bar\Gamma\tilde z_E(k) \qquad (5.22)$$
$$\phantom{J(k)}=\tfrac{1}{2}\tilde\mu^T(k)\,H\,\tilde\mu(k)+f^T(k)\,\tilde\mu(k)+c(k) \qquad (5.23)$$
where the matrices H, f and c are given by
$$H=2\,(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp},\qquad
f(k)=2\,(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\,\tilde z_E(k),\qquad
c(k)=\tilde z_E^T(k)\,\bar\Gamma\,\tilde z_E(k)$$
Note that by assumption 2 the matrix D̃23ᵀ Γ̄ D̃23 is invertible. By construction, the right-complement D̃33^{r⊥} is injective and hence H is invertible.
When inequality constraints are absent, we are left with an unconstrained optimization
$$\min_{\tilde\mu}\ \tfrac{1}{2}\tilde\mu^T H\tilde\mu+f^T\tilde\mu$$
which has an analytical solution. In the same way as in section 5.1.1, we derive:
$$\frac{\partial J}{\partial\tilde\mu}=H\,\tilde\mu(k)+f(k)=0$$
and so
$$\tilde\mu(k)=-H^{-1}f(k)=-\bigl((\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\bigr)^{-1}(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\,\tilde z_E(k)=-\Lambda\,\tilde z_E(k) \qquad (5.24)$$
$$\phantom{\tilde\mu(k)}=\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)\hat x(k|k)+\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})\hat e(k|k)+\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})\tilde w(k) \qquad (5.25)$$
where
$$\Lambda=\bigl((\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\bigr)^{-1}(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma$$
Substitution gives:
$$\tilde v(k)=-\tilde D_{33}^r\bigl(\tilde C_3\hat x(k|k)+\tilde D_{31}\hat e(k|k)+\tilde D_{32}\tilde w(k)\bigr)+\tilde D_{33}^{r\perp}\tilde\mu(k)$$
The control signal can now be written as:
$$v(k|k)=E_v\,\tilde v(k)=F\,\hat x(k|k)+D_e\,\hat e(k|k)+D_w\,\tilde w(k)$$
where
$$F=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)-\tilde D_{33}^r\tilde C_3\bigr)$$
$$D_e=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})-\tilde D_{33}^r\tilde D_{31}\bigr)$$
$$D_w=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})-\tilde D_{33}^r\tilde D_{32}\bigr)$$


This means that also for the equality constrained case, the optimal predictive controller is
linear and time-invariant.
The control law has the same form as for the unconstrained case in the previous section
(of course for the alternative choices of F , Dw and De ).
The results are summarized in the following theorem:
Theorem 11
Consider system (5.1)-(5.3). The equality constrained (finite horizon) standard predictive control problem of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.26)$$
subject to the equality constraint
$$\bar\psi(k)=\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0$$
is solved by the control law
$$v(k)=F\,\hat x(k|k)+D_e\,\hat e(k|k)+D_w\,\tilde w(k) \qquad (5.27)$$
where
$$F=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)-\tilde D_{33}^r\tilde C_3\bigr)$$
$$D_e=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})-\tilde D_{33}^r\tilde D_{31}\bigr)$$
$$D_w=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})-\tilde D_{33}^r\tilde D_{32}\bigr)$$
$$E_v=\begin{bmatrix}I&0&\cdots&0\end{bmatrix}$$
$$\Lambda=\bigl((\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\bigr)^{-1}(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma$$
Again, implementation issues are discussed in section 5.3.

5.1.3 Full standard predictive control problem

In this section we consider the problem of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.28)$$
for the system as given in (5.1), (5.2) and (5.3), with both equality and inequality constraints. In this section we consider the case for finite N. The infinite horizon case is treated in Sections 5.2.4 and 5.2.5.

Consider the constrained standard problem
$$\min_{\tilde v}\ \tilde z^T(k)\,\bar\Gamma\,\tilde z(k)$$
subject to the constraints:
$$\bar\psi(k)=\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0 \qquad (5.29)$$
$$\bar\phi(k)=\tilde C_4\,\hat x(k|k)+\tilde D_{41}\,\hat e(k|k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k) \qquad (5.30)$$
Now v(k) is chosen as in the previous section




r

+D
v(k) = D33 C3 x(k|k) + D31 e(k|k) + D32 w(k)
33 (k)
r
= vE (k) + D
33 (k)
This results in

31 e(k|k) + D
32 w(k)
33 v(k) =

+D
(k)
= C3 x(k|k) + D
r
31 e(k|k) + D
32 w(k)
33 vE (k) + D
33 D
= C3 x(k|k) + D

+D
33 (k) = 0
and so the equality constraint is eliminated. The optimization vector
(k) can now be
written as
I (k)

(k) =
E (k) +
1
= H f (k) +
I (k)

(5.31)
(5.32)
(5.33)

where
E (k) = H 1 f (k) is the equality constrained solution given in the previous section, and
i (k) is an additional term to take the inequality constraints into account. Now
consider performance index (5.23) from the previous section. Substitution of (k) =
H 1 f (k) +
I (k) gives us:
1 T
(k) + f T (k)
(k) + c(k) =

(k) H
2
T 


1
1
1
=
I (k) H H f (k) +
I (k)
H f (k) +
2


I (k) + c(k) =
+f T (k) H 1 f (k) +
1 T

(k) H
I (k) f T (k)H 1H
I (k)
2 I
1
I (k) + c(k)
+ f T (k)H 1 H H 1 f (k) f T (k)H 1 f (k) + f T (k)
2
1 T
1
=
I (k) f T (k)H 1 f (k) + c(k)

I (k) H
2
2
=

64 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


Using equations (5.24)-(5.25), the vector μ̃(k) can be written as
$$\tilde\mu(k)=\tilde\mu_E(k)+\tilde\mu_I(k)=-\Lambda\,\tilde z_E(k)+\tilde\mu_I(k)$$
$$\phantom{\tilde\mu(k)}=\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)\hat x(k|k)+\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})\hat e(k|k)+\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})\tilde w(k)+\tilde\mu_I(k) \qquad (5.34)$$

Substitution of ṽ(k) = ṽ_E(k) + D̃33^{r⊥}μ̃(k) = ṽ_E(k) + D̃33^{r⊥}μ̃_E(k) + D̃33^{r⊥}μ̃_I(k) in inequality constraint (5.30) gives:
$$\bar\phi(k)=(\tilde C_4-\tilde D_{43}\tilde D_{33}^r\tilde C_3)\hat x(k|k)+(\tilde D_{41}-\tilde D_{43}\tilde D_{33}^r\tilde D_{31})\hat e(k|k)+(\tilde D_{42}-\tilde D_{43}\tilde D_{33}^r\tilde D_{32})\tilde w(k)+\tilde D_{43}\tilde D_{33}^{r\perp}\tilde\mu_E(k)+\tilde D_{43}\tilde D_{33}^{r\perp}\tilde\mu_I(k)$$
$$\phantom{\bar\phi(k)}=\bar\phi_E(k)+\tilde D_{43}\tilde D_{33}^{r\perp}\tilde\mu_I(k)$$
Now let
$$A_\phi=\tilde D_{43}\tilde D_{33}^{r\perp},\qquad b_\phi(k)=\bar\phi_E(k)-\bar\Phi(k)$$
then we obtain the quadratic programming problem of minimizing
$$\min_{\tilde\mu_I(k)}\ \tfrac{1}{2}\,\tilde\mu_I^T(k)\,H\,\tilde\mu_I(k)$$
subject to:
$$A_\phi\,\tilde\mu_I(k)+b_\phi(k)\le 0$$
where H is as in the previous section and we dropped the term −½fᵀ(k)H⁻¹f(k) + c(k), because it does not depend on μ̃_I(k) and so it has no influence on the minimization. The above optimization problem, which has to be solved numerically and on-line, is a quadratic programming problem and can be solved in a finite number of iterative steps using reliable and fast algorithms (see appendix A).
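The resulting QP can be handed to any standard solver. A minimal sketch (added here; the choice of scipy's SLSQP is just an illustration, not the method prescribed by the notes):

```python
import numpy as np
from scipy.optimize import minimize

def solve_inequality_qp(H, A_phi, b_phi):
    """min 0.5 mu' H mu  s.t.  A_phi mu + b_phi <= 0  (returns mu_I)."""
    n = H.shape[0]
    res = minimize(
        fun=lambda mu: 0.5 * mu @ H @ mu,
        x0=np.zeros(n),
        jac=lambda mu: H @ mu,
        constraints=[{"type": "ineq",              # SLSQP wants g(mu) >= 0
                      "fun": lambda mu: -(A_phi @ mu + b_phi),
                      "jac": lambda mu: -A_phi}],
        method="SLSQP",
    )
    return res.x

# tiny example: minimize 0.5*2*mu^2 subject to mu >= 1 (i.e. -mu + 1 <= 0)
print(solve_inequality_qp(np.array([[2.0]]), np.array([[-1.0]]), np.array([1.0])))
```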
The control law of the full SPCP is nonlinear, and cannot be expressed in a linear form as
the unconstrained SPCP or equality constrained SPCP.
Theorem 12
Consider system (5.1) - (5.3) and the constrained (finite horizon) standard predictive control problem (CSPCP) of minimizing the performance index
$$J(v,k)=\sum_{j=0}^{N-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k) \qquad (5.35)$$
subject to the equality and inequality constraints
$$\bar\psi(k)=\tilde C_3\,\hat x(k|k)+\tilde D_{31}\,\hat e(k|k)+\tilde D_{32}\,\tilde w(k)+\tilde D_{33}\,\tilde v(k)=0$$
$$\bar\phi(k)=\tilde C_4\,\hat x(k|k)+\tilde D_{41}\,\hat e(k|k)+\tilde D_{42}\,\tilde w(k)+\tilde D_{43}\,\tilde v(k)\le\bar\Phi(k)$$
Define:
$$A_\phi=\tilde D_{43}\tilde D_{33}^{r\perp}$$
$$b_\phi(k)=\bigl(\tilde C_4-\tilde D_{43}\tilde D_{33}^r\tilde C_3+\tilde D_{43}\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)\bigr)\hat x(k|k)+\bigl(\tilde D_{41}-\tilde D_{43}\tilde D_{33}^r\tilde D_{31}+\tilde D_{43}\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})\bigr)\hat e(k|k)$$
$$\phantom{b_\phi(k)=}+\bigl(\tilde D_{42}-\tilde D_{43}\tilde D_{33}^r\tilde D_{32}+\tilde D_{43}\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})\bigr)\tilde w(k)-\bar\Phi(k)$$
$$H=2\,(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}$$
$$F=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde C_3-\tilde C_2)-\tilde D_{33}^r\tilde C_3\bigr)$$
$$D_e=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{31}-\tilde D_{21})-\tilde D_{33}^r\tilde D_{31}\bigr)$$
$$D_w=E_v\bigl(\tilde D_{33}^{r\perp}\Lambda(\tilde D_{23}\tilde D_{33}^r\tilde D_{32}-\tilde D_{22})-\tilde D_{33}^r\tilde D_{32}\bigr)$$
$$D=E_v\,\tilde D_{33}^{r\perp}$$
$$\Lambda=\bigl((\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma\tilde D_{23}\tilde D_{33}^{r\perp}\bigr)^{-1}(\tilde D_{33}^{r\perp})^T\tilde D_{23}^T\bar\Gamma$$
$$E_v=\begin{bmatrix}I&0&\cdots&0\end{bmatrix}$$
The optimal control law that solves this CSPCP problem is given by:
$$v(k)=F\,\hat x(k|k)+D_e\,\hat e(k|k)+D_w\,\tilde w(k)+D\,\tilde\mu_I(k)$$
where μ̃_I(k) is the solution of the quadratic programming problem
$$\min_{\tilde\mu_I(k)}\ \tfrac{1}{2}\,\tilde\mu_I^T(k)\,H\,\tilde\mu_I(k)$$
subject to:
$$A_\phi\,\tilde\mu_I(k)+b_\phi(k)\le 0$$

5.2 Infinite horizon SPCP

In this section we consider the standard predictive control problem for the case where the prediction horizon is infinite (N = ∞) [59].

5.2.1 Steady-state behavior

In infinite horizon predictive control it is important to know how the signals behave for k → ∞. We therefore define the steady state of a system in the standard predictive control formulation:
Definition 13
The quadruple (vss, xss, wss, zss) is called a steady state if the following equations are satisfied:
xss = A xss + B2 wss + B3 vss    (5.36)
zss = C2 xss + D22 wss + D23 vss    (5.37)
To be able to solve the predictive control problem it is important that for every possible wss the quadruple (vss, xss, wss, zss) exists. For steady-state error free control the performance signal zss should be equal to zero for every possible wss (this is also necessary to be able to solve the infinite horizon predictive control problem). We therefore look at the existence of a steady state
(vss, xss, wss, zss) = (vss, xss, wss, 0)
for every possible wss.
Consider the matrix
$$M_{ss}=\begin{bmatrix}I-A&-B_3\\ C_2&D_{23}\end{bmatrix} \qquad (5.38)$$
then (5.36) and (5.37) can be rewritten as:
$$M_{ss}\begin{bmatrix}x_{ss}\\ v_{ss}\end{bmatrix}=\begin{bmatrix}B_2\\ -D_{22}\end{bmatrix}w_{ss}$$
Let the singular value decomposition be given by
$$M_{ss}=\begin{bmatrix}U_{M1}&U_{M2}\end{bmatrix}\begin{bmatrix}\Sigma_M&0\\ 0&0\end{bmatrix}\begin{bmatrix}V_{M1}^T\\ V_{M2}^T\end{bmatrix}$$
where Σ_M = diag(σ1, . . . , σm) with σi > 0.
A necessary and sufficient condition for the existence of xss and vss is that
$$U_{M2}^T\begin{bmatrix}B_2\\ -D_{22}\end{bmatrix}=0 \qquad (5.39)$$
The solution is given by:
$$\begin{bmatrix}x_{ss}\\ v_{ss}\end{bmatrix}=V_{M1}\Sigma_M^{-1}U_{M1}^T\begin{bmatrix}B_2\\ -D_{22}\end{bmatrix}w_{ss}+V_{M2}\,\eta_M \qquad (5.40)$$


where η_M is an arbitrary vector with the appropriate dimension. Substitution of (5.40) in (5.36) and (5.37) shows that the quadruple (vss, xss, wss, zss) = (vss, xss, wss, 0) is a steady state. The solution (xss, vss) for a given wss is unique if V_{M2} is empty or, equivalently, if Mss has full column rank.
In the section on infinite horizon MPC we will assume that zss = 0 and that xss and vss exist, are unique, and related to wss by
$$\begin{bmatrix}x_{ss}\\ v_{ss}\end{bmatrix}=V_{M1}\Sigma_M^{-1}U_{M1}^T\begin{bmatrix}B_2\\ -D_{22}\end{bmatrix}w_{ss} \qquad (5.41)$$
Usually, we assume a linear relation between wss and w̃(k):
$$w_{ss}=D_{ssw}\,\tilde w(k)$$
Sometimes wss is found by some extrapolation, but often we choose w to be constant beyond the prediction horizon, so wss = w(k + j) = w(k + N − 1) for j ≥ N. The matrix D_ssw is then given by D_ssw = [0 ⋯ 0 I], such that wss = w(k + N − 1) = D_ssw w̃(k).
Now define the matrices
$$\begin{bmatrix}D_{ssx}\\ D_{ssv}\end{bmatrix}=V_{M1}\Sigma_M^{-1}U_{M1}^T\begin{bmatrix}B_2\\ -D_{22}\end{bmatrix}D_{ssw} \qquad (5.42)$$
and
$$D_{ssy}=C_1D_{ssx}+D_{12}D_{ssw}+D_{13}D_{ssv} \qquad (5.43)$$
then a direct relation between (xss, vss, wss, yss) and w̃(k) is given by
$$\begin{bmatrix}x_{ss}\\ v_{ss}\\ w_{ss}\\ y_{ss}\end{bmatrix}=\begin{bmatrix}D_{ssx}\\ D_{ssv}\\ D_{ssw}\\ D_{ssy}\end{bmatrix}\tilde w(k) \qquad (5.44)$$
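A numerical version of (5.39)-(5.42) is sketched below (an added illustration; the rank tolerance is an assumption). It checks the existence of an error-free steady state and returns D_ssx and D_ssv.

```python
import numpy as np

def steady_state_maps(A, B2, B3, C2, D22, D23, D_ssw, tol=1e-9):
    """Return (D_ssx, D_ssv) such that x_ss = D_ssx wtilde, v_ss = D_ssv wtilde."""
    n = A.shape[0]
    Mss = np.block([[np.eye(n) - A, -B3],
                    [C2,            D23]])
    rhs = np.vstack([B2, -D22])
    U, s, Vt = np.linalg.svd(Mss)
    r = int(np.sum(s > tol * s[0]))
    # existence of a zero-offset steady state for every w_ss (eq. 5.39)
    if not np.allclose(U[:, r:].T @ rhs, 0.0, atol=1e-8):
        raise ValueError("no error-free steady state exists for every w_ss")
    sol = Vt[:r, :].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T @ rhs @ D_ssw
    return sol[:n, :], sol[n:, :]       # D_ssx, D_ssv
```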

Remark 14 : Sometimes it is interesting to consider cyclic or periodic steady-state behavior. This means that the signals x, w and v do not become constant for k → ∞, but converge to some cyclic behavior. For a cyclic steady state we assume there is a cycle period P such that (vss(k + P), xss(k + P), wss(k + P), zss(k + P)) = (vss(k), xss(k), wss(k), zss(k)). For infinite horizon predictive control, we again make the assumption that zss = 0. In this way, we can isolate the steady-state behavior from the predictive control problem without affecting the performance index.

5.2.2 Structuring the input signal for infinite horizon MPC

For infinite horizon predictive control we encounter an optimization problem with an infinite number of degrees of freedom, parameterized by v(k + j|k), j = 1, . . . , ∞. For the unconstrained case this is not a problem, and we can find the optimal predictive controller using theory on Linear Quadratic Gaussian (LQG) controllers (Strejc, [63]). This is given in section 5.2.3.
In the constrained case, we can tackle the problem by giving the input signal only a limited number of degrees of freedom. We can do that by introducing the so-called switching horizon, denoted by Ns. This Ns has to be chosen such that all degrees of freedom are in the first Ns samples of the input signal, i.e. in v(k + j|k), j = 0, . . . , Ns − 1. In this section we therefore adopt the input sequence vector
$$\tilde v(k)=\begin{bmatrix}v(k|k)\\ v(k+1|k)\\ \vdots\\ v(k+N_s-1|k)\end{bmatrix}$$
which contains all degrees of freedom at sample time k.
We will consider two ways to expand the input signal beyond the switching horizon:
Control horizon: By taking Ns = Nc, we choose v(k + j|k) = vss for j ≥ Ns. (Note that vss = 0 for IIO models.)
Basis functions: We can choose v(k + j|k) for j ≥ Ns as a finite sum of (orthogonal) basis functions. The future input v(k + j|k) can be formulated using equation (4.25):
$$v(k + j|k)=C_vA_v^j\,\alpha(k|k)+v_{ss}$$
where we have added an extra term vss to take the steady-state value into account. Following the definitions from section 4.3, we derive the relation
$$\tilde v(k)=\begin{bmatrix}v(k|k)\\ v(k+1|k)\\ \vdots\\ v(k+N_s-1|k)\end{bmatrix}=\begin{bmatrix}C_v\\ C_vA_v\\ \vdots\\ C_vA_v^{N_s-1}\end{bmatrix}\alpha(k|k)+\begin{bmatrix}v_{ss}\\ v_{ss}\\ \vdots\\ v_{ss}\end{bmatrix}=S_v\,\alpha(k|k)+\tilde v_{ss}$$
Suppose Ns is such that S_v has full column-rank, and a left-complement is given by S_v^{⊥} (so S_v^{⊥} S_v = 0, see appendix C for a definition). Now we find that
$$S_v^{\perp}\bigl(\tilde v(k)-\tilde v_{ss}\bigr)=0$$
or, by defining
$$\tilde v_{ss}=\begin{bmatrix}v_{ss}\\ v_{ss}\\ \vdots\\ v_{ss}\end{bmatrix}=\begin{bmatrix}D_{ssv}\\ D_{ssv}\\ \vdots\\ D_{ssv}\end{bmatrix}\tilde w(k)=\tilde D_{ssv}\,\tilde w(k) \qquad (5.45)$$
we obtain
$$S_v^{\perp}\,\tilde v(k)-S_v^{\perp}\tilde D_{ssv}\,\tilde w(k)=0 \qquad (5.46)$$
We observe that the orthogonal basis function parametrization can be described by equality constraint (5.46).
By defining $B_v=A_v^{N_s}S_v^{l}$ (where $S_v^{l}$ is the left inverse of S_v, see appendix C for a definition), we can express α in terms of ṽ:
$$A_v^{N_s}\,\alpha(k|k)=B_v\,\tilde v(k)$$
and the future input signals beyond Ns can be expressed in terms of ṽ(k):
$$v(k + j|k)=C_vA_v^j\,\alpha(k|k)+v_{ss}=C_vA_v^{j-N_s}B_v\,\tilde v(k)+v_{ss} \qquad (5.47)$$
The description with (orthogonal) basis functions gives us that, for the constrained case, the input signal beyond Ns can be described as follows:
$$v(k + j|k)=C_vA_v^{j-N_s}B_v\,\tilde v(k)+F_v\bigl(\hat x(k + j|k)-x_{ss}\bigr)+v_{ss}\qquad j\ge N_s \qquad (5.48)$$
or
$$x_v(k + N_s|k)=B_v\,\tilde v(k) \qquad (5.49)$$
$$x_v(k+j+1|k)=A_v\,x_v(k + j|k)\qquad j\ge N_s \qquad (5.50)$$
$$v(k + j|k)=C_v\,x_v(k + j|k)+F_v\bigl(x(k + j|k)-x_{ss}\bigr)+v_{ss}\qquad j\ge N_s \qquad (5.51)$$
where x_v is an additional state, describing the dynamics of the (orthogonal) basis function beyond the switching horizon. Finally, we should note that ṽ(k) cannot be chosen arbitrarily but is constrained by the equality constraint (5.46).

5.2.3 Unconstrained infinite horizon SPCP

In this section we consider the unconstrained infinite horizon standard predictive control problem. To be able to tackle this problem, we consider a constant external signal w, a zero steady-state performance signal and a weighting matrix Γ(j) equal to identity, so
w(k + j) = wss    for all j ≥ 0
zss = 0
Γ(j) = I    for all j ≥ 0
Define
x̄(k + j|k) = x(k + j|k) − xss    for all j ≥ 0
v̄(k + j|k) = v(k + j|k) − vss    for all j ≥ 0
then it follows that
$$\bar x(k+j+1|k)=x(k+j+1|k)-x_{ss}=A\,x(k + j|k)+B_1e(k + j|k)+B_2w(k + j)+B_3v(k + j|k)-x_{ss}=A\,\bar x(k + j|k)+B_1e(k + j|k)+B_3\bar v(k + j|k)$$
$$\hat z(k + j|k)=C_2x(k + j|k)+D_{21}e(k + j|k)+D_{22}w(k + j)+D_{23}v(k + j|k)=C_2\bar x(k + j|k)+D_{21}e(k + j|k)+D_{23}\bar v(k + j|k)$$
where we used equations (5.36) and (5.37). Now we derive:


x (k+j +1|k) = A x (k + j|k) + B3 v (k + j|k)
z(k + j|k) = C2 x (k + j|k) + D23 v (k + j|k)
Substitution in the performance index leads to:


zT (k + j|k)(j)
z (k + j|k)
J(v, k) =
=

j=0



T
xT (k + j|k)C2T + e(k + j|k)D21



C2 x (k + j|k) + D21 e(k|k) +

j=0



T
D23 v (k + j|k)
2 xT (k + j|k)C2T + e(k + j|k)D21

T
T
+ v
D23 v (k + j|k)
(k + j|k)D23
Now define
$$\breve v(k + j|k)=\bar v(k + j|k)+(D_{23}^TD_{23})^{-1}D_{23}^T\bigl(C_2\,\bar x(k + j|k)+D_{21}\,e(k + j|k)\bigr) \qquad (5.52)$$
Then the state equation is given by:
$$\bar x(k+j+1|k)=A\,\bar x(k + j|k)+B_1\,e(k + j|k)+B_3\,\bar v(k + j|k)$$
$$\phantom{\bar x(k+j+1|k)}=\bigl(A-B_3(D_{23}^TD_{23})^{-1}D_{23}^TC_2\bigr)\bar x(k + j|k)+\bigl(B_1-B_3(D_{23}^TD_{23})^{-1}D_{23}^TD_{21}\bigr)e(k + j|k)+B_3\,\breve v(k + j|k)$$
Now we define a new state
$$\breve x(k + j|k)=\begin{bmatrix}\bar x(k + j|k)\\ e(k + j|k)\end{bmatrix}$$
and define
$$\breve A=\begin{bmatrix}A-B_3(D_{23}^TD_{23})^{-1}D_{23}^TC_2&B_1-B_3(D_{23}^TD_{23})^{-1}D_{23}^TD_{21}\\ 0&0\end{bmatrix}\qquad
\breve B=\begin{bmatrix}B_3\\ 0\end{bmatrix}$$
$$\breve C=\bigl(I-D_{23}(D_{23}^TD_{23})^{-1}D_{23}^T\bigr)\begin{bmatrix}C_2&D_{21}\end{bmatrix}\qquad
\breve D=D_{23} \qquad (5.53)$$

We obtain
$$\breve x(k+j+1|k)=\breve A\,\breve x(k + j|k)+\breve B\,\breve v(k + j|k)$$
$$\hat z(k+j|k)=\breve C\,\breve x(k + j|k)+\breve D\,\breve v(k + j|k)$$
and the performance index becomes
$$J(v,k)=\sum_{j=0}^{\infty}\hat z^T(k+j|k)\,\hat z(k+j|k)
=\sum_{j=0}^{\infty}\Bigl(\breve x^T(k+j|k)\breve C^T\breve C\,\breve x(k+j|k)+2\,\breve x^T(k+j|k)\breve C^T\breve D\,\breve v(k+j|k)+\breve v^T(k+j|k)\breve D^T\breve D\,\breve v(k+j|k)\Bigr)$$
$$\phantom{J(v,k)}=\sum_{j=0}^{\infty}\Bigl(\breve x^T(k+j|k)\,\breve Q\,\breve x(k+j|k)+\breve v^T(k+j|k)\,\breve R\,\breve v(k+j|k)\Bigr)$$
where
$$\breve Q=\breve C^T\breve C=\begin{bmatrix}C_2&D_{21}\end{bmatrix}^T\bigl(I-D_{23}(D_{23}^TD_{23})^{-1}D_{23}^T\bigr)\begin{bmatrix}C_2&D_{21}\end{bmatrix},\qquad
\breve R=\breve D^T\breve D=D_{23}^TD_{23} \qquad (5.54)$$
and we used the fact that $\breve D^T\breve C=D_{23}^T\bigl(I-D_{23}(D_{23}^TD_{23})^{-1}D_{23}^T\bigr)\begin{bmatrix}C_2&D_{21}\end{bmatrix}=0$, so that the cross term vanishes.
Minimizing the performance index has now become a standard LQG problem (Strejc, [63]), and the optimal control signal v̆ is given by
$$\breve v(k)=-(\breve B^TP\breve B+\breve R)^{-1}\breve B^TP\breve A\,\breve x(k|k) \qquad (5.55)$$
where P ≥ 0 is the smallest positive semi-definite solution of the discrete time Riccati equation
$$P=\breve A^TP\breve A-\breve A^TP\breve B\,(\breve B^TP\breve B+\breve R)^{-1}\breve B^TP\breve A+\breve Q$$
which exists due to stabilizability of (Ă, B̆) and invertibility of R̆.
The closed loop equations can now be derived by substitution of (5.52), (5.55) and (5.44)
into (5.52) to obtain
v(k) = F x(k|k) + Dw w(k) + De e(k|k)
where





T
1 T
(D
D
)
D
C
AB
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
F = (B
D23 )1 D23
C2
0




T
1 T
AB
(D
D
)
D
C
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
Dw = ( B
D23 )1 D23
C2 Dssx + Dssv
+ (D23
0




T
1 T
B
B
(D
D
)
D
D
1
3
23
12
T
1
T
T
T
23
23
P
PB
+ R)
B
De = ( B
D23 )1 D23
D21
+ (D23
0

and wss = w(k).
Note that in the above we have used that R̆ = D23ᵀD23 is invertible. This is a strengthened version of assumption 2 as formulated in the beginning of this chapter. With some effort, the above can be derived solely based on assumptions 2 and 3, without relying on invertibility of R̆.
The results are summarized in the following theorem:
Theorem 15
Consider system (5.1) - (5.3) with a steady-state (vss , xss , wss , zss ) = (Dssv wss , Dssx wss , wss , 0).
B,
C,
D,
Q,
and R
be dened as in
Further let w(k + j) = wss for j > 0 and let A,
equations (5.53) and (5.54). The unconstrained innite horizon standard predictive control
problem of minimizing performance index
J(v, k) =

zT (k + j|k)
z (k + j|k)

(5.56)

j=0

is solved by control law


v(k) = F x(k|k) + Dw w(k) + De e(k)

(5.57)

where




T
T
D23 )1 D23
C2
AB3 (D23
T
1 T
T
T

+ (D23
D23 )1 D23
C2
F = (B P B + R) B P
0




T
1 T
(D
D
)
D
C
AB
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
Dw = ( B
D23 )1 D23
C2 Dssx + Dssv
0




T
1 T
B
(D
D
)
D
D
B
1
3
23
12
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
De = ( B
D23 )1 D23
D21
0
and P is the solution of the discrete time Riccati equation
T P A + Q
B
T P B
+ R)
1 B

P = AT P A AT P B(

5.2.4 The infinite horizon Standard Predictive Control Problem with control horizon constraint

The infinite horizon Standard Predictive Control Problem with control horizon constraint is defined as follows:
Definition 16 Consider a system given by the state-space realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

(5.58)
(5.59)
(5.60)


The goal is to nd a controller v(k) = K(w,


y, k) such that the performance index
J(v, k) =

zT (k + j|k)(j)
z (k + j|k)

(5.61)

j=0

is minimized subject to the control horizon constraint


v(k + j|k) = 0 , for j Nc

(5.62)

and inequality constraints

Nc ,41 e(k|k) + D
Nc ,42 w(k)
Nc ,43 v(k) (k)

+D
(k)
= CNc ,4 x(k|k) + D

(5.63)

and for all j Nc :


(k + j) = C5 x(k + j) + D51 e(k + j) + D52 w(k + j) + D53 v(k + j)

(5.64)

where

w(k)

w(k|k)
w(k + 1|k)
..
.

v(k) =

w(k + Nc 1|k)

v(k|k)
v(k + 1|k)
..
.

v(k + Nc 1|k)

The above problem is denoted as the Innite Horizon Standard Predictive Control Problem
with control horizon constraint.

Theorem 17
Consider system (5.58) - (5.60) and let
Nc <


Nc = diag (0), (1), . . . , (Nc 1)

for j Nc
(j) = ss
w(k + j) = wss
for j Nc




vss
Dssv
=
w(k)

xss
Dssx


I 0 0 w(k)

w(k) =

= Ew w(k)
Let A be partioned into a stable part As and an unstable part Au as follows:



 As 0

Ts
A = Ts Tu
Tu
0 Au

(5.65)



where


Ts


 

 Ts
I 0
Tu
=
.
Tu
0 I

Dene the vector


 = (C5 Dssx + D52 Ew + D53 Dssv )w(k)

Rnc5 1

(5.66)

diagonal matrix
E = diag(1 , 2 , . . . , nc5) > 0,

(5.67)

Dene, for any integer m > 0, matrix

E 1 C5 Ts As
E 1 C5 Ts A2
s

Wm =

..

.
1
m
E C5 Ts As

(5.68)

Let n be such that for any x


E 1 C5 Ts Aks x 1 for all k n
implies that
E 1 C5 Ts Ais x 1 for all k > n
Nc ,21 , D
Nc ,22 ,
Let CNc ,2 , D

C2
C2 A

CNc ,2 = ..
.
C2 ANc 1

D22
C2 B2
Nc ,22 =
D

..

Nc ,23 , CNc ,3 , D
Nc ,31 , D
Nc ,32 and D
Nc ,33 be given by
D

D21


DNc ,21 = C2 B1

..
. C2 ANc 2 B1
0
D22

..

0
0
..
.

.
D22

(5.69)

C2 ANc2 B2 C2 ANc3 B2

0
0
D23
C2 B3
D23
0

DNc ,23 =
..
..
.
.

. .
.
C2 ANc2 B3 C2 ANc2 B3 D23




Nc ,31 = ANc 1 B1
D
CNc ,3 = ANc

(5.70)

(5.71)

(5.72)

5.2. INFINITE HORIZON SPCP




[ ANc 1 B2 B2 ] Dssx


= [ ANc 1 B3 B3 ]

Nc ,32 =
D
Nc ,33
D

75


(5.73)
(5.74)

Let Y be given by


Nc ,33 ,
Y = Tu D

(5.75)

Let Y have full row-rank with a right-complement Y r and a right-inverse Y r . Further let
be the solution of the discrete-time Lyapunov equation
M
As M
+ T T C T ss C2 Ts = 0
ATs M
s
2
Consider the innite-horizon model predictive control problem, dened as minimizing (5.61)
subject to an input signal v(k+j|k), with v(k+j|k) = 0 for j Nc and inequality constraints
(5.63) and (5.64).
Dene:


Nc ,43
D
r
A =
Nc ,33 Y
Wn D


r T T
T
T

H = 2(Y ) DNc ,33 Ts M Ts DNc ,33 + DNc ,23 Nc DNc ,23 Y r






Nc ,41 + D
e
Nc ,43 F
Nc ,43 D
D
CNc ,4 D
b (k) =
e ) e(k|k)
Nc ,33 F ) x(k|k)+ Wn (D
Nc ,31 + D
Nc ,33 D
Wn (CNc ,3 D





w
Nc ,42 + D
Nc ,43 D

D
(k)

(5.76)
+
Nc ,32 + D
Nc ,33 D
w ) w(k)
1
Wn (D
where
F = Z1 CNc ,3 Z2 CNc ,2 Z3 Y r Tu CNc ,3
Nc ,31
e = Z1 D
Nc ,31 + Z2 D
Nc ,21 + Z3 Y r Tu D
D
w = Z1 D
Nc ,32 + Z2 D
Nc ,22 + Z3 Y r Tu D
Nc ,32
D
r
r 1
r T T
Ts
Z1 = Y Tu 2Y H (Y ) DNc ,33 TsT M
T
Nc
Z2 = 2Y r H 1(Y r )T D
Nc ,23

T T T M
Ts D
Nc ,33 + 2Y r H 1 (Y r )T D
T

Z3 = 2Y H (Y ) D
Nc ,33 s
Nc ,23 Nc DNc ,23

T
and 1 = 1 1 1 .
The optimal control law that optimizes the inequality constrained innite-horizon MPC
problem is given by:
r

r T

+ D
I (k)
v(k) = F x(k|k) + De e(k|k) + Dw w(k)
where
F = Ev F

e
De = Ev D

w,
Dw = Ev D

D = Ev Y r

(5.77)



and where
I is the solution for the quadratic programming problem
1 T
min
I
I H

I 2
subject to:
A
I + b (k) 0


and Ev = I 0 . . . 0 such that v(k) = Ev v(k).
In the absence of inequality constraints (5.63) and (5.64), the optimum
I = 0 is obtained
and so the control law is given analytically by:

v(k) = F x(k|k) + De e(k|k) + Dw w(k)

(5.78)

Theorem 17 is a special case of theorem 21 in the next subsection.

5.2.5 The Infinite Horizon Standard Predictive Control Problem with structured input signals

The Infinite Horizon Standard Predictive Control Problem with structured input signals is defined as follows:
Definition 18 Consider a system given by the state-space realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

(5.79)
(5.80)
(5.81)

Consider the nite switching horizon Ns , then the goal is to nd a controller such that the
performance index
J(v, k) =

zT (k + j|k)(j)
z (k + j|k)

(5.82)

j=0

is minimized subject to the constraints

ssv w(k)
(k)
= Sv D

+ Sv v(k) = 0

Ns ,41 e(k|k) + D
Ns ,42 w(k)
Ns ,43 v(k) (k)

+D
(k)
= CNs ,4 x(k|k) + D

(5.83)
(5.84)

where Sv is dened on page 68 and for all j Ns :


(k + j|k)
where
(k) = C5 x(k|k) + D51 e(k|k) + D52 w(k) + D53 v(k) ,

(5.85)

5.2. INFINITE HORIZON SPCP

w(k)

w(k|k)
w(k + 1|k)
..
.

77

v(k) =

w(k + Ns 1|k)

v(k|k)
v(k + 1|k)
..
.

v(k + Ns 1|k)

and v(k + j|k) beyond the switching horizon (j Ns ) is given by


s
Bv v(k) + vss
v(k + j|k) = Cv AjN
v

j Ns

(5.86)

The above problem is denoted as the Infinite Horizon Standard Predictive Control Problem with structured input signals.
Remark 19 : The equality constraint in the above definition is due to the constraint imposed on the structure of the input signal in equations (5.86) and (5.83), as discussed in section 5.2.2.
Remark 20 : Equation (5.84) covers inequality constraints up to the switching horizon, equation (5.85) covers inequality constraints beyond the switching horizon.
Theorem 21
Consider system (5.79) - (5.81) and let
Ns <


Ns = diag (0), (1), . . . , (Ns 1)

(j) = ss
for j Ns
w(k + j|k) = wss
for j Ns




vss
Dssv
=
w(k)

xss
Dssx


I 0 0 w(k)
w(k) =

= Ew w(k)

Let AT , CT be given by


A B3 Cv
AT =
0
Av


CT = C2 D23 Cv

(5.87)

with a partitioning of AT into a stable part As and an unstable part Au as follows:






 As 0
Ts
(5.88)
AT = Ts Tu
0 Au
Tu
where


Ts

 


 Ts
I 0
Tu
=
Tu
0 I



Dene the vector

Rnc5 1
 = (C5 Dssx + D52 Ew + D53 Dssv )w(k)

(5.89)

diagonal matrix
E = diag(1 , 2 , . . . , nc5) > 0,

(5.90)

and
C =

C5 D53 Cv

Dene, for any integer m > 0, the matrix

E 1 C Ts As
E 1 C Ts A2
s

Wm =

..

.
1
m
E C Ts As

(5.91)

(5.92)

Let n be such that for any x


E 1 C Ts Aks x 1 for all k n
implies that
E 1 C Ts Ais x 1 for all k > n
Ns ,21 , D
Ns ,22 ,
Let CNs ,2 , D

C2
C2 A

CNs ,2 = ..
.
C2 ANs 1

D22
C2 B2
Ns ,22 =
D

..

Ns ,23 , CNs ,3 , D
Ns ,31 , D
Ns ,32 , and D
Ns ,33 be given by
D

D21
C2 B1

=
D
..

Ns ,21
.

Ns 2
C2 A
B1

0
0
D22
0

..
..
. .
Ns2
Ns3
C2 A
B2 C2 A
B2 D22

D23
0
0
C2 B3
D23
0

DNs ,23 =
..
..
.
.

. .
.
Ns2
Ns2
C2 A
B3 C2 A
B3 D23
 N 
 N 1

A s
A s B1

DNs ,31 =
CNs ,3 =
0
0

(5.93)

(5.94)

(5.95)

(5.96)

5.2. INFINITE HORIZON SPCP




[ ANs 1 B2 B2 ] Dssx
0


[ ANs 1 B3 B3 ]
=
Bv

79


Ns ,32 =
D

(5.97)

Ns ,33
D

(5.98)

Let Y be given by


Ns ,33
Tu D
Y =
,
Sv

(5.99)

Let Y have full row-rank with a right-complement Y r and a right-inverse Y r , partioned as




(5.100)
Y r = Y1r Y2r
Ns ,33 Y r = I, Tu D
Ns ,33 Y r = 0 S  Y r = 0 and S  Y r = I. Further let M
be
such that Tu D
1
2
1
2
v
v
the solution of the discrete-time Lyapunov equation
As M
+ TsT CTT ss CT Ts = 0
ATs M
Consider the innite-horizon model predictive control problem, dened as minimizing (5.82)
subject to an input signal v(k + j|k), given by (5.46) and (5.49)-(5.51) and inequality
constraints (5.84) and (5.85).
Dene:


Ns ,43
D
A =
Y r

Wn DNs ,33


Ns ,23 Y r
Ts D
Ns ,33 + D
Ns D
T T T M
T
H = 2(Y r )T D
Ns ,33 s
Ns ,23




Ns ,41 + D
Ns ,43 F
Ns ,43 D
e
D
CNs ,4 D
b (k) =
e ) e(k|k)
Ns ,33 F ) x(k|k)+ Wn (D
Ns ,31 + D
Ns ,33 D
Wn (CNs ,3 D





w
Ns ,42 + D
Ns ,43 D

D
(k)

(5.101)
+
w ) w(k)
Ns ,32 + D
Ns ,33 D
1
Wn (D
where
F = Z1 CNs ,3 Z2 CNs ,2 Z3 Y1r Tu CNs ,3
e = Z1 D
Ns ,31 + Z2 D
Ns ,21 + Z3 Y r Tu D
Ns ,31
D
1
w = Z1 D
Ns ,32 + Z2 D
Ns ,22 + Z3 Y r Tu D
Ns ,32 (Z3 I)Y r S  D
ssv
D
1
2 v
r
r 1
r T T
T
Ts
Z1 = Y1 Tu 2Y H (Y ) DNs ,33 Ts M
T
Ns
Z2 = 2Y r H 1(Y r )T D
Ns ,23

Z3 = 2Y H
and 1 =

1 1 1

T T T M
Ts D
Ns ,33 + 2Y r H 1 (Y r )T D
T

(Y ) D
Ns ,33 s
Ns ,23 Ns DNs ,23
r T

T



The optimal control law that optimizes the inequality constrained innite-horizon MPC
problem is given by:
v(k) = F x(k|k) + De e(k|k) + Dw w(k)

+ D
I (k)

(5.102)

where
F = Ev F

e
De = Ev D

w,
Dw = Ev D

D = Ev Y r

and where
I is the solution for the quadratic programming problem
1 T
I
min
I H

I 2
subject to:
A
I + b (k) 0
and Ev =

I 0 ... 0

such that v(k) = Ev v(k).

In the absence of inequality constraints (5.84) and (5.85), the optimum


I = 0 is obtained
and so the control law is given analytically by:

v(k) = F x(k|k) + De e(k|k) + Dw w(k)

(5.103)

Remark 22 : Note that if for any i = 1, . . . , nc5 there holds:


[]i 0
the i-th constraint
[ (k + j|k)]i [ ]i
cannot be satised for j and the innite horizon predictive control problem is infeasible. A necessary condition for feasibility is therefore
E = diag(1 , 2 , . . . , nc5) > 0
Proof of theorem 21:
The performance index can be split into two parts:
$$J(v,k)=\sum_{j=0}^{\infty}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k)=J_1(v,k)+J_2(v,k)$$

where
$$J_1(v,k)=\sum_{j=0}^{N_s-1}\hat z^T(k+j|k)\,\Gamma(j)\,\hat z(k+j|k)\qquad
J_2(v,k)=\sum_{j=N_s}^{\infty}\hat z^T(k+j|k)\,\Gamma_{ss}\,\hat z(k+j|k)$$
For technical reasons we will consider the derivation of criterion J2 before we derive criterion J1.

Derivation of J2 :
Consider system (5.79) - (5.81) with structured input signal (5.49), (5.50) and (5.51) and
constraint (5.46). Dene for j Ns :
x (k + j|k) = x(k + j|k) xss
v (k + j|k) = v(k + j|k) vss
Then, using the fact that w(k + j|k) = wss and e(k + j|k) = 0 for j Ns , it follows for
j Ns :
x (k+j +1|k) =
=
=
=
=
z(k + j|k) =
=
=
=
Dene
xT (k + j|k) =

x(k+j +1|k) xss


Ax
(k + j|k) + B1 e(k + j|k) + B2 w(k + j|k) + B3 v(k + j|k) xss
Ax
(k + j|k) + B1 0 + B2 wss + B3 vss + B3 v (k + j|k) xss
Ax
(k + j|k) + 0 + (I A) xss xss + B3 Cv xv (k + j|k)
Ax
(k + j|k) + B3 Cv xv (k + j|k)
C2 x(k + j|k) + D21 e(k + j|k) + D22 w(k + j|k) + D23 v(k + j|k)
C2 x (k + j|k) + C2 xss + D22 wss + D23 vss + D23 v (k + j|k)
C2 x (k + j|k) + 0 + D23 Cv xv (k + j|k)
C2 x (k + j|k) + D23 Cv xv (k + j|k)

x (k + j|k)
xv (k + j|k)

Then, using AT and CT from (5.87), we nd for j Ns :


xT (k+j +1|k) = AT xT (k + j|k)
z(k + j|k) = CT xT (k + j|k)
which is an autonomous system. This means that z(k +j|k) can be computed for all j Ns
if initial state xT (k + Ns |k) is given, so if x(k + Ns ) and xv (k + Ns ) are known. The state

82 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


xv (k + Ns ) is given by equation (5.49), the state x(k + Ns |k) can be found by successive
substitution:


x(k + Ns |k) = ANs x(k|k) + ANs 1 B1 e(k|k) + ANs 1 B2 B2 w(k)

+
 N 1

+ A s B3 B3 v(k)
together with xv (k + Ns |k) = Bv v(k) we obtain


x(k + Ns |k) xss
xT (k + Ns |k) =
xv (k + Ns )
Ns ,31 e(k|k) + D
Ns ,32 w(k)
Ns ,33 v(k)
= CNs ,3 x(k|k) + D

+D
Ns ,31 , D
Ns ,32 and D
Ns ,33 as dened in (5.96)-(5.98), and xss = Dssx w(k).

where CNs ,3 , D
The prediction z(k + j|k) for j Ns is obtained by successive substitution, resulting in:
s
xT (k + Ns |k)
z(k + j|k) = CT AjN
T

(5.104)

Now a problem occurs when the matrix AT contains unstable eigenvalues (|i | 1). For
s
j , the matrix AjN
will become unbounded, and so will z. The only way to solve
T
this problem is to make sure that the part of the state xT (k + Ns ), related to the unstable
poles is zero (Rawlings & Muske, [51]). Make a partitioning of AT into a stable part As
and an unstable part Au as in (5.88), then (5.104) becomes:



s

 AjN
Ts
0
s
xT (k + Ns |k)
z(k + j|k) = CT Ts Tu
s
Tu
0
AjN
u
(5.105)
= CT Ts AjNs Ts xT (k + Ns |k) + CT Tu AjNs Tu xT (k + Ns |k)
s

s
will grow unbounded for j . The prediction z(k + j|k) for j
The matrix AjN
u
will only remain bounded if Tu xT (k + Ns |k) = 0.
In order to guarantee closed-loop stability for systems with unstable modes in AT , we need
to satisfy equality constraint


Ns,31 e(k|k)+ D
Ns,32 w(k)+
Ns,33 v(k) = 0 (5.106)
Tu xT (k+Ns |k) = Tu CNs ,3 x(k|k)+ D

Together with the equality constraint (5.83) we obtain the nal constraint:








Ns ,32
Ns ,31
Ns ,33
Tu D
Tu D
Tu D
Tu CNs ,3

+
v(k) = 0
x(k|k) +
e(k|k) +
ssv w(k)
0
0
Sv
Sv D


Consider Y as given in (5.99) with right-inverse Y r = Y1r Y2r and right-complement
Y r , then the set of all signals v(k) satisfying the equality constraints is given by
(k)
v(k) = vE (k) + Y r

(5.107)

where vE is given by







Ns ,32
Ns ,31
Tu D
Tu CNs ,3
Tu D
r

vE (k) = Y
x(k|k) +
e(k|k) +
ssv w(k)
0
0
Sv D


r
ssv w(k)

= Y1 Tu CNs ,3 x(k|k) + DNs ,31 e(k|k) + DNs ,32 w(k)

+ Y2r Tu Sv D

5.2. INFINITE HORIZON SPCP

83

and
(k) is a free vector with the appropriate dimensions.
If the equality Tu xT (k + Ns |k) = 0 is satised, the prediction becomes:



s

 AjN
Ts xT (k + Ns |k)
0
s
=
z(k + j|k) = CT Ts Tu
s
0
AjN
0
u
= CT Ts AjNs Ts xT (k + Ns |k)
s

(5.108)
(5.109)

Dene
=
M

(ATs )jNs TsT CTT ss CT Ts (As )jNs

(5.110)

j=Ns

then
J2 (v, k) =
=


j=Ns

zT (k + j|k)ss z(k + j|k)


xTT (k + Ns |k)TsT (ATs )jNs TsT CTT ss CT Ts (As )jNs Ts xT (k + Ns |k)

j=Ns

Ts xT (k + Ns |k)
= xTT (k + Ns |k)TsT M
1 T
=
(k) +
T (k)f2 (k) + c2 (k)
(k)H2
2
where
Ns ,33 Y r
T T T M
Ts D
H2 = 2(Y r )T D
Ns ,33 s
Ts xT,E (k+Ns |k)
T T T M
f2 (k) = 2(Y r )T D
Ns ,33 s
Ts xT,E (k+Ns |k)
c2 (k) = xT (k + Ns |k)T T M
T,E

Ns ,31 e(k|k) + D
Ns ,32 w(k)
Ns ,33 vE (k)

+D
xT,E (k+Ns |k) = CNs ,3 x(k|k) + D
can be computed as the solution of the discrete time Lyapunov equation
The matrix M
As M
+ T T C T ss CT Ts = 0
ATs M
s
T
Derivation of J1 :
The expression of J1 (v, k) is equivalent to the expression of the performance index J(v, k)
for a nite prediction horizon. The performance signal vector z(k) is dened for switching
horizon Ns :

z(k|k)

z(k+1|k)

z(k) =

..

.
z(k+Ns 1|k)

84 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


Using the results of chapter 3 we derive:
Ns ,21 e(k|k) + D
Ns ,22 w(k)
Ns ,23 v(k)

+D
z(k) = CNs ,2 x(k|k) + D


Ns,21 e(k|k)+ D
Ns,22 w(k)+
Ns,23 vE (k)+Y r

D
(k)
= CNs ,2 x(k|k)+ D
Ns ,21 , D
Ns ,22 and D
Ns ,23 are given in (5.93)-(5.95). Now J1 becomes
where CNs ,2 , D
J1 (v, k) =

N
s 1


zT (k + j|k)(j)
z (k + j|k) =

j=0

Ns z(k) =
= zT (k)
1 T
=
(k) +
T (k)f1 (k) + c1 (k)

(k)H1
2
where
r
T

H1 = 2(Y r )T D
Ns ,23 Ns DNs ,23 Y
T
E (k)
f1 (k) = 2(Y r )T D
Ns ,23 Ns z
Ns zE (k)
c1 (k) = zET (k)
Ns ,21 e(k|k) + D
Ns ,22 w(k)
Ns ,23 vE (k)

+D
zE (k) = CNs ,2 x(k|k) + D

Minimization of J1 + J2 :
Combining the results we obtain the problem of minimizing
1 T
(k) +
T (k)(f1 (k)+f2 (k)) + c1 (k) + c2 (k)
J1 + J2 =
(k)(H1 +H2 )
2
1 T
(k) H
=
(k) +
T (k) f (k) + c(k)
2
Now dene:
Ns ,31 e(k|k) + D
Ns ,32 w(k)
xT,0 (k + Ns |k) = CNs ,3 x(k|k) + D

Ns ,21 e(k|k) + D
Ns ,22 w(k)
z0 (k) = CNs ,2 x(k|k) + D

ssv w(k)

0 (k) = Sv D
or


Ns ,32
Ns ,31
D
xT,0 (k + Ns |k)
D
CNs ,3
Ns ,22 w(k)

= CN ,2 x(k|k)+ D
Ns ,21 e(k|k)+ D
z0 (k)

s


0 (k)
0
0
Sv Dssv

(5.111)
(5.112)

(5.113)
(5.114)
(5.115)

Then:
vE (k) = Y1r Tu xT,0 (k) Y2r 0 (k)
Ns ,33 vE (k)
xT,E (k + Ns |k) = xT,0 (k) + D
Ns ,33 Y1r Tu xT,0 (k) D
Ns ,33 Y2r 0 (k)
= xT,0 (k) D
Ns ,23 vE (k)
zE (k) = z0 (k) + D
Ns ,23 Y r Tu xT,0 (k) D
Ns ,23 Y r 0 (k)
= z0 (k) D
1
2

(5.116)

5.2. INFINITE HORIZON SPCP

85

and thus:

Y1r Tu
0
Y2r
xT,0 (k + Ns |k)
vE (k)
Ns ,33 Y r Tu ) 0 D
Ns ,33 Y r

xT,E (k + Ns |k) = (I D
z0 (k)
1
2
r
r

Ns ,23 Y Tu
Ns ,23 Y
zE (k)
0 (k)
D
I D
1
2

(5.117)

Collecting the results from equations (5.113)-(5.117) we obtain:


f (k) = f1 (k) + f2 (k)
T
E (k) + 2(Y r )T D
T T T M
Ts xT,E (k + Ns |k)
= 2(Y r )T D
Ns ,23 Ns z
Ns ,33 s

vE (k)


T T T M
Ts 2(Y r )T D
T
xT,E (k + Ns |k)
= 0 2(Y r )T D
Ns ,33 s
Ns ,23 Ns
zE (k)
In the absence of inequality constraints we obtain the unconstrained minimization of
(5.112). The minimum
=
E is found for

E (k) = (H1 + H2 )1 (f1 (k) + f2 (k)) = H 1 f (k)


and so
v(k) = vE (k) + Y r
E (k)
r 1
= vE (k) Y H f (k)
T T T M
Ts xT,E (k + Ns |k)
T
E (k) 2Y r H 1 (Y r )T D
= vE (k) 2Y r H 1 (Y r )T D
Ns ,33 s
Ns ,23 Ns z

(k)
E


T T T M
Ts 2Y r H 1(Y r )T D
T
xT,E (k + Ns |k)
= I 2Y r H 1 (Y r )T D
Ns ,33 s
Ns ,23 Ns
zE (k)


T TsT M
Ts 2Y r H 1(Y r )T D
T

= I 2Y r H 1 (Y r )T D
Ns ,33
Ns ,23 Ns

Y1r Tu
0
Y2r
xT,0 (k + Ns |k)
r
r

z0 (k)
(I DNs ,33 Y1 Tu ) 0 DNs ,33 Y2
Ns ,23 Y r Tu
Ns ,23 Y r
0 (k)
D
I D
1
2


T TsT M
Ts 2Y r H 1(Y r )T D
T

= I 2Y r H 1 (Y r )T D
Ns ,33
Ns ,23 Ns

Y1r Tu
0
Y2r
Ns ,33 Y1r Tu ) 0 D
Ns ,33 Y2r
(I D
r
Ns ,23 Y Tu
Ns ,23 Y r
D
I D
1
2

Ns ,32
Ns ,31
D
D
CNs ,3
Ns ,22 w(k)
Ns ,21 e(k|k)+ D
CNs ,2 x(k|k)+ D

ssv
0
0
Sv D


= Z1 + Z3 Y1r Tu Z2 (Z3 I)Y2r

Ns ,32
Ns ,31
D
D
CNs ,3
Ns ,22 w(k)
Ns ,21 e(k|k)+ D

CNs ,2 x(k|k)+ D


0
0
S Dssv
v

e e(k|k) + D
w w(k)
= F x(k|k) + D

86 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


The optimal input becomes
v(k) = Ev v(k)
e e(k|k) + Ev D
w w(k)
= Ev F x(k|k) + Ev D

= F x(k|k) + De e(k|k) + Dw w(k)


which is equal to control law (5.103).
Inequality constraints:
Now we will concentrate on the inequality constraints. The following derivation is an
extension of the result, given in Rawlings & Muske [51]. Consider the inequality constraints:
= CNs ,4 x(k|k)+ D
Ns,41 e(k|k)+ D
Ns,42 w(k)+
Ns,43 v(k) (k)

(k)

where we assume that (k)


has nite dimension. To allow inequality constraints beyond
the switching horizon Ns we have introduced an extra inequality constraint
(k + j|k)

, j Ns

where Rnc5 1 is a constant vector and (k + j|k) is a prediction of (k + j|k)


given by
(k + j|k) = C5 x(k + j|k) + D51 e(k + j|k) + D52 w(k + j|k) + D53 v(k + j|k)
Dene , E, C and Wm according to (5.89), (5.90), (5.91) and (5.92). Further let n be
dened as in theorem 21 and let
(k + Ns + j|k)

for

j = 1, . . . , n

(5.118)

We derive for j > 0:


(k+Ns +j|k) = C5 x(k + Ns + j|k) + D52 w(k + Ns + j|k) + D53 v(k + Ns + j|k)
= C5 xT (k + Ns + j|k) + D53 Cv xv (k + Ns + j|k)) +
+C5 xss + D52 wss + D53 vss
= C5 x (k + Ns + j|k) + D53 Cv xv (k + Ns + j) +
+(C5 Dssx + D52 + D53 Dssv )wss
= C xT (k + Ns + j|k) + (C5 Dssx + D52 + D53 Dssv )wss
From equation (5.118) we know that for j = 1, . . . , n
(k + Ns + j|k) (C5Dssx + D52 + D53 Dssv )wss < (C5 Dssx + D52 + D53 Dssv )wss
and so
C xT (k + Ns + j|k) 

5.2. INFINITE HORIZON SPCP

87

Multiplied by E 1 we obtain for j = 1, . . . , n :


E 1 (k + Ns + j|k) = E 1 C xT (k + Ns + j|k) 1
or, equivalently,
E 1 C AjT xT (k + Ns |k) 1
We know Tu xT (k + Ns |k) = 0. We nd that:
E 1 C Ts Ajs Ts xT (k + Ns |k) 1.
We know that n is chosen such that the above for j = 1, . . . , n implies that
E 1 C Ts Ajs Ts xT (k + Ns |k) 1.
for all j > n . Combined with Tu xT (k + Ns |k) = 0, this implies that
E 1 C AjT xT (k + Ns |k) 1.
for all j > n . Therefore, starting from (5.118) we derived
(k + Ns + j|k) for j = n + 1, . . . ,

(5.119)

This means that (5.118), together with the extra equality constraint (5.106) implies (5.119).
Equation (5.118) can be rewritten as
Wn xT (k + Ns |k) 1
Together with constraint (5.84) this results in the overall inequality constraint


(k)
(k)
Wn xT (k + Ns |k) 1


0

Consider performance index (5.112) in which the optimization vector (k) is written as

(k) =
E (k) +
I (k)
= (H1 +H2)1 (f1 (k)+f2(k)) +
I (k)
1
= H (f (k)) +
I (k)

(5.120)
(5.121)
(5.122)

where
E (k) is the equality constrained solution given in the previous section, and
i (k) is
an additional term to take the inequality constraints into account. The performance index

88 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


now becomes
1 T

(k) H
(k) + f T (k)
(k) + c(k) =
2
T 

1
H 1 f (k) +
=
I (k) H H 1 f (k) +
I (k)
2


T
1
I (k) + c(k) =
+f (k) H f (k) +
1 T
I (k) f T (k)H 1H
I (k)

(k) H
2 I
1
+ f T (k)H 1 H H 1 f (k) f T (k)H 1 f (k) + f T (k)
I (k) + c(k)
2
1 T
1
=

I (k) H
I (k) f T (k)H 1 f (k) + c(k)
2
2
1 T
I (k) + c (k)
=

(k) H
2 I
=

(5.123)

Now consider the input signal


(k)
v(k) = vE (k) + Y r
r 1
= vE (k) Y H f (k) + Y r
I (k)
r
= vI (k) + Y
I (k)
where
e e(k|k) + D
w w(k)
(k) = F x(k|k) + D

vI (k) = vE (k) + Y r
and dene the signals
Ns,43 vI (k)
I (k) = 0 (k)+ D
Ns,41 + D
e )e(k|k)
Ns ,43 F )x(k|k)+(D
Ns ,43 D
= (CNs ,4 D
Ns ,42 + D
w )w(k)
Ns ,43 D
+(D

xT,I (k + Ns |k) = xT,0 (k + Ns |k) + DNs ,33 vI (k)


Ns,31 + D
e )e(k|k)+
Ns ,33 F )x(k|k)+(D
Ns ,33 D
= (CNs ,3 D
w )w(k)
Ns ,32 + D
Ns ,33 D
(D

(5.124)

(5.125)

then

Ns ,43 Y r
I (k)
(k)
= I (k) + D
Ns ,33 Y r
xT (k + Ns |k) = xT,I (k + Ns |k) + D
I (k)
and we can rewrite the constraint as:



(k)
(k)
= A
I (k) + b (k) 0
Wn xT (k + Ns |k) 1

(5.126)

5.3. IMPLEMENTATION
where

I (k) (k)
b (k) =
Wn xT,I (k + Ns |k) 1


Ns ,43
D
r
A =
Ns ,33 Y
Wn D

89

(5.127)
(5.128)

Substitution of (5.124) and (5.125) in (5.127) gives






Ns ,41 + D
Ns ,43 F
Ns ,43 D
e
D
CNs ,4 D
b (k) =
e ) e(k|k)
Ns ,33 F ) x(k|k)+ Wn (D
Ns ,31 + D
Ns ,33 D
Wn (CNs ,3 D





w
Ns ,42 + D
Ns ,43 D

D
(k)

+
w ) w(k)
Ns ,32 + D
Ns ,33 D
1
Wn (D
The constrained innite horizon MPC problem is now equivalent to a Quadratic Programming problem of minimizing (5.123) subject to (5.126). Note that the term c (k) in (5.123)
does not play any role in the optimization and can therefore be skipped.
2 End Proof

5.3

Implementation

5.3.1

Implementation and computation in LTI case

In this chapter the predictive control problem was solved for various settings (unconstrained, equality/inequality constrained, nite/innite horizon). In the absence of inequality constraints, the resulting controller is given by the control law:
v(k) = F x(k|k) + De e(k|k) + Dw w(k).

(5.129)

Unfortunately, the state x(k|k) of the plant and the true value of e(k|k) are unknown. We
therefore introduce a controller state xc (k) and an estimate ec (k). By substitution of xc (k)
and ec (k) in system equations (5.1)-(5.2) and control law (5.129) we obtain:
xc (k + 1) = A xc (k) + B1 ec (k) + B2 w(k) + B3 v(k)


1
ec (k) = D11 y(k) C1 xc (k) D12 w(k)

v(k) = F xc (k) + De ec (k) + Dw w(k)

(5.130)
(5.131)
(5.132)

Elimination of ec (k) and v(k) from the equations results in the closed loop form for the
controller:
1
1
C1 B3 De D11
C1 ) xc (k)
xc (k + 1) = (A B3 F B1 D11
1
1
+(B1 D11 + B3 De D11 ) y(k)
1
1
D12 Ew + B2 Ew B1 D11
D12 Ew ) w(k)

+(B3 Dw B3 De D11
1
1
v(k) = (F De D11 C1 ) xc (k) + De D11 y(k) +
1
+(Dw De D11
D12 Ew ) w(k)

(5.133)
(5.134)

90 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


process

v
-

B1

B3

-?
- q 1 I
dx(k+1)
6

B2

- d

x(k)
-

C1

-?
d

D12

d
-?

A 

?
y

A 
B2 Ew

-D

12 Ew

- d

Dw

B3

w
d
6

D11

@
R?
- q 1 I
dxc (k+1)
6

- C
rxc (k)
1

-?
d

?
1
D11

F 

B1 
De 

ec

controller
Figure 5.1: Realization of the LTI SPCP controller

with Ew =

I 0 ... 0

is such that w(k) = Ew w(k).

Note that this is an linear time-invariant (LTI) controller.


Figure 5.1 visualizes the controller, where the upper block is the true process, satisfying
equations (5.1) and (5.2), and the lower block is the controller, satisfying equations (5.130),
(5.131) and (5.132). Note that the states of the process and controller are denoted by x
and xc , respectively. When the controller is stabilizing and there is no model error, the
state xc and the noise estimate ec will converge to the true state x and true noise signal
e respectively. After a transient the states of plant and controller will become the same.
Stability of the closed loop is discussed in chapter 6.

5.3. IMPLEMENTATION

91

Computational aspects:
Consider the expression (5.14)-(5.16). The matrices F , De and Dw can also be found by
solving





C2 D
T
T

21 D
22

(D
=D
(5.135)
23 D23 ) F De Dw
23
e and Dw = Ev D
w . Inversion of D

T
where we dene F = Ev F , De = Ev D
23 D23 may
be badly conditioned and should be avoided. As dened in chapter 4, (j) is a diagonal
=
2 . Dene
selection matrix with ones and zeros on the diagonal. This means that
D
23 , then we can do a singular value decomposition of M:
M =




M = U1 U2
VT
0
and the equation 5.135 becomes:




w = V U T C2 D
21 D
22
e D
( V 2 V T ) F D

(5.136)

and so the solution becomes:






22 =
w
21 D
e D
= ( V 2 V T )1 V U1T C2 D
F D


21 D
22
= V 1 U1T C2 D
and so


F De Dw

= Ev V 1 U1T

22
21 D
C2 D

For singular value decomposition robust and reliable algorithms are available. Note that
the inversion of is simple, because it is a diagonal matrix and all elements are positive
23
D
T to have full rank).
(we assumes D
23

5.3.2

Implementation and computation in full SPCP case

In this section we discuss the implementation of the full SPCP controller. For both the
nite and innite horizon the full SPCP can be solved using an optimal control law
v(k) = v0 (k) + D
I (k)

(5.137)

where
is the solution for the quadratic programming problem
1 T
I H
min
I

(5.138)

subject to:
A
I + b (k) 0

(5.139)

92 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM

process

v
-

B1

B3

-?
- q 1 I
dx(k+1)
6

B2

- d

D11

x(k)
-

C1

-?
d

D12

-?
d

A 

?
y

A 
B2 Ew
-

-D

Dw

xc (k+1)
@
R d?
- q 1 I
6

B3

12 Ew

d
6
-

d
-?

?
1
D11

- d
6

rxc (k)
- C

B1 
De 

ec

D
6

Nonlinear mapping

xc


ec

controller
Figure 5.2: Realization of the FULL SPCP controller

5.3. IMPLEMENTATION

93

This optimization problem (5.137)-(5.139) can be seen as a nonlinear mapping with input
variables x(k|k), e(k|k) and w(k)

and signal
I (k):

(k))

I (k) = h(xc (k), ec (k), w(k),


As in the LTI-case (section 5.3.1), estimations xc (k) and ec (k) from the state x(k) and
noise signal e(k), can be obtained using an observer.
Figure 5.2 visualizes the nonlinear controller, where the upper block is the true process,
satisfying equations (5.1) and (5.2), and the lower block is the controller, where the non
linear mapping has inputs xc (k), ec (k), w(k)

and (k)
and output v(k). Again, when the
controller is stabilizing and there is no model error, the state xc and the noise estimate ec
will converge to the true state x and true noise signal e respectively. After a transient the
states of plant and controller will become the same.
An advantage of predictive control is that, by writing the control law as the result of a
constrained optimization problem (5.138)-(5.139), it can eectively deal with constraints.
An important disadvantage is that every time step a computationally expensive optimization problem has to be solved [78]. The time required for the optimization makes model
predictive control not suitable for fast systems and/or complex problems.
In this section we discuss the work of Bemporad et al. [5], who formulate the linear
model predictive control problem as multi-parametric quadratic programs. The control
variables are treated as optimization variables and the state variables as parameters. The
optimal control action is a continuous and piecewise ane function of the state under the
assumption that the active constraints are linearly independent. The key advantage of this
approach is that the control actions are computed o-line: the on-line computation simply
reduces to a function evaluation problem.
The Kuhn-Tucker conditions for the quadratic programming problem (5.138)-(5.139) are
given by:
H
I + AT = 0


T A
I + b (k) = 0

(5.140)

0
A
I + b (k) 0

(5.142)
(5.143)

(5.141)

Equation (5.140) leads to optimal value for I :

I = H 1 AT
substitution into equation (5.141) gives condition


T A H 1 AT + b (k) = 0

(5.144)

(5.145)

Let a and i denote the Lagrangian multipliers corresponding the active and inactive
constraints respectively. For inactive constraint there holds i = 0, for active constraint

94 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


there holds a > 0. Let Si and Sa be selection matrices such that

 

Sa
a

=
Si
i
and


Sa
Si

T 

Sa
Si


= SaT Sa + SiT Si = I

then

=

Sa
Si

T 

a
i


= SaT a

Substitution in (5.144) and (5.145) gives:

I = H 1 AT SaT a


Ta Sa A H 1AT SaT a + b (k) = 0

(5.146)
(5.147)

Because a > 0 equation (5.147) leads to the condition:


Sa A H 1 AT SaT a + Sa b (k) = 0
If Sa A has full-row rank then a can be written as

1
Sa b (k)
a = Sa A H 1AT SaT

(5.148)

Substitution in (5.146) gives



1
Sa b (k)

I = H 1 AT SaT Sa A H 1AT SaT

(5.149)

inequality (5.143) now becomes


I + b (k) =
A
1

1

= b (k) A H
Sa b (k)
Sa A H


1 
=
Sa b (k)
I A H 1 AT SaT Sa A H 1 AT SaT
0

AT

SaT

AT

SaT

(5.150)

The solution
I of (5.149) for a given choice of Sa is the optimal solution of the quadratic
programming problem as long as (5.150) holds and a 0. Combining (5.148) and (5.150)
gives:


M1 (Sa )
(5.151)
b (k) 0
M2 (Sa )

5.3. IMPLEMENTATION

95

where


1
M1 (Sa ) = I A H 1AT SaT Sa A H 1 AT SaT
Sa
1

M2 (Sa ) = Sa A H 1AT SaT
Sa

Let

xc (k)
ec (k)

b (k) = N

w(k)

(k)

Consider the singular value decomposition



 T 

 0
V1
N = U1 U2
0 0
V2T
where is square and has full-rank nN , and dene

xc (k)
ec (k)

(k) = V1T

w(k)

(k)
Condition (5.151) now becomes


M1 (Sa )
U1 (k) 0
M2 (Sa )

(5.152)

(5.153)

Inequality (5.153) describes a polyhedra P(Sa ) in the RnN , and represent a set of all ,
that leads to the set of active constraints described by matrix Sa , and thus corresponding
to the solution
I , described by (5.149).
For all combinations of active constraints (for all Sa ) one can compute the set P(Sa ) and
nN
in this way
 a part of the R will
 be covered. For some values of (or in other words,

for some xc (k), ec (k), w(k),

(k)
), there is no possible combination of active constraints,
and so no feasible solution to the QP-problem is found. In this case we have a feasibility
problem, for these values of we have to relax the predictive control problem in some way.
(this will be discussed in the next section).
All sets P(Sa ) can be computed o-line. The on-line optimization has now become a
search-problem. At time k we can compute (k) following equation (5.152), and determine
to which set P(Sa ) the vector (k) belongs.
I can be computed using the corresponding
Sa in equation (5.149).
Resuming, the method of Bemporad et al. [5], proposes an algorithm that partitions the
state space into polyhedral sets and computes the coecients of the ane function for

96 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


every set. The result is a search tree that determines the set to which a state belongs. The
tree is computed o-line, and the classication and computation of the control signal from
the state and the coecients is done on-line. The method works very well if the number of
inequality constraints is small. However, the number of polyhedral sets grows dramatically
with the number of inequality constraints.
Another way to avoid the cumbersome optimization, is described by Van den Boom et al.

[66]. The nonlinear mapping


I (k) = h(xc (k), ec (k), w(k),

(k))
may be approximated by
a neural network with any desired accuracy. Contrary to [5], linear independency of the
active constraints is not required, and the approach is applicable to control problems with
many constraints.

5.4

Feasibility

At the end of this chapter, a remark has to be made concerning feasibility and stability.
In normal operating conditions, the optimization algorithm provides an optimal solution
within acceptable ranges and limits of the constraints. A drawback of using (hard) constraints is that it may lead to infeasibility: There is no possible control action without
violation of the constraints. This can happen when the prediction and control horizon are
chosen too small, around setpoint changes or when noise and disturbance levels are high.
When there is no feasibility guarantee, stability can not be proven.
If a solution does not exist within the predened ranges and limits of the constraints, the
optimizer should have the means to recover from the infeasibility. Three algorithms to
handle infeasibility are discussed in this section:
soft-constraint approach
minimal time approach
constraint prioritization

Soft-constraint algorithm
In contrast to hard constraints one can dene soft constraints [55],[79], given by an additional term to the performance index and only penalizing constraint violations. For
example, consider the problem with the hard constraint
min J
v

subject to

(5.154)

This output constraint can be softened by considering the following problem:


min J + c 2
v,

subject to

+R , 0 ,

(5.155)

where c 1 and R = diag(r1 , . . . , rM ) with ri > 0 for i = 1, . . . , M. In this way, violation


of the constraints is allowed, but on the cost of a higher performance index. The entries
of the matrix R may be used to tune the allowed or desired violation of the constraints.

5.5. EXAMPLES

97

Minimal time algorithm


+ j|k) (k +
In the minimal time approach [51] consider the inequality constraints (k
j|k) for j = 1, . . . , N (see section 4.2.2). Now we disregard the constraints at the start of
the prediction interval up to some sample number jmt and determine the minimal jmt for
which the problem becomes feasible:
min jmt

subject to

v,jmt

+ j|k) (k + j) for j = jmt + 1, . . . , N


(k

(5.156)

Now we can calculate the optimal input sequence v(k) by optimizing the following problem:
min J
v

subject to

+ j|k) (k + j) for j = jmt + 1, . . . , N


(k

(5.157)

where jmt is the minimum value of problem (5.156). The minimal time algorithm is used if
our aim is to be back in a feasible operation as fast as possible. As a result, the magnitude
of the constraint violations is usually larger than with the soft constraint approach.

Prioritization
The feasibility recovery technique we will discuss now is based on the priorities of the
constraints. The constraints are ordered from lowest to highest priority. In the (nominal)
optimization problem becomes infeasible we start by dropping the lowest constraints and
see if the resulting reduced optimization problem becomes feasible. As long as the problem
is not feasible we continue by dropping more and more constraints until the optimization
is feasible again. This means we solve a sequence of quadratic programming problems in
the case of infeasibility. The algorithm minimizes the violations of the constraints which
cannot be fullled. Note that it may take several trials of dropping constraints and trying
to nd an optimal solution, which is not desirable in any real time application.

5.5

Examples

Example 23 : computation of controller matrices (unconstrained case)


Consider the system of example 7 on page 52. When we compute the optimal predictive
control for the unconstrained case, with N = Nc = 4, we obtain the controller matrices:


0 1 0.4 0.3 1
F =


0.6
De =


0.1 1 0 0 0 0 0 0
Dw =
Example 24 : computation of controller matrices (equality constrained case)
We add a control horizon for Nc = 1 and use the results for the equality constrained case

98 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM


to obtain the controller matrices:


0 0.8624 0.2985 0.3677 1
F =


0.6339
De =


0.0831 0.8624 0.0232 0.2587 0.0132 0.1639 0.0158 0.1578
Dw =
Example 25 : MPC of an elevator
A predictive controller design problem with constraints will be illustrated using a triple
integrator chain
...

y (t) = u(t) + (t)


...

with constraints on y, y,
y and y . This is a model of a rigid body approximation of an
elevator for position control, subject to constraints on speed, acceleration and jerk (time
derivative of accelaration).
Dene the state

y(t)
x1 (t)

x(t) = x2 (t) = y(t)


y(t)
x3 (t)
The noise model is given by:
...

(t) = e (t) + 0.5


e(t) + e(t)
+ 0.1e(t)
where e is given as zero-mean white noise. Then the continuous-time state space description
of this system can be given
x 1 (t)
x 2 (t)
x 3 (t)
y(t)

=
=
=
=

u(t) + 0.1e(t)
x1 (t) + e(t)
x2 (t) + 0.5 e(t)
x3 (t) + e(t)
...

where the noise is now acting on the states y (t) and output y(t). In matrix form we get:
x(t)

= Ac x(t) + Kc e(t) + Bc u(t)


y(t) = Cc x(t) + e(t)
where

0 0 0
Ac = 1 0 0
0 1 0

1
Bc = 0
0

0.1
Kc = 1
0.5

Cc =

0 0 1

A zero-order hold transformation with sampling-time T gives a discrete-time IO model:


xo (k + 1) = Ao xo (k) + Ko e(k) + Bo u(k)
y(k) = Co xo (k) + e(k)

5.5. EXAMPLES
where

99

1
0 0
1 0
Ao = T
2
T /2 T 1


Co = 0 0 1

T
Bo = T 2 /2
T 3 /6

0.1 T

T + T 2 /20
Ko =
2
3
T /2 + T /6 + T /60

The prediction model is built up as given in section 3. Note that we use u(k) rather
than u(k). This because the system already consists of a triple integrator, and another
integrator in the controller is not desired. The aim of this example is to direct the elevator
from position y = 0 at time t = 0 to position y = 1 as fast as possible. For a comfortable
and safe operation, control is done subject to constraints on the jerk, acceleration, speed
and overshoot, given by
...

| y (t)|
|
y (t)|
|y(t)|

y(t)

(jerk)
(acceleration)
(speed)
(overshoot)

0.4
0.3
0.4
1.01

These constraints can be translated into linear constraints on the control signal u(k) and
prediction of the state x(k):
u(k + j 1)
u(k + j 1)
x1 (k + j|k)

x1 (k + j|k)
x2 (k + j|k)

x2 (k + j|k)
x3 (k + j|k)

0.4
0.4
0.3
0.3
0.4
0.4
1.01

(positive jerk)
(negative jerk)
(positive acceleration)
(negative acceleration)
(positive speed)
(negative speed)
(overshoot)

for all j = 1, . . . , N.
We choose sampling-time T = 0.1, and prediction and control horizon N = Nc = 30, and
minimum cost-horizon Nm = 1. Further we consider a constant reference signal r(k) = 1
for all k, and the weightings parameters P (q) = 1, = 0.1. The predictive control problem
is solved by minimizing the GPC performance index (for an IO model)
J(u, k) =

N 


y(k + j|k) r(k + j|k)

T 

y(k + j|k) r(k + j|k) +

j=Nm

Nc


uT (k + j 1|k)u(k + j 1|k)

j=1

subject to the above linear constraints.


The optimal control sequence is computed and implemented. In gure 5.3 the results are
given. It can be seen that the jerk (solid line), acceleration (dotted line), speed (dashed
line) and the position (dashed-dotted line) are all bounded by the constraints.

100 CHAPTER 5. SOLVING THE STANDARD PREDICTIVE CONTROL PROBLEM

1.2

position
velocity
acceleration
jerk

y, x, u

0.8

0.6

0.4

0.2

0.2

0.4

0.6

10

20

30

40

50

60

Figure 5.3: Predictive control of an elevator

70

80

Chapter 6
Stability
Predictive control design does not give an a priori guaranteed stabilizing controller. Methods like LQ, H , 1 -control have built-in stability properties. In this chapter we analyze the
stability of a predictive controller. We distinguish two dierent cases, namely the unconstrained and constrained case. It is very important to notice that a predictive controller
that is stable for the unconstrained case is not automatically stable for the constrained
case.
In Section 6.4 the relation between MPC, the Internal Model Control (IMC) and the Youla
parametrization is discussed.
Section 6.5 nalizes this chapter by considering the concept of robustness.

6.1

Stability for the LTI case

In the case where inequality constraints are absent, the problem of stability is not too
dicult, because the controller is linear and time-invariant (LTI).
Let the process be given by:
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)

(6.1)

and the controller by (5.133) and (5.134):


1
1
C1 B3 De D11
C1 ) xc (k)
xc (k + 1) = (A B3 F B1 D11
1
1
+(B1 D11 + B3 De D11 ) y(k) +
1
1
D12 Ew + B2 Ew B1 D11
D12 Ew ) w(k)

+(B3 Dw B3 De D11
1
1
1
v(k) = (F De D11 C1 ) xc (k) + De D11 y(k) + (Dw De D11 D12 Ew ) w(k)

with Ew =

I 0 ... 0

is such that w(k) = Ew w(k).

We make one closed loop state


101

102

CHAPTER 6. STABILITY

space representation by substitution:


1
1
1
(F + De D11
C1 ) xc (k) + De D11
y(k) + (Dw De D11
D12 Ew ) w(k)

1
1
(F + De D11 C1 ) xc (k) + De D11 C1 x(k) + De e(k) + Dw w(k)

A x(k) + B1 e(k) + B2 Ew w(k)

+ B3 v(k)
1

(B3 F + B3 De D11
C1 ) xc (k)
A x(k) + B1 e(k) + B2 Ew w(k)
1
+B3 De D11 C1 x(k) + B3 De e(k) + B3 Dw w(k)

1
1
= (A + B3 De D11 C1 ) x(k) (B3 F + B3 De D11
C1 ) xc (k)
+(B1 + B3 De ) e(k) + (B3 Dw + B2 Ew ) w(k)

1
1
xc (k + 1) = (A B3 F B1 D11 C1 B3 De D11 C1 ) xc (k)
1
1
+(B1 D11
+ B3 De D11
) y(k)
1
1
+(B3 Dw B3 De D11 D12 Ew + B2 Ew B1 D11
D12 Ew )w(k)

1
1
= (A B3 F B1 D11 C1 B3 De D11 C1 ) xc (k)
1
1
+(B1 D11
C1 + B3 De D11
C1 ) x(k) + (B1 + B3 De ) e(k)
+(B3 Dw + B2 Ew ) w(k)

v(k) =
=
x(k + 1) =
=

or in matrix form:





1
1
C1
B3 F B3 De D11
C1
x(k + 1)
A+B3De D11
x(k)
=
1
1
1
1
xc (k + 1)
B1 D11
C1 +B3 De D11
C1 AB3 F B1 D11
C1 B3 De D11
C1
xc (k)




B1 + B3 De
B3 Dw + B2 Ew
+
e(k) +
w(k)

B1 + B3 De
B3 Dw + B2 Ew


x(k)
+ B1,cl e(k) + B2,cl w(k)
= Acl

xc (k)




 



1
1
C1 F De D11
C1
v(k)
De D11
x(k)
De
Dw
=
+
e(k) +
w(k)

y(k)
C1
0
xc (k)
D11
D12 Ew


x(k)
+ D1,cl e(k) + D2,cl w(k)

= Ccl
xc (k)
Next we choose a new state:
 





I 0
x(k)
x(k)
x(k)
=
=T
I I
xc (k)
xc (k)
xc (k) x(k)
This state transformation with


I 0
T =
and
I I


T

I 0
I I

leads to a new realization






x(k)
x(k + 1)

1,cl e(k) + B2,cl w(k)

= Acl
+B
xc (k) x(k)
xc (k + 1) x(k + 1)




x(k)
v(k)
2,cl w(k)
1,cl e(k) + D

+D
= Ccl
xc (k)
y(k)

6.1. STABILITY FOR THE LTI CASE

103

with system matrices (see appendix B):


Acl = T Acl T 1




1
1
I 0
A+B3 De D11
C1
B3 F B3 De D11
C1
I 0
=
1
1
1
1
B1 D11
C1 +B3 De D11
C1 AB3 F B1 D11
C1 B3 De D11
C1
I I
I I


1
A B3 F B3 F B3 De D11 C1
=
1
0
A B1 D11
C1
1,cl = T B1,cl
B



I 0
B1 + B3 De
=
B1 + B3 De
I I


B1 + B3 De
=
0
2,cl = T B2,cl
B



I 0
B3 Dw + B2 Ew
=
I I
B3 Dw + B2 Ew


B3 Dw + B2 Ew
=
0
1

Ccl = Ccl T



1
1
De D11
I 0
C1 F De D11
C1
=
C1
0
I I


1
F F De D11 C1
=
C1
0

D1,cl = D1,cl
2,cl = D2,cl
D
The eigenvalues of the matrix Acl are equal to eigenvalues of (AB3 F ) and the eigenvalues
1
of (A B1 D11
C1 ). This means that in the unconstrained case necessary and sucient
conditions for closed-loop stability are:
1. the eigenvalues of (A B3 F ) are strictly inside the unit circle.
1
C1 ) are strictly inside the unit circle.
2. the eigenvalues of (A B1 D11

Condition (1) can be satised by choosing appropriate tuning parameters such that the
feedback matrix F makes |(AB3 F ) | < 1. We can obtain that by a careful tuning (chapter
8), by introducing the end-point constraint (section 6.3) or by extending the prediction
horizon to innity (section 6.3).
Condition (2) is related with the choice of the noise model H(q), given by the input output
relation (compare 6.1)
x(k + 1) = A x(k) + B1 e(k)
y(k) = C1 x(k) + D11 e(k)

104

CHAPTER 6. STABILITY
e

w
?

v


y-

plant

predictive
controller

w
Figure 6.1: Closed loop conguration
with transfer function:
H(q) = C1 (qI A)1 B1 + D11
From appendix B we learn that for


A
B1
H(q)
C1
D11
we nd
H


1

(q)

1
C1
A B1 D11
1
D11 C1

1
B1 D11
1
D11

so the inverse noise model is given (for v = 0 and w = 0) by the state space equations:
1
1
x(k + 1) = (A B1 D11
C1 ) x(k) + B1 D11
y(k)
1
1
e(k) = D11 C1 x(k) + D11 y(k)

with transfer function:


1
1
1
1
C1 (qI A + B1 D11
C1 )1 B1 D11
+ D11
H 1 (q) = D11
1
From these equations it is clear that the eigenvalues of (A B1 D11
C1 ) are exactly equal
1
to the poles of the system H (q), and thus a necessary condition for closed loop stability
is that the inverse of the noise model is stable.

If condition (2) is not satised, it means that the inverse of the noise model is not stable.
1
We will have to nd a new observer-gain B1,new such that A B1,new D11
C1 is stable,

6.2. STABILITY FOR THE INEQUALITY CONSTRAINED CASE

105

without changing the stochastic properties of the noise model. This can be done by a
spectral factorization
T
H(q)H T (q 1 ) = Hnew (q) Hnew
(q 1 )
1
where Hnew and Hnew
are both stable. H(q) and Hnew (q) have the same stochastic properties and so replacing H by Hnew will not aect the properties of the process. The new
Hnew is given as follows:

Hnew (q) = C1 (qI A)1 B1,new + D11


The new observer-gain B1,new is given by:
1
B1,new = (AXC1T + B1 D11
)(I + C1 XC1T )1 D11

where X is the positive denite solution of the discrete algebraic Riccati equation:
1
1
(A B1 D11
C1 )T X(I + C1T C1 X)(A B1 D11
C1 ) X = 0

6.2

Stability for the inequality constrained case

The issue of stability becomes even more complicated in the case where constraints have
to be satised. When there are only constraints on the control signal, a stability guarantee
can be given if the process itself is stable (Sontag [62], Balakrishnan et al. [4]).
In the general case with constraints on input, output and states, the main problem is
feasibility. The existence of a stabilizing control law is not at all trivial. For stability in
the inequality constrained case we need to modify the predictive control problem (as will
be discussed in Section 6.3) or we have to solve the predictive control problem with a state
feedback law using Linear Matrix inequalities, based on the work of Kothare et al. [35].
(This will be discussed in section 7.3.)

6.3

Modications for guaranteed stability

In this section we will introduce some modications of the MPC design method that
will provide guaranteed stability of the closed loop. We discuss the following stabilizing
modications ([45]):
1. Innite prediction horizon
2. End-point constraint (or terminal constraint)
3. Terminal cost function
4. Terminal cost function with end-point constraint set

106

CHAPTER 6. STABILITY

These modication uses the concept of monotonicity of the performance index to prove
stability ([12, 37]). In most modication we assume the steady-state (vss , xss , wss , zss ) =
(vss , xss , wss , 0) is unique, so the matrix Mss as dened in (5.38) has full column rank.
Furthermore we assume that the system with input v and output z is minimum-phase, so
Assumption 26 The matrix


zI A B3
C2 D23
does not drop in rank for all |z| 1.
Note that if assumption 26 is satised, Mss will have full rank. In most of the modications we will use the KrasovskiiLaSalle principle [72] to prove that the steady-state is
asymptotically stable. If a function V (k) can be found such that
V (k) > 0 , for all x = xss
V (k) = V (k) V (k 1) 0 for all x
and V (k) = V (k) = 0 for x = xss , and if the set {V (k) = 0} contains no trajectory of
the system except the trivial trajectory x(k) = xss for k 0, then the Krasovskii-LaSalle
principle states that the the steady-state x(k) = xss is asymptotically stable.

Innite prediction horizon


A straight forward way to guarantee stability in predictive control is to choose an innite
prediction horizon (N = ). It is easy to show that the controller is stabilizing for the
unconstrained case, which is formulated in the following theorem:
Theorem 27 Consider a LTI system given in the state space description
x(k + 1) = A x(k) + B2 w(k) + B3 v(k),
z(k) = C2 x(k) + D22 w(k) + D23 v(k),

(6.2)
(6.3)

satisfying assumption 26. Dene the set

v(k|k)

..
v(k) =
,
.
v(k + Nc 1|k)
where Nc ZZ+ {}. A performance index is dened as

T



z(k + j|k) (j) z(k + j|k) ,
min J(
v , k) = min
v(k)

v(k)

j=0

where (i) (i+j) > 0 for j 0. The predictive control law minimizing this performance
index results in a stabilizing controller.

6.3. MODIFICATIONS FOR GUARANTEED STABILITY

107

Proof:
First let us dene the function
V (k) = min J(
v , k) 0,

(6.4)

v(k)

and let v (k) be the optimizer


v (k) = arg min J(k),
v(k)

where

v (k) =

v (k|k)
v (k + 1|k)
..
.
v (k + Nc 1|k)

for Nc < ,

or

v (k|k)

v (k) = v (k + 1|k) for Nc = ,


..
.
where v (k + j|k) means the optimal input value to be applied a time k + j as calculated
at time k. Let z (k + j|k) be the (optimal) performance signal at time k + j when the
optimal input sequence v (k) computed at time k is applied. Then
V (k) =




z (k + j|k)

T




(j) z (k + j|k) .

j=0

Now based on v (k) and the chosen stabilizing modication method we construct a vector
vsub (k + 1). This vector can be seen as a suboptimal input sequence for the k + 1-th
optimization. The idea is to construct vsub (k + 1) such that
J(
vsub , k + 1) V (k).
If we now compute
v , k + 1),
V (k + 1) = min J(
v(k+1)

we will nd that
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1) V (k).
v(k+1)

108

CHAPTER 6. STABILITY

For the innite horizon case we dene the suboptimal control sequence as follows:

v (k + 1|k)

v (k + 2|k)

.
.
vsub (k + 1) =
for Nc < ,
.

v (k + Nc 1|k)
vss
or

v (k + 1|k)


vsub (k + 1) = v (k + 2|k) for Nc = ,
..
.

and compute
Vsub (k + 1) = J(
vsub , k + 1),
then this value is equal to

T



Vsub (k + 1) =
z (k + j|k) (j) z (k + j|k) ,
j=1

and so
Vsub (k + 1) V (k)
We proceed by observing that
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)

which means that V (k + 1) V (k). Using the fact that V (k) = V (k) V (k 1) 0 and
that, based on assumption 26, the set {V (k) = 0} contains no trajectory of the system
except the trivial trajectory x(k) = xss for k 0, the Krasovskii-LaSalle principle [72]
states that the steady-state is asymptotically stable.
2 End Proof
As is shown above, for stability it is not important whether Nc = or Nc < . For Nc =
the predictive controller becomes equal to the optimal LQ-solution (Bitmead, Gevers &
Wertz [7], see section 5.2.3). The main reason not to choose an innite control horizon is the
fact that constraint-handling on an innite horizon is an extremely hard problem. Rawlings
& Muske ([51]) studied the case where the prediction horizon is equal to innity (N = ),
but the control horizon is nite (Nc < ), see the sections 5.2.4 and 5.2.5. Because of the
nite number of control actions that can be applied, the constraint-handling has become
a nite-dimensional problem instead of an innite-dimensional problem for Nc = .

6.3. MODIFICATIONS FOR GUARANTEED STABILITY

109

End-point constraint (terminal constraint)


The main idea of the end-point constraint is to force the system to its steady state at the
end of the prediction interval. Since the end-point constraint is an equality constraint, it
still holds that the resulting controller is linear and time-invariant.

Theorem 28 Consider a LTI system given in the state space description


x(k + 1) = A x(k) + B2 w(k) + B3 v(k),
z(k) = C2 x(k) + D22 w(k) + D23 v(k),
satisfying assumption 26. A performance index is dened as
v , k) = min
min J(
v(k)

v(k)

N
1


T
z(k + j|k)



(j) z(k + j|k) ,

(6.5)

j=0

where (i) (i + 1) > 0 is positive denite for i = 0, . . . , N 1 and an additional equality


constraint is given by:
x(k + N) = xss .

(6.6)

Finally, let w(k) = wss for k 0. Then, the predictive control law, minimizing (6.5),
results in a stable closed loop.
proof:
Also in this modication the performance index will be used as a Lyapunov function for
the closed loop system. First let us dene the function
V (k) = min J(
v , k) 0,

(6.7)

v(k)

and let v (k) be the optimizer


v , k)
v (k) = arg min J(
v(k)

v (k|k)
..


v
(k
+
N

c 2|k)

= v (k + Nc 1|k)

vss

..

.
vss
v , k),
= arg min J(
v(k)

110

CHAPTER 6. STABILITY

where v (k + j|k) means the optimal input value to be applied a time k + j as calculated
at time k. Let z (k + j|k) be the (optimal) performance signal at time k + j when the
optimal input sequence v (k) computed at time k is applied. Then

T



z (k + j|k) (j) z (k + j|k) .
V (k) =
j=0

Now based on v (k) and the chosen stabilizing modication method we construct the vector
vsub (k + 1) as follows:

v (k + 1|k)
..


v (k + Nc 1|k)

vss
vsub (k + 1) =
,

vss

.
..

vss
and we compute
vsub , k + 1).
Vsub (k + 1) = J(
Applying this above input sequence vsub (k + 1) to the system results in a zsub (k + j|k + 1) =
z (k + j|k) for j = 1, . . . , N and zsub (k + N + j|k + 1) = 0 for j > 0. With this we derive
Vsub (k + 1) =

N 
T



zsub (k+j +1|k + 1) (j + 1) zsub (k+j +1|k + 1)
j=1

N 
T




zsub (k+j +1|k + 1) (j) zsub (k+j +1|k + 1)


j=1
N 
T



=
z (k + j|k) (j) z (k + j|k)
j=2



z (k|k)
= V (k) z (k|k)T (1)
V (k).
Further, because of the receding horizon strategy, we do a new optimization
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)

and therefore, V (k + 1) = V (k + 1) V (k) 0. Also for the end-point constraint,


the set {V (k) = 0} contains no trajectory of the system except the trivial trajectory
x(k) = xss for k 0 and we can use the Krasovskii-LaSalle principle [72] to proof that the
steady-state is asymptotically stable.
2 End Proof

6.3. MODIFICATIONS FOR GUARANTEED STABILITY

111

Remark 29 : The constraint


W x(k + N) = W xss ,
where W is a mn-matrix with full column-rank (so m n), is equivalent to the end-point
constraint (6.6), because x(k + N) = xss if W x(k + N) = W xss . Therefore, the constraint

C1
y(k + N|k)
C1
yss
C1 A

y(k + N + 1|k)

.. C1 A
x(k + N)
xss = 0,

. =
..
..
..

.
.
.
yss
n1
n1
C1 A
C1 A
y(k + N + n 1|k)
also results in a

C1
C1 A

..

stable closed loop if the matrix

C1 An1
has full rank. For SISO systems with pair (C1 , A) observable, we choose n = dim(A),
so equal to the number of states. For MIMO systems, there may be an integer value
n < dim(A) for which W has full column-rank.
The end-point constraint was rst introduced by (Clarke & Scattolini [16]), (Mosca & Zhang
[47]) and forces the output y(k) to match the reference signal r(k) = rss at the end of the
prediction horizon:
y(k + N + j) = rss for j = 1, . . . , n.
With this additional constraint the closed-loop system is asymptotically stable.
Although stability is guaranteed, the use of an end-point constraint has some disadvantages:
The end-point constraint is only a sucient, but not a necessary condition for stability.
The constraint may be too restrictive and can lead to infeasibility, especially if the horizon
N is chosen too small.
The tuning rules, as will discussed in the Chapter 8, change. This has to do with the fact
that because of the control horizon Nc the output is already forced to its steady state at
time Nc and so for a xed Nc a prediction horizon N = Nc will give the same stable closed
loop behavior as any choice of N > Nc . The choice of the prediction horizon N is overruled
by the choice of control horizon Nc and we loose a degree of freedom.

Terminal cost function


One of the earliest proposals for modifying the performance index J to ensure closed-loop
stability was the addition of a terminal cost [7] in the context of model predictive control
of unconstrained linear system.

112

CHAPTER 6. STABILITY

Theorem 30 Consider a LTI system given in the state space description


x(k + 1) = A x(k) + B2 w(k) + B3 v(k),
z(k) = C2 x(k) + D22 w(k) + D23 v(k),
satisfying assumption 26, and w(k) = wss for k 0. A performance index is dened as

T 

x(k + N|k) xss P0 x(k + N|k) xss
v , k) = min
min J(
v(k)
v(k)

N
1
T 


+
z(k + j|k)
z(k + j|k)
,
(6.8)
j=0

where P0 is the solution to the Riccati equation


P0 = 0,
1 B T P0 A + Q
AT P0 A AT P0 B3 (B3T P0 B3 + R)
3

(6.9)

where


T
1 T

A = A B3 (D23 D23 ) D23 C2 ,


= C T (I D23 (D T D23 )1 D T )C2 ,
Q
2
23
23
T

R = D D23 .
23

Then the predictive control law, minimizing (6.8), results in a stable closed loop.
Proof:
Dene the signals
x (k + j|k) = x(k + j|k) xss for all j 0,
v (k + j|k) = v(k + j|k) vss for all j 0,


T
T
C2 x (k + j|k) ,
D23 )1 D23
v(k + j) = v (k + j) + (D23
then, according to section 5.2.2, it follows that the system can be described by the state
space equation
x (k + j|k) + B3 v(k + j),
x (k+j +1|k) = A
and the performance index becomes
J(v, k) = xT (k + N|k)P0 x (k + N|k)
+

N
1

j=0

x (k + j|k) + vT (k + j)R
v (k + j).
xT (k + j|k)Q

6.3. MODIFICATIONS FOR GUARANTEED STABILITY

113

Now consider another (innite horizon) performance index

J (v, k) =

x (k + j|k) + vT (k + j)R
v(k + j),
xT (k + j|k)Q

j=0

and we structure the input signal such that v(k + j) is free for j = 0, . . . , Ns 1 and
v(k + j) = F x (k + j|k) where

1 B T P0 A.
F = (B T P0 B3 + R)
3

The performance index can now be rewritten as


N
1


J a (v, k) =

x (k + j|k) + vT (k + j)R
v (k + j)
xT (k + j|k)Q

j=0

x (k + j|k) + vT (k + j)R
v (k + j),
xT (k + j|k)Q

j=N

Then for j N:
x (k + j)
xT (k + j + 1)P0 x (k + j + 1) xT (k + j)P0 x (k + j) + xT (k + j)Q
v (k + j) =
+ vT (k + j)R

T  T



x (k + j)
AT P0 B3
A P0 A P0 + Q
x (k + j)
=
=

B3T P0 B3 + R
B T P0 A
v(k + j)
v(k + j)


 3T

 A P0 A P0 + Q

I
AT P0 B3
x (k + j)
=
xT (k + j) I F T
T
T

B3 P0 B3 + R
B3 P0 A
F



 AT P0 A P0 + Q

AT P0 B3
T
T
T
1

=
x (k + j) I A P0 B3 (B3 P0 B3 + R)

B3T P0 B3 + R
B3T P0 A


I

1 B T P0 A x (k + j)
(B3T P0 B3 + R)
3


AT P0 B3 (B T P0 B3 + R)
1 B T P0 A x (k + j)
=
xT (k + j) AT P0 A P0 + Q
3
3
=0,
and so we nd


x (k + j) + vT (k + j)R
v (k + j) = x (k + N)T P0 x (k + N),
xT (k + j)Q
j=Ns

which means that J = J a . We already have proven that the innite horizon performance
index Ja is a Lyapunov function. Because the nite horizon predictive controller with a
terminal cost function gives the same performance index J = J a means that also J is a
Lyapunov function and so nite horizon predictive controller with a terminal cost function
stabilizes the system.
2 End Proof

114

CHAPTER 6. STABILITY

Terminal cost function with End-point constraint set (Terminal


constraint set)
The main disadvantage of the terminal cost function is that it can only handle the unconstrained case. The end-point constraint can handle constraints, but it is very conservative
and can easily lead to infeasibility. If we replace the end-point constraint by an end-point
constraint set and also add the terminal cost function, we can ensure closed-loop stability
for the constrained case in a non-conservative way [31, 60, 64].
Theorem 31 Consider a LTI system given in the state space description
x(k + 1) = A x(k) + B2 w(k) + B3 v(k),
z(k) = C2 x(k) + D22 w(k) + D23 v(k),
(k) = C4 x(k) + D42 w(k) + D43 v(k),
satisfying assumption 26. A performance index is dened as

T 

x(k + N|k) xss P0 x(k + N|k) xss
min J(
v , k) = min
v(k)
v(k)

N
1
T 


z(k + j|k)
z(k + j|k)
+
,

(6.10)

j=0

where P0 is the solution to the Riccati equation


P0 = 0,
1 B3T P0 A + Q
AT P0 A AT P0 B3 (B3T P0 B3 + R)
for

(6.11)



T
1 T

A = A B3 (D23 D23 ) D23 C2 ,


= C T (I D23 (D T D23 )1 D T )C2 ,
Q
2
23
23
T

R = D D23 .
23

The signal constraint is dened by


(k) ,

(6.12)

where is constant vector. Let w(k) = wss for k 0, and consider the linear control law



T
T
D23 )1 D23
C2 x(k + j|k) xss + vss ,
(6.13)
v(k + j|k) = ( F + (D23
Finally let W be the set of all states for which (6.12)
1 B T P0 A.
with F = (B3T P0 B3 + R)
3
holds under control law (6.13), and assume
D D+ W.

(6.14)

Then the predictive control law, minimizing (6.8), subject to (6.12) and terminal constraint
x(k + N|k) D,
results in a stable closed loop.

(6.15)

6.3. MODIFICATIONS FOR GUARANTEED STABILITY

115

Proof:
Dene the signals
x (k + j|k) = x(k + j|k) xss for all j 0,
v (k + j|k) = v(k + j|k) vss for all j 0,


T
1 T
v(k + j) = v (k + j) + (D23 D23 ) D23 C2 x (k + j|k) ,
then according to section 5.2.2, it follows that the system can be described by the state
space equation
x (k + j|k) + B3 v(k + j),
x (k+j +1|k) = A
and the performance index becomes
J(
v , k) = xT (k + N|k)P0 x (k + N|k)
+

N
1


x (k + j|k) + vT (k + j)R
v (k + j).
xT (k + j|k)Q

j=0

Note that because x(k + N|k) D and the set D D+ where D+ is invariant under
feedback law (6.13), we can rewrite this performance index as
v, k) =
J (
+

N
1

j=0

x (k + j|k) + vT (k + j)R
v(k + j)
xT (k + j|k)Q
x (k + j|k) + vT (k + j)R
v(k + j).
xT (k + j|k)Q

j=N

Dene the function


V (k) = min J (
v , k),
v(k)

and the optimal vector


v (k) = arg min J (k),
v(k)

where

v (k|k)

v (k + 1|k)
..
.

v (k) = v (k + N 1|k)

F x (k + N|k)

F x (k + N + 1|k)
..
.

116

CHAPTER 6. STABILITY

where v (k + j|k) means the optimal input value to be applied a time k + j as calculated
at time k. Now dene

v (k + 1|k)

v (k + 2|k)

..

vsub (k + 1) = v (k + N 1|k) .

F x (k + N|k)

F x (k + N + 1|k)
..
.
Note that this control law is feasible, because x(k +N +j|k) D for all j 0. We compute
Vsub (k + 1) = J(
vsub , k + 1),
then this value is equal to

T 


Vsub (k + 1) =
z (k + j|k)
z (k + j|k) ,
j=1

and we nd
Vsub (k + 1) V (k).
Further
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)

and therefore, V (k + 1) = V (k + 1) V (k) 0. Also in this case the set {V (k) = 0}


contains no trajectory of the system except the trivial trajectory x(k) = xss for k 0
and we can use so the Krasovskii-LaSalle principle [72] to proof that the steady-state is
asymptotically stable.
2 End Proof

6.4
6.4.1

Relation to IMC scheme and Youla parametrization


The IMC scheme

The Internal Model Control (IMC) scheme provides a practical structure to analyze properties of closed-loop behaviour of control systems and is closely related to model predictive
control [26], [46]. In this section we will show that for stable processes, the solution of the
SPCP (as derived in chapter 5) can be rewritten in the IMC conguration.

6.4. RELATION TO IMC SCHEME AND YOULA PARAMETRIZATION

117

e
?

v -

Q
e

-?
e

1 
H

y+
?
-e

Figure 6.2: Internal Model Control Scheme

The IMC structure is given in gure 6.2 and consists of the process G(q), the model G(q),
1 and the two-degree-of-freedom controller Q.
the (stable) inverse of the noise model H
For simplicity reasons we will consider F (q) = 0 (known disturbances are not taken into
account), but the extension is straight forward. The blocks in the IMC scheme satisfy the
following relations:
Process : y(k) = G(q)v(k) + H(q)e(k)

Processmodel : y(k) = G(q)v(k)


1 (q)(y(k) y(k))
Pre-lter : e(k) = H


w(k)

+ Q2 (q) e(k)
Controller : v(k) = Q(q)
= Q1 (q)w(k)
e(k)
Proposition 32
(6.19), where G,

G(q)
= G(q) and

(6.16)
(6.17)
(6.18)
(6.19)

Consider the IMC scheme, given in gure 6.2, satisfying equations (6.16)H and H 1 are stable. If the process and noise models are perfect,so if

H(q)
= H(q), the closed loop description is given by:

+ Q2 (q)e(k)
v(k) = Q1 (q)w(k)


y(k) = G(q)Q1 (q)w(k)

+ G(q)Q2 (q) + H(q) e(k)

(6.20)
(6.21)

The closed loop is stable if and only if Q is stable.


Proof:
The description of the closed loop system is given by substitution of (6.16)-(6.18) in (6.16)

118

CHAPTER 6. STABILITY

and (6.19):
1 


1
1

Q1 (q)w(k)+Q

(q)H(q)e(k)
v(k) = I Q2 (q)H (q)(G(q) G(q))
2 (q)H
1 

1(q)(G(q) G(q))

Q1 (q)w(k)

y(k) = G(q) I Q2 (q)H



1 (q)H(q)e(k) +H(q)e(k)
+Q2 (q)H
and H = H
this simplies to equations (6.20) and (6.21). It is clear that from
For G = G
equations (6.20) and (6.21) that if Q is stable, the closed loop system is stable. On the
other hand, if the closed loop is stable, we can read from equation (6.20) that Q1 and Q2
are stable.
2 End Proof
The MPC controller in an IMC conguration
The next step is to derive the function Q for our predictive controller. To do so, we need
to nd the relation between the IMC controller and the common two-degree of freedom
controller C(q), given by:
v(k) = C1 (q)w(k)

+ C2 (q)y(k)
From equations (6.17)-(6.19) we derive:

+ Q2 (q) e(k)
v(k) = Q1 (q)w(k)
1(q)(y(k) y(k))

+ Q2 (q) H
= Q1 (q)w(k)
1(q)y(k) Q2 (q) H
1(q)G(q)v(k)

= Q1 (q)w(k)

+ Q2 (q) H


1(q)G(q)

1(q)y(k)
v(k) = Q1 (q)w(k)

+ Q2 (q) H
I + Q2 (q) H
so
1
1

Q1 (q)w(k)

v(k) = I + Q2 H (q)G(q)
1

1 (q)G(q)

1(q)y(k)
Q2 (q) H
+ I + Q2 H


This results in:



1
1(q)G(q)

I + Q2 (q) H
Q1 (q)

1
1(q)G(q)

1(q)
Q2 (q) H
C2 (q) = I + Q2 (q) H

1

= Q2 (q) H(q) + G(q)Q2 (q)

C1 (q) =

(6.22)

6.4. RELATION TO IMC SCHEME AND YOULA PARAMETRIZATION

119

with the inverse relation:



1

C1 (q)
Q1 (q) = I C2 (q)G(q)

1

C2 (q)H(q)
Q2 (q) = I C2 (q)G(q)
H
1 and Q(q) as follows:
In state space we can describe the systems G,

Process model G(q):


x1 (k + 1) = A x1 (k) + B3 v(k)
y(k) = C1 x1 (k)
1 (q):
Pre-lter H
1
1
x2 (k + 1) = (A B1 D11
C1 ) x2 (k) + B1 D11
(y(k) y(k))
1
1
e(k) = D11 C1 x2 (k) + D11 (y(k) y(k))

Controller Q(q):

+ (B1 + B3 De ) e(k)
x3 (k + 1) = (A B3 F ) x3 (k) + B3 Dw w(k)

+ De e(k)
v(k) = F x3 (k) + Dw w(k)

(6.23)
(6.24)

The state equations of x1 and x2 follow from chapter 2. We will now show that the above
choice of Q(q) will indeed lead to the same control action as the controller derived in
chapter 5.
Combining the three state equations in one gives:

1
1
C1
B3 F
x1 (k+1)
C1
B3 De D11
A B3 De D11
x1 (k)
1
1
x2 (k+1) =
x2 (k)
C1
A B1 D11
C1
0
B1 D11
1
1
1
1
B1 D11 C1 B3 De D11 C1 B1 D11 C1 B3 De D11 C1 A B3 F
x3 (k)
x3 (k+1)

1
B3 Dw
B3 De D11
1

B1 D11
0 w(k)
y(k) +

+
1
1
B1 D11 + B3 De D11
B3 Dw

x1 (k)






1
1
1
C1 De D11
C1 F x2 (k) + De D11
y(k) + Dw w(k)

v(k) = De D11
x3 (k)
We apply a state transformation:

x1
x1
I 0 0
x1
x2 = x1 + x2 x3 = I I I x2
0 0 I
x3
x3
x3

120

CHAPTER 6. STABILITY

This results in


1
1
C1
B3 F B3 De D11
C1
A
B3 De D11
x1 (k)
x1 (k+1)
x2 (k)
x2 (k+1) = 0
A
0
1
1
1
1
x3 (k+1)
0 B1 D11
x3 (k)
C1 B3 De D11
C1 AB3 F B1 D11
C1 B3 De D11
C1

1
B3 Dw
B3 De D11

0
0 w(k)
y(k) +

+
1
1
B1 D11 + B3 De D11
B3 Dw


x1 (k)






1
1
C1 F De D11
C1 x2 (k) + De y(k) + Dw w(k)

v(k) = 0 De D11
x3 (k)

It is clear that state x1 is not observable and state x2 is not controlable. Since A has all
eigenvalues inside the unit disc, we can do model reduction by deleting these states and
we obtain the reduced controller:
1
1
x3 (k + 1) = (AB3 F B1 D11
C1 B3 De D11
C1 ) x3 (k)
1
1
+(B1 D11
+B3 De D11
) y(k) + B3 Dw w(k)

1
1
v(k) = (F De D11 C1 ) x3 (k) + De D11 y(k) + Dw w(k)

This is exactly equal to the optimal predictive controller, derived in the equations (5.133)
and (5.134).

Remark 33 : Note that a condition of a stable Q is equivalent to the condition that all
eigenvalues of (A B3 F ) are strictly inside the unit circle. (The condition a stable inverse
1
H 1 is equivalent to the condition that all eigenvalues of (A B1 D11
C1 ) are strictly inside
the unit circle.)
Example 34 : computation of function Q1 (q) and Q2 (q) for a given MPC controller
Consider the system of example 7 on page 52 with the MPC controller computed in example
24 on page 97. Using equations 6.23 and 6.24 we can give the functions Q1 (q) and Q2 (q)
as follows:


A B3 F
B3 Dw
Q1 (q)
F
Dw

Q2 (q)

A B3 F
F

Note that
1 (q) Dw
Q1 (q) = Q

(B1 + B3 De )
De

6.4. RELATION TO IMC SCHEME AND YOULA PARAMETRIZATION


where


1 (q)
Q

A B3 F
F

B3
I

121

1 and Q2 in polynomial form:


Now we can compute Q
1 (q) = (z 1)(z 0.5)(z 0.2)
Q
(z 3 1.3323 z 2 + 0.4477 z)
Q2 (q) =

6.4.2

0.6339(z 2 1.0534 z + 0.3265)(z 0.2)


(z 3 1.3323 z 2 + 0.4477 z)

The Youla parametrization

The IMC scheme provides a nice framework for the case where the process is stable. For
the unstable case we can use the Youla parametrization [48], [73], [65]. We consider a process
G(q) and noise filter H(q), satisfying

  y(k) = G(q) v(k) + H(q) e(k)                                              (6.25)

For simplicity we will consider F(q) = 0 (known disturbances are not taken into
account). The extension for F(q) ≠ 0 is straightforward.
Consider square transfer matrices

  Σ(q) = [ M(q)  N(q) ; X(q)  Y(q) ] ,     Σ̄(q) = Σ^{-1}(q) = [ Ȳ(q)  −N̄(q) ; −X̄(q)  M̄(q) ]

satisfying the following properties:

1. The system Σ(q) is stable.
2. The inverse system Σ^{-1}(q) is stable.
3. M(q), M̄(q), Y(q) and Ȳ(q) are square and invertible systems.
4. G(q) = M^{-1}(q) N(q) = N̄(q) M̄^{-1}(q).
5. H(q) = M^{-1}(q) R(q), where R(q) and R^{-1}(q) are both stable.
The above factorization described in items 1 to 4 is denoted as a doubly coprime factorization of G(q). The last item is necessary to be able to stabilize the process. Now consider a
controller given by the following parametrization, denoted as the Youla parametrization:

  v(k) = ( Ȳ(q) + Q2(q) N(q) )^{-1} ( X̄(q) + Q2(q) M(q) ) y(k)
         + ( Ȳ(q) + Q2(q) N(q) )^{-1} Q1(q) w(k)                            (6.26)

where M(q), N(q), X(q) and Y (q) are found from a doubly coprime factorization of plant
model G(q) and Q1 (q) and Q2 (q) are transfer functions with the appropriate dimensions.
For this controller we can make the following proposition:
Proposition 35 Consider the process (6.25) and the Youla parametrization (6.26). The
closed loop description is then given by:

  v(k) = M̄(q) Q1(q) w(k) + ( X̄(q) + M̄(q) Q2(q) ) R(q) e(k)                (6.27)
  y(k) = N̄(q) Q1(q) w(k) + ( Ȳ(q) + N̄(q) Q2(q) ) R(q) e(k)                (6.28)

The closed loop is stable if and only if both Q1 and Q2 are stable.
The closed loop of plant model G(q), noise model H(q) together with the Youla-based
controller is visualised in figure 6.3.

Controller

e
?
-e
6

?
-

Y 1

Q2
-

Process

Q1

6
-e


v N

-?
e

y -

Figure 6.3: Youla parametrization

The Youla parametrization (6.26) gives all controllers stabilizing G(q) for stable Q1 (q) and
Q2 (q). For a stabilizing MPC controller we can compute the corresponding functions Q1 (q)
and Q2 (q):
The MPC controller in a Youla configuration
The decomposition of G(q) into two stable functions M(q) and N(q) (or coprime factorization) is not unique. We will consider two possible choices for the factorization:
Factorization, based on the inverse noise model:
A possible state-space realization of Σ(q) is the following:

  Σij(q) = CΣ,i (qI − AΣ)^{-1} BΣ,j + DΣ,ij

for

  Σ(q) = [ M(q)  N(q) ; X(q)  Y(q) ] = [ H̃^{-1}(q)   −H̃^{-1}(q) G̃(q) ; X(q)   Y(q) ]

       ~  [ A − B1 D11^{-1} C1  |  B1 D11^{-1}   B3 ]
          [ −D11^{-1} C1        |  D11^{-1}      0  ]
          [ −F                  |  0             I  ]

and we find

  R(q) = H̃(q) M(q) = I

And we choose

  Q1 = Dw ,    Q2 = De

The poles of Σ(q) are equal to the eigenvalues of the matrix A − B1 D11^{-1} C1, so equal to the
poles of the inverse noise model H̃^{-1}(q). For Σ^{-1}(q) we find

  Σ^{-1}(q) = [ Ȳ(q)  −N̄(q) ; −X̄(q)  M̄(q) ]
            ~  [ A − B3 F  |  B1    B3 ]
               [ C1        |  D11   0  ]
               [ −F        |  0     I  ]

and so the poles of Σ^{-1}(q) are equal to the eigenvalues of A − B3 F. So, if the eigenvalues of
(A − B1 D11^{-1} C1) and (A − B3 F) are inside the unit circle, the controller will be stabilizing
for any stable Q1 and Q2. In our case Q1 = Dw and Q2 = De are constant matrices and
thus stable.
Factorization, based on a stable plant model:
If G̃, H̃ and H̃^{-1} are stable, we can also make another choice for Σ:

  Σ(q) = [ M(q)  N(q) ; X(q)  Y(q) ] = [ H̃^{-1}(q)   −H̃^{-1}(q) G̃(q) ; 0   I ]

Σ(q) is stable, and so is Σ^{-1}(q), given by:

  Σ^{-1}(q) = [ H̃(q)  G̃(q) ; 0  I ]

By comparing Figure 6.2 and Figure 6.3 we find that for the above choices the Youla
parametrization boils down to the IMC scheme by choosing Q_YOULA = Q_IMC. We find
that the IMC scheme is a special case of the Youla parametrization.
Example 36 : computation of the functions Σ, Σ^{-1}, Q1(q) and Q2(q)
Given a system with process model G and noise model H given by

  G(q) = 0.2 q^{-1} / (1 − 2 q^{-1}) ,      H(q) = (1 − 0.4 q^{-1}) / (1 − 2 q^{-1})

For this model we obtain the state space model

  [ A   |  B1   B3 ]     [ 2  |  1.6   0.2 ]
  [ C1  |  D11  0  ]  =  [ 1  |  1     0   ]

Now assume that the optimal state feedback for the MPC controller is given by

  F = 6 ,   Dw = 3 ,   De = 2

then

  Σ(q)  ~  [ A − B1 D11^{-1} C1  |  B1 D11^{-1}   B3 ]     [ 0.4  |  1.6   0.2 ]
           [ −D11^{-1} C1        |  D11^{-1}      0  ]  =  [ −1   |  1     0   ]
           [ −F                  |  0             I  ]     [ −6   |  0     1   ]

and for Σ^{-1}(q) we find

  Σ^{-1}(q)  ~  [ A − B3 F  |  B1    B3 ]     [ 0.8  |  1.6   0.2 ]
                [ C1        |  D11   0  ]  =  [ 1    |  1     0   ]
                [ −F        |  0     I  ]     [ −6   |  0     1   ]

We see that the poles of both Σ and Σ^{-1} are stable (0.4 and 0.8 respectively). Finally
Q1 = Dw = 3 and Q2 = De = 2.
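The stability claim in this example can be verified with a few lines of code: the poles of Σ and Σ^{-1} are simply the eigenvalues of A − B1 D11^{-1} C1 and A − B3 F. A minimal sketch with the numbers of the example:

```python
import numpy as np

A, B1, B3, C1, D11 = 2.0, 1.6, 0.2, 1.0, 1.0   # state-space data of the example
F, Dw, De = 6.0, 3.0, 2.0                      # MPC feedback and feedthrough gains

pole_Sigma     = A - B1 / D11 * C1             # pole of Sigma(q)
pole_Sigma_inv = A - B3 * F                    # pole of Sigma^{-1}(q)
print(pole_Sigma, pole_Sigma_inv)              # 0.4 and 0.8, both inside the unit circle

Q1, Q2 = Dw, De                                # constant, hence stable, Youla parameters
```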

6.5  Robustness

In the last decades the focus of control theory has been shifting more and more towards the control of
uncertain systems. A disadvantage of common finite horizon predictive control techniques
(as presented in chapters 2 to 5) is their inability to deal explicitly with uncertainty
in the process model. In real life engineering problems, system parameters are uncertain,
because they are difficult to estimate or because they vary in time. That means that we
do not know the exact values of the parameters, but only a set in which the system moves.
A controller is called robustly stable if a small change in the system parameters does
not destabilize the system. It is said to give robust performance if the performance does
not deteriorate significantly for small changes in the system parameters. The design and
analysis of linear time-invariant (LTI) robust controllers for linear systems has been studied
extensively in the last decade.
In practice a predictive controller usually can be tuned quite easily to give a stable closed
loop and to be robust with respect to model mismatch. However, in order to be able to
guarantee stability, feasibility and/or robustness and to obtain better and easier tuning
rules by increased insight and better algorithms, the development of a general stability
and robustness theory for predictive control has become an important research topic.
One way to obtain robustness in predictive control is by careful tuning (Clarke & Mothadi
[13], Soeterboek [61], Lee & Yu [39]). This method gives quite satisfactory results in the
unconstrained case.


In the constrained case robustness analysis is much more difficult, resulting in more complex
and/or conservative tuning rules (Zafiriou [77], Gencelli & Nikolaou [30]). One approach to
guarantee robust stability is the use of an explicit contraction constraint (Zheng & Morari
[80], De Vries & van den Boom [74]). The two main disadvantages of this approach are the
resulting non-linear optimization problem and the large but unclear influence of the choice
of the contraction constraint on the controlled system.
An approach to guarantee robust performance is to guarantee that the criterion function
is a contraction by optimizing the maximum of the criterion function over all possible
models (Zheng & Morari [80]). The main disadvantages of this method are the need to use
polytopic model uncertainty descriptions, the use of less general criterion functions and,
especially, the difficult min-max optimization. In the mid-nineties Kothare et al. ([35])
derived an LMI-based MPC method (see chapter 7) that circumvents all of these disadvantages; however, it may become quite conservative. This method and its extensions
will be discussed in sections 7.4 and 7.5, respectively.
Robust stability for the IMC scheme
Next we will study what happens if there is a mismatch between the plant model G̃ and
the true plant G in the Internal Model Control (IMC) scheme of Section 6.4.1. We will
elaborate on the additive model error case, so the model error is given by:

  Δ(q) = Wa^{-1}(q) ( G(q) − G̃(q) )

where Wa(q) is a given stable weighting filter with stable inverse Wa^{-1}(q), such that ||Δ||_∞ ≤ 1.
Note that for SISO systems this means that

  | G(e^{jω}) − G̃(e^{jω}) | ≤ | Wa(e^{jω}) |     for all ω ∈ R

and so Wa(e^{jω}) gives an upper bound on the magnitude of the model error.
We make the following assumptions:

1. G(q) and G̃(q) are stable.
2. H(q) and H̃(q) are stable.
3. H^{-1}(q) and H̃^{-1}(q) are stable.
4. The MPC controller for the model G̃(q), H̃(q) is stabilizing.

The first three assumptions guarantee that the nominal control problem fits in the IMC
framework. The fourth assumption is necessary, because it only makes sense to study
robust stability if the MPC controller is stabilizing for the nominal case.
Now consider the IMC scheme given by (6.16)-(6.19) in section 6.4.1. We assume
that the nominal MPC controller is stabilizing, so Q1(q) and Q2(q) are stable. The closed
loop is now given by:


  v(k) = ( I − Q2(q) H̃^{-1}(q) (G(q) − G̃(q)) )^{-1} ( Q1(q) w(k) + Q2(q) H̃^{-1}(q) H(q) e(k) )

  y(k) = G(q) ( I − Q2(q) H̃^{-1}(q) (G(q) − G̃(q)) )^{-1} ( Q1(q) w(k) + Q2(q) H̃^{-1}(q) H(q) e(k) ) + H(q) e(k)
It is clear that the perturbed closed loop is stable if and only if

  ( I − Q2(q) H̃^{-1}(q) (G(q) − G̃(q)) )^{-1}

is stable. A sufficient condition for stability is that for all |q| ≥ 1:

  Q2(q) H̃^{-1}(q) (G(q) − G̃(q)) ≠ I

This leads to the sufficient condition for stability:

  || Q2 H̃^{-1} (G − G̃) ||_∞ < 1

which can be relaxed to

  || Q2 H̃^{-1} Wa ||_∞ · || Wa^{-1} (G − G̃) ||_∞ < 1

We know that || Wa^{-1} (G − G̃) ||_∞ ≤ 1, and so a sufficient condition is

  || Q2 H̃^{-1} Wa ||_∞ < 1

Define the function T = Q2 H̃^{-1} Wa. This function T depends on the integer design parameters N, Nc, Nm, but also on the choice of performance index matrices C2, D21, D22
and D23. A tuning procedure for robust stability can follow the next steps: design the
controller for some initial settings (see also chapter 8) and compute ||T||_∞. If ||T||_∞ < 1, we
already have robust stability and we can use the initial setting as final setting. If ||T||_∞ ≥ 1,
the controller has to be retuned to obtain robust stability:
we can perform a mixed-integer optimization to find a better parameter setting and terminate as soon as ||T||_∞ < 1, or proceed to obtain an even better robustness margin
(||T||_∞ ≪ 1).
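A sketch of how the test ||T||_∞ = ||Q2 H̃^{-1} Wa||_∞ < 1 can be evaluated numerically, by sampling the frequency response on the unit circle. The three SISO transfer functions below are illustrative assumptions; in practice Q2(q) follows from the MPC design and Wa(q) from the assumed model error bound.

```python
import numpy as np

def freqresp(num, den, w):
    """Response of a polynomial transfer function in q, evaluated at q = e^{jw}."""
    q = np.exp(1j * w)
    return np.polyval(num, q) / np.polyval(den, q)

# Hypothetical SISO factors (coefficients of powers of q, highest first):
Q2    = ([0.4, -0.1], [1.0, -0.5])        # Q2(q)
H_inv = ([1.0, -0.9], [1.0, -0.4])        # H~^{-1}(q)
Wa    = ([0.2,  0.0], [1.0, -0.8])        # additive model-error weight Wa(q)

w = np.linspace(0.0, np.pi, 2000)
T = freqresp(*Q2, w) * freqresp(*H_inv, w) * freqresp(*Wa, w)
T_inf = np.max(np.abs(T))                 # grid approximation of the H-infinity norm
print("||T||_inf ~", T_inf, "-> robustly stable" if T_inf < 1 else "-> retune")
```

If the norm exceeds one, the design parameters (N, Nc, Nm and the weighting matrices) are changed and the test is repeated, as described above.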
Tuning of the noise model in the IMC configuration
Until now we assumed that the noise model

  H(q) = C1 (qI − A)^{-1} B1 + D11

was correct, with a stable inverse (the eigenvalues of A − B1 D11^{-1} C1 are all strictly inside the unit
circle). In the case of model uncertainty, however, the state observer based on the above
noise model may not work satisfactorily and may even destabilize the uncertain system. In
many papers the observer matrix B1 is therefore considered as a design variable and is
tuned to obtain good robustness properties. By changing B1 we can change the dynamics
of T(z) and thus improve the robustness of the predictive controller.
We make the following assumption:

  D21 = D21,o B1

which means that matrix De is linear in B1, and so there exists a matrix De,o such that
De = De,o B1. This will make the computations much easier.
We already had the equation

  || Q2 H̃^{-1} Wa ||_∞ < 1

Note that Q2(q) is given by

  Q2(q) = −F (qI − A + B3 F)^{-1} (B1 + B3 De) + De
        = ( −F (qI − A + B3 F)^{-1} (I + B3 De,o) + De,o ) B1
        = Q2,o(q) B1

So a sufficient condition for stability is

  || Q2,o B1 H̃^{-1} Wa ||_∞ < 1

where we now use B1 as a tuning variable. Note that for B1 → 0 we have that
|| Q2,o B1 H̃^{-1} Wa ||_∞ → 0, and so by reducing the magnitude of matrix B1 we can always find a value that will robustly stabilize the closed-loop system. A disadvantage of
this procedure is that the noise model is disturbed and noise rejection is not optimal any
more. To retain some disturbance rejection, we should detune the matrix B1 no more
than necessary to obtain robustness.
Robust stability in the case of an IIO model
In this section we will consider robust stability in the case where we use an IIO model for
the MPC design, with an additive model error between the IO plant model G̃o and the true
plant Go. Note that we cannot use the IMC scheme in this case, because the plant is not
stable. We therefore use the Youla configuration. We make the following assumptions:

1. Gi(q) = Δ^{-1}(q) Go(q) and G̃i(q) = Δ^{-1}(q) G̃o(q).
2. Hi(q) = Δ^{-1}(q) Ho(q) and H̃i(q) = Δ^{-1}(q) H̃o(q).
3. Go(q), G̃o(q), Ho^{-1}(q) and H̃o^{-1}(q) are stable.
4. The MPC controller for the nominal model G̃i(q), H̃i(q) is stabilizing.
5. The true IIO plant is given by Gi(q) = Δ^{-1}(q) ( G̃o(q) + Wa(q) Δu(q) ), where the
   uncertainty bound is given by ||Δu||_∞ ≤ 1, and Wa(q) is a given stable weighting filter
   with stable inverse Wa^{-1}(q).

We introduce a coprime factorization of the plant G̃o(q) = M^{-1}(q) N(q) = N̄(q) M̄^{-1}(q)
and define R(q) = M(q) H̃i(q) and S(q) = Δ^{-1}(q) M(q). Now R(q), S(q) and R^{-1}(q) are all stable. (If we use the coprime factorization based on the inverse noise model
on page 122, we will obtain R(q) = I and S(q) = H̃o^{-1}(q).)
The closed loop configuration is now given in Figure 6.4.

Figure 6.4: Youla parametrization of the uncertain closed loop (block diagram: controller blocks Q1, Q2 and Ȳ^{-1}; plant blocks G̃o, M^{-1} and R; weighted uncertainty channel Wa, Δ; noise input e, reference r, control signal v and output y)

The controller is given by

  ( Ȳ + Q2 N ) v(k) = ( X̄ + Q2 M ) y + Q1 r(k)

and the plant

  M y = N v + R e(k) + S δ

Substitution, and using the fact that X̄ M^{-1} = M̄^{-1} X̄, we obtain

  ( Ȳ + Q2 N ) v(k) = ( X̄ + Q2 M ) y + Q1 r(k)
                     = ( X̄ M^{-1} + Q2 ) ( N v + R e(k) + S δ ) + Q1 r(k)
                     = ( M̄^{-1} X̄ + Q2 ) ( N v + R e(k) + S δ ) + Q1 r(k)

  ( Ȳ + Q2 N − M̄^{-1} X̄ N − Q2 N ) v = ( M̄^{-1} X̄ + Q2 ) ( R e(k) + S δ ) + Q1 r(k)

  ( M̄ Ȳ − X̄ N ) v = ( X̄ + M̄ Q2 ) ( R e(k) + S δ ) + M̄ Q1 r(k)

By using the fact that M̄ Ȳ − X̄ N = I we obtain

  v = ( X̄ + M̄ Q2 ) ( R e(k) + S δ ) + M̄ Q1 r(k)

  δ = Wa ( X̄ + M̄ Q2 ) ( R e(k) + S δ ) + Wa M̄ Q1 r(k)

A necessary and sufficient condition for robust stability is now given by

  || Wa ( X̄ + M̄ Q2 ) S ||_∞ ≤ 1

Define the function T = Wa ( X̄ + M̄ Q2 ) S. Similar to the IMC robust analysis case, this
function T depends on the integer design parameters N, Nc, Nm, but also on the choice of
performance index matrices C2, D21, D22 and D23. We can use the same tuning procedure
as for the IMC case to achieve robust stability.


Chapter 7
MPC using a feedback law, based on
linear matrix inequalities
In the chapters 2 to 6 the basic framework of model predictive control was elaborated.
In this chapter a different approach to predictive control is presented, using linear matrix
inequalities (LMIs), based on the work of Kothare et al. [35]. The control strategy is
focused on computing an optimal feedback law in the receding horizon framework. A
controller can be derived by solving convex optimization problems using fast and reliable
techniques. LMI problems can be solved in polynomial time, which means that they have
low computational complexity.
In fact, the field of application of Linear Matrix Inequalities (LMIs) in systems and control
engineering is enormous, and more and more engineering problems are solved using LMIs.
Some examples of application are stability theory, model and controller reduction, robust
control, system identification and (last but not least) predictive control.
The main reasons for using LMIs in predictive control are the following:
Stability is easily guaranteed.
Feasibility results are easy to obtain.
Extensions to systems with model uncertainty can be made.
Convex properties are preserved.
Of course there are also some drawbacks:
Although solving LMIs can be done using convex techniques, the optimization problem is more complex than a quadratic programming problem, as discussed in chapter 5.
Feasibility in the noisy case still cannot be guaranteed.

In section 7.1 we will discuss the main features of linear matrix inequalities, in section
7.2 we will show how LMIs can be used to solve the unconstrained MPC problem, and in
section 7.3 we will look at the inequality constrained case. Robustness issues, concerning
stability in the case of model uncertainty, are discussed in section 7.4. Finally, section 7.5
will discuss some extensions of the work of Kothare et al. [35].

7.1  Linear matrix inequalities

Consider the linear matrix expression

  F(θ) = F0 + sum_{i=1}^{m} θi Fi                                           (7.1)

where θ ∈ R^{m×1}, with elements θi, is the variable and the symmetric matrices Fi = Fi^T ∈
R^{n×n}, i = 0, ..., m are given. A strict Linear Matrix Inequality (LMI) has the form

  F(θ) > 0                                                                  (7.2)

The inequality symbol in (7.2) means that F(θ) is positive definite (i.e. x^T F(θ) x > 0 for
all nonzero x ∈ R^{n×1}). A nonstrict LMI has the form

  F(θ) ≥ 0

Properties of LMIs:

Convexity:
The set C = { θ | F(θ) > 0 } is convex in θ, which means that for each pair θ1, θ2 ∈ C
and for all λ ∈ [0, 1] the next property holds:

  λ θ1 + (1 − λ) θ2 ∈ C

Multiple LMIs:
Multiple LMIs can be expressed as a single LMI, since

  F^{(1)}(θ) > 0 , F^{(2)}(θ) > 0 , ... , F^{(p)}(θ) > 0

is equivalent to:

  diag( F^{(1)}(θ) , F^{(2)}(θ) , ... , F^{(p)}(θ) ) > 0


Schur complements:
Consider the following LMI:

  [ Q(θ)     S(θ) ]
  [ S^T(θ)   R(θ) ]  >  0

where Q(θ) = Q^T(θ), R(θ) = R^T(θ) and S(θ) depend affinely on θ. This is equivalent to

  R(θ) > 0 ,    Q(θ) − S(θ) R^{-1}(θ) S^T(θ) > 0

Finally note that any symmetric matrix M ∈ R^{n×n} can be written as

  M = sum_{i=1}^{n(n+1)/2} Mi mi

where mi are the scalar entries of the upper-right triangular part of M, and Mi are symmetric
matrices with entries 0 or 1. For example:

  M = [ m1  m2 ; m2  m3 ] = [ 1 0 ; 0 0 ] m1 + [ 0 1 ; 1 0 ] m2 + [ 0 0 ; 0 1 ] m3

This means that a matrix inequality that is linear in a symmetric matrix M can easily
be transformed into a linear matrix inequality with a parameter vector

  θ = [ m1  m2  ...  m_{n(n+1)/2} ]^T

Before we look at MPC using LMIs, we consider the more simple cases of Lyapunov stability
of an autonomous system and the properties of a stabilizing state-feedback.

Lyapunov theory:
Probably the most elementary LMI is related to the Lyapunov theory. The difference state
equation

  x(k+1) = A x(k)

is stable (i.e. all state trajectories x(k) go to zero) if and only if there exists a positive-definite matrix P such that

  A^T P A − P < 0

For this system the Lyapunov function V(k) = x^T(k) P x(k) is positive definite for P > 0,
and the increment ΔV(k) = V(k+1) − V(k) is negative definite because

  V(k+1) − V(k) = x^T(k+1) P x(k+1) − x^T(k) P x(k)
                = (A x(k))^T P (A x(k)) − x^T(k) P x(k)
                = x^T(k) (A^T P A − P) x(k)
                < 0

for (A^T P A − P) < 0.
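For a given A the Lyapunov condition can also be checked constructively: solving the discrete Lyapunov equation A^T P A − P = −Q with Q > 0 yields a positive definite P exactly when all eigenvalues of A lie inside the unit circle. A minimal sketch (the matrix A below is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.3],
              [0.0, 0.5]])                     # example: a stable matrix
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0, so pass a = A^T
P = solve_discrete_lyapunov(A.T, Q)            # then A^T P A - P = -Q < 0
print(np.all(np.linalg.eigvalsh(P) > 0))       # P > 0  -> Lyapunov stability
print(np.allclose(A.T @ P @ A - P, -Q))        # residual check
```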

Stabilizing state-feedback:
Consider the difference state equation:

  x(k+1) = A x(k) + B v(k)

where x(k) is the state and v(k) is the input. When we apply a state feedback v(k) = F x(k), we obtain:

  x(k+1) = A x(k) + B F x(k) = (A + BF) x(k)

and so the corresponding Lyapunov equation becomes:

  (A^T + F^T B^T) P (A + BF) − P < 0 ,    P > 0

Or, by choosing P = S^{-1}, this is equivalent to

  (A^T + F^T B^T) S^{-1} (A + BF) − S^{-1} < 0 ,    S > 0

Pre- and post-multiplying with S results in

  S (A^T + F^T B^T) S^{-1} (A + BF) S − S < 0 ,    S > 0

By choosing F = Y S^{-1} we obtain the condition:

  (S A^T + Y^T B^T) S^{-1} (A S + B Y) − S < 0 ,    S > 0

which can be rewritten (using the Schur complement transformation) as

  [ S          S A^T + Y^T B^T ]
  [ A S + B Y  S               ]  >  0 ,    S > 0                           (7.3)

which are LMIs in S and Y. So any S and Y satisfying the above LMI result in a stabilizing
state-feedback F = Y S^{-1}.
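The LMI (7.3) can be solved directly with a semidefinite programming package. The sketch below uses CVXPY; the system matrices are an illustrative (unstable) example, and a small ε is used to enforce strictness of the inequalities. It is a sketch under these assumptions, not the only way to set the problem up.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.2, 1.0],
              [0.0, 0.9]])          # illustrative unstable system
B = np.array([[0.0],
              [1.0]])
n, m = B.shape[0], B.shape[1]
eps = 1e-6

S = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

# LMI (7.3): [[S, (A S + B Y)^T], [A S + B Y, S]] > 0,  S > 0
M = cp.bmat([[S, (A @ S + B @ Y).T],
             [A @ S + B @ Y, S]])
prob = cp.Problem(cp.Minimize(0),
                  [M >> eps * np.eye(2 * n), S >> eps * np.eye(n)])
prob.solve()

F = Y.value @ np.linalg.inv(S.value)           # stabilizing feedback F = Y S^{-1}
print(np.abs(np.linalg.eigvals(A + B @ F)))    # all moduli < 1
```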

7.2  Unconstrained MPC using linear matrix inequalities

Consider the system

  x(k+1) = A x(k) + B2 w(k) + B3 v(k)                                       (7.4)
  y(k) = C1 x(k) + D12 w(k)                                                 (7.5)
  z(k) = C2 x(k) + D22 w(k) + D23 v(k)                                      (7.6)

For simplicity, in this chapter we consider e(k) = 0. Like in chapter 5, we consider a
constant external signal w, a zero steady-state performance signal zss and a weighting
matrix Γ(j) which is equal to identity, so

  w(k+j) = wss for all j ≥ 0
  zss = 0
  Γ(j) = I for all j ≥ 0

Let the system have a steady-state, given by (vss, xss, wss, zss) = (Dssv wss, Dssx wss, wss, 0).
We define the shifted versions of the input and state, which are zero in steady-state:

  ṽ(k) = v(k) − vss
  x̃(k) = x(k) − xss

Now we consider the problem of finding at time instant k an optimal state feedback
ṽ(k+j|k) = F(k) x̃(k+j|k) to minimize the quadratic objective

  min_{F(k)} J(k)

where J(k) is given by:

  J(k) = sum_{j=0}^{∞} ẑ^T(k+j|k) ẑ(k+j|k)

in which ẑ(k+j|k) is the prediction of z(k+j), based on knowledge up to time k.
  J(k) = sum_{j=0}^{∞} ẑ^T(k+j|k) ẑ(k+j|k)
       = sum_{j=0}^{∞} ( C2 x̂(k+j|k) + D22 w(k+j) + D23 v̂(k+j|k) )^T ( C2 x̂(k+j|k) + D22 w(k+j) + D23 v̂(k+j|k) )
       = sum_{j=0}^{∞} ( C2 x̃(k+j|k) + D23 ṽ(k+j|k) )^T ( C2 x̃(k+j|k) + D23 ṽ(k+j|k) )
       = sum_{j=0}^{∞} x̃^T(k+j|k) ( C2^T + F^T(k) D23^T ) ( C2 + D23 F(k) ) x̃(k+j|k)

This problem can be translated to a minimization problem with LMI-constraints. Consider
the quadratic function V(k) = x̃^T(k) P x̃(k), P > 0, that satisfies

  ΔV(k) = V(k+1) − V(k) < −x̃^T(k) ( C2^T + F^T(k) D23^T ) ( C2 + D23 F(k) ) x̃(k)      (7.7)

for every trajectory. Note that because ΔV(k) < 0 and V(k) > 0, the state converges, so
x̃(∞) = 0 and therefore V(∞) = 0. Now there holds:

  V(k) = V(k) − V(∞)
       = −sum_{j=0}^{∞} ΔV(k+j)
       > sum_{j=0}^{∞} x̃^T(k+j|k) ( C2^T + F^T(k) D23^T ) ( C2 + D23 F(k) ) x̃(k+j|k)
       = J(k)

so x̃^T(k) P x̃(k) is a Lyapunov function and at the same time an upper bound on J(k).
Now derive

  ΔV(k) = V(k+1) − V(k)
        = x̃^T(k+1|k) P x̃(k+1|k) − x̃^T(k) P x̃(k)
        = x̃^T(k) (A + B3 F)^T P (A + B3 F) x̃(k) − x̃^T(k) P x̃(k)
        = x̃^T(k) ( (A + B3 F)^T P (A + B3 F) − P ) x̃(k)

And so condition (7.7) is the same as

  (A + B3 F)^T P (A + B3 F) − P + (C2 + D23 F)^T (C2 + D23 F) < 0

By setting P = γ S^{-1} and F = Y S^{-1} we obtain the following condition:

  (A^T + S^{-1} Y^T B3^T) γ S^{-1} (A + B3 Y S^{-1}) − γ S^{-1} + (C2^T + S^{-1} Y^T D23^T)(C2 + D23 Y S^{-1}) < 0

Pre- and post-multiplying with S and division by γ results in

  (S A^T + Y^T B3^T) S^{-1} (A S + B3 Y) − S + γ^{-1} (S C2^T + Y^T D23^T)(C2 S + D23 Y) < 0       (7.8)

or

  S − [ S A^T + Y^T B3^T    (S C2^T + Y^T D23^T) γ^{-1/2} ] [ S   0 ; 0   I ]^{-1} [ A S + B3 Y ; γ^{-1/2} (C2 S + D23 Y) ] > 0

which can be rewritten (using the Schur complement transformation) as

  [ S                          S A^T + Y^T B3^T    (S C2^T + Y^T D23^T) γ^{-1/2} ]
  [ A S + B3 Y                 S                   0                             ]  >  0 ,    S > 0       (7.9)
  [ γ^{-1/2} (C2 S + D23 Y)    0                   I                             ]

So any γ, S and Y satisfying the above LMI gives an upper bound

  V(k) = γ x̃^T(k) S^{-1} x̃(k) > J(k)


Finally introduce an additional constraint that

  x̃^T(k) S^{-1} x̃(k) ≤ 1                                                   (7.10)

which, using the Schur complement property, is equivalent to

  [ 1        x̃^T(k) ]
  [ x̃(k)    S       ]  ≥  0                                                (7.11)

Now

  γ > J                                                                     (7.12)

and so a minimization of γ subject to LMI constraints (7.9) and (7.11) is equivalent to a
minimization of the upper bound of the performance index.
Note that the LMI constraint (7.9) looks more complicated than expression (7.8). However,
(7.9) is linear in S, Y and γ. A convex optimization algorithm, denoted as the interior
point algorithm, can solve convex optimization problems with many LMI constraints (up
to a 1000 constraints) in a fast and robust way.
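A minimal CVXPY sketch of the resulting semidefinite program: minimize γ over (γ, S, Y) subject to (7.9) and (7.11), and recover the feedback F = Y S^{-1}. The system matrices and the current shifted state are illustrative assumptions, and (7.9) is written here in the equivalent non-scaled Schur form with γI in the lower-right block.

```python
import numpy as np
import cvxpy as cp

# Illustrative data: x(k+1) = A x + B3 v,  z = C2 x + D23 v,  shifted state xt
A   = np.array([[1.1, 0.4], [0.0, 0.8]])
B3  = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D23 = np.array([[0.1]])
xt  = np.array([[1.0], [0.5]])
n, m, p = 2, 1, 1
eps = 1e-6

g = cp.Variable(nonneg=True)                    # gamma
S = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

# Performance LMI, equivalent (non-scaled) form of (7.9):
# [[S, S A^T + Y^T B3^T, S C2^T + Y^T D23^T],
#  [A S + B3 Y, S, 0],
#  [C2 S + D23 Y, 0, g I]] > 0
lmi_perf = cp.bmat([
    [S, S @ A.T + Y.T @ B3.T, S @ C2.T + Y.T @ D23.T],
    [A @ S + B3 @ Y, S, np.zeros((n, p))],
    [C2 @ S + D23 @ Y, np.zeros((p, n)), g * np.eye(p)]])

# Ellipsoid LMI (7.11): [[1, xt^T], [xt, S]] >= 0
lmi_ell = cp.bmat([[np.ones((1, 1)), xt.T], [xt, S]])

prob = cp.Problem(cp.Minimize(g),
                  [lmi_perf >> eps * np.eye(2 * n + p), lmi_ell >> 0,
                   S >> eps * np.eye(n)])
prob.solve()

F = Y.value @ np.linalg.inv(S.value)            # v = F (x - xss) + vss
print("gamma (upper bound on J):", g.value)
```

In a receding-horizon implementation this problem is solved at every sample with the current shifted state substituted for xt.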

7.3  Constrained MPC using linear matrix inequalities

In the previous section an unconstrained predictive controller in the LMI-setting was derived. In this section we incorporate constraints. The most important tool we use is the
notion of invariant ellipsoid:

Invariant ellipsoid:
Lemma 37 Consider the system (7.4) and (7.6). At sampling time k, consider

  x̃^T(k|k) S^{-1} x̃(k|k) ≤ 1

and let S also satisfy (7.9); then there will hold for all j > 0:

  x̃^T(k+j|k) S^{-1} x̃(k+j|k) ≤ 1

Proof:
From the previous section we know that when (7.9) is satisfied, there holds:

  V(k+j) < V(k)   for j > 0

Using V(k) = x̃^T(k) P x̃(k) = γ x̃^T(k) S^{-1} x̃(k) we derive:

  x̃^T(k+j|k) S^{-1} x̃(k+j|k) = γ^{-1} V(k+j) < γ^{-1} V(k) = x̃^T(k|k) S^{-1} x̃(k|k) ≤ 1

□ End Proof

So

  E = { ζ | ζ^T S^{-1} ζ ≤ 1 } = { ζ | ζ^T P ζ ≤ γ }

is an invariant ellipsoid for the predicted states of the system, and so

  x̃(k|k) ∈ E    implies    x̃(k+j|k) ∈ E    for all j > 0

The condition x̃^T(k|k) S^{-1} x̃(k|k) ≤ 1 of the above lemma is equivalent to the LMI

  [ 1          x̃^T(k|k) ]
  [ x̃(k|k)    S         ]  ≥  0                                            (7.13)
Signal constraints
Lemma 38 Consider the scalar signal

  ψ(k) = C4 x(k) + D42 w(k) + D43 v(k)                                      (7.14)

and the signal constraint

  | ψ(k+j|k) | ≤ ψmax    for all j ≥ 0                                      (7.15)

where ψmax > 0. Further let condition (7.11) hold and assume that |ψss| < ψmax, where

  ψss = C4 xss + D42 wss + D43 vss .

Then condition (7.15) will be satisfied if

  [ S                 (C4 S + D43 Y)^T    ]
  [ C4 S + D43 Y      (ψmax − |ψss|)^2    ]  ≥  0                           (7.16)

Proof:
Note that we can write

  ψ(k) = ψ̃(k) + ψss

where

  ψ̃(k) = C4 x̃(k) + D43 ṽ(k) .

Equation (7.16) together with lemma 37 gives that

  | ψ̃(k+j|k) |^2 ≤ (ψmax − |ψss|)^2

or

  | ψ̃(k+j|k) | ≤ ψmax − |ψss|

and so

  | ψ(k) | ≤ | ψ(k) − ψss | + |ψss| ≤ ψmax − |ψss| + |ψss| = ψmax

□ End Proof


Remark 39 : Note that an input constraint on the i-th input vi,

  | vi(k+j) | ≤ vmax,i    for j ≥ 0

is equivalent to condition (7.15) for ψmax,i = vmax,i, C4 = 0, D42 = 0 and D43 = ei, where
ei is a selection vector such that vi(k) = ei v(k).
An output constraint on the i-th output yi,

  | yi(k+j+1) − yss,i | ≤ ymax,i    for j ≥ 1

where yss = C1 xss + D12 wss = (C1 Dssx + D12) wss, can be handled in the same way.
By deriving

  yi(k+j+1) = C1 x(k+j+1) + D12 w(k+j+1)
            = C1 x̃(k+j+1) + yss
            = C1 A x̃(k+j) + C1 B3 ṽ(k+j) + yss

we find that the output condition is equivalent to condition (7.15) for ψmax,i = ymax,i,
C4 = C1 A, D42 = 0 and D43 = Cy,i B3.

Summarizing MPC using linear matrix inequalities

Consider the system

  x(k+1) = A x(k) + B3 v(k)                                                 (7.17)
  y(k) = C1 x(k) + D12 w(k)                                                 (7.18)
  z(k) = C2 x(k) + D22 w(k) + D23 v(k)                                      (7.19)
  ψ(k) = C4 x(k) + D42 w(k) + D43 v(k)                                      (7.20)

with a steady-state (vss, xss, wss, zss) = (Dssv wss, Dssx wss, wss, 0). The LMI-MPC control
problem is to find, at each time-instant k, a state-feedback law

  v(k) = F ( x(k) − xss ) + vss

such that the criterion

  min_F J(k) ,    where    J(k) = sum_{j=0}^{∞} z^T(k+j) z(k+j)             (7.21)

is optimized subject to the constraint

  | ψi(k+j) | ≤ ψi,max ,    for i = 1, ..., nψ ,    j ≥ 0                   (7.22)

where ψi,max > 0 for i = 1, ..., nψ. The optimal solution is found by solving the following
LMI problem:

Theorem 40 Given a system with state space description (7.17)-(7.19) and a control
problem of minimizing (7.21) subject to constraint (7.22), and an external signal

  w(k+j) = wss    for j > 0

The state-feedback F = Y S^{-1} minimizing the worst-case J(k) can be found by solving:

  min_{γ,S,Y} γ                                                             (7.23)

subject to
(LMI for Lyapunov stability:)

  S > 0                                                                     (7.24)

(LMI for ellipsoid boundary:)

  [ 1          x̃^T(k|k) ]
  [ x̃(k|k)    S         ]  ≥  0                                            (7.25)

(LMI for worst case MPC performance index:)

  [ S                          S A^T + Y^T B3^T    (S C2^T + Y^T D23^T) γ^{-1/2} ]
  [ A S + B3 Y                 S                   0                             ]  >  0       (7.26)
  [ γ^{-1/2} (C2 S + D23 Y)    0                   I                             ]

(LMIs for constraints on ψ:)

  [ S                      (C4 S + D43 Y)^T Ei^T  ]
  [ Ei (C4 S + D43 Y)      (ψi,max − |ψi,ss|)^2   ]  ≥  0 ,    i = 1, ..., nψ                  (7.27)

where Ei = [ 0 ... 0 1 0 ... 0 ] with the 1 on the i-th position.

LMI (7.24) guarantees Lyapunov stability, LMI (7.25) describes an invariant ellipsoid for
the state, (7.26) gives the LMI for the MPC-criterion and (7.27) corresponds to the state
constraint on ψi.

7.4  Robustness in LMI-based MPC

In this section we consider the problem of robust performance in LMI based MPC. The
signals w(k) and e(k) are assumed to be zero. How these signals can be incorporated in
the design is described in Kothare et al. [35].
We consider both unstructured and structured model uncertainty. Given the fact that w
and e are zero, we get the following model uncertainty descriptions for the LMI-based case:

Unstructured uncertainty
The uncertainty is given in the following description:

  x(k+1) = A x(k) + B3 v(k) + B4 φ(k)                                       (7.28)
  z(k) = C2 x(k) + D23 v(k) + D24 φ(k)                                      (7.29)
  ψ(k) = C4 x(k) + D43 v(k) + D44 φ(k)                                      (7.30)
  ξ(k) = C5 x(k) + D53 v(k)                                                 (7.31)
  φ(k) = Δ(q) ξ(k)                                                          (7.32)

where Δ has a block diagonal structure:

  Δ = diag( Δ1 , ... , Δr )                                                 (7.33)

with Δℓ(q) : R^{nℓ} → R^{nℓ}, ℓ = 1, ..., r, bounded in ∞-norm:

  || Δℓ ||_∞ ≤ 1 ,    ℓ = 1, ..., r                                         (7.34)

A nominal model Gnom ∈ G is given for Δ = 0:

  x(k+1) = A x(k) + B3 v(k)

Structured uncertainty
The uncertainty is given in the following description:

  x(k+1) = A x(k) + B3 v(k)                                                 (7.35)
  z(k) = C2 x(k) + D23 v(k)                                                 (7.36)
  ψ(k) = C4 x(k) + D43 v(k)                                                 (7.37)

where

  P = [ A  B3 ; C2  D23 ; C4  D43 ]
      ∈ Co{ [ Ã1  B̃3,1 ; C̃2,1  D̃23,1 ; C̃4,1  D̃43,1 ] , ... , [ ÃL  B̃3,L ; C̃2,L  D̃23,L ; C̃4,L  D̃43,L ] }      (7.38)

A nominal model Gnom ∈ G is given by

  x(k+1) = Anom x(k) + B3,nom v(k)

Robust performance for LMI-based MPC

The robust performance MPC control problem is to find, at each time-instant k, a state-feedback law

  v(k) = F x(k)

such that the criterion

  min_F max_{G∈G} J(k) ,    where    J(k) = sum_{j=0}^{∞} z^T(k+j) z(k+j)   (7.39)

is optimized subject to the constraints

  max_{G∈G} | ψi(k+j) | ≤ ψi,max ,    for i = 1, ..., nψ ,    j ≥ 0         (7.40)

In equation (7.39) the input signal v(k) is chosen such that the performance index J is minimized for the worst-case choice of G in G, subject to constraints (7.40) for the same
worst-case G. As was introduced in the previous section, the uncertainty in G can be
either structured or unstructured, resulting in theorems 41 and 42.
Theorem 41 Robust performance for unstructured model uncertainty
Given a system with uncertainty description (7.28)-(7.34) and a control problem of minimizing (7.39) subject to constraints (7.40). The state-feedback F = Y S^{-1} minimizing the
worst-case J(k) can be found by solving:

  min_{γ,S,Y,V,Λ} γ                                                         (7.41)

subject to
(LMI for Lyapunov stability:)

  S > 0                                                                     (7.42)

(LMI for ellipsoid boundary:)

  [ 1        x^T(k|k) ]
  [ x(k|k)   S        ]  ≥  0                                               (7.43)

(LMI for worst case MPC performance index:)

  [ S                 (A S + B3 Y)^T        (C2 S + D23 Y)^T        (C5 S + D53 Y)^T ]
  [ A S + B3 Y        S − B4 Λ B4^T         −B4 Λ D24^T             0                ]
  [ C2 S + D23 Y      −D24 Λ B4^T           γ I − D24 Λ D24^T       0                ]  >  0      (7.44), (7.45)
  [ C5 S + D53 Y      0                     0                       Λ                ]

where Λ is partitioned as:

  Λ = diag( λ1 I_{n1} , λ2 I_{n2} , ... , λr I_{nr} ) > 0                   (7.46)

(LMIs for constraints on ψ:)

  [ ψi,max^2 S             (C5 S + D53 Y)^T    (C4 S + D43 Y)^T Ei^T             ]
  [ C5 S + D53 Y           V^{-1}              0                                 ]  ≥  0 ,   i = 1, ..., nψ      (7.47)
  [ Ei (C4 S + D43 Y)      0                   I − Ei D44 V^{-1} D44^T Ei^T      ]

  V^{-1} > 0                                                                (7.48)

where V is partitioned as:

  V = diag( v1 I_{n1} , v2 I_{n2} , ... , vr I_{nr} ) > 0                   (7.49)

and Ei = [ 0 ... 0 1 0 ... 0 ] with the 1 on the i-th position.

LMI (7.42) guarantees Lyapunov stability, LMI (7.43) describes an invariant ellipsoid for
the state, (7.44) and (7.45) are the LMIs for the worst-case MPC-criterion, and (7.47), (7.48)
and (7.49) correspond to the state constraint on ψ.

Proof of theorem 41:

The LMIs (7.43) and (7.45) for optimizing (7.41) are given in Kothare et al. [35]. In the
notation of the same paper, (7.47)-(7.48) can be derived as follows.
For any admissible Δ(k) and i = 1, ..., nψ we have

  ψi(k) = Ei^T C4 x(k) + Ei^T D43 v(k) + Ei^T D44 φ(k)                      (7.50)
        = Ei^T (C4 + D43 F) x(k) + Ei^T D44 φ(k)                            (7.51)

and so for i = 1, ..., nψ and j ≥ 0:

  max_{j≥0} | ψi(k+j|k) | = max_{j≥0} | Ei^T (C4 + D43 F) x(k+j|k) + Ei^T D44 φ(k+j|k) |
                          ≤ max_{z^T z ≤ 1} | Ei^T (C4 + D43 F) S^{1/2} z + Ei^T D44 φ(k+j|k) |

Further the derivation of (7.47)-(7.48) is analogous to the derivation of the LMI for an
output constraint in Kothare et al. [35].
In the notation of the same paper, (7.44) can be derived as follows:

  V(x(k+j+1|k)) = [ x(k+j|k) ; φ(k+j|k) ]^T [ (A+B3F)^T P (A+B3F)    (A+B3F)^T P B4 ;
                                              B4^T P (A+B3F)         B4^T P B4      ] [ x(k+j|k) ; φ(k+j|k) ]

  V(x(k+j|k)) = [ x(k+j|k) ; φ(k+j|k) ]^T [ P  0 ; 0  0 ] [ x(k+j|k) ; φ(k+j|k) ]

  J(k+j) = z^T(k+j) z(k+j)
         = ( (C2+D23F) x(k+j) + D24 φ(k+j) )^T ( (C2+D23F) x(k+j) + D24 φ(k+j) )
         = [ x(k+j|k) ; φ(k+j|k) ]^T [ (C2+D23F)^T (C2+D23F)    (C2+D23F)^T D24 ;
                                       D24^T (C2+D23F)          D24^T D24       ] [ x(k+j|k) ; φ(k+j|k) ]

And so the condition

  V(x(k+j+1|k)) − V(x(k+j|k)) ≤ −J(k+j)

results in the inequality

  [ x(k+j|k) ]^T [ (A+B3F)^T P (A+B3F) − P + (C2+D23F)^T (C2+D23F)    (A+B3F)^T P B4 + (C2+D23F)^T D24 ] [ x(k+j|k) ]
  [ φ(k+j|k) ]   [ B4^T P (A+B3F) + D24^T (C2+D23F)                    B4^T P B4 + D24^T D24            ] [ φ(k+j|k) ]  ≤  0

This condition, together with the norm-bound condition

  φℓ^T(k+j|k) φℓ(k+j|k) ≤ x^T(k+j|k) (C5,ℓ + D53,ℓ F)^T (C5,ℓ + D53,ℓ F) x(k+j|k) ,    ℓ = 1, 2, ..., r

is satisfied if there exists a matrix Z ≥ 0 such that

  [ (A+B3F)^T P (A+B3F) − P + (C2+D23F)^T (C2+D23F) + (C5+D53F)^T (C5+D53F)    (A+B3F)^T P B4 + (C2+D23F)^T D24 ]
  [ B4^T P (A+B3F) + D24^T (C2+D23F)                                            B4^T P B4 + D24^T D24 − Z        ]  ≤  0

Substituting P = γ S^{-1} and F = Y S^{-1} we find

  [ (AS+B3Y)^T S^{-1} (AS+B3Y) − S + γ^{-1}(C2S+D23Y)^T (C2S+D23Y) + γ^{-1}(C5S+D53Y)^T (C5S+D53Y)    (AS+B3Y)^T S^{-1} B4 + γ^{-1}(C2S+D23Y)^T D24 ]
  [ B4^T S^{-1} (AS+B3Y) + γ^{-1} D24^T (C2S+D23Y)                                                     B4^T S^{-1} B4 + γ^{-1} D24^T D24 − Z          ]  ≤  0

By substitution of Z = Λ^{-1} and by applying some matrix operations we obtain equation
(7.44).
□ End Proof


Theorem 42 Robust performance for structured model uncertainty

Given a system with uncertainty description (7.35)-(7.38) and a control problem of minimizing (7.39) subject to constraint (7.40). The state-feedback F = Y S^{-1} minimizing the
worst-case J(k) can be found by solving:

  min_{γ,S,Y} γ                                                             (7.52)

subject to
(LMI for Lyapunov stability:)

  S > 0                                                                     (7.53)

(LMI for ellipsoid boundary:)

  [ 1        x^T(k|k) ]
  [ x(k|k)   S        ]  ≥  0                                               (7.54)

(LMIs for worst case MPC performance index:)

  [ S                                S Ãℓ^T + Y^T B̃3,ℓ^T    (S C̃2,ℓ^T + Y^T D̃23,ℓ^T) γ^{-1/2} ]
  [ Ãℓ S + B̃3,ℓ Y                   S                        0                                ]  ≥  0 ,   ℓ = 1, ..., L      (7.55)
  [ γ^{-1/2} (C̃2,ℓ S + D̃23,ℓ Y)     0                        I                                ]

(LMIs for constraints on ψ:)

  [ S                            (C̃4,ℓ S + D̃43,ℓ Y)^T Ei^T ]
  [ Ei (C̃4,ℓ S + D̃43,ℓ Y)       ψi,max^2                    ]  ≥  0 ,   i = 1, ..., nψ ,   ℓ = 1, ..., L      (7.56)

where Ei = [ 0 ... 0 1 0 ... 0 ] with the 1 on the i-th position.

LMI (7.53) guarantees Lyapunov stability, LMI (7.54) describes an invariant ellipsoid for
the state, (7.55) gives the LMIs for the worst-case MPC-criterion and (7.56) corresponds to
the state constraint on ψ.

Proof of theorem 42:

The LMI (7.54) for solving (7.52) is given in Kothare et al. [35]. In the notation of the same
paper (7.56) can be derived as follows.
We have, for i = 1, ..., nψ:

  ψi(k) = Ei C4 x(k) + Ei D43 v(k)                                          (7.57)
        = Ei (C4 + D43 F) x(k)                                              (7.58)

and so for all possible matrices [ C4  D43 ] and i = 1, ..., nψ there holds

  max_{j≥0} | ψi(k+j|k) | = max_{j≥0} | Ei (C4 + D43 F) x(k+j|k) |
                          ≤ max_{z^T z ≤ 1} | Ei (C4 + D43 F) S^{1/2} z |
                          = || Ei (C4 + D43 F) S^{1/2} ||_2

Further the derivation of (7.56) is analogous to the derivation of the LMI for an output
constraint in Kothare et al. [35].
In the notation of the same paper (7.55) can be derived as follows. The equation

  (A + B3 F)^T P (A + B3 F) − P + Q + F^T R F ≤ 0

is replaced by:

  (A + B3 F)^T P (A + B3 F) − P + (C2 + D23 F)^T (C2 + D23 F) ≤ 0

For P = γ S^{-1} and F = Y S^{-1} we find

  [ S                          S A^T + Y^T B3^T    (S C2^T + Y^T D23^T) γ^{-1/2} ]
  [ A S + B3 Y                 S                   0                             ]  ≥  0       (7.59)
  [ γ^{-1/2} (C2 S + D23 Y)    0                   I                             ]

which is affine in [ A  B3  C2  D23 ]. Hence the condition becomes equation (7.55).
□ End Proof

7.5  Extensions and further research

Although the LMI based MPC method described in the previous sections is useful for
robustly controlling systems with signal constraints, there are still a few disadvantages.
Computational complexity: Despite the fact that solving an LMI problem is convex,
it is computationally more complex than a quadratic program. For larger systems
the time needed to solve the problem may be too long.
Conservatism: The use of invariant sets (see Section 7.3) may lead to very conservative approximations of the signal constraints. The assumption that constraints are
symmetric around the steady state value is very restrictive.
Approximation of the performance index: Equation (7.12) shows that in LMI-MPC
we do not minimize the performance index J itself, but only an upper bound on J.
This means that we may be far away from the real optimal control input.
Since Kothare et al. [35] introduced the LMI-based MPC approach, many researchers have
been looking for ways to adapt and improve the method such that at least one of the above
disadvantages is relaxed.


Using LMI with Hybrid Approach


Kouvaritakis et al. ([36]) introduce a fixed feedback control law which is deployed by
introducing extra degrees of freedom, namely an additional term c(k), in the following
form

  v(k) = F x(k) + c(k) ,    c(k+N+i) = 0 ,  i ≥ 0                           (7.60)

In this way only a simple optimization is needed online to calculate the N extra control
moves. The closed-loop system is then derived as follows:

  x(k+1) = Ac(k) x(k) + B(k) c(k)                                           (7.61)

where Ac(k) = A(k) + B(k) F. Equation (7.61) can be expressed as the following autonomous state-space
model:

  ζ(k+1) = Ψ(k) ζ(k)                                                        (7.62)

  Ψ(k) = [ Ac(k)    [ B(k)  0  ...  0 ] ;
           0         M                  ]

  ζ(k) = [ x(k) ; f(k) ] ,    f(k) = [ c(k) ; c(k+1) ; ... ; c(k+N−1) ]

  M = [ 0  I  0  ...  0 ;
        0  0  I  ...  0 ;
        ...              ;
        0  0  ...  0  I ;
        0  0  ...  0  0 ]                                                   (7.63)

If there exists an invariant set Ex = { x | x^T Q^{-1} x ≤ 1 } for the control law v(k+i|k) =
F x(k+i|k), i ≥ 0, there must exist at least one invariant set Eζ for system (7.62), where

  Eζ = { ζ | ζ^T Qζ^{-1} ζ ≤ 1 }                                            (7.64)

Matrix Qζ^{-1} can be partitioned as

  Qζ^{-1} = [ Q̃11   Q̃21^T ; Q̃21   Q̃22 ]    with  Q̃11 = Q^{-1}

Then the inequality of (7.64) can be written as

  x^T Q̃11 x ≤ 1 − 2 f^T Q̃21 x − f^T Q̃22 f

Here one can see that a nonzero f can be used to get an invariant ellipsoid which is larger
than Ex, such that the applicability can be widened. In other words, this extra freedom
can afford a highly tuned control law without a reduction of the invariant set. Hence, the
control law F can be designed off-line, using any of the robust control methods, without
considering the constraints. When the optimal F is decided, the cost function can be
replaced by Jf = f^T f. One can see that the computational burden involved in the online
tuning of F is removed, which is the main advantage of this method.
Related to this method, Bloemen et al. [8] extend the hybrid approach by introducing a
switching horizon Ns. Before Ns the feedback law is designed off-line, but after Ns the
control law F is a variable in the online optimization problem. Here the uncertainties are
not considered. The performance index is defined as

  J(k) = sum_{j=0}^{∞} ( y^T(k+j+1|k) Q y(k+j+1|k) + u^T(k+j|k) R u(k+j|k) )        (7.65)

where Q and R are positive-definite weighting matrices. If F is given, the infinite sum
beyond Ns in (7.65) can be replaced by a quadratic terminal cost function x^T(k+Ns|k) P x(k+Ns|k); the weighting matrix P can be obtained via a Lyapunov equation. Based on this
terminal cost function, the upper bound of the performance index beyond Ns is given as
follows:

  J̄(k) = sum_{j=0}^{Ns−1} ( y^T(k+j+1|k) Q y(k+j+1|k) + u^T(k+j|k) R u(k+j|k) )
         + x^T(k+Ns|k) P x(k+Ns|k)  ≥  J(k)                                 (7.66)

The entire MPC optimization problem can then be given by a semi-definite programming
problem for the optimization of J̄(k), subject to additional LMIs which express the
input and output constraints. The advantage of this method is that constraints in the first
part of the prediction horizon are taken into account in an appropriate
way. Although the computational burden will increase exponentially if Ns increases, the automatic
trade-off between feasibility and optimality will still be obtained. The algorithm presented
in [8] is closely related to the algorithm presented by [42].

Combining Hybrid Approach with Multi-Lyapunov Functions


In [23] the min-max algorithm using LMIs with the hybrid approach of [8] and the min-max
algorithm using LMIs with multi-Lyapunov functions of [19] are combined with the LMI
technique of Kothare et al. [35]. Ns free control moves are used without
using the static feedback control law. These Ns free control moves are optimized using
the min-max algorithm in order to minimize the worst-case cost function over a polytopic
model uncertainty set. Then, after the switching horizon Ns, a feedback control law
F is implemented in each step of the infinite horizon control. This feedback control is
calculated online subject to the upper bound of the terminal cost function and robust
stability constraints formulated in LMI form. Of course the computational burden will increase,
but the numerical examples show that better results are obtained compared to [19].


Using LMI with off-line formulation

In order to reduce the online computational load of the method proposed by Kothare et
al. [35], Wan and Kothare [75] developed an off-line formulation of robust MPC which can
calculate a sequence of explicit control laws corresponding to a sequence of asymptotically
stable invariant ellipsoids constructed off-line. Two concepts are used in this method. The
first concept is an asymptotically stable invariant ellipsoid, defined as follows:
Given a discrete dynamic system x(k+1) = f(x(k)), a subset E = { x ∈
R^{nx} | x^T Q^{-1} x ≤ 1 } of the state space R^{nx} is said to be an asymptotically stable invariant ellipsoid if it has the property that, whenever x(k1) ∈ E, then x(k) ∈ E for
all times k > k1 and x(k) → 0 as k → ∞.
The set E = { x | x^T Q^{-1} x ≤ 1 } calculated by the method of Kothare et al. [35] is an
asymptotically stable invariant ellipsoid according to this definition. The second concept
is the distance between the state x and the origin, defined as the weighted norm

  ||x||_{Q^{-1}} = sqrt( x^T Q^{-1} x )

The main idea of this off-line formulation is that off-line a sequence of N
asymptotically stable invariant ellipsoids is calculated, one inside another, by choosing
N states ranging from zero to the largest feasible state that satisfies ||x(k+j+1)||_{Q^{-1}} < 1,
j < N, for all systems belonging to the uncertainty set. For each asymptotically stable
invariant ellipsoid, a state feedback law Fi is calculated off-line by solving the optimization problem
of Kothare et al. [35]. Then these control laws are put in a look-up
table. At last, the online calculation only involves checking which asymptotically stable
invariant ellipsoid the state x(k) at time k belongs to. The control action u(k) is calculated
by applying the control law corresponding to the smallest ellipsoid containing the state x(k). So
the online computational burden is reduced due to the off-line calculation. In addition,
when the state is located between two adjacent asymptotically stable invariant ellipsoids,
a factor αi, 0 ≤ αi ≤ 1, calculated by solving the equation x^T(k)( αi Qi^{-1} + (1−αi) Q_{i+1}^{-1} ) x(k) = 1,
is used in the control law u(k) = ( αi Fi + (1−αi) F_{i+1} ) x(k) to avoid discontinuity
on the borders of the ellipsoids. This method may give a small loss of performance;
however, the online computational load is reduced dramatically compared to [35].


Chapter 8
Tuning
As was already discussed in the introduction, one of the main reasons for the popularity
of predictive control is the easy tuning. The purpose of tuning the parameters is to acquire good signal tracking, sufficient disturbance rejection and robustness against model
mismatch.
Consider the LQPC-performance index

  J(u,k) = sum_{j=Nm}^{N} x̂^T(k+j|k) Q x̂(k+j|k) + sum_{j=1}^{Nc} u^T(k+j−1) R u(k+j−1)

and the GPC-performance index

  J(u,k) = sum_{j=Nm}^{N} ( ŷp(k+j|k) − r(k+j) )^T ( ŷp(k+j|k) − r(k+j) ) + λ sum_{j=1}^{Nc} Δu^T(k+j−1) Δu(k+j−1)

The tuning parameters can easily be recognized. They are:

  Nm    =  minimum-cost horizon
  N     =  prediction horizon
  Nc    =  control horizon
  λ     =  weighting factor (GPC)
  P(q)  =  tracking filter (GPC)
  Q     =  state weighting matrix (LQPC)
  R     =  control weighting matrix (LQPC)


This chapter discusses the tuning of a predictive controller. In section 8.1 some rules of
thumb are given for the initial parameter settings. In section 8.2 we look at the case
where the initial controller does not meet the desired specifications. An advanced tuning
procedure may provide some tools that lead to a better and suitable controller. The final
fine-tuning of a predictive controller is usually done on-line when the controller is already
running. Section 8.3 discusses some special settings of the tuning parameters, leading to
well known controllers.
In this chapter we will assume that the model is perfect. Robustness against model mismatch was already discussed in section 6.5.

8.1  Initial settings for the parameters

For GPC the parameters N, Nc and λ are the three basic tuning parameters, and most
papers on GPC consider the tuning of these parameters. The rules-of-thumb for the settings
of a GPC-controller were first discussed by Clarke & Mothadi ([13]) and later reformulated
for the more extended UPC-controller in Soeterboek ([61]). In LQPC the parameters N
and Nc are present, but instead of λ and P(q), we have the weighting matrices R and Q.
In Lee & Yu ([39]), the tuning of the LQPC parameters is considered.
In the next sections the initial settings for the summation parameters (Nm, N, Nc) and the
signal weighting parameters (P(q), λ / Q, R) are discussed.

8.1.1  Initial settings for the summation parameters

In both the LQPC-performance index and the GPC-performance index, three summation
parameters, Nm, N and Nc, are recognized, which can be used to tune the predictive
controller. Often we choose Nm = 1, which is the best choice most of the time, but it may
be chosen larger in the case of a dead-time or an inverse response.
The parameter N is mostly related to the length of the step response of the process, and
the prediction interval N should contain the crucial dynamics of x(k) due to the response
on u(k) (in the LQPC-case) or the crucial dynamics of y(k) due to the response on u(k)
(in the GPC-case).
The number Nc ≤ N is called the control horizon, which forces the control signal to become
constant

  u(k+j) = constant    for j ≥ Nc

or equivalently the increment input signal to become zero

  Δu(k+j) = 0    for j ≥ Nc

An important effect of a small control horizon (Nc ≪ N) is the smoothing of the control
signal (which can become very wild if Nc = N) and the control signal is forced towards
its steady-state value, which is important for stability properties. Another important consequence of decreasing Nc is the reduction in computational effort. Let the control signal
v(k) be a p × 1 vector. Then the control vector ṽ(k) is a pN × 1 vector, which means
we have pN degrees of freedom in the optimization. By introducing the control horizon
Nc < N, these degrees of freedom decrease because of the equality constraint, as was shown
in chapters 4 and 5. If the control horizon is the only equality constraint, the new optimization parameter is μ̃, which is a pNc × 1 vector. Thus the degrees of freedom reduce
to pNc. This can be seen by observing that the matrix D̃33 is a pN × pNc matrix, and so
the matrix H = 2 (D̃23 D̃33)^T (D̃23 D̃33) becomes a pNc × pNc matrix. For the case without
further inequality constraints, this matrix H has to be inverted, which is easier
for smaller Nc.
If we add inequality constraints we have to minimize

  min_{μ̃}  ½ μ̃^T H μ̃

subject to these constraints. The number of parameters in the optimization reduces from
pN to pNc, which will speed up the optimization.
Consider a process where

  d   = the dead time of the process
  n   = the number of poles of the model
      = the dimension of the matrix Ao for an IO model
      = the order of polynomial ao(q) for an IO model
      = the dimension of the matrix Ai for an IIO model
      = the order of polynomial ai(q) for an IIO model
  ts  = the 5% settling time of the process Go(q)
  ωs  = the sampling-frequency
  ωb  = the bandwidth of the process Gc,o(s)

where Gc,o(s) is the original IO process model in continuous-time (corresponding to the
discrete-time IO process model Go(q)). Now the following initial settings are recommended
(Soeterboek [61]):

  Nm = 1 + d
  Nc = n
  N  = int( αN ts )        for well-damped systems
  N  = int( αN ωs/ωb )     for badly-damped and unstable systems

where αN ∈ [1.5, 2] and αN ∈ [4, 25], respectively. Resuming, Nm is equal to 1 + the process dead-time
estimate and N should be chosen larger than the length of the step response of the open-loop process. When we use an IIO model, the value of N should be based on the original
IO-model (Go(q) or Gc,o(s)).
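These rules of thumb are easy to automate. The sketch below estimates the 5% settling time from a simulated step response of a hypothetical, illustrative well-damped first-order discrete-time model and derives the initial settings Nm, Nc and N; the model, its order and dead time are assumptions for the sake of the example.

```python
import numpy as np

def settling_time(s, frac=0.05):
    """First sample after which the step response s stays inside the +/-5% band."""
    s_inf = s[-1]
    outside = np.where(np.abs(s - s_inf) > frac * abs(s_inf))[0]
    return int(outside[-1] + 1) if outside.size else 0

# Illustrative first-order IO model: y(k) = 0.85 y(k-1) + 0.05 u(k-1)
k = np.arange(100)
step = (0.05 / 0.15) * (1 - 0.85 ** k)     # its unit-step response

d, n = 0, 2                                # assumed dead time and (IIO) model order
ts = settling_time(step)                   # 5% settling time
alpha_N = 1.5                              # alpha_N in [1.5, 2] for well-damped systems

Nm = 1 + d
Nc = n
N  = int(alpha_N * ts)
print(Nm, Nc, N)
```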


The bandwidth ωb of a system with transfer function Gc,o(s) is the frequency at which the
magnitude ratio drops to 1/√2 of its zero-frequency level (given that the gain characteristic
is flat at zero frequency). So:

  ωb = max{ ω ∈ R | |Gc,o(jω)|^2 / |Gc,o(0)|^2 ≥ 0.5 }

For badly-damped and unstable systems, the above choice of N is a lower
bound. Often increasing N may improve the stability and the performance of the closed
loop.
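The bandwidth ωb can be computed by sampling the magnitude of the continuous-time frequency response. A sketch for an illustrative second-order transfer function Gc,o(s) = 1/(s^2 + 0.4 s + 1), which is an assumption made only for this example:

```python
import numpy as np

def G(s):
    return 1.0 / (s**2 + 0.4*s + 1.0)      # illustrative continuous-time model

w = np.linspace(1e-3, 100.0, 200000)
mag2 = np.abs(G(1j*w))**2 / np.abs(G(0))**2
wb = w[mag2 >= 0.5].max()                  # largest frequency with |G|^2/|G(0)|^2 >= 0.5
print(wb)
```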
In the constrained case Nc will usually be chosen bigger, to introduce extra degrees of
freedom in the optimization. A reasonable choice is

  n ≤ Nc ≤ N/2

If signal constraints get very tight, Nc can be even bigger than N/2 (see the elevator example
in chapter 5).
Remark 43 : Alamir and Bornard [1] show that there exists a finite value N for which the
closed-loop system is asymptotically stable. Unfortunately, only the existence of such a
finite prediction horizon is shown; its computation remains difficult.

8.1.2  Initial settings for the signal weighting parameters

The GPC signal weighting parameters:

λ should be chosen as small as possible, λ = 0 in most cases. In the case of non-minimum
phase systems, λ = 0 will lead to stability problems and so λ should be chosen small but
nonzero. The parameters pi, i = 1, ..., np of the filter

  P(q) = 1 + p1 q^{-1} + p2 q^{-2} + ... + pnp q^{-np}

are chosen such that the roots of the polynomial P(q) are the desired poles of the closed
loop. P(q) makes y(k) track the low-pass filtered reference signal P^{-1}(q) r(k).

The LQPC signal weighting parameters

In the SISO case, the matrix Q is an n × n matrix and R will be a scalar. When the purpose
is to get the output signal y(k) to zero as fast as possible we can choose

  Q = C1^T C1    and    R = λ^2 I

which makes the LQPC performance index equivalent to the GPC index for an IO model,
r(k) = 0 and P(q) = 1. This can be seen by substitution:

  J(u,k) = sum_{j=Nm}^{N} x̂^T(k+j|k) Q x̂(k+j|k) + sum_{j=1}^{Nc} u^T(k+j−1) R u(k+j−1)
         = sum_{j=Nm}^{N} x̂^T(k+j|k) C1^T C1 x̂(k+j|k) + λ^2 sum_{j=1}^{Nc} u^T(k+j−1) u(k+j−1)
         = sum_{j=Nm}^{N} ŷ^T(k+j|k) ŷ(k+j|k) + λ^2 sum_{j=1}^{Nc} u^T(k+j−1) u(k+j−1)

The rules for the tuning of λ are the same as for GPC. So λ = 0 is an obvious choice for
minimum phase systems, and λ is small for non-minimum phase systems. By adding a term
Q1 to the weighting matrix,

  Q = C1^T C1 + Q1

we can introduce an additional weighting x̂^T(k+j|k) Q1 x̂(k+j|k) on the states.
In the MIMO case it is important to scale the input and output signals. To make this
clear, we take an example from the paper industry where we want to produce a paper roll
with a specific thickness (in the order of 80 μm) and width (for A1-paper in the order of 84
cm). Let y1(k) represent the thickness-error (defined in meters) and y2(k) the width-error
(also defined in meters). If we define an error like

  J = sum ( |y1(k)|^2 + |y2(k)|^2 )

it is clear that the contribution of the thickness-error to the performance index can be
neglected in comparison to the contribution due to the width-error. This means that the
error in paper thickness will be neglected in the control procedure and all the effort will be
put in controlling the width of the paper. We should not be surprised that using the above
criterion could result in paper with a variation in thickness from, say, 0 to 200 μm. The
introduction of scaling gives a better balance in the effort of minimizing both width-error
and thickness-error. In this example let us require a precision of 1% for both outputs; then
we can introduce the performance index

  J = sum ( |y1(k) / (8 · 10^{-7})|^2 + |y2(k) / (8.4 · 10^{-3})|^2 )

Now the relative errors of both thickness and width are measured in the same way.
In general, let the required precision for the i-th output be given by di, so |ŷi| ≤ di, for
i = 1, ..., m. Then a scaling matrix can be given by

  S = diag( d1 , d2 , ... , dm )

By choosing

  Q = C1^T S^{-2} C1

the first term of the LQPC performance index will consist of

  x̂^T(k+j|k) Q x̂(k+j|k) = x̂^T(k+j|k) C1^T S^{-2} C1 x̂(k+j|k) = sum_{i=1}^{m} |ŷi(k+j|k)|^2 / di^2

We can do the same for the input, where we can choose a matrix

  R = λ^2 diag( r1^2 , r2^2 , ... , rp^2 )

The variables ri^2 > 0 are a measure of how much the costs should increase if the i-th input
ui(k) increases by 1.
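A small sketch of building these scaling-based weighting matrices for the paper-machine example (required precisions 0.8 μm and 8.4 mm as above); the output matrix C1, λ and the input cost scales ri are illustrative assumptions:

```python
import numpy as np

d = np.array([8e-7, 8.4e-3])           # required precision per output [m]
S = np.diag(d)                         # scaling matrix S = diag(d1, ..., dm)

C1 = np.array([[1.0, 0.0, 0.2],        # illustrative output matrix (2 outputs, 3 states)
               [0.0, 1.0, 0.1]])
lam = 0.1                              # lambda, overall input weighting
r = np.array([2.0, 5.0])               # illustrative cost scales per input

Q = C1.T @ np.linalg.inv(S @ S) @ C1   # Q = C1^T S^{-2} C1
R = lam**2 * np.diag(r**2)             # R = lambda^2 diag(r1^2, ..., rp^2)
print(Q.shape, R.shape)
```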
Example 44 : Tuning of GPC controller
In this example we tune a GPC controller for the following system:

  ao(q) y(k) = bo(q) u(k) + fo(q) do(k) + co(q) eo(k)

where

  ao(q) = (1 − 0.9 q^{-1})
  bo(q) = 0.03 q^{-1}
  co(q) = (1 − 0.7 q^{-1})
  fo(q) = 0

For the design we consider eo(k) to be integrated ZMWN (so ei(k) = Δeo(k) is ZMWN).
First we transform the above polynomial IO model into a polynomial IIO model and we
obtain the following system:

  ai(q) y(k) = bi(q) Δu(k) + fi(q) di(k) + ci(q) ei(k)

where

  ai(q) = (1 − q^{-1}) ao(q) = (1 − q^{-1})(1 − 0.9 q^{-1})
  bi(q) = bo(q) = 0.03 q^{-1}
  ci(q) = co(q) = (1 − 0.7 q^{-1})
  fi(q) = fo(q) = 0

Now we want to find the initial values of the tuning parameters following the tuning rules
in section 8.1. To start with, we plot the step response of the IO-model (!!!) in Figure 8.1.
We see in the figure that the system is well-damped. The 5% settling time is equal to ts = 30.
The order of the IIO model is n = 2 and the dead time is d = 0.

Figure 8.1: Step response s(k) of the IO model, with the 5% bounds and the settling time indicated.


Now the following initial setting are recommended
Nm = 1 + d = 1
Nc = n = 2
N = int(N ts ) = int(N 30)
where N [1.5, 2]. Changing N from 1.5 to 2 does not have much inuence, and we
therefore choose for the smallest value N = 1.5, leading to N = 45.
To verify the tuning we apply a step to the system (r(k) = 1 for k 0) and evaluate the
output signal y(k). The result is given in 8.2.
Example 45 : Tuning of LQPC controller
In this example we tune an LQPC controller for the following system:

  Ao = [ 2.3   1 ; −1.2   0 ] ,    Co = [ 1   0 ]
  Ko = [ 0.1 ; 0.09 ] ,    Lo = [ 1 ; 1 ] ,    Bo = [ 2 ; 1 ]

which is found by sampling a continuous-time system with sampling-frequency ωs = 100
rad/s. Now we want to find the initial values of the tuning parameters following the tuning
rules in section 8.1. To start with, we compute the eigenvalues of Ao and we find λ1 = 0.8

Figure 8.2: Closed loop response on a step reference signal (solid: y(k), dotted: r(k), dash-dot: eo(k)).


and 2 = 1.5. We observe that the the system is unstable and so we have to determine
the bandwidth of the original continuous-time system. We plot the bode-diagram of the
continuous-time model in Figure 8.3. We see in the gure that b = 22.89 = 18.15 rad/s.
The order of the IO model is n = 2 and the dead time d = 0 (there is no delay in the
system).
Now the following initial setting are recommended
Nm = 1 + d = 1
Nc = n = 2
N = int(N s /b ) = int(N 5.51)
where N [4, 25]. Changing N from 4 to 25 does not have much inuence, and we
therefore choose for the smallest value N = 4, leading to N = 22.

T
To verify the inuence of N , we start the system in initial state x = 1 1 , and
evaluate the output signal y(k). The result is given in 8.4. It is clear that for = 10 the best
response is found. We therefore obtain the optimal prediction horizon N = int(N 5.51) =
int(10 5.51) = 55.

8.2  Tuning as an optimization problem

Figure 8.3: Bode plot of the IO continuous-time model.

Figure 8.4: Closed loop response for initial state x = [1  1]^T (output y(k) for αN = 4, αN = 10 and αN = 25, together with e(k)).

In the ideal case the performance index and constraints reflect the mathematical formulation of the specifications from the original engineering problem. However, in practice the
desired behaviour of the closed loop system is usually expressed in performance specifications, such as step-response criteria (overshoot, rise time and settling time) and response
criteria on noise and disturbances. If the specifications are not too tight, the initial tuning,
using the rules of thumb of the previous section, may result in a feasible controller. If the
desired specifications are not met, an advanced tuning is necessary. In that case the relation between the performance specifications and the predictive control tuning parameters
(Nm, N, Nc, for GPC: λ, P(q), and for LQPC: Q, R) should be understood.
In this section we will study this relation and consider various techniques to improve or
optimize our predictive controller.
We start with the closed loop system as derived in chapter 6:

  [ v(k) ]   [ M11(q)   M12(q) ] [ e(k)  ]
  [ y(k) ] = [ M21(q)   M22(q) ] [ w̃(k) ]

where w̃(k) consists of the present and future values of the reference and disturbance signal.
Let w̃(k) = w̃r(k) represent a step reference signal and let w̃(k) = w̃d(k) represent a step
disturbance signal. Further define the output signal on a step reference signal as

  syr(k) = M22(q) w̃r(k)

the output signal on a step disturbance signal as

  syd(k) = M22(q) w̃d(k)

and the input signal on a step reference signal as

  svr(k) = M12(q) w̃r(k)

We will now consider the performance specifications in more detail, and look at response
criteria on step-reference signals (criteria 1 to 4), response criteria on noise and disturbances (criteria 5 to 7) and finally at the stability of the closed loop (criterion 8):
1. Overshoot of output signal on step reference signal:
The overshoot on the output is dened as the peak-value of the output signal y(k)1
for r(k) is a unit step, or equivalently, w(k)

= wr (k). This value is equal to:


os = max( syr (k) 1 )
k

2. Rise time of output signal on step reference signal:


The rise time is the time required for the output to rise to 80% of the nal value.
This value is equal to:
% &
'
&
rt = min k & ( syr (k) > 0.8 )
k


The rise time k_o = rt is an integer, so its derivative cannot be computed directly. Therefore we introduce the interpolated rise time rt*:

   rt* = ( 0.8 + (k_o − 1) s_yr(k_o) − k_o s_yr(k_o − 1) ) / ( s_yr(k_o) − s_yr(k_o − 1) )

The interpolated rise time is the (real-valued) time instant where the first-order interpolated step response reaches the value 0.8.

3. Settling time of the output signal on a step reference signal:
The settling time is defined as the time the output signal y(k) needs to settle within 5% of its final value:

   st = min { k_o | |s_yr(k) − 1| < 0.05 for all k ≥ k_o }

The settling time k_o = st is an integer, so again the derivative cannot be computed directly. Therefore we introduce the interpolated settling time st*:

   st* = ( 0.05 + (k_o − 1) |s_yr(k_o) − 1| − k_o |s_yr(k_o − 1) − 1| ) / ( |s_yr(k_o) − 1| − |s_yr(k_o − 1) − 1| )

The interpolated settling time is the (real-valued) time instant where the first-order interpolated tracking error on a step reference signal has settled within its 5% region.
4. Peak value of the input signal on a step reference signal:
The peak value of the input signal v(k) for r(k) a unit step is given by

   pvr = max_k | s_vr(k) |

5. Peak value of the output signal on a step disturbance signal:
The peak value of the output signal y(k) for d(k) a unit step, or equivalently w̃(k) = w̃_d(k), is given by

   pyd = max_k | s_yd(k) |

6. RMS mistracking on a zero-mean white noise signal:
The RMS value of the mistracking of the output due to a zero-mean white noise signal with unit spectrum is given by

   rm = || M21(e^{jω}) ||_2 = ( (1/2π) ∫_{−π}^{π} | M21(e^{jω}) |^2 dω )^{1/2}

7. Bandwidth of the closed-loop system:
The bandwidth is the largest frequency below which the transfer function from noise to output signal is lower than −20 dB:

   bw = min { ω | | M21(e^{jω}) | ≥ 0.1 }


8. Stability radius of the closed-loop system:
The stability radius is the maximum modulus of the closed-loop poles, which is equal to the spectral radius of the closed-loop system matrix A_cl:

   sm = ρ(A_cl) = max_i | λ_i(A_cl) |

where λ_i are the eigenvalues of the closed-loop system matrix A_cl.

(Of course many other performance specifications can be formulated.)
While tuning the predictive controller, some of the above performance specifications will be crucial in the design.
Define the integer tuning vector θ and the real-valued tuning vector φ as follows:

   θ = [ N_m ; N ; N_c ]

   for LQPC:  φ = [ vec(Q^{1/2}) ; vec(R^{1/2}) ]
   for GPC:   φ = [ λ ; vec(A_p) ; vec(B_p) ; vec(C_p) ; vec(D_p) ]

It is clear that the criteria depend on the chosen values of θ and φ, so for each criterion

   σ_i = σ_i(θ, φ)

where θ consists of integer values and φ consists of real values. The tuning of θ and φ can be done sequentially or simultaneously. In a sequential procedure we first choose the integer vector θ and tune the parameters φ for this fixed θ. In a simultaneous procedure the vectors θ and φ are tuned at the same time, using mixed-integer search algorithms.
The tuning of the integer values is not too difficult if we can limit the size of the search space S_θ. A reasonable choice for the bounds on N_m, N and N_c is as follows:

   1 ≤ N_m ≤ N
   1 ≤ N ≤ β N_{2,o}
   1 ≤ N_c ≤ N

where N_{2,o} is the prediction horizon from the initial tuning, and β = 2 if N_{2,o} is big and β = 3 or β = 4 for small N_{2,o}. By varying β ∈ ℤ we can increase or decrease the search space S_θ.
For the tuning of φ it is important to know the sensitivity of the criteria to changes in φ. A good measure for this sensitivity is the derivative ∂σ_i/∂φ. Note that this derivative only gives the local sensitivity and is only valid for small changes in φ. Furthermore, the criteria are not necessarily continuous functions of φ, and so the derivatives may only give a local, one-sided measure for the sensitivity.


Parameter optimization
Tuning the parameters θ and φ can be formulated in two ways:

1. Feasibility problem: Find parameters θ and φ such that some selected criteria σ_1, ..., σ_p satisfy specific pre-described bounds:

   σ_1 ≤ c_1
    ...
   σ_p ≤ c_p

For example: find parameters θ and φ such that the overshoot is smaller than 1.05, the rise time is smaller than 10 samples and the bandwidth of the closed-loop system is larger than π/2 (this can be written as −bw ≤ −π/2).

2. Optimization problem: Find parameters θ and φ such that a criterion σ_0 is minimized, subject to some selected criteria σ_1, ..., σ_p satisfying specific pre-described bounds:

   σ_1 ≤ c_1
    ...
   σ_p ≤ c_p

For example: find parameters θ and φ such that the settling time is minimized, subject to an overshoot smaller than 1.1 and an actuator effort smaller than 10.

Since θ has integer values, an integer feasibility/optimization problem arises. The criteria σ_i(θ, φ) are nonlinear functions of θ and φ, and a general solution technique does not exist. Methods like Mixed Integer Linear Programming (MILP), dynamic programming, Constraint Logic Propagation (CLP), genetic algorithms, randomized algorithms and heuristic search algorithms can yield a solution. In general, these methods require large amounts of computation time because of the high complexity, and they are mathematically classified as NP-hard problems. This means that the search space (the number of potential solutions) grows exponentially as a function of the problem size.
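To illustrate the sequential procedure, the following MATLAB sketch enumerates the integer vector θ over the bounded search space and keeps the settings for which the selected criteria satisfy their bounds. The helper function evaluate_criteria, which should build the controller, simulate the closed loop and return the selected criteria, is a hypothetical placeholder and only indicates where the closed-loop evaluation takes place.

    % Sequential tuning sketch: enumerate theta = [Nm; N; Nc] and keep the
    % feasible settings sigma_i <= c_i (phi kept fixed).
    N2o  = 22;               % prediction horizon from the initial tuning
    beta = 3;                % search-space factor
    c    = [1.05; 10];       % bounds on e.g. overshoot and rise time
    phi  = 0;                % fixed real-valued parameters (e.g. lambda)
    best = [];
    for N = 1:beta*N2o
      for Nm = 1:N
        for Nc = 1:N
          sigma = evaluate_criteria([Nm; N; Nc], phi);   % hypothetical helper
          if all(sigma <= c)
            best = [best; Nm N Nc];                      % store feasible settings
          end
        end
      end
    end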

8.3 Some particular parameter settings

Predictive control is an open method, which means that many well-known controllers can be found by solving a specific predictive control problem with special parameter settings (Soeterboek [61]). In this section we will look at some of these special cases. We consider only the SISO case in a polynomial setting, with

   y(k) = ( q^{−d} b(q) / ( a(q) Δ(q) ) ) u(k) + ( c(q) / ( a(q) Δ(q) ) ) e(k)


where

   a(q) = 1 + a_1 q^{−1} + ... + a_na q^{−na}
   b(q) = b_1 q^{−1} + ... + b_nb q^{−nb}
   c(q) = 1 + c_1 q^{−1} + ... + c_nc q^{−nc}
   Δ(q) = 1 − q^{−1}

First we define the following parameters:

   na = the degree of polynomial a(q)
   nb = the degree of polynomial b(q)
   d  = the time delay of the process.

Note that d ≥ 0 is such that the first non-zero impulse response element is h(d+1) ≠ 0, or equivalently, in a state-space representation d is equal to the smallest number for which C_1 A^d B_3 ≠ 0.

Minimum variance control:
In a minimum variance control (MVC) setting, the following performance index is minimized:

   J(u, k) = | ŷ(k+d|k) − r(k+d) |²

This means that if we use the GPC performance index with settings N_m = N = d, N_c = 1, P(q) = 1 and λ = 0, we obtain the minimum variance controller. An important remark is that an MV controller only gives a stable closed loop for minimum-phase plants.

Generalized minimum variance control:
In a generalized minimum variance control (GMVC) setting, the performance index is extended with a weighting on the input:

   J(u, k) = | ŷ(k+d|k) − r(k+d) |² + λ² |Δu(k)|²

where y(k) is the output signal of an IIO model. This means that if we use the GPC performance index with settings N_m = N = d, N_c = d, P(q) = 1 and some λ > 0, we obtain the generalized minimum variance controller. A generalized minimum variance controller may also give a stable closed loop for non-minimum-phase plants, if λ is well tuned.

Dead-beat control:
In a dead-beat control setting, the following performance index is minimized:

   J(u, k) = Σ_{j=nb+d}^{na+nb+d} | ŷ(k+j|k) − r(k+j) |²

subject to

   Δu(k+j) = 0    for j ≥ na + 1

This means that if we use the GPC performance index with settings N_m = nb+d, N_c = na+1, P(q) = 1, N = na+nb+d and λ = 0, we obtain the dead-beat controller.


Mean-level control:
In a mean-level control (MLC) setting, the following performance index is minimized:

   J(u, k) = Σ_{j=1}^{∞} | ŷ(k+j|k) − r(k+j) |²

subject to

   Δu(k+j) = 0    for j ≥ 1

This means that if we use the GPC performance index with settings N_m = 1, N_c = 1, P(q) = 1 and N → ∞, we obtain the mean-level controller.

Pole-placement control:
In a pole-placement control setting, the following performance index is minimized:

   J(u, k) = Σ_{j=nb+d}^{na+nb+d} | ŷ_p(k+j|k) − r(k+j) |²

subject to

   Δu(k+j) = 0    for j ≥ na + 1

where y_p(k) = P(q) y(k) is the weighted output signal. The pole-placement controller is obtained if we use the GPC performance index with settings N_m = nb+d, N_c = na+1, P(q) = P_d(q) and N = na+nb+d. In the state-space representation, the eigenvalues of (A − B_3 F) will become equal to the roots of the polynomial P_d(q).
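As a small illustration, the fragment below collects the GPC settings of two of these special cases; the model orders na, nb and the delay d are example values, and the code only restates the parameter choices listed above.

    % GPC parameter settings that recover some classical controllers (section 8.3)
    na = 2; nb = 1; d = 0;

    % dead-beat control (P(q) = 1)
    Nm = nb + d;  Nc = na + 1;  N = na + nb + d;  lambda = 0;

    % mean-level control (P(q) = 1, N -> inf approximated by a large horizon)
    % Nm = 1;  Nc = 1;  lambda = 0;  N = 100;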


Appendix A: Quadratic Programming

In this appendix we will consider optimization problems with a quadratic objective function and linear constraints, denoted as the quadratic programming problem.

Definition 46 (The quadratic programming problem)
Minimize the objective function

   F(θ) = (1/2) θᵀ H θ + fᵀ θ

over the variable θ, where H is a positive semi-definite matrix, subject to the linear inequality constraint

   A θ ≤ b
                                                                 □ End Definition

This is the type of quadratic programming problem we have encountered in chapter 5, where we solved the standard predictive control problem.

Remark 47: Throughout this appendix we assume H to be a symmetric matrix. If H is a non-symmetric matrix, we can define

   H_new = (1/2)(H + Hᵀ)

Substitution of this H_new for H does not change the objective function.
Remark 48: The unconstrained quadratic programming problem:
If the constraints are omitted and only the quadratic objective function is considered, an analytic solution exists. An extremum of the objective function

   F(θ) = (1/2) θᵀ H θ + fᵀ θ

is found when the gradient ∇F is equal to zero:

   ∇F(θ) = H θ + f = 0

If H is non-singular the extremum is reached for

   θ = −H⁻¹ f

Because H is a positive definite matrix, the extremum will be a minimum.
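As a minimal illustration (with small, arbitrary example data and the function quadprog of the MATLAB Optimization Toolbox), the unconstrained and the inequality-constrained problems can be solved as follows.

    % Unconstrained minimum of F(theta) = 0.5*theta'*H*theta + f'*theta,
    % and the inequality-constrained problem of Definition 46 via quadprog.
    H = [2 0; 0 1];  f = [-2; -1];
    A = [1 1];       b = 1;

    theta_unc = -H\f;                    % analytic unconstrained minimizer
    theta_con = quadprog(H, f, A, b);    % numerical solution with A*theta <= b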
There are various algorithms to solve the quadratic programming problem:

1. The modified simplex method: For small and medium-sized quadratic programming problems the modified simplex method is the most efficient optimization algorithm. It will find the optimum in a finite number of steps.

2. The interior-point method: For large-sized quadratic programming problems it is better to use the interior-point method. A disadvantage compared to the modified simplex methods is that the optimum can only be approximated. In Nesterov and Nemirovsky [49] an upper bound is derived on the number of iterations that are needed to find the optimum with a pre-described precision.

3. Other convex optimization methods: Also other convex optimization methods, like the cutting-plane algorithm [10, 22] or the ellipsoid algorithm [10, 32], may be used to solve the quadratic programming problem.

The modified simplex method

First we define another type of quadratic programming problem that is very common in the literature:

Definition 49 (The quadratic programming problem, Type 2)
Minimize the objective function

   F(θ) = (1/2) θᵀ H θ + fᵀ θ

over the variable θ, where H is a positive semi-definite matrix, subject to the linear equality constraint

   A θ = b

and the non-negativity constraint

   θ ≥ 0
                                                                 □ End Definition
Lemma 50 The quadratic programming problem of Definition 46 can be rewritten as a quadratic programming problem of the form of Definition 49.

Proof: Define

   θ = θ⁺ − θ⁻ ,   with θ⁺, θ⁻ ≥ 0 .

Note that θ is thereby decomposed into θ⁺ and θ⁻. Further introduce a slack variable y such that

   A θ + y = b    where y ≥ 0

We define

   H̄ = [  H  −H  0 ]      f̄ = [  f ]      Ā = [ A  −A  I ]      θ̄ = [ θ⁺ ]
       [ −H   H  0 ]           [ −f ]                                 [ θ⁻ ]
       [  0   0  0 ]           [  0 ]                                 [ y  ]

and obtain a quadratic programming problem of the form of Definition 49:

   F(θ̄) = (1/2) θ̄ᵀ H̄ θ̄ + f̄ᵀ θ̄
   Ā θ̄ = b
   θ̄ ≥ 0
                                                                 □
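The construction in the proof can be coded directly. The following MATLAB sketch builds the type-2 problem data from given (H, f, A, b); the example matrices are arbitrary.

    % Conversion of the inequality-constrained QP of Definition 46 into the
    % equality-constrained form of Definition 49 (Lemma 50).
    H = [2 0; 0 1];  f = [-2; -1];
    A = [1 1];       b = 1;

    [m, n] = size(A);
    Hbar = [ H -H zeros(n,m); -H H zeros(n,m); zeros(m,2*n+m) ];
    fbar = [ f; -f; zeros(m,1) ];
    Abar = [ A -A eye(m) ];
    % New variable thetabar = [theta_plus; theta_minus; y] >= 0 with
    % Abar*thetabar = b and theta = theta_plus - theta_minus.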
Consider the quadratic programming problem of Definition 49 with the objective function

   F(θ) = (1/2) θᵀ H θ + fᵀ θ ,

the linear equality constraint

   A θ = b ,

and the non-negativity constraint

   θ ≥ 0

for a positive semi-definite matrix H.
For an inequality/equality constrained optimization problem, necessary conditions for an extremum of the function F(θ) in θ̄, satisfying h(θ̄) = 0 and g(θ̄) ≤ 0, are given by the Kuhn-Tucker conditions:
There exist vectors λ and μ such that

   ∇F(θ̄) + λᵀ ∇g(θ̄) + μᵀ ∇h(θ̄) = 0
   λᵀ g(θ̄) = 0
   h(θ̄) = 0
   g(θ̄) ≤ 0
   λ ≥ 0


The Kuhn-Tucker conditions for this quadratic programming problem are given by:

   H θ + Aᵀ μ − λ = −f                                              (8.1)
   λᵀ θ = 0                                                         (8.2)
   A θ = b                                                          (8.3)
   θ ≥ 0 ,   λ ≥ 0                                                  (8.4)

We recognize equality constraints (8.3),(8.1) and a non-negativity constraint (8.4), two ingredients of a general linear programming problem. On the other hand, we miss the objective function, and we have an additional nonlinear equality constraint (8.2). (Note that the product λᵀθ is nonlinear in the new parameter vector (θ, μ, λ)ᵀ.) An objective function can be obtained by introducing two slack variables u_1 and u_2, and defining the problem:

   A θ + u_1 = b                                                    (8.5)
   H θ + Aᵀ μ − λ + u_2 = −f                                        (8.6)
   θ, λ, u_1, u_2 ≥ 0                                               (8.7)
   λᵀ θ = 0                                                         (8.8)

while minimizing

   min_{θ, μ, λ, u_1, u_2}  1ᵀ u_1 + 1ᵀ u_2                         (8.9)

Construct the matrices

   A_0 = [ A   0    0   I   0 ]     b_0 = [  b ]     f_0 = [ 0; 0; 0; 1; 1 ]     θ_0 = [ θ; μ; λ; u_1; u_2 ]        (8.10)
         [ H   Aᵀ  −I   0   I ]           [ −f ]

and the general linear programming problem can be written as:

   min f_0ᵀ θ_0    where    A_0 θ_0 = b_0    and    θ_0 ≥ 0

with the additional nonlinear constraint λᵀ θ = 0.


It may be clear from the Kuhn-Tucker conditions (8.1)-(8.4) that an optimum is only reached when u_1 and u_2 in (8.5)-(8.6) become zero. This can be achieved by using a simplex algorithm which is modified in the sense that there is an extra condition on finding feasible basic solutions, namely that λᵀθ = 0.
Note from (8.4) and (8.2) that

   λᵀ θ = λ_1 θ_1 + λ_2 θ_2 + ... + λ_n θ_n = 0    where λ_i, θ_i ≥ 0 for i = 1, ..., n

means that the nonlinear constraint (8.2) can be rewritten as:

   λ_i θ_i = 0    for i = 1, ..., n

and thus either λ_i = 0 or θ_i = 0. The extra feasibility condition on a basic solution now becomes λ_i θ_i = 0 for i = 1, ..., n.

Remark 51: In the above derivations we assumed b ≥ 0 and −f ≥ 0. If this is not the case, we have to change the corresponding signs of u_1 and u_2 in equations (8.5) and (8.6).
We will sketch the modified simplex algorithm on the basis of an example.

Modified simplex algorithm:
Consider the following type-2 quadratic programming problem:

   min (1/2) θᵀ H θ + fᵀ θ    subject to    A θ = b ,   θ ≥ 0

where

   H = [ 2 1 0 0 ]      f = [  0 ]      A = [ 2 1 0 0 ]      b = [ 4 ]
       [ 1 1 1 0 ]          [ −3 ]          [ 0 2 1 2 ]          [ 4 ]
       [ 0 1 4 2 ]          [ −2 ]
       [ 0 0 2 4 ]          [  0 ]

Before we start the modified simplex algorithm, we check whether a simple solution can be found. If we minimize the unconstrained function

   min_θ (1/2) θᵀ H θ + fᵀ θ

we obtain θ = [ −7  14  −4  2 ]ᵀ. This solution is not feasible, because neither the equality constraint A θ = b nor the non-negativity constraint θ ≥ 0 is satisfied.
We construct the matrix A_0, the vectors b_0, f_0 and the parameter vector θ_0 according to equation (8.10). Note that θ_0 is a 16 × 1 vector, in which θ is a 4 × 1 vector, μ is a 2 × 1 vector, λ is a 4 × 1 vector, u_1 is a 2 × 1 vector and u_2 is a 4 × 1 vector.
The modified simplex method starts with finding a first feasible solution. We choose B to contain the six right-most columns of A_0. We obtain the solution

   θ = [ 0; 0; 0; 0 ] ,   μ = [ 0; 0 ] ,   λ = [ 0; 0; 0; 0 ] ,   u_1 = [ 4; 4 ] ,   u_2 = [ 0; 3; 2; 0 ]

The solution is feasible because θ, λ, u_1, u_2 ≥ 0 and λ_i θ_i = 0 for i = 1, 2, 3, 4. However, the optimum has not been found yet, because u_1 and u_2 are not zero.
After a finite number of iterations the optimum is found by selecting columns 1, 2, 5, 6, 9 and 10 of the matrix A_0 for the matrix B. We find the optimal solution

   θ = [ 1; 2; 0; 0 ] ,   μ = [ −2; 1 ] ,   λ = [ 0; 0; 1; 2 ] ,   u_1 = [ 0; 0 ] ,   u_2 = [ 0; 0; 0; 0 ]

and the Kuhn-Tucker conditions (8.1)-(8.4) are satisfied for these values.
Algorithms that use a modified version of the simplex method are Wolfe's algorithm [76] and the pivoting algorithm of Lemke [40].
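The example can be checked numerically with quadprog (MATLAB Optimization Toolbox), which accepts equality constraints and lower bounds directly; the fragment below is only a cross-check of the solution found above.

    % Type-2 problem: min 0.5*theta'*H*theta + f'*theta  s.t.  A*theta = b, theta >= 0
    H = [2 1 0 0; 1 1 1 0; 0 1 4 2; 0 0 2 4];
    f = [0; -3; -2; 0];
    A = [2 1 0 0; 0 2 1 2];
    b = [4; 4];

    theta = quadprog(H, f, [], [], A, b, zeros(4,1));
    % theta should return [1; 2; 0; 0], the optimum found by the modified simplex.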

The interior-point method

In this section we discuss a popular convex optimization algorithm, namely the interior-point method of Nesterov & Nemirovsky [49]. This method solves the constrained convex optimization problem

   f* = f(θ*) = min_θ f(θ)    subject to    g(θ) ≤ 0

and is based on the barrier function method. Consider the non-empty, strictly feasible set

   G = { θ | g_i(θ) < 0 , i = 1, ..., m }

and define the barrier function:

   φ(θ) = − Σ_{i=1}^{m} log( −g_i(θ) )    for θ ∈ G ,        φ(θ) = ∞ otherwise

Note that φ is convex on G and, as θ approaches the boundary of G, the function φ goes to infinity. Now consider the unconstrained optimization problem

   min_θ { t f(θ) + φ(θ) }

for some t ≥ 0. The value θ*(t) that minimizes this function is denoted by

   θ*(t) = arg min_θ { t f(θ) + φ(θ) } .

It is clear that the curve θ*(t), which is called the central path, is always located inside the set G. The variable weight t gives the relative weight between the objective function f and the barrier function φ, which is a measure for the constraint violation. Intuition suggests that θ*(t) will converge to the optimum of the original problem as t → ∞.

Note that in a minimum θ*(t) the gradient of ψ(θ, t) = t f(θ) + φ(θ) with respect to θ is zero, so:

   ∇ψ(θ*, t) = t ∇f(θ*(t)) + Σ_{i=1}^{m} ( 1 / (−g_i(θ*(t))) ) ∇g_i(θ*(t)) = 0 .

We can rewrite this as

   ∇f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) ∇g_i(θ*(t)) = 0

where

   λ_i*(t) = −1 / ( g_i(θ*(t)) t ) .                                (8.11)

Then from the above we can derive that at the optimum the following conditions hold:

   g(θ*(t)) ≤ 0                                                     (8.12)
   λ_i*(t) ≥ 0                                                      (8.13)
   ∇f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) ∇g_i(θ*(t)) = 0                  (8.14)
   λ_i*(t) g_i(θ*(t)) = −1/t                                        (8.15)

which proves that for t → ∞ the right-hand side of the last condition goes to zero, and so the Kuhn-Tucker conditions will be satisfied. This means that θ*(t) → θ* and λ*(t) → λ* for t → ∞.
From the above we can see that the central path can be regarded as a continuous deformation of the Kuhn-Tucker conditions.
Now define the dual function [58, 9]:

   d(λ) = min_θ { f(θ) + Σ_{i=1}^{m} λ_i g_i(θ) }

A property of this dual function is that

   d(λ) ≤ f*    for any λ ≥ 0 , λ ∈ R^m

and so we have

   f* ≥ d(λ*(t))
      = min_θ { f(θ) + Σ_{i=1}^{m} λ_i*(t) g_i(θ) }
      = f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) g_i(θ*(t))
        (since θ*(t) also minimizes the function F defined by F(θ) = f(θ) + Σ_{i=1}^{m} λ_i*(t) g_i(θ); indeed ∇F(θ*(t)) = 0 by (8.14))
      = f(θ*(t)) − m/t        (by (8.11))

and thus

   f(θ*(t)) ≥ f* ≥ f(θ*(t)) − m/t

It is clear that m/t is an upper bound for the distance between the computed f(θ*(t)) and the optimum f*. If we want a desired accuracy ε > 0, defined by

   | f* − f(θ*(t)) | ≤ ε

then choosing t = m/ε gives us one unconstrained minimization problem, with a solution θ*(m/ε) that satisfies:

   | f* − f(θ*(m/ε)) | ≤ ε .

Algorithms to solve unconstrained minimization problems are discussed in [58].
The approach presented above works, but can be very slow. A better way is to increase t in steps to the desired value m/ε and to use each intermediate solution as a starting value for the next minimization. This algorithm is called the sequential unconstrained minimization technique (SUMT) and is described by the following steps:

Given: a strictly feasible θ ∈ G, t > 0 and a tolerance ε.

Step 1: Compute θ*(t) starting from θ:  θ*(t) = arg min_θ { t f(θ) + φ(θ) }.
Step 2: Set θ = θ*(t).
Step 3: If m/t ≤ ε, return θ and stop.
Step 4: Increase t and go to step 1.

This algorithm generates a sequence of points on the central path and solves the constrained optimization problem via a sequence of unconstrained minimizations (often quasi-Newton methods). A simple update rule for t is used, so t(k+1) = γ t(k), where γ is typically between 10 and 100.
The steps 1-4 are called the outer iteration, and step 1 involves an inner iteration (e.g., the Newton steps). A trade-off has to be made: a smaller γ means fewer inner iterations to compute θ*_{k+1} from θ*_k, but more outer iterations.
In Nesterov and Nemirovsky [49] an upper bound is derived on the number of iterations that are needed for an optimization using the interior-point method. Also tuning rules for the choice of γ are given in their book.
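A minimal MATLAB sketch of this barrier/SUMT scheme for the inequality-constrained QP of Definition 46 is given below; the example data, the damping strategy and the stopping tolerances are illustrative choices, not part of the algorithm description above.

    % SUMT for  f(theta) = 0.5*theta'*H*theta + f'*theta,  g(theta) = A*theta - b <= 0,
    % with Newton steps on  psi(theta) = t*f(theta) + phi(theta).
    H = [2 0; 0 1];  f = [-2; -1];
    A = [1 1; -1 0; 0 -1];  b = [1; 0; 0];

    theta = [0.25; 0.25];          % strictly feasible starting point
    t = 1;  gamma = 20;  epsi = 1e-6;  m = length(b);
    while m/t > epsi
      for it = 1:50                               % inner (Newton) iterations
        s    = b - A*theta;                       % slacks, s > 0 inside G
        grad = t*(H*theta + f) + A'*(1./s);
        hess = t*H + A'*diag(1./s.^2)*A;
        dth  = -hess\grad;
        step = 1;                                 % backtrack to stay inside G
        while any(b - A*(theta + step*dth) <= 0), step = step/2; end
        theta = theta + step*dth;
        if norm(dth) < 1e-9, break; end
      end
      t = gamma*t;                                % outer iteration
    end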


Appendix B: Basic State Space Operations

In the following, a realization is written as the block  G ~ [ A  B ; C  D ].

Cascade:  Ḡ(q) = G1(q) G2(q)

   G1 ~ [ A1  B1 ]      G2 ~ [ A2  B2 ]
        [ C1  D1 ]           [ C2  D2 ]

   Ḡ ~ [ A1   B1 C2   B1 D2 ]   =   [ A2      0    B2    ]
       [ 0    A2      B2    ]       [ B1 C2   A1   B1 D2 ]
       [ C1   D1 C2   D1 D2 ]       [ D1 C2   C1   D1 D2 ]

Parallel:  Ḡ(q) = G1(q) + G2(q)

   Ḡ ~ [ A1   0    B1      ]
       [ 0    A2   B2      ]
       [ C1   C2   D1 + D2 ]

Change of variables:  x → x̄ = T x (T invertible),  u → ū = P u (P invertible),  y → ȳ = R y

   G ~ [ A  B ]   →   Ḡ ~ [ T A T⁻¹   T B P⁻¹ ]
       [ C  D ]           [ R C T⁻¹   R D P⁻¹ ]

State feedback:  u → u + F x

   G ~ [ A  B ]   →   Ḡ ~ [ A + B F   B ]
       [ C  D ]           [ C + D F   D ]

Output injection:  x(k+1) = A x(k) + B u(k)   →   x(k+1) = A x(k) + B u(k) + H y(k)

   G ~ [ A  B ]   →   Ḡ ~ [ A + H C   B + H D ]
       [ C  D ]           [ C         D       ]

Transpose:  Ḡ(q) = Gᵀ(q)

   G ~ [ A  B ]   →   Ḡ ~ [ Aᵀ   Cᵀ ]
       [ C  D ]           [ Bᵀ   Dᵀ ]

Left (right) inversion:  Ḡ(q) = G⁺(q), where G⁺ is a left (right) inverse of G

   G ~ [ A  B ]   →   G⁺ ~ [ A − B D⁺ C   B D⁺ ]
       [ C  D ]            [ −D⁺ C        D⁺   ]

where D⁺ is the left (right) inverse of D. If G(q) is square, we obtain G⁺(q) = G⁻¹(q) and D⁺ = D⁻¹.
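The cascade and parallel formulas translate directly into MATLAB; the sketch below uses arbitrary example realizations (the Control System Toolbox functions series and parallel perform the same compositions on ss objects).

    % Cascade and parallel composition of two state-space models
    A1 = [0.5 1; 0 0.2];  B1 = [1; 0];  C1 = [1 0];  D1 = 0;
    A2 = 0.8;             B2 = 1;       C2 = 0.5;    D2 = 0.1;

    % cascade G = G1*G2
    Ac = [A1 B1*C2; zeros(1,2) A2];
    Bc = [B1*D2; B2];
    Cc = [C1 D1*C2];
    Dc = D1*D2;

    % parallel G = G1 + G2
    Ap = blkdiag(A1, A2);
    Bp = [B1; B2];
    Cp = [C1 C2];
    Dp = D1 + D2;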

Appendix C: Some results from linear algebra

Left inverse and left complement:
Consider a full-column-rank matrix M ∈ R^{m×n} with m ≥ n. Let the singular value decomposition of M be given by:

   M = [ U1  U2 ] [ Σ ] Vᵀ
                  [ 0 ]

The left inverse is defined as

   M^l = V Σ⁻¹ U1ᵀ

and the left complement is defined as

   M^{l⊥} = U2ᵀ

The left inverse and left complement have the following properties:

   M^l M = I        M^{l⊥} M = 0

Right inverse and right complement:
Consider a full-row-rank matrix M ∈ R^{m×n} with m ≤ n. Let the singular value decomposition of M be given by:

   M = U [ Σ  0 ] [ V1ᵀ ]
                  [ V2ᵀ ]

The right inverse is defined as

   M^r = V1 Σ⁻¹ Uᵀ

and the right complement is defined as

   M^{r⊥} = V2

The right inverse and right complement have the following properties:

   M M^r = I        M M^{r⊥} = 0

Singular value decomposition

If A ∈ R^{m×n} then there exist orthogonal matrices

   U = [ u1 ... um ] ∈ R^{m×m}    and    V = [ v1 ... vn ] ∈ R^{n×n}

such that

   Uᵀ A V = Σ

where

   Σ = [ diag(σ1, ..., σn) ]  for m ≥ n ,        Σ = [ diag(σ1, ..., σm)   0 ]  for m ≤ n
       [        0          ]

with σ1 ≥ σ2 ≥ ... ≥ σp ≥ 0, p = min(m, n). This means that

   A = U Σ Vᵀ

This decomposition of A is known as the singular value decomposition of A. The σi are known as the singular values of A (and are usually arranged in descending order).

Property:
U contains the eigenvectors of A Aᵀ, V contains the eigenvectors of Aᵀ A, and Σ is a diagonal matrix whose entries are the non-negative square roots of the eigenvalues of Aᵀ A (for m ≥ n) or of A Aᵀ (for m ≤ n).

Property:
Let σ_min and σ_max be the smallest and largest singular value of a square matrix A, and let λi denote the eigenvalues of A. Then:

   σ_min ≤ |λi| ≤ σ_max
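A minimal MATLAB sketch of the left inverse and left complement defined above, using an arbitrary full-column-rank example matrix:

    M = [1 0; 2 1; 0 3];                 % full column rank, m = 3 >= n = 2
    [U, S, V] = svd(M);                  % M = U*[Sigma; 0]*V'
    Sigma = S(1:2, 1:2);
    U1 = U(:, 1:2);  U2 = U(:, 3);

    Ml  = V/Sigma*U1';                   % left inverse:    Ml*M  = eye(2)
    Mlp = U2';                           % left complement: Mlp*M = zeros(1,2)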

Appendix D: Model descriptions

D.1 Impulse and step response models

D.1.1 Model descriptions
Let g_m and s_m denote the impulse response parameters and the step response parameters of the stable system G_o(q), respectively. Then

   s_m = Σ_{j=1}^{m} g_j    and    g_m = s_m − s_{m−1} ,    m ∈ {1, 2, ...}

The transfer function G_o(q) is found by

   G_o(q) = Σ_{m=1}^{∞} g_m q^{−m} = Σ_{m=1}^{∞} s_m q^{−m} (1 − q^{−1})                    (8.16)

In the same way, let f_m and t_m denote the impulse response parameters and the step response parameters of the stable disturbance model F_o(q), respectively. Then

   t_m = Σ_{j=0}^{m} f_j    and    f_m = t_m − t_{m−1} ,    m ∈ {0, 1, 2, ...}

The transfer function F_o(q) is found by

   F_o(q) = Σ_{m=0}^{∞} f_m q^{−m} = Σ_{m=0}^{∞} t_m q^{−m} (1 − q^{−1})                    (8.17)

The truncated impulse response model is defined by

   y(k) = Σ_{m=1}^{n} g_m u(k−m) + Σ_{m=0}^{n} f_m d_o(k−m) + e_o(k)

where n is an integer such that g_m ≈ 0 and f_m ≈ 0 for all m > n.
This is an IO model in which the disturbance d_o(k) is known and the noise signal e_o(k) is chosen as ZMWN.

The truncated step response model is defined by

   y(k) = Σ_{m=1}^{n−1} s_m Δu(k−m) + s_n u(k−n) + Σ_{m=0}^{n−1} t_m d_i(k−m) + t_n d_o(k−n) + e_o(k)

This is an IIO model in which d_i(k) = Δd_o(k) = d_o(k) − d_o(k−1) is the disturbance increment signal and the noise increment signal e_i(k) = Δe_o(k) = e_o(k) − e_o(k−1) is chosen as ZMWN to account for an offset at the process output, so

   e_o(k) = Δ^{−1}(q) e_i(k)
From impulse response model to state space model
(Lee et al. [38]) Let
Go (q) = g1 q 1 + g2 q 2 + . . . + gn q n
Fo (q) = f0 + f1 q 1 + f2 q 2 + . . . + fn q n
Ho (q) = 1
where gi are the impulse response parameters of the system, then the state space matrices
are given by:

Ao =

Co =

I 0 ...
0 I
..
..
.
.
.
0 0 0 ..
0 0 0 ...

0
0
..
.

0
0
..
.

I
0

I 0 0 ... 0

Ko =

0
0
..
.

Lo =

Bo =

g1
g2
..
.

gn

fn

DH = I

f1
f2
..
.

DF = f0

This IO system can be translated into an IIO model using the formulas from section 2.1:

I
0
..
.

I 0 ...

0 I

..
..

.
.
Ai =
0 0 0

0 0 0 ...
0 0 0 ...
Ci =

0 0
0 0

..
.

I 0

0 I
0 0

I I 0 ... 0

Ki =

I
0
..
.
0

DH = I

Li =

f0
f1
f2
..
.
fn

DF = f0

Bi =

0
g1
g2
..
.
gn

185

q 1

-?
ex3 6

6
-

g1
q 1

f2

-?
ex2 6

-?
e x1 6 y

q 1

f1

eo

f0

Figure 8.5: IO state space representation of impulse response model


-
?

?
e6

gn1

gn

g2

q 1 x - e-
n+1 6

fn
di

e-?
xn 6

g2

fn1

fn
do

gn1

gn
?
e 6

fn1
6
-

- e- 1
x4 6 q

f2

g1
?

- e- 1
x3 6 q

f1
6

eo
x1
@
@ ?
Re
@
x2 6

q 1 
y

f0
6

Figure 8.6: IIO state space representation of impulse response model


From step response model to state space model
(Lee et al. [38]) Let
Gi (q) = s1 q 1 + s2 q 2 + . . . + sn1 q n+1 + sn q n + sn q n1 + sn q n2 + . . .
q n
= s1 q 1 + s2 q 2 + . . . + sn1 q n+1 + sn
=
1 q 1
q 1
q 2
q n
= g1
+
g
+
.
.
.
+
g
2
n
1 q 1
1 q 1
1 q 1
1
2
n+1
Fi (q) = t0 + t1 q + t2 q + . . . + tn1 q
+ tn q n + tn q n1 + tn q n2 + . . .
q n
= t0 + t1 q 1 + t2 q 2 + . . . + tn1 q n+1 + tn
1 q 1
1
2
1
q
q
q n
= f0
+
f
+
f
+
.
.
.
+
f
1
2
n
1 q 1
1 q 1
1 q 1
1 q 1
I
Hi (q) =
(1 q 1 )

186

Appendices

where si are the step response parameters of the system, then the state space matrices are
given by:

0 I 0 ... 0 0

0 0 I
0 0
I
t1
s1

.. ..
I
t2
s2
..
..

.
.
K
Ai = . .
=
=
=
L
B

..
..
..
i
i
i

0 0 0

.
.
I 0
.

0 0 0 ... 0 I
tn
sn
I
0 0 0 ... 0 I
Ci =

I 0 0 ... 0

DH = I

DF = t0 = f0

Figure 8.7: State space representation of the step response model.

Prediction using step response models


prediction for innite-length step response model:
The step response model has been derived in section 2.2.2 and is given by:
y(k) =


m=1

sm u(k m) +


m=0

tm di (k m) +

ei (k m)

(8.18)

m=0

where y is the output signal, u is the increment of the input signal u, di is the increment
of the (known) disturbance signal do , and ei , which is the increment of the output noise
signal eo , is assumed to be zero-mean white noise.
Note that the output signal y(k) in equation (8.18) is build up with all past values of the
input, disturbance and noise signals. We introduce a signal y0 (k|k ),  > 0, which gives

187
the value of y(k) based on the inputs, disturbances and noise signals up to time k ,
assuming future values to be zero, so u(t) = 0, di(t) = 0 and ei (t) = 0 for t > k :
y0 (k + j|k ) =

sj++m u(km) +

m=0

tj++m di (km) +

m=0

ei (km)

(8.19)

m=0

Based on the above denition it easy to derive a recursive expression for y0 :


y0 (k + j|k) =
=


m=0

sj+m u(km) +
sj+m u(km) +

m=1


m=0

tj+m di (km) +

ei (km)

m=0

tj+m di (km)

m=1

ei (km) + sj u(k) + tj di (k) + ei (k)

m=1

= y0 (k + j|k 1) + sj u(k) + tj di (k) + ei (k)


Future values of y can be derived as follows:
y(k + j) =
=


m=1

sm u(k + j m) +

tm di (k + j m) +

m=0

sj+m u(k m) +

m=1

j


sm u(k + j m) +

ei (k + j m)

m=0

tj+m di (k m) +

m=1

ei (k m)

m=1
j


tm di(k + j m) +

m=0

m=1

j


ei (k + j m)

m=0

= y0 (k + j|k 1)
j
j
j



+
sm u(k + j m) +
tm di(k + j m) +
ei (k + j m)
m=1

m=0

m=0

Suppose we are at time k, then we only have knowlede up to time k, so ei (k + j), j > 0 is
unknown. The best estimate we can make of the zero-mean white noise signal for future
time instants is (see also previous section):
ei (k + j|k) = 0 , for j > 0
If we assume u() and di () to be known in the interval k  k + j we obtain the
prediction of y(k + j), based on knowledge until time k, and denoted as y(k + j|k):
y(k + j|k) = y0 (k + j|k 1) +

j

m=1

sm u(k + j m) +

j

m=0

tm di (k + j m) + ei (k)

188

Appendices

prediction for truncated step response model:


So far we derived the following relations:
y0 (k + j|k) = y0 (k + j|k 1) + sj u(k) + tj di (k) + ei (k)
y(k) = y0 (k|k 1) + t0 di (k) + ei (k)
j
j


y(k + j|k) = y0 (k + j|k 1) +
sm u(k+j m) +
tm di (k+j m) + ei (k)
m=1

m=0

In section 2.2.2 the truncated step response model was introduced. For a model with
step-response length n there holds:
sm = sn

for

m>n

Using this property we can derive from 8.19 for j 0:


y0 (k+n+j|k 1) =
=


m=1

sn+m+j u(k m) +

tn+m+j di (km) +

m=1

sn u(km) +

m=1

ei (km)

m=1

tn di (km) +

m=1

ei (km)

m=1

= y0 (k+n1|k 1)
and so y0 (k +n+j|k 1) = y0 (k +n1|k 1) for j 0. This means that for computation of
y(k +n+j|k) where j 0, so when the prediction is beyond the length of the step-response
we can use the equation:
y(k + n + j|k) = y0 (k + n 1|k 1) +

n+j


sm u(k + n + j m)

m=1

n+j


tm di (k + n + j m) + ei (k)

m=0

Matrix notation
From the above derivation we found that computation of ŷ0(k+j|k−1) for j = 0, ..., n−1 is sufficient to make predictions for all j ≥ 0. We obtain the equations:

   ŷ0(k+j|k) = ŷ0(k+j|k−1) + s_j Δu(k) + t_j d_i(k) + e_i(k)          for 0 ≤ j ≤ n−1
   ŷ0(k+j|k) = ŷ0(k+n−1|k−1) + s_n Δu(k) + t_n d_i(k) + e_i(k)        for j ≥ n
   ŷ(k|k) = ŷ0(k|k−1) + t_0 d_i(k) + e_i(k)                                                  (8.20)

189
In matrix form:

t1 di (k)
y0 (k+1|k1)
s1 u(k)
y0 (k+1|k)
ei (k)
y0 (k+2|k) y0 (k+2|k1) s2 u(k) t2 di (k) ei (k)

ei (k)

..
..
..
..
+
+
+
=

.
.
.
.


..


y0 (k+n1|k) y0 (k+n1|k1) sn1 u(k) tn1 di (k) .
y0 (k+n|k)
ei (k)
y0 (k+n1|k1)
sn u(k)
tn di (k)
y(k|k) = y0 (k|k 1) + t0 di (k) + ei (k)
Dene the following signal vector for j 0,  0:

y0 (k + j + 1|k )
y0 (k + j + 2|k )

..
y0 (k + j|k ) =

y0 (k + j + n 1|k )
y0 (k + j + n|k )
and matrices

M =

C=

0 0
0 0

..
.

I 0

0 I
0 I

0
I
.. . .
.
.
0 0 0
0 0 0
0 0 0
0
0
..
.

I
0
..
.

I 0 0

s1
s2
..
.

S=

sn1
sn

T =

tn1
tn

t1
t2
..
.

Then equations (8.20) can be written as:

   ỹ0(k+1|k) = M ỹ0(k|k−1) + S Δu(k) + T d_i(k) + I_n e_i(k)                                 (8.21)
   y(k) = C ỹ0(k|k−1) + t_0 d_i(k) + e_i(k)                                                  (8.22)
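The matrices M, S and T of the one-step update (8.21) are easily constructed from the step-response parameters; the following MATLAB fragment is a minimal sketch for a SISO model with illustrative numbers.

    % Construction of M, S and T for the step-response prediction model (8.21)
    s = [0.2 0.5 0.8 1.0 1.0];  t = [0.1 0.2 0.2 0.2 0.2];   % s_1..s_n, t_1..t_n
    n = length(s);

    M = [zeros(n-1,1) eye(n-1)];     % shift of the free-response vector
    M = [M; M(end,:)];               % last row repeated (prediction frozen at k+n-1)
    S = s(:);                        % [s_1; ...; s_n]
    T = t(:);                        % [t_1; ...; t_n]

    % update (8.21):  y0_new = M*y0 + S*du + T*ddi + ones(n,1)*ei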

To make predictions y(k + j|k), we can extend (8.22) over the prediction horizon N. In
matrix form:

y0 (k+1|k1)
s1 u(k)
y(k+1|k)
y(k+2|k) y0 (k+2|k1)

s2 u(k)+s1 u(k+1)

.
.
.

..
..
..

y(k+n1|k) = y0 (k+n1|k1) + sn1 u(k)+ +s1 u(k+n2) +


y(k+n|k) y0 (k+n1|k1) sn u(k)+ +s1 u(k+n1)

..
..
..

.
.
.
y(k+N|k)
sN u(k)+ +s1 u(k+N 1)
y0 (k+n1|k1)

190

Appendices

t1 di (k)
t2 di (k)+t1 di (k+1)
..
.

ei (k)
ei (k)
..
.

+ ei (k)

ei (k)
.
..
tN di (k)+ +t1 di (k+N 1)
ei (k)

+ tn1 di(k)+ +t1 di (k+n2)

tn di (k)+ +t1 di (k+n1)

..

Dene the following signal vectors:

u(k)
y(k+1|k)
u(k+1)
y(k+2|k)

..
..
u

(k)
=
z(k) =

.
.

u(k+N 2)
y(k+N 1|k)
u(k+N 1)
y(k+N|k)
and matrices

M =

0
0
..
.

I
0
..
.

0
0
..
.

0
0
..
.

0 0

0 0
0 0
..

I 0

0 I
.. ..
. .
0 0 I

0
I
.. . .
.
.
0
0
..
.

T =

t1
t2
..
.

..

S=

s1
s2
..
.

0
s1

sN sN 1

di (k)
di (k+1)
..
.

w(k)

di (k+N 2)
di (k+N 1)

..
.
..

0
0
..
.

s1

..
.

0
t1

tN tN 1

0
0
..
.

t1

(Note that s_m = s_n and t_m = t_n for m > n.)

Now the prediction model can be written as:

   z̃(k) = M ỹ0(k) + S Δũ(k) + T w̃(k) + I_N e(k)                                             (8.23)

Relation to state space model


The model derived in equations (8.21), (8.22) and (8.23) are similar to the state space
model derived in the previous section. By choosing
x(k) = y0 (k)
A=M

z(k) = y(k)
B1 = In

e(k) = ei (k)

B2 = T

w(k) = di (k)

B3 = S

v(k) = u(k)

191
C1 = C

D11 = I

D12 = t0

C2 = M

21 = IN
D

22 = T
D

23 = S
D

we have obtain the following prediction model:


x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
21 e(k) + D
22 w(k)
23 v(k)
z(k) = C2 x(k) + D

+D

D.2 Polynomial models

D.2.1 Model description
Consider the SISO IO polynomial model:

   G_o(q) = b_o(q)/a_o(q) ,    F_o(q) = f_o(q)/a_o(q) ,    H_o(q) = c_o(q)/a_o(q)

where a_o(q), b_o(q), c_o(q) and f_o(q) are polynomials in the operator q^{−1}:

   a_o(q) = 1 + a_{o,1} q^{−1} + ... + a_{o,na} q^{−na}
   b_o(q) = b_{o,1} q^{−1} + ... + b_{o,nb} q^{−nb}
   c_o(q) = 1 + c_{o,1} q^{−1} + ... + c_{o,nc} q^{−nc}
   f_o(q) = f_{o,0} + f_{o,1} q^{−1} + ... + f_{o,nf} q^{−nf}                                  (8.24)

The difference equation corresponding to the IO model is given by the controlled auto-regressive moving average (CARMA) model:

   a_o(q) y(k) = b_o(q) u(k) + f_o(q) d_o(k) + c_o(q) e_o(k)

in which e_o(k) is chosen as ZMWN.
Now consider the SISO IIO polynomial model (compare eq. 2.13):

   G_i(q) = b_i(q)/a_i(q) ,    F_i(q) = f_i(q)/a_i(q) ,    H_i(q) = c_i(q)/a_i(q)

where a_i(q) = a_o(q)Δ(q) = a_o(q)(1 − q^{−1}), b_i(q) = b_o(q), c_i(q) = c_o(q), and f_i(q) = f_o(q), so

   a_i(q) = 1 + a_{i,1} q^{−1} + ... + a_{i,n+1} q^{−n−1}  = (1 − q^{−1})(1 + a_{o,1} q^{−1} + ... + a_{o,n} q^{−n})
   b_i(q) = b_{i,1} q^{−1} + ... + b_{i,n} q^{−n}          = b_{o,1} q^{−1} + ... + b_{o,n} q^{−n}
   c_i(q) = 1 + c_{i,1} q^{−1} + ... + c_{i,n} q^{−n}      = 1 + c_{o,1} q^{−1} + ... + c_{o,n} q^{−n}
   f_i(q) = f_{i,0} + f_{i,1} q^{−1} + ... + f_{i,n} q^{−n} = f_{o,0} + f_{o,1} q^{−1} + ... + f_{o,n} q^{−n}       (8.25)

The difference equation corresponding to the IIO model is given by the controlled auto-regressive integrated moving average (CARIMA) model:

   a_i(q) y(k) = b_i(q) Δu(k) + f_i(q) d_i(k) + c_i(q) e_i(k)

in which d_i(k) = Δ(q) d_o(k) and e_i(k) is chosen as ZMWN.

From IO polynomial model to state space model


(Kailath [33]) Let
Go (q) =

bo (q)
ao (q)

Fo (q) =

fo (q)
ao (q)

Ho (q) =

co (q)
ao (q)

where ao (q), bo (q), fo (q) and co (q) are given in (8.24). Then the state space matrices are
given by:

ao,1
ao,2
..
.

1 0 ...
0 1
..
..
.
.
..
.
0 0

0
0
..
.

Ao =

ao,n1
1
ao,n 0 0 . . . 0
Co =

1 0 0 ... 0

fo,1 f0,o ao,1


fo,2 f0,o ao,2

..
Lo =
.

fo,n1 f0,o ao,n1


fo,n f0,o ao,n

Bo =

bo,n1
bo,n

bo,1
bo,2
..
.

DF = fo,0

co,1 ao,1
co,2 ao,2

..
Ko =
.

co,n1 ao,n1
co,n ao,n

(8.26)

DH = I

(8.27)

(8.28)

193
u

bo,n

co,n

@
R e
@

 I
@
@

ao,n
6

q 1

xn

eo
?

bo,2

co,2

bo,1

co,1

@
R e
@
-

x3  @
I

q 1

@
R e
@
-

x2  @
I

q 1

fo,n

ao,2

fo,2

ao,1

fo,1

6



? -e

x1

do

Figure 8.8: State space representation of IO polynomial model


A state space realization of a polynomial model is not unique. Any (linear nonsingular)
state transformation gives a valid alternative. The above realization is called the observer
canonical form (Kailath [33]).
We can also translate the IO polynomial model into an IIO state space model using the
transformation formulas. We obtain the state space matrices:

1
1
0 0 ... 0
0
b0,1
0 ao,1 1 0 . . . 0

bo,2

0 ao,2 0 1
0

..
..
Bi = ..
(8.29)
Ai = ..
. . ..
. .
.
.
.
.

bo,n1
0 ao,n1 0 0 . . . 1
bo,n
0 ao,n 0 0 . . . 0


Ci = 1 1 0 0 . . . 0
DH = I DF = fo,0
(8.30)

0
1
fo,1 f0,o ao,1
co,1 ao,1

fo,2 f0,o ao,2


co,2 ao,2

Li =
Ki =
(8.31)

..
..

.
.

fo,n1 f0,o ao,n1


co,n1 ao,n1
fo,n f0,o ao,n
co,n ao,n
Example 52 : IO State space representation of IO polynomial model
In this example we show how to nd the state space realization of an IO polynomial model.
Consider the second order system
y(k) = Go (q) u(k) + Fo (q) do(k) + Ho (q) eo (k)

194

Appendices

where y(k) is the proces output, u(k) is the proces input, do (k) is a known disturbance
signal and eo (k) is assumed to be ZMWN and
Go (q) =

q 1
(1 0.5q 1 )(1 0.2q 1)

Fo (q) =

0.1q 1
1 0.2q 1

Ho (q) =

1 + 0.5q 1
1 0.5q 1

To use the derived formulas, we have to give Go (q), Ho (q) and Fo (q) the common denominator (1 0.5q 1 )(1 0.2q 1 ) = (1 0.7q 1 + 0.1q 2 ):
Fo (q) =

(0.1q 1 )(1 0.5q 1)


0.1q 1 0.05q 2
=
(1 0.2q 1)(1 0.5q 1 )
(1 0.7q 1 + 0.1q 2 )

Ho (q) =

(1 + 0.5q 1)(1 0.2q 1 )


1 + 0.3q 1 0.1q 2
=
(1 0.5q 1 )(1 0.2q 1)
(1 0.7q 1 + 0.1q 2 )

A state-space representation for this example is now given by (8.26)-(8.28):




 


0.7 1
1
Ao =
Bo =
Co = 1 0
0.1 0
0




0.1
1
Lo =
Ko =
DH = 1
DF = 0
0.05
0.2
and so
xo1 (k + 1) = 0.7 xo1 (k) + xo2 (k) + eo (k) + 0.1 do(k) + u(k)
xo2 (k + 1) = 0.1 xo1 (k) 0.2 eo (k) 0.05 do(k)
y(k) = xo1 (k) + eo (k)
Example 53 : IIO State space representation of IO polynomial model
In this example we show how to nd the IIO state space realization of the IO polynomial
system of example 52 using (8.29)-(8.31):


1
1
0
0



Bi = 1
Ci = 1 1 0
Ai = 0 0.7 1
0 0.1 0
0

0
1
Ki = 1
DH = 1
DF = 0
Li = 0.1
0.05
0.2
and so
xi1 (k + 1)
xi2 (k + 1)
xi3 (k + 1)
y(k)

=
=
=
=

xi1 (k) + xi2 (k) + ei (k)


0.7 xi2 (k) + xi3 (k) + ei (k) + 0.1 di(k) + u(k)
0.1 xi2 (k) 0.2 ei (k) 0.05 di(k)
xi1 (k) + xi2 (k) + ei (k)

195
From IIO polynomial model to state space model
(Kailath [33]) Let
Gi (q) =

bi (q)
ai (q)

Fi (q) =

fi (q)
ai (q)

Hi (q) =

where ai (q), bi (q) and ci (q) are given in (8.25). Then


by:

ai,1 1 0 . . . 0
bi,1

ai,2 0 1
bi,2
0

..
..
. . ..

. .
.
.
Bi = ...
Ai =

ai,n 0 0 . . . 1
bi,n
0
ai,n+1 0 0 . . . 0


Ci = 1 0 0 . . . 0

ci,1 ai,1
fi,1
ci,2 ai,2
fi,2

..
Ki =
Li = ...

ci,n ai,n
fi,n
0
ai,n+1

ci (q)
ai (q)

the state space matrices are given

(8.32)

(8.33)

(8.34)

Example 54 : IIO State space representation of IIO polynomial model


In this example we show how to nd the state space realization of the system of example
52, rewritten as an IIO polynomial model: We consider the system:
y(k) = Gi (q) u(k) + Fi (q) di(k) + Hi (q) ei (k)
where y(k) is the proces output, u(k) is the proces increment input, di (k) is the increment
disturbance signal and e(k) is assumed to be integrated ZMWN and
Gi (q) =
=
Fi (q) =
=
Hi (q) =
=

q 1
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
q 1
(1 1.7q 1 + 0.8q 2 0.1q 3 )
0.1q 1 (1 0.5q 1 )
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
0.1q 1 0.05q 2
(1 1.7q 1 + 0.8q 2 0.1q 3 )
(1 + 0.5q 1 )(1 0.2q 1 )
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
1 + 0.3q 1 0.1q 2
(1 1.7q 1 + 0.8q 2 0.1q 3 )

196

Appendices

A state-space representation for this example is now given by (8.33)-(8.34):


1.7 1 0
1



Bi = 0
Ci = 1 0 0
Ai = 0.8 0 1
0.1 0 0
0

0.1
Li = 0.05
0

2.0
Ki = 0.9
0.1

DH = 1

DF = 0

and so
xi1 (k + 1)
xi2 (k + 1)
xi3 (k + 1)
y(k)

=
=
=
=

1.7xi1 (k) + xi2 (k) + 2.0ei (k) + 0.1di (k) + u(k)


0.8xi1 (k) + xi3 (k) 0.9ei (k) 0.05di(k)
0.1xi1 (k) + 0.1ei (k)
xi1 (k) + ei (k)

Clearly, the IIO state space representation in example 3 is dierent from the IIO state space
representation in example 2. By denoting the state space matrices of example 2 with a hat
i , Ci , L
i, K
i ,D
H ,D
F ), the relation between the two state space representations
( so Ai , B
is given by:
Ai = T 1 Ai T
i = T 1 Bi
B

Ci = Ci T
i = T 1 Li
L

F = DF
H = DH
D
D
i
K
= T 1 Ki

where state transformation matrix T is given by:

1
1 0
T = 0.7 0 1
0.1 0 0

Prediction using polynomial models


Diophantine equation and making predictions
Consider the polynomial model

   u_1(k) = ( c(q)/a(q) ) u_2(k)

where

   a(q) = 1 + a_1 q^{−1} + ... + a_na q^{−na}
   c(q) = c_0 + c_1 q^{−1} + ... + c_nc q^{−nc}

Each set of polynomials a(q), c(q), together with a positive integer j, satisfies a Diophantine equation

   c(q) = M_j(q) a(q) + q^{−j} L_j(q)                                                          (8.35)

where

   M_j(q) = m_{j,0} + m_{j,1} q^{−1} + ... + m_{j,j−1} q^{−j+1}
   L_j(q) = l_{j,0} + l_{j,1} q^{−1} + ... + l_{j,na} q^{−na}

Remark 55: Diophantine equations are standard in the theory of prediction with polynomial models. An algorithm for solving these equations is given in appendix B of Soeterboek [61].

When making a prediction of u_1(k+j) for j > 0, we are interested in how u_1(k+j) depends on future values u_2(k+i), 0 < i ≤ j, and on present and past values u_2(k−i), i ≥ 0. Using the Diophantine equation (8.35) we derive (Clarke et al. [14][15], Bitmead et al. [6]):

   u_1(k+j) = ( c(q)/a(q) ) u_2(k+j)
            = M_j(q) u_2(k+j) + q^{−j} ( L_j(q)/a(q) ) u_2(k+j)
            = M_j(q) u_2(k+j) + ( L_j(q)/a(q) ) u_2(k)

This means that u_1(k+j) can be written as

   u_1(k+j) = u_1,future(k+j) + u_1,past(k+j)

where the first term

   u_1,future(k+j) = M_j(q) u_2(k+j) = m_{j,0} u_2(k+j) + ... + m_{j,j−1} u_2(k+1)

only consists of future values u_2(k+i), 1 ≤ i ≤ j, and the second term

   u_1,past(k+j) = ( L_j(q)/a(q) ) u_2(k)

only of present and past values u_2(k−i), i ≥ 0.
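The Diophantine equation (8.35) can be solved by division by increasing powers of q^{−1}. The following MATLAB function is a minimal sketch of such an algorithm (it is not the routine of Soeterboek [61], only an illustration of the same computation).

    function [Mj, Lj] = diophantine(a, c, j)
    % Solve c(q) = Mj(q)*a(q) + q^(-j)*Lj(q) by division by increasing powers.
    % a and c are coefficient vectors in ascending powers of q^(-1), a(1) = 1.
    % Example: [Mj,Lj] = diophantine([1 -0.5], 1, 2) gives Mj = [1 0.5], Lj = [0.25 0].
      na = length(a) - 1;
      r  = [c zeros(1, max(0, j + na + 1 - length(c)))];   % working remainder
      Mj = zeros(1, j);
      for i = 1:j
        Mj(i) = r(i);                              % coefficient m_{j,i-1}
        r(i:i+na) = r(i:i+na) - Mj(i)*a;           % subtract m_{j,i-1}*q^(-(i-1))*a(q)
      end
      Lj = r(j+1:j+na+1);                          % remainder coefficients l_{j,0..na}
    end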


Prediction using polynomial models
Let model (3.1) be given in polynomial form as:
a(q) y(k) = c(q) e(k) + f1 (q) w1 (k) + f2 (q) w2 (k) + b(q) v(k)
n(q) z(k) = h(q) e(k) + m1 (q) w1(k) + m2 (q) w2(k) + g(q) v(k)

(8.36)
(8.37)

198

Appendices

where w1 and w2 usually corresponds to the reference signal r(k) and the known distrubance
signal d(k), respectively. In the sequel of this section we will use



 w1 (k)
f1 (q) f2 (q)
f (q)w(q) =
(8.38)
w2 (k)


 w1 (k)

m1 (q) m2 (q)
(8.39)
m(q)w(q) =
w2 (k)
as an easier notation.
To make predictions of future output signals of CARIMA models one rst needs to solve
the following Diophantine equation
h(q) = Ej (q) n(q) + q j Fj (q)

(8.40)

Using (8.37) and (8.40), we derive


h(q)
m(q)
g(q)
e(k + j) +
w(k + j) +
v(k + j)
n(q)
n(q)
n(q)
m(q)
g(q)
q j Fj (q)
e(k + j) +
w(k + j) +
v(k + j)
= Ej (q) e(k + j) +
n(q)
n(q)
n(q)
m(q)
g(q)
Fj (q)
= Ej (q) e(k + j) +
e(k) +
w(k + j) +
v(k + j)
(8.41)
n(q)
n(q)
n(q)

z(k + j) =

From (8.36) we derive


e(k) =

a(q)
f (q)
b(q)
y(k)
w(k)
v(k)
c(q)
c(q)
c(q)

Substitution in (8.41) yields:


Fj (q)
m(q)
g(q)
e(k) +
w(k + j) +
v(k + j)
n(q)
n(q)
n(q)
Fj (q)a(q)
Fj (q)f (q)
Fj (q)b(q)
= Ej (q) e(k + j) +
y(k)
w(k)
v(k) +
n(q)c(q)
n(q)c(q)
n(q)c(q)
g(q)
m(q)
w(k + j) +
v(k + j)
+
n(q)
n(q)


m(q) q j Fj (q)f (q)
Fj (q)a(q)
= Ej (q) e(k + j) +
y(k) +

w(k + j) +
n(q)c(q)
n(q)
n(q)c(q)


g(q) q j Fj (q)b(q)

v(k + j)
+
n(q)
n(q)c(q)
m(q)

g(q)
Fj (q)a(q)
y(k) +
w(k + j) +
v(k + j)
= Ej (q) e(k + j) +
n(q)c(q)
n(q)c(q)
n(q)c(q)

z(k + j) = Ej (q) e(k + j) +

199
where
m(q)

= m(q)c(q) q j Fj (q)f (q)


g(q) = g(q)c(q) q j Fj (q)b(q)
Finally solve the Diophantine equation:
q
g (q) = j (q) n(q) c(q) + q j j (q)

(8.42)

Then:
m(q)

g(q)
Fj (q)a(q)
y(k) +
w(k + j) +
v(k + j)
n(q)c(q)
n(q)c(q)
n(q)c(q)
Fj (q)a(q)
m(q)

= Ej (q) e(k + j) +
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)
q 1 (q)
+
v(k) + q 1 j (q) v(k + j)
n(q)c(q)
Fj (q)a(q)
m(q)

= Ej (q) e(k + j) +
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)
(q)
v(k 1) + j (q) v(k + j 1)
+
n(q)c(q)

z(k + j) = Ej (q) e(k + j) +

Using the fact that the prediction e(k + j|k) = 0 for j > 0 the rst term vanishes and the
optimal prediction z(k + j) for j > 0 is given by:


m(q)

Fj (q)a(q)
z(k + j) =
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)


(q)
v(k 1) + j (q) v(k + j 1)
+
n(q)c(q)

w f (k + j) + (q) v f (k 1) + j (q) v(k + j 1)


= Fj (q)a(q) y f (k) + m(q)
where y f , w f and v f are the ltered signals
1
y(k)
n(q)c(q)
1
w(k)
w f (k) =
n(q)c(q)
1
v f (k) =
v(k)
n(q)c(q)
y f (k) =

The free-run response is now given by Fj (q)a(q) y f (k) + m(q)

w f (k + j) + j (q) v f (k 1).
which is the output response if all future input signals are taken zero. The last term
j v(k + j 1) accounts for the inuence of the future input signals v(k), . . . , v(k + j 1).
The prediction of the performance signal can now be given as:

200

Appendices

z(k) =

F0 (q) y f (k) + m
0 (q) w f (k) + 0 (q) v f (k1) + 0 (q) v(k)
1 (q) w f (k+1) + 1 (q) v f (k1) + 1 (q) v(k+1)
F1 (q) y f (k) + m
..
.

FN1 (q) y f (k) + m


N1 (q) w f (k+N 1) + N1 (q) v f (k1) + N1 (q) v(k+N 1)
vz v(k)
= z0 (k) + D
vz will be equal to z0 and D
3,
The free response vector z0 and the predictor matrix D
respectively, in the case of a state-space model.
Relation to state-space model:
Consider the above system a(q) u1 (k + j) = c(q) u2(k + j) in state-space representation,
where A, B, C, D are given by the controller canonical form:

1
a1 a2 an1 an
0

1
0

0
0

1
0
0
B= 0
A= 0

..

..
.
..
..
..
.
.
.
.
0
0
0
1
0


C = c1 co a1 c2 co a2 . . . cn co an
D = co
Then based on the previous sections we know that the prediction of u1 (k + j) is given by:
j

u1 (k + j) = CA x(k) +

j


CAi1 Bu2 (k + j i) + Du2(k + j)

i=1

For this description we will nd that:




j,0 j,1 j,na
CAj =
CAi1 B = mj,ji for 0 < i j
D = mj,0
and so where
CAj x(k) =

Lj (q)
u2 (k)
a(q)

reects the past of the system, the equation


Du2 (k + j) +

j


CAi1 Bu2 (k + j i) = mj,0 u2 (k + j) + . . . + mj,j1 u2 (k + 1)

i=1

= Mj (q)u2 (k + j)
gives the response of the future inputs u2 (k + i), i > 0.

Appendix E: Performance indices


In this appendix we show how the LQPC performance index, the GPC performance index
and the zone performance index can be can be translated into a standard predictive control
problem.
proof of theorem 2:
First choose the performance signal

z(k) =

Q1/2 xo (k + 1)
R1/2 u(k)


(8.43)

Then it easy to see that


J(v, k) =

N1


zT (k + j|k)(j)
z (k + j|k)

j=0

N
1


xT (k + 1 + j|k)Q1/2 1 (j)Q1/2 x(k + 1 + j|k) +

j=0
N
1


uT (k + j|k)R1/2 IR1/2 u(k + j|k)

j=0

N


xT (k + j|k)Q1/2 1 (j 1)Q1/2 x(k + j|k) +

j=1
N


uT (k + j 1|k)Ru(k + j 1|k)

j=1

N

j=Nm

xT (k + j|k)Q
x(k + j|k) +

N


uT (k + j 1|k)Ru(k + j 1|k)

j=1

and so performance index (4.1) is equivalent to (4.5) for the above given z, N and . From
the IO state space equation
xo (k + 1) = Ao xo (k) + Bo u(k) + Lo d(k) + Ko eo (k)
201

202

Appendices

and using the denitions of x, e, v and w, we nd:


x(k + 1) = A x(k) + B1 e(k) + B2 d(k) + B3 v(k)

 1/2
Q xo (k + 1)
z(k) =
R1/2 u(k)
 .

1/2
Q
Ao xo (k) + Bo u(k) + Lo d(k) + Ko eo (k)
=
R1/2 u(k)
 1/2

 1/2
 1/2
 1/2



Q Bo
Q Lo
Q Ko
Q Ao
u(k)
=
xo (k) +
eo (k) +
do (k) +
0
0
0
R1/2
= C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
2 End Proof

for the given choices of A, B1 , B2 , B3 , C1 , C2 , D21 , D22 and D23 .


proof of theorem 4.1:

The GPC performance index as given in (4.8) can be transformed into the standard performance index (4.1). First choose the performance signal


yp (k + 1|k) r(k + 1)
(8.44)
z(k) =
u(k)
Then it easy to see that
J(v, k) =

N1


zT (k + j|k)(j)
z (k + j|k) =

j=0

N
1


ypT (k + 1 + j|k)r T (k + 1 + j) uT (k + j|k)

j=0


yp (k + 1 + j|k)r(k + 1 + j|k)
=
(j)
u(k + j|k)
N
1
T



=
yp (k + 1 + j|k) r(k + 1 + j) 1 (j) yp (k + 1 + j|k) r(k + 1 + j)


j=0

N
1


uT (k + j|k)u(k + j|k)

j=0

N 
T 


yp (k + j|k) r(k + j)
yp (k + j|k) r(k + j)
j=Nm

N


uT (k + j 1|k)u(k + j 1|k)

j=1

and so performance index (4.1) is equivalent to (4.8) for the above given z(k), N and (j).
Substitution of the IIO-output equation
y(k) = Ci xi (k) + DH ei (k)

203
in (4.11),(4.12) results in:

xp (k + 1) = Ap xp (k) + Bp Ci xi (k) + Bp DH ei (k)


yp (k) = Cp xp (k) + Dp Ci xi (k) + Dp DH ei (k)

and so

x(k + 1) =

=
y(k) =
=
yp (k) =


=



 
Bp DH
xp (k)
Ap Bp Ci
ei (k) +
+
Ki
0
Ai
xi (k)


 

di (k)
0
0 0
v(k) =
+
r(k + 1)
Bi
Li 0
A x(k) + B1 e(k) + B2 w(k) + B3 v(k)




di (k)
C1 x(k) + DH ei (k) + 0 0
r(k + 1)
C1 x(k) + D11 ei (k) + D12 w(k)





Cp Dp Ci x(k) + Dp DH ei (k) + 0 0


xp (k + 1)
xi (k + 1)

di (k)
r(k + 1)

for the given choices of A, B1 , B2 , B3 and C1 . For yp (k + 1|k) we derive:




yp (k + 1|k) = Cp Dp Ci x(k + 1) + Dp DH ei (k + 1|k)






 Ap Bp Ci
 Bp DH

xp (k)
C p Dp C i
+ C p Dp C i
ei (k) +
=
0
Ai
Ki
xi (k)







 0 0
 0
di (k)
C p Dp C i
+ C p Dp C i
v(k) +
r(k + 1)
Li 0
Bi
Dp DH ei (k + 1|k) =


 xp (k)



Cp Ap Cp Bp Ci + Dp Ci Ai
+ Cp Bp DH + Dp Ci Ki ei (k) +
=
xi (k)






di (k)
Dp Ci Li 0
+ Dp Ci Bi v(k)
r(k + 1)

where we used the estimate ei (k + 1|k) = 0.

204

Appendices

For z(k) we derive:




yp (k + 1|k) r(k + 1)
z(k) =
u(k)

 
 

r(k + 1)
0
yp (k + 1|k)
+
+
=
0
0
v(k)

 


xp (k)
Cp Bp DH + Dp Ci Ki
Cp Ap Cp Bp Ci + Dp Ci Ai
+
ei (k) +
=
0
0
0
xi (k)


 

di (k)
Dp Ci Bi
Dp Ci Li 0
v(k) +
+
0
0
0
r(k+1)




I
0
v(k)
r(k+1) +
0
I

 


xp (k)
Cp Bp DH + Dp Ci Ki
Cp Ap Cp Bp Ci + Dp Ci Ai
+
ei (k) +
=
0
0
0
xi (k)




Dp Ci Bi
Dp Ci Li I
v(k)
w(k) +
I
0
0
= C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
for the given choices of C2 , D21 , D22 and D23 , results in a performance index (4.1) equivalent
to (4.5).
2 End Proof

Appendix F:

Standard Predictive Control Toolbox


Version 5.6
T.J.J. van den Boom

Manual MATLAB Toolbox

January 15, 2013

205

Contents

Contents
Introduction ............................................................................................. 207

206

Reference

.............................................................................................

209

gpc

.............................................................................................

211

gpc ss

.............................................................................................

215

lqpc

.............................................................................................

218

tf2syst

.............................................................................................

221

imp2syst

.............................................................................................

222

ss2syst

.............................................................................................

224

syst2ss

.............................................................................................

226

io2iio

.............................................................................................

228

gpc2spc

.............................................................................................

229

lqpc2spc

.............................................................................................

231

add du

.............................................................................................

232

add u

.............................................................................................

234

add y

.............................................................................................

238

add x

.............................................................................................

240

add nc

.............................................................................................

242

add end

.............................................................................................

243

external

.............................................................................................

244

dgamma

.............................................................................................

246

pred

.............................................................................................

248

contr

.............................................................................................

250

lticll

.............................................................................................

252

lticon

.............................................................................................

253

simul

.............................................................................................

254

rhc

.............................................................................................

255

Standard Predictive Control Toolbox

Introduction

Introduction
This toolbox is a collection of a few Matlab functions for the design and analysis of
predictive controllers. This toolbox has been developed by Ton van den Boom at the Delft
Center for Systems and Control (DCSC), Delft University of Technology, the Netherlands.
The Standard Predictive Control toolbox is an independent tool, which only requires the
control system toolbox and the signal processing toolbox.

models in SYSTEM format


The notation in the SPC toolbox is kept as close as possible to the one in the lecture
notes. To describe systems (IO model, IIO model, standard model, prediction model) we
use the SYSTEM format, where all the system matrices are stacked in a matrix G and the
dimensions are given in a vector dim.

IO model in SYSTEM format:


Consider the IO-system
xo (k + 1) = Ao xo (k) + Ko eo (k) + Lo do (k) + Bo u(k)
y(k) = Co xo (k) + DH eo (k) + DF do (k)
for Ao Rna na , Ko Rna nk , Lo Rna nl , Bo Rna nb and Co Rnc na .
In SYSTEM format this will be given as:

G=

Ao Ko Lo Bo
C o DH D F 0

dim =

na nk nl nb nc 0 0 0

IIO model in SYSTEM format:


Consider the IIO-system
xi (k + 1) = Ai xi (k) + Ki ei (k) + Li di(k) + Bi u(k)
y(k) = Ci xi (k) + DH ei (k) + DF di (k)
for Ai Rna na , Ki Rna nk , Li Rna nl , Bi Rna nb and Ci Rnc na .
In SYSTEM format this will be given as:

G=

Ai Ki Li Bi
C i DH DF 0

Standard Predictive Control Toolbox

dim =

na nk nl nb nc 0 0 0

207

Introduction

Standard model in SYSTEM format:


Consider the standard model of the SPCP:
x(k + 1)
y(k)
z(k)
(k)
(k)

=
=
=
=
=

Ax(k) + B1 e(k) + B2 w(k) + B3 v(k)


C1 x(k) + D11 e(k) + D12 w(k)
C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
C3 x(k) + D31 e(k) + D32 w(k) + D33 v(k)
C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)

for A Rna na , Bi Rna nbi , i = 1, 2, 3, Cj Rncj na , j = 1, 2, 3, 4 and In SYSTEM


format this will be given as:

G=

A
C1
C2
C3
C4

B1
D11
D21
D31
D41

B2 B3
D12 0
D22 D23
D32 D33
D42 D43

dim =

na nb1 nb2 nb3 nc1 nc2 nc3 nc4

Prediction model in SYSTEM format:


Consider the prediction model of the SPCP:
x(k + 1)
y(k)
z(k)

(k)

(k)

=
=
=
=
=

1 e(k) + B
2 w(k)
3 v(k)
A x(k) + B

+B
11 e(k) + D
12 w(k)
13 v(k)
C1 x(k) + D

+D
21 e(k) + D
22 w(k)
23 v(k)

+D
C2 x(k) + D
31 e(k) + D
32 w(k)
33 v(k)

+D
C3 x(k) + D
41 e(k) + D
42 w(k)
43 v(k)

+D
C4 x(k) + D

i Rna nbi , Cj Rncj na , and D


ji Rncj nbi , i = 1, 2, 3, j = 2, 3, 4.
where A Rna na , B
the SYSTEM format is given as:

G=

208

A
C1
C2
C3
C4

1
B

D11
21
D
31
D
41
D

2
B

D12
22
D
32
D
42
D

3
B

D13
23
D
33
D
43
D

dim =

na nb1 nb2 nb3 nc2 nc3 nc4

Standard Predictive Control Toolbox

Reference

Reference
Common predictive control problems
gpc
gpc ss
lqpc

solves the generalized predictive control problem


solves the state space generalized predictive control problem
solves the linear quadratic predictive control problem

Construction of process model


tf2syst
imp2syst
ss2syst
syst2ss
io2iio

transforms a polynomial into a state space model in SYSTEM


format
transforms a impulse response model into a state space model in
SYSTEM format
transforms a state space model into the SYSTEM format
transforms SYSTEM format into a state space model
transforms an IO system into an IIO system

Formulation of standard predictive control problem


gpc2spc
lqpc2spc
add du
add u
add v
add y
add x
add nc
add end

transforms GPC problem into an SPC problem


transforms LQPC problem into an SPC problem
function to add increment input constraint to SPC problem
function to add input constraint to SPC problem
function to add input constraint to SPC problem
function to add output constraint to SPC problem
function to add state constraint to SPC problem
function to add a control horizon to SPC problem
function to add a state end-point constraint to SPC problem

Standard predictive control problem


external
dgamma
pred
contr
contr
lticll
lticon
simul
rhc

function makes the external signal w from d and r

function makes the selection matrix


function makes a prediction model
function computes controller matrices for predictive controller
function computes controller matrices for predictive controller
function computes the state space matrices of the LTI closed loop
function computes the state space matrices of the LTI optimal
predictive controller
function makes a closed loop simulation with predictive controller
function makes a closed loop simulation in receding horizon mode

Standard Predictive Control Toolbox

209

Reference

Demonstrations of tuning GPC and LQPC


demogpc
demolqpc

210

Tuning GPC controller


Tuning LQPC controller

Standard Predictive Control Toolbox

gpc

gpc
Purpose
Solves the Generalized Predictive Control (GPC) problem

Synopsis
[y,du]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);

Description
The function solves the GPC method (Clarke et al.,[14],[15]) for a controlled
autoregressive integrated moving average (CARIMA) model, given by
ai (q) y(k) = bi (q)u(k) + ci (q) ei (k) + fi (q) di (k)
where the increment input signal is given by u(k) = u(k) u(k 1).
A performance index, based on control and output signals, is minimized:
min J(u, k) = min

u(k)

u(k)

N


|
yp(k+j|k) r(k+j)|2 + 2

Nc


|u(k+j 1)|2

j=1

j=Nm

where
yp (k)
r(k)
y(k)
di (k)
ei (k)
u(k)
Nm
N
Nc

P (q)

= P (q)y(k) is the weighted process output signal


is the reference trajectory
is the process output signal
is the known disturbance increment signal
is the zero-mean white noise
is the process control increment signal
is the minimum cost- horizon
is the prediction horizon
is the control horizon
is the weighting on the control signal
= 1+p1q 1 +. . .+pnp q np
is a polynomial with desired closed-loop poles

The signal yp (k + j|k) is the prediction of yp (k + j), based on knowledge up to


time k. The input signal u(k + j) is forced to become constant for j Nc , so
u(k + j) = 0
Standard Predictive Control Toolbox

for

j Nc
211

gpc

Further the increment input signal is assumed to be bounded:


| u(k + j) | umax

for

0 j < Nc

The parameter determines the trade-o between tracking accuracy (rst term)
and control eort (second term). The polynomial P (q) can be chosen by the
designer and broaden the class of control objectives. np of the closed-loop poles
will be placed at the location of the roots of polynomial P (q).
The function gpc returns the output signal y and increment input signal du.
The input and output variables are the following:
y            the output signal y(k).
du           the increment input signal Δu(k).
sys          the model (in standard form).
dim          dimensions of sys.
vv           controller parameters.
ai,bi,ci,fi  polynomials of the CARIMA model.
P            is the polynomial P(q).
lambda       is the trade-off parameter λ.
Nm,N,Nc      are the summation parameters.
r            is the column vector with reference signal r(k).
di           is the column vector with disturbance increment signal di(k).
ei           is the column vector with ZMWN noise signal ei(k).
dumax        is the maximum bound Δumax on the increment input signal.
lensim       is the length of the simulation interval.
apr          vector with flags for a priori knowledge on the external signal.
rhc          flag for receding horizon mode.

Setting dumax=0 means no increment input constraint will be added.


The vector apr (with length 2) determines if the external signals di (k + j) and
r(k + j) are a priori known or not. The disturbance di (k + j) is a priori known
for apr(1)=1 and unknown for apr(1)=0. The reference r(k + j) is a priori
known for apr(2)=1 and unknown for apr(2)=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
prediction of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.

Example
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Example for gpc
%
randn('state',4);                    % set initial value of random generator

%%% G(q) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ai = conv([1 -1.787 0.798],[1 -1]);  % polynomial ai(q)
bi = 0.029*[0 1 0.928 0];            % polynomial bi(q)
ci = [1 -1.787 0.798 0];             % polynomial ci(q)
fi = [0 1 -1.787 0.798];             % polynomial fi(q)

%%% P(q) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
P = 1;                               % No pole-placement
lambda = 0;                          % increment input is not weighted
Nm = 8;                              % Minimum cost horizon
Nc = 2;                              % Control horizon
N = 25;                              % Prediction horizon
lensim = 61;                         % length simulation interval
dumax = 0;                           % dumax=0 means that no constraint is added !!
apr = [0 0];                         % no a priori knowledge about w(k)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
r  = [zeros(1,10) ones(1,100)];      % step on k=11
di = [zeros(1,20) 0.5 zeros(1,89)];  % step on k=21
ei = [zeros(1,30) -0.1*randn(1,31)]; % noise starting from k=31

[y,du] = gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,r,di,ei,dumax,apr);

u  = cumsum(du);                     % compute u(k)  from du(k)
do = cumsum(di(1:61));               % compute do(k) from di(k)
eo = cumsum(ei(1:61));               % compute eo(k) from ei(k)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% plot figures %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
t = 0:60;
[tt,rr]  = stairs(t,r(:,1:61));
[tt,uu]  = stairs(t,u);
[tt,duu] = stairs(t,du);
subplot(211);
plot(t,y,'b',tt,rr,'g',t,do,'r',t,eo,'c');
title('green: r , blue: y , red: do , cyan: eo');
grid;
subplot(212);
plot(tt,duu,'b',tt,uu,'g');
title('blue: du , green: u');
grid;
disp(' ');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

See Also
tf2io, io2iio, gpc2spc, external, pred, add_nc, add_du, add_y, contr, simul


gpc_ss
Purpose
Solves the Generalized Predictive Control (GPC) problem for a state space IIO
model

Synopsis
[y,du]=gpc_ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
   Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc_ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
   Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);

Description
The function solves the GPC method (Clarke et al., [14],[15]) for a state space
IIO model, given by

   x(k+1) = Ai x(k) + Ki ei(k) + Li di(k) + Bi Δu(k)
    y(k)  = Ci x(k) + DH ei(k) + DF di(k)

where the increment input signal is given by Δu(k) = u(k) - u(k-1).
A performance index, based on control and output signals, is minimized:

   min J(Δu, k) = Σ_{j=Nm}^{N} | Wy ( ŷp(k+j|k) - r(k+j) ) |²  +  Σ_{j=1}^{Nc} | Wu Δu(k+j-1) |²
  Δu(k)

where | · | stands for the Euclidean norm of a vector, and the variables
are given by:

yp(k)   = P(q) y(k) is the weighted process output signal
r(k)    is the reference trajectory
y(k)    is the process output signal
di(k)   is the known disturbance increment signal
ei(k)   is the zero-mean white noise
Δu(k)   is the process control increment signal
Nm      is the minimum-cost horizon
N       is the prediction horizon
Nc      is the control horizon
Wy      is the weighting matrix on the tracking error
Wu      is the weighting matrix on the control signal
P(q)    is the reference system

The reference system P(q) is given by the state space realization:

   xp(k+1) = Ap xp(k) + Bp y(k)
    yp(k)  = Cp xp(k) + Dp y(k)

The signal ŷp(k+j|k) is the prediction of yp(k+j), based on knowledge up to
time k. The input signal u(k+j) is forced to become constant for j ≥ Nc, so

   Δu(k+j) = 0    for  j ≥ Nc

Further the increment input signal is assumed to be bounded:

   | Δu(k+j) | ≤ Δumax    for  0 ≤ j < Nc

For MIMO systems Δumax can be chosen as a scalar value or a vector value
(with the same dimension as the vector Δu(k)). In the case of a scalar the bound
Δumax holds for each element of Δu(k). In the case of a vector Δumax, the
bound is taken elementwise.
The weighting matrices Wy and Wu determine the trade-off between tracking
accuracy (first term) and control effort (second term). The reference system
P(q) can be chosen by the designer to broaden the class of control objectives.
The function gpc_ss returns the output signal y and increment input signal du.
The input and output variables are the following:
y                      the output signal y(k).
du                     the increment input signal Δu(k).
sys                    the model (in standard form).
dim                    dimensions of sys.
vv                     controller parameters.
Ai,Ki,Li,Bi,Ci,DH,DF   system matrices of the IIO model.
Ap,Bp,Cp,Dp            system matrices of the reference system P(q).
Wy                     weighting matrix on the tracking error.
Wu                     weighting matrix on the input increment signal.
Nm,N,Nc                are the summation parameters.
r                      is the column vector with reference signal r(k).
di                     is the column vector with disturbance increment signal di(k).
ei                     is the column vector with ZMWN noise signal ei(k).
dumax                  is the vector with bound Δumax.
lensim                 is the length of the simulation interval.
apr                    vector with flags for a priori knowledge on the external signal.
rhc                    flag for receding horizon mode.

Setting dumax=0 means no increment input constraint will be added.


The vector apr (with length 2) determines if the external signals di (k + j) and
r(k + j) are a priori known or not. The disturbance di (k + j) is a priori known
for apr(1)=1 and unknown for apr(1)=0. The reference r(k + j) is a priori
known for apr(2)=1 and unknown for apr(2)=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
prediction of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.
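
A minimal usage sketch (not part of the original manual): the CARIMA polynomials of the gpc example are put into SYSTEM format with tf2syst, the blocks are unpacked with syst2ss (using the block layout Gi = [Ai Ki Li Bi; Ci DH DF 0] of the io2iio entry), and gpc_ss is called with a one-state realization of P(q) = 1; the weightings and horizons are illustrative assumptions only.

ai = conv([1 -1.787 0.798],[1 -1]);            % CARIMA polynomials of the gpc example
bi = 0.029*[0 1 0.928 0];
ci = [1 -1.787 0.798 0];
fi = [0 1 -1.787 0.798];
[Gi,dimi] = tf2syst(ai,bi,ci,fi);              % model in SYSTEM format
[Ai,Ki,Li,Bi,Ci,DH,DF,D13] = syst2ss(Gi,dimi); % blocks of Gi = [Ai Ki Li Bi ; Ci DH DF 0]
Ap = 0; Bp = 0; Cp = 0; Dp = 1;                % one-state realization of P(q) = 1
Wy = 1; Wu = 0.1;                              % illustrative scalar weightings (SISO)
Nm = 1; N = 25; Nc = 5; lensim = 61;
r  = [zeros(1,10) ones(1,100)];                % reference step at k=11
di = zeros(1,110); ei = zeros(1,110);          % no disturbance, no noise
dumax = 0; apr = [0 0]; rhc = 0;               % no constraint, closed loop mode
[y,du] = gpc_ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
                Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);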

See Also
gpc, tf2io, io2iio, gpc2spc, external, pred, add_nc, add_du, add_y,
contr, simul


lqpc
Purpose
Solves the Linear Quadratic Predictive Control (LQPC) problem

Synopsis
[y,u,x]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);
[y,u,x,sys,dim,vv]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);

Description
The function solves the Linear Quadratic Predictive Control (LQPC) method
(García et al., [28]) for a state space model, given by

   x(k+1) = Ao x(k) + Ko eo(k) + Lo do(k) + Bo u(k)
    y(k)  = Co x(k) + DH eo(k) + DF do(k)
A performance index, based on state and input signals, is minimized:
   min J(u, k) = Σ_{j=Nm}^{N} x̂(k+j)^T Q x̂(k+j)  +  Σ_{j=1}^{N} u(k+j-1)^T R u(k+j-1)
   u(k)

where
x(k)    is the state signal vector
u(k)    is the process control signal
y(k)    is the process output signal
do(k)   is the known disturbance signal
eo(k)   is the zero-mean white noise
Nm      is the minimum-cost horizon
N       is the prediction horizon
Nc      is the control horizon
Q       is the state weighting matrix
R       is the control weighting matrix

The signal x̂(k+j|k) is the prediction of state x(k+j), based on knowledge up
to time k. The input signal u(k+j) is forced to become constant for j ≥ Nc.
Further the input signal is assumed to be bounded:

   | u(k+j) | ≤ umax    for  0 ≤ j < Nc

The matrices Q and R determine the trade-off between tracking accuracy (first
term) and control effort (second term). For MIMO systems the matrices can
scale the states and inputs.
The function lqpc returns the output signal y, input signal u and the state
signal x. The input and output variables are the following:
y                      the output signal y(k).
u                      the input signal u(k).
x                      the state signal x(k).
sys                    the model (in standard form).
dim                    dimensions of sys.
vv                     controller parameters.
Ao,Ko,Lo,Bo,Co,DH,DF   system matrices of the state space IO model.
Q,R                    the weighting matrices Q and R.
Nm,N,Nc                the summation parameters.
xo                     the column vector with initial state xo = x(0).
do                     the disturbance signal do(k).
eo                     the ZMWN noise signal eo(k).
umax                   the maximum bound umax on the input signal.
lensim                 length of the simulation interval.
apr                    flag for a priori knowledge on the disturbance signal.
rhc                    flag for receding horizon mode.

The signal eo must have as many rows as there are noise inputs. The signal do
must have as many rows as there are disturbance inputs. Each column of eo
and do corresponds to a new time point.
The signal y will have as many rows as there are outputs. The signal u will
have as many rows as there are inputs. The signal x must have as many rows
as there are states. Each column of y, u and x corresponds to a new time point.
Setting umax=0 means no input constraint will be added.
The (scalar) variable apr determines if the disturbance signal do(k+j) is a priori
known or not. The signal do(k+j) is a priori known for apr=1 and unknown
for apr=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
prediction of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.
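
A minimal usage sketch (not part of the original manual), assuming a first-order SISO IO model with one noise input and one disturbance input; all numerical values are illustrative.

Ao = 0.9; Ko = 0.1; Lo = 0.2; Bo = 1;          % illustrative first-order IO model
Co = 1;  DH = 1;  DF = 0;
Q = 1;  R = 0.1;                               % state and control weighting
Nm = 1; N = 20; Nc = 5; lensim = 50;
xo = 1;                                        % initial state x(0)
do = zeros(1,lensim+N);                        % no known disturbance
eo = zeros(1,lensim+N);                        % no noise
umax = 0; apr = 0; rhc = 0;                    % no input constraint, closed loop mode
[y,u,x] = lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc,...
               lensim,xo,do,eo,umax,apr,rhc);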

See Also
lqpc2spc, pred, add_nc, add_u, add_y, contr, simul


tf2syst
Purpose
Transforms a polynomial into a state space model in SYSTEM format

Synopsis
[G,dim]=tf2syst(a,b,c,f);

Description
The function transforms a polynomial model into a state space model in
SYSTEM format (for SISO systems only). The input system is:

   a(q) y(k) = b(q) v(k) + c(q) e(k) + f(q) d(k)

The state space equations become

   x(k+1) = A x(k) + B1 e(k) + B2 d(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 d(k)

with

   A ∈ R^(na×na),    B1 ∈ R^(na×nb1),   B2 ∈ R^(na×nb2),   B3 ∈ R^(na×nb3)
   C1 ∈ R^(nc1×na),  D11 ∈ R^(nc1×nb1), D12 ∈ R^(nc1×nb2)

The state space representation is given in the SYSTEM format, so

   G = [ A    B1   B2   B3
         C1   D11  D12  0  ]

with dimension vector

   dim = [ na  nb1  nb2  nb3  nc1  0  0  0 ]
The function tf2syst returns the state space model G and its dimension dim,
given in SYSTEM format. The input and output variables are the following:
G        The state space model in SYSTEM format.
dim      the dimension of the system.
a,b,c,f  polynomials of the CARIMA model.
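
A short sketch (not part of the original manual), reusing the CARIMA polynomials of the gpc example:

a = conv([1 -1.787 0.798],[1 -1]);   % a(q)
b = 0.029*[0 1 0.928 0];             % b(q)
c = [1 -1.787 0.798 0];              % c(q)
f = [0 1 -1.787 0.798];              % f(q)
[G,dim] = tf2syst(a,b,c,f);          % state space model in SYSTEM format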

See Also
io2iio, imp2syst, ss2syst


imp2syst
Purpose
Transforms an impulse response model into a state space model in SYSTEM
format

Synopsis
[G,dim]=imp2syst(g,f,nimp);
[G,dim]=imp2syst(g,f);

Description
The function transforms an impulse response model into a state space model
in SYSTEM format (For SISO system only) The input system is:
y(k) =

ng


g(i)u(k i) +

i=1

nf


f (i)d(k i) + e(k)

i=1

where the impulse response parameters g(i) and f (i) are given in the vectors


0 g(1) g(2) . . . g(ng )
g =


0 f (1) f (2) . . . f (nf )
f =
The state space equations become
x(k + 1) = Ax(k) + B1 e(k) + B2 d(k) + B3 v(k)
y(k) = C1 x(k) + e(k)
A Rnimpnimp ,

B1 Rnimpnb1 ,

B2 Rnimpnb2

B3 Rnimpnb3 , C1 Rnc1nimp
The state space representation is given in a SYSTEM format, so


A
B1 B2 B3
G=
C1
I
0 0
with dimension vector


dim = nimp nb1 nb2 nb3 nc1 0 0 0
The function imp2syst returns the state space model in Go and its dimension
dimo, given in SYSTEM format. The input and output variables are the following:

G     The state space model in SYSTEM format.
dim   the dimension of the system.
g     vector with impulse response parameters of the process model.
f     vector with impulse response parameters of the disturbance model.
nimp  length of the impulse response in model G.

If nimp is not given, it will be set to nimp = max(ng ,nf ).
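
A short sketch (not part of the original manual) with illustrative truncated impulse responses; note the leading zero required in the parameter vectors g and f:

g = [0 0.50 0.30 0.15 0.05];     % process impulse response g(1),...,g(4)
f = [0 0.20 0.10];               % disturbance impulse response f(1),f(2)
[G,dim] = imp2syst(g,f);         % nimp defaults to max(ng,nf) = 4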

See Also
io2iio, tf2syst, ss2syst


ss2syst
Purpose
Transforms a state space model into the SYSTEM format

Synopsis
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,...
C3,D31,D32,D33,C4,D41,D42,D43);

Description
The function transforms a state space model into the SYSTEM format. The
state space equations are

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)
    z(k)  = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
    ψ(k)  = C3 x(k) + D31 e(k) + D32 w(k) + D33 v(k)
    φ(k)  = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)

with

   A ∈ R^(na×na),    B1 ∈ R^(na×nb1),  B2 ∈ R^(na×nb2),  B3 ∈ R^(na×nb3)
   C1 ∈ R^(nc1×na),  C2 ∈ R^(nc2×na),  C3 ∈ R^(nc3×na)

The output system is given in the SYSTEM format, so

   G = [ A    B1   B2   B3
         C1   D11  D12  D13
         C2   D21  D22  D23
         C3   D31  D32  D33
         C4   D41  D42  D43 ]

with dimension vector

   dim = [ na  nb1  nb2  nb3  nc1  nc2  nc3  nc4 ]
The function ss2syst returns the state space model in G and its dimension dim,
given in SYSTEM format. The input and output variables are the following:

G              The state space model in SYSTEM format.
dim            the dimension of the system.
A,B1,...,D43   system matrices of the state space model.
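
A small round-trip sketch (not part of the original manual), packing an illustrative first-order model into SYSTEM format and unpacking it again with syst2ss:

A  = 0.9;  B1 = 0.1;  B2 = 0.2;  B3 = 1;
C1 = 1;    D11 = 1;   D12 = 0;   D13 = 0;
[G,dim] = ss2syst(A,B1,B2,B3,C1,D11,D12,D13);          % pack into SYSTEM format
[Ar,B1r,B2r,B3r,C1r,D11r,D12r,D13r] = syst2ss(G,dim);  % unpack: recovers the same matrices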

See Also
io2iio, imp2syst, tf2syst, syst2ss


syst2ss
Purpose
Transforms SYSTEM format into a state space model

Synopsis
[A,B1,B2,B3,C1,D11,D12,D13]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,C3,D31,D32,D33,...
C4,D41,D42,D43]=syst2ss(sys,dim);

Description
The function transforms a model in SYSTEM format into its state space
representation. Consider a system in the SYSTEM format, so

   sys = [ A    B1   B2   B3
           C1   D11  D12  D13 ]

with dimension vector dim. Using

   [A,B1,B2,B3,C1,D11,D12,D13]=syst2ss(sys,dim);

the state space equations become:

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)

Consider a standard predictive control problem in the SYSTEM format, so

   sys = [ A    B1   B2   B3
           C1   D11  D12  D13
           C2   D21  D22  D23 ]

with dimension vector dim. Using

   [A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23]=syst2ss(sys,dim);

the state space equations become:

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)
    z(k)  = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

Consider a standard predictive control problem after prediction in SYSTEM
format, so

   sysp = [ Ã    B̃1   B̃2   B̃3
            C̃1   D̃11  D̃12  D̃13
            C̃2   D̃21  D̃22  D̃23
            C̃3   D̃31  D̃32  D̃33
            C̃4   D̃41  D̃42  D̃43 ]

with dimension vector dimp. Using

   [A,tB1,tB2,tB3,tC1,D11,D12,D13,tC2,tD21,tD22,tD23,...
    tC3,tD31,tD32,tD33,tC4,tD41,tD42,tD43]=syst2ss(sysp,dimp);

the state space equations become:

   x(k+1) = Ã x(k) + B̃1 e(k) + B̃2 w̃(k) + B̃3 ṽ(k)
    ỹ(k)  = C̃1 x(k) + D̃11 e(k) + D̃12 w̃(k) + D̃13 ṽ(k)
    z̃(k)  = C̃2 x(k) + D̃21 e(k) + D̃22 w̃(k) + D̃23 ṽ(k)
    ψ̃(k)  = C̃3 x(k) + D̃31 e(k) + D̃32 w̃(k) + D̃33 ṽ(k)
    φ̃(k)  = C̃4 x(k) + D̃41 e(k) + D̃42 w̃(k) + D̃43 ṽ(k)

See Also
io2iio, imp2syst, tf2syst, ss2syst


io2iio
Purpose
Transforms an IO system (Go) into an IIO system (Gi)

Synopsis
[Gi,dimi]=io2iio(Go,dimo);

Description
The function transforms an IO system into an IIO system.
The state space representation of the input system is:

   Go = [ Ao   Ko  Lo  Bo
          Co   DH  DF  0  ]

and the dimension vector

   dimo = [ na  nk  nl  nb  nc ]

The state space representation of the output system is:

   Gi = [ Ai   Ki  Li  Bi
          Ci   DH  DF  0  ]

and a dimension vector

   dimi = [ na+nc  nk  nl  nb  nc ]

The function io2iio returns the state space IIO model Gi and its dimension
dimi. The input and output variables are the following:

Gi    The increment input output (IIO) model in SYSTEM format.
dimi  the dimension of the IIO model.
Go    The input output (IO) model in SYSTEM format.
dimo  the dimension of the IO model.
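
A short sketch (not part of the original manual): an illustrative first-order IO model is packed into SYSTEM format with ss2syst and then made incremental, assuming the dimension vector returned by ss2syst is accepted by io2iio.

Ao = 0.9; Ko = 0.1; Lo = 0.2; Bo = 1;          % illustrative IO model matrices
Co = 1;  DH = 1;  DF = 0;
[Go,dimo] = ss2syst(Ao,Ko,Lo,Bo,Co,DH,DF,0);   % Go = [Ao Ko Lo Bo ; Co DH DF 0]
[Gi,dimi] = io2iio(Go,dimo);                   % corresponding IIO model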


gpc2spc
Purpose
Transforms GPC problem into a SPC problem

Synopsis
[sys,dim]=gpc2spc(Gi,dimi,P,lambda);

Description
The function transforms the GPC problem into a standard predictive control
problem. For an IIO model Gi:

   xi(k+1) = Ai xi(k) + Ki ei(k) + Li w(k) + Bi Δu(k)
    y(k)   = Ci xi(k) + DH ei(k) + DF w(k)

The weighting P(q) is given in state space (!!) representation:

   xp(k+1) = Ap xp(k) + Bp y(k)
    yp(k)  = Cp xp(k) + Dp y(k)

where xp is the state of the realization describing yp(k) = P(q) y(k).
After the transformation, the system becomes

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k)
    z(k)  = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

with

   z(k) = [ yp(k+1) - r(k+1)
            Δu(k)            ]

   w(k) = [ d(k)
            r(k+1) ]

The IIO model Gi, the weighting filter P and the transformed system sys are
given in the SYSTEM format. The input and output variables are the following:
sys     Standard model of the SPC problem in SYSTEM format.
dim     The dimension in the SPC problem.
Gi      The IIO model in SYSTEM format.
P       The weighting polynomial.
lambda  Weighting factor.
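
A short sketch (not part of the original manual), continuing from an IIO model Gi,dimi (for instance from tf2syst or io2iio); it is assumed here that the trivial weighting P(q) = 1 may be passed as the scalar 1.

lambda = 0.1;                           % illustrative trade-off parameter
[sys,dim] = gpc2spc(Gi,dimi,1,lambda);  % standard (SPC) problem in SYSTEM format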


See Also
lqpc2spc, gpc


lqpc2spc
Purpose
Transforms LQPC problem into a SPC problem

Synopsis
[sys,dim]=lqpc2spc(Go,dimo,Q,R);

Description
The function transforms the LQPC problem into a standard predictive control
problem. Consider the IO model Go:

   xo(k+1) = Ao xo(k) + Ko eo(k) + Lo w(k) + Bo u(k)
    y(k)   = Co xo(k) + DH eo(k) + DF w(k)

After the transformation, the system becomes

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k)
    z(k)  = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

with

   z(k) = [ Q^(1/2) x(k+1)
            R^(1/2) u(k)   ]

   w(k) = d(k)

where

   Q   is the state weighting matrix
   R   is the control weighting matrix

The IO model Go and the transformed system sys are given in the SYSTEM
format. The input and output variables are the following:
sys  Standard model of the SPC problem in SYSTEM format.
dim  The dimension in the SPC problem.
Go   The IO model in SYSTEM format.
Q    The state weighting matrix.
R    The control weighting matrix.
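
A short sketch (not part of the original manual), continuing from an IO model Go,dimo in SYSTEM format:

Q = 1;  R = 0.1;                     % illustrative weighting matrices
[sys,dim] = lqpc2spc(Go,dimo,Q,R);   % standard (SPC) problem in SYSTEM format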

See Also
gpc2spc, lqpc

add_du
Purpose
Function to add increment input constraint to standard problem

Synopsis
[sys,dim] = add_du(sys,dim,dumax,sgn,descr);
[sysp,dimp]= add_du(sysp,dimp,dumax,sgn,descr);

Description
The function adds an increment input constraint to the standard problem (sys)
or the prediction model (sysp). For sgn=1, the constraint is given by:

   Δu(k+j) ≤ |Δumax|      for j = 0, ..., N-1

For sgn=-1, the constraint is given by:

   Δu(k+j) ≥ -|Δumax|     for j = 0, ..., N-1

For sgn=0, the constraint is given by:

   -|Δumax| ≤ Δu(k+j) ≤ |Δumax|   for j = 0, ..., N-1

where

   Δumax    is the maximum bound on the increment input signal
   Δu(k+j)  is the input increment signal

For MIMO systems Δumax can be chosen as a scalar value or a vector value
(with the same dimension as the vector Δu(k)). In the case of a scalar the bound
Δumax holds for each element of Δu(k). In the case of a vector Δumax, the
bound is taken elementwise.
The function add_du returns the system, subjected to the increment input
constraint. The input and output variables are the following:

sys    Standard model.
dim    The dimension of the standard model.
sysp   Prediction model.
dimp   The dimension of the prediction model.
dumax  The maximum bound on the increment input.
sgn    Scalar with value of +1/0/-1.
descr  Flag for model type (descr='IO' for an input output model,
       descr='IIO' for an increment input output model).

Setting dumax=0 means no increment input constraint will be added.
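
A short sketch (not part of the original manual), adding the two-sided bound |Δu(k+j)| ≤ 0.2 to a standard problem based on an IIO model; the bound value is illustrative and the descr string follows the flag values listed above.

[sys,dim] = add_du(sys,dim,0.2,0,'IIO');   % sgn=0: two-sided increment input bound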

See Also
add_u, add_v, add_y, add_x, gpc


add_u
Purpose
Function to add input constraint to standard problem

Synopsis
[sys,dim]=add_u(sys,dim,umax,sgn,descr);
[sysp,dimp]=add_u(sysp,dimp,umax,sgn,descr);

Description
The function adds an input constraint to the standard problem (sys) or the
prediction model (sysp). For sgn=1, the constraint is given by:

   u(k+j) ≤ |umax|      for j = 0, ..., N-1

For sgn=-1, the constraint is given by:

   u(k+j) ≥ -|umax|     for j = 0, ..., N-1

For sgn=0, the constraint is given by:

   -|umax| ≤ u(k+j) ≤ |umax|   for j = 0, ..., N-1

where

   umax    is the maximum bound on the input signal
   u(k+j)  is the input signal

For MIMO systems umax can be chosen as a scalar value or a vector value
(with the same dimension as the vector u(k)). In the case of a scalar the bound
umax holds for each element of u(k). In the case of a vector umax, the bound is
taken elementwise.
The function add_u returns the system, subjected to the input constraint. The
input and output variables are the following:

sys    Standard model.
dim    The dimension of the standard model.
sysp   Prediction model.
dimp   The dimension of the prediction model.
umax   The maximum bound on the input signal.
sgn    Scalar with value of +1/0/-1.
descr  Flag for model type (descr='IO' for an input output model,
       descr='IIO' for an increment input output model).

Setting umax=0 means no input constraint will be added.

See Also
add_du, add_v, add_y, add_x, lqpc


add_v
Purpose
Function to add input constraint to standard problem

Synopsis
[sys,dim] = add_v(sys,dim,vmax,sgn);
[sysp,dimp]= add_v(sysp,dimp,vmax,sgn);

Description
The function adds a constraint on the input signal v to the standard problem
(sys) or the prediction model (sysp). For sgn=1, the constraint is given by:

   v(k+j) ≤ |vmax|      for j = 0, ..., N-1

For sgn=-1, the constraint is given by:

   v(k+j) ≥ -|vmax|     for j = 0, ..., N-1

For sgn=0, the constraint is given by:

   -|vmax| ≤ v(k+j) ≤ |vmax|   for j = 0, ..., N-1

where

   vmax    is the maximum bound on the input signal
   v(k+j)  is the input signal

For MIMO systems vmax can be chosen as a scalar value or a vector value
(with the same dimension as the vector v(k)). In the case of a scalar the bound
vmax holds for each element of v(k). In the case of a vector vmax, the bound is
taken elementwise.
The function add_v returns the system, subjected to the input constraint. The
input and output variables are the following:
sys   Standard model.
dim   The dimension of the standard model.
sysp  Prediction model.
dimp  The dimension of the prediction model.
vmax  The maximum bound on the input.
sgn   Scalar with value of +1/0/-1.

Setting vmax=0 means no input constraint will be added.

See Also
add_u, add_du, add_y, add_x, gpc


add_y
Purpose
Function to add an output constraint to the standard problem (sys) or the
prediction model (sysp).

Synopsis
[sys,dim] =add_y(sys,dim,ymax,sgn);
[sysp,dimp]=add_y(sysp,dimp,ymax,sgn);

Description
The function adds an output constraint to the standard problem (sys) or the
prediction model (sysp). For sgn=1, the constraint is given by:

   y(k+j) ≤ |ymax|      for j = 0, ..., N-1

For sgn=-1, the constraint is given by:

   y(k+j) ≥ -|ymax|     for j = 0, ..., N-1

For sgn=0, the constraint is given by:

   -|ymax| ≤ y(k+j) ≤ |ymax|   for j = 1, ..., N-1

where

   ymax    is the maximum bound on the output signal
   y(k+j)  is the system output signal

For MIMO systems ymax can be chosen as a scalar value or a vector value
(with the same dimension as the vector y(k)). In the case of a scalar the bound
ymax holds for each element of y(k). In the case of a vector ymax, the bound is
taken elementwise.
The function add_y returns the system subjected to the output constraint. The
input and output variables are the following:

sys   Standard model.
dim   The dimension of the standard model.
sysp  Prediction model.
dimp  The dimension of the prediction model.
ymax  The maximum bound on the output signal.
sgn   Scalar with value of +1/0/-1.

Setting ymax=0 means no output constraint will be added.

See Also
add_u, add_du, add_v, add_x


add_x
Purpose
Function to add a state constraint to the standard problem.

Synopsis
[sys,dim]=add_x(sys,dim,E);
[sys,dim]=add_x(sys,dim,E,sgn);

Description
The function adds a state constraint to the standard problem.
NOTE: The function add_x can NOT be applied to the prediction model
(i.e. add_x must be used before pred).
For sgn=1, the constraint is given by:

   E x(k+j) ≤ 1      for j = 0, ..., N-1

For sgn=-1, the constraint is given by:

   E x(k+j) ≥ -1     for j = 0, ..., N-1

For sgn=0, the constraint is given by:

   -1 ≤ E x(k+j) ≤ 1   for j = 1, ..., N-1

where

   E       is a matrix
   x(k+j)  is the state of the system

The function add_x returns the standard problem, subjected to the state
constraint. The input and output variables are the following:

sys  The standard model.
dim  The dimension of the standard model.
E    Matrix to define bound on the state.
sgn  Scalar with value of +1/0/-1.

Setting E=0 means no state constraint will be added.
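
A short sketch (not part of the original manual) for a model with two states: scaling the rows of E turns the box |x1| ≤ 5, |x2| ≤ 2 into the form -1 ≤ E x(k+j) ≤ 1.

E = [1/5  0  ;                     % |x1| <= 5
      0  1/2 ];                    % |x2| <= 2
[sys,dim] = add_x(sys,dim,E,0);    % sgn=0: two-sided state constraint (apply before pred)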



See Also
add_u, add_du, add_v, add_y


add_nc
Purpose
Function to add a control horizon to prediction problem

Synopsis
[sysp,dimp]=add_nc(sysp,dimp,dim,Nc);
[sysp,dimp]=add_nc(sysp,dimp,dim,Nc,descr);

Description
The function add_nc returns the system with a control horizon sysp and the
dimension dimp. The input and output variables are the following:

sysp   The prediction model.
dimp   The dimension of the prediction model.
dim    The dimension of the original IO or IIO system.
Nc     The control horizon.
descr  Flag for model type (descr='IO' for an input output model,
       descr='IIO' for an increment input output model).

The default value of descr is 'IIO'.

See Also
gpc, lqpc


add_end
Purpose
Function to add a state end-point constraint to prediction problem.

Synopsis
[sysp,dimp] = add_end(sys,dim,sysp,dimp,N);

Description
The function adds a state end-point constraint to the standard problem:

   x(k+N) = xss = Dssx w(k+N)

The function add_end returns the system with a state end-point constraint in
sysp and the dimension dimp. The input and output variables are the following:

sys   The standard model.
dim   The dimension of the standard model.
sysp  The system of the prediction problem.
dimp  The dimension of the prediction problem.
N     The prediction horizon.


external
Purpose
Function makes the external signal w from d and r for simulation purpose

Synopsis
[tw]=external(apr,lensim,N,w1);
[tw]=external(apr,lensim,N,w1,w2);
[tw]=external(apr,lensim,N,w1,w2,w3);
[tw]=external(apr,lensim,N,w1,w2,w3,w4);
[tw]=external(apr,lensim,N,w1,w2,w3,w4,w5);

Description
The function computes the vector w̃ with predictions of the external signal w:

   w̃ = [ w(k)  w(k+1)  ...  w(k+N-1) ]

where

   w(k) = [ w1(k)  ...  w5(k) ]

For the lqpc controller, the external signal w(k) is given by

   w(k) = d(k)

and so we have to choose w1(k) = d(k). For the gpc controller, the external
signal w(k) is given by

   w(k) = [ d(k)  r(k+1) ]

and so we have to choose w1(k) = d(k) and w2(k) = r(k+1).

tw      is the vector w̃ with predictions of the external signal w.
apr     vector with flags for a priori knowledge on the external signal.
lensim  is the length of the simulation interval.
N       is the prediction horizon.
w1      is the row vector with external signal w1(k).
 ...
w5      is the row vector with external signal w5(k).

The vector apr determines if the external signal w(k+j) is a priori known. For
i = 1, ..., 5 there holds:

   signal wi(k) is a priori known for apr(i) = 1
   signal wi(k) is not a priori known for apr(i) = 0
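
A short sketch (not part of the original manual) for the gpc case with w1 = di and w2 the reference; di and r are the row vectors of the gpc example, and it is assumed that the one-step shift to r(k+1) is handled inside external.

apr = [0 1];                         % disturbance not known, reference known a priori
tw  = external(apr,lensim,N,di,r);   % w1(k) = di(k), w2(k) = r(k+1)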

See Also
gpc, lqpc


dgamma
Purpose
Function makes the selection matrix Γ. For the infinite horizon case the
function also creates the steady-state matrix Γss.

Synopsis
[dGamma] = dgamma(Nm,N,dim);
[dGamma,Gammass] = dgamma(Nm,N,dim);

Description
The selection matrix Γ is defined as

   Γ = diag( Γ(0), Γ(1), ..., Γ(N-1) )

where

   Γ(j) = [ 0  0 ]    for  0 ≤ j ≤ Nm-1
          [ 0  I ]

   Γ(j) = [ I  0 ]    for  Nm ≤ j ≤ N-1
          [ 0  I ]

and where

   Nm   is the minimum-cost horizon
   N    is the prediction horizon
   dim  is the dimension of the system

The function dgamma returns the diagonal elements of the selection matrix Γ,
so dGamma = diag(Γ). The input and output variables are the following:

dGamma   vector with diagonal elements of Γ.
Gammass  vector with diagonal elements of Γss.
Nm       is the minimum-cost horizon.
N        is the prediction horizon.
Nc       is the control horizon.
dim      the dimension of the system.


See Also
gpc, lqpc, blockmat


pred
Purpose
Function computes the prediction model from the standard model

Synopsis
[sysp,dimp]=pred(sys,dim,N);

Description
The function makes a prediction model for the system

   x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
    y(k)  = C1 x(k) + D11 e(k) + D12 w(k)
    z(k)  = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
    ψ(k)  = C3 x(k) + D31 e(k) + D32 w(k) + D33 v(k)
    φ(k)  = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)

for

   sys = [ A    B1   B2   B3
           C1   D11  D12  0
           C2   D21  D22  D23
           C3   D31  D32  D33
           C4   D41  D42  D43 ]

   dim = [ na  nb1  nb2  nb3  nc1  nc2  nc3  nc4 ]

where

   x(k)            is the state of the system
   e(k)            is zero-mean white noise (ZMWN)
   w(k)            is the known external signal
   v(k)            is the control signal
   z(k)            is the performance signal
   ψ(k)            is the equality constraint at time k
   φ(k)            is the inequality constraint at time k
   A, Bi, Cj, Dij  are the system matrices


For the above model, the prediction model is given by

   x(k+1) = Ã x(k) + B̃1 e(k) + B̃2 w̃(k) + B̃3 ṽ(k)
    ỹ(k)  = C̃1 x(k) + D̃11 e(k) + D̃12 w̃(k) + D̃13 ṽ(k)
    z̃(k)  = C̃2 x(k) + D̃21 e(k) + D̃22 w̃(k) + D̃23 ṽ(k)
    ψ̃(k)  = C̃3 x(k) + D̃31 e(k) + D̃32 w̃(k) + D̃33 ṽ(k)
    φ̃(k)  = C̃4 x(k) + D̃41 e(k) + D̃42 w̃(k) + D̃43 ṽ(k)

where

   p̃(k) = [ p(k|k)
            p(k+1|k)
              ...
            p(k+N|k) ]

stands for the predicted signal for p = y, z, ψ, φ, and for the future values for
p = v, w.
The function pred returns the prediction model sysp:

   sysp = [ Ã    B̃1   B̃2   B̃3
            C̃1   D̃11  D̃12  D̃13
            C̃2   D̃21  D̃22  D̃23
            C̃3   D̃31  D̃32  D̃33
            C̃4   D̃41  D̃42  D̃43 ]

   dimp = [ na  nb1  N·nb2  N·nb3  N·nc1  N·nc2  N·nc3  N·nc4 ]

The input and output variables are the following:

sysp  The prediction model.
dimp  The dimension of the prediction model.
sys   The system to be controlled.
dim   The dimension of the system.
N     The prediction horizon.
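
A short sketch (not part of the original manual), continuing from a standard model sys,dim (for instance from gpc2spc or lqpc2spc):

N = 25;                          % prediction horizon
[sysp,dimp] = pred(sys,dim,N);   % prediction model over N samples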

See Also
gpc, lqpc, blockmat


contr
Purpose
Function computes controller matrices for predictive controller

Synopsis
[vv,HH,AA,bb]=contr(sysp,dimp,dim,dGam);

Description
The function computes the matrices for the predictive controller, for use in the
function simul.
sysp is the prediction model, dimp is the dimension-vector of the prediction
model, dim is the dimension-vector of the standard model, and dGam is a row
vector with the diagonal elements of the matrix Γ.
If inequality constraints are present, the control problem is transformed into a
quadratic programming problem, where the objective function is chosen as

   J(μI, k) = (1/2) μI^T(k) H μI(k)

subject to

   Ā μI(k) - b̄(k) ≤ 0

which can be solved at each time instant k in the function simul.
The variable vv is given by:

   vv = [ F  De  Dw  Dμ ]

so that

   v(k) = vv [ xc(k)
               e(k)
               w(k)
               μI(k) ]
        = F xc(k) + De e(k) + Dw w(k) + Dμ μI(k)

Further bb is such that

   b̄(k) = bb [ 1
               xc(k)
               e(k)
               w(k) ]


If no inequality constraints are present, we find μI = 0 and the control problem
can be solved analytically at each time instant in the function simul, using

   v(k) = F xc(k) + De e(k) + Dw w(k)

The variable vv is now given by:

   vv = [ F  De  Dw ]

so that

   v(k) = vv [ xc(k)
               e(k)
               w(k) ]
The input and output variables are the following:

sysp         The prediction model.
dimp         The dimension of the prediction model.
dim          The dimension of the system.
dGam         vector with diagonal elements of Γ.
vv,HH,AA,bb  controller parameters.
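
A sketch of the design chain (not part of the original manual), continuing from a standard model sys,dim: the prediction model is built, a control horizon is added, the diagonal of the selection matrix is computed and the controller matrices are returned; without inequality constraints the LTI controller realization can then be obtained with lticon. The horizon values are illustrative.

Nm = 1; N = 25; Nc = 5;
[sysp,dimp]   = pred(sys,dim,N);            % prediction model
[sysp,dimp]   = add_nc(sysp,dimp,dim,Nc);   % impose the control horizon
dGam          = dgamma(Nm,N,dim);           % diagonal elements of Gamma
[vv,HH,AA,bb] = contr(sysp,dimp,dim,dGam);  % controller matrices
[Ac,B1c,B2c,Cc,D1c,D2c] = lticon(sys,dim,vv,N);   % LTI controller (no inequality constraints)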

See Also
simul, gpc, lqpc


lticll
Purpose
Function computes the state space matrices of the closed loop in the LTI case
(no inequality constraints).

Synopsis
[Acl,B1cl,B2cl,Ccl,D1cl,D2cl]=lticll(sys,dim,vv,N);

Description
Function lticll computes the state space matrices of the LTI closed loop,
given by

   xcl(k+1) = Acl xcl(k) + B1cl e(k) + B2cl w(k)

   [ v(k) ] = Ccl xcl(k) + D1cl e(k) + D2cl w(k)
   [ y(k) ]
The input and output variables are the following:
sys                           the model.
dim                           dimensions of sys.
N                             the prediction horizon.
vv                            controller parameters.
Acl,B1cl,B2cl,Ccl,D1cl,D2cl   system matrices of the closed loop.
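
A short sketch (not part of the original manual): with the controller parameters vv from contr, the closed-loop realization can be formed and its stability checked via the spectral radius of Acl.

[Acl,B1cl,B2cl,Ccl,D1cl,D2cl] = lticll(sys,dim,vv,N);
rho = max(abs(eig(Acl)));    % spectral radius: rho < 1 for a stable LTI closed loop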

See Also
gpc, lqpc, gpc_ss, contr, simul, lticon


lticon
Purpose
Function computes the state space matrices of the optimal predictive controller
in the LTI case (no inequality constraints).

Synopsis
[Ac,B1c,B2c,Cc,D1c,D2c]=lticon(sys,dim,vv,N);

Description
Function lticon computes the state space matrices of the optimal predictive
controller, given by
xc (k + 1) = Ac xc (k) + B1c y(k) + B2c w(k)

v(k) = Cc xc (k) + D1c y(k) + D2c w(k)

The input and output variables are the following:


sys                     the model.
dim                     dimensions of sys.
N                       the prediction horizon.
vv                      controller parameters.
Ac,B1c,B2c,Cc,D1c,D2c   system matrices of the LTI controller.

See Also
gpc, lqpc, gpc_ss, contr, simul, lticll


simul
Purpose
Function makes a closed loop simulation with predictive controller

Synopsis
[x,xc,y,v]=simul(syst,sys,dim,N,x,xc,y,v,tw,e,...
k1,k2,vv,HH,AA,bb);

Description
The function makes a closed loop simulation with the predictive controller.
The input and output variables are the following:

syst          the true system.
sys           the model.
dim           dimensions of sys and syst.
N             the horizon.
x             state of the true system.
xc            state of model/controller.
y             the true system output.
v             the true system input.
tw            vector with predictions of the external signal.
k1            begin of simulation interval.
k2            end of simulation interval.
vv,HH,AA,bb   controller parameters.

See Also
contr, gpc, lqpc


rhc
Purpose
Function makes a closed loop simulation in receding horizon mode

Synopsis
[x,xc,y,v]=rhc(syst,sys,sysp,dim,dimp,dGam,N,x,xc,y,v,tw,e,k1,k2);

Description
The function makes a closed loop simulation with the predictive controller in
receding horizon mode. Past signals and predictions of future signals are given
within the prediction range. The simulation is paused every sample and can be
continued by pressing the return-key. The input and output variables are the
following:

syst     the true system.
sys      the model.
sysp     the prediction model.
dim      dimensions of sys and syst.
dimp     dimensions of sysp.
dGamma   vector with diagonal elements of Γ.
N        the horizon.
x        state of the true system.
xc       state of model/controller.
y        the true system output.
v        the true system input.
tw       vector with predictions of the external signal.
k1       begin of simulation interval.
k2       end of simulation interval.

See Also
simul, contr, gpc, lqpc


Index
a priori knowledge, 36
add_du.m, 232
add_end.m, 243
add_nc.m, 242
add_u.m, 234
add_v.m, 236
add_x.m, 240
add_y.m, 238
basis functions, 27
blocking, 27
constraints, 25, 44
   control horizon constraint, 46
   equality constraints, 25, 44, 46
   inequality constraints, 25, 45, 48
   state end-point constraint, 47
contr.m, 250
control horizon, 27, 46
Controlled AutoRegressive Integrated Moving Average (CARIMA), 11
Controlled AutoRegressive Moving Average (CARMA), 11
coprime factorization, 121
dead-beat control, 164
dgamma.m, 246
Diophantine equation, 196
equality constrained SPCP, 58
external.m, 244
feasibility, 96
   minimal time algorithm, 97
   soft-constraint algorithm, 96
finite horizon SPCP, 56
full SPCP, 62
Full infinite horizon SPCP, 76
generalized minimum variance control, 164
Generalized predictive control (GPC), 11
GPC, 154
gpc.m, 211
gpc2spc.m, 229
gpc_ss.m, 215
imp2syst.m, 222
implementation, 89
implementation, full SPCP case, 91
implementation, LTI case, 89
Increment-input-output (IIO) models, 16
Infinite horizon SPCP, 65
infinite horizon SPCP with control horizon, 72
Input-Output (IO) models, 15
internal model control (IMC) scheme, 116
invariant ellipsoid, 137
io2iio.m, 228
linear matrix inequalities (LMI), 131
LQPC, 154
lqpc.m, 218
lqpc2spc.m, 231
lticll.m, 252
lticon.m, 253
mean-level control, 165
minimum variance control, 164
model, 13
   disturbance model, 13, 15
   impulse response model, 20, 183
   noise model, 15
   polynomial model, 20, 191
   process model, 13, 15
   step response model, 20, 183
MPC applications, 12
MPC using LMIs, 139
optimization, 28
orthogonal basis functions, 28, 50
performance index, 21, 39
   GPC performance index, 22, 41, 202
   LQPC performance index, 22, 40, 201
   standard predictive control, 39
   standard predictive control performance index, 43
   zone performance index, 23, 42
pole-placement control, 165
pred.m, 248
prediction, 31
   polynomial model, 196
   state space model, 31
   step response model, 186
receding horizon principle, 29
rhc.m, 255
robustness, 124
   LMI-based MPC, 140
Schur complement, 133
simul.m, 254
ss2syst.m, 224
stability, 101
   end-point constraint, 109
   end-point constraint set, 111, 114
   inequality constrained case, 105
   infinite prediction horizon, 106
   internal model control (IMC) scheme, 116
   LTI case, 101
   terminal constraint, 109
   terminal constraint set, 114
   terminal cost function with end-point constraint set, 114
   Youla parametrization, 121
standard performance index, 24
Standard predictive control problem, 55
standard predictive control problem (SPCP), 39, 51
state space model, 15, 16
steady-state behavior, 66
Structuring input signal, 67
structuring of input signal, 27
syst2ss.m, 226
tf2syst.m, 221
tuning, 151
   GPC, 154
   initial setting, 153
   LQPC, 154
   optimization, 158
Unconstrained infinite horizon SPCP, 69
unconstrained SPCP, 56
Youla parametrization, 121

Bibliography
[1] M. Alamir and G. Bornard. Stability of a truncated infinite constrained receding horizon scheme: The general discrete nonlinear case. Automatica, 31(9):1353-1356, 1995.

[2] F. Allgöwer, T.A. Badgwell, J.S. Qin, and J.B. Rawlings. Nonlinear predictive control and moving horizon estimation - an introductory overview. In Advances in Control, Highlights of ECC'99, Edited by F. Frank, Springer-Verlag, London, UK, 1999.

[3] K.J. Åström and B. Wittenmark. On self-tuning regulators. Automatica, 9:185-199, 1973.

[4] V. Balakrishnan, A. Zheng, and M. Morari. Constrained stabilization of discrete-time systems. In Advances in MBPC, Oxford University Press, 1994.

[5] A. Bemporad, M. Morari, V. Dua, and E.N. Pistikopoulos. The explicit linear quadratic regulator for constrained systems. Automatica, 38(1):3-20, 2002.

[6] B.W. Bequette. Nonlinear control of chemical processes: A review. Ind. Eng. Chem. Res., 30(7):1391-1413, 1991.

[7] R.R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control. The Thinking Man's GPC. Prentice Hall, Upper Saddle River, New Jersey, 1990.

[8] H.H.J. Bloemen, T.J.J. van den Boom, and H.B. Verbruggen. Optimizing the end-point state weighting in Model-based Predictive Control. Automatica, 38(6):1061-1068, June 2002.

[9] S. Boyd and Y. Nesterov. Nonlinear convex optimization: New methods and new applications. Minicourse on the European Control Conference, Brussels, Belgium, July 1-4, 1997.

[10] S.P. Boyd and C.H. Barratt. Linear Controller Design, Limits of Performance. Prentice Hall, Information and System Sciences Series, Englewood Cliffs, New Jersey, 1991.

[11] E.F. Camacho and C. Bordons. Model Predictive Control in the Process Industry, Advances in Industrial Control. Springer, London, 1995.

[12] C.C. Chen and L. Shaw. On receding horizon feedback control. Automatica, 18:349-352, 1982.

[13] D.W. Clarke and C. Mohtadi. Properties of generalized predictive control. Automatica, 25(6):859-875, 1989.

[14] D.W. Clarke, C. Mohtadi, and P.S. Tuffs. Generalized predictive control - part 1. The basic algorithm. Automatica, 23(2):137-148, 1987.

[15] D.W. Clarke, C. Mohtadi, and P.S. Tuffs. Generalized predictive control - part 2. Extensions and interpretations. Automatica, 23(2):149-160, 1987.

[16] D.W. Clarke and R. Scattolini. Constrained receding horizon predictive control. IEE Proc.-D, 138(4), 1991.

[17] C.R. Cutler and B.L. Ramaker. Dynamic matrix control - a computer control algorithm. In AIChE Nat. Mtg, 1979.

[18] C.R. Cutler and B.L. Ramaker. Dynamic matrix control - a computer control algorithm. In Proceedings Joint American Control Conference, San Francisco, CA, USA, 1980.

[19] F.C. Cuzzola, J.C. Geromel, and M. Morari. An improved approach for constrained robust model predictive control. Automatica, 38(7):1183-1189, 2002.

[20] R.M.C. De Keyser and A.R. van Cauwenberghe. Typical application possibilities for self-tuning predictive control. In IFAC Symp. Ident. & Syst. Par. Est., 1982.

[21] R.A.J. De Vries and H.B. Verbruggen. Multivariable process and prediction models in predictive control - a unified approach. Int. J. Adapt. Contr. & Sign. Proc., 8:261-278, 1994.

[22] V.F. Demyanov and L.V. Vasilev. Nondifferentiable Optimization, Optimization Software. Springer-Verlag, 1985.

[23] B. Ding, Y. Xi, and S. Li. A synthesis approach of on-line constraint robust model predictive control. Automatica, 40(1):163-167, 2004.

[24] J.C. Doyle, B.A. Francis, and A.R. Tannenbaum. Feedback Control Systems. MacMillan Publishing Company, New York, USA, 1992.

[25] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE AC, 34:831-847, 1989.

[26] C.E. García and M. Morari. Internal model control. 1. A unifying review and some new results. Ind. Eng. Chem. Proc. Des. Dev., 21(2):308-323, 1982.

[27] C.E. García and A.M. Morshedi. Quadratic programming solution of dynamic matrix control (QDMC). Chem. Eng. Comm., 46(11):73-87, 1986.

[28] C.E. García, D.M. Prett, and M. Morari. Model predictive control: Theory and practice - a survey. Automatica, 25(3):335-348, 1989.

[29] C.E. García, D.M. Prett, and B.L. Ramaker. Fundamental Process Control. Butterworths, Stoneham, MA, 1988.

[30] H. Genceli and M. Nikolaou. Robust stability analysis of constrained ℓ1-norm model predictive control. AIChE Journ., 39(12):1954-1965, 1993.

[31] E.G. Gilbert and K.T. Tan. Linear systems with state and control constraints: the theory and application of maximal output admissible sets. IEEE AC, 36:1008-1020, 1991.

[32] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin, Germany, 1988.

[33] T. Kailath. Linear Systems. Prentice Hall, New York, 1980.

[34] M. Kinnaert. Adaptive generalized predictive controller for MIMO systems. Int. J. Contr., 50(1):161-172, 1989.

[35] M.V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained predictive control using linear matrix inequalities. Automatica, 32(10):1361-1379, 1996.

[36] B. Kouvaritakis, J.A. Rossiter, and J. Schuurmans. Efficient robust predictive control. IEEE AC, 45(8):1545-1549, 2000.

[37] W.H. Kwon and A.E. Pearson. On feedback stabilization of time-varying discrete systems. IEEE AC, 23:479-481, 1979.

[38] J.H. Lee, M. Morari, and C.E. García. State-space interpretation of model predictive control. Automatica, 30(4):707-717, 1994.

[39] J.H. Lee and Z.H. Yu. Tuning of model predictive controllers for robust performance. Comp. Chem. Eng., 18(1):15-37, 1994.

[40] C.E. Lemke. On complementary pivot theory. In Mathematics of the Decision Sciences, G.B. Dantzig and A.F. Veinott (Eds.), 1968.

[41] L. Ljung. System Identification: Theory for the User. Prentice Hall, Englewood Cliffs, NJ, 1987.

[42] Y. Lu and Y. Arkun. Quasi-min-max MPC algorithms for LPV systems. Automatica, 36(4):527-540, 2000.

[43] J.M. Maciejowski. Multivariable Feedback Control Design. Addison-Wesley Publishers, Wokingham, UK, 1989.

[44] J.M. Maciejowski. Predictive Control with Constraints. Prentice Hall, Pearson Education Limited, Harlow, UK, 2002.

[45] D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained model predictive control: stability and optimality. Automatica, 36:789-814, 2000.

[46] M. Morari and E. Zafiriou. Robust Process Control. Prentice Hall, Englewood Cliffs, New Jersey, 1989.

[47] E. Mosca and J. Zhang. Stable redesign of predictive control. Automatica, 28(6):1229-1233, 1992.

[48] A. Naganawa, G. Obinata, and H. Inooka. A design method of model predictive control system using coprime factorization approach. In ASCC, 1994.

[49] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, 1994.

[50] S.J. Qin and T.A. Badgewell. An overview of industrial model predictive control technology. In Chemical Process Control - V, AIChE Symposium Series - American Institute of Chemical Engineers, volume 93, pages 232-256, 1997.

[51] J.B. Rawlings and K.R. Muske. The stability of constrained receding horizon control. IEEE AC, 38:1512-1516, 1993.

[52] J. Richalet. Industrial applications of model based predictive control. Automatica, 29(5):1251-1274, 1993.

[53] J. Richalet, A. Rault, J.L. Testud, and J. Papon. Algorithmic control of industrial processes. In Proceedings of the 4th IFAC Symposium on Identification and System Parameter Estimation, Tbilisi, 1976.

[54] J. Richalet, A. Rault, J.L. Testud, and J. Papon. Model predictive heuristic control: Applications to industrial processes. Automatica, 14(1):413-428, 1978.

[55] N.L. Ricker, T. Subrahmanian, and T. Sim. Case studies of model-predictive control in pulp and paper production. In Proc. 1988 IFAC Workshop on Model Based Process Control, T.J. McAvoy, Y. Arkun and E. Zafiriou eds., Pergamon Press, Oxford, page 13, 1988.

[56] J.A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC Press, Inc., 2003.

[57] R. Rouhani and R.K. Mehra. Model algorithmic control (MAC): Basic theoretical properties. Automatica, 18(4):401-414, 1982.

[58] L.E. Scales. Introduction to Non-Linear Optimization. MacMillan, London, UK, 1985.

[59] P.O.M. Scokaert. Infinite horizon generalized predictive control. Int. J. Control, 66(1):161-175, 1997.

[60] P.O.M. Scokaert and J.B. Rawlings. Constrained linear quadratic regulation. Volume 43, pages 1163-1169, 1998.

[61] A.R.M. Soeterboek. Predictive Control - A Unified Approach. Prentice Hall, Englewood Cliffs, NJ, 1992.

[62] E.D. Sontag. An algebraic approach to bounded controllability of linear systems. Int. J. Contr., pages 181-188, 1984.

[63] V. Strejc. State Space Theory of Discrete Linear Control. Academia, Prague, 1981.

[64] M. Sznaier and M.J. Damborg. Suboptimal control of linear systems with state and control inequality constraints. In Proceedings of the 26th IEEE Conference on Decision and Control, pages 761-762, Los Angeles, 1987.

[65] T.J.J. van den Boom. Model based predictive control, status and perspectives. In Tutorial Paper on the CESA IMACS Conference, Symposium on Control, Optimization and Supervision, volume 1, pages 1-12, Lille, France, 1996.

[66] T.J.J. van den Boom, M. Ayala Botto, and P. Hoekstra. Design of an analytic constrained predictive controller using neural networks. International Journal of System Science, 36(10):639-650, 2005.

[67] T.J.J. van den Boom and R.A.J. de Vries. Robust predictive control using a time-varying Youla parameter. Journal of Applied Mathematics and Computer Science, 9(1):101-128, 1999.

[68] E.T. van Donkelaar. Improvement of efficiency in system identification and model predictive control of industrial processes. PhD Thesis, Delft University of Technology, Delft, The Netherlands, 2000.

[69] E.T. van Donkelaar, O.H. Bosgra, and P.M.J. Van den Hof. Model predictive control with generalized input parametrization. In ECC, 1999.

[70] P. Van Overschee and B. de Moor. Subspace algorithms for the stochastic identification problem. Automatica, 29(3):649-660, 1993.

[71] M. Verhaegen and P. Dewilde. Subspace model identification. Part I: The output-error state space model identification class of algorithms / Part II: Analysis of the elementary output-error state space model identification algorithm. Int. J. Contr., 56(5):1187-1241, 1992.

[72] M. Vidyasagar. Nonlinear Systems Analysis. SIAM Classics in Applied Mathematics, SIAM Press, 2002.

[73] R.A.J. De Vries and T.J.J. van den Boom. Constrained predictive control with guaranteed stability and convex optimization. In American Control Conference, pages 2842-2846, Baltimore, Maryland, 1994.

[74] R.A.J. De Vries and T.J.J. van den Boom. Robust stability constraints for predictive control. In 4th European Control Conference, Brussels, Belgium, 1997.

[75] Z. Wan and M.V. Kothare. An efficient off-line formulation of robust model predictive control using linear matrix inequalities. Automatica, 39(5):837-846, 2003.

[76] P. Wolfe. The simplex method for quadratic programming. Econometrica, 27:382-398, 1959.

[77] E. Zafiriou. Robust model predictive control of processes with hard constraints. Comp. Chem. Engng., 14(4/5):359-371, 1990.

[78] A. Zheng. Reducing on-line computational demands in model predictive control by approximating QP constraints. Journal of Process Control, 9(4):279-290, 1999.

[79] A. Zheng and M. Morari. Stability of model predictive control with mixed constraints. IEEE AC, 40:1818, 1995.

[80] Z.Q. Zheng and M. Morari. Robust stability of constrained model predictive control. In ACC, 1993.
