www.elsevier.com/locate/isatrans
Abstract
Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications, and because of this the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases; since MPCs can differ significantly, these tuning methods become inapplicable outside those cases and a trial-and-error tuning approach must be used instead. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. Its key advantages are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters of a given MPC, and that multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of “optimum” control is subject to the opinion of the control engineer tuning the system.
A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of “optimum” control can arise and how they are accounted for in the multi-objective decision-making algorithm. The tuning parameters resulting from each definition set are compared; they vary in order to meet each definition of optimum control, showing that the generalized automated tuning approach for MPCs is feasible.
© 2007, ISA. Published by Elsevier Ltd. All rights reserved.
Keywords: Genetic algorithms; Multi-objective fuzzy decision-making; Auto-tuning; Model predictive control
2. Genetic algorithms
A genetic algorithm (GA) is an optimization method based
on natural selection: a population of individuals (candidate
values of the variables to be optimized) is created, the
individuals are evaluated, and the best members survive. The
survivors then reproduce, and as this cycle continues the
population inherits the traits of the strongest individuals,
so that it comes to contain individuals close to the
“optimum” values. This population can then be evaluated and
the best individual chosen. The process occurs in four basic
steps: Initialization, Reproduction, Crossover and Mutation
(e.g. [4]). A simple GA was used for these steps in this
work. This was done to keep the algorithm as simple as
possible and allow the focus to be on the tuning of MPCs
rather than the intricacies of GA development. The aim was
also to ensure that the algorithm employed was simple so
that it would be robust—particularly when combined with
multi-objective fuzzy decision-making. The first step required
is initialization.
This creates a population of individuals that are composed of
chromosomes; each chromosome in an individual will represent one of the variables that are to be optimized. For example, if an equation had two constants (C1 and C2) that were to be fitted to a data set, there would be one chromosome representing C1 and another representing C2 in each individual. The chromosomes are composed of L genes to form binary strings, which are populated randomly with zeros and ones. Fig. 1 shows what a population may look like for this example.

Once a population has been initialized the next step is reproduction. This comprises several steps. First, the chromosomes must be decoded into meaningful values. This is done by translating the binary strings that comprise the chromosomes into the numbers they represent (for example, “1010” → 2^3 + 2^1 = 10); then Eq. (1) is used to determine a numerical value for the variable of interest:

Ci = Cmin,i + (Cmax,i − Cmin,i) · b / (2^L − 1),  (1)

where
Ci = value of the variable to be optimized
i = index of the number of variables
Cmin,i = minimum value of the desired range of Ci
Cmax,i = maximum value of the desired range of Ci
b = numerical value of the binary string representing the variable
L = length of the binary string.

The variables from each individual are then input into the optimization problem and the resulting values are collected and evaluated. The individual that yields the worst result is replaced with a duplicate of the best individual. After reproduction the population needs to be updated in some way in order to converge on the “optimum” solution. This is where crossover and mutation come into play. Crossover is analogous to mating between individuals, where gene sections of two individuals are exchanged to form two new individuals, as illustrated in Fig. 2. As this process continues through each generation the best gene sections will become more prevalent, thus converging to an “optimum” solution. Mutation is a process by which an individual gene in a chromosome will be switched from a 1 to a 0 or vice versa. This can result in a significantly different value and is intended to spread out the search region and attempt to find the global optimum rather than just a local one. Fig. 3 illustrates mutation.

Fig. 4. Genetic algorithm flow chart.

Fig. 4 illustrates these steps in a flow chart of a conventional genetic algorithm. GAs have become quite popular in recent years for solving optimization problems. Further details on genetic algorithms can be found elsewhere, e.g. [5].

J.H. van der Lee et al. / ISA Transactions 47 (2008) 53–59 55

3. Multi-objective fuzzy optimization

Until a few decades ago most systems were optimized using a single objective function. Often the objective function accounted for the economic efficiency only. Multi-objective
optimization involves the simultaneous optimization of more than one objective function. A number of industrial systems have been optimized with multiple objective functions and constraints using a variety of algorithms, often generating a set of several equally good non-dominated solutions, or Pareto front. Further details on multi-objective optimization can be found elsewhere; [1], for example, have reviewed multi-objective optimization problems in chemical engineering.

Several extensions of GA have been developed to solve problems involving multi-objective optimization, e.g. [8,3]. As mentioned previously in this paper, we use a simple GA with multi-objective fuzzy decision making (MOFDM) [7] for the sake of both simplicity and robustness.

MOFDM is the second key component in the tuning algorithm. As the name suggests, it is useful in situations where decisions need to be made based on more than one objective, and where the objectives may only be able to be ordinally ranked. The method ranks a set of alternatives (A = {a1, a2, ..., am}) according to a set of objectives (O = {O1, O2, ..., On}) and the preference set (P = {b1, b2, ..., bn}) that one has for the objectives. P is a fuzzy set and as such can be based on qualitative statements, which is useful when it is difficult to quantify a preference, or when there are multiple objectives to consider. P gives a relative importance of each objective; these values can be adjusted to suit a desired response.

The basic structure of the MOFDM algorithm is general and can be adapted to whatever is determined to be a factor in the desired control performance for a given process. Numerical values can be given to qualitative statements of preferences by assigning a statement like “not important” a value of 0 and a statement like “extremely important” a value of 1, with appropriately scaled intermediate values. Further details on MOFDM can be found elsewhere, e.g. [7].

For example, consider a process with temperature and composition control whose control engineer determined that the “optimum” tuning parameters would be based on minimization of the total control movement and the time to steady state for each loop. To calculate the objectives in this case, the process performance measures (Fig. 5) integrated squared error (ISE) and time to steady state (e.g. [10]) could be used in the calculation of the objectives to be used in the MOFDM algorithm, as described in the following.

Minimization of total control movement:

O = 1 − (Σ ISE_MPC) / (Σ ISE_Base),  (2)

where ISE_MPC is the ISE for the MPC controller, ISE_Base is the ISE for the base (regulatory) controller, and if O < 0, then O = 0.

Minimization of time to steady state:

O = 1 − TimeSS_MPC / TimeSS_Base,  (3)

where TimeSS_MPC is the time to steady state for the MPC controller, TimeSS_Base is the time to steady state for the base (regulatory) controller, and if O < 0, then O = 0.

The objectives are calculated from the process data resulting from running a simulation using each of the following sets of alternative tuning parameters:

A = (S1, S2, S3, S4),

where Si, for i = 1 to 4, are the values of the four tuning parameters.

The objectives appear as follows; these objective sets are fuzzy sets expressed in Zadeh’s notation. The temperature time to steady state objective could be:

O1 = 0.5/S1 + 0.8/S2 + 0.5/S3 + 0.3/S4,

while the composition time to steady state objective could be:

O2 = 0.5/S1 + 0.2/S2 + 0.6/S3 + 0.8/S4.

The temperature ISE minimization objective could be:

O3 = 0.3/S1 + 0.7/S2 + 0.7/S3 + 0.3/S4,

while the composition ISE minimization objective could be:

O4 = 0.6/S1 + 0.9/S2 + 0.4/S3 + 0.3/S4,

with preference set (for tight composition control):

P1 = {0.6, 0.3, 0.3, 0.9},

with importance b = 1 − P1, i.e. b = {0.4, 0.7, 0.7, 0.1}.

The overall rank with preference set P1 was then calculated as follows:

D(S1) = (b1 ∪ O1) ∩ (b2 ∪ O2) ∩ (b3 ∪ O3) ∩ (b4 ∪ O4)
= (0.4 ∪ 0.5) ∩ (0.7 ∪ 0.5) ∩ (0.7 ∪ 0.3) ∩ (0.1 ∪ 0.6)
= 0.5 ∩ 0.7 ∩ 0.7 ∩ 0.6
= 0.5

D(S2) = (b1 ∪ O1) ∩ (b2 ∪ O2) ∩ (b3 ∪ O3) ∩ (b4 ∪ O4)
= (0.4 ∪ 0.8) ∩ (0.7 ∪ 0.2) ∩ (0.7 ∪ 0.7) ∩ (0.1 ∪ 0.9)
= 0.8 ∩ 0.7 ∩ 0.7 ∩ 0.9
= 0.7

D(S3) = (b1 ∪ O1) ∩ (b2 ∪ O2) ∩ (b3 ∪ O3) ∩ (b4 ∪ O4)
= (0.4 ∪ 0.5) ∩ (0.7 ∪ 0.5) ∩ (0.7 ∪ 0.7) ∩ (0.1 ∪ 0.4)
= 0.5 ∩ 0.7 ∩ 0.7 ∩ 0.4
= 0.4

D(S4) = (b1 ∪ O1) ∩ (b2 ∪ O2) ∩ (b3 ∪ O3) ∩ (b4 ∪ O4)
= (0.4 ∪ 0.3) ∩ (0.7 ∪ 0.8) ∩ (0.7 ∪ 0.3) ∩ (0.1 ∪ 0.3)
= 0.4 ∩ 0.8 ∩ 0.7 ∩ 0.3
= 0.3,

where D is the decision function.

Thus, the overall ranking with preference set P1 is: 1st = S2, 2nd = S1, 3rd = S3, 4th = S4. S4 is thrown out and two of the S2 tuning parameter individuals are used in the next iteration of the genetic algorithm.
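The two mechanical pieces of this example — the Eq. (1) chromosome decoding and the max–min MOFDM ranking — can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the function names (`decode`, `mofdm_rank`) and the dictionary representation of the fuzzy sets are our own choices, while the numerical values are those of the worked example above.

```python
def decode(bits, c_min, c_max):
    """Eq. (1): map a binary-string chromosome onto the range [c_min, c_max]."""
    L = len(bits)
    b = int(bits, 2)  # e.g. "1010" -> 10
    return c_min + (c_max - c_min) * b / (2 ** L - 1)

def mofdm_rank(objectives, preference):
    """Max-min MOFDM decision function: D(a) = min_j max(b_j, O_j(a)),
    with importance b_j = 1 - P_j. Returns (alternatives best-first, D)."""
    b = [1.0 - p for p in preference]
    alternatives = objectives[0].keys()
    D = {a: min(max(bj, O[a]) for bj, O in zip(b, objectives))
         for a in alternatives}
    return sorted(D, key=D.get, reverse=True), D

# Objective fuzzy sets from the example (memberships of alternatives S1..S4):
O1 = {"S1": 0.5, "S2": 0.8, "S3": 0.5, "S4": 0.3}  # temperature time to SS
O2 = {"S1": 0.5, "S2": 0.2, "S3": 0.6, "S4": 0.8}  # composition time to SS
O3 = {"S1": 0.3, "S2": 0.7, "S3": 0.7, "S4": 0.3}  # temperature ISE
O4 = {"S1": 0.6, "S2": 0.9, "S3": 0.4, "S4": 0.3}  # composition ISE
P1 = [0.6, 0.3, 0.3, 0.9]                          # tight composition control

ranking, D = mofdm_rank([O1, O2, O3, O4], P1)
# ranking is ["S2", "S1", "S3", "S4"]: S2 survives (D = 0.7), S4 (D = 0.3)
# is discarded and replaced by a duplicate of S2 in the next GA iteration.
```

Running the sketch reproduces the ranking derived above; the same two routines slot into the GA loop of Fig. 4, with `decode` applied during reproduction and `mofdm_rank` deciding which individuals survive.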
Table 1
Preferences for sets 1 and 2
Table 2
Preferences for sets 1 to 4
Fig. 9. System response for a temperature set point change from 50–70 °C, using preference set 1.
Fig. 11. System response for a temperature set point change from 50–70 °C, using preference set 2.
preference sets. It was found that the algorithm was capable of manipulating the tuning parameters to match the preference.

References

[1] Bhaskar V, Gupta SK, Ray AK. Applications of multiobjective optimization in chemical engineering. Rev Chem Eng 2000;16:1–54.
[2] Cutler CR, Ramaker BL. Dynamic matrix control: A computer control algorithm. In: Proceedings joint automatic controls conference. Paper WP5-B. 1980.
[3] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6:182–97.
[4] Goldberg D. Genetic algorithms. USA: Addison-Wesley; 1989.
[5] Edgar TF, Himmelblau DM, Lasdon LS. Optimization of chemical processes. USA: McGraw-Hill; 2001.
[6] Meadows ES. MPC tuning. In: Canadian society of chemical engineering conference. 2001.
[7] Ross TJ. Fuzzy logic with engineering applications. 2nd ed. New York (NY, USA): McGraw-Hill; 2004.
[8] Schaffer JD. Some experiments in machine learning using vector evaluated genetic algorithms. Ph.D. thesis. Nashville (TN, USA): Vanderbilt University; 1984.
[9] Sridhar R, Cooper D. A tuning strategy for unconstrained SISO model predictive control. Ind Eng Chem Res 1997;36:729–46.
[10] Svrcek WY, Mahoney DP, Young BR. A real-time approach to process control. 2nd ed. Chichester (UK): John Wiley and Sons, Ltd.; 2006.