
ISA Transactions 47 (2008) 53–59

www.elsevier.com/locate/isatrans

A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making
J.H. van der Lee a , W.Y. Svrcek b , B.R. Young c,∗
a Virtual Materials Group Inc., 657 Hawkside Mews NW, Calgary, Alberta T3G 3S1, Canada
b Department of Chemical & Petroleum Engineering, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4, Canada
c Department of Chemical & Materials Engineering, University of Auckland, Private Bag 92019, Auckland, New Zealand

Received 6 December 2005; accepted 22 June 2007


Available online 17 September 2007

Abstract

Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications, and because of this the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases, but since MPCs can differ significantly these tuning methods often become inapplicable and a trial and error tuning approach must be used instead. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC, and that multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system.
A case study is presented in order to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary in order to meet each definition of optimum control and, therefore, that the generalized automated tuning algorithm approach for tuning MPCs is feasible.
© 2007, ISA. Published by Elsevier Ltd. All rights reserved.

Keywords: Genetic algorithms; Multi-objective fuzzy decision-making; Auto-tuning; Model predictive control

∗ Corresponding author. Tel.: +64 9 373 7599; fax: +64 9 373 7463.
E-mail address: b.young@auckland.ac.nz (B.R. Young).

1. Introduction

Model predictive controllers (MPC) are an important tool in a process control engineer's repertoire. Their ability to handle constraints, non-linearities and unusual dynamics in a wide range of multiple-input multiple-output control schemes has fuelled their success. As such, there has been a large number of published works extending the knowledge and use of MPC; these include a wide range of subjects such as detailed analysis of a specific type of MPC, improvement of an existing or proposal of a new MPC, or a case study showing the issues involved in MPC implementation. Despite all of this work, a general method of tuning MPC in all of its incarnations has remained elusive (e.g. [6]). The two main factors for this are that MPCs can be very complex and difficult to study analytically, especially considering MPCs can be very different from one another. Additionally, it is very difficult to quantify the best way to tune a particular system, especially in a MIMO case. To work around this, researchers have developed tuning methods for MPCs that are simplified in some way, for example, by looking at a single input, single output (SISO) case with first order plus dead time models and setting the control objective to be the minimization of integrated squared error (ISE), e.g. [9]. While this is useful some of the time, there is a lack of flexibility in these approaches that can lead to tuning through trial and error methods, which are time consuming and may not yield the "optimum" results.

With this in mind an automated generalized tuning algorithm was developed capable of handling the many variations of MPC as well as the wide range of definitions of "optimum" tuning. The key features of this tuning algorithm are the use of a genetic algorithm with a multi-objective fuzzy decision-making algorithm as the effectiveness function. A brief summary of genetic algorithms and multi-objective fuzzy decision-making will be presented along with how each is used in the tuning algorithm.

Fig. 1. Chromosomes for an individual with two variables C1 and C2.

Fig. 2. Crossover.

Fig. 3. Mutation.

2. Genetic algorithms

A genetic algorithm (GA) is an optimization method based
on natural selection, in that a population of individuals
(variables to optimize) is created and when evaluated the
best members survive. They then go on to reproduce and
as this cycle continues the population inherits the traits of
the strongest individuals, resulting in a population that will
contain individuals that are close to the “optimum” values.
This population can then be evaluated and the best individual
chosen. This process occurs in four basic steps: Initialization,
Reproduction, Crossover and Mutation (e.g. [4]). Simple GA
was used for these steps in this work. This was done in order
to keep the algorithm as simple as possible and allow the
focus to be on the tuning of MPCs rather than the intricacies
of GA algorithm development. The aim was also to ensure
that the algorithm employed was simple so that it would
be robust—particularly when combined with multi-objective
fuzzy decision-making. The first step required is initialization.
This creates a population of individuals that are composed of
chromosomes; each chromosome in an individual will represent one of the variables that are to be optimized. For example, if an equation had two constants (C1 and C2) that were to be fitted to a data set, there would be one chromosome representing C1 and another representing C2 in each individual. The chromosomes are composed of L genes to form binary strings, which are populated randomly with zeros and ones. Fig. 1 shows what a population may look like for this example.

Once a population has been initialized the next step is reproduction. This comprises several steps. First, the chromosomes must be decoded into meaningful values. This is done by translating the binary strings that comprise the chromosomes into the numbers they represent (for example "1010" corresponds to 2^3 + 2^1 = 10), then Eq. (1) is used to determine a numerical value for the variable of interest:

C_i = C_min,i + (C_max,i − C_min,i) · b / (2^L − 1),    (1)

where

C_i = value of the variable to be optimized
i = index of the number of variables
C_min,i = minimum value of the desired range of C_i
C_max,i = maximum value of the desired range of C_i
b = numerical value of the binary string representing the variable
L = length of the binary string.

The variables from each individual are then input into the optimization problem and the resulting values are collected and evaluated. The individual that yields the worst result is replaced with a duplicate of the best individual. After reproduction the population needs to be updated in some way in order to converge on the "optimum" solution. This is where crossover and mutation come into play. Crossover is analogous to mating between individuals, where gene sections of two individuals are exchanged to form two new individuals, as illustrated in Fig. 2. As this process continues through each generation the best gene sections will become more prevalent, thus converging to an "optimum" solution. Mutation is a process by which an individual gene in a chromosome will be switched from a 1 to a 0 or vice versa. This can result in a significantly different value and is intended to spread out the search region and attempt to find the global optimum rather than just a local one. Fig. 3 illustrates mutation.

Fig. 4 illustrates these steps in a flow chart of a conventional genetic algorithm. GAs have become quite popular in recent years for solving optimization problems. Further details on genetic algorithms can be found elsewhere, e.g. [5].

Fig. 4. Genetic algorithm flow chart.
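The paper does not list its GA implementation, but the mechanics described in this section can be sketched in a few dozen lines. The Python sketch below is an illustration only (the fitness function, parameter ranges and helper names are assumptions, not taken from the paper): binary chromosomes are decoded with Eq. (1), the worst individual is replaced by a duplicate of the best, and single-point crossover and bit-flip mutation are applied. The default settings mirror those reported later for the case study (a population of ten, 32-bit chromosomes, a crossover rate of 0.9 and a mutation rate of 0.0015 over 100 generations).

```python
import random

def decode(chromosome, c_min, c_max):
    # Eq. (1): C_i = C_min_i + (C_max_i - C_min_i) * b / (2^L - 1)
    L = len(chromosome)
    b = int("".join(map(str, chromosome)), 2)
    return c_min + (c_max - c_min) * b / (2**L - 1)

def random_individual(n_vars, L):
    # One chromosome (a binary string of length L) per variable to optimize
    return [[random.randint(0, 1) for _ in range(L)] for _ in range(n_vars)]

def crossover(parent1, parent2):
    # Single-point crossover applied chromosome by chromosome
    child1, child2 = [], []
    for c1, c2 in zip(parent1, parent2):
        point = random.randint(1, len(c1) - 1)
        child1.append(c1[:point] + c2[point:])
        child2.append(c2[:point] + c1[point:])
    return child1, child2

def mutate(individual, rate):
    # Bit-flip mutation: switch a gene from 1 to 0 or vice versa
    return [[1 - g if random.random() < rate else g for g in chrom]
            for chrom in individual]

def run_ga(fitness, ranges, L=32, pop_size=10, generations=100,
           p_crossover=0.9, p_mutation=0.0015):
    pop = [random_individual(len(ranges), L) for _ in range(pop_size)]
    for _ in range(generations):
        decoded = [[decode(c, lo, hi) for c, (lo, hi) in zip(ind, ranges)]
                   for ind in pop]
        scores = [fitness(vals) for vals in decoded]
        # Reproduction: replace the worst individual with a copy of the best
        pop[scores.index(min(scores))] = [c[:] for c in pop[scores.index(max(scores))]]
        # Crossover and mutation to form the next generation
        next_pop = []
        random.shuffle(pop)
        for p1, p2 in zip(pop[0::2], pop[1::2]):
            if random.random() < p_crossover:
                p1, p2 = crossover(p1, p2)
            next_pop += [mutate(p1, p_mutation), mutate(p2, p_mutation)]
        pop = next_pop
    decoded = [[decode(c, lo, hi) for c, (lo, hi) in zip(ind, ranges)] for ind in pop]
    return max(decoded, key=fitness)

# Toy usage: fit two constants C1 and C2 (cf. Fig. 1) by maximizing a simple fitness
best = run_ga(lambda v: -((v[0] - 1.0)**2 + (v[1] - 2.0)**2),
              ranges=[(0.0, 5.0), (0.0, 5.0)])
print(best)
```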
3. Multi-objective fuzzy optimization

Until a few decades ago most systems were optimized using a single objective function. Often the objective function accounted for economic efficiency only. Multi-objective optimization involves the simultaneous optimization of more than one objective function. A number of industrial systems have been optimized with multiple objective functions and constraints using a variety of algorithms, often generating a set of several equally good non-dominating solutions, or a Pareto front. Further details on multi-objective optimization can be found elsewhere; e.g., [1] have reviewed multi-objective optimization problems in chemical engineering.

Several extensions of GA have been developed to solve problems involving multi-objective optimization, e.g. [8,3]. As mentioned previously in this paper, we use simple GA with multi-objective fuzzy decision making (MOFDM) [7] for the sake of both simplicity and robustness.

MOFDM is the second key component in the tuning algorithm. As the name would suggest, it is useful in situations when decisions need to be made based on more than one objective, and where the definition of the objectives may only be able to be ordinally ranked. The method ranks a set of alternatives (A = {a1, a2, ..., am}) according to a set of objectives (O = {O1, O2, ..., On}) and the preference set (P = {b1, b2, ..., bn}) that one has for the objectives. P is a fuzzy set and as such can be based on qualitative statements, which is useful when it is difficult to quantify a preference, or when there are multiple objectives to consider. P gives a relative importance to each objective; these values can be adjusted to suit a desired response.

The basic structure of the MOFDM algorithm is general and can be adapted to whatever is determined to be a factor in the desired control performance for a given process. Numerical values can be given to qualitative statements of preference by assigning a statement like "not important" a value of 0 and a statement like "extremely important" a value of 1, with appropriately scaled intermediate values. Further details on MOFDM can be found elsewhere, e.g. [7].

For example, consider a process with temperature and composition control whose control engineer determined that the "optimum" tuning parameters would be based on minimization of the total control movement and the time to steady state for each loop. To calculate the objectives in this case, the process performance measures (Fig. 5), integrated squared error (ISE) and time to steady state (e.g. [10]), could be used in the calculation of the objectives to be used in the MOFDM algorithm, as described in the following.

Minimization of total control movement:

O = 1 − (Σ ISE_MPC) / (Σ ISE_Base),    (2)

where ISE_MPC is the ISE for the MPC controller, ISE_Base is the ISE for the base (regulatory) controller, and if O < 0 then O = 0.

Minimization of time to steady state:

O = 1 − TimeSS_MPC / TimeSS_Base,    (3)

where TimeSS_MPC is the time to steady state for the MPC controller, TimeSS_Base is the time to steady state for the base (regulatory) controller, and if O < 0 then O = 0.

The objectives are calculated from the process data resulting from running a simulation using each of the following sets of alternative tuning parameters:

A = (S1, S2, S3, S4),

where Si, for i = 1 to 4, are the values of the four tuning parameters.

The objectives appear as follows. These objective sets are fuzzy sets expressed in Zadeh's notation. The temperature time to steady state objective could be:

O1 = 0.5/S1 + 0.8/S2 + 0.5/S3 + 0.3/S4

while the composition time to steady state objective could be:

O2 = 0.5/S1 + 0.2/S2 + 0.6/S3 + 0.8/S4.

The temperature ISE minimization objective could be:

O3 = 0.3/S1 + 0.7/S2 + 0.7/S3 + 0.3/S4

while the composition ISE minimization objective could be:

O4 = 0.6/S1 + 0.9/S2 + 0.4/S3 + 0.3/S4

with preference set (for tight composition control):

P1 = {0.6, 0.3, 0.3, 0.9},

with importance b̄ = 1 − P1 = {b̄1, b̄2, b̄3, b̄4} = {0.4, 0.7, 0.7, 0.1}.

The overall rank with preference set P1 was then calculated as follows:

D(S1) = (b̄1 ∪ O1) ∩ (b̄2 ∪ O2) ∩ (b̄3 ∪ O3) ∩ (b̄4 ∪ O4)
      = (0.4 ∪ 0.5) ∩ (0.7 ∪ 0.5) ∩ (0.7 ∪ 0.3) ∩ (0.1 ∪ 0.6)
      = 0.5 ∩ 0.7 ∩ 0.7 ∩ 0.6
      = 0.5

D(S2) = (b̄1 ∪ O1) ∩ (b̄2 ∪ O2) ∩ (b̄3 ∪ O3) ∩ (b̄4 ∪ O4)
      = (0.4 ∪ 0.8) ∩ (0.7 ∪ 0.2) ∩ (0.7 ∪ 0.7) ∩ (0.1 ∪ 0.9)
      = 0.8 ∩ 0.7 ∩ 0.7 ∩ 0.9
      = 0.7

D(S3) = (b̄1 ∪ O1) ∩ (b̄2 ∪ O2) ∩ (b̄3 ∪ O3) ∩ (b̄4 ∪ O4)
      = (0.4 ∪ 0.5) ∩ (0.7 ∪ 0.6) ∩ (0.7 ∪ 0.7) ∩ (0.1 ∪ 0.4)
      = 0.5 ∩ 0.7 ∩ 0.7 ∩ 0.4
      = 0.4

D(S4) = (b̄1 ∪ O1) ∩ (b̄2 ∪ O2) ∩ (b̄3 ∪ O3) ∩ (b̄4 ∪ O4)
      = (0.4 ∪ 0.3) ∩ (0.7 ∪ 0.8) ∩ (0.7 ∪ 0.3) ∩ (0.1 ∪ 0.3)
      = 0.4 ∩ 0.8 ∩ 0.7 ∩ 0.3
      = 0.3,

where D is the decision function.

Thus, the overall ranking with preference set P1 is: 1st = S2, 2nd = S1, 3rd = S3, 4th = S4. S4 is thrown out and two of the S2 tuning parameter individuals are used in the next iteration of the genetic algorithm.
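To make the arithmetic above easy to check, the short sketch below (an illustration, not code from the paper) reproduces the worked example: each alternative Si receives the decision value D(Si) as the minimum over objectives of the maximum of b̄j and Oj(Si), and the alternatives are then ranked by D. Running it returns the same ranking, S2 > S1 > S3 > S4.

```python
# Membership of each alternative S1..S4 in each objective O1..O4
# (values taken from the worked example above)
O = {
    "O1": {"S1": 0.5, "S2": 0.8, "S3": 0.5, "S4": 0.3},  # temperature time to steady state
    "O2": {"S1": 0.5, "S2": 0.2, "S3": 0.6, "S4": 0.8},  # composition time to steady state
    "O3": {"S1": 0.3, "S2": 0.7, "S3": 0.7, "S4": 0.3},  # temperature ISE
    "O4": {"S1": 0.6, "S2": 0.9, "S3": 0.4, "S4": 0.3},  # composition ISE
}

# Preference set P1 (tight composition control) and its importance b = 1 - P
P1 = {"O1": 0.6, "O2": 0.3, "O3": 0.3, "O4": 0.9}
b = {obj: round(1.0 - p, 2) for obj, p in P1.items()}  # {'O1': 0.4, 'O2': 0.7, 'O3': 0.7, 'O4': 0.1}

def decision(alternative):
    # D(S) = intersection (min) over objectives of the union (max) of b_j and O_j(S)
    return min(max(b[obj], members[alternative]) for obj, members in O.items())

ranking = sorted(O["O1"], key=decision, reverse=True)
for s in ranking:
    print(s, decision(s))
# Expected output: S2 0.7, S1 0.5, S3 0.4, S4 0.3, i.e. the ranking S2 > S1 > S3 > S4
```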
Fig. 5. ISE and time to steady state.

4. MPC tuning

The basic structure of the GA is not problem specific, is quite flexible and can be used on a wide range of problems with little modification. The MOFDM is similarly flexible and is capable of determining which of a set of alternatives most closely satisfies a set of multiple qualitative objectives, which may be typical in the definition of "optimum" process behaviour. A genetic algorithm using MOFDM in place of a conventional fitness function provides the basis for the tuning algorithm, with the benefit of it being automated while being capable of evaluating process behaviour in a way similar to that of a control engineer. This algorithm would, at the very least, be able to provide a starting point from which a control engineer could fine tune a set of tuning parameters, or at best, give a set of tuning parameters which could be used directly in the process. The flow chart in Fig. 6 shows this algorithm.

Fig. 6. MPC tuning algorithm.
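A rough sketch of how the two components fit together in the loop of Fig. 6 is given below. The names simulate_closed_loop, baseline and preferences are hypothetical stand-ins for a process simulator and its bookkeeping, not part of the paper; the point is only that the MOFDM decision value replaces a conventional GA fitness, with performance measures converted to objective memberships in the spirit of Eqs. (2) and (3).

```python
def mofdm_fitness(tuning_params, simulate_closed_loop, baseline, preferences):
    """MOFDM decision value used as the GA fitness for one set of tuning parameters."""
    # Hypothetical simulator call returning performance measures such as
    # {"T_ISE": ..., "T_tss": ..., "L_over": ...} for this candidate tuning.
    performance = simulate_closed_loop(tuning_params)

    # Objective memberships in [0, 1], larger is better.  Ratios against a base
    # (regulatory) controller follow the form of Eqs. (2) and (3):
    # O = 1 - value_MPC / value_Base, clipped at zero.
    objectives = {
        name: max(0.0, 1.0 - performance[name] / baseline[name])
        for name in preferences
    }

    # Decision value D = min over objectives of max(1 - preference, membership),
    # i.e. the fuzzy intersection of each objective with its importance.
    return min(max(1.0 - preferences[name], objectives[name]) for name in preferences)

# Sketch of use with a GA such as the one outlined in Section 2: candidate tuning
# parameters are decoded from the chromosomes and scored by mofdm_fitness.
# best = run_ga(lambda params: mofdm_fitness(params, simulate_closed_loop,
#                                            baseline, preferences),
#               ranges=[(0.0, 10.0)] * 4)
```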

5. Hot water mixing tank case study

In order to assess the performance of this tuning algorithm it was decided to implement it on a relatively simple case study. For this, a simulation of a tank mixing three water streams of varying temperatures was developed. A schematic of this process can be seen in Fig. 7. The hot and cold streams are manipulated in order to control the level in the tank and the temperature of the stream exiting the tank. The control strategy used to control the temperature and level employed an unconstrained linear DMC [2]. The step response models were determined by introducing pseudo random binary sequence (PRBS) signals separately to the hot and cold water streams, and then using the system identification toolbox in MATLAB to identify the model that best fit the data, which was then used to generate a unit step response for the DMC.

Fig. 7. Mixing tank schematic with DMC.

The expected disturbance to the system was a set point change of 50–70 °C in the temperature of the exit water stream. As mentioned previously, there is little in the way of a published tuning methodology for the MPC employed in the case study, apart from trial and error (e.g. [6]) and Sridhar and Cooper's [9] work on unconstrained single input, single output MPC. However, the efficacy of this method can be evaluated from the calculated performance metrics, i.e. integrated squared error and time to steady state. It was decided that the overshoot, integrated squared error and time to steady state would be used to calculate the objectives for the MOFDM, as these parameters would provide a very good indication of the process behaviour. The resulting values for temperature overshoot (T_over), level overshoot (L_over), temperature ISE (T_ISE), level ISE (L_ISE), temperature time to steady state (T_tss) and level time to steady state (L_tss) can be seen in Tables 1 and 2.

Table 1. Preferences for sets 1 and 2.

It was assumed that the DMC weighting and move suppression coefficients for each pairing (w1, λ1, w2 and λ2) were the tunable parameters and that the size of the models used in the DMC was fixed, although the algorithm could be
adapted to adjust these. Each of these tuning parameters was represented by a chromosome in an individual in the GA. The algorithm was set to have ten individuals, each representing a set of tuning parameters. Each chromosome was composed of 32 bits (genes), one chromosome for each of these parameters making up an individual. A crossover rate of 0.9 was used along with a mutation rate of 0.0015 for 100 generations. The final tuning parameters used were selected from the 100th generation using the MOFDM. Four preference sets were used in tuning the algorithm to show how the algorithm arrives at the desired result. The results for preference sets 1 and 2 can be seen in Figs. 8 through 11 and Table 1. The results for all four preference sets can be seen in Table 2.

Table 2. Preferences for sets 1 to 4.
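For concreteness, the GA configuration just described might be written down as follows. The decoding ranges for w1, λ1, w2 and λ2 are placeholders, since the paper does not report the search bounds that were used.

```python
# GA settings reported for the case study.
POPULATION_SIZE = 10      # individuals, each a candidate (w1, lambda1, w2, lambda2)
BITS_PER_PARAMETER = 32   # genes per chromosome
CROSSOVER_RATE = 0.9
MUTATION_RATE = 0.0015
GENERATIONS = 100

# One chromosome per tunable DMC parameter.  The (min, max) decoding ranges are
# placeholders, not values taken from the paper.
TUNING_PARAMETERS = {
    "w1": (0.0, 10.0),       # weighting, first pairing
    "lambda1": (0.0, 10.0),  # move suppression, first pairing
    "w2": (0.0, 10.0),       # weighting, second pairing
    "lambda2": (0.0, 10.0),  # move suppression, second pairing
}

def decode_individual(individual, ranges=TUNING_PARAMETERS, L=BITS_PER_PARAMETER):
    # Apply Eq. (1) to each 32-bit chromosome in turn.
    values = {}
    for (name, (lo, hi)), chromosome in zip(ranges.items(), individual):
        b = int("".join(str(gene) for gene in chromosome), 2)
        values[name] = lo + (hi - lo) * b / (2**L - 1)
    return values
```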
Preference set 1 was primarily concerned with minimizing the amount of overshoot on the level control during a temperature set point change. As such, L_over was given a value of P = 0.9 and the time to steady state (T_tss) was given a value of P = 0.01, as were all other preference values.

The preference set was input into the genetic algorithm and the average tuning parameters and standard deviation of the tuning parameters can be seen in Fig. 8.

The set of tuning parameters that most closely satisfied preference set 1, as determined from the MOFDM algorithm, were then used to control the system, as seen in Fig. 9.

Fig. 8. Average and standard deviation of tuning parameters for preference set 1.

Fig. 9. System response for a temperature set point change from 50–70 °C, using preference set 1.
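In the notation of Section 3, preference set 1 amounts to a preference value of 0.9 on level overshoot and 0.01 on the five remaining objectives. A minimal sketch of how such a preference set and its importance weights might be assembled is shown below; the objective labels are shorthand for this case study rather than symbols from the paper.

```python
# Six objectives used in the case study: overshoot, ISE and time to steady state
# for the temperature (T) and level (L) loops.
OBJECTIVES = ["T_over", "L_over", "T_ISE", "L_ISE", "T_tss", "L_tss"]

# Preference set 1: level overshoot matters most (P = 0.9), everything else 0.01.
preference_set_1 = {name: 0.9 if name == "L_over" else 0.01 for name in OBJECTIVES}

# Importance weights used in the decision function, b = 1 - P.
importance_1 = {name: round(1.0 - p, 2) for name, p in preference_set_1.items()}
print(importance_1)   # L_over -> 0.1, all other objectives -> 0.99
```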
Preference set 2 was for a case with a small level overshoot and a smaller time to steady state in the temperature control loop. In this case, minimizing the amount of level overshoot is very important but the amount of time taken by the temperature loop to reach steady state is also of some importance. As such, L_over was given a P = 0.9 and T_tss was given a P = 0.6. All other objectives were given a P = 0.01.

Once again, the preference set was input into the genetic algorithm and the average tuning parameters and standard deviation of the tuning parameters can be seen in Fig. 10.

Fig. 10. Average and standard deviation of tuning parameters for preference set 2.

The set of tuning parameters that most closely satisfied preference set 2 were then determined using the MOFDM algorithm, and these were then used to control the system, as seen in Fig. 11.

Fig. 11. System response for a temperature set point change from 50–70 °C, using preference set 2.

As is obvious between preference sets 1 and 2, the time to steady state for the temperature loop is significantly better if the amount of level overshoot is compromised slightly.

Two additional preference sets were tested, and the results of these can be seen in Table 2. Preference set 3 was a case that considered overshoot and time to steady state to be of medium and equal importance, while preference set 4 considered the case where overshoot and time to steady state were both of great importance. For both cases, all other preferences were set to 0.01.

As the results show, the algorithm yielded tuning parameters that match the desired response of the system. However, it is important to remember that the objectives are tied to one another and the user needs to be careful of conflicts between preferences. In addition to this, process interactions will dictate the extent to which the user can specify the response of the system.

6. Conclusions

This paper outlines the development of a generalized automated tuning algorithm whose structure allows it to be used on a wide variety of different controllers and control systems. The algorithm is used to determine tuning parameters that fit the "optimum" response as defined by a control engineer. A mixing tank case is used to illustrate the capability of this tuning algorithm to identify "optimum" tuning parameters for four
preference sets. It was found that the algorithm was capable of manipulating the tuning parameters to match the preference.

References

[1] Bhaskar V, Gupta SK, Ray AK. Applications of multiobjective optimization in chemical engineering. Rev Chem Eng 2000;16:1–54.
[2] Cutler CR, Ramaker BL. Dynamic matrix control: A computer control algorithm. In: Proceedings joint automatic control conference. Paper WP5-B. 1980.
[3] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6:182–97.
[4] Goldberg D. Genetic algorithms. USA: Addison-Wesley; 1989.
[5] Edgar TF, Himmelblau DM, Lasdon LS. Optimization of chemical processes. USA: McGraw-Hill; 2001.
[6] Meadows ES. MPC tuning. In: Canadian society of chemical engineering conference. 2001.
[7] Ross TJ. Fuzzy logic with engineering applications. 2nd ed. New York (NY, USA): McGraw-Hill; 2004.
[8] Schaffer JD. Some experiments in machine learning using vector evaluated genetic algorithms. Ph.D. thesis. Nashville (TN, USA): Vanderbilt University; 1984.
[9] Sridhar R, Cooper D. A tuning strategy for unconstrained SISO model predictive control. Ind Eng Chem Res 1997;36:729–46.
[10] Svrcek WY, Mahoney DP, Young BR. A real-time approach to process control. 2nd ed. Chichester (UK): John Wiley and Sons, Ltd.; 2006.
