
AI Communications 30 (2017) 347–362

DOI 10.3233/AIC-170741
IOS Press

Adaptive simulated annealing for tuning PID controllers
Luis Fernando Fraga-Gonzalez a , Rita Q. Fuentes-Aguilar a , Alejandro Garcia-Gonzalez a and
Gildardo Sanchez-Ante b,∗
a Tecnologico de Monterrey, Campus Guadalajara, Av. Gral Ramon Corona 2514, Zapopan, Jal 45201, Mexico
b Universidad Politecnica de Yucatan, Carretera Merida-Tetiz, Km. 4.5, Ucu, YUC 97357, Mexico

Abstract. PID controllers are one of the most popular types of controllers found in the industry; they require determining three
real values to minimize the error over time and to deal with specific process requirements. Finding such values has been the subject
of extensive research, and many popular algorithms and methods exist to accomplish this. One of these methods is Simulated
Annealing. In this paper, we study the use of the re-annealing characteristic of Adaptive Simulated Annealing (ASA) for PID
tuning in 20 benchmark systems. This adaptive version gives special treatment to each parameter of the search space. We compare
the results of ASA with a simple SA algorithm. An extra comparison with a Particle Swarm Optimization algorithm was made
to provide some information on how ASA behaves compared with another optimization-based method. The results show that
using an adaptive algorithm effectively improves the performance of the tested systems.
Keywords: Adaptive Simulated Annealing, meta-heuristics, optimization, automatic control, PID controller

* Corresponding author. E-mail: gildardo.sanchez@upy.edu.mx.

0921-7126/17/$35.00 © 2017 – IOS Press and the authors. All rights reserved

1. Introduction

Along with the creation of complex industrial systems came the need to maintain them under certain operating conditions. This was achieved by means of many human operators until control theory was developed and automated control schemes were put in practice. A good example of that evolution can be found in the chemical industry [18], as well as in some others. Among such control approaches it is possible to mention root locus controller design, frequency domain control methods, and proportional-integral-derivative (PID) controllers [21]. As noted in [21], by 2001 more than half of the industrial controllers were either PID controllers or modifications of such schemes. PID controllers make use of three different terms: proportional, integral and derivative gains. The proportional gain depends on the current error, the integral depends on past errors, and the derivative is a prediction of future errors. The sum of those three elements is used to adjust the process. The main advantage of PID controllers is that they can be applied to most control systems even when the mathematical representation of the system is unknown. Also, PID controllers improve both transient and steady-state responses [2].

The general form for PID controllers is given by equation (1):

    y(t) = kp e(t) + ki ∫₀ᵗ e(τ) dτ + kd de(t)/dt    (1)

where y(t) is the output of the controller; kp, ki and kd are constants for the proportional, integral and derivative terms in the equation, also called gains; e(t) = yref − y(t) is the error; yref is the reference; and t represents the time.

In general, the controller is designed to improve certain characteristics of the system, such as the maximum overshoot (how far above the desired final value the highest peak is), the settling time (how long it takes for the system to get close to the final value), the final value, and the time the oscillations remain in the system.

Despite the simplicity of PID controllers, in most cases the values for the proportional, integral and derivative gains are determined in situ. There are several systematic methods to tune PID controllers. Perhaps the most popular one is the Ziegler–Nichols method [32], which is often combined with manual tuning to produce the desired behavior of the controlled system [21]. Unfortunately, the Ziegler–Nichols method cannot be applied to tune PID controllers for every system; particularly, systems for which there is no proportional gain that causes the system output to oscillate cannot be controlled using this method.

Thus, some researchers have considered the PID tuning problem from an optimization perspective. In such a case, the cost function can be represented in many ways, ranging from a single-valued function (the total error) to a multi-objective function. Once the PID tuning problem is stated as an optimization one, it is not surprising that researchers have approached its solution via different heuristic and meta-heuristic methods coming from the artificial intelligence community. We cover some of the most representative works on this matter in the next section.

In this work, we study the use of the re-annealing characteristic of Adaptive Simulated Annealing (ASA) for PID tuning in benchmark systems: a variety of representative systems found in industrial controllers. We make an extensive analysis of an adaptive version of simulated annealing that gives special treatment to each parameter of the search space for 20 different benchmark systems and compare it to the performance of standard simulated annealing.

The paper shows a fruitful effect in the tuning of the PID in comparison with the classical Ziegler–Nichols method and with the non-adaptive version of Simulated Annealing for the benchmark systems tested. An extra comparison with a Particle Swarm Optimization algorithm was made, and it is presented in the results section. Finally, some conclusions are derived from those results.

2. Previous work

Automatic PID tuning algorithms have received a lot of attention due to their importance in industry. This has been particularly true in the last two decades [28]. Some of the methods applied include meta-heuristic techniques such as genetic algorithms, particle swarm optimization and simulated annealing, among others. In the following paragraphs we give a short overview of some of these proposals.

A genetic algorithm (GA) is an optimization method based on the principles of natural selection and genetics [19]. The genetic algorithm generates a population of candidate solutions for the problem, and a number of genetic operators modify the population by emulating actions such as crossover, mutation and reproduction. This technique has been applied to automatic PID tuning [9,11,22,23,28,30]. In [28], this algorithm was used to tune the PID controllers for four benchmark systems which are difficult to tune using conventional methods.

The authors of [22] approached the PID tuning problem for two specific systems with a GA, using the sum of the squared error as the cost function to optimize the PID values; the results improved over the classical Ziegler–Nichols algorithms.

Working with GAs is not always straightforward. GAs work with strings that represent the solutions, rather than with the solutions themselves. In some settings, GAs are known to need more computational time than other methods to achieve good solutions [9,19].

Another possibility is to use Particle Swarm Optimization (PSO). PSO is an optimization technique based on having a set of points distributed in the search space and moving them through it with specific velocities, looking for an optimum solution [13]. PSO was designed for numerical optimization problems in the space of the real numbers, which means that its application to engineering problems is, in general, easier than with genetic algorithms, since the algorithm does not require a candidate solution to be represented as a string. In [29], the authors propose a modified hybrid PSO and Simulated Annealing (SA) algorithm. The focus of their work is on the initialization of the particles in the search space, and the tests are conducted over well-known benchmarks in the optimization area. Our approach differs from all this in that the algorithm is simpler: fewer parameters and less complexity in the code.

In a recent work, the authors of [20] describe how to use SA to tune a multi-variable cross-coupled PID controller for a lab-scale helicopter. The authors offer a comparison against evolutionary techniques such as differential evolution (DE) and GA; for the domain in which the authors tested their method, it outperformed GA and DE. The approach of considering the tuning problem as a multi-objective optimization instance is also investigated by the authors of [16]. In that work, a non-dominated Sorting Particle Swarm Optimization (NSPSO) is applied to the design of multi-objective robust PID controllers. The proposed method is tested in several industrial settings, showing good robustness.

For the specific problem of tuning PID controllers, significant work has been done using PSO. For instance, in [6] the authors developed a PSO-based method to optimize an automatic voltage regulator.

In the same way, in [27] the design of a PID controller for an air heater temperature control system is reported; the approach is based on the application of PSO. In both papers, the results show that the PSO-optimized PID controller is able to improve the performance when compared to the Ziegler–Nichols method as well as to GA.

In [1], the PSO approach was applied to the design of a DC motor drive and compared with a fuzzy logic approach, finding that this method improved the dynamic performance of the system. In [26], the authors proposed a PSO implementation to tune PID controllers. According to their findings, the controllers tuned using the PSO approach had less overshoot than those obtained with the classical Ziegler–Nichols method.

In [17], a Support Vector Machine (SVM) was used to represent the dynamics of a turbine engine, and PSO was employed to tune the PID controller; the controllers for two benchmark functions with high dimensionality and a marine system are tuned in [5] using PSO with adaptive mutation.

In [15], a comparison between GA, PSO, SA and the classical Ziegler–Nichols method was made for the PID controller tuning of a third order system representing a DC motor. Out of those three algorithms, the SA approach gave the best results concerning overshoot and settling time, and the PID gains were the lowest for the three parameters.

Of particular interest for this work is what has been done with another technique called Simulated Annealing (SA). Simulated Annealing is a stochastic technique used to find an optimum solution given a search space of independent variables [14]; it is based on the physical process of bringing a heated solid material to a low-energy state. One form of simulated annealing is the Adaptive Simulated Annealing (ASA) algorithm, extensively described by Lester Ingber [10]; this form of simulated annealing is especially useful for problems where changes in the variables of the search space have varied sensitivities in the cost function. Such a technique suits the problem of PID tuning, where changes to the proportional, integral and derivative gain values can have very different effects. PID controllers for some specific plants have been tuned using this technique: an axial motion system represented by two plants [31], a high-performance drilling process [8] and a real-time process occurring in a pressurized tank [7]. In [24], the authors consider the problem of tuning a PID controller by means of a multi-objective optimization procedure; the process under consideration is an unstable first order plus dead time system.

3. Background

3.1. Simulated annealing

The simulated annealing algorithm (SA) was proposed by Kirkpatrick [14] in 1983 as a technique to solve optimization problems. Simulated annealing is based on similarities that Kirkpatrick found between statistical mechanics and combinatorial optimization. In statistical mechanics, a configuration of a physical system is described by the positions that the atoms have at a certain temperature. Each such possible arrangement has a weighted probability factor given by the Boltzmann factor exp(−E{ri}/kB T), where ri is a given atomic configuration, E{ri} is the energy associated with that configuration, kB is Boltzmann's constant and T the temperature [14]. A fundamental question in physical systems is what happens to the material when it reaches a low temperature. The simple fact of reaching a low temperature does not guarantee that the material gets to its ground state, the one with the lowest energy. Instead, a careful annealing process is used: the solid is melted, and then the temperature is decreased slowly, allowing the system to reach thermodynamic equilibrium at that temperature; the process is repeated iteratively until the fluid solidifies. In computing, that same idea is simulated thanks to the work of Metropolis [25], who proposed an algorithm that can be used to simulate a group of atoms at equilibrium at a particular temperature. The algorithm works by giving a small random displacement to an atom at each step and calculating the change in energy ΔE; if ΔE is negative or zero, the change is accepted, whereas if it is positive it is accepted with probability P(ΔE) = exp(−ΔE/kB T). This procedure can be applied to optimization problems by replacing the energy with a cost function, the atomic configuration with a state in the problem search space, and the atom displacement with a function generating neighbor states from the current state. Some work has been done analyzing the convergence of the simulated annealing algorithm; [4] shows the necessary conditions for the SA algorithm to converge, along with a summary of some other convergence proofs. This is done by giving a mathematical representation of the algorithm as a Markov chain and analyzing convergence in probability rather than absolute convergence.

The simulated annealing algorithm is composed of the following elements:

– Cooling schedule: Simulated annealing starts with a high temperature that is gradually decreased using the cooling schedule. This schedule dictates how the temperature T is updated at each step and has an important impact on the algorithm's performance.
– Generation of neighbors of the current state: A function is required to create neighbor states given a current state; the energy change of a neighbor is computed and analyzed to decide whether it is accepted as the next current state.
– Cost function: This is a problem-specific function that serves as a measure of the energy. In the general case, this is the function that the algorithm is trying to optimize.
– Acceptance probability: This probability distribution determines which states are to be accepted.

The initial temperature and the cooling schedule of SA have important effects on both the optimization time and the best solution found by the algorithm. The work in [12] presents the drawbacks of increasing the initial temperature or having a slower rate of cooling. A higher initial temperature may lead to an optimal solution, but the optimization time of the algorithm increases accordingly. On the other hand, a low initial temperature and a faster rate of cooling decrease the optimization time, but may give a suboptimal solution. The results presented in [12] show evidence that choosing an initial temperature is not a straightforward process, and there is no deterministic approach for this purpose.

The simulated annealing algorithm can be represented with the pseudo-code shown in Algorithm 1.

Algorithm 1 Simulated annealing
 1: procedure SIMULATED-ANNEALING returns a state sk
 2:   inputs: T0, initial temperature
 3:     J, cost function
 4:     s0, initial state
 5:     temp-schedule, cooling schedule
 6:     neighbor, neighbor state function
 7:   T ← T0
 8:   sk ← s0
 9:   for t = 1 to tmax do
10:     sk+1 ← neighbor(sk)
11:     ΔE ← J(sk+1) − J(sk)
12:     if min(1, e^(−ΔE/T)) ≥ rand(0, 1) then
13:       sk ← sk+1
14:     end if
15:     T ← temp-schedule(t)
16:   end for
17: end procedure

The pseudo-code can be explained as follows. Initially, we assign to the current temperature T an initial value T0 and to the current state sk an initial value s0. Then we loop from iteration t = 1 to the maximum number of iterations tmax, and at each step we generate a neighbor of the current state using a neighbor generating function neighbor(sk). We then calculate the change in energy ΔE as the difference between the cost function value of the candidate state sk+1 and that of the current state sk. If the change is negative, the candidate improves (i.e. minimizes) the objective function value; since ΔE is negative in that case, e^(−ΔE/T) is greater than 1, and we assign the new state to the current state. Otherwise, we accept it with probability e^(−ΔE/T). At the end of each step, we update the current temperature using the temperature update function temp-schedule(t).
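As a concrete companion to Algorithm 1, the following Python sketch implements the same loop. It is only an illustration: the toy cost function, Gaussian neighbor function and geometric cooling schedule in the usage example are placeholder choices, not the settings used in this paper.

```python
import math
import random

def simulated_annealing(J, s0, T0, neighbor, temp_schedule, t_max=1000):
    """Minimal sketch of Algorithm 1 (generic simulated annealing)."""
    T, sk = T0, s0
    for t in range(1, t_max + 1):
        sk1 = neighbor(sk)                      # candidate state
        dE = J(sk1) - J(sk)                     # change in "energy" (cost)
        # Always accept improvements; accept worse states with prob. e^(-dE/T)
        if dE <= 0 or math.exp(-dE / T) >= random.random():
            sk = sk1
        T = temp_schedule(t)                    # cooling schedule
    return sk

# Placeholder usage (illustrative only):
J = lambda x: (x - 3.0) ** 2                     # toy cost function
neighbor = lambda x: x + random.gauss(0.0, 0.5)  # Gaussian step
cooling = lambda t: 75.0 * 0.95 ** t             # geometric cooling from T0 = 75
print(simulated_annealing(J, 0.0, 75.0, neighbor, cooling))
```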
3.2. Adaptive simulated annealing

Adaptive simulated annealing (ASA) is an algorithm proposed by Ingber [10] that is based on simulated annealing. This adaptive version of simulated annealing relies on random importance sampling of the parameter space. Ingber [10] remarks that in optimization problems there are different parameters with varying finite ranges and with different annealing-time-dependent sensitivities; this serves as a motivation for adaptive simulated annealing, where different annealing schedules are kept to address the different sensitivities corresponding to each parameter. Keeping individual temperatures for each parameter, re-annealing, and generating new parameters based on the current temperature are some characteristics of ASA.

The motivation for using adaptive simulated annealing for the tuning of PID controllers is precisely the fact that changing the different parameters of the PID controller by the same magnitude can have very different effects on the system output; since adaptive simulated annealing maintains temperatures and cooling schedules specifically for each parameter, this technique seems suitable for this optimization problem.

The pseudo-code shown in Algorithm 2 illustrates the differences between Simulated Annealing and Adaptive Simulated Annealing.

Algorithm 2 Adaptive simulated annealing
 1: procedure ADAPTIVE-SIMULATED-ANNEALING returns a state sk
 2:   inputs: T0, initial temperature vector
 3:     J, cost function
 4:     s0, initial state vector
 5:     temp-schedule, cooling schedule
 6:     neighbor, neighbor state function
 7:     k, temperature update parameter vector
 8:     tmax, maximum number of iterations
 9:     rt, re-annealing interval
10:     c, objective function partial derivatives vector
11:   T ← T0
12:   sk ← s0
13:   for t = 1 to tmax do
14:     sk+1 ← neighbor(sk)
15:     ΔE ← J(sk+1) − J(sk)
16:     if min(1, e^(−ΔE/T)) ≥ rand(0, 1) then
17:       sk ← sk+1
18:     end if
19:     if t ≡ 0 (mod rt) then
20:       for i = 1 to n do ci ← ∂J(sk)/∂αi
21:       end for
22:       for i = 1 to n do ki ← |ln(T0i max(c)/(Ti ci))|
23:       end for
24:     end if
25:     T ← temp-schedule(t, k)
26:   end for
27: end procedure

The main difference is that we keep track of a temperature vector T with one entry for each of the n parameters on which the cost function depends, instead of a single scalar temperature T for all parameters. We also keep track of an update-parameter vector k, which is later used to update the temperature of each parameter of the cost function.

Each step of the Adaptive Simulated Annealing algorithm is performed in the same way as in the simple version, except that, when the number of iterations t is a multiple of the re-annealing interval rt, we obtain the partial derivatives of the cost function with respect to each parameter αi and store them in the vector c. Then we update the temperature update-parameter vector k by assigning to each of its i entries (n in total) the value |ln(T0i max(c)/(Ti ci))|, where Ti is the temperature of parameter i and max(c) is the maximum element of vector c.

Finally, at the end of each iteration we update the temperature vector T using the update-parameter vector k and the iteration number t.
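The re-annealing step (lines 20–23 of Algorithm 2) can be sketched as below. Since in PID tuning the cost is only available through simulation, the partial derivatives are estimated here by forward differences; this, and all names, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def reanneal_factors(J, s, T, T0, eps=1e-4):
    """Sketch of Algorithm 2, lines 20-23: recompute the temperature-update
    factors k_i from the sensitivities of the cost J at the current state.

    J  : cost function of the parameter vector (e.g. ITSE of the closed loop)
    s  : current state (kp, ki, kd)
    T  : vector of current per-parameter temperatures
    T0 : vector of initial per-parameter temperatures
    """
    s = np.asarray(s, dtype=float)
    base = J(s)
    # c_i = dJ/d(alpha_i), estimated numerically (assumption: no analytic gradient)
    c = np.array([(J(s + eps * np.eye(len(s))[i]) - base) / eps
                  for i in range(len(s))])
    c = np.maximum(np.abs(c), 1e-12)             # guard against zero sensitivity
    # k_i = | ln( T0_i * max(c) / (T_i * c_i) ) |
    return np.abs(np.log(np.asarray(T0) * c.max() / (np.asarray(T) * c)))
```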
3.3. PID controller

A PID controller can be viewed as an extreme form of a phase lead-lag compensator having one pole at the origin and a second one at infinity [2]. The transfer function corresponding to this type of controller is the following:

    G(s) = kp + ki/s + kd s    (2)

The PID controller is typically applied to closed-loop control systems in the configuration shown in Fig. 1.

Fig. 1. Block diagram of a closed loop control system with a PID controller. R(s) represents the input of the system (usually a unit step), E(s) the error signal, C(s) the system output, and the G(s) block the system to be controlled.
Annealing algorithm in the same way as in the simple
– Steady-state error: The difference between the
version; except that, when the number of iterations t
desired output and the final value of the system
is a multiple of the re-annealing interval rt we obtain
after the transient response.
the partial derivatives of the cost function with respect
to each parameter αi , and store them in the vector c. Increasing each of the three gains has different ef-
Then we update the temperature update-parameter vec- fects on the output of the controlled system [2]:

– Increasing kp: Decreases the rise time, increases the overshoot, slightly increases the settling time, decreases the steady-state error and degrades stability.
– Increasing ki: Slightly decreases the rise time, increases the overshoot, increases the settling time, largely decreases the steady-state error and degrades stability.
– Increasing kd: Slightly decreases the rise time, decreases the overshoot and the settling time, slightly changes the steady-state error and improves stability.

There are many methods for assessing the performance of the output of a system; those methods are useful for constructing a cost function to be used as part of optimization techniques. The following measures serve as single-objective cost functions of the performance of a system and are based on the error: the Integral of Absolute Error (IAE), the Integral of Squared Error (ISE), the Integral of Time multiplied by the Absolute Error (ITAE) and the Integral of Time multiplied by the Squared Error (ITSE). They are calculated, respectively, as:

    IAE = ∫₀^∞ |e(t)| dt    (3)

    ISE = ∫₀^∞ e²(t) dt    (4)

    ITAE = ∫₀^∞ t |e(t)| dt    (5)

    ITSE = ∫₀^∞ t e²(t) dt    (6)

Cost functions that combine different objectives (settling time, overshoot, steady-state error, etc.) are also used as multi-objective performance indices.
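On a finite simulation horizon these integrals can be approximated numerically from the sampled error of a step response; a small sketch (assuming numpy arrays t and y, for instance from the closed-loop sketch in Section 3.3) is given below.

```python
import numpy as np

def error_indices(t, y, y_ref=1.0):
    """Sketch: approximate equations (3)-(6) by trapezoidal integration of the
    sampled error e(t) = y_ref - y(t) over the simulated horizon."""
    e = y_ref - np.asarray(y)
    return {"IAE":  np.trapz(np.abs(e), t),
            "ISE":  np.trapz(e ** 2, t),
            "ITAE": np.trapz(t * np.abs(e), t),
            "ITSE": np.trapz(t * e ** 2, t)}
```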

4. Methodology

4.1. Tested systems

Åström et al. [3] present a series of benchmark systems commonly found in industry and useful for testing several PID automatic tuning techniques. The application of tuning rules to known process responses and known plant models involves particular conditions, and these conditions make the design methods and tuning rules non-general. This is the main reason why [3] collects systems which are suitable for testing PID controllers. Four families of those systems are controlled here with a PID controller tuned using adaptive simulated annealing.

The first group of plants has multiple equal poles and includes first and second order systems. For higher values of α the system has a slow response:

    G1(s) = 1/(s + 1)^α,    α = 1, 2, 3, 4, 8    (7)

The second group of benchmark systems consists of fourth order systems with poles whose spacing is determined by the parameter α [28]:

    G2(s) = 1/((1 + s)(1 + αs)(1 + α²s)(1 + α³s)),    α = 0.1, 0.2, 0.5    (8)

Third order systems with a zero located at 1/α and three equal poles constitute the third group:

    G3(s) = (1 − αs)/(s + 1)³,    α = 0.1, 0.2, 0.5, 1, 2    (9)

Finally, a group of second order systems with a one second time delay is also tested:

    G4(s) = e^(−s)/(1 + sα)²,    α = 0.1, 0.2, 0.5, 1, 2, 5, 10    (10)
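For reference, the three delay-free families can be written as numerator/denominator coefficient pairs as in the sketch below (plain numpy, illustrative only). G4 contains a pure one-second delay, so a rational simulation of it would additionally require, for example, a Padé approximation of e^(−s); that detail is not specified in the paper and is left out here.

```python
import numpy as np

def plant_g1(alpha):
    """G1(s) = 1 / (s + 1)^alpha, for integer alpha."""
    return [1.0], np.poly([-1.0] * int(alpha))

def plant_g2(alpha):
    """G2(s) = 1 / ((1 + s)(1 + alpha s)(1 + alpha^2 s)(1 + alpha^3 s))."""
    den = [1.0]
    for tau in (1.0, alpha, alpha ** 2, alpha ** 3):
        den = np.polymul(den, [tau, 1.0])       # factor (1 + tau*s)
    return [1.0], den

def plant_g3(alpha):
    """G3(s) = (1 - alpha s) / (s + 1)^3."""
    return [-alpha, 1.0], np.poly([-1.0] * 3)

num, den = plant_g1(4)                          # e.g. the G1, alpha = 4 benchmark
```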

4.2. Adaptive simulated annealing

The adaptive version of SA used to tune the PID controllers keeps individual temperatures for each parameter and independent cooling schedule parameters ci. The following components were chosen for the ASA algorithm:

– Cooling schedule. The temperature is updated after each step according to the following relation:

    Ti = T0 α^ci    (11)

  α was selected to be 0.95; ci and Ti represent the cooling schedule parameter and the current temperature of parameter i, respectively, and T0 is the initial temperature.

– Neighbor generation. For each parameter i a normally distributed random number λi is generated; neighbor states are then generated using temperature-based steps according to the following function:

    Xi = Xi + Ti λi/‖λ‖,  with ‖λ‖ = (Σ_{k=1}^n λk²)^(1/2)    (12)

  Xi represents the magnitude of parameter i of the search space and Ti the temperature of this parameter. Each normally distributed random number is divided by the Euclidean norm of all the random numbers and then multiplied by the temperature.

– Acceptance function. States for which the energy change is positive are accepted with a Boltzmann probability distribution, where T is taken as the maximum temperature over all parameters and ΔE is the energy change (the difference between the last accepted value of the evaluating function and the candidate value of this function):

    P(ΔE) = e^(−ΔE/T)    (13)

– Re-annealing. One of the features of adaptive simulated annealing is that after a specific number of steps the temperature is raised again, independently for each parameter of the search space. For this experiment, different re-annealing intervals were tested (as described below), and 175 was found to be the optimal value for those systems. To adjust to the different sensitivities of each parameter, the partial derivatives of the current value of the cost function with respect to each parameter are calculated; then the cooling schedule parameter ci is obtained from the temperature ratio and a sensitivity ratio between each parameter and the maximum sensitivity calculated. This re-annealing schedule was proposed by Ingber [10]:

    si = ∂L/∂αi    (14)

    ci = |ln(T0 max(s)/(Ti si))|    (15)

  where L is the current cost function, αi the current value of parameter i, si the partial derivative of L with respect to αi, T0 the initial temperature and Ti the current temperature of parameter i.

The optimization algorithm does not consider any bounds for the parameters, and an ideal closed-loop control model with no disturbance and no derivative filter is used. Since every application may have very different requirements (for instance, some may prioritize overshoot over settling time), the ITSE cost function was selected to give a general improvement of the system.

4.3. ASA and SA benchmark system tests

The PID tuning of the 20 plants described earlier was tested using the adaptive simulated annealing algorithm. Different initial temperature values (75°C, 100°C and 125°C) were tested with re-annealing interval values of 75, 100, 125, 150 and 175 iterations, giving a total of 15 different configurations. The results were compared, and an initial temperature of 75°C in combination with a 175-iteration re-annealing interval provided the best performance overall.

To compare the performance of adaptive simulated annealing with the simple simulated annealing algorithm, i.e. without re-annealing, the next-state generation function shown in [31] was used (a sketch of this rule follows the list below):

– The next set of PID values x′ is calculated at each iteration as x′ = x + ηλ, where η is the scale parameter and λ a random perturbation with a Gaussian distribution.
– The scale parameter η is reduced at each iteration k proportionally with the same value α by which the temperature is decreased: η(k + 1) = α · η(k).
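A minimal sketch of this non-adaptive update rule is given below; the values of α and η in the example are illustrative.

```python
import numpy as np

def simple_sa_step(x, eta, alpha=0.9, rng=None):
    """Sketch of the next-state rule from [31] described above:
    x' = x + eta * lambda with lambda ~ N(0, 1) per gain, and the scale
    parameter reduced geometrically, eta(k+1) = alpha * eta(k)."""
    rng = rng or np.random.default_rng()
    x_next = np.asarray(x, dtype=float) + eta * rng.standard_normal(len(x))
    return x_next, alpha * eta

gains, eta = simple_sa_step([1.0, 1.0, 1.0], eta=100.0)   # one illustrative step
```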

This simple simulated annealing version was tested with initial temperatures of 50°C, 100°C and 150°C and with scale parameter initial values η of 1, 25, 50, 100, 125 and 150, making a total of 18 different settings. Low initial temperature values, 50°C for instance, showed worse results for the cost function of the G1 systems with α = 1, 2, 3, 4, but provided better results for most of the remaining plants in comparison with the other two (higher) initial temperatures tested. This was particularly noticeable for the G3 system with α = 2 and the G4 systems with α = 0.1, 0.2, 0.5. The results for every system showed different behaviors with distinct initial temperatures and scale parameters: some cost functions decreased while lowering the initial values of those parameters, and some cost functions decreased while increasing them. Therefore, among the 18 combinations tested, the one that provided the best overall results for the 20 systems was selected. The temperature scale parameter was set to 0.9. An initial temperature of 50°C along with a scale parameter initial value of η = 100 provided the best results.

Finally, this same simulated annealing algorithm was tested while keeping the scale parameter constant, i.e. not decreasing it after each iteration. Initial temperatures of 75°C, 100°C and 150°C along with scale parameter values of 1, 2, 5, 10, 20 and 50 were tested as well. As with the simple simulated annealing algorithm, the results varied when increasing or decreasing the parameters (initial temperature and re-annealing interval): decreasing the initial temperature worsened the results of all of the G1 systems, and decreasing the re-annealing interval improved the results of all the G1 systems while worsening the results for most of the G3 and G4 systems. Again, the configuration that provided the best overall results for the 20 systems was selected: an initial temperature of 75°C with a scale parameter η = 1 showed the best cost function values and, in fact, outperformed the simple simulated annealing version with scale parameter reduction over time.

Both the best adaptive simulated annealing version (T0 = 75°C, re-annealing after 175 steps) and the best simple simulated annealing configuration (T0 = 75°C, constant scale parameter η = 1) were run 10 times across the 20 different benchmark systems to compare their performance. Table 1 shows the minimum, maximum, and average values of the 10 simulations. The last column of Table 1 shows the percentage decrease in the cost function (ITSE) of the average simple simulated annealing algorithm in relation to that obtained by the adaptive simulated annealing algorithm.

Three systems, G1 with α = 4, G3 with α = 2 and G4 with α = 5, were tuned using the Ziegler–Nichols sustained oscillations method for comparison; the results are shown in Figs 2, 3 and 4.
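For completeness, the textbook Ziegler–Nichols sustained-oscillation rule computes the PID gains from the ultimate gain Ku (the proportional gain at which the loop oscillates) and the oscillation period Tu; the sketch below states the standard rule and is not taken from the paper.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classical Ziegler-Nichols (sustained oscillation) PID rule."""
    kp = 0.6 * Ku
    ki = 1.2 * Ku / Tu        # kp / (Tu / 2)
    kd = 0.075 * Ku * Tu      # kp * (Tu / 8)
    return kp, ki, kd
```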
Table 1
Resulting cost function (ITSE) value after tuning 20 benchmark systems with simple simulated annealing (T0 = 75°C, constant scale parameter
η = 1) and adaptive simulated annealing (T0 = 75°C, re-annealing after 175 steps). The systems were tuned 30 times with those configurations;
and the table shows minimum, maximum and average of the simulations
System Minimum ASA Minimum SA Maximum ASA Maximum SA Average ASA Average SA %Decrease SA->ASA
G1 , α = 1 3.28126E−14 3.16306E−10 1.51957E−08 5.38914E−06 2.25162E−09 7.69002E−07 99.71%
G1 , α = 2 6.48827E−08 2.29178E−06 4.17797E−07 2.60870E−06 1.47328E−07 2.43776E−06 93.96%
G1 , α = 3 0.056711085 0.067037751 0.059249195 0.067040784 0.057609447 0.067039283 14.07%
G1 , α =4 0.806249949 0.809537928 0.811887485 0.835697928 0.80751714 0.818732495 1.37%
G1 , α =8 11.5815742 11.67690248 11.58355417 13.23023769 11.58219552 12.32414603 6.02%
G2 , α = 0.1 7.99563E−05 7.99311E−05 8.48778E−05 7.99317E−05 8.07330E−05 7.99313E−05 −1.00%
G2 , α = 0.2 0.00153637 0.0012547 0.0020361 0.00153581 0.00162348 0.001481759 −9.56%
G2 , α = 0.5 0.06985861 0.070018449 0.071761363 0.071526088 0.070550895 0.070548342 −0.0036%
G3 , α = 0.1 0.194227458 0.194396184 0.205095509 0.197156875 0.196737888 0.195337366 −0.72%
G3 , α = 0.2 0.31948517 0.319921568 0.32699106 0.332193392 0.320412495 0.324255663 1.19%
G3 , α = 0.5 0.799883474 0.811094646 0.800839841 1.006062687 0.800090735 0.873146945 8.37%
G3 , α =1 1.908013685 1.956160971 1.908733605 2.78456457 1.908137842 2.341091507 18.49%
G3 , α =2 5.082382173 5.421020039 5.082693324 20.4012907 5.08250986 10.23233222 50.33%
G4 , α = 0.1 0.622449 0.691723394 0.624340079 2.80732558 0.623246649 1.376029046 54.72%
G4 , α = 0.2 0.660632617 0.668969013 0.660928656 1.546021432 0.660734712 0.948845043 30.36%
G4 , α = 0.5 0.828923492 1.01885133 0.829058133 1.552539677 0.828983917 1.216073093 31.83%
G4 , α =1 0.98936213 0.999201726 0.990111337 1.405266364 0.989490066 1.143018531 13.43%
G4 , α =2 1.136631736 1.144077334 1.138747002 1.247442467 1.137018133 1.183792936 3.95%
G4 , α =5 1.269052208 1.269703702 1.302555452 1.27387108 1.275305511 1.271351474 −0.31%
G4 , α = 10 1.320642609 1.320011246 1.428377447 1.320828552 1.351677884 1.320310036 −2.38%

Fig. 2. System G1, α = 4, tuned with Ziegler–Nichols and with adaptive simulated annealing using one arbitrary run. The ASA results show less overshoot and almost the same settling time and steady-state error as Ziegler–Nichols. The output y(t) represents the ratio between the actual output and the reference input.

Fig. 3. System G3, α = 2, tuned with Ziegler–Nichols and with adaptive simulated annealing using one arbitrary run. The ASA results show almost the same settling time, steady-state error and overshoot as Ziegler–Nichols, but less total error. The output y(t) represents the ratio between the actual output and the reference input.

4.4. Comparison with other methods

In [26], particle swarm optimization (PSO) is used to find the optimal values of the PID controller of a DC motor. Four different cost functions (ISE, IAE, ITSE, ITAE) are used to find the gains of the PID controller with the PSO algorithm, and the three parameters of the controller are limited to the range of 0 to 100.

The following system represents the DC motor model in the Laplace domain:

    G(s) = 1/(s³ + 9s² + 23s + 15)    (16)

In this section, we use the adaptive simulated annealing algorithm to tune the same controller. The search space is limited to the same range as in [26], and the integration interval used to calculate the four cost functions is set from 0 to 10 seconds. The ASA parameters selected were: re-annealing interval, 175 iterations; initial temperature, 75°C; and maximum number of iterations, 1500. Table 2 shows the result of the comparison. The PID parameters reported in [26] for each objective function were taken, and the cost function was calculated under the same conditions reported in that work in order to compare it with our algorithm.

As mentioned, the four configurations were run for a maximum of 1500 iterations, and the objective function value after 100, 500, 1000 and 1500 iterations is shown in Table 2.
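A sketch of the cost evaluated in this comparison is shown below: the gains are clipped to the 0–100 range used in [26], the unity-feedback loop with the DC motor model of equation (16) is simulated over 0–10 s, and the ITSE index is returned. The use of scipy and the specific numerical settings are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy import signal

DC_NUM, DC_DEN = [1.0], [1.0, 9.0, 23.0, 15.0]   # G(s) of equation (16)

def itse_dc_motor(gains, t_end=10.0):
    """Sketch: ITSE of the unity-feedback PID loop for the DC motor model,
    with the gains clipped to the 0-100 search range used in [26]."""
    kp, ki, kd = np.clip(gains, 0.0, 100.0)
    num = np.polymul([kd, kp, ki], DC_NUM)        # numerator of C(s) G(s)
    den = np.polymul([1.0, 0.0], DC_DEN)          # denominator of C(s) G(s)
    t = np.linspace(0.0, t_end, 2000)
    t, y = signal.step(signal.TransferFunction(num, np.polyadd(den, num)), T=t)
    e = 1.0 - y                                   # error against the unit step
    return np.trapz(t * e ** 2, t)

# Checking the ITSE gains reported in Table 8 (illustrative use):
print(itse_dc_motor([99.9273, 90.4924, 63.8036]))
```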

Fig. 4. System G4, α = 5, tuned with Ziegler–Nichols and with adaptive simulated annealing using one arbitrary run. The ASA results show less overshoot, less settling time and the same steady-state error compared to Ziegler–Nichols. The output y(t) represents the ratio between the actual output and the reference input.

The simulations were run on a machine with an Intel Core i5 2.4 GHz processor. The running time for each configuration was as follows:

– ISE: 70.85 seconds
– IAE: 72.57 seconds
– ITSE: 69.38 seconds
– ITAE: 73.97 seconds

On average the algorithm runs 1500 iterations in 70 seconds, which is roughly equivalent to 21 iterations per second. ASA significantly improved the results obtained with PSO within 100 iterations (roughly 5 seconds according to the calculation above). [26] does not specify the running time used to obtain its results, but mentions that a total of 50 PSO iterations were used.

Table 2
Comparison of the best cost function found using a particle swarm optimization approach (PSO) and adaptive simulated annealing (ASA)
Cost function Best cost function value (PSO) Best cost after 100 iterations (ASA) Best cost after 500 iterations (ASA) Best cost after 1000 iterations (ASA) Best cost after 1500 iterations (ASA)
ISE 0.312 0.163 0.1038 0.1034 0.1032
IAE 1.0043 0.3842 0.2627 0.2535 0.2535
ITSE 0.0976 0.0139 0.0135 0.0135 0.0135
ITAE 1.7096 0.2067 0.1187 0.0924 0.0743

4.5. Systems with constant disturbance

Four systems, G1 with α = 3, G2 with α = 0.5, G3 with α = 2 and G4 with α = 2, were set with a unit step disturbance (nominal input) and with a zero input reference. A PID controller was tuned with adaptive simulated annealing to correct the error due to the constant disturbance and to take the output of the system to zero. Figure 5 shows the output of these systems with the PID parameters found for each one. The three parameters were obtained by using the same configuration that was used to tune the PID controllers for the systems in Table 1.

Figure 6 shows the configuration of a system with a PID controller used to correct the error due to a disturbance signal D(s) and with a reference input signal R(s).
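Assuming the unit step disturbance enters at the plant input (the paper only states a "nominal input" step, so this is an assumption), the output due to the disturbance with R(s) = 0 is Y(s)/D(s) = G(s)/(1 + C(s)G(s)). A short sketch of this configuration, with illustrative gains for G1 with α = 3, is given below.

```python
import numpy as np
from scipy import signal

def disturbance_response(kp, ki, kd, plant_num, plant_den, t_end=20.0):
    """Sketch of the Fig. 6 configuration with R(s) = 0 and a unit step
    disturbance at the plant input: Y(s)/D(s) = G / (1 + C G)."""
    pid_num, pid_den = [kd, kp, ki], [1.0, 0.0]
    num = np.polymul(plant_num, pid_den)                     # G over the common denominator
    den = np.polyadd(np.polymul(plant_den, pid_den),
                     np.polymul(plant_num, pid_num))         # 1 + C G
    t = np.linspace(0.0, t_end, 4000)
    return signal.step(signal.TransferFunction(num, den), T=t)

# Illustrative gains for G1, alpha = 3; the output should decay back to zero.
t, y = disturbance_response(46.7, 551.2, 1479.4, [1.0], np.poly([-1.0] * 3))
```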

Fig. 5. One system of each of the four families of plants discussed earlier. Each system was set with an input reference of 0 and with a unit step disturbance. The PID was tuned with ASA to correct the error due to this disturbance. When analyzing the disturbance rejection of a control system, a reference input (set point) of zero is typically used to limit the analysis to just disturbance rejection.

Fig. 6. Block diagram of a closed loop control system with a PID controller and disturbance. R(s) represents the input of the system (zero in this case), E(s) the error signal, C(s) the system output, the G(s) block the system to be controlled and D(s) the disturbance signal.

5. Results

This section reports the results generated after running ASA and comparing it against particle swarm optimization. The reason for choosing PSO for the comparison is the interest in comparing ASA against another optimization-based approach, and the fact that PSO reported interesting results according to [26]. The results (minimum, maximum and average cost function values) after running the simulations with 30 repetitions for the 20 different benchmark systems are shown in Tables 1 and 3–5. For each system we report the three gains and the respective system characteristics after tuning with ASA (percentage overshoot, steady-state error, and settling time). Three of those systems were selected for comparison with the Ziegler–Nichols algorithm by plotting the system response after being tuned by both methods. To compare ASA with PSO, a DC motor system representation was tuned using both methods, and the results are described as well. Finally, plots of four systems under constant disturbance are shown before and after PID tuning with the technique described in this paper.

To illustrate the performance of the ASA algorithm graphically, three different systems (G1 with α = 4, G3 with α = 2 and G4 with α = 5) were tuned using the Ziegler–Nichols method; after obtaining the three gains, the output of each system was plotted against that obtained by tuning it with adaptive simulated annealing. The Ziegler–Nichols method can be used to tune most systems, but we selected just three of the total of 20 to show the results of this widely used algorithm and of ASA. Figures 2, 3 and 4 show the step input response of each system along with a plot of the best overall cost function found at each step. For the G1 with α = 4 case, our proposed algorithm slightly improved the settling time and significantly improved the percentage overshoot of the system; the ASA algorithm required just 800 iterations, as shown in Fig. 2(b), to nearly find the final parameters. For the G3 with α = 2 system, the ASA approach improved the settling time, requiring around 1000 iterations to come close to the final parameters (those with the best cost function found by ASA); those results are shown in Fig. 3. Finally, after tuning G4 with α = 5 with ASA and Ziegler–Nichols, the former greatly improved both the overshoot and the settling time in contrast to the latter, as shown in Fig. 4, requiring just 1000 iterations to get very close to the optimal values found by ASA.

Table 6 shows the results of running each system with both ASA and SA on a computer with an Intel Core i5-6300HQ CPU at 2.3 GHz. In general, SA can run slightly more iterations per second than ASA, given that in the ASA algorithm the gradient for each parameter has to be calculated at every re-annealing interval.

Table 1 shows the minimum, maximum, and average cost function (ITSE) values for the 20 benchmark systems after SA and ASA tuning. The last column shows the percentage decrease in the cost function of the average simple simulated annealing algorithm in relation to that obtained by the adaptive simulated annealing algorithm. The results show that for 14 of the 20 systems tested there was a decrease in the cost function when using ASA with respect to SA; for the remaining six systems, where the average cost function was greater with ASA, the percentage increase of using ASA with respect to SA was lower than 10% in all cases.

The desired system characteristics vary among different applications. In some systems where speed is the major factor, it is preferable to improve the settling time while sacrificing percentage overshoot; in other applications where avoiding peaks is the priority, the percentage overshoot can be improved at the expense of the settling time. Decreasing the percentage overshoot, the steady-state error and the settling time all have the effect of decreasing the ITSE objective function, which is why all of the systems were evaluated considering only this cost function, to provide greater generality.

The average, minimum and maximum values of the system characteristics and their corresponding kp, ki and kd gains after ASA tuning are shown in Table 4, Table 5 and Table 7, respectively. Those results show that the algorithm produced varied results regarding the gains and system characteristics while keeping a similar cost function value. For instance, for G1 with α = 1, the minimum percentage overshoot was 8.8818E−14% and the maximum percentage overshoot was 2971.307%; however, both the minimum and the maximum cost function were below 1.0E−5. Table 3 shows the standard deviation obtained for each system after running the simulation 30 times. Eight of the 20 results (the rows from G3 with α = 0.5 to G4 with α = 2) show a much lower standard deviation for the Adaptive Simulated Annealing simulations, suggesting more consistent results for those configurations with ASA.

Table 3
Resulting standard deviation of the cost function (ITSE) value after tuning 20 benchmark systems with simple simulated annealing (T0 = 75°C, constant scale parameter η = 1) and adaptive simulated annealing (T0 = 75°C, re-annealing after 175 steps). The systems were tuned 30 times with those configurations, and the table shows the standard deviation over all the simulations
System Standard deviation ASA Standard deviation SA
G1, α = 1 3.50E−09 1.33E−06
G1, α = 2 7.06E−08 7.70E−08
G1, α = 3 5.15E−04 8.43E−07
G1, α = 4 1.44E−03 6.01E−03
G1, α = 8 4.44E−04 4.64E−01
G2, α = 0.1 9.78E−07 1.27E−10
G2, α = 0.2 1.13E−04 9.82E−05
G2, α = 0.5 6.24E−04 3.27E−04
G3, α = 0.1 3.11E−03 6.58E−04
G3, α = 0.2 1.77E−03 2.59E−03
G3, α = 0.5 2.35E−04 3.92E−02
G3, α = 1 1.85E−04 0.21160
G3, α = 2 8.56E−05 3.5226
G4, α = 0.1 4.34E−04 0.4996
G4, α = 0.2 7.34E−05 0.21055
G4, α = 0.5 3.69E−05 0.1314
G4, α = 1 1.35E−04 9.38E−02
G4, α = 2 4.21E−04 2.38E−02
G4, α = 5 7.44E−03 1.07E−03
G4, α = 10 0.0258 1.86E−04

Table 4
Resulting average characteristics and PID parameters of 20 different benchmark systems after being tuned with adaptive simulated annealing (T0 = 75°C, re-annealing after 175 steps). The systems were tuned 30 times with those configurations, and the table shows the average of the 30 results
System Average percentage overshoot Average abs. value of steady-state error Average settling time (seconds) Average kp Average ki Average kd
G1 , α =1 406.860% 3.55E−15 0.001716 177.382 158.082 −1.005
G1 , α =2 0.003% 3.33E−17 0.004163 1551.193 803.636 1012.011
G1 , α =3 87.987% 3.87E−05 2.60477 46.6663 551.207 1479.417
G1 , α =4 14.116% −9.10E−09 12.0339 2.345 0.933 4.201
G1 , α =8 11.419% 1.50E−04 24.94153 0.979 0.218 2.759
G2 , α = 0.1 29.636% 1.67E−16 0.09581 123.157 125.494 16.209
G2 , α = 0.2 25.488% 1.07E−16 0.46656 25.928 30.705 6.986
G2 , α = 0.5 16.606% 7.03E−17 3.94392 4.216 3.234 3.014
G3 , α = 0.1 19.646% 4.23E−11 6.977 4.334 2.193 6.251
G3 , α = 0.2 15.422% −2.38E−11 7.9128 3.093 1.449 4.094
G3 , α = 0.5 11.083% 1.84E−11 7.8852 1.841 0.810 2.226
G3 , α =1 6.769% −1.74E−11 9.2084 1.166 0.493 1.332
G3 , α =2 6.826% 8.48E−10 10.7682 0.703 0.284 0.769
G4 , α = 0.1 21.047% 6.59E−14 5.7866 0.531 0.870 0.127
G4 , α = 0.2 17.174% 3.55E−13 4.8688 0.623 0.881 0.237
G4 , α = 0.5 10.475% −8.35E−13 5.99009 0.902 0.792 0.523
G4 , α =1 10.686% −1.33E−12 7.42301 1.414 0.723 1.230
G4 , α =2 11.864% −9.01E−11 8.6794 2.485 0.669 3.581
G4 , α =5 14.116% 1.90E−06 7.4767 5.859 0.635 18.328
G4 , α = 10 16.188% −8.52E−05 7.7835 12.294 0.632 68.131

Table 5
Resulting minimum characteristics and PID parameters of 20 different benchmark systems after being tuned with adaptive simulated annealing (T0 = 75°C, re-annealing after 175 steps). The systems were tuned 30 times with those configurations, and the table shows the minimum of the 30 results for each system
System Minimum percentage overshoot Minimum abs. value of steady-state error Minimum settling time (seconds) Minimum kp Minimum ki Minimum kd
G1 , α = 1 8.8818E−14% −7.11E−15 0.001715 −1579.614 −1445.306 −1.508
G1 , α = 2 1.8969E−04% −2.22E−16 0.003414 1021.561 533.561 729.357
G1 , α = 3 82.500% −6.39E−04 2.54757 29.743 203.116 629.518
G1 , α =4 12.956% −1.81E−08 9.33244 2.306 0.879 3.898
G1 , α =8 10.800% −1.67E−04 24.8927 0.973 0.216 2.737
G2 , α = 0.1 24.789% −4.44E−16 0.069532 104.192 86.603 13.419
G2 , α = 0.2 18.036% −8.88E−16 0.3076 19.679 16.601 5.361
G2 , α = 0.5 13.539% −1.78E−15 2.64418 4.003 2.776 2.691
G3 , α = 0.1 14.907% −6.05E−11 6.33833 3.972 1.818 5.361
G3 , α = 0.2 13.393% −6.75E−11 7.6274 2.987 1.347 3.757
G3 , α = 0.5 9.846% −9.86E−12 7.466 1.810 0.794 2.190
G3 , α =1 6.327% −4.51E−11 9.1117 1.162 0.487 1.313
G3 , α =2 6.604% 7.41E−10 10.70126 0.700 0.283 0.766
G4 , α = 0.1 17.569% −4.57E−12 5.39284 0.513 0.854 0.124
G4 , α = 0.2 16.255% 2.30E−13 4.6374 0.618 0.870 0.233
G4 , α = 0.5 10.056% −1.02E−12 5.9823 0.895 0.786 0.519
G4 , α =1 10.125% −3.72E−12 7.39401 1.403 0.712 1.217
G4 , α =2 10.960% −6.77E−10 8.37078 2.453 0.644 3.509
G4 , α =5 9.719% −6.13E−06 7.16988 5.483 0.585 16.268
G4 , α = 10 7.184% −2.64E−04 5.6130 10.128 0.482 59.281

Table 6
Running time for a single run of each system using both ASA and SA, the number of iterations each algorithm ran, and the iterations per second for each algorithm and system. As in the previous experiments, each algorithm was set to run for 9000 iterations, stopping if no improvement was made after 1500 iterations
System ASA running time (seconds) SA running time (seconds) Number of iterations ASA Number of iterations SA Iterations per second ASA Iterations per second SA
G1 , α = 1 319.582 306.090 9000 9000 28.162 29.403
G1 , α =2 145.265 111.754 3779 2809 26.014 25.136
G1 , α =3 255.265 244.525 9000 9000 35.257 36.806
G1 , α =4 240.155 225.697 9000 9000 37.476 39.877
G1 , α =8 240.166 226.949 9000 9000 37.474 39.656
G2 , α = 0.1 1616.406 346.521 9000 9000 5.568 25.972
G2 , α = 0.2 286.405 263.584 9000 9000 31.424 34.145
G2 , α = 0.5 276.815 264.458 9000 9000 32.513 34.032
G3 , α = 0.1 254.976 230.426 9000 9000 35.297 39.058
G3 , α = 0.2 265.259 228.476 9000 9000 33.929 39.391
G3 , α = 0.5 270.337 267.995 9000 9000 33.292 33.583
G3 , α =1 262.206 238.182 9000 9000 34.324 37.786
G3 , α =2 273.076 221.663 9000 9000 32.958 40.602
G4 , α = 0.1 458.638 443.777 9000 9000 19.623 20.280
G4 , α = 0.2 408.957 381.264 9000 9000 22.007 23.606
G4 , α = 0.5 336.279 310.643 9000 9000 26.764 28.972
G4 , α =1 339.227 301.379 9000 9000 26.531 29.863
G4 , α =2 379.087 289.279 9000 9000 23.741 31.112
G4 , α = 5 340.711 278.141 9000 9000 26.415 32.358
G4 , α = 10 323.470 310.834 9000 9000 27.823 28.954

Table 2 shows the optimal values for each cost function found by using ASA after 100, 500, 1000 and 1500 iterations, along with the cost function value obtained by using PSO. Table 8 shows the PID values found for each cost function, the resulting overshoot and the settling time. After just 100 iterations, ASA obtained a better value than PSO for all four cost functions used to tune the DC motor system representation. The settling time was below 4 seconds and the percentage overshoot below 15% in all four cases.

As mentioned earlier, G1 with α = 3, G2 with α = 0.5, G3 with α = 2 and G4 with α = 2 were selected to test the tuning algorithm with a constant disturbance and a zero reference input. In all four cases, ASA tuning effectively rejected the disturbance, as shown in Figs 5(a), (b), (c) and (d). For the first two systems, the output reached zero in less than five seconds, while for the last two systems this happened in less than 20 seconds.

In [27], the authors report the design of a PID controller for an air heater temperature control system. The approach is based on the application of PSO. The authors performed simulations and, according to their results, the PSO-optimized PID controller is able to improve the performance when compared with the Ziegler–Nichols method as well as with GA in terms of the transient response.

6. Conclusions

The tests showed that the best adaptive simulated annealing version outperformed the best simple simulated annealing version. In particular, out of the 20 benchmark systems tested, for 14 of them the smallest average cost function was achieved through adaptive simulated annealing. For 10 of those systems, the percentage decrease with respect to the average simple simulated annealing cost function was greater than 5%, while in only one benchmark system was the percentage increase (where SA did better) greater than 5%. This best ASA version was also able to obtain a smaller cost function than that obtained by PSO in fewer than 100 iterations for the four different cost functions described earlier. The tests performed on four of the benchmark systems demonstrated that ASA can also be effectively used to tune PID controllers for plants with a constant disturbance.

Table 7
Resulting maximum characteristics and PID parameters of 20 different benchmark systems after being tuned with adaptive simulated annealing (T0 = 75°C, re-annealing after 175 steps). The systems were tuned 30 times with those configurations, and the table shows the maximum of the 30 results for each system
System Maximum percentage overshoot Maximum abs. value of steady-state error Maximum settling time (seconds) Maximum kp Maximum ki Maximum kd
G1 , α = 1 2971.307% 1.14E−13 0.00173 1686.658 1487.652 −0.570
G1 , α =2 0.00593% 4.44E−16 0.005526 1951.671 955.449 1209.862
G1 , α =3 90.719% 6.39E−04 2.6648 58.175 904.424 2379.001
G1 , α =4 15.862% 9.58E−09 12.2243 2.428 0.995 4.493
G1 , α =8 11.866% −1.33E−04 24.9801 0.983 0.219 2.785
G2 , α = 0.1 34.652% 6.66E−16 0.115417 137.614 228.309 19.515
G2 , α = 0.2 37.559% 8.88E−16 0.930472 37.712 105.120 9.917
G2 , α = 0.5 19.814% 1.78E−15 4.22339 4.570 3.921 3.336
G3 , α = 0.1 26.351% 4.66E−10 7.5567 5.000 2.769 7.294
G3 , α = 0.2 20.711% 2.25E−11 8.09092 3.363 1.705 4.366
G3 , α = 0.5 11.691% 4.53E−11 9.27546 1.856 0.826 2.254
G3 , α =1 7.273% 7.43E−11 9.3598 1.172 0.500 1.341
G3 , α =2 7.074% 1.02E−09 10.85137 0.705 0.286 0.772
G4 , α = 0.1 24.429% 1.94E−12 6.4242 0.548 0.890 0.131
G4 , α = 0.2 18.433% 6.44E−13 5.35986 0.629 0.893 0.243
G4 , α = 0.5 11.078% −4.23E−13 6.0001 0.906 0.800 0.528
G4 , α =1 11.538% 6.25E−13 7.4528 1.424 0.739 1.259
G4 , α =2 12.566% 6.02E−10 8.84518 2.514 0.683 3.642
G4 , α = 5 18.770% 7.52E−06 7.82972 6.188 0.690 19.914
G4 , α = 10 26.904% 8.03E−05 10.315 14.827 0.816 78.094

Table 8
Proportional, derivative and integral gains of the PID controllers obtained after tuning the DC motor using adaptive simulated annealing for the
four cost functions described earlier
Cost function kp ki kd Overshoot Settling time (seconds)
ISE 99.6727 99.998 99.5863 12.83% 3.997
IAE 99.9671 75.6893 47.7092 6.23% 1.2708
ITSE 99.9273 90.4924 63.8036 7.29% 1.2062
ITAE 99.1446 66.3086 33.1841 8.59% 1.4186

The desired characteristics of the controlled systems were improved with respect to the original system (no control): overshoot, settling time and steady-state error. The comparisons with the Ziegler–Nichols tuning method and with simple simulated annealing show that the idea of adapting the algorithm to each of the three parameters, which have very different effects on the system, proved to be fruitful.

References

[1] B. Allaoua, B. Gasbaoui and B. Mebarki, Setting up PID DC motor speed control alteration parameters using particle swarm optimization strategy, Leonardo Electronic Journal of Practices and Technologies 14 (2009), 19–32.
[2] K.H. Ang, G. Chong and Y. Li, PID control system analysis, design, and technology, IEEE Transactions on Control Systems Technology 13(4) (2005), 559–576. doi:10.1109/TCST.2005.847331.
[3] K.J. Åström and T. Hägglund, Benchmark systems for PID control, Digital Control – Past, Present, and Future of PID Control 33(4) (2000), 165–166.
[4] D. Bertsimas and J. Tsitsiklis, Simulated annealing, Statistical Science 8(1) (1993), 10–14. doi:10.1214/ss/1177011077.
[5] J. Chen, Z. Ren and X. Fan, Particle swarm optimization with adaptive mutation and its application research in tuning of PID parameters, in: 1st International Symposium on Systems and Control in Aerospace and Astronautics, IEEE, 2006.
[6] Z.-L. Gaing, A particle swarm optimization approach for optimum design of PID controller in AVR system, IEEE Trans. Energy Conversion 19(2) (2004), 384–391. doi:10.1109/TEC.2003.821821.
[7] S. GirirajKumar, B. Rakesh and N. Anantharaman, Design of controller using simulated annealing for a real time process, International Journal of Computer Applications 6(2) (2010), 20–25. doi:10.5120/1053-1368.
[8] R. Haber, R. Haber-Haber, R. del Toro and J. Alique, Using simulated annealing for optimal tuning of a PID controller for time-delay systems. An application to a high-performance drilling process, in: Computational and Ambient Intelligence, 2007, pp. 1155–1162. doi:10.1007/978-3-540-73007-1_140.
[9] J. Herrero, X. Blasco, M. Martinez and J. Salcedo, Optimal PID tuning with genetic algorithms for non-linear process models, in: 15th Triennial World Congress, Barcelona, Spain, Citeseer, 2002.
[10] L. Ingber et al., Adaptive simulated annealing (ASA): Lessons learned, Control and Cybernetics 25 (1996), 33–54.
[11] A. Jones and P. de Moura Oliveira, Genetic auto-tuning of PID controllers, in: First International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, IET, 1995, pp. 141–145.
[12] K. Shojaee, H.G. Shakouri and M.B. Taghadosi, Importance of the initial conditions and the time schedule in the simulated annealing, in: Simulated Annealing, Theory with Applications, 2010, pp. 217–234.
[13] J. Kennedy, Particle swarm optimization, in: Encyclopedia of Machine Learning, Springer, 2010, pp. 760–766.
[14] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi et al., Optimization by simulated annealing, Science 220(4598) (1983), 671–680. doi:10.1126/science.220.4598.671.
[15] M. Kishnani, S. Pareek and R. Gupta, Optimal tuning of PID controller using meta heuristic approach, International Journal of Electronic and Electrical Engineering 7 (2014), 171–176.
[16] C. Kumar and N.K. Nair, Multiobjective robust PID controller design for various industrial processes, AI Communications 28(3) (2015), 567–578. doi:10.3233/AIC-140639.
[17] J. Lu, C. Yang, B. Peng, R. Wan, X. Han and W. Ma, Self-tuning PID control scheme with swarm intelligence based on support vector machine, in: IEEE International Conference on Mechatronics and Automation (ICMA), IEEE, 2014, pp. 1554–1558.
[18] W.L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, 2nd edn, McGraw-Hill Higher Education, 1989.
[19] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, 1998.
[20] M. Moness and A.M. Moustafa, Tuning a digital multivariable controller for a lab-scale helicopter system via simulated annealing and evolutionary algorithms, Transactions of the Institute of Measurement and Control 37(10) (2015), 1254–1273. doi:10.1177/0142331214560806.
[21] K. Ogata, Modern Control Engineering, 5th edn, Prentice Hall, 2009.
[22] D.S. Pereira and J.O. Pinto, Genetic algorithm based system identification and PID tuning for optimum adaptive control, in: Proceedings, International Conference on Advanced Intelligent Mechatronics, IEEE, 2005, pp. 801–806.
[23] B. Porter and A. Jones, Genetic tuning of digital PID controllers, Electronics Letters 28(9) (1992), 843–844. doi:10.1049/el:19920533.
[24] G. Reynoso-Meza, J. Carrillo-Ahumada, Y. Boada and J. Picó, PID controller tuning for unstable processes using a multi-objective optimisation design procedure, IFAC-PapersOnLine 49(7) (2016), 284–289. doi:10.1016/j.ifacol.2016.07.287.
[25] R.A. Rutenbar, Simulated annealing algorithms: An overview, IEEE Circuits and Devices Magazine, 1989, pp. 19–26.
[26] M.I. Solihin, L.F. Tack and M.L. Kean, Tuning of PID controller using particle swarm optimization (PSO), International Journal on Advanced Science, Engineering and Information Technology 1(4) (2011), 458–461. doi:10.18517/ijaseit.1.4.93.
[27] A. Sungthong and W. Assawinchaichote, Particle swam optimization based optimal PID parameters for air heater temperature control system, Procedia Computer Science 86 (2016), 108–111. doi:10.1016/j.procs.2016.05.027.
[28] C. Thammarat, P. Sukserm and D. Puangdownreong, Design of PID controllers via genetic algorithm for benchmark systems, in: Proceedings of the 4th Annual International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2007, pp. 221–224.
[29] M. Tharmalingam and K. Raahemifar, Strategic initialization of a hybrid particle swarm optimization–simulated annealing algorithm (HPSOSA) for PID controller design for a nonlinear system, in: Electrical & Computer Engineering (CCECE), 2012 25th IEEE Canadian Conference on, IEEE, 2012, pp. 1–4.
[30] P. Wang and D. Kwok, Optimal design of PID process controllers based on genetic algorithms, Control Eng. Pract. 2(4) (1994), 641–648. doi:10.1016/0967-0661(94)90008-6.
[31] Z. Yachen and H. Yueming, On PID controllers based on simulated annealing algorithm, in: 27th Chinese Control Conference, IEEE, 2008, pp. 225–228. doi:10.1109/CHICC.2008.4605420.
[32] J.G. Ziegler and N.B. Nichols, Optimum settings for automatic controllers, Trans. ASME 64(11) (1942).
