I. INTRODUCTION

MULTIOBJECTIVE optimization problems (MOPs) are typically characterized by conflicting objectives, and algorithms for MOPs should be able to [1]: 1) discover solutions as close to the Pareto-optimal solutions as possible; 2) find solutions as diverse as possible in the obtained nondominated front; and 3) find solutions distributed in the nondominated front as evenly as possible. However, achieving these three goals simultaneously is still a challenging task for multiobjective optimization algorithms.

Manuscript received June 12, 2015; revised October 12, 2015; accepted November 11, 2015. Date of publication December 17, 2015; date of current version November 15, 2016. This work was supported in part by the National Natural Science Foundation of China under Grant 61472297 and Grant 61272119, and in part by the Fundamental Research Funds for the Central Universities under Grant BDZ021430. This paper was recommended by Associate Editor Y. S. Ong. (Corresponding author: Yuping Wang.)
C. Dai, Y. Wang, M. Ye, and X. Xue are with the School of Computer Science and Technology, Xidian University, Xi'an 710071, China (e-mail: ywang@xidian.edu.cn).
H. Liu is with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510006, China.
This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org provided by the authors.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCYB.2015.2503433

Among various multiobjective
optimization algorithms, multiobjective evolutionary algorithms (MOEAs), which make use of the population evolution
to get the optimal solutions, are a kind of effective methods for
solving MOPs. Nowadays, there exist many MOEAs [2]–[16],
such as multiobjective genetic algorithms (GAs) [3], multiobjective particle swarm optimization algorithms [5], [6],
multiobjective differential evolution (DE) algorithms [7], multiobjective immune clone algorithms [9], and group search
optimizer [12]. To enhance the performance of MOEAs, some
mathematical improvements, such as orthogonal experimental
design, are often applied [13].
In evolutionary algorithms, an experiment refers to the steps of generating new solutions. Orthogonal experimental design is a method of generating uniformly distributed
multiple solutions, and it has been developed to sample
a small but representative set of combinations of components of variables. Orthogonal experimental design was first
introduced into the GA by Zhang and Leung [17] for discrete
optimization problems. Later, Leung and Wang [18] applied
quantization technique into orthogonal experiment design and
proposed quantization orthogonal crossover (QOX) for numerical optimization problems. Experimental studies [19], [20]
show that QOX is an effective and efficient operator for numerical optimization problems. In order to limit the number of
potential offspring of each pair of parents and avoid a large
number of function evaluations during selection, QOX groups
variables into a few groups randomly and sequentially [18].
However, since randomly grouping the variables does not consider the relationships among these variables, it may reduce the possibility that QOX generates solutions close to the global optimal solutions, and may degrade the search efficiency of the evolutionary algorithm. To overcome this drawback, we propose a variable grouping scheme that considers the relationships among these variables. As a result, the solutions generated by QOX with the new grouping scheme are more likely to be close to the global optimal solutions.
The variables can be grouped by a technique called
learning automata (LA) [21] which is derived from the
theory of probability and Markov process. LA model
was first proposed and developed in mathematical psychology by Bush and Mosteller [22], and then developed
by Atkinson et al. [23] and Tsetlin [24].
2168-2267 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Recently, the LA method has been used in many engineering problems (e.g., control problems [25], data mining [26],
event pattern tracking [27], complex networks [28], function optimization [29], [30], vehicular sensor networks [31], opportunistic networks [32], and economic emission dispatching [33]) to improve the performance of the algorithms; reinforcement learning has also been used in function optimization [34]–[37] for the same purpose. Over the last two decades, the LA method has also
been adopted to improve the performance of MOEAs. In [38],
LA is employed to divide the search space into separate
cubes. In this method, the multiobjective functions are aggregated into a single function by using a weighted-sum method.
However, because the method uses the weighted-sum method
to combine fitness functions, this method cannot find the solutions in nonconvex parts of the Pareto-optimal front. In [39],
LA and GAs are integrated to solve the MOP of generation dispatch in a six-bus power system. Liao and Wu [40]
proposed a method of multiobjective optimization by LA to
solve complex MOPs. The LA method is used to solve multiobjective generation dispatch in [41]. Horn and Oommen [42]
developed four LA-based algorithms to solve multiconstraint
assignment problems. In [43], the LA method is used as a local search technique in DE to solve numerical optimization
problems. In this paper, LA is employed to divide each
dimension into a certain number of parts, and each part
is called a cell. The value of a cell is updated based
on the current value of the cell and the number of current nondominated solutions whose corresponding component
locates in the cell. The reduction of the number of nondominated solutions will make the value of the cell become
smaller. The values of these cells are used to group variables
for QOX.
In this paper, an orthogonal evolutionary algorithm with
LA (OELA) for multiobjective optimization problems is presented. To be specific, an LA method, QOX, and a new fitness
function are used in this algorithm. LA is used to execute the
mutation operator and used as a tool to group the decision
variables for QOX. The resulting operator is an improved QOX which is more likely to generate good offspring than the original QOX, and thus can enhance the performance of QOX and improve the search efficiency of the algorithm. Moreover, a new fitness function based on the decomposition of the objective space is proposed and used to maintain the diversity and uniformity of the solutions and to guide the obtained solutions toward the true Pareto front (PF).
The remainder of this paper is organized as follows:
Section II introduces the main concepts of the multiobjective
optimization problems, LA and QOX; Section III presents the
OELA, including the LA, QOX and the new fitness function;
while Section IV shows the experimental results of the proposed algorithm and the compared algorithms; and Section V gives the conclusions and future work.
II. PRELIMINARIES
In this section, the main concepts of the multiobjective
optimization problems, LA, and QOX are introduced.
A. Multiobjective Optimization
A multiobjective optimization problem can be formulated as follows [44]:

  min F(x) = ( f1(x), f2(x), . . . , fm(x) )
  s.t. gi(x) ≤ 0, i = 1, 2, . . . , q
       hj(x) = 0, j = 1, 2, . . . , p

where x = (x1, . . . , xn) ∈ X ⊆ R^n is called the decision variable and X is an n-dimensional decision space. fi(x) (i = 1, . . . , m) is the ith objective to be minimized, gi(x) (i = 1, 2, . . . , q) defines the ith inequality constraint, and hj(x) ( j = 1, 2, . . . , p) defines the jth equality constraint. Furthermore, all the constraints determine the set of feasible solutions, which is denoted by Ω, and Y = {F(x) | x ∈ Ω} is denoted as the objective space. To be specific, we try to find a feasible solution x minimizing each objective function fi(x) (i = 1, . . . , m) in the vector-valued objective function F. Let x, z ∈ Ω; x is said to dominate z if fi(x) ≤ fi(z) for i = 1, 2, . . . , m and F(x) ≠ F(z). x* is called Pareto-optimal if there is no other x ∈ Ω such that x dominates x*. The set of all the Pareto-optimal solutions is called the Pareto set (PS). The image of the PS on the objective space, PF = {F(x) | x ∈ PS}, is called the PF [45].
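The dominance relation can be sketched directly; the following is a minimal illustration (the function names are ours, not from the paper):

```python
def dominates(fx, fz):
    """x dominates z: F(x) is no worse in every objective and F(x) != F(z)."""
    return all(a <= b for a, b in zip(fx, fz)) and tuple(fx) != tuple(fz)

def nondominated(front):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in front if not any(dominates(q, p) for q in front)]

# (2, 3) is dominated by (2, 2); the rest are mutually nondominated.
print(nondominated([(1, 3), (2, 2), (3, 1), (2, 3)]))  # → [(1, 3), (2, 2), (3, 1)]
```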
B. Learning Automata
LA is used to execute the mutation operator and used as
a tool to group the decision variables for the QOX. In this
paper, LA consists of n automata, where n is the dimension of the optimization problem considered, and the role of each automaton is to search for the optimal solutions on a specified dimension. The ith automaton can be defined as a triple (Si, Bi, Pi), where:
1) Si = {xi} is the set of possible states on the ith dimension, and xi is the dimensional state on the ith dimension, where xi ∈ [xmin,i, xmax,i], with xmax,i and xmin,i being the maximum and minimum values of the ith dimension, respectively;
2) Bi = {b(l, len)} denotes the set of possible actions which the automaton can take on dimension i, i.e., b(l, len) represents an action to move a length len along the left path (when l = 1), the right path (when l = 2), or its own path (when l = 3), where l represents a path label and the length len is no longer than the path length. Each path is also assigned a path value, which represents the possibility of finding a better solution if the automaton searches along this path; this will be defined and explained later in detail;
3) Pi includes two probabilities p1 and p2, where p1 represents the probability of path selection and p2 the probability of moving the length of xi along the selected path.
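The triple above can be read as a small stateful object. The sketch below is purely illustrative (the class and method names are ours, and the move length is drawn uniformly, since the path-value and probability update rules are defined by the cell-value machinery rather than here):

```python
import random
from dataclasses import dataclass

@dataclass
class Automaton:
    """One automaton per dimension: a state x and path-selection probabilities."""
    x: float
    x_min: float
    x_max: float
    p_path: tuple = (1/3, 1/3, 1/3)   # P(left), P(right), P(own)

    def act(self, rng):
        """Choose a path by p_path, then move a random length along it."""
        l = rng.choices([1, 2, 3], weights=self.p_path)[0]
        if l == 1:                              # left path: toward x_min
            self.x -= rng.uniform(0, self.x - self.x_min)
        elif l == 2:                            # right path: toward x_max
            self.x += rng.uniform(0, self.x_max - self.x)
        # l == 3: stay on the current (own) path in this sketch
        return self.x

a = Automaton(0.5, 0.0, 1.0)
a.act(random.Random(0))   # one mutation step, always within [x_min, x_max]
```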
Because the dimensional states are continuous variables, the
search space contains an infinite number of possible states.
In order to achieve computational tractability, each dimension
can be divided into D cells. We denote the jth cell as ci,j of
the ith dimension, where i = 1, . . . , n, j = 1, . . . , D. Thus,
there are totally nD cells for an n-dimensional search space.
Moreover, the width of a cell in the ith dimension can be computed by wi = (xmax,i − xmin,i)/D, and the value of cell
ci,j is defined based on the possibility of the ith component of a nondominated solution locating in this cell; it is updated by (6) and (7) according to the current cell value and the number of current nondominated solutions whose corresponding component locates in the cell.
For QOX, each dimension i of a pair of parents x and z is quantized into Q levels

  ai,1 = lbi;  ai,j = lbi + ( j − 1)|xi − zi|/(Q − 1) for 2 ≤ j ≤ Q − 1;  ai,Q = ubi   (9)
where ubi = max(xi, zi) and lbi = min(xi, zi). In practice, the dimension n is often large, and so is the scale of the orthogonal array LM(Q^n).
However, each pair of parents should not produce too many
potential offspring in order to avoid a large number of function evaluations. For this purpose, the variables x1 , . . . , xn are
divided into G groups, where G is small and each group is
treated as one factor. In this way, the corresponding orthogonal array has a small number of combinations, and a small
number of potential offspring are generated.
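The quantization into Q evenly spaced levels per dimension, as in (9), can be sketched as follows (the helper name is ours):

```python
def quantize_levels(x, z, Q=3):
    """Q levels a[i][j] per dimension, evenly spaced between the two parents."""
    levels = []
    for xi, zi in zip(x, z):
        lb, ub = min(xi, zi), max(xi, zi)
        levels.append([lb + j * (ub - lb) / (Q - 1) for j in range(Q)])
    return levels

print(quantize_levels((0, 4), (6, 1)))   # → [[0.0, 3.0, 6.0], [1.0, 2.5, 4.0]]
```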
III. ORTHOGONAL EVOLUTIONARY ALGORITHM WITH LEARNING AUTOMATA
OELA mainly consists of three parts: QOX with LA, the new fitness function, and the selection and update strategies, which will be introduced one by one in this section.
A. Quantization Orthogonal Crossover
With Learning Automata
In [18], the variables (components) are sequentially and
randomly grouped, which does not consider the relationship
or connection of these variables. This may reduce the possibility of QOX to generate the solutions close to the global
optimal solutions, and may degrade the search efficiency of
the evolutionary algorithm. Note that LA can provide the
information that the biggest-value cell of each variable has
the most nondominated solutions. If the variables whose the
jth( j = 1, . . . , Q) levels ai,j simultaneously locate in the
biggest-value cells are grouped into a group. Many components of offspring generated by the QOX will simultaneously
locate in the biggest-value cells, which may make these offspring more likely be nondominated solutions. Thus, if we
use LA in this way by considering the relationship or connection of variables and combine LA with QOX, the search
efficiency can be improved. The detail of using LA is as follows. First, for each dimension, we find the biggest-value
cell, then these n cells in all dimensions form a vector
denoted as C = (c1,j1 , . . . , cn,jn ), where ci,ji ∈ {ci,1 , . . . , ci,D }, i = 1, . . . , n, and ji is the index of the cell with the biggest cell value in the ith dimension. For a given pair of
solutions x and z, we get lb = [min(x1 , z1 ), . . . , min(xn , zn )]
and ub = [max(x1 , z1 ), . . . , max(xn , zn )] which are the lower
and upper bounds of x and z, respectively. To facilitate
grouping, a matrix R = [ri,j]_{n×Q} (where i indicates the ith variable and j indicates the jth level ai,j of the ith dimension, i = 1, . . . , n, j = 1, . . . , Q) is defined by the following
formula:

  ri,j = 1 if ai,j ∈ ci,ji , and ri,j = 0 otherwise   (10)
where ri,j = 1 indicates that the jth level ai,j belongs to the biggest-value cell ci,ji of the ith dimension, and ri,j = 0 indicates that ai,j does not belong to ci,ji. The column of R with the most elements equal to 1 is found first, and then the rows (dimensions) corresponding to these elements are assigned to one group. By repeating this process, the variables can be iteratively grouped. Suppose Q1 groups have been made at last.
There are two cases: 1) there is no ungrouped variable and
2) there are some ungrouped variables. For the first case, all
variables are divided into Q1 groups and let G = Q1 . For
the second case, the remaining variables will be randomly grouped into (G − Q1) groups, so that all variables are again divided into G groups. This grouping scheme makes as many components of the offspring generated by QOX as possible locate in C, and makes these offspring more likely to locate close to the
Pareto-optimal solutions. After grouping, the orthogonal array
LM (QG ) is used to choose a sample of M chromosomes as the
potential offspring. The improved QOX is denoted as QOXLA.
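The R-matrix grouping can be sketched as follows; the function name is ours, and for brevity the leftover variables are placed in one group rather than (G − Q1) random groups:

```python
def group_by_R(R, G):
    """Group variable indices via the R matrix of Eq. (10): repeatedly take the
    column with the most 1s among the still-ungrouped rows (variables)."""
    n, Q = len(R), len(R[0])
    ungrouped, groups = set(range(n)), []
    while ungrouped and len(groups) < G:
        j = max(range(Q), key=lambda c: sum(R[i][c] for i in ungrouped))
        grp = {i for i in ungrouped if R[i][j] == 1}
        if not grp:                 # no remaining level hits a biggest-value cell
            break
        groups.append(grp)
        ungrouped -= grp
    if ungrouped:                   # leftovers (the paper groups them randomly)
        groups.append(set(ungrouped))
    return groups

# R from the five-variable example that follows; rows are variables x1..x5.
R = [[0, 1, 0], [1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
print(group_by_R(R, G=3))   # → [{1, 3}, {0, 4}, {2}]
```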
Example: Consider a 5-D optimization problem. Let the two parents be x = (0, 4, 2, 0, 1) and z = (6, 1, 5, −3, 2); they define the solution space [l, u] = [(0, 1, 2, −3, 1), (6, 4, 5, 0, 2)]. We set G = 3, Q = 3, and C = ([2.5, 4], [0, 2], [4, 6], [−4, −1], [1.1, 1.9]). According to (10), the matrix R can be calculated and the result is as follows:

      | 0 1 0 |
      | 1 0 0 |
  R = | 0 0 1 |
      | 1 0 0 |
      | 0 1 0 |

Through the above grouping scheme, these five variables are divided into three groups, i.e., (x1, x5), (x2, x4), and (x3). By applying L9(3^3), we can get the following nine potential offspring: (0, 1, 2, −3, 1), (0, 2.5, 3.5, −1.5, 1), (0, 4, 5, 0, 1), (3, 1, 3.5, −3, 1.5), (3, 2.5, 5, −1.5, 1.5), (3, 4, 2, 0, 1.5), (6, 1, 5, −3, 2), (6, 2.5, 2, −1.5, 2), and (6, 4, 3.5, 0, 2).
Since four components of both offspring (3, 1, 3.5, −3, 1.5) and (3, 2.5, 5, −1.5, 1.5) locate in C, they are likely to locate close to the Pareto-optimal solutions.
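The nine offspring above can be reproduced mechanically. The sketch below assumes the standard L9(3^4) orthogonal array restricted to its first three columns; function names and 0-indexing are ours:

```python
# Standard L9(3^4) orthogonal array; entries are level indices 0..2.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def qox_offspring(x, z, groups, Q=3):
    """All variables in a group share the level index given by the array column."""
    levels = [[min(a, b) + j * abs(a - b) / (Q - 1) for j in range(Q)]
              for a, b in zip(x, z)]
    children = []
    for row in L9:
        child = [0.0] * len(x)
        for col, grp in enumerate(groups):
            for i in grp:
                child[i] = levels[i][row[col]]
        children.append(tuple(child))
    return children

x, z = (0, 4, 2, 0, 1), (6, 1, 5, -3, 2)
groups = [(0, 4), (1, 3), (2,)]        # (x1, x5), (x2, x4), (x3)
offspring = qox_offspring(x, z, groups)
```

Each pair of parents thus yields exactly M = 9 evaluations instead of 3^5 = 243 full combinations.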
Compared with other adaptive approaches (such as adaptive DE [7] and adaptive particle swarm optimization [5]), the main advantages of QOXLA are as follows:
1) QOXLA can predict the potential nondominated area more
exactly; 2) QOXLA can generate better diversity of nondominated solutions; 3) QOXLA takes the correlation of variables
into account; and 4) parameters of QOXLA are easily set.
The space complexity and the computational complexity of QOXLA are O(2Nn) and O(Nn), respectively; the space complexity and the computational complexity of adaptive DE [7] are O(n) and O(N), respectively; and the space complexity and the computational complexity of adaptive particle swarm optimization [5] are O(3Nn) and O(N), respectively, where N is the size of the population. Thus, the cost of QOXLA is larger than that of the other two adaptive methods.
B. New Fitness Function Based on the Decomposition
of the Objective Space
A critical and difficult aspect of MOEAs is to find a suitable fitness function. The fitness value of each solution should indicate the capability of this member to survive. In this paper, we propose a new fitness function which is based on the decomposition of the objective space and is helpful to maintain the diversity of the obtained solutions and to guide them toward the true PF. The
objective space is uniformly divided into subspaces by a set
of weight vectors which are evenly distributed. The solutions
corresponding to one subspace are classified into one set by
the following equations:
  Z^i = { x | x ∈ POP, Φ(F(x), λ^i) = max_{1≤j≤N} Φ(F(x), λ^j) },  i = 1, . . . , N   (11)

  Y^i = { F(x) | x ∈ Ω, Φ(F(x), λ^i) = max_{1≤j≤N} Φ(F(x), λ^j) }   (12)
where POP = {x^1, x^2, . . . , x^{K1}} is the current population with K1 solutions, W = {λ^1, λ^2, . . . , λ^N} is a set of weight vectors, and Φ(F(x), λ^i) is a specific aggregation function which will be given in the following. For a given solution x, suppose that the value Φ(F(x), λ^i) is the largest among {Φ(F(x), λ^j), j = 1, . . . , N}; then the solution x belongs to the set Z^i. These K1 solutions are divided into N classes by (11), and the objective space Y is divided into N subobjective spaces
Y1, . . . , YN. To illustrate the idea intuitively, we give the example shown in Fig. 2. The objective space is evenly divided into four parts by four weight vectors. The weight vector λ^1 corresponds to the subobjective space bounded by the region ABGF. The aggregation function plays a very important role in this model, as it directly determines whether the objective space is evenly divided. The elliptic function and the ellipsoidal function, which have a good performance in capturing nonconvex fronts [49], are used as the aggregation functions for two-objective and three-objective problems, respectively.
Consider an ellipse centered at the origin whose semi-major axis lies along λ = (λ1, λ2); its elliptic function can be written as

  g(F(x), λ) = ( f1(x)λ1 + f2(x)λ2 )^2 / A^2 + ( f1(x)λ2 − f2(x)λ1 )^2   (13)

where A > 1 is the semi-major axis of the ellipse and √(A^2 − 1)/A is its eccentricity. An ellipsoidal function whose center is the origin and whose semi-major axis lies along λ = (λ1, λ2, λ3) can be written as

  g(F(x), λ) = ( f1(x)λ1 + f2(x)λ2 + f3(x)λ3 )^2 / A^2
             + ( f1(x)λ2 − f2(x)λ1 )^2
             + ( f2(x)λ2λ3 + f1(x)λ1λ3 − (λ1^2 + λ2^2) f3(x) )^2.   (14)
For m objectives, the aggregation function generalizes to

  g(F(x), λ) = ( Σ_{i=1}^{m} λi fi(x) )^2 / A^2 + Σ_{i=2}^{m} ( f'i(x) )^2   (15)

where f'i(x) (i = 2, . . . , m) are the components of F(x) along directions orthogonal to λ.
For two- and three-objective optimization problems, we adopt g(F(x), λ^i) in (13) and (14) as Φ(F(x), λ^i), respectively. When the solutions have been classified by using (11), if a set Z^i (i = 1, . . . , N) is not empty, the fitness function value of each solution x in Z^i is calculated by the following formula:

  fit(x) = |Z^i| + δ + ε Φ(F(x), λ^i)   (16)

where x ∈ Z^i; δ is equal to 0 if x is a nondominated solution of Z^i, and equal to 1 otherwise; |Z^i| indicates the number of solutions in set Z^i; and ε is a small positive constant that keeps ε Φ(F(x), λ^i) < 1. In this paper, our main objectives are to improve the convergence and maintain the diversity by minimizing the fitness values of solutions. In particular, minimizing |Z^i| balances the number of solutions across the subobjective spaces, which improves the diversity of the solutions of POP in the objective space; minimizing δ keeps the nondominated solutions of Z^i, which improves the convergence; and minimizing Φ(F(x), λ^i), which makes the angle between the objective vector of solution x and the weight vector λ^i close to 0, makes the solutions in POP distribute more evenly in the objective space.
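A small sketch of the elliptic aggregation and this crowding-plus-dominance fitness (the function names, delta, eps, and the choice A = 2 are illustrative, not the paper's settings):

```python
def elliptic(f, lam, A=2.0):
    """Elliptic aggregation for two objectives: squared projection along lam
    (scaled by 1/A^2) plus the squared component orthogonal to lam."""
    u = f[0] * lam[0] + f[1] * lam[1]       # along the weight vector
    v = f[0] * lam[1] - f[1] * lam[0]       # orthogonal to it
    return u * u / (A * A) + v * v

def fitness(f, subspace_size, is_nondominated, lam, A=2.0, eps=0.01):
    """Subspace crowding + dominance flag + scaled aggregation value."""
    delta = 0 if is_nondominated else 1
    return subspace_size + delta + eps * elliptic(f, lam, A)

print(elliptic((1.0, 0.0), (1.0, 0.0)))          # → 0.25 (aligned with lam)
print(fitness((1.0, 0.0), 3, True, (1.0, 0.0)))  # ≈ 3.0025
```

A dominated solution in the same subspace receives a strictly larger fitness, so it is removed first when fitness is minimized.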
C. Selection Strategy
If an algorithm makes use of the neighborhood information of a solution to generate offspring, the quality of the offspring will usually be better, and the convergence to a good PF will be sped up [50]. For this reason, a method of identifying the neighborhood of the nondominated solutions should be designed. In this paper, since the objective space is divided into subspaces by weight vectors, the decision space is accordingly divided into corresponding subspaces. Each subspace can be seen as a neighborhood. After that, for
D. Update Strategy
The elite strategy is used in the algorithm when updating the population. The update strategy uses the (μ + λ) type of deterministic replacement, where μ indicates the size of the parent population and λ indicates the size of the descendant population. In this replacement strategy, the parent and descendant populations are combined, and then the best μ individuals are selected and kept to the next generation. This means that λ individuals will be removed from the combined parent and descendant populations. In this paper, μ and λ are equal to N. The deletion rules are as follows.
Step 1: Let k = 1; if k > λ, stop; otherwise go to Step 2.
Step 2: Find a set Z^i whose size is the maximum among {Z^i (i = 1, . . . , N)}. Find the solution x ∈ Z^i whose Φ(F(x), λ^i) is the maximum in Z^i. Delete the solution x from set Z^i, i.e., let Z^i = Z^i \ {x}, and set k = k + 1. Go to Step 1.
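The deletion rules can be sketched as follows (here Z maps a subspace index to its list of solutions and phi stands for the aggregation value; the names are ours):

```python
def prune(Z, phi, n_remove):
    """Repeatedly delete, from the currently largest set Z[i], the solution
    with the maximum aggregation value (Steps 1-2 of the deletion rules)."""
    for _ in range(n_remove):
        i = max(Z, key=lambda k: len(Z[k]))            # largest subspace
        worst = max(Z[i], key=lambda s: phi(s, i))     # max aggregation value
        Z[i].remove(worst)
    return Z

Z = {0: [1, 2, 3], 1: [4]}
prune(Z, phi=lambda s, i: s, n_remove=2)
print(Z)   # → {0: [1], 1: [4]}
```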
E. Procedure of the Proposed Algorithm
Based on all of the above, a novel OELA is proposed. The flowchart of OELA is shown in Fig. 3, and the procedure of OELA is as follows.

Fig. 3. Flowchart of OELA.
[0, 1] × [−1, 1]^29, [0, 1] × [−1, 1]^29, [0, 1] × [−1, 1]^29, [0, 1]^2 × [−2, 2]^28, [0, 1]^2 × [−2, 2]^28, [0, 1]^2 × [−2, 2]^28, [0, 1]^20, [0, 1]^20, [0, 1]^20, [0, 1]^20, and [0, 1]^4 × [−2, 2]^26, respectively. These fifteen benchmark problems are challenging enough for MOEAs.
B. Parameter Settings
The experiments are carried out on a personal computer (Intel Xeon CPU 2.53 GHz, 3.98 GB RAM).
The solutions are all coded as real vectors. Polynomial
mutation [44] operators are applied directly to real vectors
in four algorithms: 1) NSGAII; 2) MOEA/D; 3) NNIA; and
4) MOEA/D-ENS. For crossover operators, simulated binary
crossover (SBX [44]) is used in NSGAII and NNIA, and
DE [54] is used in MOEA/D. The parameter configuration in
this paper is as follows: the distribution index is 20 and the crossover probability is 1 in the SBX operator; the crossover rate is 0.5 and the scaling factor is 0.5 in the DE operator; the parameter ε of the fitness function is set to 0.01; the distribution index is 20 and the mutation probability is 0.1 in the polynomial mutation operator; the quantization level is Q = 3 and there are G = 4 factors in the QOXLA operator, so that each pair of parents can produce nine potential offspring (M = 9); the LA parameter is D = 20, and the parameter of (6) and (7) is set to 0.2.
Population size of the algorithms is set to 105 for all test
problems on two or three objective problems, and 210 for fiveobjective problems. Weight vectors are generated by using the
method in [57] for MOEA/D and OELA. The number of weight vectors in the neighborhood in OELA is set to 10 for two- and three-objective problems and 20 for five-objective problems, respectively. The Tchebycheff approach [57] is used for MOEA/D as the aggregation function, while the elliptic function and the ellipsoidal function are used for OELA as the aggregation functions for two-objective, three-objective, and five-objective problems, respectively, and the eccentricities are set to √99/10. For NNIA, the size of the active population is 20 and the clone scale is 20. Each algorithm runs 20 times
independently on each test instance. The maximal number of
function evaluations is set to 100 000 for all test problems.
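The weight vectors of [57] form a simplex lattice; a sketch for two and three objectives (the function name is ours; for example, m = 3 with H = 13 yields 105 vectors, matching the population size used here):

```python
def simplex_weights(m, H):
    """All weight vectors whose components are multiples of 1/H and sum to 1."""
    if m == 2:
        return [(i / H, (H - i) / H) for i in range(H + 1)]
    if m == 3:
        return [(i / H, j / H, (H - i - j) / H)
                for i in range(H + 1) for j in range(H + 1 - i)]
    raise NotImplementedError("sketch covers m = 2, 3 only")

print(len(simplex_weights(3, 13)))   # → 105
```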
C. Performance Measures
In this paper, the following four performance metrics are
used to evaluate the performance of the different algorithms
quantitatively: 1) generational distance (GD) [55]; 2) inverted
GD (IGD) [55]; 3) hyper-volume indicator (HV) [58]; and
4) Wilcoxon rank-sum test [56]. GD measures how far the obtained PF is away from the true PF. If GD is equal to 0, all points of the obtained PF belong to the true PF. GD allows us to observe whether the algorithm converges to some region of the true PF. IGD measures how far the true PF is
away from the obtained PF. If IGD is equal to 0, the obtained
PF contains every point of the true PF. IGD shows whether
points of the obtained PF are evenly distributed throughout the
true PF. Here, GD and IGD indicators are used simultaneously
to observe whether the solutions are distributed over the entire
PF. The HV measures both the convergence and diversity of
the obtained solutions. For the problem DTLZ1 (F11 and F13),
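In one common form (a sketch; the exact normalization used in [55] may differ), GD and IGD are mean nearest-point distances between the obtained front and the true PF:

```python
import math

def gd(obtained, true_pf):
    """Generational distance: mean distance from each obtained point to the true PF."""
    return sum(min(math.dist(p, q) for q in true_pf) for p in obtained) / len(obtained)

def igd(obtained, true_pf):
    """Inverted GD: mean distance from each true-PF point to the obtained front."""
    return gd(true_pf, obtained)

print(gd([(0, 0)], [(0, 0), (3, 4)]))    # → 0.0
print(igd([(0, 0)], [(0, 0), (3, 4)]))   # → 2.5
```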
TABLE I
IGD, GD, AND HV OBTAINED BY OELA, MOEA/D, NSGAII, AND NNIA ON F1–F10
and NNIA, respectively. The best results obtained are highlighted in bold in these tables. It can be seen from Tables I and II that, for the IGD metric, OELA outperforms MOEA/D and NNIA on all 15 test problems, and outperforms NSGAII on 14 test problems; NSGAII has the same performance as OELA only on problem F3. For the GD metric, OELA outperforms NSGAII on 13 test problems (F1–F4, F6–F8, and F10–F15), performs the same as NSGAII on one problem (F9), and performs worse than NSGAII only on problem F5. OELA outperforms MOEA/D on 11 problems, performs the same as MOEA/D on two problems (F3 and F9), and performs worse than MOEA/D on only two problems (F1 and F5). OELA outperforms NNIA on 14 problems, and performs worse only on problem F5. For the HV metric, OELA outperforms MOEA/D and NNIA on 14 test problems, and outperforms NSGAII on
TABLE II
IGD, GD, AND HV OBTAINED BY OELA, MOEA/D, NSGAII, AND NNIA ON F11–F15
Fig. 4.
300 000 for these ten test problems; moreover, the experimental results of MOEA/D-ENS and MOLA are taken directly from [16] and [40], respectively.
TABLE III
IGD OBTAINED BY OELA, MOEA/D-ENS, AND MOLA
Fig. 5.
Evolution of the mean of GD and IGD values on problems DTLZ1 and DTLZ3.
Fig. 6.
(a) Sensitivity of the major semi-axis of the ellipse on F1. (b) Sensitivity of T on F1. (c) Sensitivity of D on F1.
REFERENCES
[1] C. A. C. Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems. New York, NY, USA: Kluwer, May 2002.
[2] A. M. Zhou et al., "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, Mar. 2011.
[3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.
[53] S. Huband, P. Hingston, L. Barone, and L. While, "A review of multiobjective test problems and a scalable test problem toolkit," IEEE Trans. Evol. Comput., vol. 10, no. 5, pp. 477–506, Oct. 2006.
[54] R. Storn and K. Price, "Differential evolution – A simple and efficient heuristic for global optimization over continuous spaces," J. Global Optim., vol. 11, no. 4, pp. 341–359, Dec. 1997.
[55] C. A. C. Coello and N. C. Cortés, "Solving multiobjective optimization problems using an artificial immune system," Genet. Program. Evol. Mach., vol. 6, no. 2, pp. 163–190, Jun. 2005.
[56] R. G. D. Steel, J. H. Torrie, and D. A. Dickey, Principles and Procedures of Statistics: A Biometrical Approach. New York, NY, USA: McGraw-Hill, 1997.
[57] Q. F. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.
[58] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach," IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, Nov. 1999.
[59] A. O. Allen, Probability, Statistics, and Queuing Theory With Computer Science Applications, 2nd ed. Boston, MA, USA: Academic Press, 1990.
Cai Dai received the Ph.D. degree in computer science and technology from the School of Computer
Science and Technology, Xidian University, Xi'an,
China, in 2014.
His current research interests include multiobjective optimization and evolutionary algorithms.
Miao Ye, photograph and biography not available at the time of publication.
Xingsi Xue, photograph and biography not available at the time of publication.
Hailin Liu, photograph and biography not available at the time of publication.