
A Hybrid of Differential Evolution and Genetic Algorithm for Constrained Multiobjective Optimization Problems

Min Zhang, Huantong Geng, Wenjian Luo, Linfeng Huang, and Xufa Wang
Nature Inspired Computation and Applications Laboratory, Department of Computer Science and Technology, University of Science and Technology of China, 230027, Hefei, Anhui, China {zhangmin, lfhuang}@mail.ustc.edu.cn, htgeng@ustc.edu, {wjluo, xfwang}@ustc.edu.cn

Abstract. Two novel schemes for selecting the current best solutions in multiobjective differential evolution are proposed in this paper. Based on the search biases strategy suggested by Runarsson and Yao, a hybrid of multiobjective differential evolution and genetic algorithm with an (N+N) framework for constrained MOPs is given. The hybrid algorithm, adopting each of the two schemes, is then compared with the constrained NSGA-II on 4 benchmark functions constructed by Deb. The experimental results show that the hybrid algorithm has better performance, especially in the distribution of the non-dominated set.

1 Introduction
Genetic Algorithm (GA) for Multiobjective Optimization Problems (MOPs) was suggested by Rosenberg in his dissertation as early as 1967 [1]. However, it was not until 1985 that the first genetic algorithm for MOPs, namely VEGA, was proposed by Schaffer [2]. Because of the deficiencies of VEGA, Multiobjective Evolutionary Algorithms (MOEAs) have received more and more attention. Two generations of Evolutionary Multiobjective Optimization have been identified by Coello Coello [3]. The first generation (1985-1998) emphasizes the simplicity of algorithms, where the most representative are NSGA [4], NPGA [5] and MOGA [6]. The second generation started when elitism became a standard mechanism, first adopted by Zitzler [7] in SPEA. In this generation, efficiency is stressed, and the most representative algorithms are SPEA [7], SPEA2 [8], PAES [9] and NSGA-II [10]. Meanwhile, ε-dominance MOEAs [11], particle swarm optimization [12] and differential evolution [13] for MOPs were also proposed in this generation. These MOEAs are mainly for unconstrained MOPs. However, real-world MOPs often come with constraints. To solve these problems, the crucial issue is how to handle the constraints, i.e., how to balance the search between the feasible and infeasible regions. Runarsson and Yao [14] proposed the search biases strategy and introduced differential evolution into their algorithm [15] in order to achieve a good compromise
T.-D. Wang et al. (Eds.): SEAL 2006, LNCS 4247, pp. 318–327, 2006. © Springer-Verlag Berlin Heidelberg 2006


between the feasible and infeasible regions in constrained single-objective optimization. In this paper, the search biases strategy is introduced to solve constrained MOPs. First, two novel schemes of selecting the current best solutions for multiobjective differential evolution (MODE) in constrained MOPs are proposed. A hybrid of MODE and GA with the (N+N) framework for constrained MOPs is then given. Finally, the hybrid algorithm is implemented on NSGA-II [10] with each of the two schemes and compared with a state-of-the-art MOEA, the constrained NSGA-II (CNSGA-II) [20].

2 Preliminary
2.1 Problem Definition

Definition 1 (Constrained MOP). A general constrained MOP includes a set of n decision variables, a set of m objective functions, and a set of p inequality constraints (equality ones may be approximated by inequalities [14]). The goal of optimization is

min f(x) = {f_1(x), f_2(x), …, f_m(x)},  s.t. g_i(x) ≤ 0,  i = 1, 2, …, p    (1)

where x is the decision vector in decision space X, and f(x) is the objective vector in objective space Y. In constrained optimization problems, the p constraints are usually transformed into a constraint violation function, which is defined in (2).

φ(g(x)) = Σ_{i=1}^{p} w_i (max(g_i(x), 0))^γ    (2)

where the exponent γ is usually 1 or 2 and the weights w_i > 0 (i = 1, …, p) may be tuned during the search (γ = 1 and w_i = 1 in this study). The feasible set X_f is defined as the set of decision vectors x that satisfy the p constraints, i.e., φ(g(x)) = 0.

Definition 2 (Pareto Dominance "≺"). For any two decision vectors a and b,

a ≺ b ⟺ (∀i ∈ {1, …, m}: f_i(a) ≤ f_i(b)) ∧ (∃j ∈ {1, …, m}: f_j(a) < f_j(b))    (3)

The non-dominated set P in X_f is the Pareto-optimal set (POS), and the set f(P) is the Pareto-optimal front (POF).

2.2 Constraint Pareto Dominance

Among constraint handling methods for MOPs, the most promising one is the Constraint Pareto Dominance proposed by Deb [10] [20], which is defined as follows.

Definition 3 (Constraint Pareto Dominance "≺_c"). For any two decision vectors a and b,

a ≺_c b ⟺ (a ≺ b ∧ φ(g(a)) = φ(g(b)) = 0) ∨ (φ(g(a)) = 0 ∧ φ(g(b)) > 0) ∨ (φ(g(b)) > φ(g(a)) > 0)    (4)
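Definitions 1-3 can be sketched in Python as follows; the function names and the convention that the comparator returns True when a constraint-dominates b are our own, not from the paper.

```python
import numpy as np

def violation(g_values, w=None, gamma=1.0):
    """Constraint violation phi(g(x)) of Eq. (2); w_i = 1 and gamma = 1 as in this study."""
    g = np.asarray(g_values, dtype=float)
    w = np.ones_like(g) if w is None else np.asarray(w, dtype=float)
    return float(np.sum(w * np.maximum(g, 0.0) ** gamma))

def dominates(fa, fb):
    """Pareto dominance of Eq. (3): a is nowhere worse and somewhere better."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def constraint_dominates(fa, ga, fb, gb):
    """Constraint Pareto dominance of Eq. (4)."""
    pa, pb = violation(ga), violation(gb)
    if pa == 0.0 and pb == 0.0:
        return dominates(fa, fb)   # both feasible: ordinary dominance
    if pa == 0.0 and pb > 0.0:
        return True                # a feasible solution beats an infeasible one
    return 0.0 < pa < pb           # both infeasible: smaller violation wins
```

Note that the third branch compares only the violations, as in Eq. (4): between two infeasible solutions the objective values play no role.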


3 Hybrid of MODE and GA for Constrained MOPs


3.1 Differential Evolution

Storn and Price [16] proposed differential evolution, a simple and efficient adaptive scheme for global optimization over continuous spaces, and gave its two most promising schemes. Differential Evolution (DE) is a population-based evolutionary algorithm with simple mutation and crossover operators to create the next generation. DE has similarities with traditional Evolutionary Algorithms. However, it employs neither binary encoding like a simple GA nor a probability density function to self-adapt its parameters like an ES [21]. Runarsson and Yao [14] suggested search biases in constrained single-objective optimization, and proposed the corresponding method as follows. When creating the next generation based on a (μ, λ)-ES, individuals are generated according to

x'_k = x_i + γ(x_1 − x_{i+1})    (5)

where x_1 denotes the top individual after stochastic ranking of the whole population, i.e., the current best solution, x_i and x_{i+1} are random samples from the population, and γ is the search step length parameter. As noted in [19], (5) can be deduced from the second scheme of the differential evolution method in [16], and (5) has obtained satisfying results for constrained single-objective optimization [14].

3.2 Two Schemes of MODE

According to (5), newly generated individuals are close to x_1, the current best solution in the population. However, there is usually more than one solution in the non-dominated set of a MOP. Furthermore, the diversity of the population is crucial for MOPs, as it determines whether the pure POF can be found. Therefore the diversity of the population and the pureness of the non-dominated set will be seriously affected if (5) is applied to constrained MOPs directly. So how to choose the current best solutions in MOPs becomes a key problem.

In order to preserve population diversity, the current best solutions should be extended to a solution set that represents the main search biases in MOPs. A simple way is to select the boundaries of the current non-dominated set as the current best solutions, so the number of current best solutions is no more than the dimension of the objective space. Individuals are generated by sampling the boundaries uniformly, as described in Algorithm 1.

Algorithm 1. B-Scheme (Boundaries as the current best solutions)
1. for k = 1 to αN do
2.   x_best = Uniform_Sample(Boundary(Non_Dominated_Set))
3.   i = rand[1, N]
4.   x'_k = x_i + γ(x_best − x_{i+1})
5. end for

In Algorithm 1, N denotes the population size and α is the percentage of individuals generated by the B-Scheme. The αN individuals are close to the boundaries


of the current non-dominated set. The boundary search ability is thus improved and population diversity can be maintained, so the pure POF will be located with larger probability.

However, it may still be hard to find the pure POF just by enhancing the boundary search ability when the POF is discontinuous, because the boundaries of the non-dominated set can only represent part of the search biases in such situations. To solve this problem, representative individuals should be picked out from the current population, and offspring generated by differential evolution around these representatives to improve the search ability; a better distribution of the non-dominated set can then be obtained. In this paper, a fitness function based on constraint Pareto dominance and crowding distance [10] is proposed to pick out the representative individuals, given as Algorithm 2.

Algorithm 2. Representative Individuals Selection
1. Split the population P according to "≺_c" into fronts, P = F_1 ∪ … ∪ F_k
2. Calculate the crowding distances I of P: feasible individuals' values are calculated by the definition in [10], while infeasible ones' values are set to 0
3. Calculate the fitness of the individuals in P: fitness(x) = i + 1/(2 + I(x)), x ∈ F_i
4. Sort P by fitness values; the top M are selected as the representatives

The representative individuals selected by Algorithm 2 are feasible solutions or infeasible ones with small constraint violations. The diversity of these representatives is good because the crowding distance is employed, and the offspring generated by differential evolution are close to them. So the search ability towards the feasible region during evolution will be improved and the diversity of the offspring will be better. Treating the M representative individuals (RI) as the current best solutions, the offspring are generated according to Algorithm 3.

Algorithm 3.
R-Scheme (Representative Individuals as the current best solutions)
1. for k = 1 to αN do
2.   best = mod(k − 1, M) + 1
3.   i = rand[1, N]
4.   x'_k = x_i + γ(RI_best − x_{i+1})
5. end for

According to the definition of crowding distance in [10], the I values of boundary individuals in each layer are ∞. Therefore, Algorithm 1 and Algorithm 3 are identical when M equals the number of boundaries of the non-dominated set. But when M is less than the number of boundaries, the representatives are only part of the boundary individuals; in that case they may not represent the main search biases, and diversity may deteriorate. Thus the value of M should be greater than the number of non-dominated set boundaries to make a distinct difference; otherwise the performance of the R-Scheme will be similar to the B-Scheme or even worse.
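The fitness of Algorithm 2 and the mutation loop of Algorithm 3 can be sketched as below; this assumes the nondominated front index and crowding distance are already available (e.g. from an NSGA-II implementation), and all names are our own.

```python
import numpy as np

def representative_fitness(front_index, crowd_dist):
    """Algorithm 2, step 3: fitness(x) = i + 1/(2 + I(x)) for x in front F_i.
    Lower is better; the crowding distance I(x) breaks ties within a front."""
    return front_index + 1.0 / (2.0 + crowd_dist)

def r_scheme(pop, reps, alpha=0.10, gamma=0.85, rng=None):
    """Algorithm 3: generate alpha*N offspring around the M representatives RI."""
    rng = np.random.default_rng() if rng is None else rng
    N, M = len(pop), len(reps)
    offspring = []
    for k in range(int(alpha * N)):
        best = k % M                    # cycles through RI_1..RI_M (0-based here)
        i = rng.integers(0, N - 1)      # random index chosen so i+1 stays in range
        offspring.append(pop[i] + gamma * (reps[best] - pop[i + 1]))
    return offspring
```

Since boundary individuals have infinite crowding distance, their fitness collapses to the bare front index i, which is why the top-M selection recovers the B-Scheme when M equals the number of boundaries.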


3.3 Hybrid Algorithm with the Framework of NSGA-II

The hybrid algorithm (DE-MOEA) proposed in this paper is based on the (N+N) framework of NSGA-II [10]. The N individuals of the next generation are created as follows: αN individuals are generated by MODE and the remaining ones by genetic operators (crossover and mutation) as in NSGA-II. DE-MOEA is described in Algorithm 4.

Algorithm 4. DE-MOEA with (N+N) Framework
1. Initialization: create the initial population P_0, t = 0, |P_0| = N
2. Evaluate the population P_t
3. P_{t+1} = generate_next_pop(P_t)
   3.1. Calculate the fitness of individuals in P_t according to Algorithm 2
   3.2. Sort P_t by the fitness values and select the top N as population Q_t
   3.3. Generate αN individuals with MODE on Q_t and put them into P_{t+1}
   3.4. Generate (1 − α)N individuals from Q_t with binary tournament selection, crossover and mutation, and put the offspring into P_{t+1}
   3.5. P_{t+1} = P_{t+1} ∪ Q_t
4. t = t + 1; if the termination condition is satisfied then output the non-dominated set, else go to step 2

In Algorithm 4, the MODE in step 3.3 can be implemented by Algorithm 1 or Algorithm 3; the corresponding algorithms are denoted HBGA and HRGA, respectively. In NSGA-II the splitting of the population terminates when the number of already split individuals is not less than N. It might seem that the entire population must be split in DE-MOEA, but actually it need not be. For any two individuals a ∈ F_i, b ∈ F_j (i ≠ j),

i ≤ fitness(a) < i + 1,  j ≤ fitness(b) < j + 1.    (6)

So (7) follows easily:

fitness(a) < fitness(b) ⟺ i < j.    (7)

From (7), the terminating condition for splitting the population in DE-MOEA is the same as in NSGA-II, so the time complexity of Algorithm 4 is equal to that of NSGA-II.
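The generational step of Algorithm 4 can be sketched as a small function; the helper signatures (`fitness_fn`, `mode_fn`, `ga_fn`) are hypothetical stand-ins for Algorithm 2, the MODE scheme, and the SBX/PM genetic operators.

```python
def de_moea_generation(P, fitness_fn, mode_fn, ga_fn, N, alpha=0.10):
    """One pass of step 3 in Algorithm 4 (helper signatures are assumptions).

    fitness_fn: Algorithm-2 fitness, lower is better.
    mode_fn(Q, n): n offspring by MODE (Algorithm 1 or 3 -> HBGA / HRGA).
    ga_fn(Q, n):   n offspring by SBX crossover + polynomial mutation.
    """
    Q = sorted(P, key=fitness_fn)[:N]                  # steps 3.1-3.2: keep the best N
    n_de = int(alpha * N)                              # step 3.3: alpha*N from MODE
    children = mode_fn(Q, n_de) + ga_fn(Q, N - n_de)   # steps 3.3-3.4
    return children + Q                                # step 3.5: (N+N) elitist union
```

The returned population has size 2N, which the next iteration's fitness sort truncates back to N, matching the (N+N) elitism of the framework.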

4 Experimental Results and Discussions


4.1 Test Functions and Performance Measures

In this section, 4 benchmark functions (CTP1, CTP2, CTP6 and CTP7) are chosen from [20] to evaluate the performance of DE-MOEA; they are described in (8) and (9).
CTP1:
Min. f_1(x) = x_1,  f_2(x) = g(x) exp(−f_1(x)/g(x))
s.t. c_j(x) = f_2(x) − a_j exp(−b_j f_1(x)) ≥ 0,  j = 1, 2, …, J    (8)

CTP2, CTP6, CTP7:
Min. f_1(x) = x_1,  Min. f_2(x) = g(x)(1 − f_1(x)/g(x))
s.t. c(x) = cos(θ)(f_2(x) − e) − sin(θ) f_1(x) ≥ a |sin(bπ(sin(θ)(f_2(x) − e) + cos(θ) f_1(x))^c)|^d    (9)


where the decision space of each function has 5 dimensions, defined as 0 ≤ x_1 ≤ 1, −5 ≤ x_{2,3,4,5} ≤ 5, and g(x) = 41 + Σ_{i=2}^{5} (x_i² − 10 cos(2π x_i)). For CTP1, J = 2, a_{1,2} = (0.858, 0.728), b_{1,2} = (0.541, 0.295), and the parameters chosen for the CTP2, CTP6 and CTP7 functions are listed in Table 1.
Table 1. Parameter settings in CTP2, CTP6 and CTP7

Function   θ        a     b     c   d   e
CTP2       −0.2π    0.2   10    1   6   1
CTP6       0.1π     40    0.5   1   2   −2
CTP7       −0.05π   40    5     1   6   0
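Equation (9) together with Table 1 can be sketched as follows; the function names and the parameter dictionary layout are ours, and the code only evaluates the constraint, not the full test problem.

```python
import numpy as np

def g(x):
    """The Rastrigin-like g(x) shared by all four problems (see Section 4.1)."""
    x = np.asarray(x, dtype=float)
    return 41.0 + float(np.sum(x[1:] ** 2 - 10.0 * np.cos(2.0 * np.pi * x[1:])))

CTP_PARAMS = {  # (theta, a, b, c, d, e) as in Table 1, theta in radians
    "CTP2": (-0.2 * np.pi, 0.2, 10.0, 1.0, 6.0, 1.0),
    "CTP6": (0.1 * np.pi, 40.0, 0.5, 1.0, 2.0, -2.0),
    "CTP7": (-0.05 * np.pi, 40.0, 5.0, 1.0, 6.0, 0.0),
}

def ctp_constraint(f1, f2, theta, a, b, c, d, e):
    """Left-hand side minus right-hand side of Eq. (9); feasible iff >= 0."""
    lhs = np.cos(theta) * (f2 - e) - np.sin(theta) * f1
    rhs = a * np.abs(np.sin(b * np.pi * (np.sin(theta) * (f2 - e)
                                         + np.cos(theta) * f1) ** c)) ** d
    return float(lhs - rhs)
```

Note that g(x) attains its minimum value 1 whenever x_2 = … = x_5 = 0, independently of x_1, which is what makes the unconstrained front easy to parameterize by f_1.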

Performance measures for MOPs are analyzed and classified by Zitzler [17]. The unary indicator D1R [17] and the binary indicator V(A, B) [18] are selected here, defined as follows.

D_1R(X) = (1/|X*|) Σ_{a ∈ X*} min{ ‖a − b‖ ; b ∈ X }    (10)

where X* denotes a reference set (1,000 uniform samples from the POF in this paper), and X is the non-dominated set found by the algorithm. V(A, B) is defined as the percentage of the objective space dominated exclusively by A within the smallest hypercube containing both non-dominated sets A and B. As in [18], 50,000 Monte Carlo samples are taken to calculate the values. When calculating the hypercube, solutions whose first objective is less than 10^{−7} are rejected in order to obtain more distinct differences between the two non-dominated sets with a finite number of samples. The two situations are illustrated in Fig. 1 and Fig. 2, where the two non-dominated sets are obtained from CNSGA-II [20] and HBGA on CTP1 in a single run.
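The D1R indicator of Eq. (10) is straightforward to compute with pairwise distances; this sketch assumes both sets are given as arrays of objective vectors, and the function name is ours.

```python
import numpy as np

def d1r(X, X_star):
    """D1R of Eq. (10): average distance from each reference point in X* to the
    nearest member of the obtained non-dominated set X (lower is better)."""
    X = np.asarray(X, dtype=float)
    X_star = np.asarray(X_star, dtype=float)
    # pairwise Euclidean distances, shape (|X*|, |X|)
    dists = np.linalg.norm(X_star[:, None, :] - X[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```

Because the average runs over the reference set X*, a non-dominated set that covers only part of the POF is penalized even if every point it contains lies exactly on the front.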
[Figures 1 and 2 plot f2 against f1 for the non-dominated sets obtained by CNSGA-II and HBGA on CTP1.]

Fig. 1. The two non-dominated sets without rejection.  Fig. 2. The two non-dominated sets with rejection.

From Fig. 1, it can be seen that each set contains a few solutions with very large f2 values and very small f1 values. This seriously degrades the quality of V(A, B), because the great majority of samples fall in the area dominated by both sets, and few samples can detect the slight but significant differences


between the two sets. The values of V(A, B) in Fig. 1 are (0.0000%, 0.2180%), so we could assert that HBGA outperforms CNSGA-II on CTP1 [17]. However, the corresponding values (0.1900%, 11.8260%) in Fig. 2 show how weakly Fig. 1 supports this assertion: the differences between the two sets are only captured at the appropriate scale of Fig. 2 with 50,000 samples. Meanwhile, the rejection hardly influences the experimental results. The f1 values of all the benchmark functions range from 0 to 1, so the probability of a sample point falling in the rejected area is 10^{−7}. The mean and standard deviation of the number of samples falling in the rejected area out of 50,000 are therefore 0.005 and 0.0707 respectively, which can hardly influence the final results. This rejection is thus employed here to obtain distinct results without increasing the number of samples.

4.2 Results and Discussions

To the best of our knowledge, CNSGA-II [20] is the most promising method for constrained MOPs and is selected for comparison with DE-MOEA (HBGA and HRGA). All the algorithms are implemented in Matlab 7.0, and the source code may be obtained from the authors upon request. A real-coded GA (simulated binary crossover SBX and polynomial mutation PM) is adopted in the implementation, and the parameter values are the same as in [20], listed in Table 2.
Table 2. Parameter settings for CNSGA-II and DE-MOEA

N     α     p_c   p_m   SBX η_c   PM η_m   FE*
100   10%   0.9   1/n   20        20       50,000

* n denotes the dimension of the decision space, and FE is the total number of function evaluations.

Table 3. Mean and standard deviation of the unary indicator D1R

Function   HBGA              HRGA              CNSGA-II
CTP1       0.0043 (0.0003)   0.0056 (0.0007)   0.0970 (0.0570)
CTP2       0.0023 (0.0003)   0.0037 (0.0021)   0.1889 (0.1236)
CTP6       0.2234 (0.9297)   0.1103 (0.6587)   0.3525 (1.1969)
CTP7       0.0097 (0.0125)   0.0241 (0.0932)   0.0610 (0.0847)

Results highlighted in bold are significantly better than CNSGA-II at α = 0.05 by a two-tailed test.

Table 4. Mean and standard deviation of the binary indicator V(A, B)

Indicator             CTP1               CTP2                CTP6               CTP7
V(HBGA, CNSGA-II)     4.4609% (0.0349)   12.5761% (0.0892)   5.4083% (0.1757)   2.6680% (0.0467)
V(CNSGA-II, HBGA)     0.2617% (0.0007)   0.0497% (0.0002)    3.2613% (0.1345)   0.1681% (0.0024)
V(HRGA, CNSGA-II)     4.0455% (0.0346)   12.1040% (0.0881)   5.4424% (0.1766)   2.9616% (0.0506)
V(CNSGA-II, HRGA)     0.3684% (0.0014)   0.0961% (0.0006)    1.7138% (0.0965)   0.1814% (0.0021)
V(HBGA, HRGA)         0.4046% (0.0015)   0.2008% (0.0023)    1.7404% (0.0958)   0.4740% (0.0109)
V(HRGA, HBGA)         0.2000% (0.0009)   0.0442% (0.0003)    3.5931% (0.1382)   0.7050% (0.0168)

Results highlighted in bold are significantly better than the other at α = 0.05 by a two-tailed test.

The search step length parameter γ in HBGA and HRGA is set to 1.1 and 0.85, respectively, and the number M in HRGA is set to 5, i.e., 5% of the population size. Each algorithm performs 100 independent


runs on each benchmark function, and the initial populations of all the algorithms in each run are the same for fair comparison. Table 3 and Table 4 show the means and standard deviations of the D1R and V(A, B) indicators obtained by all the algorithms, with the standard deviations given in parentheses. From Table 3 and Table 4 it can be observed that HBGA and HRGA give better results than CNSGA-II on all 4 benchmark functions, and the results are significantly better except on CTP6. It can also be seen that HBGA performs better than HRGA on CTP1 and CTP2, while the latter has better results on CTP6. To illustrate the pureness of the POF found by each algorithm, the statistics over 100 independent runs are listed in Table 5.
Table 5. Pureness statistics of the POF found by the three algorithms

Function   Algorithm   Runs producing pure POF   Runs producing partial POF   Runs producing local Pareto front
CTP1       HBGA        100                       0                            0
CTP1       HRGA        100                       0                            0
CTP1       CNSGA-II    4                         96                           0
CTP2       HBGA        100                       0                            0
CTP2       HRGA        100                       0                            0
CTP2       CNSGA-II    10                        90                           0
CTP6       HBGA        94                        2                            4
CTP6       HRGA        97                        1                            2
CTP6       CNSGA-II    89                        4                            7
CTP7       HBGA        48                        52                           0
CTP7       HRGA        80                        20                           0
CTP7       CNSGA-II    15                        85                           0

From Table 5, for CTP1 and CTP2, HBGA and HRGA converge to the pure POF with probability 1 in 100 runs, while the probability for CNSGA-II is not greater than 10%. For CTP6, all three algorithms converge to a partial POF or a local Pareto front with low probability, but the probability is somewhat lower for HBGA and HRGA. For CTP7 it is hard for CNSGA-II to find the pure POF, while the situation is easier for HBGA, and especially for HRGA.

The search ability at the boundaries of the non-dominated set is improved in both HBGA and HRGA, so it is easier for both to locate the pure POF of CTP1, which is continuous. CTP2 and CTP7 have discontinuous POFs, and the gaps in CTP7's POF are much larger. So HBGA can still efficiently find the pure POF of CTP2, but not that of CTP7. HRGA, however, can solve this problem by choosing representatives from the population for differential evolution, and the experimental results on CTP2 and CTP7 give evidence for this explanation. CTP6 has several local Pareto fronts because of the infeasible holes on the way towards the Pareto-optimal region in objective space. By introducing differential evolution, a better tradeoff between the feasible and infeasible regions can be achieved, so the number of runs converging to a local Pareto front is smaller than for CNSGA-II. Compared with HBGA, HRGA has better diversity during the search because it chooses more current best solutions for differential evolution, and the probability of its getting into a local


Pareto front is much lower than that of HBGA. However, the approximation and the diversity of the non-dominated set are two (possibly) conflicting objectives [8]. So when the diversity exceeds what is actually required, the approximation will deteriorate. This may be why HBGA performs better than HRGA on CTP1 and CTP2. Given the above results and analysis, the DE-MOEA algorithm proposed in this paper has superior performance compared with CNSGA-II, especially in the distribution of the non-dominated set.

5 Conclusion
This paper proposes two novel schemes of selecting the current best solutions for MODE. Based on the search biases strategy suggested by Runarsson and Yao, a hybrid algorithm of MODE and GA is then put forward for constrained MOPs. We implement the hybrid algorithm based on NSGA-II with the two schemes of MODE, named HBGA and HRGA respectively, and compare them with CNSGA-II on 4 benchmark functions constructed by Deb. Experimental results show that the quality of the non-dominated set obtained by both algorithms is better than that of CNSGA-II on all the benchmark functions. Future work is to apply the hybrid algorithm to more complex problems and applications.

Acknowledgement
This work is partially supported by the National Natural Science Foundation of China (Grant No. 60428202). The authors would like to thank Prof. X. Yao from the University of Birmingham for his helpful comments.

References
1. R.S. Rosenberg: Simulation of genetic populations with biochemical properties. Ph.D. thesis, University of Michigan, Ann Arbor, Michigan, 1967
2. J.D. Schaffer: Multiple objective optimization with vector evaluated genetic algorithms. In: Genetic Algorithms and their Applications: Proceedings of the First International Conference on Genetic Algorithms, 93–100, Lawrence Erlbaum, 1985
3. C.A. Coello Coello: Evolutionary multi-objective optimization: a historical view of the field. IEEE Computational Intelligence Magazine, 1(1): 28–36, Feb. 2006
4. N. Srinivas, K. Deb: Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3): 221–248, Fall 1994
5. J. Horn, N. Nafpliotis, D.E. Goldberg: A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the 1st CEC, 1: 82–87, June 1994
6. C.M. Fonseca, P.J. Fleming: Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In: Proceedings of the Fifth International Conference on Genetic Algorithms, 416–423, 1993
7. E. Zitzler, L. Thiele: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput., 3(4): 257–271, Nov. 1999


8. E. Zitzler, M. Laumanns, L. Thiele: SPEA2: Improving the strength Pareto evolutionary algorithm. In: EUROGEN 2001. Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, 95–100, 2002
9. J.D. Knowles, D.W. Corne: Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2): 149–172, 2000
10. K. Deb, A. Pratap, S. Agarwal, T. Meyarivan: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput., 6(2): 182–197, Apr. 2002
11. M. Laumanns, L. Thiele, K. Deb, E. Zitzler: Combining convergence and diversity in evolutionary multi-objective optimization. Evolutionary Computation, 10(3): 263–282, Fall 2002
12. C.A. Coello Coello, G. Toscano Pulido, M. Salazar Lechuga: Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput., 8(3): 256–279, June 2004
13. T. Robič, B. Filipič: DEMO: Differential evolution for multiobjective optimization. EMO 2005, 520–533
14. T.P. Runarsson, X. Yao: Search biases in constrained evolutionary optimization. IEEE Trans. Syst. Man Cybern. Part C-Appl. Rev., 35(2): 233–243, May 2005
15. T.P. Runarsson, X. Yao: Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput., 4(3): 284–294, Sep. 2000
16. R. Storn, K. Price: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optimiz., 11(4): 341–359, Dec. 1997
17. E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, V. Grunert da Fonseca: Performance assessment of multiobjective optimizers: An analysis and review. IEEE Trans. Evol. Comput., 7(2): 117–132, Apr. 2003
18. J.E. Fieldsend, R.M. Everson, S. Singh: Using unconstrained elite archives for multiobjective optimization. IEEE Trans. Evol. Comput., 7(2): 305–323, June 2003
19. M. Zhang, H.T. Geng, W.J. Luo, L.F. Huang, X.F. Wang: A novel search biases selection strategy for constrained evolutionary optimization. CEC 2006, to appear
20. K. Deb, A. Pratap, T. Meyarivan: Constrained test problems for multi-objective evolutionary optimization. EMO 2001, 284–298
21. E. Mezura-Montes, J. Velázquez-Reyes, C.A. Coello Coello: Promising infeasibility and multiple offspring incorporated to differential evolution for constrained optimization. GECCO 2005, 225–232
