Mathematical Problems in Engineering, Hindawi Publishing Corporation, 2014. doi:10.1155/2014/259473

Research Article: Many-Objective Optimization Using Adaptive Differential Evolution with a New Ranking Method

Xiaoguang He (1), Cai Dai (2), and Zehua Chen (3)
(1) School of Basic Military Education, Engineering University of Chinese People's Armed Police Force, Xi'an 710068, China
(2) School of Computer Science and Technology, Xidian University, Xi'an 710071, China
(3) Yingshang Third School, Fuyang 236200, China

Received 6 June 2014; Revised 19 July 2014; Accepted 26 July 2014; Published 12 August 2014

Academic Editor: Erik Cuevas

Copyright © 2014 Xiaoguang He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Pareto dominance is an important concept that is widely used in multiobjective evolutionary algorithms (MOEAs) to identify nondominated solutions. However, for many-objective problems, when Pareto dominance is used to rank the solutions, most obtained solutions become nondominated even in early generations, which leaves MOEAs with little selection pressure toward the optimal solutions. In this paper, a new ranking method for many-objective optimization problems is proposed that identifies a relatively small number of representative nondominated solutions with a uniform and wide distribution, thereby improving the selection pressure of MOEAs. A many-objective differential evolution algorithm with the new ranking method (MODER) is then designed for handling many-objective optimization problems. Finally, experiments are conducted and the proposed algorithm is compared with several well-known algorithms. The experimental results show that the proposed algorithm can guide the search toward the true Pareto front (PF) while maintaining the diversity of solutions for many-objective problems.

1. Introduction

Multiobjective evolutionary algorithms (MOEAs) are effective methods for solving multiobjective problems [1, 2]. Almost all well-known and frequently used MOEAs proposed in the last twenty years are based on Pareto dominance. Such Pareto dominance-based algorithms usually work well on problems with two or three objectives, but their search ability is often severely degraded as the number of objectives increases. This is because, for many-objective optimization problems, most solutions in a population may be nondominated even in the early stages of evolution. When almost all solutions in a population are nondominated, Pareto dominance-based fitness evaluation can generate almost no selection pressure toward the Pareto front (PF). Therefore, how to increase the selection pressure toward the PF is critical for many-objective optimization algorithms.

In the literature, there are mainly three categories of approaches used in MOEAs to deal with many-objective optimization problems. The first category uses an indicator function, such as the hypervolume, as the fitness function. Such algorithms are referred to as indicator-based evolutionary algorithms (IBEAs), and their high search ability has been demonstrated in the literature. Bader and Zitzler proposed a fast hypervolume-based many-objective optimization algorithm which uses Monte Carlo simulation to quickly approximate the exact hypervolume values. However, the main drawback of this category is that the computation time of the hypervolume calculation increases exponentially with the number of objectives.

The second category utilizes scalarizing functions to deal with many-objective problems. According to the literature, scalarizing function-based algorithms can deal with many-objective problems better than Pareto dominance-based algorithms. Their main advantage is the simplicity of the fitness evaluation, which can be computed easily even when the number of objectives is large. The representative MOEA in this category is MOEA/D (multiobjective evolutionary algorithm based on decomposition), which works well on a wide range of multiobjective problems with many objectives, discrete decision variables, and complicated Pareto sets. In MOEA/D, the uniformity of the weight vectors determines the uniformity of the obtained nondominated solutions; however, the weight vectors used in MOEA/D are not very uniform, and their number N must satisfy the restriction N = C_{H+m-1}^{m-1}, where m is the number of objectives and H is an integer subdivision parameter. Thus N cannot be chosen freely and grows nonlinearly with m, which restricts the application of MOEA/D to many-objective optimization problems (i.e., problems with large m). Therefore, for many-objective problems, how to set the weight vectors is a difficult but critical task.
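The growth of this restriction can be illustrated with a short computation (a sketch; the function name `moead_weight_count` is illustrative, not part of MOEA/D itself):

```python
from math import comb

# Number of uniformly spaced weight vectors in MOEA/D for a given
# subdivision level H and m objectives: N = C(H+m-1, m-1).
def moead_weight_count(H: int, m: int) -> int:
    return comb(H + m - 1, m - 1)

# With H = 12, the population size that MOEA/D must use grows rapidly:
for m in (2, 3, 5, 10):
    print(m, moead_weight_count(12, m))  # 13, 91, 1820, 293930
```

For ten objectives even the coarse subdivision H = 12 already forces a population of nearly three hundred thousand weight vectors, which is why N cannot be assigned freely for large m.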

The third category makes use of solution ranking methods. Specifically, solution ranking methods are used to discriminate among solutions in order to enhance the selection pressure toward the PF, which ensures that the solutions are able to converge to the PF. Bentley and Wakefield proposed ranking composition methods which extract the separated fitness of every solution into a list of fitness values for each objective. Ikeda et al. proposed a relaxed form of dominance (RFD) to deal with what they called dominance resistant solutions, that is, solutions that are extremely inferior to others in at least one objective but are hardly dominated. Farina and Amato proposed a dominance relation which takes into account the number of objectives in which one solution is better than, equal to, or worse than another solution. Sato et al. proposed a method to strengthen or weaken the selection process by expanding or contracting the solutions' dominance area.

In this paper, a new ranking approach for many-objective problems is proposed to rank the obtained solutions. In this approach, for each solution, a small number of virtual objective vectors evenly distributed around the objective vector of the solution are generated to determine whether the solution is nondominated. The ranking method takes the selection pressure, the diversity of solutions, and the time consumption into consideration. Moreover, a multiobjective DE algorithm based on the new ranking method is designed to solve many-objective optimization problems.

The remainder of this paper is organized as follows. Section 2 introduces the new ranking method. The proposed many-objective algorithm is described in Section 3. Section 4 presents the experimental results of the proposed algorithm. Finally, Section 5 gives the conclusions and future work.

2. The New Ranking Method

A multiobjective optimization problem can be formulated as follows:

(1) min F(x) = (f_1(x), f_2(x), ..., f_m(x))
    s.t. g_i(x) ≤ 0, i = 1, 2, ..., q,
         h_j(x) = 0, j = 1, 2, ..., p,

where x = (x_1, ..., x_n) ∈ X ⊆ R^n is the decision variable and X is the n-dimensional decision space. f_i(x) (i = 1, ..., m) is the ith objective to be minimized, g_i(x) (i = 1, 2, ..., q) defines the ith inequality constraint, and h_j(x) (j = 1, 2, ..., p) defines the jth equality constraint. All the constraints together determine the set of feasible solutions, denoted by Ω. Specifically, we try to find a feasible solution x ∈ Ω minimizing each objective function f_i(x) (i = 1, ..., m) in F. The benchmark functions used in this work are given in Table 1.

Table 1: Multiobjective benchmark functions.

DTLZ1 (domain [0, 1]^n):
  f_1 = 0.5 x_1 x_2 ... x_{m-1} (1 + g(x))
  f_2 = 0.5 x_1 x_2 ... (1 - x_{m-1}) (1 + g(x))
  ...
  f_{m-1} = 0.5 x_1 (1 - x_2) (1 + g(x))
  f_m = 0.5 (1 - x_1) (1 + g(x))
  g(x) = 100(n - 2) + 100 Σ_{i=3}^{n} [ (x_i - 0.5)^2 - cos(20π(x_i - 0.5)) ]

DTLZ3 (domain [0, 1]^n):
  f_1 = cos(0.5πx_1) cos(0.5πx_2) ... cos(0.5πx_{m-1}) (1 + g(x))
  f_2 = cos(0.5πx_1) cos(0.5πx_2) ... sin(0.5πx_{m-1}) (1 + g(x))
  ...
  f_{m-1} = cos(0.5πx_1) sin(0.5πx_2) (1 + g(x))
  f_m = sin(0.5πx_1) (1 + g(x))
  g(x) = 100(n - 2) + 100 Σ_{i=3}^{n} [ (x_i - 0.5)^2 - cos(20π(x_i - 0.5)) ]
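The benchmark functions of Table 1 can be sketched in code as follows (an illustrative implementation written directly from the formulas above; note that this variant of g(x) sums over x_3, ..., x_n as given in the table, unlike the original DTLZ report, which uses the last n - m + 1 variables):

```python
import numpy as np

# g(x) exactly as written in Table 1: distance function over x_3 .. x_n.
def g(x):
    tail = x[2:]
    n = len(x)
    return 100 * (n - 2) + 100 * np.sum((tail - 0.5) ** 2
                                        - np.cos(20 * np.pi * (tail - 0.5)))

def dtlz1(x, m):
    gx = g(x)
    f = np.empty(m)
    for i in range(m):                     # f_1 .. f_m from Table 1
        prod = np.prod(x[: m - 1 - i])     # x_1 * ... * x_{m-1-i}
        if i > 0:
            prod *= 1 - x[m - 1 - i]
        f[i] = 0.5 * prod * (1 + gx)
    return f

def dtlz3(x, m):
    gx = g(x)
    theta = 0.5 * np.pi * x
    f = np.empty(m)
    for i in range(m):
        prod = np.prod(np.cos(theta[: m - 1 - i]))
        if i > 0:
            prod *= np.sin(theta[m - 1 - i])
        f[i] = prod * (1 + gx)
    return f

# On the true PF (all tail variables at 0.5, so g = 0) the DTLZ1
# objectives sum to 0.5 and the DTLZ3 objectives have unit 2-norm.
x = np.array([0.3, 0.7, 0.5, 0.5, 0.5, 0.5, 0.5])
print(dtlz1(x, 3).sum())            # ≈ 0.5
print(np.linalg.norm(dtlz3(x, 3)))  # ≈ 1.0
```

These identities (objective sum 0.5 on the DTLZ1 PF, unit norm on the DTLZ3 PF) are what make the two problems convenient for measuring convergence.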
Definition 1 (Pareto dominance).

Pareto dominance between solutions x, z ∈ Ω is defined as follows. If

(2) f_i(x) ≤ f_i(z) for all i ∈ {1, 2, ..., m} and f_i(x) < f_i(z) for some i ∈ {1, 2, ..., m}

are satisfied, then x dominates (Pareto dominates) z (denoted by x ≺ z).
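Definition 1 translates directly into a small predicate (an illustrative sketch assuming minimization, as in formulation (1)):

```python
# x dominates z iff x is no worse in every objective and strictly
# better in at least one (minimization assumed).
def dominates(fx, fz):
    return (all(a <= b for a, b in zip(fx, fz))
            and any(a < b for a, b in zip(fx, fz)))

print(dominates([1, 2, 3], [1, 3, 3]))  # True
print(dominates([1, 2, 3], [1, 2, 3]))  # False: equal vectors do not dominate
print(dominates([1, 4, 3], [2, 3, 3]))  # False: the vectors are incomparable
```

The third call illustrates why the relation loses discriminating power in high dimensions: with many objectives, almost every pair of random vectors is incomparable.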

For many-objective problems, determining whether a solution is nondominated under Pareto dominance requires comparing it with all other solutions, objective by objective, so many comparisons (and thus a large amount of computation) are needed. Moreover, Pareto dominance yields a large number of nondominated solutions. In this work, a new ranking method is designed that decides whether a solution is nondominated by using only a small number of virtual objective vectors. For each solution x, m points in the objective space are generated on the surface

(3) f_1^P + f_2^P + ... + f_m^P = R^P,

where P > 0 controls the shape and size of the surface, and R = min{ ||F(x)||_P : x ∈ POP }, where POP is the set of current solutions. Before the m points are generated, m integer vectors D_i (i = 1, ..., m) are first generated. The ith vector D_i = (d_1, ..., d_m) is obtained by solving

(4) min Σ_{j≠i} (d_j - H·o_j)^2
    s.t. Σ_{j=1}^{m} d_j = H, d_j ∈ {0, 1, ..., H}, j = 1, ..., m,
         d_i = 0 if o_i = 0, and H·o_i - 1 ≤ d_i < H·o_i otherwise,

where o = (o_1, ..., o_m) = F(x) / Σ_{i=1}^{m} f_i(x) and H is an integer. After D_i is obtained, the ith virtual vector is generated as

(5) V_i = R · D_i / ||D_i||_P.

Note that these m virtual vectors all have the same norm, equal to the smallest P-norm among all obtained solutions, and they are distributed evenly around the objective vector of the solution x. Moreover, for any virtual vector V_i (i = 1, ..., m), its ith component is not larger than f_i(x) because the ith component of D_i is not larger than H·o_i; this makes V_i more likely to dominate F(x) than a virtual vector whose ith component of D_i exceeds H·o_i. Thus, if F(x) is not dominated by any of these m virtual vectors, the solution x is regarded as a nondominated solution (it may in fact be dominated, but it is more likely to be nondominated); otherwise, x is regarded as a dominated solution.

The parameter H plays an important role in this method because it controls the convergence and diversity of solutions. The larger the value of H, the better the convergence of solutions. However, if the surface differs greatly from the true PF, a large value of H leads to poor diversity. Therefore, the value of H should trade off the convergence and diversity of solutions.

The proposed ranking method has the following advantages.

For each solution x of the set POP, whose size is N, this method only needs to make comparisons among m + 1 points instead of N points (m ≪ N), which greatly reduces the time consumption.

The value of the ith component of V_i is not more than f_i(x), and the angle between V_i and F(x) is very small, which makes F(x) easily dominated by V_i. Thus, the number of nondominated solutions generated by the method is not too large, and the selection pressure can be enhanced.

In this method, at most m points need to be stored additionally, which takes only a small amount of storage space.

This method can balance the convergence and diversity by adjusting the parameter H.

3. Algorithm

In this section, a many-objective optimization evolutionary algorithm is presented. The algorithm uses the DE [24, 25] operator to generate offspring and the (μ + λ) selection scheme to generate the next population. DE is a good optimizer for continuous optimization. Trial vector generation comprises two operators: mutation and crossover. The mutation operator used in our algorithm is performed as follows:

(6) u_i^t = x_{r1}^t + L (x_{r2}^t - x_{r3}^t),

where t is the generation number; x_{r1}^t and x_{r2}^t are two random nondominated solutions from the current population POP(t); x_{r3}^t is a random solution from POP(t) other than x_{r1}^t and x_{r2}^t; u_i^t is the ith mutant offspring; and L is the scale factor, which usually lies in (0, 1].

The crossover operator is described as follows:

(7) v_{i,j}^t = { u_{i,j}^t, if rand ≤ CR and l_j ≤ u_{i,j}^t ≤ u_j,
                 x_{i,j}^t, otherwise, }

where CR is the crossover rate, which usually lies in (0, 1); l_j and u_j are the lower and upper bounds of the variable x_j, respectively; and v_i^t = (v_{i,1}^t, ..., v_{i,n}^t) is the offspring.

In classical DE, the control parameters F and CR strongly affect performance, and different settings influence the quality of the offspring generated by mutation and crossover. In this paper, a self-adjusted parameter control strategy is presented to balance exploration and exploitation. For each solution x^t, the two control parameters used to generate its offspring are determined as follows:

(8) CR = { 0.5, if rand < 0.5,
           e^{-2t/g_max}, otherwise, }
    L  = { 0.5, if rand < 0.5,
           e^{-2t/g_max}, otherwise, }

where g_max is the maximal generation.
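Equations (6)–(8) can be sketched as follows (an illustrative, simplified implementation; the selection of x_{r1}, x_{r2} from the nondominated set and of x_{r3} from the rest of the population is left to the caller):

```python
import numpy as np

rng = np.random.default_rng(0)

# DE/rand/1 mutation of Eq. (6).
def mutate(x_r1, x_r2, x_r3, L):
    return x_r1 + L * (x_r2 - x_r3)

# Binomial crossover of Eq. (7): take the mutant component only when
# rand <= CR AND the mutant component stays inside [lo, hi].
def crossover(x, u, CR, lo, hi):
    take = (rng.random(len(x)) <= CR) & (u >= lo) & (u <= hi)
    return np.where(take, u, x)

# Self-adjusted control parameter of Eq. (8): 0.5 with probability 0.5,
# otherwise exp(-2t/gmax), which decays as the search proceeds.
def control_parameter(t, gmax):
    return 0.5 if rng.random() < 0.5 else np.exp(-2 * t / gmax)
```

With CR = 1.0, the bound check alone decides which mutant components survive, so out-of-range components fall back to the parent rather than being clipped.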

Based on the above methods, a new multiobjective differential evolution algorithm based on the new ranking method (MODER) is proposed for many-objective optimization problems; its steps are described as follows.

Step 1 (initialization).

Randomly generate an initial population POP(t) whose size is N and set t = 0.

Step 2 (fitness).

Solutions of POP(t) are first divided into two sets (the set of nondominated solutions and the set of dominated solutions) by the proposed ranking method (Algorithm 1). For each set, the fitness value of each solution is computed by the crowding distance. Then the N better solutions are selected from the population POP(t) and placed into the population POP. In this work, binary tournament selection is used.
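The crowding distance used here is the standard NSGA-II measure [3]; a minimal sketch:

```python
import numpy as np

# Crowding distance (Deb et al., NSGA-II): for each objective, sort the
# set and accumulate the normalized gap between each solution's two
# neighbours; boundary solutions get an infinite distance.
def crowding_distance(F):
    F = np.asarray(F, dtype=float)
    N, m = F.shape
    d = np.zeros(N)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        d[order[0]] = d[order[-1]] = np.inf
        if span > 0:
            d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d
```

Solutions with a larger crowding distance lie in sparser regions and are therefore preferred by the binary tournament, which is how diversity is maintained inside each of the two sets.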

Algorithm 1: The ranking algorithm.

(1) Input: the population POP
(2) Mark each solution of POP as nondominated
(3) Determine the values of R and P
(4) for each solution x ∈ POP do
(5)   i = 1; while i ≤ m
(6)     Obtain the ith vector D_i by solving formula (4)
(7)     Set V_i = R · D_i / ||D_i||_P
(8)     if V_i dominates F(x) then
(9)       Mark x as dominated and set i = m + 1
(10)    else
(11)      i = i + 1
(12)    end if
(13)  end while
(14) end for
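Algorithm 1 can be sketched in code as follows (an illustrative implementation that solves the integer program (4) by enumeration, which is feasible only for small H and m; objective values are assumed positive so that o = F(x)/Σ f_i(x) is well defined):

```python
import itertools
import numpy as np

def pnorm(v, P):
    return np.sum(np.abs(v) ** P) ** (1.0 / P)

# Returns a boolean flag per solution: True = treated as nondominated.
def rank(POP_F, H=2, P=2):
    POP_F = np.asarray(POP_F, dtype=float)
    N, m = POP_F.shape
    R = min(pnorm(f, P) for f in POP_F)       # smallest P-norm in POP
    nondominated = np.ones(N, dtype=bool)
    # All integer vectors d with sum(d) = H and d_j in {0, ..., H}.
    grids = [d for d in itertools.product(range(H + 1), repeat=m)
             if sum(d) == H]
    for s, f in enumerate(POP_F):
        o = f / f.sum()
        for i in range(m):
            # Feasible set of Eq. (4): constraint on the ith component.
            feas = [d for d in grids
                    if (d[i] == 0 if o[i] == 0
                        else H * o[i] - 1 <= d[i] < H * o[i])]
            if not feas:
                continue
            D = min(feas, key=lambda d: sum((d[j] - H * o[j]) ** 2
                                            for j in range(m) if j != i))
            V = R * np.asarray(D, float) / pnorm(np.asarray(D, float), P)
            # V_i dominates F(x)  ->  x is treated as dominated.
            if np.all(V <= f) and np.any(V < f):
                nondominated[s] = False
                break
    return nondominated
```

For example, with the population {(1, 0), (0, 1), (3, 3)} and H = 2, P = 2, the first two points survive while (3, 3) is dominated by one of its virtual vectors.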

Step 3 (new solutions).

Apply (6)–(8) or simulated binary crossover  to the parent population POP to generate offspring. The set of all these offspring is denoted by O and its size is N.

Step 4 (generate POP(t + 1)).

Solutions of POP(t) ∪ O are first divided into two sets by the proposed ranking method. For each set, the fitness value of each solution is computed by the crowding distance. Then the N better solutions are selected from POP(t) ∪ O and placed into the population POP(t + 1). Let t = t + 1.

Step 5 (termination).

If the stop condition is satisfied, stop; otherwise, go to Step 2.

4. Experimental Studies

In order to validate our algorithm, MODER is compared with MOEA/D, NSGAII, and NSGAII based on contracting or expanding the solutions' dominance area (denoted by NSGAII-CE) on DTLZ1 and DTLZ3 of the DTLZ family, each with 5–50 objectives.

4.1. Experimental Setting

The experiments are carried out on a personal computer (Intel Xeon CPU 2.53 GHz, 3.98 GB RAM). The solutions are all coded as real vectors. The polynomial mutation operator and differential evolution (DE) are applied directly to the real vectors in three algorithms, that is, MODER, MOEA/D, and NSGAII-CE. The distribution index and mutation probability of polynomial mutation are set to 20 and 1/n, respectively. The crossover rate and scaling factor of the DE operator are set to 1.0 and 0.5, respectively. The aggregation function of MOEA/D is the Tchebycheff approach, and the weight vectors of MOEA/D are generated by the uniform design method. The neighborhood size of MOEA/D is set to 20 for all test problems. The parameter H of the ranking method is set to 2, and in each generation the value of P is determined self-adaptively by the following expression:

(9) P = { k, if R(1) < R(k)(1 + m)^2,
          1, otherwise, }

where R(k) = min{ ||F(x)||_k : x ∈ POP } and m is the number of objectives. In this algorithm, we set k = 2. The parameter of CE is set to 0.25 for NSGAII-CE. Twenty independent runs are performed with a population size of 100 for all instances. The maximal number of function evaluations is set to 100000 for all test problems. The values of the remaining parameters are the same as in the corresponding papers.

4.2. Experimental Measures

In order to compare the performance of the four algorithms quantitatively, three widely used tools are adopted: generational distance (GD), inverted generational distance (IGD), and the Wilcoxon rank-sum test. GD measures how far the obtained Pareto front is from the true Pareto front, which indicates whether the algorithm converges to the true PF; if GD equals 0, all points of the obtained PF belong to the true PF. IGD measures how far the true PF is from the obtained PF, which indicates whether the points of the obtained PF are evenly distributed over the true PF; if IGD equals 0, the obtained PF contains every point of the true PF. Used together, GD and IGD show whether the solutions are distributed over the entire PF or concentrated in some regions of it. In our experiments, 100000 points uniformly sampled on the true PF of each test instance, generated by the uniform design method, are used to calculate the GD and IGD of the solutions obtained by an algorithm.
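The two metrics can be sketched as follows (one common form, using the mean Euclidean distance to the nearest point of the reference set; some papers instead use the root of the mean squared distance for GD):

```python
import numpy as np

# GD: average distance from each obtained point to its nearest true-PF
# point. IGD: the same with the roles of the two sets exchanged.
def gd(obtained, true_pf):
    O = np.asarray(obtained, dtype=float)
    T = np.asarray(true_pf, dtype=float)
    # Pairwise distance matrix via broadcasting: shape (|O|, |T|).
    d = np.linalg.norm(O[:, None, :] - T[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(obtained, true_pf):
    return gd(true_pf, obtained)
```

A single obtained point lying exactly on the PF gives GD = 0 but a large IGD, which is precisely the "converged but clustered" case the two metrics are used together to detect.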

The Wilcoxon rank-sum test is used to statistically compare the mean IGD and GD of the algorithms. It tests whether the performance of MODER on each test problem is better than ("+"), the same as ("="), or worse than ("−") that of the compared algorithm at a significance level of 0.05 (two-tailed). These results are given in Tables 2 and 3.

Table 2: IGD and GD obtained by MODER, MOEA/D, NSGAII, and NSGAII-CE on DTLZ1.

IGD GD
Mean Std. Mean Std.
DTLZ1-5 MODER 0.0719 0.0067 0.0311 0.0025
MOEA/D 0.0756(+) 0.0074 0.0317(+) 0.0035
NSGAII 23.789(+) 15.261 233.69(+) 20.501
NSGAII-CE 0.2989(+) 0.1023 0.0124(−) 0.0202

DTLZ1-10 MODER 0.0921 0.0146 0.1160 0.0075
MOEA/D 0.1045(+) 0.0081 0.1141(=) 0.0242
NSGAII 25.065(+) 6.9643 343.45(+) 10.444
NSGAII-CE 0.3246(+) 0.0969 0.0055(−) 0.0079

DTLZ1-15 MODER 0.0968 0.0063 0.1925 0.0103
MOEA/D 0.1141(+) 0.0101 0.1340(−) 0.0198
NSGAII 15.866(+) 11.499 222.14(+) 5.2899
NSGAII-CE 0.4152(+) 0.0942 0.5510(+) 0.0457

DTLZ1-20 MODER 0.1523 0.0613 0.2289 0.0056
MOEA/D 0.1162(−) 0.0176 0.1542(−) 0.0133
NSGAII 32.596(+) 13.770 373.34(+) 5.1034
NSGAII-CE 0.4197(+) 0.0604 0.1218(−) 0.1244

DTLZ1-25 MODER 0.2030 0.0585 0.2679 0.0302
MOEA/D 0.1599(−) 0.0881 0.2165(−) 0.1604
NSGAII 37.273(+) 21.605 389.85(+) 6.1814
NSGAII-CE 0.4694(+) 0.0558 0.1237(−) 0.1144

DTLZ1-50 MODER 0.3180 0.0392 0.2865 0.0845
MOEA/D 0.1623(−) 0.0142 0.1570(−) 0.0290
NSGAII 33.276(+) 13.482 364.25(+) 9.7122
NSGAII-CE 1.1990(+) 2.3179 1.5583(+) 3.5537

“+” means that MODER outperforms its competitor algorithm, “−” means that MODER is outperformed by its competitor algorithm, and “=” means that the competitor algorithm has the same performance as MODER.

Table 3: IGD and GD obtained by MODER, MOEA/D, NSGAII, and NSGAII-CE on DTLZ3.

IGD GD
Mean Std. Mean Std.
DTLZ3-5 MODER 0.231 0.0168 0.0837 0.0040
MOEA/D 0.2499(+) 0.0667 0.0869(+) 0.0071
NSGAII 0.6093(+) 0.1186 0.8738(+) 0.1039
NSGAII-CE 1.0903(+) 0.2406 0.0913(+) 0.0914

DTLZ3-10 MODER 0.4099 0.0171 0.3130 0.0126
MOEA/D 0.4264(+) 0.0585 0.3219(+) 0.0182
NSGAII 65.508(+) 14.659 812.75(+) 23.198
NSGAII-CE 1.0413(+) 0.3021 152.583(+) 234.71

DTLZ3-15 MODER 0.4723 0.0106 0.5662 0.0436
MOEA/D 0.6953(+) 0.0585 0.7038(+) 0.0703
NSGAII 85.563(+) 18.166 937.31(+) 22.914
NSGAII-CE 1.1584(+) 0.3419 20.217(+) 53.426

DTLZ3-20 MODER 0.5642 0.1296 0.8176 0.0746
MOEA/D 0.7542(+) 0.0233 1.0025(+) 0.0512
NSGAII 36.307(+) 17.207 573.32(+) 11.818
NSGAII-CE 1.2466(+) 0.3100 0.9260(+) 0.5774

DTLZ3-25 MODER 0.5292 0.0340 0.9388 0.0508
MOEA/D 0.7894(+) 0.1075 1.1898(+) 0.0433
NSGAII 90.042(+) 111.47 931.18(+) 14.475
NSGAII-CE 1.4141(+) 0.0002 6.7310(+) 15.187

DTLZ3-50 MODER 0.7507 0.0337 0.5076 0.0048
MOEA/D 0.9589(+) 0.2665 1.1179(+) 0.4531
NSGAII 71.523(+) 31.877 703.20(+) 284.21
NSGAII-CE 0.9211(+) 0.4885 1.9211(+) 0.4885

“+” means that MODER outperforms its competitor algorithm, “−” means that MODER is outperformed by its competitor algorithm, and “=” means that the competitor algorithm has the same performance as MODER.

4.3. Comparisons of MODER with MOEA/D, NSGAII, and NSGAII-CE

In this section, some simulation results and comparisons that demonstrate the potential of MODER are presented, and the comparisons mainly focus on two aspects: (1) the ability of the ranking method to improve the selection pressure and (2) the ability of the ranking method to maintain the diversity of solutions.

Tables 2 and 3 show the mean and standard deviation of the GD and IGD metrics obtained by the four algorithms on the test problems with 5–50 objectives. DTLZ1-5 denotes DTLZ1 with 5 objectives. It can be seen from Tables 2 and 3 that, for the IGD metric, MODER outperforms NSGAII and NSGAII-CE on all twelve test problems and outperforms MOEA/D on nine test problems. For the GD metric, MODER outperforms NSGAII on all twelve test problems, outperforms NSGAII-CE on eight test problems, outperforms MOEA/D on seven test problems, and performs worse than NSGAII-CE and MOEA/D on four test problems each. These results indicate that MODER overall outperforms the three compared algorithms.

The results in Tables 2 and 3 also show that the mean GD and IGD values obtained by NSGAII are the largest among the four algorithms, which indicates that NSGAII has the worst convergence to the true PF and that Pareto dominance provides little selection pressure toward the true PF. For DTLZ1 with 20–50 objectives, the mean IGD values obtained by MODER are slightly larger than those of MOEA/D, which indicates that MOEA/D maintains the diversity of solutions better than MODER on these problems. However, for DTLZ1 with 5–15 objectives, the mean IGD values obtained by MODER are smaller than those of MOEA/D, and for DTLZ1 with 5–50 objectives they are smaller than those of NSGAII-CE, which indicates that MODER maintains the diversity of the obtained solutions well and better than NSGAII-CE. This also shows that the new ranking method maintains the diversity of solutions better than CE. For the convergence metric GD, the mean values obtained by MODER are smaller than 0.29 for DTLZ1 with 5–50 objectives and only slightly larger than those of MOEA/D. MOEA/D decomposes a multiobjective problem (MOP) into a number of scalar optimization subproblems and solves them simultaneously; the objective of each subproblem is an aggregation of all the objectives of the MOP, and a single objective is more likely to converge to its optimal solution, so its degree of convergence is relatively high. These results show that the solutions obtained by MODER converge well to the true PF and that the new ranking method enhances the selection pressure toward the true PF.
For DTLZ3 with 5–50 objectives, the mean IGD values obtained by MODER are smaller than those of MOEA/D and NSGAII-CE, which shows that the diversity of the solutions obtained by MODER is better; the mean GD values obtained by MODER are also smaller than those of MOEA/D and NSGAII-CE, which indicates that the convergence of the solutions obtained by MODER is better. These results imply that MODER can converge to the true PF while maintaining the diversity of the obtained solutions, and that its convergence and diversity remain good even in high-dimensional objective spaces. In other words, the new ranking method not only enhances the selection pressure toward the true PF but also maintains the diversity of solutions well.

5. Conclusions

In this work, a new ranking method for many-objective optimization problems is proposed to decrease the number of nondominated solutions and to reduce the number of comparisons needed to verify them. For each solution x, the approach generates a small number of virtual vectors which are evenly distributed around the objective vector of x, and it determines whether x is nondominated by comparing F(x) only with these virtual vectors. Thus, the method not only uses a small number of comparisons when determining the nondominated solutions but also contracts the nondominated area of a solution to increase the selection pressure toward the true PF. Moreover, a multiobjective adaptive differential evolution algorithm based on the new ranking method is designed for many-objective problems. The algorithm was tested on problems with up to 50 objectives. Compared with existing algorithms, the simulation results show that the proposed algorithm maintains the diversity of the obtained solutions and has good convergence ability on problems with a large number of objectives. Future work includes extending the experiments to other problems and searching for a crossover operator that is more suitable for many-objective problems.

Conflict of Interests

The authors have declared that no conflict of interests exists.

References

[1] C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation), Springer, New York, NY, USA, 2006.
[2] I. Kokshenev and A. Padua Braga, "An efficient multi-objective learning algorithm for RBF neural network," Neurocomputing, vol. 73, no. 16–18, pp. 2799–2808, 2010.
[3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[4] A. Nebro, J. Durillo, J. Garcia-Nieto, C. A. C. Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), pp. 66–73, 2009.
[5] N. Beume, B. Naujoks, and M. Emmerich, "SMS-EMOA: multiobjective selection based on dominated hypervolume," European Journal of Operational Research, vol. 181, no. 3, pp. 1653–1669, 2007.
[6] R. C. Purshouse and P. J. Fleming, "On the evolutionary optimization of many conflicting objectives," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 770–784, 2007.
[7] T. Friedrich, C. Horoba, and F. Neumann, "Multiplicative approximations and the hypervolume indicator," in Proceedings of the 11th Annual Genetic and Evolutionary Computation Conference (GECCO '09), pp. 571–578, July 2009.
[8] A. Auger, J. Bader, D. Brockhoff, and E. Zitzler, "Theory of the hypervolume indicator: optimal μ-distributions and the choice of the reference point," in Proceedings of the 10th ACM SIGEVO Conference on Foundations of Genetic Algorithms, pp. 87–102, 2009.
[9] E. Zitzler and S. Künzli, "Indicator-based selection in multiobjective search," in Parallel Problem Solving from Nature—PPSN VIII, vol. 3242 of Lecture Notes in Computer Science, pp. 832–842, Springer, Berlin, Germany, 2004.
[10] T. Wagner, N. Beume, and B. Naujoks, "Pareto-, aggregation-, and indicator-based methods in many-objective optimization," in Proceedings of the Evolutionary Multi-Criterion Optimization (EMO '07), vol. 4403 of Lecture Notes in Computer Science, pp. 742–756, Springer, 2007.
[11] J. Bader and E. Zitzler, "HypE: an algorithm for fast hypervolume-based many-objective optimization," Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011.
[12] E. J. Hughes, "Evolutionary many-objective optimisation: many once or one many?" in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), vol. 1, pp. 222–227, Edinburgh, Scotland, September 2005.
[13] E. J. Hughes, "MSOPS-II: a general-purpose many-objective optimiser," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 3944–3951, September 2007.
[14] H. Ishibuchi and Y. Nojima, "Optimization of scalarizing functions through evolutionary multiobjective optimization," in Evolutionary Multi-Criterion Optimization, vol. 4403 of Lecture Notes in Computer Science, pp. 51–65, Springer, Berlin, Germany, 2007.
[15] Q. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[16] H. Ishibuchi, Y. Sakane, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization by NSGA-II and MOEA/D with large populations," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 1820–1825, October 2009.
[17] H. Li and Q. Zhang, "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009.
[18] Y. Tan, Y. Jiao, H. Li, and X. Wang, "MOEA/D + uniform design: a new version of MOEA/D for optimization problems with many objectives," Computers & Operations Research, vol. 40, no. 6, pp. 1648–1660, 2013.
[19] P. J. Bentley and J. P. Wakefield, "Finding acceptable solutions in the Pareto-optimal range using multiobjective genetic algorithms," in Proceedings of the World Conference on Soft Computing in Design and Manufacturing, pp. 231–240, 1997.
[20] K. Ikeda, H. Kita, and S. Kobayashi, "Failure of Pareto-based MOEAs: does non-dominated really mean near to optimal?" in Proceedings of the Congress on Evolutionary Computation, vol. 2, pp. 957–962, Seoul, Korea, May 2001.
[21] M. Farina and P. Amato, "A fuzzy definition of 'optimality' for many-criteria optimization problems," IEEE Transactions on Systems, Man, and Cybernetics A: Systems and Humans, vol. 34, no. 3, pp. 315–326, 2004.
[22] H. Sato, H. E. Aguirre, and K. Tanaka, "Controlling dominance area of solutions and its impact on the performance of MOEAs," in Evolutionary Multi-Criterion Optimization, vol. 4403 of Lecture Notes in Computer Science, pp. 5–20, Springer, Heidelberg, Germany, 2007.
[23] D. A. van Veldhuizen, Multiobjective evolutionary algorithms: classifications, analyses, and new innovations [Ph.D. thesis], Department of Electrical & Computer Engineering, Graduate School of Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA, 1999.
[24] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[25] M. Ali, M. Pant, and A. Abraham, "A modified differential evolution algorithm and its application to engineering problems," in Proceedings of the International Conference on Soft Computing and Pattern Recognition (SoCPaR '09), pp. 196–201, December 2009.
[26] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[27] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," Tech. Rep. 112, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, 2001.
[28] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[29] C. A. C. Coello and N. C. Cortés, "Solving multiobjective optimization problems using an artificial immune system," Genetic Programming and Evolvable Machines, vol. 6, no. 2, pp. 163–190, 2005.
[30] S. Robert, J. Torrie, and D. Dickey, Principles and Procedures of Statistics: A Biometrical Approach, McGraw-Hill, New York, NY, USA, 1997.
[31] K. Fang and Y. Wang, Number-Theoretic Method in Statistics, Chapman and Hall, London, UK, 1994.