Discrete Dynamics in Nature and Society, Hindawi Publishing Corporation, 2011. Article ID 569784, doi:10.1155/2011/569784. ISSN 1026-0226 (print), 1607-887X (online).

Research Article

Solving Multiobjective Optimization Problems Using Artificial Bee Colony Algorithm

Wenping Zou,1,2 Yunlong Zhu,1 Hanning Chen,1 Beiwei Zhang,1,2 and Binggen Zhang1

1 Key Laboratory of Industrial Informatics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2 Graduate School of the Chinese Academy of Sciences, Beijing 100039, China

Copyright © 2011 Wenping Zou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Multiobjective optimization has been a difficult problem and a focus of research in science and engineering. This paper presents a novel algorithm based on the artificial bee colony (ABC) algorithm to deal with multiobjective optimization problems. ABC is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm; it uses fewer control parameters and can be efficiently used for solving multimodal and multidimensional optimization problems. Our algorithm uses the concept of Pareto dominance to determine the flight direction of a bee, and it maintains the nondominated solution vectors found so far in an external archive. The proposed algorithm is validated on standard test problems, and simulation results show that the proposed approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.

1. Introduction

In the real world, many optimization problems have to deal with the simultaneous optimization of two or more objectives. In some cases, however, these objectives are in contradiction with each other. While in single-objective optimization the optimal solution is usually clearly defined, this does not hold for multiobjective optimization problems. Instead of a single optimum, there is rather a set of alternative trade-offs, generally known as Pareto optimal solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered.

In the 1950s, in the area of operational research, a variety of methods were developed for the solution of multiobjective optimization problems (MOPs). Some of the most representative classical methods are linear programming, the weighted-sum method, and the goal programming method . Over the past two decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed [2, 3]. A few of these algorithms include the nondominated sorting genetic algorithm II (NSGA-II) , the strength Pareto evolutionary algorithm 2 (SPEA2) , and the multiobjective particle swarm optimization (MOPSO) proposed by Coello and Lechuga . The success of MOEAs is due to their ability to find a set of representative Pareto optimal solutions in a single run.

The artificial bee colony (ABC) algorithm is a new swarm intelligence algorithm that was first introduced by Karaboga at Erciyes University, Turkey, in 2005 , and its performance was analyzed in 2007 . The ABC algorithm imitates the behavior of real bees in finding food sources and sharing the information with other bees. Because the ABC algorithm is simple in concept, easy to implement, and has fewer control parameters, it has been widely used in many fields. Building on these advantages, we present a proposal, called “Multiobjective Artificial Bee Colony” (MOABC), which enables the ABC algorithm to deal with multiobjective optimization problems. We aim to present an efficient and simple algorithm for multiobjective optimization while filling the research gap of the ABC algorithm in the field of multiobjective problems. Our MOABC algorithm is based on a nondominated sorting strategy. We use the concept of Pareto dominance to determine which solution vector is better and use an external archive to maintain the nondominated solution vectors that have been found. We also use the comprehensive learning strategy inspired by the comprehensive learning particle swarm optimizer (CLPSO)  to ensure the diversity of the population. In order to evaluate the performance of MOABC, we compared it with NSGA-II and MOCLPSO  on a set of well-known benchmark functions. Seven of these test functions (SCH, FON, ZDT1 to ZDT4, and ZDT6) have two objectives, while the other four (DTLZ1 to DTLZ3 and DTLZ6) have three objectives. Meanwhile, a version of MOABC with the nondominated sorting strategy only is also compared, to illustrate the effect of the comprehensive learning strategy on population diversity. This variant is called the nondominated sorting artificial bee colony (NSABC). From the simulation results, the MOABC algorithm shows remarkable performance improvement over the other algorithms on all benchmark functions.

The remainder of the paper is organized as follows. Section 2 gives relevant MOP concepts. In Section 3, we will introduce the ABC algorithm. In Section 4, the details of MOABC algorithm will be given. Section 5 presents a comparative study of the proposed MOABC with NSABC, NSGA-II, and MOCLPSO on a number of benchmark problems, and we will draw conclusions from the study. Section 6 summarizes our discussion.

2. Related Concepts

Definition 1 (multiobjective optimization problem).

A multiobjective optimization problem (MOP) can be stated as follows:
$$\text{Minimize } F(x) = \bigl(f_1(x), \dots, f_m(x)\bigr) \quad \text{subject to } x \in X, \tag{2.1}$$
where x = (x_1, …, x_n) is called the decision (variable) vector, X ⊆ R^n is the decision (variable) space, R^m is the objective space, and F : X → R^m consists of m (m ≥ 2) real-valued objective functions. F(x) is the objective vector. We call problem (2.1) a MOP.

Definition 2 (Pareto optimal).

For (2.1), let a = (a_1, …, a_m), b = (b_1, …, b_m) ∈ R^m be two vectors; a is said to dominate b if a_i ≤ b_i for all i = 1, …, m and a ≠ b. A point x* ∈ X is called (globally) Pareto optimal if there is no x ∈ X such that F(x) dominates F(x*). Pareto optimal solutions are also called efficient, nondominated, and noninferior solutions. The set of all Pareto optimal solutions, denoted by PS, is called the Pareto set. The set of all Pareto objective vectors, PF = {F(x) ∈ R^m | x ∈ PS}, is called the Pareto front [11, 12]. An illustrative example can be seen in Figure 1.
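The dominance relation of Definition 2 can be sketched in Python (a minimal illustration for minimization; the function name is ours):

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization):
    a_i <= b_i for every objective i, and a is strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))
```

For example, (1, 2) dominates (2, 3), but (1, 3) and (2, 2) are mutually nondominated.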

Illustrative example of Pareto optimality in objective space (a) and the possible relations of solutions in objective space (b).

3. The Original ABC Algorithm

The artificial bee colony algorithm is a new population-based metaheuristic approach, initially proposed by Karaboga  and Karaboga and Basturk  and further developed by Karaboga and Basturk  and Karaboga and Akay . It has been used in various complex problems. The algorithm simulates the intelligent foraging behavior of honey bee swarms. The algorithm is very simple and robust. In the ABC algorithm, the colony of artificial bees is classified into three categories: employed bees, onlookers, and scouts. Employed bees are associated with a particular food source that they are currently exploiting or are “employed” at. They carry with them information about this particular source and share the information with onlookers. Onlooker bees are those bees that wait on the dance area in the hive for the information shared by the employed bees about their food sources and then make a decision to choose a food source. A bee carrying out a random search is called a scout. In the ABC algorithm, the first half of the colony consists of the employed artificial bees, and the second half includes the onlookers. For every food source, there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. The employed bee whose food source has been exhausted becomes a scout. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. Onlookers are placed on the food sources by using a probability-based selection process: as the nectar amount of a food source increases, the probability with which the food source is preferred by onlookers increases, too [7, 8]. The main steps of the algorithm are given in Algorithm 1.

Algorithm 1: Pseudocode for ABC algorithm.

Main steps of the ABC algorithm

(1) cycle = 1

(2) Initialize the food source positions (solutions) xi, i=1,,SN

(3) Evaluate the nectar amount (fitness function fiti) of food sources

(4) repeat

(5)  Employed Bees’ Phase

For each employed bee

Produce new food source positions vi

Calculate the value fiti

If the new position is better than the previous position

Then memorize the new position and forget the old one.

End For.

(6)  Calculate the probability values pi for the solutions.

(7)  Onlooker Bees’ Phase

For each onlooker bee

Chooses a food source depending on pi

Produce new food source positions vi

Calculate the value fiti

If the new position is better than the previous position

Then memorize the new position and forget the old one.

End For

(8)  Scout Bee Phase

If an employed bee becomes a scout

Then replace it with a new random food source position

(9)  Memorize the best solution achieved so far

(10)  cycle = cycle + 1.

(11) until cycle = Maximum Cycle Number

In the initialization phase, the ABC algorithm generates SN randomly distributed initial food source positions (solutions), where SN denotes the number of employed bees, which equals the number of onlooker bees. Each solution xi (i = 1, 2, …, SN) is an n-dimensional vector, where n is the number of optimization parameters. The nectar amount fiti of each food source is then evaluated; in the ABC algorithm, the nectar amount is the value of the benchmark function.

In the employed bees’ phase, each employed bee finds a new food source vi in the neighborhood of its current source xi. The new food source is calculated using the following expression:
$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}),$$
where k ∈ {1, 2, …, SN} and j ∈ {1, 2, …, n} are randomly chosen indexes and k ≠ i. φij is a random number in [-1, 1]; it controls the production of a neighboring food source position around xij. The employed bee then compares the new source against the current solution and memorizes the better one by means of a greedy selection mechanism.

In the onlooker bees’ phase, each onlooker chooses a food source with a probability that is related to the nectar amount (fitness) of the food sources shared by the employed bees. The probability is calculated using the following expression:
$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n}.$$

In the scout bee phase, if a food source cannot be improved within a predetermined number of cycles, called the “limit”, it is removed from the population, and the employed bee of that food source becomes a scout. The scout bee finds a new random food source position using
$$x_{ij} = x_{\min,j} + \text{rand}[0,1]\,(x_{\max,j} - x_{\min,j}),$$
where x_min,j and x_max,j are the lower and upper bounds of parameter j, respectively.

These steps are repeated through a predetermined number of cycles, called maximum cycle number (MCN), or until a termination criterion is satisfied [7, 8, 15].
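The phases above can be sketched compactly in Python. This is an illustrative single-objective sketch under our own parameter choices (SN, limit, maximum cycle number) and function names, not the paper's implementation:

```python
import random

def abc_minimize(f, n, bounds, SN=20, limit=20, max_cycles=200):
    """Minimal artificial bee colony sketch for minimizing f over n dimensions
    with box bounds (lo, hi). Names and defaults are illustrative."""
    lo, hi = bounds

    def rand_source():
        return [lo + random.random() * (hi - lo) for _ in range(n)]

    sources = [rand_source() for _ in range(SN)]
    values = [f(x) for x in sources]
    trials = [0] * SN

    def fitness(v):  # nectar amount: higher is better
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def try_improve(i):
        # v_ij = x_ij + phi * (x_ij - x_kj), one random dimension j, k != i
        k = random.choice([s for s in range(SN) if s != i])
        j = random.randrange(n)
        v = sources[i][:]
        phi = random.uniform(-1.0, 1.0)
        v[j] = min(hi, max(lo, v[j] + phi * (v[j] - sources[k][j])))
        fv = f(v)
        if fv < values[i]:  # greedy selection
            sources[i], values[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(SN):              # employed bees' phase
            try_improve(i)
        fits = [fitness(v) for v in values]
        total = sum(fits)
        for _ in range(SN):              # onlooker bees' phase: roulette on p_i
            r, acc, chosen = random.random() * total, 0.0, SN - 1
            for idx, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    chosen = idx
                    break
            try_improve(chosen)
        worst = max(range(SN), key=lambda i: trials[i])
        if trials[worst] > limit:        # scout bee phase: random restart
            sources[worst] = rand_source()
            values[worst] = f(sources[worst])
            trials[worst] = 0

    best = min(range(SN), key=lambda i: values[i])
    return sources[best], values[best]
```

On a simple 2D sphere function, this sketch converges quickly to the region of the optimum.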

4. The Multiobjective ABC Algorithm

4.1. External Archive

As opposed to single-objective optimization, MOEAs usually maintain a set of nondominated solutions. In multiobjective optimization, in the absence of preference information, none of the nondominated solutions can be said to be better than the others. Therefore, in our algorithm, we use an external archive to keep a historical record of the nondominated vectors found along the search process. This technique is used in many MOEAs [5, 16].

In the initialization phase, the external archive is initialized. After initializing the solutions and calculating the value of every solution, the solutions are sorted based on nondomination. We compare each solution with every other solution in the population to find which ones are nondominated, and we then put all nondominated solutions into the external archive. The external archive is updated at each generation.

4.2. Diversity

An MOEA's success is due to its ability to find a set of representative Pareto optimal solutions in a single run. In order to approximate the Pareto optimal set in a single optimization run, evolutionary algorithms have to perform a multimodal search in which multiple, widely different solutions are to be found. Therefore, maintaining a diverse population is crucial for the efficacy of an MOEA. The ABC algorithm has been demonstrated to possess superior performance in the single-objective domain. However, NSABC, which we present as the first version of the multiobjective ABC algorithm, cannot obtain satisfactory results in terms of the diversity metric. To apply the ABC algorithm to multiobjective problems, we therefore use the comprehensive learning strategy inspired by the comprehensive learning particle swarm optimizer (CLPSO)  to ensure the diversity of the population.

In our algorithm, all solutions in the external archive are regarded as food source positions, and all bees are regarded as onlooker bees; there are no employed bees or scouts. In each generation, each onlooker randomly chooses a food source from the external archive, goes to the food source area, and then chooses a new food source. In the original ABC algorithm, each bee finds a new food source by means of information from the neighborhood of its current source. In our proposed MOABC algorithm, however, we use the comprehensive learning strategy. As in CLPSO, m dimensions of each individual are randomly chosen to learn from a random nondominated solution taken from the external archive, and each of the other dimensions of the individual learns from other nondominated solutions. In our proposed NSABC algorithm, just one dimension of each individual is randomly chosen to learn from a random nondominated solution.

4.3. Update External Archive

As the evolution progresses, more and more new solutions enter the external archive. Each new solution must be compared with every nondominated solution in the external archive to decide whether it should stay in the archive, and the computational time is directly proportional to the number of comparisons; the size of the external archive must therefore be limited.

In our algorithm, each individual will find a new solution in each generation. If the new solution dominates the original individual, then the new solution is allowed to enter the external archive. On the other hand, if the new solution is dominated by the original individual, then it is denied access to the external archive. If the new solution and the original individual do not dominate each other, then we randomly choose one of them to enter the external archive. After each generation, we update external archive. We select nondominated solutions from the archive and keep them in the archive. If the number of nondominated solutions exceeds the allocated archive size, then crowding distance  is applied to remove the crowded members and to maintain uniform distribution among the archive members.

4.4. Crowding Distance

Crowding distance is used to estimate the density of solutions in the external archive. Usually, the perimeter of the cuboid formed by using the nearest neighbors as the vertices is called crowding distance. In Figure 2, the crowding distance of the ith solution is the average side length of the cuboid (shown with a dashed box) .

Crowding distance: Points are all nondominated solutions in the external archive.

The crowding distance computation requires sorting the population in the external archive according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (solutions with the smallest and largest function values) are assigned an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of the two adjacent solutions. This calculation is repeated for the other objective functions. The overall crowding distance value is calculated as the sum of the individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance .
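The procedure above can be sketched in Python (an illustrative implementation; the function name is ours):

```python
def crowding_distance(objs):
    """Crowding distance of each objective vector in objs (list of tuples),
    as in NSGA-II: boundary points per objective get infinity, interior
    points accumulate the normalized gap between their two neighbors."""
    N = len(objs)
    M = len(objs[0])
    dist = [0.0] * N
    for m in range(M):
        # sort indices by the m-th objective, ascending
        order = sorted(range(N), key=lambda i: objs[i][m])
        fmin, fmax = objs[order[0]][m], objs[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = (fmax - fmin) or 1.0  # avoid division by zero
        for r in range(1, N - 1):
            i = order[r]
            dist[i] += (objs[order[r + 1]][m] - objs[order[r - 1]][m]) / span
    return dist
```

For four evenly spaced points on a line in objective space, the two extremes receive infinite distance and the two interior points receive equal finite values.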

4.5. The Multiobjective ABC Algorithm

The ABC algorithm is very simple when compared to the existing swarm-based algorithms. Therefore, we develop it to deal with multiobjective optimization problems. The main steps of the MOABC algorithm are shown in Algorithm 2.

Algorithm 2: Pseudocode for MOABC algorithm.

Main steps of the MOABC algorithm

(1) cycle = 1

(2) Initialize the food source positions (solutions) xi, i=1,,SN

(3) Evaluate the nectar amount (fitness fiti) of food sources

(4) The initialized solutions are sorted based on nondomination

(5) Store nondominated solutions in the external archive EA

(6) repeat

(7)  Onlooker Bees’ Phase

For each onlooker bee

Randomly chooses a solution from EA

Produce new solution vi by using expression  (4.1)

Calculate the value fiti

Apply greedy selection mechanism in Algorithm 3 to decide which solution enters EA

EndFor

(8)  The solutions in the EA are sorted based on nondomination

(9)  Keep only the nondominated solutions in the EA

(10) If the number of nondominated solutions exceeds the allocated size of EA

Use crowding distance to remove the crowded members

(11) cycle = cycle + 1.

(12) until cycle = Maximum Cycle Number

In the initialization phase, we evaluate the fitness of the initial food source positions and sort them based on nondomination. Then we select nondominated solutions and store them in the external archive EA. This is the initialization of the external archive.

In the onlooker bees’ phase, we use the comprehensive learning strategy to produce new solutions vi. Each bee xi randomly chooses m dimensions and learns from a nondominated solution that is randomly selected from EA; the other dimensions learn from other nondominated solutions. The new solution is produced using the following expression:
$$v_{i,f(m)} = x_{i,f(m)} + \phi(m)\,\bigl(EA_{k,f(m)} - x_{i,f(m)}\bigr), \tag{4.1}$$
where k ∈ {1, 2, …, p} is a randomly chosen index and p is the number of solutions in the EA. f(m) is the first m integers of a random permutation of the integers 1 : n, and it defines which of xi's dimensions should learn from EA_k. As opposed to φij in the original ABC algorithm, φ(m) produces m random numbers, all in [0, 2], one for each of the m chosen dimensions. This modification places the potential search space around EA_k. The potential search spaces of MOABC on one dimension are plotted as a line in Figure 3. Each remaining dimension learns from another nondominated solution by using
$$v_{ij} = x_{ij} + \phi_{ij}\,\bigl(EA_{lj} - x_{ij}\bigr), \tag{4.2}$$
where l ∈ {1, 2, …, p}, l ≠ k, and j ∉ f(m). For the NSABC algorithm, each bee xi randomly chooses one dimension and learns from a nondominated solution randomly selected from EA; the new solution vi is produced by just using expression (4.2). After producing a new solution, we calculate its fitness and apply the greedy selection mechanism to decide which solution enters EA. The selection mechanism is shown in Algorithm 3.
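Expressions (4.1) and (4.2) can be sketched together in Python. The function name is ours, and the [-1, 1] range used for the remaining dimensions is our reading of expression (4.2), which follows the original ABC coefficient:

```python
import random

def moabc_candidate(x, archive, m):
    """Sketch of MOABC's comprehensive-learning move: m randomly chosen
    dimensions of bee x learn from one random archive member EA_k with
    phi in [0, 2] (expression (4.1)); each remaining dimension learns from
    some other archive member with phi in [-1, 1] (expression (4.2))."""
    n = len(x)
    v = list(x)
    k = random.randrange(len(archive))
    fm = set(random.sample(range(n), m))  # f(m): m dims of a random permutation
    for j in fm:
        phi = random.uniform(0.0, 2.0)
        v[j] = x[j] + phi * (archive[k][j] - x[j])
    others = [l for l in range(len(archive)) if l != k] or [k]
    for j in range(n):
        if j in fm:
            continue
        l = random.choice(others)
        phi = random.uniform(-1.0, 1.0)
        v[j] = x[j] + phi * (archive[l][j] - x[j])
    return v
```

When the bee coincides with every archive member, all difference terms vanish and the candidate equals the bee itself.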

Algorithm 3: Greedy selection mechanism.

Greedy selection mechanism

If vi dominates xi

Put vi into EA

Else if xi dominates vi

Do nothing

Else if xi and vi are not dominated by each other

Put vi into EA

Produce a random number r drawn from a uniform distribution on the unit interval

If r<0.5

Then the original solution is replaced by the new solution as the new food source position.

That means xi is replaced by vi.

Else

Do nothing

End If

End If

Possible search regions per dimension: 0<ϕj<1 (left); 1<ϕj<2 (right).

In the nondominated sorting phase, after each generation, the solutions in the EA are sorted based on nondomination, and only the nondominated solutions are kept in the EA. If the number of nondominated solutions exceeds the allocated size of EA, we use crowding distance to remove the crowded members. The crowding distance computation is described in Section 4.4.

5. Experiments

In the following, we first describe the benchmark functions used to compare the performance of MOABC with NSABC, NSGA-II, and MOCLPSO. We then introduce the performance measures and give the parameter settings of every algorithm. Finally, we present simulation results for the benchmark functions.

5.1. Benchmark Functions

In order to illustrate the performance of the proposed MOABC algorithm, we used the well-known test problems SCH, FON, ZDT1 to ZDT4, and ZDT6 as two-objective test functions, and the Deb-Thiele-Laumanns-Zitzler (DTLZ) problem family as three-objective test functions.

SCH

Although simple, the most studied single-variable test problem is Schaffer's two-objective problem [17, 18]:
$$\text{SCH:}\quad \begin{cases} \text{Minimize } f_1(x) = x^2,\\ \text{Minimize } f_2(x) = (x-2)^2, \end{cases} \qquad -10^3 \le x \le 10^3.$$
This problem has Pareto optimal solutions x* ∈ [0, 2], and the Pareto optimal front is convex: f_2^* = \bigl(\sqrt{f_1^*} - 2\bigr)^2.

FON

Following Schaffer, Fonseca and Fleming used a two-objective optimization problem (FON) [18, 19] having n variables:
$$\text{FON:}\quad \begin{cases} \text{Minimize } f_1(x) = 1 - \exp\Bigl(-\sum_{i=1}^{n}\bigl(x_i - \tfrac{1}{\sqrt{n}}\bigr)^2\Bigr),\\[4pt] \text{Minimize } f_2(x) = 1 - \exp\Bigl(-\sum_{i=1}^{n}\bigl(x_i + \tfrac{1}{\sqrt{n}}\bigr)^2\Bigr), \end{cases} \qquad -4 \le x_i \le 4,\ i = 1, 2, \dots, n.$$

The Pareto optimal solutions to this problem are x_i^* ∈ [-1/√n, 1/√n] for i = 1, 2, …, n. These solutions also satisfy the following relationship between the two function values:
$$f_2^* = 1 - \exp\Bigl\{-\bigl[2 - \sqrt{-\ln(1 - f_1^*)}\bigr]^2\Bigr\}$$
in the range 0 ≤ f_1^* ≤ 1 - exp(-4). The interesting aspect is that the search space in the objective space and the Pareto optimal function values do not depend on the dimensionality (the parameter n) of the problem. In our paper, we set n = 3, and the optimal solutions are x_1 = x_2 = x_3 ∈ [-1/√3, 1/√3].

ZDT1

This is a 30-variable (n = 30) problem having a convex Pareto optimal set. The functions used are as follows:
$$\text{ZDT1:}\quad \begin{cases} \text{Minimize } f_1(x) = x_1,\\ \text{Minimize } f_2(x) = g(x)\Bigl[1 - \sqrt{x_1/g(x)}\Bigr],\\ g(x) = 1 + \dfrac{9\sum_{i=2}^{n} x_i}{n-1}. \end{cases}$$
All variables lie in the range [0, 1]. The Pareto optimal region corresponds to 0 ≤ x_1^* ≤ 1 and x_i^* = 0 for i = 2, 3, …, 30 .
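As an illustration, ZDT1 can be written directly in Python (the function name is ours):

```python
import math

def zdt1(x):
    """ZDT1 objectives for a decision vector x in [0, 1]^n (n = 30 here).
    Returns (f1, f2); g equals 1 on the Pareto optimal front."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On a Pareto optimal point (x_1 = 0.25, all other variables 0), g = 1 and f_2 = 1 - √0.25 = 0.5.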

ZDT2

This is also an n = 30 variable problem having a nonconvex Pareto optimal set:
$$\text{ZDT2:}\quad \begin{cases} \text{Minimize } f_1(x) = x_1,\\ \text{Minimize } f_2(x) = g(x)\Bigl[1 - \bigl(x_1/g(x)\bigr)^2\Bigr],\\ g(x) = 1 + \dfrac{9\sum_{i=2}^{n} x_i}{n-1}. \end{cases}$$
All variables lie in the range [0, 1]. The Pareto optimal region corresponds to 0 ≤ x_1^* ≤ 1 and x_i^* = 0 for i = 2, 3, …, 30 .

ZDT3

This is an n = 30 variable problem having a number of disconnected Pareto optimal fronts:
$$\text{ZDT3:}\quad \begin{cases} \text{Minimize } f_1(x) = x_1,\\ \text{Minimize } f_2(x) = g(x)\Bigl[1 - \sqrt{x_1/g(x)} - \dfrac{x_1}{g(x)}\sin(10\pi x_1)\Bigr],\\ g(x) = 1 + \dfrac{9\sum_{i=2}^{n} x_i}{n-1}. \end{cases}$$
All variables lie in the range [0, 1]. The Pareto optimal region corresponds to x_i^* = 0 for i = 2, 3, …, 30, and hence not all points satisfying 0 ≤ x_1^* ≤ 1 lie on the Pareto optimal front .

ZDT4

This is an n = 10 variable problem having a convex Pareto optimal set:
$$\text{ZDT4:}\quad \begin{cases} \text{Minimize } f_1(x) = x_1,\\ \text{Minimize } f_2(x) = g(x)\Bigl[1 - \sqrt{x_1/g(x)}\Bigr],\\ g(x) = 1 + 10(n-1) + \sum_{i=2}^{n}\bigl[x_i^2 - 10\cos(4\pi x_i)\bigr]. \end{cases}$$
The variable x_1 lies in the range [0, 1], but all others lie in the range [-5, 5]. The Pareto optimal region corresponds to 0 ≤ x_1^* ≤ 1 and x_i^* = 0 for i = 2, 3, …, 10 .

ZDT6

This is a 10-variable problem having a nonconvex Pareto optimal set. Moreover, the density of solutions across the Pareto optimal region is nonuniform, and the density of solutions towards the Pareto optimal front is thin:
$$\text{ZDT6:}\quad \begin{cases} \text{Minimize } f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1),\\ \text{Minimize } f_2(x) = g(x)\Bigl[1 - \bigl(f_1(x)/g(x)\bigr)^2\Bigr],\\ g(x) = 1 + 9\Bigl[\dfrac{\sum_{i=2}^{n} x_i}{n-1}\Bigr]^{0.25}. \end{cases}$$
All variables lie in the range [0, 1]. The Pareto optimal region corresponds to 0 ≤ x_1^* ≤ 1 and x_i^* = 0 for i = 2, 3, …, 10 .

DTLZ1

A simple test problem uses M objectives with a linear Pareto optimal front:
$$\text{DTLZ1:}\quad \begin{cases} \text{Minimize } f_1(x) = \tfrac{1}{2} x_1 x_2 \cdots x_{M-1}\bigl(1 + g(x_M)\bigr),\\ \text{Minimize } f_2(x) = \tfrac{1}{2} x_1 x_2 \cdots (1 - x_{M-1})\bigl(1 + g(x_M)\bigr),\\ \quad\vdots\\ \text{Minimize } f_{M-1}(x) = \tfrac{1}{2} x_1 (1 - x_2)\bigl(1 + g(x_M)\bigr),\\ \text{Minimize } f_M(x) = \tfrac{1}{2}(1 - x_1)\bigl(1 + g(x_M)\bigr),\\ \text{subject to } 0 \le x_i \le 1,\ \text{for } i = 1, \dots, n,\\ \text{where } g(x_M) = 100\Bigl(|x_M| + \sum_{x_i \in x_M}\bigl[(x_i - 0.5)^2 - \cos\bigl(20\pi(x_i - 0.5)\bigr)\bigr]\Bigr). \end{cases}$$
The Pareto optimal solution corresponds to x_i^* = 0.5 (x_i^* ∈ x_M), and the objective function values lie on the linear hyperplane \sum_{m=1}^{M} f_m^* = 0.5. A value of k = 5 is suggested here. In the above problem, the total number of variables is n = M + k - 1. The difficulty in this problem is to converge to the hyperplane [21, 22].

DTLZ2

This test problem has a spherical Pareto optimal front:
$$\text{DTLZ2:}\quad \begin{cases} \text{Minimize } f_1(x) = \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\tfrac{\pi}{2}\bigr)\cdots\cos\bigl(x_{M-1}\tfrac{\pi}{2}\bigr),\\ \text{Minimize } f_2(x) = \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\tfrac{\pi}{2}\bigr)\cdots\sin\bigl(x_{M-1}\tfrac{\pi}{2}\bigr),\\ \quad\vdots\\ \text{Minimize } f_M(x) = \bigl(1 + g(x_M)\bigr)\sin\bigl(x_1\tfrac{\pi}{2}\bigr),\\ \text{subject to } 0 \le x_i \le 1,\ \text{for } i = 1, \dots, n,\\ \text{where } g(x_M) = \sum_{x_i \in x_M}(x_i - 0.5)^2. \end{cases}$$
The Pareto optimal solutions correspond to x_i^* = 0.5 (x_i^* ∈ x_M), and all objective function values must satisfy \sum_{m=1}^{M}(f_m^*)^2 = 1. As in the previous problem, it is recommended to use k = |x_M| = 10. The total number of variables is n = M + k - 1 as suggested [21, 22].

DTLZ3

In order to investigate an MOEA's ability to converge to the global Pareto optimal front, the g function given in DTLZ1 is used in the DTLZ2 problem:
$$\text{DTLZ3:}\quad \begin{cases} \text{Minimize } f_1(x) = \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\tfrac{\pi}{2}\bigr)\cdots\cos\bigl(x_{M-1}\tfrac{\pi}{2}\bigr),\\ \text{Minimize } f_2(x) = \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\tfrac{\pi}{2}\bigr)\cdots\sin\bigl(x_{M-1}\tfrac{\pi}{2}\bigr),\\ \quad\vdots\\ \text{Minimize } f_M(x) = \bigl(1 + g(x_M)\bigr)\sin\bigl(x_1\tfrac{\pi}{2}\bigr),\\ \text{subject to } 0 \le x_i \le 1,\ \text{for } i = 1, \dots, n,\\ \text{where } g(x_M) = 100\Bigl(|x_M| + \sum_{x_i \in x_M}\bigl[(x_i - 0.5)^2 - \cos\bigl(20\pi(x_i - 0.5)\bigr)\bigr]\Bigr). \end{cases}$$
It is suggested that k = |x_M| = 10. There are a total of n = M + k - 1 decision variables in this problem. The Pareto optimal solution corresponds to x_i^* = 0.5 (x_i^* ∈ x_M) [21, 22].

DTLZ6

This test problem has 2^{M-1} disconnected Pareto optimal regions in the search space:
$$\text{DTLZ6:}\quad \begin{cases} \text{Minimize } f_1(x) = x_1,\\ \text{Minimize } f_2(x) = x_2,\\ \quad\vdots\\ \text{Minimize } f_{M-1}(x) = x_{M-1},\\ \text{Minimize } f_M(x) = \bigl(1 + g(x_M)\bigr)\,h(f_1, f_2, \dots, f_{M-1}, g),\\ \text{subject to } 0 \le x_i \le 1,\ \text{for } i = 1, \dots, n,\\ \text{where } g(x_M) = 1 + \dfrac{9}{|x_M|}\sum_{x_i \in x_M} x_i,\quad h = M - \sum_{i=1}^{M-1}\Bigl[\dfrac{f_i}{1+g}\bigl(1 + \sin(3\pi f_i)\bigr)\Bigr]. \end{cases}$$
The function g requires k = |x_M| decision variables, and the total number of variables is n = M + k - 1. It is suggested that k = 20 [21, 22].

5.2. Performance Measures

In order to facilitate the quantitative assessment of the performance of a multiobjective optimization algorithm, two performance metrics are taken into consideration: (1) the convergence metric Υ and (2) the diversity metric Δ .

5.2.1. Convergence Metric

This metric measures the extent of convergence to a known set of Pareto optimal solutions, as follows:
$$\Upsilon = \frac{\sum_{i=1}^{N} d_i}{N},$$
where N is the number of nondominated solutions obtained with an algorithm and d_i is the Euclidean distance between each of the nondominated solutions and the nearest member of the true Pareto optimal front. To calculate this metric, we find a set of H = 10000 uniformly spaced solutions from the true Pareto optimal front in the objective space. For each solution obtained with an algorithm, we compute its minimum Euclidean distance from the H chosen solutions on the Pareto optimal front. The average of these distances is used as the convergence metric Υ. Figure 4 shows the calculation procedure of this metric .
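A minimal sketch of this metric in Python (names are ours; the true front is assumed to be supplied as a list of sampled points):

```python
import math

def convergence_metric(front, true_front):
    """Average Euclidean distance from each obtained nondominated point
    to its nearest point on a sampled true Pareto front."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(p, q) for q in true_front) for p in front) / len(front)
```

If every obtained point lies exactly on the sampled true front, the metric is zero.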

Convergence metric Υ.

5.2.2. Diversity Metric

This metric measures the extent of spread achieved among the obtained solutions. Here, we are interested in getting a set of solutions that spans the entire Pareto optimal region. The metric is defined as
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1}\bigl|d_i - \bar{d}\bigr|}{d_f + d_l + (N-1)\bar{d}},$$
where d_i is the Euclidean distance between consecutive solutions in the obtained nondominated set of solutions, N is the number of nondominated solutions obtained with an algorithm, and d̄ is the average value of these distances. d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set, as depicted in Figure 5.
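A minimal sketch of the diversity metric in Python, assuming the obtained front is already sorted along the front and the two true extreme points are supplied (names are ours):

```python
import math

def diversity_metric(front, extremes):
    """Deb's spread metric Delta. front: obtained nondominated points,
    sorted along the front; extremes: the two true-front extreme points.
    Delta = 0 for a perfectly uniform spread reaching both extremes."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # d_i: gaps between consecutive obtained solutions (N - 1 of them)
    gaps = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    mean = sum(gaps) / len(gaps)
    # d_f, d_l: distances from the true extremes to the boundary solutions
    df = dist(extremes[0], front[0])
    dl = dist(extremes[1], front[-1])
    return (df + dl + sum(abs(g - mean) for g in gaps)) / (df + dl + len(gaps) * mean)
```

A perfectly uniform front whose boundary solutions coincide with the extremes yields Δ = 0.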

Diversity metric Δ.

5.3. Compared Algorithms and Parameter Settings

5.3.1. Nondominated Sorting Genetic Algorithm II (NSGA-II)

This algorithm was proposed by Deb et al. It is a revised version of the NSGA proposed by Srinivas and Deb . NSGA-II is a computationally fast and elitist MOEA based on a nondominated sorting approach. It replaces the sharing function with a new crowded-comparison approach, thereby eliminating the need for any user-defined parameter for maintaining diversity among the population members. NSGA-II has proven to be one of the most efficient algorithms for multiobjective optimization due to its simplicity, excellent diversity-preserving mechanism, and convergence near the true Pareto optimal set.

The original NSGA-II algorithm uses simulated binary crossover (SBX) and polynomial mutation. We use a population size of 100, crossover probability pc = 0.9, and mutation probability pm = 1/n, where n is the number of decision variables. The distribution indexes for the crossover and mutation operators are ηc = 20 and ηm = 20, respectively .

5.3.2. Multiobjective Comprehensive Learning Particle Swarm Optimizer (MOCLPSO)

MOCLPSO was proposed by Huang et al. . It extends the comprehensive learning particle swarm optimizer (CLPSO) algorithm to solve multiobjective optimization problems. MOCLPSO incorporates the Pareto dominance concept into CLPSO. It also uses an external archive technique to keep a historical record of the nondominated solutions obtained during the search process. MOCLPSO uses a population size NP = 50, archive size A = 100, learning probability pc = 0.1, and elitism probability pm = 0.4 .

For MOABC, we use a colony size of 50, archive size A = 100, and elitism probability pm = 0.4; the NSABC algorithm does not need an elitism probability. In the experiments, a fair time measure must be selected so as to compare the different algorithms. It was therefore decided to use the number of function evaluations (FEs) as the time measure , and a fixed FE budget is our termination criterion.

5.4. Simulation Results for Benchmark Functions

The experimental results, including the best, worst, average, median, and standard deviation of the convergence metric and diversity metric values found in 10 runs, are presented in Tables 1-22; all algorithms are terminated after 10000 and 20000 function evaluations, respectively. Figures 6, 7, 8, 9, 10, and 12 show the optimal fronts obtained by the four algorithms for the two-objective problems. The continuous lines represent the Pareto optimal front; star spots represent the nondominated solutions found. Figure 11 shows the results of the NSABC, MOABC, and NSGA-II algorithms optimizing the test function ZDT4. Figures 13, 14, 15, 16, 17, 18, 19, and 20 show the true Pareto optimal front and the optimal fronts obtained by the four algorithms for the DTLZ series problems.

Comparison of performance on SCH after 10000 FEs.

SCH | NSABC | MOABC | MOCLPSO | NSGA-II
Convergence metric:
Average | 1.2621e-004 | 1.1716e-004 | 4.3657e-004 | 1.8141e-004
Median | 1.2791e-004 | 1.0273e-004 | 4.6393e-004 | 1.8131e-004
Best | 8.8590e-005 | 8.3119e-005 | 1.1484e-004 | 1.2924e-004
Worst | 2.1045e-004 | 2.9875e-004 | 7.5822e-004 | 2.3095e-004
Std | 3.8892e-005 | 6.4562e-005 | 2.1962e-004 | 3.2392e-005
Diversity metric:
Average | 3.5804e-001 | 6.6144e-001 | 8.1418e-001 | 7.5673e-001
Median | 3.5641e-001 | 6.6444e-001 | 8.0321e-001 | 7.5785e-001
Best | 3.0112e-001 | 6.4593e-001 | 7.5339e-001 | 7.2893e-001
Worst | 3.9650e-001 | 6.7325e-001 | 8.8260e-001 | 7.7713e-001
Std | 2.9056e-002 | 1.0953e-002 | 5.3376e-002 | 1.5143e-002

Comparison of performance on SCH after 20000 FEs.

SCH | NSABC | MOABC | MOCLPSO | NSGA-II
Convergence metric:
Average | 1.4112e-004 | 1.2621e-004 | 3.5428e-004 | 2.1868e-004
Median | 1.2137e-004 | 1.2791e-004 | 2.4166e-004 | 2.0044e-004
Best | 8.5338e-005 | 8.8590e-005 | 1.1154e-004 | 1.2921e-004
Worst | 3.1097e-004 | 2.1045e-004 | 1.2674e-003 | 3.5484e-004
Std | 6.4495e-005 | 3.8892e-005 | 3.4346e-004 | 8.3707e-005
Diversity metric:
Average | 3.5641e-001 | 3.5804e-001 | 4.1216e-001 | 7.3525e-001
Median | 3.5852e-001 | 3.5641e-001 | 3.6338e-001 | 7.3941e-001
Best | 3.0512e-001 | 3.0112e-001 | 2.7346e-001 | 7.1568e-001
Worst | 4.1034e-001 | 3.9650e-001 | 6.7878e-001 | 7.5692e-001
Std | 3.0259e-002 | 2.9056e-002 | 1.3514e-001 | 1.4628e-002

Comparison of performance on FON after 10000 FEs.

FON | NSABC | MOABC | MOCLPSO | NSGA-II
Convergence metric:
Average | 1.8273e-003 | 2.5970e-003 | 2.5497e-003 | 1.3845e-003
Median | 1.7880e-003 | 2.5151e-003 | 2.5663e-003 | 1.0898e-003
Best | 1.4036e-003 | 2.1860e-003 | 2.2776e-003 | 4.7267e-004
Worst | 2.4229e-003 | 3.2428e-003 | 2.8616e-003 | 3.1236e-003
Std | 2.9514e-004 | 3.1462e-004 | 1.6583e-004 | 9.5901e-004
Diversity metric:
Average | 2.9810e-001 | 2.9219e-001 | 2.9647e-001 | 7.9613e-001
Median | 2.9468e-001 | 2.9295e-001 | 2.9470e-001 | 8.3509e-001
Best | 2.6703e-001 | 2.7029e-001 | 2.5571e-001 | 6.9076e-001
Worst | 3.4473e-001 | 3.2248e-001 | 3.3743e-001 | 9.2519e-001
Std | 2.6981e-002 | 1.7651e-002 | 2.3500e-002 | 9.1073e-002

Table 4: Comparison of performance on FON after 20000 FEs.

FON         NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.7538e-003  2.1794e-003  2.2194e-003  1.9599e-003
  Median    1.6810e-003  2.2196e-003  2.3107e-003  2.2283e-003
  Best      1.4394e-003  1.8691e-003  1.5167e-003  4.8366e-004
  Worst     2.3287e-003  2.4895e-003  3.0426e-003  3.1664e-003
  Std       2.7467e-004  2.1335e-004  4.4931e-004  9.8050e-004
Diversity metric
  Average   3.0233e-001  2.3371e-001  2.5764e-001  8.2412e-001
  Median    2.9958e-001  2.3321e-001  2.5815e-001  8.3561e-001
  Best      2.8358e-001  1.9060e-001  2.3411e-001  6.8418e-001
  Worst     3.4571e-001  2.9052e-001  2.9385e-001  9.4643e-001
  Std       1.8006e-002  3.2232e-002  2.0916e-002  8.5596e-002

Table 5: Comparison of performance on ZDT1 after 10000 FEs.

ZDT1        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.0497e-001  2.3149e-002  2.4169e-003  2.2278e-001
  Median    5.0541e-002  2.2455e-002  2.4156e-003  1.4410e-001
  Best      1.1793e-002  1.8867e-002  1.7281e-003  7.2072e-002
  Worst     3.0166e-001  2.7309e-002  3.1017e-003  8.7348e-001
  Std       1.1094e-001  2.6057e-003  4.9069e-004  2.3806e-001
Diversity metric
  Average   6.5022e-001  3.3061e-001  2.9773e-001  5.9362e-001
  Median    6.4527e-001  3.3108e-001  2.9207e-001  5.2694e-001
  Best      5.5856e-001  2.8898e-001  2.6434e-001  4.5810e-001
  Worst     7.6730e-001  3.7446e-001  3.5083e-001  9.1451e-001
  Std       5.8890e-002  2.4324e-002  2.4574e-002  1.5649e-001

Table 6: Comparison of performance on ZDT1 after 20000 FEs.

ZDT1        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   9.9284e-002  1.5898e-004  2.0862e-003  1.1987e-001
  Median    6.2180e-002  1.6574e-004  2.0433e-003  6.6423e-002
  Best      1.1536e-003  9.8722e-005  1.1930e-003  2.6298e-002
  Worst     5.3893e-001  2.0130e-004  2.8804e-003  5.1706e-001
  Std       1.5779e-001  4.1578e-005  5.1917e-004  1.5036e-001
Diversity metric
  Average   8.0721e-001  3.4882e-001  3.1462e-001  4.9501e-001
  Median    8.0156e-001  3.5069e-001  3.1627e-001  4.6744e-001
  Best      6.0949e-001  2.9640e-001  2.9149e-001  4.2585e-001
  Worst     9.1995e-001  4.1645e-001  3.3882e-001  6.5378e-001
  Std       9.4948e-002  3.5944e-002  1.7857e-002  7.4913e-002

Table 7: Comparison of performance on ZDT2 after 10000 FEs.

ZDT2        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.9550e+000  1.0023e-003  9.7365e-004  2.9476e-001
  Median    2.0100e+000  9.5537e-004  1.0350e-003  1.8105e-001
  Best      3.8637e-001  7.2183e-004  3.3829e-004  1.0668e-001
  Worst     3.6481e+000  1.5690e-003  1.4223e-003  9.8218e-001
  Std       9.2515e-001  2.5380e-004  3.7382e-004  2.6288e-001
Diversity metric
  Average   8.9986e-001  2.9352e-001  3.1760e-001  7.6302e-001
  Median    9.2174e-001  2.9185e-001  3.1660e-001  7.2999e-001
  Best      7.6513e-001  2.5003e-001  2.6717e-001  4.6617e-001
  Worst     9.6541e-001  3.3111e-001  3.5091e-001  1.0542e+000
  Std       5.8934e-002  2.2147e-002  2.6171e-002  2.3567e-001

Table 8: Comparison of performance on ZDT2 after 20000 FEs.

ZDT2        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   8.6101e-002  1.0592e-004  1.0238e-003  1.1274e-001
  Median    7.2305e-002  9.5653e-005  1.0092e-003  9.8693e-002
  Best      4.3562e-002  7.4842e-005  6.2493e-004  3.5737e-002
  Worst     3.1215e-001  1.4481e-004  1.4683e-003  3.3153e-001
  Std       9.1252e-001  2.5134e-005  2.6058e-004  8.5585e-002
Diversity metric
  Average   8.4325e-001  3.4310e-001  3.1718e-001  5.5863e-001
  Median    8.5472e-001  3.4839e-001  2.9621e-001  5.1789e-001
  Best      7.3456e-001  2.9056e-001  3.7037e-001  4.4479e-001
  Worst     9.3123e-001  3.8678e-001  2.0841e-002  1.0050e+000
  Std       7.0645e-002  3.1759e-002  3.1718e-001  1.6478e-001

Table 9: Comparison of performance on ZDT3 after 10000 FEs.

ZDT3        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   6.1941e-003  1.5551e-003  2.5320e-003  2.1662e-001
  Median    1.2600e-003  1.5186e-003  2.4152e-003  1.5563e-001
  Best      1.0374e-003  1.2901e-003  1.6893e-003  7.9710e-002
  Worst     3.8055e-002  2.1061e-003  3.1702e-003  6.3936e-001
  Std       1.1842e-002  2.2384e-004  5.3581e-004  1.7221e-001
Diversity metric
  Average   9.0080e-001  6.6057e-001  7.2017e-001  7.0351e-001
  Median    9.0374e-001  6.6325e-001  7.2578e-001  6.9255e-001
  Best      8.0100e-001  6.3194e-001  6.5412e-001  6.5156e-001
  Worst     1.0213e+000  6.8535e-001  7.8368e-001  8.1501e-001
  Std       6.9121e-002  1.8462e-002  3.3709e-002  4.6110e-002

Table 10: Comparison of performance on ZDT3 after 20000 FEs.

ZDT3        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.5543e-004  3.5783e-004  1.0331e-003  6.9862e-002
  Median    1.1933e-004  3.6670e-004  1.0744e-003  2.9758e-002
  Best      7.5450e-005  2.6324e-004  7.3706e-004  1.6795e-002
  Worst     4.0513e-004  4.5799e-004  1.1982e-003  3.0308e-001
  Std       9.9259e-005  6.2428e-005  1.6711e-004  9.2471e-002
Diversity metric
  Average   1.0943e+000  6.2668e-001  6.3686e-001  6.8500e-001
  Median    1.1146e+000  6.2607e-001  6.3902e-001  6.7755e-001
  Best      9.4478e-001  6.0343e-001  6.0709e-001  6.4228e-001
  Worst     1.1992e+000  6.5011e-001  6.5457e-001  7.9273e-001
  Std       8.7036e-002  1.5613e-002  1.3501e-002  4.1238e-002

Table 11: Comparison of performance on ZDT4 after 10000 FEs.

ZDT4        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   2.1278e+001  5.3904e+000  N/A          3.2868e+001
  Median    1.3447e+001  5.2482e+000  N/A          3.3219e+001
  Best      7.0411e+000  3.2012e+000  N/A          2.9451e+001
  Worst     5.4023e+001  8.3100e+000  N/A          3.6560e+001
  Std       1.7517e+001  1.6624e+000  N/A          2.1154e+000
Diversity metric
  Average   9.2818e-001  9.0817e-001  N/A          9.8813e-001
  Median    9.2668e-001  8.9116e-001  N/A          9.9170e-001
  Best      8.9696e-001  8.3111e-001  N/A          9.6282e-001
  Worst     9.6675e-001  9.9888e-001  N/A          1.0172e+000
  Std       2.5872e-002  6.5026e-002  N/A          1.6292e-002

Table 12: Comparison of performance on ZDT4 after 20000 FEs.

ZDT4        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   2.8242e+001  3.9615e+000  N/A          1.5026e+001
  Median    2.8297e+001  3.7351e+000  N/A          1.3504e+001
  Best      5.3764e+000  3.9397e-001  N/A          9.2747e+000
  Worst     5.5253e+001  7.6236e+000  N/A          2.8434e+001
  Std       1.4687e+001  2.1985e+000  N/A          5.3418e+000
Diversity metric
  Average   9.3404e-001  8.2135e-001  N/A          9.4884e-001
  Median    9.4428e-001  8.5428e-001  N/A          9.4495e-001
  Best      8.6520e-001  5.8305e-001  N/A          9.1134e-001
  Worst     9.5992e-001  8.8787e-001  N/A          1.0004e+000
  Std       2.8531e-002  9.0785e-002  N/A          2.4879e-002

Table 13: Comparison of performance on ZDT6 after 10000 FEs.

ZDT6        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   9.1927e-001  3.9988e-003  1.5839e-002  3.3642e+000
  Median    1.8809e-005  2.8035e-005  8.1531e-003  3.9688e+000
  Best      1.1153e-005  1.9642e-005  2.3569e-005  6.9186e-001
  Worst     3.2480e+000  2.6604e-002  5.3588e-002  4.7179e+000
  Std       1.4822e+000  8.9540e-003  1.9448e-002  1.4822e+000
Diversity metric
  Average   9.0099e-001  6.0121e-001  8.1936e-001  1.1219e+000
  Median    9.5394e-001  4.7318e-001  7.5949e-001  1.1187e+000
  Best      6.9802e-001  4.3742e-001  3.9620e-001  1.0231e+000
  Worst     1.0392e+000  1.2171e+000  1.3417e+000  1.2849e+000
  Std       1.1021e-001  2.7989e-001  4.2953e-001  8.8914e-002

Table 14: Comparison of performance on ZDT6 after 20000 FEs.

ZDT6        NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   4.3038e-005  6.7609e-004  1.4469e-002  4.6871e-001
  Median    4.2428e-005  2.4510e-005  2.4791e-005  4.1021e-001
  Best      3.8692e-006  2.1576e-005  2.2898e-005  7.1529e-002
  Worst     4.9351e-003  6.5408e-003  9.1647e-002  1.2285e+000
  Std       2.9946e-004  2.0606e-003  2.9694e-002  3.7985e-001
Diversity metric
  Average   7.7320e-001  4.9926e-001  6.3819e-001  1.1519e+000
  Median    7.3399e-001  4.5840e-001  4.1180e-001  1.1497e+000
  Best      6.4696e-001  4.4708e-001  3.5906e-001  1.0412e+000
  Worst     1.0856e+000  8.1645e-001  1.2848e+000  1.2939e+000
  Std       1.2741e-001  1.1284e-001  3.9246e-001  8.1261e-002

Table 15: Comparison of performance on DTLZ1 after 10000 FEs.

DTLZ1       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   5.6686e+000  6.9637e+000  N/A          1.4780e+001
  Median    4.3841e+000  7.4922e+000  N/A          1.4598e+001
  Best      1.0581e+000  1.5215e+000  N/A          9.8158e+000
  Worst     9.7176e+000  1.1092e+001  N/A          1.8511e+001
  Std       3.5367e+000  4.1292e+000  N/A          2.7797e+000
Diversity metric
  Average   4.6157e-001  4.5170e-001  N/A          5.5704e-001
  Median    4.6061e-001  4.4991e-001  N/A          5.4700e-001
  Best      3.9987e-001  4.0565e-001  N/A          4.8393e-001
  Worst     5.1209e-001  4.9551e-001  N/A          6.5260e-001
  Std       3.3377e-002  3.1004e-002  N/A          5.1337e-002

Table 16: Comparison of performance on DTLZ1 after 20000 FEs.

DTLZ1       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   4.5813e+000  2.3609e+000  N/A          1.6197e+001
  Median    4.2016e+000  1.3339e+000  N/A          1.6815e+001
  Best      1.1025e+000  6.9640e-001  N/A          9.7891e+000
  Worst     1.0148e+001  7.9295e+000  N/A          2.2004e+001
  Std       2.7762e+000  2.2890e+000  N/A          3.7379e+000
Diversity metric
  Average   4.5833e-001  4.6144e-001  N/A          5.7586e-001
  Median    4.6618e-001  4.4855e-001  N/A          5.9588e-001
  Best      4.0286e-001  4.0314e-001  N/A          4.0561e-001
  Worst     4.8518e-001  5.4159e-001  N/A          6.8804e-001
  Std       2.6031e-002  5.0967e-002  N/A          8.7688e-002

Table 17: Comparison of performance on DTLZ2 after 10000 FEs.

DTLZ2       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   2.1341e-002  7.6836e-003  1.0976e-001  8.7363e-002
  Median    2.1827e-002  6.7337e-003  1.1294e-001  8.9419e-002
  Best      1.8141e-002  4.9233e-003  9.4400e-002  6.0319e-002
  Worst     2.4931e-002  1.2708e-002  1.1705e-001  1.1310e-001
  Std       2.3165e-003  2.4500e-003  7.4233e-003  1.6418e-002
Diversity metric
  Average   4.1414e-001  4.1177e-001  4.1629e-001  6.0903e-001
  Median    4.1040e-001  4.1107e-001  4.1578e-001  5.0881e-001
  Best      3.9349e-001  3.5942e-001  3.7554e-001  3.7879e-001
  Worst     4.7171e-001  4.7267e-001  4.7455e-001  7.3358e-001
  Std       2.1464e-002  3.1091e-002  3.0363e-002  1.8498e-002

Table 18: Comparison of performance on DTLZ2 after 20000 FEs.

DTLZ2       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.6461e-002  2.9095e-003  1.0504e-001  8.8402e-002
  Median    1.6400e-002  2.8372e-003  1.0476e-001  9.4425e-002
  Best      1.3958e-002  2.6826e-003  9.5225e-002  6.6176e-002
  Worst     1.8900e-002  3.3161e-003  1.1362e-001  1.0847e-001
  Std       1.6644e-003  2.3986e-004  5.5136e-003  1.3819e-002
Diversity metric
  Average   4.1713e-001  4.1041e-001  4.1786e-001  6.2601e-001
  Median    4.1918e-001  4.0989e-001  4.1565e-001  6.1671e-001
  Best      3.6375e-001  3.9217e-001  3.8277e-001  3.8379e-001
  Worst     4.8046e-001  4.3465e-001  4.7103e-001  7.3020e-001
  Std       3.6245e-002  1.3692e-002  2.7804e-002  4.3631e-002

Table 19: Comparison of performance on DTLZ3 after 10000 FEs.

DTLZ3       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.0604e+002  5.2341e+001  N/A          1.0995e+002
  Median    1.0317e+002  5.0666e+001  N/A          1.0583e+002
  Best      5.0475e+001  2.2050e+001  N/A          6.3606e+001
  Worst     1.5592e+002  8.7997e+001  N/A          1.5401e+002
  Std       3.0211e+001  1.9430e+001  N/A          2.6229e+001
Diversity metric
  Average   4.3775e-001  4.4570e-001  N/A          9.6799e-001
  Median    4.3628e-001  4.4365e-001  N/A          9.0815e-001
  Best      3.9653e-001  4.2066e-001  N/A          5.7383e-001
  Worst     4.6588e-001  4.9020e-001  N/A          1.3928e+000
  Std       2.2679e-002  2.0921e-002  N/A          2.6257e-001

Table 20: Comparison of performance on DTLZ3 after 20000 FEs.

DTLZ3       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   9.3970e+001  2.9514e+001  N/A          8.9755e+001
  Median    8.9480e+001  1.9201e+001  N/A          8.7459e+001
  Best      5.8510e+001  1.5806e+001  N/A          5.8992e+001
  Worst     1.3147e+002  6.5907e+001  N/A          1.2936e+002
  Std       2.3911e+001  1.8973e+001  N/A          2.2698e+001
Diversity metric
  Average   4.3399e-001  4.2490e-001  N/A          9.0585e-001
  Median    4.3223e-001  4.1868e-001  N/A          8.7254e-001
  Best      4.0358e-001  3.7111e-001  N/A          7.2752e-001
  Worst     4.8145e-001  4.7686e-001  N/A          1.2918e+000
  Std       2.5498e-002  3.5592e-002  N/A          1.7067e-001

Table 21: Comparison of performance on DTLZ6 after 10000 FEs.

DTLZ6       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   5.5914e-001  2.9901e-002  2.3412e-002  6.6160e-001
  Median    2.9461e-002  2.7952e-002  2.5935e-002  2.8640e-001
  Best      2.0189e-002  1.9106e-002  5.6145e-003  1.6724e-001
  Worst     4.8909e+000  3.7997e-002  3.4578e-002  3.1497e+000
  Std       1.5271e+000  6.7314e-003  8.1996e-003  9.2366e-001
Diversity metric
  Average   5.4587e-001  5.4715e-001  5.2084e-001  5.7305e-001
  Median    5.4631e-001  5.5144e-001  4.9934e-001  5.5839e-001
  Best      4.7349e-001  5.0172e-001  4.6254e-001  4.5813e-001
  Worst     6.3240e-001  6.0581e-001  6.2609e-001  7.0776e-001
  Std       4.9975e-002  2.9597e-002  5.1694e-002  7.2624e-002

Table 22: Comparison of performance on DTLZ6 after 20000 FEs.

DTLZ6       NSABC        MOABC        MOCLPSO      NSGA-II
Convergence metric
  Average   1.4560e-001  2.2293e-002  3.0709e-002  4.1934e-001
  Median    1.1240e-001  2.2006e-002  3.1585e-002  2.7839e-001
  Best      5.4431e-002  1.8476e-002  2.1425e-002  1.2079e-001
  Worst     4.6437e-001  2.7389e-002  3.9060e-002  1.9577e+000
  Std       1.1962e-001  2.8880e-003  4.9988e-003  5.4684e-001
Diversity metric
  Average   5.1947e-001  5.2313e-001  5.2114e-001  5.8184e-001
  Median    5.1032e-001  5.2860e-001  5.1470e-001  5.9360e-001
  Best      4.9349e-001  4.7949e-001  4.9429e-001  4.9585e-001
  Worst     5.4816e-001  5.8122e-001  5.8158e-001  6.7556e-001
  Std       1.9948e-002  3.2645e-002  2.9098e-002  5.8605e-002

Figure 6: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on SCH after 20000 FEs.

Figure 7: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on FON after 20000 FEs.

Figure 8: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT1 after 20000 FEs.

Figure 9: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT2 after 20000 FEs.

Figure 10: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT3 after 20000 FEs.

Figure 11: Pareto fronts obtained by NSABC, MOABC, and NSGA-II on ZDT4 after 20000 FEs.

Figure 12: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT6 after 20000 FEs.

Figure 13: The true Pareto optimal front on DTLZ1.

Figure 14: Pareto fronts obtained by NSABC, MOABC, and NSGA-II on DTLZ1 after 20000 FEs.

Figure 15: The true Pareto optimal front on DTLZ2.

Figure 16: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on DTLZ2 after 20000 FEs.

Figure 17: The true Pareto optimal front on DTLZ3.

Figure 18: Pareto fronts obtained by NSABC, MOABC, and NSGA-II on DTLZ3 after 20000 FEs.

Figure 19: The true Pareto optimal front on DTLZ6.

Figure 20: Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on DTLZ6 after 20000 FEs.

From Tables 1 and 2, we can see that all four algorithms achieve competitive convergence metric values on this problem; almost all of their solutions lie on the true Pareto front. In terms of the diversity metric, however, NSABC performs better than the other algorithms after 10000 function evaluations. When given 20000 function evaluations, MOABC and MOCLPSO improve their diversity metrics, and MOABC becomes as good as NSABC. From Figure 6, it can be seen that the fronts obtained by MOABC, NSABC, and MOCLPSO are uniformly distributed, whereas the NSGA-II algorithm is not able to cover the full Pareto front. On the whole, MOABC and NSABC are slightly better than MOCLPSO and NSGA-II on the SCH problem.

For the FON problem, as for SCH, it can be observed from Tables 3 and 4 that all algorithms perform very well in the convergence metric. In terms of the diversity metric, MOABC, NSABC, and MOCLPSO all achieve good performance. NSGA-II, on the other hand, although able to find the true Pareto front for this problem, cannot maintain a comparably good diversity metric: it covers only half of the Pareto front, which supports the results of Figure 7. From Tables 3 and 4, we can also see that none of the four algorithms improves noticeably even with 20000 function evaluations.

On the ZDT1 function, when the four algorithms are given 10000 function evaluations, Table 5 shows that the convergence metric of MOABC is one order of magnitude better than those of NSABC and NSGA-II, but one order of magnitude worse than that of MOCLPSO. However, when the number of function evaluations is 20000, Tables 5 and 6 show that MOABC greatly improves its convergence and becomes two orders of magnitude better than MOCLPSO. For the diversity metric, the tables show that MOABC outperforms NSABC. Figure 8 shows that MOABC and MOCLPSO discover a well-distributed and diverse solution set for this problem, whereas NSABC and NSGA-II find only a sparse distribution and cannot reach the true Pareto front of ZDT1.
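For concreteness, ZDT1 is the simplest member of the ZDT family discussed here; a minimal sketch following the standard definition by Zitzler et al. (with the usual n = 30 decision variables in [0, 1], minimizing both objectives) is:

```python
import math

def zdt1(x):
    """ZDT1 test problem: two objectives, convex Pareto front.
    The front f2 = 1 - sqrt(f1) is reached when x[1:] are all zero,
    which makes g = 1."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

Any deviation of x[1:] from zero inflates g and pushes the point above the true front, which is what the convergence metric in the tables measures.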

On the ZDT2 function, the performance measures show that MOABC and MOCLPSO have better convergence and diversity than NSABC and NSGA-II. From Table 8, one can note that the convergence metric of MOABC is one order of magnitude better than that of MOCLPSO after 20000 function evaluations. Figure 9 shows that NSGA-II produces poor results on this test function and cannot reach the true Pareto front. Although NSABC attains a relatively good convergence metric, it cannot maintain a good diversity metric.

The ZDT3 problem has a number of disconnected Pareto optimal fronts, and the situation is similar to that on ZDT2. After 10000 function evaluations, the convergence metrics of MOABC and MOCLPSO are almost the same, slightly better than that of NSABC and two orders of magnitude better than that of NSGA-II. However, when the number of function evaluations is 20000, Table 10 shows that MOCLPSO and NSGA-II do not improve, while MOABC and NSABC improve their convergence metrics by almost one order of magnitude. For the diversity metric, the results show that NSABC has worse diversity than the other algorithms, as can be seen in Figure 10.

The ZDT4 problem has 21^9 (about 8 × 10^11) different local Pareto optimal fronts in the search space, of which only one corresponds to the global Pareto optimal front. The Euclidean distance in the decision space between the solutions of two consecutive local Pareto optimal sets is 0.25. From Tables 11 and 12, it can be seen that MOABC, NSABC, and NSGA-II produce poor convergence metric results: all three algorithms get trapped in one of the 21^9 local Pareto optimal fronts. Even though they cannot find the true Pareto front, they are able to find a good spread of solutions along a local Pareto optimal front, as shown in Figure 11. Since MOCLPSO converges poorly on this problem, its results are omitted from Figure 11 and Tables 11 and 12.
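The source of this multimodality is the Rastrigin-like g term of ZDT4; a minimal sketch of the standard definition (x[0] in [0, 1], the remaining variables in [-5, 5], n = 10) is:

```python
import math

def zdt4(x):
    """ZDT4 test problem.  The cosine terms in g create 21^9 local
    Pareto optimal fronts for the standard n = 10 setting; the global
    front (g = 1) again satisfies f2 = 1 - sqrt(f1)."""
    f1 = x[0]
    g = 1.0 + 10.0 * (len(x) - 1) + sum(
        xi * xi - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:]
    )
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

Each local optimum of g corresponds to one local Pareto front, which is why a population that converges prematurely settles on a front parallel to (but above) the global one.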

On the ZDT6 function, we can see from Tables 13 and 14 that MOABC, NSABC, and MOCLPSO perform very well and outperform NSGA-II in both the convergence metric and the diversity metric; NSGA-II has difficulty converging near the global Pareto optimal front. Considering the average and median values, the proposed MOABC algorithm has the best convergence in most test runs. For the diversity metric, Figure 12 shows that MOABC and MOCLPSO effectively find nondominated solutions spread over the whole Pareto front. Although NSABC reaches the true Pareto front, it cannot maintain a good diversity metric, and NSGA-II obtains the worst distribution of solutions among the sets of optimal solutions found.

On the DTLZ1 function, Tables 15 and 16 show that the convergence metrics of MOABC and NSABC are one order of magnitude better than that of NSGA-II. Since MOCLPSO converges poorly on this problem, its results are omitted. For the diversity metric, Figure 14 shows that NSABC and MOABC effectively find nondominated solutions spread over the whole Pareto front, whereas NSGA-II obtains the worst distribution of solutions among the sets of optimal solutions found.

On the DTLZ2 function, Tables 17 and 18 show that all four algorithms achieve competitive convergence metric values. Nevertheless, the convergence metric of MOABC is one order of magnitude better than those of NSABC and NSGA-II and two orders of magnitude better than that of MOCLPSO. From Figure 16, it can be seen that the fronts obtained by MOABC, NSABC, and MOCLPSO are uniformly distributed, whereas the NSGA-II algorithm is not able to cover the full Pareto front.

As on the DTLZ1 function, MOCLPSO converges poorly on the DTLZ3 problem. None of the algorithms fully converges to the true Pareto optimal front on this problem, even after 20000 FEs, although MOABC is slightly better than the others in the convergence metric. From Tables 19 and 20 and Figure 18, we find that NSGA-II performs badly on DTLZ3 in terms of the diversity metric, whereas MOABC and NSABC discover a well-distributed and diverse solution set for this problem.

The DTLZ6 problem has 2^(M-1) disconnected Pareto optimal regions in the search space, where M is the number of objectives; it tests an algorithm's ability to maintain subpopulations in different Pareto optimal regions. From Tables 21 and 22, we observe that the convergence metrics of MOABC and MOCLPSO are one order of magnitude better than those of NSABC and NSGA-II. As Figure 20 shows, MOABC and MOCLPSO discover a well-distributed and diverse solution set for this problem, while the diversity metrics of NSABC and NSGA-II are slightly worse.
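A sketch of this test function may make the disconnected structure concrete. The definition below follows the KanGAL technical-report numbering of DTLZ6 (the same function appears as DTLZ7 in some later publications); it is a hedged reconstruction, shown for M = 3 objectives with n = 10 variables in [0, 1]:

```python
import math

def dtlz6(x, m=3):
    """DTLZ6 (technical-report numbering; DTLZ7 elsewhere):
    the sine term in h splits the Pareto front into 2^(m-1)
    disconnected regions."""
    k = len(x) - m + 1                       # size of the g-group
    g = 1.0 + 9.0 * sum(x[-k:]) / k          # g over the last k variables
    f = list(x[:m - 1])                      # first m-1 objectives copy x
    h = m - sum(fi / (1.0 + g) * (1.0 + math.sin(3.0 * math.pi * fi))
                for fi in f)
    f.append((1.0 + g) * h)                  # last objective
    return tuple(f)
```

Because each f_i enters h through sin(3π f_i), the Pareto optimal values of f_1, ..., f_{M-1} fall into disjoint intervals, producing the 2^(M-1) regions an algorithm must populate simultaneously.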

6. Summary and Conclusion

In this paper, we present a novel artificial bee colony (ABC) algorithm for solving multiobjective optimization problems, namely, the multiobjective artificial bee colony (MOABC). The algorithm uses the Pareto dominance concept and an external archive strategy to make the population converge to the true Pareto optimal front, and a comprehensive learning strategy to preserve the diversity of the population. The main advantage of MOABC is that it needs fewer control parameters to obtain highly competitive performance. To demonstrate the performance of the MOABC algorithm, we compared it with the NSABC (with the nondominated sorting strategy only), MOCLPSO, and NSGA-II optimization algorithms on several two-objective test problems and three-objective functions.
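The Pareto-dominance bookkeeping at the core of the external archive can be sketched as follows. This is a minimal Python sketch for minimization; the size bound and the pruning of a full MOABC archive are deliberately omitted:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate):
    """Insert `candidate` into an external archive of nondominated
    vectors: discard it if any member dominates it, otherwise add it
    and drop every member it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

In the algorithm, each new food source produced by a bee is passed through this update, so the archive always holds the nondominated solutions found so far.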

For the two-objective test problems, the simulation results show that the MOABC algorithm converges to the true Pareto optimal front and maintains good diversity along it. MOABC is also much better than NSABC in terms of the diversity metric; we believe the comprehensive learning strategy helps find more nondominated solutions and improves the algorithm's ability to distribute the nondominated vectors uniformly. The results further show that after 10000 FEs MOABC performs on a par with MOCLPSO, both being much better than the other two algorithms, with NSABC slightly better than NSGA-II. When the number of function evaluations is 20000, however, MOABC greatly improves its convergence and becomes one to two orders of magnitude better than MOCLPSO. NSABC also improves greatly in the convergence metric but still has a poor diversity metric.

For the DTLZ series problems, the performances of all algorithms improve only slightly as the number of function evaluations increases. The simulation results show that MOCLPSO converges poorly on the DTLZ1 and DTLZ3 problems, that is, it performs badly in terms of convergence on three-objective problems. MOABC and NSABC perform best in terms of convergence, with MOABC slightly better than NSABC. In terms of the diversity metric, MOABC and NSABC discover well-distributed and diverse solution sets for the DTLZ series problems, whereas NSGA-II, despite achieving a relatively good convergence metric, cannot maintain a good diversity metric.

On the whole, the proposed MOABC significantly outperforms the three other algorithms in terms of both the convergence metric and the diversity metric, and it can therefore be considered a viable and efficient method for solving multiobjective optimization problems. Since the algorithm has not yet been applied to real-world problems, its robustness remains to be verified; in future work, we will apply the MOABC algorithm to real engineering problems.

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grants nos. 61174164, 61003208, and 61105067.
