
Multiobjective optimization is a difficult problem and a long-standing focus of research in science and engineering. This paper presents a novel algorithm based on the artificial bee colony (ABC) to deal with multiobjective optimization problems. ABC is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm. It uses fewer control parameters than comparable methods, and it can be used efficiently for solving multimodal and multidimensional optimization problems. Our algorithm uses the concept of Pareto dominance to determine the flight direction of a bee, and it maintains the nondominated solution vectors found so far in an external archive. The proposed algorithm is validated on standard test problems, and simulation results show that the proposed approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.

In the real world, many optimization problems involve the simultaneous optimization of two or more objectives, and these objectives are often in conflict with each other. While in single-objective optimization the optimal solution is usually clearly defined, this does not hold for multiobjective optimization problems. Instead of a single optimum, there is a set of alternative trade-offs, generally known as Pareto optimal solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered.
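The dominance relation underlying Pareto optimality can be stated as a small predicate (minimization is assumed throughout this sketch):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

A solution is Pareto optimal exactly when no feasible solution's objective vector dominates its own.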

In the 1950s, in the area of operational research, a variety of methods were developed for the solution of multiobjective optimization problems (MOPs). Some of the most representative classical methods are linear programming, the weighted sum method, and the goal programming method [

The artificial bee colony (ABC) algorithm is a swarm intelligence algorithm first introduced by Karaboga at Erciyes University, Turkey, in 2005 [

The remainder of the paper is organized as follows. Section

A multiobjective optimization problem (MOP) can be stated as follows:

For (

Illustrative example of Pareto optimality in objective space (a) and the possible relations of solutions in objective space (b).

The artificial bee colony algorithm is a new population-based metaheuristic approach, initially proposed by Karaboga [

Main steps of the ABC algorithm

(1) cycle = 1

(2) Initialize the food source positions (solutions)

(3) Evaluate the nectar amount (fitness function

(4)

(5) Employed Bees’ Phase

For each employed bee

Produce new food source positions

Calculate the value

If the new position is better than the previous position

Then memorize the new position and forget the old one.

End For.

(6) Calculate the probability values

(7) Onlooker Bees’ Phase

For each onlooker bee

Choose a food source depending on

Produce new food source positions

Calculate the value

If the new position is better than the previous position

Then memorize the new position and forget the old one.

End For

(8) Scout Bee Phase

If an employed bee becomes a scout

Then replace it with a new random food source position

(9) Memorize the best solution achieved so far

(10) cycle = cycle + 1.

(11)

In the initialization phase, the ABC algorithm generates randomly distributed initial food source positions of

In the employed bees’ phase, each employed bee finds a new food source
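The neighborhood search used here is truncated above; the usual ABC form, v_ij = x_ij + φ_ij (x_ij − x_kj) with a single perturbed dimension j, a randomly chosen other source k, and uniform φ ∈ [−1, 1], is assumed in this sketch:

```python
import random

def neighbor_solution(foods, i, lb, ub):
    """Produce a candidate for food source i by perturbing one randomly
    chosen dimension j toward/away from a different source k:
        v_j = x_ij + phi * (x_ij - x_kj),  phi ~ U(-1, 1)."""
    v = list(foods[i])
    k = random.choice([m for m in range(len(foods)) if m != i])
    j = random.randrange(len(v))
    phi = random.uniform(-1.0, 1.0)
    v[j] = v[j] + phi * (v[j] - foods[k][j])
    v[j] = min(max(v[j], lb[j]), ub[j])  # clip to the search bounds
    return v
```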

In the onlooker bees’ phase, each onlooker chooses a food source with a probability which is related to the nectar amount (fitness) of a food source shared by employed bees. Probability is calculated using the following expression:
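The probability expression itself did not survive extraction; the standard fitness-proportional choice, p_i = fit_i / Σ_n fit_n, is assumed below:

```python
def onlooker_probabilities(fitness):
    """Fitness-proportional selection probabilities for onlooker bees:
    p_i = fit_i / sum(fit). Higher-nectar sources attract more onlookers."""
    total = sum(fitness)
    return [f / total for f in fitness]
```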

In the scout bee phase, if a food source cannot be improved within a predetermined number of cycles, called the “limit”, it is removed from the population, and the employed bee of that food source becomes a scout. The scout bee finds a new random food source position using
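The reinitialization formula is truncated above; the standard form, x_j = lb_j + rand(0, 1)(ub_j − lb_j), is assumed in this sketch:

```python
import random

def scout_solution(lb, ub):
    """Replace an abandoned food source with a uniformly random position:
    x_j = lb_j + U(0, 1) * (ub_j - lb_j) in each dimension j."""
    return [l + random.random() * (u - l) for l, u in zip(lb, ub)]
```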

These steps are repeated through a predetermined number of cycles, called maximum cycle number (MCN), or until a termination criterion is satisfied [

As opposed to single-objective optimization, MOEAs usually maintain a set of nondominated solutions. In multiobjective optimization, in the absence of preference information, none of the solutions can be said to be better than the others. Therefore, in our algorithm, we use an external archive to keep a historical record of the nondominated vectors found along the search process. This technique is used in many MOEAs [

In the initialization phase, the external archive is initialized. After initializing the solutions and calculating the value of every solution, they are sorted based on nondomination. We compare each solution with every other solution in the population to determine which solutions are nondominated. We then put all nondominated solutions into the external archive. The external archive is updated at each generation.
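The pairwise comparison described above can be sketched as follows (minimization assumed):

```python
def nondominated(population, objs):
    """Return the members of population whose objective vectors (objs, in
    the same order) are not dominated by any other member (minimization)."""
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    archive = []
    for i, s in enumerate(population):
        # keep s only if no other solution dominates it
        if not any(dom(objs[j], objs[i]) for j in range(len(population)) if j != i):
            archive.append(s)
    return archive
```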

MOEAs’ success is due to their ability to find a set of representative Pareto optimal solutions in a single run. In order to approximate the Pareto optimal set in a single optimization run, evolutionary algorithms have to perform a multimodal search in which multiple, widely different solutions must be found. Therefore, maintaining a diverse population is crucial for the efficacy of an MOEA. The ABC algorithm has been demonstrated to possess superior performance in the single-objective domain. However, NSABC, the first version of our multiobjective ABC algorithm, cannot obtain satisfactory results in terms of the diversity metric. In order to apply the ABC algorithm to multiobjective problems, we use the comprehensive learning strategy, which is inspired by the comprehensive learning particle swarm optimizer (CLPSO) [

In our algorithm, all solutions in the external archive are regarded as food source positions, and all bees are regarded as onlooker bees; there are no employed bees or scouts. In each generation, each onlooker randomly chooses a food source from the external archive, goes to the food source area, and then chooses a new food source. In the original ABC algorithm, each bee finds a new food source by means of the information in the neighborhood of its current source. In our proposed MOABC algorithm, however, we use the comprehensive learning strategy. Like CLPSO,

As the evolution progresses, more and more new solutions enter the external archive. Considering that each new solution will be compared with every nondominated solution in the external archive to decide whether this new solution should stay in the archive, and the computational time is directly proportional to the number of comparisons, the size of external archive must be limited.

In our algorithm, each individual finds a new solution in each generation. If the new solution dominates the original individual, then the new solution is allowed to enter the external archive. On the other hand, if the new solution is dominated by the original individual, then it is denied access to the external archive. If the new solution and the original individual do not dominate each other, then we randomly choose one of them to enter the external archive. After each generation, we update the external archive: we select the nondominated solutions from the archive and keep them in it. If the number of nondominated solutions exceeds the allocated archive size, then crowding distance [
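The admission rule in this paragraph can be sketched as below; the helper name `archive_candidate` is ours, and returning `None` stands for "denied access":

```python
import random

def archive_candidate(original, new, f_orig, f_new):
    """Decide which of (original, new) is offered to the external archive:
    the dominating solution wins; mutual nondominance is broken at random."""
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    if dom(f_new, f_orig):
        return new
    if dom(f_orig, f_new):
        return None  # the new solution is denied access to the archive
    return random.choice([original, new])
```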

Crowding distance is used to estimate the density of solutions in the external archive. Usually, the perimeter of the cuboid formed by using the nearest neighbors as the vertices is called crowding distance. In Figure

Crowding distance: Points are all nondominated solutions in the external archive.

The crowding distance computation requires sorting the population in the external archive according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (solutions with smallest and largest function values) are assigned as an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of two adjacent solutions. This calculation is continued with other objective functions. The overall crowding distance value is calculated as the sum of individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance [
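The computation just described can be sketched directly; for brevity this version normalizes by each objective's observed range rather than pre-normalizing the objectives:

```python
def crowding_distance(objs):
    """Crowding distance of each objective vector (as in NSGA-II):
    boundary points get infinity; interior points accumulate, per objective,
    the gap between their two neighbors divided by that objective's range."""
    n = len(objs)
    dist = [0.0] * n
    for k in range(len(objs[0])):
        order = sorted(range(n), key=lambda i: objs[i][k])  # ascending sort
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')     # boundary points
        if hi == lo:
            continue
        for r in range(1, n - 1):
            dist[order[r]] += (objs[order[r + 1]][k] - objs[order[r - 1]][k]) / (hi - lo)
    return dist
```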

The ABC algorithm is very simple when compared to the existing swarm-based algorithms. Therefore, we develop it to deal with multiobjective optimization problems. The main steps of the MOABC algorithm are shown in Algorithm

Main steps of the MOABC algorithm

(1) cycle = 1

(2) Initialize the food source positions (solutions)

(3) Evaluate the nectar amount (fitness

(4) The initialized solutions are sorted based on nondomination

(5) Store nondominated solutions in the external archive EA

(6)

(7) Onlooker Bees’ Phase

For each onlooker bee

Randomly chooses a solution from EA

Produce new solution

Calculate the value

Apply greedy selection mechanism in Algorithm

EndFor

(8) The solutions in the EA are sorted based on nondomination

(9) Keep only the nondominated solutions in the EA

(10) If the number of nondominated solutions exceeds the allocated size of the EA

Use crowding distance to remove the crowded members

(11) cycle = cycle + 1.

(12)

In the initialization phase, we evaluate the fitness of the initial food source positions and sort them based on nondomination. Then we select nondominated solutions and store them in the external archive EA. This is the initialization of the external archive.

In the onlooker bees’ phase, we use comprehensive learning strategy to produce new solutions
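Since the exact update equation is truncated here, the following is only a rough CLPSO-flavored sketch: each dimension of the new candidate learns from its own, independently chosen archive member rather than from a single neighbor.

```python
import random

def comprehensive_learning_candidate(x, archive):
    """CLPSO-style comprehensive learning sketch: every dimension j of the
    candidate is perturbed toward/away from its own randomly chosen
    archive member (exemplar), with uniform phi in [-1, 1]."""
    v = list(x)
    for j in range(len(x)):
        exemplar = random.choice(archive)   # a different exemplar per dimension
        phi = random.uniform(-1.0, 1.0)
        v[j] = x[j] + phi * (x[j] - exemplar[j])
    return v
```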

Greedy selection mechanism

If

Put

Else if

Do nothing

Else if

Put

Produce a random number

If

Then the original solution is replaced by the new solution as the new food source position.

That means

Else

Do nothing

End If

End If

Possible search regions per dimension:

In the nondominated sorting phase, after a generation, the solutions in the EA are sorted based on nondomination, and we keep only the nondominated solutions in the EA. If the number of nondominated solutions exceeds the allocated size of the EA, we use crowding distance to remove the crowded members. The crowding distance algorithm is given in [

In the following, we first describe the benchmark functions used to compare the performance of MOABC with NSABC, NSGA-II, and MOCLPSO. We then introduce the performance measures and give the parameter settings for each algorithm. Finally, we present simulation results for the benchmark functions.

In order to illustrate the performance of the proposed MOABC algorithm, we used several well-known test problems, SCH, FON, ZDT1 to ZDT4, and ZDT6, as the two-objective test functions, and the Deb-Thiele-Laumanns-Zitzler (DTLZ) problem family as the three-objective test functions.

The most studied single-variable test problem, although simple, is Schaffer’s two-objective problem [

Fonseca and Fleming used a two-objective optimization problem (FON) [
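For reference, the standard definitions of SCH and FON (the latter with three decision variables) are:

```python
import math

def sch(x):
    """Schaffer's two-objective problem: f1 = x^2, f2 = (x - 2)^2."""
    return (x * x, (x - 2.0) ** 2)

def fon(x):
    """Fonseca-Fleming problem: both objectives vanish-to-0 exponentials
    centered at (+-1/sqrt(3), ..., +-1/sqrt(3))."""
    s = 1.0 / math.sqrt(3.0)
    f1 = 1.0 - math.exp(-sum((xi - s) ** 2 for xi in x))
    f2 = 1.0 - math.exp(-sum((xi + s) ** 2 for xi in x))
    return (f1, f2)
```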

The Pareto optimal solution to this problem is

This is a 30-variable (
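The standard ZDT1 definition (30 variables in [0, 1]; the Pareto optimal front is the convex curve f2 = 1 − √f1, reached at g = 1) is:

```python
import math

def zdt1(x):
    """ZDT1: f1 = x1, g = 1 + 9 * sum(x2..xn) / (n - 1),
    f2 = g * (1 - sqrt(f1 / g)). All variables lie in [0, 1]."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return (f1, f2)
```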

This is also an

This is an

This is an

This is a 10-variable problem having a nonconvex Pareto optimal set. Moreover, the density of solutions across the Pareto optimal region is nonuniform, and the density towards the Pareto optimal front is also thin:

A simple test problem uses

This test problem has a spherical Pareto optimal front:

In order to investigate an MOEA’s ability to converge to the global Pareto optimal front, the g function given in DTLZ1 is used in the DTLZ2 problem:

This test problem has

In order to facilitate the quantitative assessment of the performance of a multiobjective optimization algorithm, two performance metrics are taken into consideration: (1) convergence metric

This metric measures the extent of convergence to a known set of Pareto optimal solutions, as follows:

Convergence metric
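Since the formula is truncated above, the sketch below assumes the usual definition: the average Euclidean distance from each obtained objective vector to the nearest member of a uniformly sampled reference Pareto set (smaller is better).

```python
import math

def convergence_metric(front, reference):
    """Mean distance from each obtained objective vector to its nearest
    point in a sampled set of true Pareto optimal objective vectors."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(d(p, r) for r in reference) for p in front) / len(front)
```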

This metric measures the extent of spread achieved among the obtained solutions. Here, we are interested in getting a set of solutions that spans the entire Pareto optimal region. This metric is defined as:

Diversity metric
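The truncated formula is Deb's spread metric Δ = (d_f + d_l + Σ|d_i − d̄|) / (d_f + d_l + (N − 1)d̄), where the d_i are consecutive gaps along the obtained front and d_f, d_l are the distances from its ends to the extremes of the true front. A sketch, assuming the front is already sorted along one objective:

```python
import math

def diversity_metric(front, extreme_first, extreme_last):
    """Deb's spread metric Delta: 0 for an ideally uniform front that
    reaches both extremes of the true Pareto front; larger is worse."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    gaps = [d(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_mean = sum(gaps) / len(gaps)
    d_f = d(extreme_first, front[0])
    d_l = d(extreme_last, front[-1])
    num = d_f + d_l + sum(abs(g - d_mean) for g in gaps)
    den = d_f + d_l + len(gaps) * d_mean
    return num / den
```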

This algorithm was proposed by Deb et al. It is a revised version of the NSGA proposed by Guria [

The original NSGA-II algorithm uses simulated binary crossover (SBX) and polynomial mutation. We use a population size of 100. Crossover probability

MOCLPSO was proposed by Huang et al. [

For the MOABC, a colony size of 50, archive size

The experimental results, including the best, worst, average, median, and standard deviation of the convergence metric and diversity metric values found in 10 runs, are presented in Tables

[The numeric entries of the following twenty-two comparison tables were lost in extraction. Each table reported the average, median, best, worst, and standard deviation of the convergence and diversity metrics for NSABC, MOABC, MOCLPSO, and NSGA-II; on ZDT4, DTLZ1, and DTLZ3, one algorithm's entries were N/A.]

Comparison of performance on SCH after 10000 FEs.

Comparison of performance on SCH after 20000 FEs.

Comparison of performance on FON after 10000 FEs.

Comparison of performance on FON after 20000 FEs.

Comparison of performance on ZDT1 after 10000 FEs.

Comparison of performance on ZDT1 after 20000 FEs.

Comparison of performance on ZDT2 after 10000 FEs.

Comparison of performance on ZDT2 after 20000 FEs.

Comparison of performance on ZDT3 after 10000 FEs.

Comparison of performance on ZDT3 after 20000 FEs.

Comparison of performance on ZDT4 after 10000 FEs.

Comparison of performance on ZDT4 after 20000 FEs.

Comparison of performance on ZDT6 after 10000 FEs.

Comparison of performance on ZDT6 after 20000 FEs.

Comparison of performance on DTLZ1 after 10000 FEs.

Comparison of performance on DTLZ1 after 20000 FEs.

Comparison of performance on DTLZ2 after 10000 FEs.

Comparison of performance on DTLZ2 after 20000 FEs.

Comparison of performance on DTLZ3 after 10000 FEs.

Comparison of performance on DTLZ3 after 20000 FEs.

Comparison of performance on DTLZ6 after 10000 FEs.

Comparison of performance on DTLZ6 after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on SCH after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on FON after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT1 after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT2 after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT3 after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, and NSGA-II on ZDT4 after 20000 FEs.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on ZDT6 after 20000 FEs.

The true Pareto optimal front on DTLZ1.

Pareto fronts obtained by NSABC, MOABC, and NSGA-II on DTLZ1 after 20000 FEs.

The true Pareto optimal front on DTLZ2.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on DTLZ2 after 20000 FEs.

The true Pareto optimal front on DTLZ3.

Pareto fronts obtained by NSABC, MOABC, and NSGA-II on DTLZ3 after 20000 FEs.

The true Pareto optimal front on DTLZ6.

Pareto fronts obtained by NSABC, MOABC, MOCLPSO, and NSGA-II on DTLZ6 after 20000 FEs.

From Tables

For the FON problem, as with SCH, it can be observed from Tables

On the ZDT1 function, when the four algorithms are given 10000 function evaluations, Table

On the ZDT2 function, the results of the performance measures show that MOABC and MOCLPSO have better convergence and diversity than NSABC and NSGA-II. From Table

The ZDT3 problem has a number of disconnected Pareto optimal fronts. As on the ZDT2 problem, the performances of MOABC and MOCLPSO are almost the same after 10000 function evaluations; they are a little better than that of NSABC and two orders of magnitude better than that of NSGA-II in the convergence metric. However, when the number of function evaluations is 20000, it is found from Table

The problem ZDT4 has 21^{9}, or about 8 × 10^{11}, different local Pareto optimal fronts in the search space, of which only one corresponds to the global Pareto optimal front. The Euclidean distance in the decision space between the solutions of two consecutive local Pareto optimal sets is 0.25 [ ]. The algorithms get trapped in one of the 21^{9} local Pareto optimal fronts. Even though they cannot find the true Pareto front, they are able to find a good spread of solutions on a local Pareto optimal front, as shown in Figure

From Tables

From Tables

However, NSGA-II obtains the worst distribution of solutions among the sets of optimal solutions found.

On DTLZ2 function, from Tables

As on the DTLZ1 function, MOCLPSO converges poorly on the DTLZ3 problem. For this problem, none of the algorithms could quite converge to the true Pareto optimal front after 20000 FEs. MOABC is a little better than the other algorithms in the convergence metric. From the tables and Figure

DTLZ6 problem has

In this paper, we present a novel artificial bee colony (ABC) algorithm for solving multiobjective optimization problems, namely, the multiobjective artificial bee colony (MOABC). In our algorithm, we use the Pareto concept and an external archive strategy to make the algorithm converge to the true Pareto optimal front, and we use the comprehensive learning strategy to ensure the diversity of the population. The main advantage of MOABC is that it achieves highly competitive performance with fewer control parameters. In order to demonstrate the performance of the MOABC algorithm, we compared MOABC with the NSABC (with the nondominated sorting strategy only), MOCLPSO, and NSGA-II optimization algorithms on several two-objective test problems and three-objective functions.

For the two-objective test problems, the simulation results lead to the conclusion that the MOABC algorithm converges to the true Pareto optimal front and maintains good diversity along the Pareto front. We can also see that MOABC is much better than NSABC in terms of the diversity metric. We believe that the comprehensive learning strategy helps find more nondominated solutions and improves the algorithm's ability to distribute the nondominated vectors it finds uniformly. Additionally, the simulation results show that MOABC has the same performance as MOCLPSO after 10000 FEs; both are much better than the other two algorithms, and NSABC is a little better than NSGA-II. However, when the number of function evaluations is 20000, MOABC greatly improves its convergence, becoming 1-2 orders of magnitude better than MOCLPSO. NSABC also improves greatly in the convergence metric, but it still has a poor diversity metric.

For the DTLZ series problems, we can see that the performance of all algorithms improves only slightly when we increase the number of function evaluations. Simulation results show that MOCLPSO converges poorly on the DTLZ1 and DTLZ3 problems, which means that MOCLPSO performs badly in terms of convergence on three-objective problems. We also find that MOABC and NSABC perform best in terms of convergence, with MOABC a little better than NSABC. In terms of the diversity metric, MOABC and NSABC can discover a well-distributed and diverse solution set for the DTLZ series problems. In spite of achieving a relatively good convergence metric, however, NSGA-II cannot maintain a good diversity metric for the DTLZ series problems.

On the whole, the proposed MOABC significantly outperforms the three other algorithms in terms of the convergence metric and the diversity metric. Therefore, the proposed MOABC optimization algorithm can be considered a viable and efficient method for solving multiobjective optimization problems. As our algorithm has not yet been applied to real problems, its robustness still needs to be verified. In future work, we will apply the MOABC algorithm to real engineering problems.

This work was supported by the National Natural Science Foundation of China under Grants 61174164, 61003208, and 61105067.