A Simple and Efficient Artificial Bee Colony Algorithm



Introduction
Optimization problems arise in many application areas such as engineering, economics, and management. Effective and efficient optimization algorithms are always required to tackle increasingly complex real-world optimization problems. In the past several years, some swarm intelligence algorithms, inspired by the social behaviors of birds, fish, or insects, have been proposed to solve optimization problems, such as particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], artificial bee colony (ABC) [3], and the firefly algorithm (FA) [4]. A recent study has shown that ABC performs significantly better than, or at least comparably to, other swarm intelligence algorithms [5].
ABC is a swarm intelligence algorithm proposed by Karaboga in 2005, inspired by the foraging behavior of honey bees [3]. Since its development, ABC has been applied to many different kinds of problems [6]. Similar to other stochastic algorithms, ABC also faces some challenging problems. For example, ABC shows slow convergence speed during the search process: due to the special search pattern of the bees, a new candidate solution is generated by updating only a single, randomly chosen dimension of its parent solution.
Therefore, the offspring (new candidate solution) is similar to its parent, and the convergence speed becomes slow. Moreover, ABC easily falls into local minima when handling complex multimodal problems. The search pattern of the bees is good at exploration but poor at exploitation [7]. However, a good optimization algorithm should balance exploration and exploitation during the search process.
To improve the performance of ABC, this paper proposes a new search pattern for both employed and onlooker bees.
In the new approach, some of the best solutions are utilized to accelerate the convergence speed. In addition, a solution pool is constructed by storing the best 100p% solutions in the current swarm, with p ∈ (0, 1]. The best solution used in the search pattern is randomly selected from this pool, which helps balance exploration and exploitation. Experiments are conducted on twelve benchmark functions. Simulation results show that our approach outperforms the original ABC and several other stochastic algorithms.
The rest of the paper is organized as follows. In Section 2, the original ABC algorithm is presented. Section 3 gives a brief overview of related work. Section 4 describes the proposed approach. In Section 5, experimental studies are presented. Finally, the work is concluded in Section 6.

Artificial Bee Colony
The artificial bee colony (ABC) algorithm is a recently proposed optimization technique which simulates the intelligent foraging behavior of honey bees. A set of honey bees is called a swarm, which can successfully accomplish tasks through social cooperation. In the ABC algorithm, there are three types of bees: employed bees, onlooker bees, and scout bees. The employed bees search for food around the food sources in their memory; meanwhile, they share the information about these food sources with the onlooker bees. The onlooker bees tend to select good food sources from those found by the employed bees: a food source of higher quality (fitness) has a larger chance of being selected by the onlooker bees than one of lower quality. The scout bees are transformed from a few employed bees that abandon their food sources to search for new ones [8].
In the ABC algorithm, the first half of the swarm consists of employed bees, and the second half constitutes the onlooker bees. The number of employed bees or onlooker bees is equal to the number of solutions in the swarm [3].
The ABC generates a randomly distributed initial population of SN solutions (food sources), where SN denotes the swarm size. Let X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}) represent the ith solution in the swarm, where D is the dimension size. Each employed bee X_i generates a new candidate solution V_i in the neighborhood of its present position as follows:

v_{i,j} = x_{i,j} + φ_{i,j} · (x_{i,j} − x_{k,j}),  (1)

where X_k is a randomly selected candidate solution (k ≠ i), j is a random dimension index selected from the set {1, 2, ..., D}, and φ_{i,j} is a random number within [−1, 1].
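The employed-bee update of equation (1) can be sketched in a few lines of Python (a minimal illustration, not the authors' code; solutions are plain lists of floats):

```python
import random

def generate_candidate(swarm, i):
    """Employed-bee search of equation (1): perturb one randomly
    chosen dimension of solution i using a random neighbor k != i."""
    D = len(swarm[i])
    k = random.choice([idx for idx in range(len(swarm)) if idx != i])
    j = random.randrange(D)
    phi = random.uniform(-1.0, 1.0)
    v = list(swarm[i])                 # copy the parent solution
    v[j] = swarm[i][j] + phi * (swarm[i][j] - swarm[k][j])
    return v

swarm = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
candidate = generate_candidate(swarm, 0)
```

Note that the candidate differs from its parent in at most one dimension, which is exactly why ABC offspring stay close to their parents.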
Once the new candidate solution V_i is generated, a greedy selection is used. If the fitness value of V_i is better than that of its parent X_i, then X_i is updated with V_i; otherwise X_i is kept unchanged.
After all employed bees complete the search process, they share the information about their food sources with the onlooker bees through waggle dances. An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. This probabilistic selection is essentially a roulette wheel selection mechanism, described as follows:

p_i = fit_i / Σ_{j=1}^{SN} fit_j,  (2)

where fit_i is the fitness value of the ith solution in the swarm. As can be seen, the better the ith solution, the higher the probability that the ith food source is selected. If a position cannot be improved over a predefined number of cycles (called limit), the food source is abandoned. Assume that the abandoned source is X_i; the scout bee then discovers a new food source to replace X_i as follows:

x_{i,j} = lb_j + rand(0, 1) · (ub_j − lb_j),  (3)

where rand(0, 1) is a uniformly distributed random number within [0, 1], and lb_j and ub_j are the lower and upper boundaries of the jth dimension, respectively.
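The roulette-wheel selection of equation (2) and the scout-bee reinitialization of equation (3) can be sketched as follows (an illustrative Python sketch, assuming fitness values are nonnegative and larger is better):

```python
import random

def select_food_source(fitness):
    """Roulette-wheel selection of equation (2): source i is picked
    with probability fit_i / sum(fit)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1            # guard against rounding error

def scout_replace(lb, ub):
    """Scout-bee reinitialization of equation (3): a uniform random
    point inside the search bounds."""
    return [l + random.random() * (u - l) for l, u in zip(lb, ub)]

chosen = select_food_source([1.0, 3.0, 6.0])   # index 2 is the most likely
fresh = scout_replace([-5.0] * 3, [5.0] * 3)
```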

Related Work
Since its development, ABC has attracted much attention for its excellent characteristics. In the last decade, different versions of ABC have been applied to various problems. In this section, we present a brief review of these ABC algorithms. Karaboga and Akay [5] presented a comparative study of ABC in which a large set of benchmark functions was tested. Results show that ABC is better than or similar to other population-based algorithms, with the advantage of employing fewer control parameters. Inspired by the differential evolution (DE) algorithm, Gao and Liu [7, 9] proposed two improved versions of ABC. In [7], a new search pattern called ABC/best/1 is utilized to accelerate the convergence speed. In [9], ABC/best/1 and another search pattern called ABC/rand/1 are employed; moreover, a control parameter is introduced to control the frequency of these two patterns. Zhu and Kwong [8] utilized the search information of the global best solution (gbest) to guide the search of ABC. Reported results show that the new approach achieves better results than the original ABC algorithm. Akay and Karaboga [10] proposed a modified ABC algorithm in which two new control parameters, the frequency and the magnitude of the perturbation, are employed to improve the convergence rate. Results show that the original ABC algorithm can efficiently solve basic and simple functions, while the modified ABC algorithm obtains promising results on hybrid and complex functions when compared to some state-of-the-art algorithms. Banharnsakun et al. [11] modified the search pattern of the onlooker bees so that the best feasible solution found so far is shared globally among the entire swarm; therefore, the new candidate solutions are similar to the current best solution. Kang et al. [12] proposed a Rosenbrock ABC (RABC) algorithm which combines Rosenbrock's rotational direction method with the original ABC. RABC has two alternating phases: the exploration phase realized by ABC and the exploitation phase completed by the Rosenbrock method. Wu et al. [13] combined harmony search (HS) and the ABC algorithm to construct a hybrid algorithm. Comparison results show that the hybrid algorithm outperforms ABC, HS, and other heuristic algorithms. Li et al. [14] proposed an improved ABC algorithm called I-ABC, in which the best-so-far solution, an inertia weight, and acceleration coefficients are introduced to modify the search process. Moreover, a hybrid ABC algorithm (PS-ABC) based on the gbest-guided ABC (GABC) [8] and I-ABC is proposed. Results show that PS-ABC converges faster than I-ABC and ABC.
Karaboga and Ozturk [15] used the ABC algorithm for data clustering. Experiments were conducted on thirteen typical test data sets from the UCI Machine Learning Repository. The performance of ABC was compared with PSO and nine other classification techniques. Simulation results demonstrate that the ABC algorithm can efficiently solve data clustering problems. Zhang et al. [16] also used the ABC algorithm for clustering. Three data sets were tested, and the performance of ABC was compared with genetic algorithms, simulated annealing, tabu search, ACO, and K-NM-PSO. Results demonstrate the effectiveness of ABC for clustering. Karaboga and Ozturk [17] applied ABC to fuzzy clustering. Three data sets chosen from the UCI database (cancer, diabetes, and heart) were tested. Results indicate that ABC performs successfully in fuzzy clustering.
The ABC algorithm is usually used to solve unconstrained optimization problems. In [18], Karaboga and Akay investigated the performance of ABC on constrained optimization problems. In order to handle constraints, Deb's rules, consisting of three simple heuristic rules, are employed. Mezura-Montes and Velez-Koeppel [19] proposed an elitist ABC algorithm for constrained real-parameter optimization, in which the operators used by the different types of bees are modified. Additionally, a dynamic tolerance control mechanism for equality constraints is utilized to facilitate the approach to the feasible region of the search space. Yeh and Hsieh [20] proposed a penalty-guided ABC algorithm to solve reliability redundancy allocation problems. Sabat et al. [21] presented an application of ABC to extract the small-signal equivalent circuit model parameters of a GaAs metal semiconductor field effect transistor (MESFET) device. The performance comparison shows that ABC is better than PSO.
It is known that the ABC algorithm is good at solving optimization problems over continuous search spaces; discrete optimization problems pose a bigger challenge for it. Li et al. [22] used a hybrid Pareto-based ABC algorithm to solve flexible job shop scheduling problems. In the new algorithm, each food source is represented by two vectors, namely, the machine assignment and the operation schedule. Moreover, an external Pareto archive set is utilized to record nondominated solutions. In [23], Kashan et al. designed a new ABC algorithm called DisABC to optimize binary structured problems. Szeto et al. [24] proposed an enhanced ABC algorithm to solve the capacitated vehicle routing problem. The performance of the new approach was tested on two sets of standard benchmark instances; simulation results show that the new algorithm outperforms the original ABC and several other existing algorithms. Pan et al. [25] presented a discrete ABC algorithm hybridized with a variant of the iterated greedy algorithm to solve a permutation flow shop scheduling problem with the total flow time criterion.

Proposed Approach
Differential evolution (DE) has shown excellent search abilities on many optimization problems. Like other population-based stochastic algorithms, DE starts with an initial population of randomly generated candidate solutions. After initialization, DE repeats three operations: mutation, crossover, and selection. Among these, the mutation operation is very important; the mutation scheme highly influences the performance of DE. There are several different mutation schemes, such as DE/rand/1, DE/rand/2, DE/best/1, and DE/best/2 [26].
The property of a mutation scheme determines the search behavior of individuals in the population. DE/rand/1 results in good exploration but slow convergence speed; DE/best/1 obtains fast convergence speed but poor exploration. The DE/rand/1 and DE/best/1 schemes are described as follows:

DE/rand/1: V_i = X_{r1} + F · (X_{r2} − X_{r3}),
DE/best/1: V_i = X_{best} + F · (X_{r1} − X_{r2}),  (4)

where X_{r1}, X_{r2}, and X_{r3} are three randomly selected individuals from the current population, i ≠ r1 ≠ r2 ≠ r3, X_{best} is the best individual found so far, and the parameter F is known as the scale factor, which is usually set to 0.5.
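The two mutation schemes of equation (4) can be sketched as follows (an illustrative Python sketch; individuals are lists of floats and the population must contain at least four members for DE/rand/1):

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1: V_i = X_r1 + F * (X_r2 - X_r3), with r1, r2, r3
    distinct indices different from i."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def de_best_1(pop, best, i, F=0.5):
    """DE/best/1: V_i = X_best + F * (X_r1 - X_r2)."""
    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
    return [a + F * (b - c) for a, b, c in zip(best, pop[r1], pop[r2])]

pop = [[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [2.0, 4.0], [5.0, 3.0]]
m1 = de_rand_1(pop, 0)
m2 = de_best_1(pop, pop[0], 0)
```

Unlike the ABC update in equation (1), both schemes perturb every dimension of the base vector, which explains DE's faster convergence.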
As can be seen, the search pattern of employed and onlooker bees is similar to the mutation schemes of DE. It is known that the ABC algorithm is good at exploration but shows slow convergence speed. Combining DE/best/1 with the ABC algorithm may therefore accelerate the convergence speed of ABC. However, this hybridization is not a new idea. In [7], Gao and Liu embedded DE/rand/1 and DE/best/1 into the ABC algorithm, and a new control parameter was introduced to balance exploration and exploitation. Results reported in [7] show that this parameter is problem oriented, and an empirical value of 0.7 is used.
In this paper, we propose a new ABC algorithm (called NABC) that employs a modified DE/best/1 strategy. NABC differs from other hybrid algorithms [7, 9] that combine ABC and DE. Although the global best individual used in DE/best/1 can accelerate convergence through attraction, the attraction may be too strong: new solutions move toward the global best solution very quickly. To tackle this problem, a solution pool is constructed by storing the best 100p% solutions in the current swarm, with p ∈ (0, 1]. The idea is inspired by an adaptive DE algorithm (JADE) [27]. It shares the concept of the belief space of the cultural algorithm (CA) [28]: both utilize successful solutions, stored in a solution pool or as situational knowledge, to guide other individuals, but the updating rules of the solution pool and the situational knowledge differ. The new ABC/best/1 strategy is described as follows:

v_{i,j} = x_{best,j} + φ_{i,j} · (x_{r1,j} − x_{r2,j}),  (5)

where X_{best} is randomly chosen from the solution pool, X_{r1} and X_{r2} are two randomly selected candidate solutions from the current swarm, i ≠ r1 ≠ r2, j is a random dimension index selected from the set {1, 2, ..., D}, and φ_{i,j} is a random number within [−1, 1]. Empirical studies show that a good choice of the parameter p lies between 0.08 and 0.15. In this paper, p is set to 0.1 for all experiments.
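The solution pool and the ABC/best/1 update of equation (5) can be sketched as follows (an illustrative Python sketch; here larger fitness values are treated as better, which is an assumption of this snippet rather than a detail stated in the paper):

```python
import random

def build_pool(swarm, fitness, p=0.1):
    """Solution pool: the best 100p% of the swarm (at least one
    solution); larger fitness is treated as better."""
    n = max(1, int(p * len(swarm)))
    order = sorted(range(len(swarm)), key=lambda i: fitness[i], reverse=True)
    return [swarm[i] for i in order[:n]]

def abc_best_1(swarm, pool, i):
    """Equation (5): set one dimension of the parent from a randomly
    chosen pool member, perturbed by two random swarm members."""
    best = random.choice(pool)
    r1, r2 = random.sample([k for k in range(len(swarm)) if k != i], 2)
    j = random.randrange(len(swarm[i]))
    phi = random.uniform(-1.0, 1.0)
    v = list(swarm[i])
    v[j] = best[j] + phi * (swarm[r1][j] - swarm[r2][j])
    return v

swarm = [[float(k), float(k)] for k in range(10)]
fitness = [10.0 - k for k in range(10)]        # solution 0 is the best
pool = build_pool(swarm, fitness, p=0.1)       # with p = 0.1 the pool has 1 member
v = abc_best_1(swarm, pool, 5)
```

Because `best` is drawn from a pool rather than being the single global best, different bees are attracted toward different good solutions.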
According to the new search pattern described in (5), new candidate solutions are generated around some of the best solutions, which helps accelerate the convergence speed. The existing ABC/best/1 strategy proposed in [7] only searches the neighborhood of the global best solution. In our approach, bees can search the neighborhoods of different good solutions, which helps avoid overly fast attraction.
The main steps of our new approach, the NABC algorithm, are listed as follows.

Step 1. Randomly initialize the swarm of SN solutions (food sources), evaluate their fitness, and set cycle = 1.

Step 2. Update the solution pool.

Step 3. For each employed bee, generate a new candidate solution V_i according to (5). Evaluate the fitness of V_i and use a greedy selection to choose the better of V_i and X_i as the new X_i.

Step 4. For each onlooker bee, select a food source X_i according to the probability defined in (2).

Step 5. Generate a new V_i according to (5) based on the selected solution X_i (food source). Evaluate the fitness of V_i and use a greedy selection to choose the better of V_i and X_i as the new X_i.

Step 6. The scout bee determines the abandoned X_i, if one exists, and updates it by (3).

Step 7. Update the best solution found so far, and set cycle = cycle + 1.

Step 8. If the number of cycles reaches the maximum value MCN, stop the algorithm and output the results; otherwise, go to Step 2.
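The steps above can be condensed into a short runnable sketch (a minimal illustration, not the authors' code; the sphere objective, the bounds, and all parameter values here are assumptions chosen for demonstration):

```python
import random

def sphere(x):
    """Simple unimodal objective with global minimum 0 at the origin."""
    return sum(v * v for v in x)

def nabc(obj, D=5, SN=20, MCN=200, limit=50, p=0.1, lb=-5.0, ub=5.0, seed=0):
    rng = random.Random(seed)
    swarm = [[rng.uniform(lb, ub) for _ in range(D)] for _ in range(SN)]
    cost = [obj(x) for x in swarm]
    trials = [0] * SN

    def try_improve(i, pool):
        # ABC/best/1 of equation (5) followed by greedy selection
        best = rng.choice(pool)
        r1, r2 = rng.sample([k for k in range(SN) if k != i], 2)
        j = rng.randrange(D)
        v = list(swarm[i])
        v[j] = best[j] + rng.uniform(-1.0, 1.0) * (swarm[r1][j] - swarm[r2][j])
        v[j] = min(max(v[j], lb), ub)
        fv = obj(v)
        if fv < cost[i]:
            swarm[i], cost[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(MCN):
        # Step 2: pool of the best 100p% solutions (minimization)
        order = sorted(range(SN), key=lambda i: cost[i])
        pool = [list(swarm[i]) for i in order[:max(1, int(p * SN))]]
        for i in range(SN):                    # Step 3: employed bees
            try_improve(i, pool)
        fit = [1.0 / (1.0 + c) for c in cost]  # fitness used in equation (2)
        total = sum(fit)
        for _ in range(SN):                    # Steps 4-5: onlooker bees
            r, acc, sel = rng.uniform(0.0, total), 0.0, SN - 1
            for k, f in enumerate(fit):
                acc += f
                if r <= acc:
                    sel = k
                    break
            try_improve(sel, pool)
        for i in range(SN):                    # Step 6: scout bees
            if trials[i] > limit:
                swarm[i] = [rng.uniform(lb, ub) for _ in range(D)]
                cost[i], trials[i] = obj(swarm[i]), 0
    return min(cost)                           # Steps 7-8 condensed

result = nabc(sphere)
```

On this simple 5-dimensional sphere the sketch converges close to the optimum within 200 cycles, illustrating the acceleration the solution pool is intended to provide.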
Compared to the original ABC algorithm, our approach NABC adds no extra operations except for the construction of the solution pool, and this operation does not increase the overall computational complexity. NABC and the original ABC therefore have the same computational complexity.

Experimental Studies

Test Functions.
In order to verify the performance of NABC, experiments are conducted on a set of twelve benchmark functions. These functions were previously used in [29].
According to their properties, they are divided into two classes: unimodal functions (f1–f7) and multimodal functions (f8–f12). All functions are minimization problems, and their dimension size is 30. Table 1 presents the descriptions of these functions, where Opt is the global optimum.
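Table 1 itself is not reproduced here. As an illustration of the two classes, the sphere function is a typical unimodal member and the Rastrigin function a typical multimodal member of such benchmark suites (their presence in the paper's exact set is an assumption):

```python
import math

def sphere(x):
    """Unimodal: a single global minimum f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: many local minima, global minimum f(0, ..., 0) = 0."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

x_opt = [0.0] * 30          # the 30-dimensional global optimum
values = (sphere(x_opt), rastrigin(x_opt))
```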
The experiments are performed on the same computer with an Intel(R) Core(TM)2 Duo CPU T6400 (2.00 GHz) and 2 GB RAM. Our algorithm is implemented in C++ and compiled with Microsoft Visual C++ 6.0 under Windows XP (SP3).

Comparison of NABC with ABC.
In order to investigate the effectiveness of our new search pattern, this section presents a comparison of NABC with the original ABC algorithm. In the experiments, both NABC and ABC use the same parameter settings. The population size SN, limit, and maximum number of cycles (MCN) are set to 100, 100, and 1000, respectively. The parameter p is set to 0.1 based on empirical studies. All results reported in this section are averaged over 30 independent runs. Table 2 presents the computational results of ABC and NABC on the twelve functions, where "Mean" indicates the mean function value and "Std Dev" represents the standard deviation. The better results between ABC and NABC are shown in bold. From the results, it can be seen that NABC achieves better results than ABC on all test functions except for f6; on this function, both algorithms find the global optimum. For f8 and f9, NABC successfully finds the global optimum, while ABC converges to near-optimal solutions. This demonstrates that the new search pattern used in NABC helps improve the accuracy of solutions.
In order to compare the convergence speed of NABC and ABC, Figure 1 shows their convergence processes on the test functions.

Comparison of NABC with Other Algorithms.
To further verify the performance of NABC, this section compares NABC with other population-based algorithms, including some recently proposed ABC algorithms.

Comparison of NABC with Evolution Strategies.
This section focuses on the comparison of the NABC algorithm with evolution strategies (ES). The ES versions involved include classical evolution strategies (CES) [30], fast evolution strategies (FES) [30], covariance matrix adaptation evolution strategies (CMA-ES) [31], and evolution strategies learned with automatic termination (ESLAT) [32].
The parameter settings of CES, FES, CMA-ES, and ESLAT can be found in [32]. For NABC, the population size and the maximum number of fitness evaluations are set to 20 and 100000 (which means that the MCN is 2500), respectively. The parameter limit is set to 600 [5]. The parameter p is set to 0.1 based on empirical studies. All algorithms are run 50 times on each test function.
Table 3 presents the comparison results of CES, FES, CMA-ES, ESLAT, and NABC. Results of CES, FES, CMA-ES, and ESLAT were taken from Table 20 in [5]. Among these algorithms, the best results are shown in bold. The last column of Table 3 reports the statistical significance of the difference between the means of NABC and the best of the four evolution strategies. Note that "+" indicates that the t value with 49 degrees of freedom is significant at the 0.05 level by a two-tailed test, "·" indicates that the difference of the means is not statistically significant, and "NA" means not applicable, covering cases in which the two algorithms achieve the same accuracy [33].
From the results, it can be seen that NABC outperforms CES and FES on eight functions, while CES and FES achieve better results on three. For f6, CES, FES, and NABC find the global optimum, while ESLAT and CMA-ES fail to solve it. NABC performs better than ESLAT on ten functions, while ESLAT outperforms NABC on the remaining two. CMA-ES achieves better results than NABC on three functions, while NABC performs better on the remaining nine. The comparison shows that the evolution strategies perform better than NABC on unimodal functions such as f1–f4, while NABC outperforms the evolution strategies on all multimodal functions (f8–f12).

Comparison of NABC with Other Improved ABC Algorithms.
In this section, we present a comparison of NABC with three recently proposed ABC algorithms: the gbest-guided ABC (GABC) [8], the improved ABC (I-ABC) [14], and the hybrid PS-ABC [14].
In the experiments, the population size SN is set to 40, and limit equals 200. The maximum number of cycles is set to 1000. Other parameter settings of GABC, I-ABC, and PS-ABC can be found in [14]. The parameter p used in NABC is set to 0.1 based on empirical studies. All algorithms are run 30 times on each test function, and the mean function values are reported.
Table 4 presents the comparison results of NABC with the three other ABC algorithms. Results of GABC, I-ABC, and PS-ABC were taken from Tables 4 and 5 in [14]. The best results among the four algorithms are shown in bold. From the results, NABC outperforms GABC on all test functions except for f2, on which GABC is slightly better. I-ABC achieves better results than NABC on five functions, while NABC performs better on six; for f9, I-ABC, PS-ABC, and NABC all find the global optimum. PS-ABC obtains better results than NABC on six functions, while NABC outperforms PS-ABC on five. Both I-ABC and PS-ABC achieve significantly better results on three unimodal functions, namely f1, f2, and f4; on these functions they can find the global optimum, while GABC and NABC only find near-optimal solutions, and for f4 both GABC and NABC fall into local minima. I-ABC and PS-ABC successfully find the global optimum on f11, while GABC and NABC fail. For function f10, I-ABC and PS-ABC are slightly better than NABC. For the other two multimodal functions, f8 and f12, NABC performs better than the other three ABC algorithms. Compared to I-ABC and PS-ABC, our approach NABC is simpler and easier to implement.

Conclusions
The artificial bee colony algorithm is a new optimization technique which has been shown to be competitive with other population-based stochastic algorithms. However, ABC suffers from some of the same problems as other stochastic algorithms. For example, the convergence speed of ABC is typically slower than that of PSO and DE. Moreover, the ABC algorithm easily gets stuck when handling complex multimodal problems. The main reason is that the search pattern of both employed and onlooker bees is good at exploration but poor at exploitation. In order to balance the exploration and exploitation of ABC, this paper proposes a new ABC variant (NABC). It is known that the DE/best/1 mutation scheme is good at exploitation. Based on DE/best/1, a new search pattern called ABC/best/1 with a solution pool is proposed; this distinguishes our approach from other improved ABC algorithms that directly hybridize DE/best/1 and ABC. To verify the performance of our approach, a set of twelve benchmark functions is used in the experiments. The comparison of NABC with ABC demonstrates that our new search pattern can effectively accelerate the convergence speed and improve the accuracy of solutions. Further comparisons demonstrate that NABC is significantly better than or at least comparable to other stochastic algorithms. Compared to other improved ABC algorithms, our approach is simpler and easier to implement.

Table 1: Benchmark functions used in the experiments.

Table 2: Results achieved by the ABC algorithm and NABC.

Table 3: Comparison of NABC with evolution strategies.

Table 4: Comparison of NABC with other ABC algorithms.